Sunday, September 21, 2008

Units of measure in F#

I'm at ICFP and just watched Andrew Kennedy give a delightful invited talk about the support for units of measure in F#. It was a very impressive example of work that's well-designed all the way through, from theory to practice. In essence, it allows you to say things like
let gravity = 9.8<m/s^2>
and the type system tracks not only the type float but also the unit of measure m/s^2 and prevents unit-mismatch errors in the program.
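F#'s checking is static, but the bookkeeping itself is easy to sketch at runtime. Here's a minimal Python sketch (the Measure class and its exponent-dictionary representation are my own invention for illustration, not anything from F#):

```python
# A runtime sketch of unit-of-measure bookkeeping. F# checks this
# statically; here a hypothetical Measure class carries a dict of
# unit exponents and checks them when values are combined.
class Measure:
    def __init__(self, value, units):
        self.value = value
        # drop zero exponents so equal dimensions compare equal
        self.units = {u: n for u, n in units.items() if n != 0}

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError("unit mismatch")      # e.g. m + s
        return Measure(self.value + other.value, self.units)

    def __mul__(self, other):
        units = dict(self.units)
        for u, n in other.units.items():
            units[u] = units.get(u, 0) + n        # exponents add
        return Measure(self.value * other.value, units)

gravity = Measure(9.8, {"m": 1, "s": -2})         # 9.8<m/s^2>
```

Multiplying by a time in seconds yields a velocity in m/s, and adding mismatched units fails, which is exactly the class of error the F# type system rules out before the program runs.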

One of the questions I had was based on my brief experience several years ago with statistical natural language processing, where a lot of the computation we did was in logspace. In a situation like that you would like to keep track of both the ordinary units of measure as well as the fact that everything is in logspace.

Dan Grossman took this a step further and pointed out that you want your arithmetic operations to have different types when you're working in logspace. For example, while the + operator usually has a type like:
+: float<'u> -> float<'u> -> float<'u>
in logspace it should instead have the type:
+: float<log 'u> -> float<log 'v> -> float<log ('u · 'v)>
Now, I don't know how much support there is for "unit constructors," but the F# unit system does allow you to use units with custom abstract data types. According to Andrew, F# also supports operator overloading and ties it in with the unit system. So it might in fact be possible to express the logspace version of the + operator!

This integration between overloading and units could be a nice solution to the issue of different representations of numbers. Libraries like Java's BigInteger are a usability disaster, but we only have one syntax for numerals and operators. Tying a type-and-unit system in with overloading could allow you to use various machine representations (fixnums, flonums, IEEE754r decimal, bignums, exact rationals, ...) by annotating literals and variables, and type reconstruction could help with the operators. I'm not a big fan of unification-based inference, but in this case I wonder if simple structurally recursive reconstruction would be enough in practice.

Update: Ken Shan points out that only the implementation would use +, but you really would want to overload the * operator. Conceptually you're doing a multiplication.
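Concretely, the logspace representation of multiplication is ordinary addition, and the representation of addition needs log-sum-exp. A quick Python sketch (the function names are mine):

```python
import math

# In logspace, multiplying the underlying quantities is just
# adding their logs: log(x * y) = log x + log y.
def log_mul(lx, ly):
    return lx + ly

# Adding the underlying quantities needs log-sum-exp, computed
# stably by factoring out the larger exponent.
def log_add(lx, ly):
    hi, lo = max(lx, ly), min(lx, ly)
    return hi + math.log1p(math.exp(lo - hi))
```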

Monday, September 15, 2008

True unions, revisited

I've spoken about true unions (or ad-hoc unions, or non-disjoint unions) sort of abstractly before, but Typed Scheme has 'em, and they're great.

In my recent experiences with Haskell and ML, the amount of unnecessary and distracting injection and projection that goes on starts to bloat code to the point of illegibility. Disjoint unions in ML and Haskell provide two orthogonal pieces of functionality in one feature. Typed Scheme separates these two: struct types give you disjointness, and unions give you, well, unions. It's up to you to guarantee that you don't create unions with overlap, but so far I haven't seen that cause any problems.

Then you can build disjoint unions on top of that, as I have in my new types.plt PLaneT package:
(define-datatype Expr
  [Var ([name : Symbol])]
  [Abs ([var : Symbol] [body : Expr])]
  [App ([rator : Expr] [rand : Expr])])
macro-expands to:
(begin
  (define-struct: Var ([name : Symbol]))
  (define-struct: Abs ([var : Symbol] [body : Expr]))
  (define-struct: App ([rator : Expr] [rand : Expr]))
  (define-type-alias Expr (U Var Abs App)))
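The same layering shows up in Python's typing module, where Union is likewise non-disjoint and classes supply the disjointness. A rough analog of the datatype above (the dataclass names mirror the Typed Scheme example):

```python
from dataclasses import dataclass
from typing import Union

# Each struct type is disjoint by construction...
@dataclass
class Var:
    name: str

@dataclass
class Abs:
    var: str
    body: "Expr"

@dataclass
class App:
    rator: "Expr"
    rand: "Expr"

# ...and the union just collects them, as in (U Var Abs App).
Expr = Union[Var, Abs, App]
```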

Scheme record syntax

I think if I were designing my own Scheme I'd tighten up some of the aspects of PLT's structs into a smaller syntactic footprint. In PLT, when you say
(define-struct thing (foo bar))
it binds thing to an identifier carrying static information about the struct type (which is needed for other macros such as match), struct:thing to a runtime value containing introspective information about the struct type, and make-thing to a constructor for the struct type.

I think I would just bind the single identifier thing for all three purposes: statically, it would carry the static information for match and friends; dynamically, it would be the constructor. (I never liked the imperative-sounding "make-" prefix.) For dynamic introspection, I would probably include a library form where you could say (struct foo) to reflect the struct type information into a dynamic value.
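Python classes happen to work roughly this way already, which makes a handy illustration of binding one identifier for all the roles (the Thing class is hypothetical):

```python
from dataclasses import dataclass, fields

@dataclass
class Thing:
    foo: int
    bar: str

t = Thing(1, "a")                         # the identifier is the constructor
names = [f.name for f in fields(Thing)]   # and carries reflective information
```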

Wednesday, August 27, 2008

The death of namespaces

As promised in my post on ECMAScript Harmony, I want to talk about the problems with the proposed namespaces feature and why I'm glad it's gone.

In ES3, one of the problems for information hiding is that the language's primary datatype is a mutable table mapping transparent strings to values. As a result, sharing an object creates abstraction hazards: anyone can view -- or even modify! -- an object's internals. Namespaces were an attempt to facilitate hidden properties by generalizing objects in a backwards-compatible way.

So objects became mutable tables mapping namespace/string pairs to values. This has precedent in Common Lisp, and it seems natural. But here's where it started going awry: JavaScript has an ill-conceived history of specifying variable scope by way of a specification construct known as the variable object: a fictional JavaScript object that serves as a rib in the lexical environment. So JavaScript variables are conceptualized as being the same thing as properties. (I'm sure I know where this came from: implementors could use the same internal mechanism to look up properties in an object's inheritance chain as to look up variables in the environment.) So now with namespaces, variable names were not just strings but also these (namespace × string) pairs.

But namespaces were first-class values. There were even forms for qualifying variable references with a dynamically computed namespace! Lexical scope was slipping away from JavaScript as an almost-was. Even more troubling was the concept of "open namespaces." For convenience and backwards-compatibility, there has to be a reasonable default namespace for looking up unqualified variable and field references. The proposed spec allowed multiple namespaces to be given as candidates for defaults, with a particular search order. Now, variable lookup is tough enough in JavaScript. With bizarre dynamic constructs like with, lexical eval, and the global object, there can be two dimensions of lookup: up the environment and up the inheritance chain of an object. Now with default namespaces there was a third dimension of lookup. For implementors, this creates all sorts of efficiency nightmares. For language users, it would likely be a usability nightmare.

There are ways to simplify namespaces; even Common Lisp's approach is simpler than what was originally on the table for ECMAScript. But when I heard that namespaces, which had been a part of the proposed standard since before I got involved, were on the chopping block, I was thrilled. It had never even occurred to me that they might be cut! I proposed a simpler solution: if we just add gensym to the operations on property names, then we can create private, unguessable properties. No property name pairs, no extra dimension of search, just opaque names. In an OO world, this is more likely to look like new Name() than gensym(), but same difference.
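The idea is easy to sketch in any language with reference identity. Here's a Python version where a fresh object plays the role of an unguessable Name (the class and names are mine; the idea is along the lines of what eventually shipped in JavaScript as Symbol):

```python
# Sketch: a property key with identity but no forgeable name.
# Code that doesn't hold the key object cannot look up the property.
class Name:
    """An opaque key, distinguishable only by identity."""
    __slots__ = ()

secret = Name()
obj = {}              # an "object" mapping keys to values
obj[secret] = 42      # private: reachable only via `secret`
```

There's no extra dimension of lookup here: the table is still keyed by a single value, it's just that some keys can't be forged.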

ECMAScript Harmony

There's been a lot of buzz about recent events in Ecma TC39 since Brendan Eich made his announcement of ECMAScript Harmony resulting from the seminal Oslo meeting last month. Some of the public discussion has been so misleading I won't even link to it. I've been getting a number of questions from friends and colleagues, asking if it's true that we've stopped working on ECMAScript Edition 4. I'm giving them all the same answer: don't believe the hype!

As an invited expert, I try to avoid politics. But from my perspective, the Harmony effort is promising to be a great development for the technical quality of the ECMAScript standard. Both sides of the split in the committee have good technical points to make and I've longed for more cooperation. Now we have much more cooperation, and the language design is improving for it.

As for the future of ECMAScript Edition 4: whatever name gets attached to it, the work on improving and standardizing ECMAScript continues apace. We'd already deferred static types for potential future work well before the Oslo meeting, and as part of the Harmony effort we dropped some of the more questionable aspects of the proposed ES4, most notably namespaces. These were not pure political concessions but in fact good technical decisions. (I'll blog about this separately.) The proposed language is changing all the time, but reports of ECMAScript's death have been grossly exaggerated.

Friday, August 08, 2008

Nirvana

I have achieved hygienic macro enlightenment.

Tuesday, June 24, 2008

Implicit homomorphism

One of the really nice features of macros and their mathematical first cousin notational definitions is that they leave their straightforward, recursive expansion implicit. When we write in math:
A ⇔ B = (A ⇒ B) ∧ (B ⇒ A)
what we really mean is that all occurrences of ⇔ should be recursively expanded within a term. But there's essentially an implicit structural recursion through all propositions in the logic (which is left open "until" all notational definitions have been provided) which expands any occurrences of notational definitions homomorphically through sub-propositions. We don't require anyone to go through the painstaking and virtually information-free process of explicitly marking all points of recursive expansion.

I would love a similar notational style for defining annotations. I often find myself defining an instrumentation function that inserts one or two kinds of extra pieces of information into a tree. Maybe term-rewriting would work pretty well for this. Say we're writing a function that annotates certain λ-expressions with their height. We could write
λ x . e → λ [ |e| ] x . e
and then leave the compatible, reflexive, and transitive closure of this rewriting rule implicit, since it's obvious.
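To see the bookkeeping that the implicit closure saves, here's the fully explicit version as a Python sketch, over a hypothetical tuple representation of λ-terms:

```python
# Terms: ("var", x), ("app", f, a), ("lam", x, body).
def height(e):
    if e[0] == "lam":
        return 1 + height(e[2])
    if e[0] == "app":
        return 1 + max(height(e[1]), height(e[2]))
    return 0                    # variable

# The one interesting rewrite: lam x. e  ->  lam[|e|] x. e.
# Everything else is the boring homomorphic recursion we'd
# rather leave implicit.
def annotate(e):
    if e[0] == "lam":
        return ("lam", height(e[2]), e[1], annotate(e[2]))
    if e[0] == "app":
        return ("app", annotate(e[1]), annotate(e[2]))
    return e
```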

Then I would really like some way to make this style of definition composable with other definitions, so for example I could define a type-checking-cum-instrumentation algorithm
Γ ⊢ e : τ → e'
where the instrumentation portion (→ e') is mostly left implicit.

Tuesday, June 17, 2008

How to impair a declarative

Declarative programming is a vague term but could loosely be interpreted to mean not imperative programming. It's also about defining a program by breaking it into separate, modular units of partial specification.

Of course, the language's semantics has to have some algorithmic means to combine these declarations into a complete program definition. And if the language admits recursive definitions, the combined program may contain cycles. So the implementation of a declarative language will often have an imperative flavor, combining recursive elements by incrementally updating the set of definitions.

The question is whether it is still possible for the user to understand the semantics without resorting to reasoning about its possibly-imperative implementation. Specifically, the user shouldn't have to worry about time. If the order in which definitions are combined matters, then the semantics becomes imperative. Worse, if the programming interface allows definitions in multiple files, the order of definition might not even be under the user's control--or worse, it might only be controllable through extra-linguistic means like compiler command-line arguments.

Take as an example a language with open recursion, the ability to define a recursive function in multiple, disparate pieces. If an open recursive function has overlapping cases in separate declarations, the language can handle this in one of several ways:
  1. Let rules defined "later" take precedence over rules defined "earlier."
  2. Define a total order on rule specificity, and let more specific rules take precedence over less specific rules.
  3. Disallow overlapping cases statically.
Option #1 reintroduces the notion of time. The resulting semantics is less modular, because it requires understanding not just the individual modules, but subtle aspects of their composition. (If the algorithm for combining definitions is sophisticated--unification, for example--this order may even be a complicated dynamic notion, not just textual order.) The other two options eliminate the imperative flavor, defining time out of the semantics. This frees the programmer to reorder definitions without affecting the program's meaning, and makes the program's meaning more modular.
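Here's a sketch of option #3 in Python (the OpenFn class and its string dispatch keys are invented for illustration): cases may be registered in any order, and overlap is rejected at declaration time, so textual order never affects the meaning.

```python
# Option #3 sketched: an open-recursive function assembled from
# separately declared cases; overlapping cases are rejected when
# declared, so declaration order can't change the meaning.
class OpenFn:
    def __init__(self):
        self.cases = {}

    def case(self, key):
        def register(fn):
            if key in self.cases:
                raise ValueError(f"overlapping case: {key!r}")
            self.cases[key] = fn
            return fn
        return register

    def __call__(self, tag, *args):
        # open recursion: each case receives the whole function
        return self.cases[tag](self, *args)

size = OpenFn()

@size.case("leaf")
def _(self):
    return 1

@size.case("node")
def _(self, left, right):
    return self(*left) + self(*right)
```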

ADTs in JavaScript

Sjoerd Visscher has written a neat post about implementing algebraic data types in JavaScript. There are lots of ways to do this, but this one looks interesting. I don't quite understand it but I thought I'd point it out.

Note the use of expression closures! Good stuff.

Thursday, June 12, 2008

Clumsy existential

This might be a dumb question: I often want to prove something like this:
(∃ v . e ⇓ v) ⇔ (∃ v′ ~ v . [[e]] ⇒ v′)
except that the scope doesn't work out: v is only in scope in the first parenthesized proposition, so it can't be referred to in the second one. Of course, it's generally sufficient to prove co-termination and just leave out the relationship between the final values, since that's usually easily derivable. But it's an important part of the intuition behind correctness, so I want the relationship between answers to be in the statement of the theorem.

An awkward way to make this work is to write big clunky conjunctions like:
(e⇓ ⇔ [[e]]⇓) ∧ (∀ v . e ⇓ v ⇒ ∃ v′ ~ v . [[e]] ⇒ v′)
But it would be nice if there were some way to write it more compactly like the first pseudo-proposition I wrote. I like that it reads almost like you'd say in English: "e and [[e]] always get the same answers."

Anyone have any suggestions?

Tuesday, May 20, 2008

The bat-signal operator

Starting now, I propose that every programming language strive to incorporate the bat-signal operator: ~@~

Friday, May 16, 2008

Fail-soft

Automatically repairing errors and continuing execution goes against the grain of software engineering, for good reason, but I've come to see that it's what makes the web go 'round. Robert O'Callahan has an interesting market-oriented take on error recovery:
...the economics of error recovery --- the fact that recovering from malformed input, instead of hard failure, is a competitive advantage for client software so in a competitive market it's sure to develop and you might as well put it in specifications...
In other words: ignore the errors, users will love you, and you win. We may not like it, but you fight economics at your own peril.

Friday, May 09, 2008

N-ary infix predicates

In languages with operator overloading, it's possible to define binary infix predicates such as (using Haskell here):
infix <:
(<:) :: Ty -> Ty -> Bool
x <: y = x `subtype` y -- for some auxiliary subtype function
Arithmetic operators continue to return elements of the same type (does this make their types endofunctors? I'm no category theorist), so as long as you declare an associativity, the n-ary case is easy to parse. Predicates aren't closed over their input type, though, so in Haskell I can't take the above definition and write:
t1 <: t2 <: t3
But in math, we overload binary predicates to the n-ary case by assuming an implicit "and" between each pair. It would just take a little kludge (but one with the full weight of historical precedent) to let the operator overloading and type system conspire to determine that any binary predicate (i.e., with result type Bool) can be extended to the n-ary case using &&.

(You could maybe also extend this to any type Monad m => m a using >>= instead of &&?)
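Python actually adopted exactly this kludge for its built-in comparisons: a chained comparison inserts the implicit "and" between each adjacent pair. The generalization to arbitrary binary predicates is a one-liner (chain and the stand-in subtype are hypothetical names):

```python
# Built in: a < b < c means (a < b) and (b < c).
assert (1 < 2 < 3) == ((1 < 2) and (2 < 3))

# The general version for any binary predicate:
def chain(pred, *xs):
    return all(pred(x, y) for x, y in zip(xs, xs[1:]))

def subtype(x, y):        # stand-in subtype check on ints
    return x <= y

assert chain(subtype, 1, 1, 2)    # t1 <: t2 <: t3
```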

Thursday, May 08, 2008

Solving the wrong problem

I've been having a depressing couple of weeks trying to solve a really hard problem in my research. Occasionally I thought I had some promising leads, but usually they just led to more despair.

Then I discovered today that I've been tackling the entirely wrong problem all along.

I never knew feeling so stupid could feel so good.

Monday, May 05, 2008

Sigh

> (eqv? +NaN.0 +NaN.0)
#t
> (eq? +NaN.0 +NaN.0)
#t
> (= +NaN.0 +NaN.0)
#f
> (equal? +NaN.0 +NaN.0)
#t

A functional visitor pattern

When your data definitions get big enough, it becomes Tedious and Error-Prone (oh my!) to write multiple operations over the same datatype, so it's useful to define a generic traversal. Here's a pattern I used recently for the ECMAScript Edition 4 reference implementation.

Lots of languages have mutually recursive AST definitions. For example, take a language with mutually recursive expressions Ast.expr, statements Ast.stmt, and declarations Ast.decl. Define an interface VISITOR:
signature VISITOR = sig
  datatype cmd = Stop | Skip | Cont

  type ('a, 'z) method = 'a * 'z -> 'z * cmd

  type 'z visitor = { visitExpr : (Ast.expr, 'z) method,
                      visitStmt : (Ast.stmt, 'z) method,
                      visitDecl : (Ast.decl, 'z) method }

  val foldExpr : 'z visitor * Ast.expr * 'z -> 'z
  val foldStmt : 'z visitor * Ast.stmt * 'z -> 'z
  val foldDecl : 'z visitor * Ast.decl * 'z -> 'z
end;
A visitor is a record of operations, one for each variant of AST nodes. (Alternatively, we could have defined a single type datatype ast = Expr of ... | Stmt of ... | Decl of ...;. But there's no need for the extra indirection.) Each operation takes a node and an intermediate result, and produces a new intermediate result along with a "traversal command" cmd.

This simple traversal language instructs the fold either to continue (Cont), to skip all descendants of the current node (Skip), or to abort the traversal entirely (Stop).
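The three-command protocol is compact to render in Python, at the cost of SML's types (the dict node shape and function names are invented for illustration):

```python
STOP, SKIP, CONT = "stop", "skip", "cont"

# visit(node, acc) returns (acc', cmd): Stop aborts the whole
# traversal, Skip prunes the current node's children, Cont recurs.
def fold(visit, node, acc):
    acc, cmd = visit(node, acc)
    if cmd == STOP:
        return acc, STOP
    if cmd == CONT:
        for child in node.get("children", []):
            acc, cmd = fold(visit, child, acc)
            if cmd == STOP:       # propagate the abort upward
                return acc, STOP
    return acc, CONT
```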

The implementation of the fold uses a few auxiliary functions to compute a list of child nodes for each node:
type children = Ast.expr list * Ast.stmt list * Ast.decl list

fun exprChildren (e : Ast.expr) : children = ...
fun stmtChildren (e : Ast.stmt) : children = ...
fun declChildren (e : Ast.decl) : children = ...
The fold itself is (internally) in CPS. At each node, there are three potential continuations: the empty continuation for aborting, the given continuation for skipping the current node's children, or a new continuation that first traverses the current node's children and then uses the given continuation.
type 'z cont = 'z -> 'z

fun cont (result : 'z, cmd : cmd)
         (skipk : 'z cont)
         (contk : 'z cont)
         : 'z =
    case cmd of
        Stop => result
      | Skip => skipk result
      | Cont => contk result

fun foldExprK (v as { visitExpr, ... }, expr, init) k =
    cont (visitExpr (expr, init))
         (fn x => foldAllK v (exprChildren expr) x k)
         k

and foldStmtK (v as { visitStmt, ... }, stmt, init) k =
    cont (visitStmt (stmt, init))
         (fn x => foldAllK v (stmtChildren stmt) x k)
         k

and foldDeclK (v as { visitDecl, ... }, decl, init) k =
    cont (visitDecl (decl, init))
         (fn x => foldAllK v (declChildren decl) x k)
         k

and foldAllK (v : 'z visitor)
             ((exprs, stmts, decls) : children)
             (init : 'z)
             (k : 'z cont)
             : 'z =
    let
      fun foldList f (v, asts, init) k =
          case asts of
              [] => k init
            | [ast] => f (v, ast, init) k
            | ast::asts => f (v, ast, init)
                             (fn x => foldList f (v, asts, x) k)
    in
      foldList foldExprK (v, exprs, init) (fn result =>
      foldList foldStmtK (v, stmts, result) (fn result =>
      foldList foldDeclK (v, decls, result) k))
    end
I originally had foldList defined at the top-level, in the same mutual recursion group with foldExprK et al, and got smacked by SML because that would require polymorphic recursion. Luckily I was able to work around it by placing its definition inside the body of foldAllK, but since I'd never run into SML's lack of polymorphic recursion before, it took quite a while to decipher the type error (with help from #sml).

Finally, the top-level functions seed the CPS functions with an initial continuation:
fun id x = x

fun foldExpr x = foldExprK x id
fun foldStmt x = foldStmtK x id
fun foldDecl x = foldDeclK x id
(Note that because of the value restriction, there's no possibility of refactoring these into point-free val-declarations.)

Aren't auto-curried functions non-expansive?

The value restriction in ML is so limiting. If I have an auto-curried function
> fun foo (k : int -> 'a) (x : int) = ...;
it should be the case that partial application of foo is always non-expansive, right? It's really annoying that partial applications of auto-curried functions often can't be given general types.
> fun raisek (x : int) = raise (Fail (Int.toString x));
> val fooRaise = foo raisek;
Warning: type vars not generalized because of value restriction...
I hate when the type system interferes with standard, semantics-preserving transformations.

Wednesday, April 30, 2008

Dynamic languages need modules

A dynamic language starts as an implementation. And that implementation almost always includes some variation on a REPL. Look at Lisp, Scheme, Python, Ruby, JavaScript, Lua, etc. One of the things that makes these languages "dynamic" is that they're attached to some long-running process: an IDE, a user prompt, an embedding application, a web page. So these languages are simply implemented with some global table of definitions that gets updated through the lifetime of the host process. Easy enough to understand from the implementor's perspective, but what happens to the user of the embedded language?

The problem with the "top-level" (that ever-changing table of program definitions) is the creeping specter of dynamic scope. While static vs. dynamic typing may be a debate that civilization takes to its grave, the dust seems to have more or less settled on dynamic scoping. The fact is, even though many decisions a program makes must depend on its context, it's very hard to understand a program if you can't nail down its definitions.

To some degree, if a dynamic language has lambda, you can escape the top-level with a design pattern that simulates a poor man's module. The module pattern is almost always a variation of what Schemers call "left-left-lambda"--the immediate application of a function literal. Bindings inside the lambda are no longer floating in that airy and transient stratosphere of the top-level; they're nailed to the function that is being applied. And you know that function is being applied only once, because once you've created it, it's applied and discarded.

This pattern goes a long way, and if you have macros, you can create sugar to give the pattern linguistic status. But a module system it ain't.
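The pattern translates directly into Python, where a function's locals play the λ-bound names (the counter example is invented):

```python
# A left-left-lambda module in Python: definitions live in one
# function's scope, applied immediately and exactly once, and only
# the returned record of exports escapes.
def _counter_module():
    count = 0                      # private state, not a global
    def increment():
        nonlocal count
        count += 1
        return count
    return {"increment": increment}

counter = _counter_module()        # the one-shot application
```

As the post goes on to argue, this nails down the bindings but says nothing about linking: the export record still has to be stored somewhere, and that somewhere is usually the top-level again.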

Linking: Nothing in this pattern deals with the relationships between modules. There's no way to declare what a module's imports and exports are. In fact, if you want a module to communicate with any other modules, the top-level's poison tends to seep back in. To export a value, a lambda-module can mutate some global variable, or it can return a value--but where does the caller save the value? You can always nest these solutions within more lambda-modules, but ultimately there's a problem of infinite regress: in the end you have to have at least one special top-most module surrounding your program.

Separate development: And that's only if you have control over the whole program. If you want to create a library and share it, there needs to be some shared common framework in which people and organizations can share code without stomping on each other's invariants or polluting each other's global environments. To be sure, a built-in module system doesn't eliminate all such issues (you still need conventions for naming and registering modules within a common framework), but modules help to standardize on these issues, and they can provide helpful errors when modules step on each other's toes, rather than silently overwriting one another.

Loading: There's not much flexibility in the loading of an immediately applied function. If your language involves multiple stages of loading, the implementation may be able to be smarter about loading and linking multiple modules at once.

Scoping conveniences: Lexical scope is a widget in the programmer's UI toolkit, and for different scenarios, there are different appropriate designs. The tree shape of expressions makes the lexical scoping rule ("inner trumps outer") appropriate; it favors the local over the global. But, ignoring nested modules for the moment, modules aren't tree shaped; they're more like a global table. In a sense, all modules are peers. So when you import the same name from two different modules, which one should win? You could say that whichever you import later wins, but this is much more subtle than the obvious nesting structure of ordinary variable bindings. I find it's more helpful for the module system to give me an error if I import the same name from different sources (unless it's a diamond import). Other useful facilities are selective import, import with renaming, or import with common prefixes. These are subtle usability designs where modules differ from lambda.

Extensibility: In PLT Scheme, we've used the module system as the point for language design and extension. By allowing modules to be parameterized over their "language", we have a natural way for introducing modalities into PLT Scheme. As languages grow, these modalities are an inevitability (cf. "use strict" in Perl and ECMAScript Edition 4). Buried within pragmas or nested expressions, this makes the design of the language much harder. But within a module, bindings are sacrosanct and interactions with other modules are limited to imports and exports. This significantly cuts down on the space of possible interactions and interferences between the language's modalities. As an example, Sam Tobin-Hochstadt has made good use of this for the design of Typed Scheme, a statically typed modality of PLT Scheme that can still interact reliably with dynamically typed modules.

The unrestricted mutation of the top-level environment is a good thing for many purposes: interactive development, self-adjusting execution environments, etc. But it's terrible for nailing down program definitions. Modules are a way of circumscribing a portion of code and declaring it "finished". It can still be useful to have an environment where modules can be dynamically loaded and possibly even replaced, but it's critical for the language to provide the programmer with basic invariants about the definitions within a module.

All of this is stuff I wish I'd had a clearer head about earlier in the ES4 process. But I hope that, down the road, we'll consider a module system for ECMAScript.

Monday, April 28, 2008

Literate code

I used to find the notion of literate code attractive, but right now my dissertation prototype implementation is undergoing some serious redevelopment and the horribly out-of-date docs are dragging me down. Now I find myself wishing I'd kept them separate.

Tuesday, April 22, 2008

How to spell StopIteration

Some of the pseudo-constants in JavaScript are actually mutable global variables: undefined is a notorious example. Luckily, there's generally some reliable way to generate such a value without having to hope that no one has reassigned a global variable.

For undefined, it's the void operator, which, given any argument, produces the value that undefined is initially bound to. Conventionally, people write void(0).

In JavaScript 1.7 and later, there's a special constant called StopIteration that's thrown after an iterator runs out of values. This too is a mutable global. But this value is a little trickier to get at reliably. Here's the simplest expression I could come up with that produces the StopIteration value:
(function(){try{(function(){true||(yield)})().next()}catch(e){return e}})()
The inner function is a generator by virtue of textually containing the yield keyword, but it returns immediately, yielding no values, because of the short-circuited logical or. (Note that because of yield's finicky grammar, it has to be parenthesized.) So by calling next() on it, it immediately raises StopIteration, which the outer function catches and returns.
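Python has the same constant, and a similar trick reaches it without naming the (equally shadowable) global binding:

```python
# Reach the StopIteration value without naming the global binding,
# by exhausting an empty iterator and catching what it raises.
def stop_iteration():
    try:
        next(iter(()))             # empty iterator: raises at once
    except BaseException as e:     # catch without naming the class
        return e

exc = stop_iteration()
```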

Sunday, April 20, 2008

Compilation, obfuscation, encryption

It's interesting that we tend to use compilation as a proxy for encryption. It seems to me it would clarify the design of a language's semantics and architecture to separate the concern of code ownership from performance. Compilation is primarily used to cache the transformation of one semantics into another--a partial evaluation of an interpreter. Using this as a weak encryption mechanism is pretty silly. If you want to encrypt program source, why not use real encryption? That way you can use whatever representation of the code you want: source, byte-code, microcode, whatever.

Friday, April 11, 2008

A poem for the weekend

Listen to Mustn'ts, child, listen to the Don'ts.
Listen to the Shouldn'ts, the Impossibles, the Won'ts.
Listen to the Never Haves, then listen close to me.
Anything can happen, child, Anything can be.

--Shel Silverstein

Tuesday, April 08, 2008

Different perspectives on parameterized types

To the PL theorist, parameterized types are a necessity for the expressiveness of a statically typed language. Imagine Hindley-Milner without them: you'd have to duplicate definitions all over the place. But in practice, statically typed languages have loopholes--long before generics, Java always had the type Object so that you could essentially "turn off" type-checking when you couldn't type things precisely. This meant that you could always write generic operations and generic collections, but without type-checking.

So in practice, people don't look at parameterized types as increasing the expressiveness of Java at all; it just looks like a way to increase the amount of type-checking in the language, to make it stricter. And they're right.

So you have two correct perspectives on parameterized types: roughly, that they make the language less restrictive and more restrictive. It's not actually a contradiction, but it's enough to have caused me confusion in some conversations.

Sunday, April 06, 2008

Keep your /tmp clean

For a couple years my laptop's cygwin installation has been dramatically, inexplicably slow to start up the first time after rebooting. I mean, like, 5 to 10 minutes to start up. (You can imagine how much more this makes me love Windows Update for forcing a reboot every time they patch the latest buffer overrun that authorizes Solitaire to initiate a nuclear first strike against North Dakota.)

Well, today I found the culprit. A program I haven't used in quite a long time had left thousands of orphaned temp files in /tmp. Don't ask me why, but this was enough to obliterate the performance of cygwin across the board, and especially on startup. Maybe /tmp is memory-mapped? No idea. Anyway, I wish I'd known to keep my /tmp clean. Now you know.

ESOP in a heartbeat

I presented my latest paper, A Theory of Hygienic Macros, co-authored with Mitch, at ESOP in Budapest this past week. Because I'm a genius, I managed to let my passport expire. Conveniently, there's an expiration notification service at every airport known as "not letting you on the plane."

It's a long story, but it involves emergency passport renewal, an extra weekend in San Francisco, and very merciful conference organizers and attendees.

All of this means I got to spend a total of about 48 hours in lovely Budapest--about long enough to see the Danube and a few of its glorious monuments, experience cheap beer, try delicious Tokaji, present a paper, chat with a few friends, and get right back on a plane again.

What a shame I couldn't have stayed longer. But the talk went great, and I'm still very glad I got to go.

For the record, the San Francisco passport office is staffed exclusively with miracle workers.

Case exhaustiveness for interactivity

In functional programming we're very good at exhaustive case analysis of our data definitions. But I've found I'm pretty bad at reasoning about cases in interactive programs. (Embarrassingly, I recently lost a week's worth of data in an Ajax application I wrote due to a bad response to an unpredicted server failure.)

What are all the possible actions the user could perform? What are all the failure cases? What are all the possible interleavings of interactions? These are harder questions than enumerating variants of a disjoint union.

I think this is where dynamic safety (e.g., runtime tag checking in dynamically typed languages or case-dispatch catchalls in statically typed languages) is so critical: if you don't have a sound model for enumerating the space of possible behaviors, you have to make sure that there's some reasonable recovery programmed into the system for when you encounter an event you didn't predict. This reminds me of recovery-oriented computing and Erlang's "let it fail" philosophy.
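To sketch what I mean (hypothetical handler names, in Python): enumerate the interactions you did predict, but always give the dispatch a recovery path for the ones you didn't:

```python
def recover(event):
    # Last-resort catchall: preserve the event instead of losing data.
    return ("recovered", event)

HANDLERS = {
    "save": lambda event: ("saved", event["payload"]),
    "load": lambda event: ("loaded", event["payload"]),
}

def handle(event):
    # dict.get with a default plays the role of a catchall case clause.
    return HANDLERS.get(event.get("type"), recover)(event)
```

The point isn't the dictionary; it's that the default branch exists at all, so an unforeseen event degrades gracefully instead of silently dropping data.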

Wednesday, April 02, 2008

Monkey-patching evolves

Do you like the ability to dynamically mutate the definition of your program, but keep getting thwarted by sophisticated static analyses such as grep? Your prayers have been answered: ninja-patching is here.

Sunday, March 23, 2008

Makefile for HUnit tests

Here's a nice idiom for running HUnit tests from a Makefile. Jesse Tov gave me the idea that you can use make's dependencies to test only modules that have changed since the last time you ran the tests. Pick a standard name for each module's tests--tests, say--and make sure it's defined in the first column (since we'll be searching for it with a simple grep). Every time we test a module, we'll touch a dummy file of the same name in some hidden directory--call it .test. So we have two lists of files, the modules of the application and the test dummy files:
SOURCES := $(wildcard *.hs) $(wildcard *.lhs)
TESTS := $(foreach src,$(SOURCES),.test/$(src))
To run the tests, first we collect the list of modified modules and save the names of their tests (Foo.tests, Bar.tests, etc.) in a temporary file .test/args.
.test/%hs: %hs
	@if grep '^tests ' $< > /dev/null 2>&1 ; then \
		touch $@ ; \
		echo $* | sed -e 's/\..*/.tests/' >> .test/args ; \
	fi
Now every time we run the tests, we first make a fresh .test/args file, and then we run GHC with a command-line option to evaluate an expression that runs those tests:
test: teststart $(TESTS)
	@echo ghc Test -e "\"Test.run [ `xargs -a .test/args | tr ' ' ','` ]\""
	@ghc Test -e "Test.run [ `xargs -a .test/args | tr ' ' ','` ]"

teststart:
	@mkdir -p .test
	@$(RM) -f .test/args
	@touch .test/args
It's also useful to have a target that always runs all the tests:
retest: testclean test

testclean:
	$(RM) -rf .test
This assumes the existence of a simple test harness module:
module Test (run) where
import Test.HUnit
run :: [Test] -> IO ()
run ts = (runTestTT $ TestList ts) >> (return ())

Some tools just don't like each other

brit·tle, adj.
  1. Multi-line string literals in Haskell require so-called "string gaps": one '\' character to terminate a line and another '\' to start the next line.
  2. With GHC, Haskell programs may be preprocessed with CPP, which coincidentally strips the "\ ... \" characters from the source, resulting in an illegal Haskell string literal.
  3. Mercifully, it also happens that CPP doesn't strip the characters if the first '\' character is followed by a space before the newline.
  4. But of course, a commonly used feature of emacs is to silently strip trailing whitespace at the end of lines on every save.
  5. Not that you can see the difference, given the well-known human limitations at visually distinguishing whitespace.
(Sensitive types should take this neither as a criticism of GHC nor as implicit condoning of using CPP for Haskell programs--I know it's shunned. You've gotta admit this is beautiful, though.)

Friday, March 21, 2008

When to use point-free style

I've often struggled with the question of when/whether to use point-free style. It can be dangerously addictive, especially in languages with syntactic support for it like Haskell. But it's notorious for creating dense and impenetrable code. Yesterday I linked to advice from the GHC community on when not to create needless abstractions, advice which could be applied when considering a point-free abstraction (man, does it ever take self-restraint not to go wild with the puns here).

The reason pointful style can be so helpful is that it allows us to think about the definition of a computation in terms of particular examples: "for any number, let's call it x, ..." This is also why set comprehensions in math, and consequently list comprehensions in programming languages, are so approachable: they describe a set by appeal to representative examples.

I think point-free style succeeds when you want to treat a function itself as a single data point. For example, if you're sorting a list with some comparison function, you don't want to have to drop down a level of abstraction to think about the individual pairs of elements being compared; you just want to think about the comparison function (like "numeric" or "reverse alphabetical") as the object of your attention.
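To make the sorting example concrete (a Python sketch, since the point is language-independent): the named comparison function is the whole object of attention, and no representative element ever appears:

```python
words = ["pear", "Fig", "apple"]

# Pick the comparison by name, without dropping down to the level of the
# individual pairs of elements being compared.
print(sorted(words, key=str.lower))                # ['apple', 'Fig', 'pear']
print(sorted(words, key=str.lower, reverse=True))  # ['pear', 'Fig', 'apple']
print(sorted(words, key=len))                      # ['Fig', 'pear', 'apple']
```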

Wednesday, March 19, 2008

Well said

From the GHC wiki:
It's much better to write code that is transparent than to write code that is short.

Often it's better to write out the code longhand than to reuse a generic abstraction (not always, of course). Sometimes it's better to duplicate some similar code than to try to construct an elaborate generalisation with only two instances. Remember: other people have to be able to quickly understand what you've done, and overuse of abstractions just serves to obscure the really tricky stuff.

Friday, March 14, 2008

Not fade away

From the old-languages-never-die dept.:
Obsolete programming languages such as FORTRAN and COBOL are still widely used.

-- J.J. Duby, 1971

Monday, March 10, 2008

Another cute Haskell function

join :: Eq a => [(a,b)] -> [(a,c)] -> [(a,(b,c))]
join a1 a2 = [ (a,(b,c)) | (a,b) <- a1, c <- [ c | (a',c) <- a2, a == a' ] ]

infixr 5 ⋈
(⋈) :: Eq a => [(a,b)] -> [(a,c)] -> [(a,(b,c))]
(⋈) = join

-- for example:
[("a",1),("b",2),("c",3)] ⋈ [("a",'a'),("b",'b'),("c",'c')]

Friday, March 07, 2008

Crazy debugging feature idea

Here's a crazy idea. Closures are like objects with private fields, right? Specifically, outside of the function body, you're not allowed to refer to the variables in the closed environment. But one of the most annoying things that happen when I'm debugging is that I want to call some local function, but it's not available from the REPL:
(define (foo x y)
  (define (helper z)
    ... x ... y ...)
  ...)
What if, at the REPL (i.e., in debug mode only), closures were openable? You could use dot-notation to refer to them, and all you'd have to do is somehow provide the extra bindings needed to fill in their environment. Keyword arguments, maybe?
> ((close foo.helper [#:x 42] [#:y 'blah]) "value of z")
'result-value-ftw!

Wednesday, March 05, 2008

Xor

While I'm at it, let's continue the series of logical Maybe operators with xor:
infixr 2 ^^^

(^^^) :: Maybe a -> Maybe a -> Maybe a
Nothing ^^^ r@(Just _) = r
l@(Just _) ^^^ Nothing = l
_ ^^^ _ = Nothing

My current favorite little Haskell function

Here's my pet Haskell combinator du jour:
infixr 2 |||

(|||) :: Maybe a -> Maybe a -> Maybe a
Nothing ||| x = x
x ||| _ = x
This gives you a concise notation for branching in the Maybe monad, so instead of having to write:
case x `lookup` env1 of
  Just v  -> Just $ f1 v
  Nothing -> case x `lookup` env2 of
    Just v  -> Just $ f2 v
    Nothing -> Nothing
you can write:
do { v <- x `lookup` env1; return $ f1 v } |||
do { v <- x `lookup` env2; return $ f2 v }
This gives you the same kind of idiom that Schemers use or for. Come to think of it, I guess you could also write the analog of and:
infixr 3 &&&

(&&&) :: Maybe a -> Maybe a -> Maybe a
Just _ &&& r@(Just _) = r
_ &&& _ = Nothing
I haven't used this one yet, but it seems natural.
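For comparison, the same short-circuiting idiom can be approximated in any language with a null value. Here's a Python sketch (first_just is my name, not a library function) where None plays the role of Nothing:

```python
def first_just(*thunks):
    """Return the first non-None result, short-circuiting like (|||)."""
    for thunk in thunks:
        v = thunk()
        if v is not None:
            return v
    return None

env1 = {"x": 1}
env2 = {"x": 10, "y": 2}

result = first_just(
    lambda: env1.get("y"),  # Nothing: fall through
    lambda: env2.get("y"),  # Just 2: stop here
)
```

The thunks matter: evaluating the alternatives eagerly would defeat the short-circuiting, which Haskell gets for free from laziness.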

Tuesday, March 04, 2008

Intentional capture

Procedural hygienic macro systems like the syntax-case system make it possible to write capturing macros--macros which, depending on your philosophy, you might call "non-hygienic." The classic example is the "anaphoric" conditional form if-it, which implicitly binds a variable it to the result of the test expression:
(if-it 42 (+ it 1) #f) ; => 43
The difficulty in getting such a macro right comes when you try to write another macro that expands into if-it. To quote the mzscheme manual's section on macros, "macros that expand into non-hygienic macros rarely work as intended."

Andre van Tonder's SRFI 72 document contains a perfect and concise example, due to Kent Dybvig, of two different ways a macro might expand into a capturing macro. On the one hand, we might want to write when-it, a simple "one-armed" conditional that implicitly binds it in the same way as if-it:
(when-it 42 (+ it 1)) ; => 43
On the other hand, we might want to use if-it to implement the hygienic or macro, which shouldn't capture any variables.
(let ([it 10]) (or #f it)) ; => 10
First, here's the implementation of if-it: we create an identifier for it with the same lexical context as the operator of the expression:
(define-syntax (if-it stx)
  (syntax-case stx ()
    [(op e1 e2 e3)
     (with-syntax ([it (datum->syntax #'op 'it)])
       #'(let ([it e1])
           (if it e2 e3)))]))
The references that will be captured by the introduced binding of it are the ones that were introduced into the program in the same expansion step as the occurrence of if-it in the macro call. In particular, if the occurrence of if-it was in the original program (i.e., written explicitly by the programmer), it captures references to it that were in the original program; if the occurrence of if-it is the result of a macro expansion, it captures only those references to it that were generated in that same expansion step.

This means that a hygienic macro that expands into if-it will work as expected:
(define-syntax or
  (syntax-rules ()
    [(op e1 e2)
     (if-it e1 it e2)]))
Since the reference to it appears in the same expansion step as the occurrence of if-it, that reference is captured, but no references to it within subexpressions e1 or e2 (which had to have already been there before this expansion step) are captured.

If you want to write another capturing macro that expands into if-it, it's a little more work. Essentially, you have to capture it all over again. The moral of the story is that you always have to ask explicitly for a macro to capture an introduced identifier.
(define-syntax (when-it stx)
  (syntax-case stx ()
    [(op e1 e2)
     (with-syntax ([it* (datum->syntax #'op 'it)])
       #'(if-it e1 (let ([it* it]) e2) (void)))]))
Here we once again create an identifier with the same lexical context as the operator, and we bind it to the occurrence of it introduced by if-it.

These are good defaults for a hygienic macro system: it's easier to write hygienic macros but still possible (albeit a little harder) to write macros that capture. This is even true when you abstract over capturing macros: macros that expand into capturing macros are hygienic by default, but with a little more work again, you can create capturing macros that abstract over other capturing macros.

Friday, February 22, 2008

True unions

I am a Super Big Fan of disjoint union datatypes in programming languages, but there are places where they are really inconvenient. Having to inject and project data between related types can be prohibitively cumbersome, especially when dealing with large data definitions such as the AST definitions for realistic programming languages. I know of a couple of languages where "true" unions are used instead, including Typed Scheme and the draft ECMAScript Edition 4. In both cases, unions are being added to a language where the runtime is already paying the price for runtime type tags, so keeping the variants distinct where they don't overlap doesn't introduce any extra burden.

But I was thinking this morning, what would happen if you tried to add true unions to Hindley-Milner languages? For concreteness, let's imagine extending Haskell. So instead of writing
data Foo = I Int
         | S String
you could instead write
type Foo = Int | String
Now if you want to pattern match on such a construct, you have to explicitly mention the type; there's no other way to distinguish the variants. So you could imagine type-annotating the patterns in a match expression:
showFoo :: Foo -> String
showFoo foo = case foo of
  n :: Int    -> show n
  s :: String -> s
Note that because there's nothing preventing overlap between the variants of a true union, the order in which you match the patterns is significant. Alternatively, you could write the definition of showFoo in pattern-definition style:
showFoo :: Foo -> String
showFoo (n :: Int) = show n
showFoo (s :: String) = s
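Dynamically typed languages give a feel for how this would behave, since every value already carries a runtime tag. Here's showFoo as a Python sketch, where ordered type tests play the role of the type-annotated patterns (and, as with true unions, their order would matter if the variants overlapped):

```python
def show_foo(foo):
    # Dispatch on the value's runtime tag, first match wins.
    if isinstance(foo, int):
        return str(foo)
    if isinstance(foo, str):
        return foo
    raise TypeError(f"not a Foo: {foo!r}")
```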
Consider the possible drawbacks to such a feature:

Cost of tagging:

One of the benefits of (sound) static typing is the ability to compile to efficient runtimes that avoid tagging runtime objects with their datatype. The Foo type, by contrast, would require its Ints and Strings to be tagged. But this is the wrong way of looking at it: the language requires you to tag them anyway if you use the disjoint union type, so there's no additional cost over what you would already have been paying. For non-union types, you can still do the same amount of erasure.

Possible overlap:

You can express overlapping types like (Int | (Int | String)) with true unions. This makes the order of patterns significant, can result in surprising (order-dependent) logical relationships between case dispatches, and can generally lead to messy type definitions. Maybe a more principled way of looking at it is that a disjoint union can be thought of as a type abstraction, whereas with a true union you might have to know its full definition to use it effectively. But hey, the same is true of type definitions and they're still useful; and besides, nothing's preventing you from using disjoint unions when they're more appropriate.

Case exhaustiveness:

Standard ML's pattern matching facility is carefully designed to allow the compiler to prove or disprove case exhaustiveness. I tend to think this is a bad trade-off; the language is significantly crippled to enable a very weak theorem prover to prove a theorem that's of limited utility. Sure, it has its uses, but when you know things the theorem prover doesn't, you have to abandon pattern matching entirely. Other language designers seem to agree with me, since Haskell and I think Ocaml also don't check for case exhaustiveness.

Inference problems:

I have no expertise in Hindley-Milner inference, but I'm sure true unions are not inferrable. But one thing recent Haskell research has demonstrated is that convenient but uninferrable extensions to Hindley-Milner type systems can often be safely added with the requirement that they be explicitly annotated. I bet you could add true unions, require explicit type annotations when they're used, and get along just fine.

Function types and polymorphic types:

This is where it gets tougher. How do you tag these kinds of types? Do you do runtime subtyping checks for things like the more-polymorphic-than relation? I don't have an answer. A simple one is to restrict unions to types with simple tags, although those kinds of arbitrary restrictions lead to lack of compositionality in language design. But perhaps it's possible to make this work with the entire type system.

Some of this design space has been explored with Ocaml's polymorphic variants. I should take a look at the literature on that.

Sunday, January 06, 2008

Thank heavens for PLT v4

I've just finally installed my first pre-release of PLT v4 (actually v3.99.0.9 at present), and there are so many little conveniences that are already qualitatively improving my life. Here are a few of the ways v4 is making me happy:

automatic opt-lambda: As I understand it, the underlying low-level #%plain-lambda is like the original lambda, but in the full scheme language, the automatically available, high-level lambda has support for the functionality of opt-lambda from the old (lib "etc.ss") library. More often than not, that was the only reason I needed to require that library. It's so nice just to be able to reach for optional arguments right away, without having to jump through any extra hoops.

keyword integration: Moreover, the high-level lambda also has automatic support for keyword arguments! This means there's a very smooth evolution path for functions: 1) positional when you just need a few arguments; 2) optional when one or two of them have reasonable defaults; and 3) keywords when there are enough arguments with enough defaults that it's too hard for the user to remember their order.

built-in pattern matching: No need to require (lib "plt-match.ss") -- it's already in the full scheme language by default. I stumbled on that by accident!

generalized define-struct: There used to be a huge gap between what you could do with define-struct and what you could do with the very general but hard-to-use make-struct-type. But thanks to the expressivity of keywords, the new define-struct form is much more powerful. Plus it's so much easier to understand what
(define-struct posn (x y) #:transparent)
means, as opposed to
(define-struct posn (x y) #f)

built-in list libraries: Many more commonplace list operations are just automatically there when you use the full scheme language, such as foldl, foldr, last, andmap, ormap, build-list, etc. It used to be confusing trying to remember what was in (lib "list.ss") and what was in (lib "list.ss" "srfi" "1"). Now, if I'm not mistaken, I just don't ever need (lib "list.ss") anymore.

my first two columns are mine again: Last but not least, the new #lang module declaration form means I no longer have to indent the entire body of a module.

It looks like the PLT crew have really worked hard on fixing lots of these little inconveniences (some of which I only noticed consciously now that I no longer have to deal with them!).

Thursday, January 03, 2008

The Culpepper Method of Good Writing

Ryan Culpepper's algorithm for good writing:

Blather. Condense. Repeat.

Thursday, November 08, 2007

König's Lemma

How do you show that a simulation preserves non-termination? I think you could probably use coinduction, but I'm not very familiar with coinductive arguments. I just learned about a useful result called König's Lemma, which I think allows you to use a simple induction.

Let's say we have a simulation relation e′ ~ e ("e′ simulates e") and a proof that for any step in a specification semantics:
e1 → e2
we have related terms e1′ ~ e1 and e2′ ~ e2 such that
e1′ →+ e2′
It's easy to show by induction that if the specification semantics converges to a value then the implementation semantics converges to a related value. If the specification semantics diverges, i.e. has an infinite reduction sequence, then we'd like to show the implementation diverges too.

König's Lemma states that for any finitely branching tree, if there exists a path of length n from the root for any n, then there exists an infinite path from the root. Consider the tree of possible reduction sequences from a term, where branches indicate points of non-determinism in the semantics. If every point of non-determinism has only a finite number of alternative reductions, then the tree is finitely branching.

So now consider a diverging term e in the specification semantics. For any finite prefix of the infinite reduction sequence, we can easily show by induction that there is a reduction sequence in the implementation semantics of equal or greater length. Since the computation tree of the implementation term is finitely branching, König's Lemma provides an infinite reduction of the implementation.
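For reference, the lemma in one line:

```latex
\textbf{K\"{o}nig's Lemma.} Let $T$ be a finitely branching tree. If for
every $n \in \mathbb{N}$ there is a path of length $n$ from the root,
then $T$ has an infinite path from the root.
```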

Wednesday, October 17, 2007

Two optimizations, only one safe

PLT Scheme's module system suggests some simple compiler optimizations. For one, identifiers bound in a module cannot be mutated from outside the module. This makes it very easy to determine whether the binding is immutable: just scan the source for instances of set!. If there aren't any, you can rely on the binding's immutability for optimizations like constant propagation.

Here's another tempting one: if a module imports a library but doesn't use some of the library's exported bindings, it seems like it should be safe not to load those bindings, right? Nope! Because the macro system has complete access to the syntactic environment, a macro exported by the module might actually compute a reference to the binding:
(define-syntax (foo stx)
(syntax-case stx
... (datum->syntax-object
#'here
(string->symbol "mumbly-foo")) ...
The syntax object #'here encapsulates the entire syntactic environment, and if the imported library exports mumbly-foo, looking up 'mumbly-foo in the syntactic environment will find it. Similarly, run-time code in the module might perform
(eval (datum->syntax-object #'here 'mumbly-foo))
and it would again find the binding. So it's not generally possible to prune any library imports, as long as macros and eval have complete access to the syntactic environment through syntax objects.

Well, eval is scary... does this mean the first optimization is unsafe, say, if code run through eval tries to mutate a module binding? I don't think so. Module bindings are stored in the lexical (i.e., local) environment, which I believe is not available to eval. The bindings available to eval are the ones that come from PLT Scheme's "namespace" objects. As I understand it, these namespaces do not contain mappings for local variables, only for globals and module imports.

Monday, October 15, 2007

Up for air

It's been a busy month! Last week I got back from ICFP in lovely Freiburg, Germany, where I gave a talk about my functional pearl as well as a status report on the JavaScript reference implementation. ICFP was a blast--I can't count the number of cool people I got to meet, hang out with, trade ideas with, argue with, and drink delicious German beer with. Then as soon as I got back to Boston, I had just a week to finish my submission to ESOP with Mitch on our theory of hygienic macros.

Friday, September 07, 2007

Science and engineering

Joel Spolsky has a post about the phases of the software development cycle that's remarkably close to my own observations. In Joel's view, the first phase is art (i.e., design phase); the second is engineering (construction); and the third is science (debugging and testing).

Joel's interest is in project management and management tools, but mine is more in development tools. Once you recognize the divide between the engineering and science aspects of software development, you can better understand one of the tensions in the approach to development, a tension which leads to plenty of heated debate. This tension comes about because the Fundamental Immutable and Inviolable (Not to Mention Sacred, Holy, and Good) Laws of Engineering are sometimes at odds with the practice of science.

To wit: abstraction and modularity are the heart and lungs of software engineering. Rules #1, #2, and #3 are "localize concerns": i.e., don't repeat yourself, separate concerns, and enforce their orthogonality. More simply: use abstractions and don't violate them. By making one concern completely oblivious to (i.e., parametric in) another, you maximize your freedom to change one without affecting the other. This allows for local reasoning, which in turn leads to separable development and maintenance. Disciplined developers create layered abstractions and fastidiously respect their boundaries.

But what happens when you start debugging? Dogmatically adhering to abstraction boundaries is like wearing blinders; when a bug first arises, you never know which abstraction layer it's hiding in, or if it's in the interaction between layers. Another common consequence of thinking inside the abstraction box is impulsively assuming the bug is someone else's fault. ("The compiler must be broken!") I'm reminded of Knuth's quote about computer scientists:
Such people are especially good at dealing with situations where different rules apply in different cases; they are individuals who can rapidly change levels of abstraction, simultaneously seeing things "in the large" and "in the small."
          -- quoted in Hartmanis's Turing Award lecture
I think this is describing more the science and perhaps also the design aspects--but not the engineering aspect--of software development.

Because debugging and testing are about observing and understanding an existing system, rather than constructing or modifying a system, the barriers we construct to enforce our engineering principles become obstacles. Debugging tools, IDE's, testing frameworks, etc. are all characterized by a need to violate abstraction boundaries.

As a result, the Cleans and Dirties (as Mitch calls them) fight tooth and nail about whether our software development frameworks should be absolutely strict in their adherence to the FIaI(NtMSHaG)LoE (ML) or absolutely lax (Smalltalk). I wonder if we couldn't do better by building software frameworks that were aware of these different modes of development.

Wednesday, September 05, 2007

Progress is what matters

It's a well-known slogan in PL theory circles that Type Soundness = Preservation + Progress. But sometimes, people just write the Preservation lemma. Sometimes they even confuse type soundness with type preservation. Richard and I were chatting today and he reminded me that the goal of a type soundness theorem is to guarantee the absence of runtime type errors. In that sense, just proving that expressions remain well-typed is skipping the punch-line. The Progress lemma tells you that, assuming you're well-typed, you'll never get stuck with a type error--that's the point!
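For the record, the two lemmas in the usual notation, for a small-step relation $\to$:

```latex
\textbf{Preservation.} If $\vdash e : \tau$ and $e \to e'$,
then $\vdash e' : \tau$.

\textbf{Progress.} If $\vdash e : \tau$, then either $e$ is a value
or there exists $e'$ such that $e \to e'$.
```

Preservation alone lets a well-typed program get stuck; Progress is what rules stuck states out.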

Monday, August 20, 2007

Real-world trace analysis

Robert O'Callahan has just released a debugger called Chronomancer, which is based on a technology he's been working on for a little while: Chronicle is a database for storing and querying program events, built on top of Valgrind, a dynamic analysis tool that instruments Linux binaries to monitor their runtime behavior. Chronomancer is a C/C++ debugger deployed as an Eclipse plug-in. There's a rough screencast here, though I didn't follow everything he was doing.

One of the cooler aspects of the debugger is the idea of history-based stack reconstruction. There's a tug-of-war that goes on between debuggers and compilers: compilers want to mess with data structures and control that is not supposed to be observable to programs, and debuggers want to observe them! As a result, there's constant disagreement or confusion about the interface between the two, resulting in partial stack information or corrupted debugging information. Chronicle avoids these issues by relying on the history of events, rather than the stack layout, to determine what calls have happened when, and which ones have returned. This means, for example, that tail calls don't interfere with stack traces, because stack traces are based on program history instead of continuations.

The song that doesn't end

M Ward's song Chinese Translation sounds like an MIT AI koan from the 70's:
I met an old, old man
beneath a weeping willow tree
He said now if you got some questions
go and lay them at my feet...

And I said
What do you do with the pieces of a broken heart...

...and then the sun went down
and he sang for me this song

See I once was a young fool like you...
I met an old, old man
he sat beneath a sapling tree
He said now if you got some questions
go and lay them at my feet...

And I said
What do you do with the pieces of a broken heart...
The song ends before we reach a base case, but notice the trees get smaller, so there might be an inductive measure after all.

Thursday, August 02, 2007

ECMAScript's new home

Thanks to a generous domain-name donation from OpenDomain, the new home of ECMAScript is http://www.ecmascript.org! The previous domains we created for it now redirect to the new domain.

Saturday, July 21, 2007

Cute idiom from Haskell

I discovered this fun idiom while implementing an operational semantics in Haskell. The monadic syntax allows you to express monadic binding in either of two directions:
x <- expr;
or
expr -> x;
Combine that with a custom operator definition:
infix 4 |-
and now you can write interpreters whose syntax looks like a big-step evaluation judgment:
(|-) :: Env -> Expr -> Eval Value
env |- Lit n = return (Number n)
env |- Var x = case lookup x env of
                 Just v  -> return v
                 Nothing -> throwError $ "free variable " ++ x
env |- Abs x e = return (Closure env x e)
env |- App e1 e2 =
  do env |- e1 -> v1;
     env |- e2 -> v2;
     case v1 of
       Closure env' x e ->
         do (x,v2):env' |- e -> v;
            return v;
       Number n -> throwError $ "apply " ++ (show n)

Thursday, June 14, 2007

Unix quoting

Unix quoting is a great idea executed badly. The idea that you should be able to compose commands from the output of other commands is excellent. The problem is that you can't compose them arbitrarily.

Composition is completely hamstrung by a simple syntactic flaw: the begin- and end-delimiters for all the quoting forms are indistinguishable. So if I write:
`ls `pwd`/files`
the shell cannot determine whether the second occurrence of the backtick (`) character is ending the command or starting a sub-command. (The POSIX $(...) form of command substitution fixes exactly this: its begin- and end-delimiters are distinct, so substitutions nest.)

This wouldn't be so terrible if there were a reasonable form for local binding so you could write your shell commands in A-normal form.

Thursday, May 31, 2007

The OO dream?

Signed message-passing! Multi-language, multi-implementation services! Actor-based distributed computation! Is this the OO dream?
[A] portable implementation of Termite is underway. ... The implementation will also support signed message passing... [A] system will only be able to receive messages from trusted sources... Will this be a way to re-unite the Scheme community? Is it a better approach than providing a common package infrastructure?
From Dominique Boucher. Heady stuff. The mind reels at the possibilities. We'll see.

Sunday, May 20, 2007

Capture-avoiding substitution in PLT Redex, Part 2

Following up on yesterday's post, there's another way to specify capture-avoiding substitution that has a convenient representation in Scheme. In the last decade, Pitts and Gabbay have built a research program on reasoning about binding using an algebra of names with name-swapping as their fundamental operation. The notation
(a b) ⋅ x
means roughly "swap occurrences of names a and b in the term x". This is very easy to code in a general way using S-expressions:
(define-metafunction swap
  lambda-calculus
  [(x_1 x_2 x_1) x_2]
  [(x_1 x_2 x_2) x_1]
  [(x_1 x_2 (any_1 ...)) ((swap (x_1 x_2 any_1)) ...)]
  [(x_1 x_2 any_1) any_1])
The new definition of subst is very similar to the one I posted yesterday, except instead of using change-variable it uses swap:
(define-metafunction subst
  lambda-calculus
  [(x_1 e_1 (lambda (x_1) e_2))
   (lambda (x_1) e_2)]
  [(x_1 e_1 (lambda (x_2) e_2))
   ,(term-let ([x_new (variable-not-in (term e_1) (term x_2))])
      (term
        (lambda (x_new)
          (subst (x_1 e_1 (swap (x_2 x_new e_2)))))))]
  [(x_1 e_1 x_1) e_1]
  [(x_1 e_1 x_2) x_2]
  [(x_1 e_1 (e_2 e_3))
   ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))])
This corresponds to Pitts and Gabbay's definition of capture-avoiding substitution.

The cool thing about swap is that its definition doesn't have to change as you add new expression forms to your language; it's completely oblivious to the binding structure of the expression, and in a sense to any of the structure of the expression. All it needs is the ability to visit every node in the tree. So S-expressions as a term representation and swap as a variable-freshening operation fit together very nicely to form a convenient implementation of capture-avoiding substitution in PLT Redex.
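The obliviousness is easy to see if you transliterate swap out of Redex. Here's a Python sketch over nested lists standing in for S-expressions:

```python
def swap(a, b, term):
    """Swap all occurrences of names a and b, anywhere in the term."""
    if term == a:
        return b
    if term == b:
        return a
    if isinstance(term, list):
        return [swap(a, b, t) for t in term]
    return term  # any other atom is untouched
```

The traversal never asks whether a subterm is a binder, an application, or anything else; add new expression forms and swap is unchanged.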

Saturday, May 19, 2007

Capture-avoiding substitution in PLT Redex

There are lots of ways to implement substitution in PLT Redex (an embedded DSL in PLT Scheme for defining operational semantics with rewrite rules and evaluation contexts). I'll demonstrate with the lambda calculus, of course:
(define lambda-calculus
  (language
   [e (e e) x v]
   [v (lambda (x) e)]
   [x (variable-except lambda)]))
Since the early days, Redex has come with a library for building capture-avoiding substitution functions called subst. It's a little awkward to work with, though. Here's a definition of substitution using subst:
;; lc-subst : variable × expression × expression → expression
(define lc-subst
  (subst
   [`(lambda (,x) ,body)
    (all-vars (list x))
    (build (lambda (vars body) `(lambda ,vars ,body)))
    (subterm (list x) body)]
   [(? symbol?) (variable)]
   [`(,e1 ,e2)
    (all-vars '())
    (build (lambda (vars e1 e2) `(,e1 ,e2)))
    (subterm '() e1)
    (subterm '() e2)]))
The subst special form relies on the subform all-vars to list the bound variables of an expression. The build subform reconstructs an expression from its pieces, including the variables (potentially automatically freshened to avoid capture) and the subexpressions. Then the subterm subform identifies each subexpression and lists the bound variables in scope for the subexpression.

It takes a fair amount of work for the client to coax subst into automatically substituting or freshening bound variables. These days, the authors recommend implementing capture-avoiding substitution directly. The example on the Redex web site gives a definition of capture-avoiding substitution as a metafunction:
;; subst : variable × expression × expression → expression
(define-metafunction subst
  lambda-calculus
  ;; 1. x_1 bound, so don't continue in lambda body
  [(x_1 e_1 (lambda (x_1) e_2))
   (lambda (x_1) e_2)]
  ;; 2. in this case, we know that no capture can occur,
  ;; so it is safe to recur normally.
  [(x_1 e_1 (lambda (x_2) e_2))
   (lambda (x_2) (subst (x_1 e_1 e_2)))
   (side-condition
    (equal? (variable-not-in (term e_1) (term x_2))
            (term x_2)))]
  ;; 3. general-purpose capture-avoiding case
  [(x_1 e_1 (lambda (x_2) e_2))
   ,(term-let ([x_new (variable-not-in (term (x_1 e_1 e_2))
                                       (term x_2))])
      (term
       (lambda (x_new)
         (subst (x_1 e_1 (subst (x_2 x_new e_2)))))))]
  ;; 4. replace x_1 with e_1
  [(x_1 e_1 x_1) e_1]
  ;; 5. x_1 and x_2 are different, so don't replace
  [(x_1 e_1 x_2) x_2]
  ;; 6. ordinary recursive case
  [(x_1 e_1 (e_2 e_3))
   ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))])
This implements capture-avoiding substitution with Redex's new define-metafunction form. Redex's term macro is a quasiquote-like data constructor that produces S-expressions; define-metafunction defines implicitly unquoted functions that can be invoked within a term form. The subst metafunction substitutes an expression e_1 for a variable x_1 within an expression e_2 by applying
(subst (x_1 e_1 e_2))
within a term form.

This metafunction is correct, but I find its definition a little subtle. I think it's clearer to separate concerns a little more and divide its definition into two pieces. Following Gunter's definition of capture-avoiding substitution we define a separate "change of variable" function:
;; change-variable : variable × variable × expression → expression
(define-metafunction change-variable
  lambda-calculus
  [(x_1 x_2 x_1) x_2]
  [(x_1 x_2 x_3) x_3]
  [(x_1 x_2 (lambda (x_1) e_1))
   (lambda (x_1) e_1)]
  [(x_1 x_2 (lambda (x_3) e_1))
   (lambda (x_3) (change-variable (x_1 x_2 e_1)))]
  [(x_1 x_2 (e_1 e_2))
   ((change-variable (x_1 x_2 e_1)) (change-variable (x_1 x_2 e_2)))])
This function replaces a variable name x_1 with a new name x_2 within an expression e_1. The subst metafunction uses change-variable for renaming a bound variable with a fresh name.
;; subst : variable × expression × expression → expression
(define-metafunction subst
  lambda-calculus
  [(x_1 e_1 (lambda (x_1) e_2))
   (lambda (x_1) e_2)]
  [(x_1 e_1 (lambda (x_2) e_2))
   ,(term-let ([x_new (variable-not-in (term e_1) (term x_2))])
      (term
       (lambda (x_new)
         (subst (x_1 e_1 (change-variable (x_2 x_new e_2)))))))]
  [(x_1 e_1 x_1) e_1]
  [(x_1 e_1 x_2) x_2]
  [(x_1 e_1 (e_2 e_3))
   ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))])
I prefer this definition of capture-avoiding substitution for several reasons. First, it corresponds directly to Gunter's definition. Furthermore, its runtime efficiency is a little clearer. The first definition of the metafunction calls itself recursively twice on the same subexpression in case #3; this doesn't cause exponential growth because one of the two calls merely substitutes one variable for another, making it equivalent to change-variable and consequently more efficient. But that took me a while to figure out. Finally, the types are a little tighter: if we were just defining call-by-value substitution, for example, subst could take a value for its second argument rather than an arbitrary expression.
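Neither metafunction runs outside Redex, so for illustration here is a rough Python sketch of the change-variable/subst pair over nested lists. The representation, the helper names (variables, fresh), and the unconditional freshening in the lambda case are my choices, not the post's; the Redex version only freshens when a name would actually be captured.

```python
from itertools import count

# Terms: a variable is a string; an application is [f, a];
# an abstraction is ['lambda', [x], body].

def variables(e):
    """All variable names occurring anywhere in e."""
    if isinstance(e, str):
        return {e}
    return set().union(*(variables(s) for s in e))

def fresh(avoid):
    """A variable name not in the set avoid."""
    return next(f"x{i}" for i in count() if f"x{i}" not in avoid)

def change_variable(old, new, e):
    """Rename free occurrences of old to new in e."""
    if isinstance(e, str):
        return new if e == old else e
    if e[0] == 'lambda':
        [x] = e[1]
        if x == old:           # old is shadowed; stop here
            return e
        return ['lambda', [x], change_variable(old, new, e[2])]
    return [change_variable(old, new, s) for s in e]

def subst(x, e1, e2):
    """Capture-avoiding substitution of e1 for x in e2."""
    if isinstance(e2, str):
        return e1 if e2 == x else e2
    if e2[0] == 'lambda':
        [y] = e2[1]
        if y == x:             # x is bound here; don't descend
            return e2
        # Freshen the bound variable, then substitute in the body.
        y_new = fresh(variables(e1) | variables(e2))
        return ['lambda', [y_new],
                subst(x, e1, change_variable(y, y_new, e2[2]))]
    return [subst(x, e1, s) for s in e2]
```

For instance, substituting y for x in (lambda (y) (x y)) first renames the bound y to a fresh name, so the free y being substituted in is not captured.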

Friday, May 18, 2007

Food for thought from Robert O'Callahan

Robert O'Callahan, former CMUer and current Mozilla hacker extraordinaire, has a couple thoughts on research directions for PL:
I also suggested that the worst code is not necessarily buggy code, but code that is unnecessarily complex. Detecting that would be an interesting new direction for program analysis.
And also:
...[T]he state of parallel programming models, languages and tools remains pathetic for general-purpose single-user programs and no breakthrough should be expected. My position is that for regular desktop software to scale to 32 cores by 2011 (as roadmaps predict) we'd have to rewrite everything above the kernel, starting today, using some parallel programming model that doesn't suck. Since that model doesn't exist, it's already too late. Probably we will scale out to a handful of cores, with some opportunistic task or data parallelism, and then hit Amdahl's law, hard. It is probably therefore more fruitful to focus on new kinds of applications which we think we have reasonable ideas for parallelizing. I think virtual worlds (which are not just "games", people!) are a prime candidate. That's a direction I think the PL/software engineering research community should be pushing in.

Wednesday, April 25, 2007

A quick diversion

When I was in high school I created a little fractal similar to the Koch curve and programmed it in BASIC.

The basic algorithm replaces a line segment

with nine line segments like so:

You can repeat the process arbitrarily, and you get pretty curves looking like this:

It was the first hard program I ever wrote (made especially hard by the fact that I didn't know about datatypes, recursion, or even basic procedural abstraction, really), and I was really proud of my 15-year-old self. Every once in a while I think of another cute way to program it, and it makes me nostalgic. Here's a nice little one.

A direction is one of four symbols, 'N, 'E, 'S, or 'W. An orientation is one of two symbols, 'CW or 'CCW.

To rotate a direction 90 degrees clockwise or counter-clockwise, keep a "clock" of the cardinal directions, and rotate the clock index:
(define clock '(N E S W))

;; rotate : orientation * direction -> direction
;; (list-index comes from the SRFI-1 list library)
(define (rotate or dir)
  (let ([shift (if (eq? or 'CW) add1 sub1)])
    (list-ref clock
              (modulo (shift (list-index (lambda (x) (eq? x dir))
                                         clock))
                      4))))
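The same clock trick can be sketched in Python, where list.index plays the role of SRFI-1's list-index (the names here are mine):

```python
# The cardinal directions, in clockwise order.
CLOCK = ['N', 'E', 'S', 'W']

def rotate(orientation, direction):
    """Rotate direction 90 degrees; orientation is 'CW' or 'CCW'."""
    shift = 1 if orientation == 'CW' else -1
    return CLOCK[(CLOCK.index(direction) + shift) % 4]
```

So rotating 'N' clockwise gives 'E', and rotating 'N' counter-clockwise wraps around the clock to 'W'.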
Then computing a fractal iteration is decomposed into two stages. The first stage computes the list of directions for each line segment in sequence. It recursively computes each iteration by replacing each direction in the previous iteration with a sequence of nine rotated directions.
;; directions-for : nat * direction -> (listof direction)
(define (directions-for n dir)
  (if (zero? n)
      (list dir)
      (append-map (lambda (d)
                    (list d
                          (rotate 'CCW d)
                          d
                          (rotate 'CW d)
                          d
                          (rotate 'CW d)
                          d
                          (rotate 'CCW d)
                          d))
                  (directions-for (sub1 n) dir))))
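This stage translates to Python directly; each iteration replaces every direction with the nine-direction pattern d, CCW d, d, CW d, d, CW d, d, CCW d, d, so iteration n has 9^n segments (rotate is repeated here so the sketch stands alone):

```python
CLOCK = ['N', 'E', 'S', 'W']

def rotate(orientation, direction):
    """Rotate direction 90 degrees; orientation is 'CW' or 'CCW'."""
    shift = 1 if orientation == 'CW' else -1
    return CLOCK[(CLOCK.index(direction) + shift) % 4]

def directions_for(n, d):
    """Direction of each line segment after n iterations, starting from d."""
    if n == 0:
        return [d]
    out = []
    for d0 in directions_for(n - 1, d):
        ccw, cw = rotate('CCW', d0), rotate('CW', d0)
        out += [d0, ccw, d0, cw, d0, cw, d0, ccw, d0]
    return out
```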
The second stage computes the actual line segments by simply "moving the cursor" from the starting point according to each subsequent direction.
;; fractal-iteration : nat * (listof direction) * (cons nat nat)
;;                  -> (listof (cons (cons nat nat) (cons nat nat)))
(define (fractal-iteration len dirs point)
  (let ([x (car point)]
        [y (cdr point)])
    (if (null? dirs)
        null
        (let ([point* (case (car dirs)
                        [(N) (cons x (- y len))]
                        [(E) (cons (+ x len) y)]
                        [(S) (cons x (+ y len))]
                        [(W) (cons (- x len) y)])])
          (cons (cons point point*)
                (fractal-iteration len (cdr dirs) point*))))))
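The cursor-moving stage can likewise be sketched as an accumulating loop in Python (my own tuple representation for points and segments; note that 'N decreases y, matching screen coordinates):

```python
def fractal_iteration(length, dirs, point):
    """Turn a list of directions into line segments, starting at point.

    Each segment is a pair (start, end) of (x, y) tuples; the cursor
    moves by length in each successive direction.
    """
    segments = []
    x, y = point
    for d in dirs:
        if d == 'N':
            nxt = (x, y - length)
        elif d == 'E':
            nxt = (x + length, y)
        elif d == 'S':
            nxt = (x, y + length)
        else:  # 'W'
            nxt = (x - length, y)
        segments.append(((x, y), nxt))
        x, y = nxt
    return segments
```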

Tuesday, April 10, 2007

HOAS vs. Nominal Logic

I don't yet have a deep understanding in the debate between higher-order abstract syntax and nominal logic, but this is an interesting summary from Karl Crary and Bob Harper in a short article in the Logic Column for SIGACT News:
“The real contrast between HOAS and nominal logic lies in their philosophy toward binding structure. Both HOAS and nominal logic provide a sophisticated treatment of binding structure not enjoyed by first-order accounts, but ones of very different character. The philosophy of LF is that binding and alpha-conversion are fundamental concepts, and thus builds them in at the lowest level. Consequently, name management in LF is ordinarily a non-issue. In contrast, nominal logic promotes name management as an object of study. It makes binding into a sophisticated artifact, rather than a primitive concept as in LF.”