Thursday, November 08, 2007

König's Lemma

How do you show that a simulation preserves non-termination? I think you could probably use coinduction, but I'm not very familiar with coinductive arguments. I just learned about a useful result called König's Lemma, which I think allows you to use a simple induction.

Let's say we have a simulation relation e′ ~ e ("e′ simulates e") and a proof that for any step in the specification semantics
e1 → e2
and any related term e1′ ~ e1, there exists a term e2′ ~ e2 such that
e1′ →+ e2′
It's easy to show by induction that if the specification semantics converges to a value then the implementation semantics converges to a related value. If the specification semantics diverges, i.e. has an infinite reduction sequence, then we'd like to show the implementation diverges too.

König's Lemma states that for any finitely branching tree, if there exists a path of length n from the root for any n, then there exists an infinite path from the root. Consider the tree of possible reduction sequences from a term, where branches indicate points of non-determinism in the semantics. If every point of non-determinism has only a finite number of alternative reductions, then the tree is finitely branching.

So now consider a diverging term e in the specification semantics. For any finite prefix of the infinite reduction sequence, we can easily show by induction that there is a reduction sequence in the implementation semantics of equal or greater length. Since the computation tree of the implementation term is finitely branching, König's Lemma provides an infinite reduction of the implementation.

Wednesday, October 17, 2007

Two optimizations, only one safe

PLT Scheme's module system suggests some simple compiler optimizations. For one, identifiers bound in a module cannot be mutated from outside the module. This makes it very easy to determine whether the binding is immutable: just scan the source for instances of set!. If there aren't any, you can rely on the binding's immutability for optimizations like constant propagation.
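A check like that can be a straightforward walk over the module's S-expression source. Here is a minimal sketch (my own toy version, not PLT's actual implementation); it errs on the safe side, since a false positive (say, from a set! inside a quote or of a shadowing local) only costs an optimization opportunity:
;; mutated? : symbol s-expression -> boolean
;; Is id ever the target of a set! anywhere in the given source?
(define (mutated? id body)
  (cond
    [(and (pair? body)
          (eq? (car body) 'set!)
          (pair? (cdr body))
          (eq? (cadr body) id))
     #t]
    [(pair? body)
     (or (mutated? id (car body))
         (mutated? id (cdr body)))]
    [else #f]))

;; e.g. (mutated? 'x '(define (f) (set! x 2)))  => #t
;;      (mutated? 'x '(define (f) (+ x 2)))     => #f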

Here's another tempting one: if a module imports a library but doesn't use some of the library's exported bindings, it seems like it should be safe not to load those bindings, right? Nope! Because the macro system has complete access to the syntactic environment, a macro exported by the module might actually compute a reference to the binding:
(define-syntax (foo stx)
  (syntax-case stx
    ... (datum->syntax-object
          #'here
          (string->symbol "mumbly-foo")) ...
The syntax object #'here encapsulates the entire syntactic environment, and if the imported library exports mumbly-foo, looking up 'mumbly-foo in the syntactic environment will find it. Similarly, run-time code in the module might perform
(eval (datum->syntax-object #'here 'mumbly-foo))
and it would again find the binding. So it's not generally possible to prune any library imports, as long as macros and eval have complete access to the syntactic environment through syntax objects.

Well, eval is scary... does this mean the first optimization is unsafe, say, if eval tries to mutate it? I don't think so. Module bindings are stored in the lexical (i.e., local) environment, which I believe is not available to eval. The bindings available to eval are the ones that come from PLT Scheme's "namespace" objects. As I understand it, these namespaces do not contain mappings for local variables, only for globals and module imports.

Monday, October 15, 2007

Up for air

It's been a busy month! Last week I got back from ICFP in lovely Freiburg, Germany, where I gave a talk about my functional pearl as well as a status report on the JavaScript reference implementation. ICFP was a blast--I can't count the number of cool people I got to meet, hang out with, trade ideas with, argue with, and drink delicious German beer with. Then as soon as I got back to Boston, I had just a week to finish my submission to ESOP with Mitch on our theory of hygienic macros.

Friday, September 07, 2007

Science and engineering

Joel Spolsky has a post about the phases of the software development cycle that's remarkably close to my own observations. In Joel's view, the first phase is art (i.e., design phase); the second is engineering (construction); and the third is science (debugging and testing).

Joel's interest is in project management and management tools, but mine is more in development tools. Once you recognize the divide between the engineering and science aspects of software development, you can better understand one of the tensions in the approach to development, a tension which leads to plenty of heated debate. This tension comes about because the Fundamental Immutable and Inviolable (Not to Mention Sacred, Holy, and Good) Laws of Engineering are sometimes at odds with the practice of science.

To wit: abstraction and modularity are the heart and lungs of software engineering. Rules #1, #2, and #3 are "localize concerns (i.e., Don't Repeat Yourself), separate concerns, and enforce their orthogonality." More simply: use abstractions and don't violate them. By making one concern completely oblivious to (i.e., parametric in) another, you maximize your freedom to change one without affecting the other. This allows for local reasoning, which in turn leads to separable development and maintenance. Disciplined developers create layered abstractions and fastidiously respect their boundaries.

But what happens when you start debugging? Dogmatically adhering to abstraction boundaries is like wearing blinders; when a bug first arises, you never know which abstraction layer it's hiding in, or if it's in the interaction between layers. Another common consequence of thinking inside the abstraction box is impulsively assuming the bug is someone else's fault. ("The compiler must be broken!") I'm reminded of Knuth's quote about computer scientists:
Such people are especially good at dealing with situations where different rules apply in different cases; they are individuals who can rapidly change levels of abstraction, simultaneously seeing things "in the large" and "in the small."
          -- quoted in Hartmanis's Turing Award lecture
I think this is describing more the science and perhaps also the design aspects--but not the engineering aspect--of software development.

Because debugging and testing are about observing and understanding an existing system, rather than constructing or modifying a system, the barriers we construct to enforce our engineering principles become obstacles. Debugging tools, IDE's, testing frameworks, etc. are all characterized by a need to violate abstraction boundaries.

As a result, the Cleans and Dirties (as Mitch calls them) fight tooth and nail about whether our software development frameworks should be absolutely strict in their adherence to the FIaI(NtMSHaG)LoE (ML) or absolutely lax (Smalltalk). I wonder if we couldn't do better by building software frameworks that were aware of these different modes of development.

Wednesday, September 05, 2007

Progress is what matters

It's a well-known slogan in PL theory circles that Type Soundness = Preservation + Progress. But sometimes, people just write the Preservation lemma. Sometimes they even confuse type soundness with type preservation. Richard and I were chatting today and he reminded me that the goal of a type soundness theorem is to guarantee the absence of runtime type errors. In that sense, just proving that expressions remain well-typed is skipping the punch-line. The Progress lemma tells you that, assuming you're well-typed, you'll never get stuck with a type error--that's the point!
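Spelled out in the usual formulation (roughly, and ignoring stores and other bookkeeping):
  Preservation: if ⊢ e : τ and e → e′, then ⊢ e′ : τ.
  Progress: if ⊢ e : τ, then either e is a value or there exists some e′ such that e → e′.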

Monday, August 20, 2007

Real-world trace analysis

Robert O'Callahan has just released a debugger called Chronomancer, which is based on a technology he's been working on for a little while: Chronicle is a database for storing and querying program events, built on top of Valgrind, a dynamic analysis tool that instruments Linux binaries to monitor their runtime behavior. Chronomancer is a C/C++ debugger deployed as an Eclipse plug-in. There's a rough screencast here, though I didn't follow everything he was doing.

One of the cooler aspects of the debugger is the idea of history-based stack reconstruction. There's a tug-of-war that goes on between debuggers and compilers: compilers want to mess with data structures and control in ways that aren't supposed to be observable to programs, and debuggers want to observe exactly those things! As a result, there's constant disagreement or confusion about the interface between the two, resulting in partial stack information or corrupted debugging information. Chronicle avoids these issues by relying on the history of events, rather than the stack layout, to determine what calls have happened when, and which ones have returned. This means, for example, that tail calls don't interfere with stack traces, because stack traces are based on program history instead of continuations.

The song that doesn't end

M Ward's song Chinese Translation sounds like an MIT AI koan from the 70's:
I met an old, old man
beneath a weeping willow tree
He said now if you got some questions
go and lay them at my feet...

And I said
What do you do with the pieces of a broken heart...

...and then the sun went down
and he sang for me this song

See I once was a young fool like you...
I met an old, old man
he sat beneath a sapling tree
He said now if you got some questions
go and lay them at my feet...

And I said
What do you do with the pieces of a broken heart...
The song ends before we reach a base case, but notice the trees get smaller, so there might be an inductive measure after all.

Thursday, August 02, 2007

ECMAScript's new home

Thanks to a generous domain-name donation from OpenDomain, the new home of ECMAScript is http://www.ecmascript.org! The previous domains we created for it now redirect to the new domain.

Saturday, July 21, 2007

Cute idiom from Haskell

I discovered this fun idiom while implementing an operational semantics in Haskell. The monadic syntax allows you to express monadic binding in either of two directions:
x <- expr;
or
expr -> x;
Combine that with a custom operator definition:
infix 4 |-
and now you can write interpreters whose syntax looks like a big-step evaluation judgment:
(|-) :: Env -> Expr -> Eval Value
env |- Lit n = return (Number n)
env |- Var x = case lookup x env of
                 Just v -> return v
                 Nothing -> throwError $ "free variable " ++ x
env |- Abs x e = return (Closure env x e)
env |- App e1 e2 =
  do env |- e1 -> v1;
     env |- e2 -> v2;
     case v1 of
       Closure env' x e ->
         do (x,v2):env' |- e -> v;
            return v;
       Number n -> throwError $ "apply " ++ (show n)

Thursday, June 14, 2007

Unix quoting

Unix quoting is a great idea executed badly. The idea that you should be able to compose commands from the output of other commands is excellent. The problem is that you can't compose them arbitrarily.

Composition is completely hamstrung by a simple syntactic flaw: the begin- and end-delimiters for all the quoting forms are indistinguishable. So if I write:
`ls `pwd`/files`
the shell cannot determine whether the second occurrence of the backtick (`) character is ending the command or starting a sub-command.

This wouldn't be so terrible if there were a reasonable form for local binding so you could write your shell commands in A-normal form.

Thursday, May 31, 2007

The OO dream?

Signed message-passing! Multi-language, multi-implementation services! Actor-based distributed computation! Is this the OO dream?
[A] portable implementation of Termite is underway. ... The implementation will also support signed message passing... [A] system will only be able to receive messages from trusted sources... Will this be a way to re-unite the Scheme community? Is it a better approach than providing a common package infrastructure?
From Dominique Boucher. Heady stuff. The mind reels at the possibilities. We'll see.

Sunday, May 20, 2007

Capture-avoiding substitution in PLT Redex, Part 2

Following up on yesterday's post, there's another way to specify capture-avoiding substitution that has a convenient representation in Scheme. In the last decade, Pitts and Gabbay have built a research program on reasoning about binding using an algebra of names with name-swapping as their fundamental operation. The notation
(a b) ⋅ x
means roughly "swap occurrences of names a and b in the term x". This is very easy to code in a general way using S-expressions:
(define-metafunction swap
  lambda-calculus
  [(x_1 x_2 x_1) x_2]
  [(x_1 x_2 x_2) x_1]
  [(x_1 x_2 (any_1 ...)) ((swap (x_1 x_2 any_1)) ...)]
  [(x_1 x_2 any_1) any_1])
The new definition of subst is very similar to the one I posted yesterday, except instead of using change-variable it uses swap:
(define-metafunction subst
  lambda-calculus
  [(x_1 e_1 (lambda (x_1) e_2))
   (lambda (x_1) e_2)]
  [(x_1 e_1 (lambda (x_2) e_2))
   ,(term-let ([x_new (variable-not-in (term e_1) (term x_2))])
      (term
       (lambda (x_new)
         (subst (x_1 e_1 (swap (x_2 x_new e_2)))))))]
  [(x_1 e_1 x_1) e_1]
  [(x_1 e_1 x_2) x_2]
  [(x_1 e_1 (e_2 e_3))
   ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))])
This corresponds to Pitts and Gabbay's definition of capture-avoiding substitution.

The cool thing about swap is that its definition doesn't have to change as you add new expression forms to your language; it's completely oblivious to the binding structure of the expression, and in a sense to any of the structure of the expression. All it needs is the ability to visit every node in the tree. So S-expressions as a term representation and swap as a variable-freshening operation fit together very nicely to form a convenient implementation of capture-avoiding substitution in PLT Redex.
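For instance (a hypothetical use, in the same invocation style as the definitions above), swapping x and y touches binding and bound occurrences alike:
(term (swap (x y (lambda (x) (x y)))))
;; => (lambda (y) (y x))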

Saturday, May 19, 2007

Capture-avoiding substitution in PLT Redex

There are lots of ways to implement substitution in PLT Redex (an embedded DSL in PLT Scheme for defining operational semantics with rewrite rules and evaluation contexts). I'll demonstrate with the lambda calculus, of course:
(define lambda-calculus
  (language
   [e (e e) x v]
   [v (lambda (x) e)]
   [x (variable-except lambda)]))
Since the early days, Redex has come with a library for building capture-avoiding substitution functions called subst. It's a little awkward to work with, though. Here's a definition of substitution using subst:
;; lc-subst : variable × expression × expression → expression
(define lc-subst
  (subst
   [`(lambda (,x) ,body)
    (all-vars (list x))
    (build (lambda (vars body) `(lambda ,vars ,body)))
    (subterm (list x) body)]
   [(? symbol?) (variable)]
   [`(,e1 ,e2)
    (all-vars '())
    (build (lambda (vars e1 e2) `(,e1 ,e2)))
    (subterm '() e1)
    (subterm '() e2)]))
The subst special form relies on the subform all-vars to list the bound variables of an expression. The build subform reconstructs an expression from its pieces, including the variables (potentially automatically freshened to avoid capture) and the subexpressions. Then the subterm subform identifies each subexpression and lists the bound variables in scope for the subexpression.

This requires a fair amount of work for the client to coax subst into automatically substituting or freshening bound variables. These days, the authors recommend directly implementing capture-avoiding substitution. The example on the Redex web site gives a definition of capture-avoiding substitution as a metafunction:
;; subst : variable × expression × expression → expression
(define-metafunction subst
  lambda-calculus
  ;; 1. x_1 bound, so don't continue in lambda body
  [(x_1 e_1 (lambda (x_1) e_2))
   (lambda (x_1) e_2)]
  ;; 2. in this case, we know that no capture can occur,
  ;; so it is safe to recur normally.
  [(x_1 e_1 (lambda (x_2) e_2))
   (lambda (x_2) (subst (x_1 e_1 e_2)))
   (side-condition
    (equal? (variable-not-in (term e_1) (term x_2))
            (term x_2)))]
  ;; 3. general purpose capture avoiding case
  [(x_1 e_1 (lambda (x_2) e_2))
   ,(term-let ([x_new (variable-not-in (term (x_1 e_1 e_2))
                                       (term x_2))])
      (term
       (lambda (x_new)
         (subst (x_1 e_1 (subst (x_2 x_new e_2)))))))]
  ;; 4. replace x_1 with e_1
  [(x_1 e_1 x_1) e_1]
  ;; 5. x_1 and x_2 are different, so don't replace
  [(x_1 e_1 x_2) x_2]
  ;; 6. ordinary recursive case
  [(x_1 e_1 (e_2 e_3))
   ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))])
This implements capture-avoiding substitution with Redex's new define-metafunction form. Redex's term macro is a quasiquote-like data constructor that produces S-expressions; define-metafunction defines implicitly unquoted functions that can be invoked within a term form. The subst metafunction substitutes an expression e_1 for a variable x_1 within an expression e_2 by applying
(subst (x_1 e_1 e_2))
within a term form.

This metafunction is correct, but I find its definition a little subtle. I think it's clearer to separate concerns a little more and divide its definition into two pieces. Following Gunter's definition of capture-avoiding substitution we define a separate "change of variable" function:
;; change-variable : variable × variable × expression → expression
(define-metafunction change-variable
  lambda-calculus
  [(x_1 x_2 x_1) x_2]
  [(x_1 x_2 x_3) x_3]
  [(x_1 x_2 (lambda (x_1) e_1))
   (lambda (x_1) e_1)]
  [(x_1 x_2 (lambda (x_3) e_1))
   (lambda (x_3) (change-variable (x_1 x_2 e_1)))]
  [(x_1 x_2 (e_1 e_2))
   ((change-variable (x_1 x_2 e_1)) (change-variable (x_1 x_2 e_2)))])
This function replaces a variable name x_1 with a new name x_2 within an expression e_1. The subst metafunction uses change-variable for renaming a bound variable with a fresh name.
;; subst : variable × expression × expression → expression
(define-metafunction subst
  lambda-calculus
  [(x_1 e_1 (lambda (x_1) e_2))
   (lambda (x_1) e_2)]
  [(x_1 e_1 (lambda (x_2) e_2))
   ,(term-let ([x_new (variable-not-in (term e_1) (term x_2))])
      (term
       (lambda (x_new)
         (subst (x_1 e_1 (change-variable (x_2 x_new e_2)))))))]
  [(x_1 e_1 x_1) e_1]
  [(x_1 e_1 x_2) x_2]
  [(x_1 e_1 (e_2 e_3))
   ((subst (x_1 e_1 e_2)) (subst (x_1 e_1 e_3)))])
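As a quick sanity check (a hypothetical example, again in the same invocation style), substituting y for x under a lambda that happens to bind y exercises the freshening clause:
(term (subst (x y (lambda (y) (x y)))))
;; => (lambda (y1) (y y1))   ; for some fresh variable such as y1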
I prefer this definition of capture-avoiding substitution for several reasons. First, it corresponds directly to Gunter's definition. Furthermore, its runtime efficiency is a little clearer. The first definition of the metafunction recursively calls itself twice on the same subexpression in case #3; the reason why this doesn't cause exponential growth is because its behavior in one of the two cases is equivalent to change-variable (because it substitutes a variable) and consequently more efficient. But this took me a while to figure out. Finally, the types are a little tighter. For example, if we were just defining call-by-value substitution, subst could take a value for its second argument, rather than arbitrary expressions.

Friday, May 18, 2007

Food for thought from Robert O'Callahan

Robert O'Callahan, former CMUer and current Mozilla hacker extraordinaire, has a couple thoughts on research directions for PL:
I also suggested that the worst code is not necessarily buggy code, but code that is unnecessarily complex. Detecting that would be an interesting new direction for program analysis.
And also:
...[T]he state of parallel programming models, languages and tools remains pathetic for general-purpose single-user programs and no breakthrough should be expected. My position is that for regular desktop software to scale to 32 cores by 2011 (as roadmaps predict) we'd have to rewrite everything above the kernel, starting today, using some parallel programming model that doesn't suck. Since that model doesn't exist, it's already too late. Probably we will scale out to a handful of cores, with some opportunistic task or data parallelism, and then hit Amdahl's law, hard. It is probably therefore more fruitful to focus on new kinds of applications which we think we have reasonable ideas for parallelizing. I think virtual worlds (which are not just "games", people!) are a prime candidate. That's a direction I think the PL/software engineering research community should be pushing in.

Wednesday, April 25, 2007

A quick diversion

When I was in high school I created a little fractal similar to the Koch curve and programmed it in BASIC.

The basic algorithm replaces a line segment

with nine line segments like so:

You can repeat the process arbitrarily, and you get pretty curves looking like this:

It was the first hard program I ever wrote (made especially hard by the fact that I didn't know about datatypes, recursion, or even basic procedural abstraction, really), and I was really proud of my 15-year-old self. Every once in a while I think of another cute way to program it, and it makes me nostalgic. Here's a nice little one.

A direction is one of four symbols, 'N, 'E, 'S, or 'W. An orientation is one of two symbols, 'CW or 'CCW.

To rotate a direction 90 degrees clockwise or counter-clockwise, keep a "clock" of the cardinal directions, and rotate the clock index:
(define clock '(N E S W))

;; rotate : orientation * direction -> direction
(define (rotate or dir)
  (let ([shift (if (eq? or 'CW) add1 sub1)])
    (list-ref clock (modulo (shift (list-index (lambda (x)
                                                 (eq? x dir))
                                               clock))
                            4))))
Then computing a fractal iteration is decomposed into two stages. The first stage computes the list of directions for each line segment in sequence. It recursively computes each iteration by replacing each direction in the previous iteration with a sequence of nine rotated directions.
;; directions-for : nat * direction -> (listof direction)
(define (directions-for n dir)
  (if (zero? n)
      (list dir)
      (append-map (lambda (d)
                    (list d
                          (rotate 'CCW d)
                          d
                          (rotate 'CW d)
                          d
                          (rotate 'CW d)
                          d
                          (rotate 'CCW d)
                          d))
                  (directions-for (sub1 n) dir))))
The second stage computes the actual line segments by simply "moving the cursor" from the starting point according to each subsequent direction.
;; fractal-iteration : nat * (listof direction) * (cons nat nat)
;; -> (listof (cons (cons nat nat) (cons nat nat)))
(define (fractal-iteration len dirs point)
  (let ([x (car point)]
        [y (cdr point)])
    (if (null? dirs)
        null
        (let ([point* (case (car dirs)
                        [(N) (cons x (- y len))]
                        [(E) (cons (+ x len) y)]
                        [(S) (cons x (+ y len))]
                        [(W) (cons (- x len) y)])])
          (cons (cons point point*)
                (fractal-iteration len (cdr dirs) point*))))))
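Putting the two stages together (a hypothetical driver; the name is mine), one call produces the segment list for a given iteration depth and segment length:
;; fractal-segments : nat * nat * (cons nat nat)
;;   -> (listof (cons (cons nat nat) (cons nat nat)))
(define (fractal-segments n len start)
  (fractal-iteration len (directions-for n 'E) start))

;; e.g. (fractal-segments 0 10 (cons 0 0))  => (((0 . 0) . (10 . 0)))
;;      (fractal-segments 1 10 (cons 0 0))  yields nine connected segments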

Tuesday, April 10, 2007

HOAS vs. Nominal Logic

I don't yet have a deep understanding in the debate between higher-order abstract syntax and nominal logic, but this is an interesting summary from Karl Crary and Bob Harper in a short article in the Logic Column for SIGACT News:
“The real contrast between HOAS and nominal logic lies in their philosophy toward binding structure. Both HOAS and nominal logic provide a sophisticated treatment of binding structure not enjoyed by first-order accounts, but ones of very different character. The philosophy of LF is that binding and alpha-conversion are fundamental concepts, and thus builds them in at the lowest level. Consequently, name management in LF is ordinarily a non-issue. In contrast, nominal logic promotes name management as an object of study. It makes binding into a sophisticated artifact, rather than a primitive concept as in LF.”

Sunday, March 25, 2007

Bisimulation bleg

I have a question for anyone who happens to know: are bisimulations more common in process calculi such as the π-calculus because of τ transitions?

Lately I've been proving some simulation theorems for λ calculi, and none of them is a bisimulation; there are too many "administrative" reductions in the implementation semantics, so it's not possible to guarantee that every one-step reduction in the implementation semantics corresponds to a reduction sequence in the specification semantics.

Now, it's been a while since I looked at the π-calculus, but I seem to recall that you can hide a sequence of reductions inside a "black box" and call them a single transition τ. I suspect this facility allows you to pretend a sequence of transitions is a single transition in order to hide the details of an implementation semantics, effectively producing a completely indistinguishable simulation.

(This is admittedly all a little vague since my knowledge of bisimulations is scant.)

Tuesday, March 20, 2007

John Backus (1924 - 2007)

"The beauty of FORTRAN--and the reason it was an improvement over assembly language--is that it relieves the programmer of the obligation to make up names for intermediate results."
--Andrew Appel, Compiling with Continuations

My knowledge of the early history of programming languages is limited, but I believe Backus is considered to have invented the compiler. Slashdot unfairly describes Backus's Turing Award lecture "Can Programming be Liberated from the von Neumann Style?" as "apologizing" for the creation of FORTRAN, but I think the idea of functional programming was already there in FORTRAN. Once the idea of a FORmula TRANslator had been invented--i.e., the notion of a programming language with expressions--it was a natural step to consider languages without any statements at all.

This is to say nothing of the enormous debt of gratitude we owe him for BNF.

Update: Oh dear, I spoke too soon. Grace Hopper wrote the first compiler (thanks to the anonymous commenter for setting me straight). Apparently the FORTRAN compiler is considered one of the first successful optimizing compilers for a high-level language.

Saturday, March 10, 2007

Inside-out evaluation contexts

I've been a big fan of the "inside-out" (i.e., zipper) formulation of evaluation contexts, but almost everyone I talk to has a real problem with it. I think this is primarily due to lack of familiarity, but I do recognize that it requires you to stand on your head.

I'd thought for a while it was the only real way to specify evaluation contexts for a CEK machine semantics, but in fact it's really only an optimized representation for that approach. The outside-in definitions work just fine.

All you need in a CEK machine is to be able to match the innermost frame of a context by decomposing it into E[F]. With the inside-out formulation, frames are stacked up from inside out, so this is tantamount to looking at the head of a list. This is so efficient that the entire history of stack-based language implementations uses it. However, it's still possible to match E[F] with the outside-in formulation; it just requires recursively traversing the context until you find the innermost frame F.
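Here's a minimal sketch of that traversal (my own toy representation, not from any paper): contexts for a call-by-value lambda calculus, E ::= hole | (ar E e) | (fn v E), written outside-in as nested S-expressions with the hole innermost.
;; innermost-frame : context -> (list context frame)
;; Splits a context E into E'[F], returning (list E' F), by recurring until
;; the immediate sub-context is the hole. With the inside-out (zipper)
;; representation, F would simply be the first frame of the list.
(define (innermost-frame E)
  (let* ([tag (car E)]
         [sub (if (eq? tag 'ar) (cadr E) (caddr E))])
    (if (eq? sub 'hole)
        (list 'hole E)
        (let* ([split (innermost-frame sub)]
               [E* (car split)]
               [F (cadr split)])
          (list (if (eq? tag 'ar)
                    (list 'ar E* (caddr E))
                    (list 'fn (cadr E) E*))
                F)))))

;; e.g. (innermost-frame '(ar (fn (lambda (x) x) hole) y))
;; => ((ar hole y) (fn (lambda (x) x) hole))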

I think I'll probably stick with outside-in for most of my papers, since the inside-out formulation seems to generate so much angst.

Update: whoops! corrected a typo.

Progress report

It's Saturday morning and I've had a full night's sleep--the first in a while and probably the last for a while. Progress report:

I've submitted my final paper on Space-Efficient Gradual Typing for the draft proceedings of Trends in Functional Programming with Aaron Tomb and Cormac Flanagan.

Now I'm working on several papers--one or two for ICFP and one for GPCE--and a couple of talks, all with deadlines in April.

Mostly

It occurred to me that my taste in programming languages tends to run in the "mostly" direction, such as mostly functional programming and mostly hygienic macros.

Friday, March 09, 2007

Avoiding coinduction in soundness proofs

Mads Tofte's thesis uses coinduction to prove type soundness for a language with a store, because of the problem of cycles within a store: if the value in a reference cell can point to itself (such as a closure that refers to the cell that contains it), then it's not possible to find the type of a value by looking in the store. That is, type soundness involves determining the type of values, and store addresses are one kind of value. If finding the type of an address means looking up its value in the store and finding the type of that value, but that value might itself contain the same address, then typing values is not inductive.

But according to Mitch and Matthias, it's not actually necessary to use coinduction, as long as you set up the soundness hypothesis right.

The statement of subject reduction usually looks like this:
For any type environment Γ, store typing Σ, expression e, and store σ, if
  1. Γ;Σ ⊢ e : τ and
  2. Γ;Σ ⊢ σ and
  3. e,σ → e′,σ′
then ∃ Σ′ ⊇ Σ s.t.
  1. Γ;Σ′ ⊢ e′ : τ and
  2. Γ;Σ′ ⊢ σ′.
By quantifying over all stores in the hypothesis, you can get the type of an address by simply looking it up directly in the store typing, without passing through the actual value. So the definition of Γ;Σ ⊢ σ is
  dom(σ) = dom(Σ)        ∀a ∈ dom(σ) . Γ;Σ ⊢ σ(a) : Σ(a)
  ──────────────────────────────────────────────────────
                        Γ;Σ ⊢ σ
and finding the type of an address value is defined simply by looking up the address in the store typing:
     (a : τ) ∈ Σ
  ──────────────────
   Γ;Σ ⊢ a : τ ref


Update: More accurately, the hypothesis that quantifies over all stores is in the main soundness theorem:
If ∅;Σ ⊢ e : τ and ∅;Σ ⊢ σ then either
  1. e,σ diverges or
  2. e,σ →* v,σ′ and ∃ Σ′ s.t. ∅;Σ′ ⊢ v : τ and ∅;Σ′ ⊢ σ′.
The proof of this theorem is a straightforward induction over the length of the reduction sequence using progress and preservation. The interesting part is that it simply assumes from the outset that we have a store σ and a store typing Σ that agree. We can easily provide an empty initial store and store typing, which trivially agree, to get the actual proposition we want as a corollary, namely that evaluating the program in the initial store produces a well-typed result.

Monday, March 05, 2007

Inkscape

Creating figures in LaTeX is a pain, especially if you want to embed LaTeX in your figures. In the past I've used xfig and pstricks, which allows you to embed typeset text in a diagram, but aside from all the hoops you have to jump through (I had to massage the output files before feeding them to TeX), what you see in the GUI is uninterpreted LaTeX source. So I was always nudging things over a pixel at a time, recompiling, and checking out the PDF.

This weekend I tried Inkscape, which has a more modern UI than xfig. From what I've seen so far it's pretty nice. It comes with a plugin that allows you to input a string of LaTeX source, and it creates a shell, compiles the LaTeX source, and uses pstoedit to convert the output to SVG to import back into your diagram. It's a hack, and a little slow, but it's true WYSIWYG again.

To get it to work in cygwin I had to tweak things a bit -- one very nice thing about Inkscape is that its extensions are just python source files stuck in a directory with a naming convention, which means I could tweak the file and rerun the command (without even restarting Inkscape!) and it would reload the plugin.

I created a script /usr/local/bin/tex2svg:
#!/bin/sh

base_dir=`cygpath -wl "$1"`
base_dir=`cygpath -u "${base_dir}"`

latex_file=$base_dir/eq.tex
aux_file=$base_dir/eq.aux
log_file=$base_dir/eq.log
ps_file=$base_dir/eq.ps
dvi_file=$base_dir/eq.dvi
svg_file=$base_dir/eq.svg
out_file=$base_dir/eq.out

latex -output-directory="${base_dir}" -halt-on-error \
"${latex_file}" > "${out_file}" 2>&1
dvips -q -f -E -D 600 -y 5000 -o "${ps_file}" "${dvi_file}"

export LD_LIBRARY_PATH=/usr/lib/pstoedit:$LD_LIBRARY_PATH
pstoedit -f plot-svg -adt -ssp "${ps_file}" \
"${svg_file}" > "${out_file}" 2>&1
The file $inkscape/share/extensions/eqtexsvg.py is the plugin; I commented out the lines that call latex, dvips, and pstoedit, and replaced them with a single command:
os.system("bash --login -e tex2svg '" + base_dir + "'")
Python and LaTeX were having all sorts of problems communicating about file names with different conventions (Unix, DOS 8.3, Windows long), so I just basically punted all the work to a shell script, where I could have better control over the file names via cygpath.

To use it, go to Effects | Render | LaTeX Formula, type in a LaTeX formula, and it will magically appear (after a few external windows blip on and off). I don't understand scaling issues, especially when you're dealing with multiple file formats (.tex, .dvi, .ps, .svg), but for whatever reason I've found that sometimes I have to kill the -y 5000 in the dvips command to get the right size.

Also, when I tried saving the diagram directly to PDF, it ended up kind of blurry. I've had better luck saving it to EPS. That means that you can't use pdflatex anymore; instead you have to use dvips followed by ps2pdf. But then they seem to look okay.

Thursday, March 01, 2007

'a typing

There are a lot of typing paradigms running around these days without clear definitions. One of the difficulties is that any given phrase carries a number of different meanings, including (but not necessarily limited to) the phrase's apparent meaning from its constituent words, the phrase's common usage in any one of a number of different communities, the originator's intended meaning of the phrase, the original intended application domain of the paradigm, or the subsequent historical application domain of the paradigm.

Here's what I tend to mean by the following phrases. Disclaimer: I try to keep a balance between the aforementioned categories of meaning, although my community's common usage generally takes precedence. In any case there's a lot of vagueness and variability in the terminology, not to mention controversy. Read this at your own risk.

static typing: systems for reasoning about and enforcing inherent language invariants algorithmically and without involving program evaluation. In order to guarantee the invariants, the runtime semantics is only allowed to evaluate statically well-typed programs.

dynamic typing: systems for enforcing inherent language invariants incrementally during program evaluation via runtime checks.

soft typing: systems for detecting possible violations of imposed (i.e., not inherent) language invariants algorithmically and without involving program evaluation. The runtime semantics does not depend on programs being well-typed according to the soft typing algorithm.

optional typing: systems for detecting possible violations of imposed language invariants algorithmically and without involving program evaluation. The runtime semantics does not depend on programs being well-typed according to the optional typing algorithm. (Come to think of it, I can't come up with a distinction between soft typing and optional typing off-hand.)

pluggable typing: systems of optional typing that admit any number of different static checking algorithms.

hybrid typing: systems for enforcing inherent language invariants algorithmically both via static reasoning and dynamic checks. This paradigm was originally introduced as a way to combine the static guarantees of statically typed languages with the more expressive annotation languages of dynamic contract systems.

gradual typing: systems for enforcing inherent language invariants algorithmically both via static reasoning and dynamic checks. This paradigm was originally introduced to facilitate program evolution by permitting incremental annotation of programs.

strong/weak typing: your guess is as good as mine.

Monday, February 26, 2007

Hello, I'm iTunes.

Apple can make fun of Vista's pesky security dialogs, but do you know what they make me do every time a sub-point release of iTunes is available?
  1. iTunes asks me if I want to download the latest version.
  2. I click yes.
  3. Firefox opens to an ad-filled page with a form in the corner.
  4. I uncheck the first "do you want us to email ads to you?" question.
  5. I uncheck the second "do you want us to email ads to you?" question.
  6. Here's a secret: if you uncheck both those boxes, you don't have to fill in an email address.
  7. I click the "download" button.
  8. Firefox asks me what to do with the file.
  9. I click "Save file".
  10. I wait for a thirty-five megabyte download, even if it's just a small patch.
  11. I close the "Downloads" dialog box.
  12. I close Firefox.
  13. I close iTunes.
  14. I find the downloaded setup executable.
  15. I double-click the setup executable.
  16. I click "Next".
  17. I click "I accept".
  18. I click "Next".
  19. I click "Install".
  20. I wait several musicless minutes for it to completely overwrite the entire existing installation directory, even if it's just a small patch.
  21. Guess what, it still installs QuickTime on the side, as if I had any more interest in QuickTime than I have for the last 15 years.
  22. I click "Finish".
  23. iTunes starts (automatically! gee whiz!).
  24. The license agreement pops up again.
  25. I click "I Agree".
  26. I wait a minute for iTunes to "update the iTunes library".
  27. Here's the best part: as with every single previous release, iTunes still decides that maybe this time I really do want a QuickTime icon in the system tray.
  28. I right-click the QuickTime tray icon and choose "QuickTime Preferences" (er, I prefer not to know about QuickTime?)
  29. I click the "Advanced" tab.
  30. I uncheck the "Install QuickTime icon in system tray" option at the bottom.
  31. I click "OK".
And now I'm back to having iTunes working the way it did before. Only, as far as I can tell, the sole distinguishing feature of the new version is that it no longer asks me if I want to upgrade. For the moment.

Update: It looks like iTunes 7.0.2 installed a new "Apple Software Update" program on my computer. Does this mean my post was just in time to be obsolete? Here's hoping.

Sunday, February 25, 2007

Hygienic typedefs

ECMAScript Edition 4 will have ML/C-style typedefs to allow the creation of type synonyms. You can think of these like macros in that, intuitively, references to such types are substituted by their definitions. This is simpler than a general macro system, because they are only substituting types in type contexts, not expressions in expression contexts. But even though types are a simpler subset of the syntactic structure of ES4 programs, due to generics, they still have binding structure!

Consider this example:
class C.<T> {
    type A = T
    function f.<T>() {
        var x : A = ...
    }
}
If we want to substitute away the type definition, we might naively get:
class C.<T> {
    function f.<T>() {
        var x : T = ...
    }
}
But then x's type annotation is referring to the wrong T! This corresponds exactly to the notion from hygienic macros (commonly referred to as referential transparency) that references in a macro definition should retain their meaning from the macro definition environment, never the macro use environment.

In other words, the above program should really be resolved to:
class C.<T> {
    function f.<T'>() {
        var x : T = ...
    }
}
Now consider another example, this time with a parameterized typedef:
type A.<X> = function.<T>():[X,T]
class C.<T> {
    var a : A.<T> = ...
}
A naive substitution would produce
class C.<T> {
    var a : function.<T>():[T,T] = ...
}
This time the bug is that we've allowed a type definition to introduce a binding that captured a binding in the context of the use of the typedef. This is an example of the hygiene condition, which states that bindings introduced by a macro should not capture variables in the macro application site. The proper expansion of the typedef would instead be:
class C.<T> {
    var a : function.<T'>():[T,T'] = ...
}
These are exactly the same conditions that hygienic macro expanders must meet in order to prevent unintended variable capture; the only difference is that here we're concerned with type variables rather than program variables.
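The classic expression-level illustration of that second condition (the standard my-or example, written here in Scheme rather than ES4):
;; The temporary t introduced by the macro must not capture the t at the
;; use site, just as the typedef's function.<T> must not capture C's T.
(define-syntax my-or
  (syntax-rules ()
    [(_ e1 e2) (let ([t e1]) (if t t e2))]))

(let ([t 5])
  (my-or #f t))  ;; => 5 with hygienic expansion, not #f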

Sunday, February 18, 2007

A little goes a long way

I just completed an automated, machine-checked simulation proof. What was my proof assistant? Scheme! The proof relies on some reduction relations and an inductively defined invariant relation. The former can be defined in PLT Redex with just a little cleverness for performing symbolic reduction; the latter can be defined with good, old-fashioned recursion.

Granted, there are a number of places where I could have gone wrong. For example, I have no meta-theorems about the correctness of the representation or the exhaustiveness of the manually directed case analysis (a dozen cases in as many lines of code). But implementing the proof on the computer helped me debug a number of minor issues in my theorem, and it did all the heavy lifting for me (particularly the multi-step reductions--thanks Robby and Jacob for PLT Redex!).

The moral? A little goes a long way, as Wand and Sullivan wrote. For a simple proof involving inductive invariants and lots of rote calculation, there's no need for complicated meta-logical frameworks. Implementing the proof in a general-purpose programming language gives you a higher degree of assurance for a reasonable implementation cost.

Next, if I do more of these simulation proofs, I might start abstracting out the common patterns into a library to work in conjunction with PLT Redex. It would essentially be a simple proof assistant for simulations.

Thursday, February 08, 2007

Lazy substitutions

Substitution is expensive. That's why we tend to use environments in efficient implementations of programming languages. They encode all the information we need to do substitutions, but we wait until as late as possible to perform the substitutions. The complication, however, arises when expressions containing free variables (i.e., variables that haven't yet been substituted) travel around to disparate parts of the program. This happens in the lambda calculus when functions get passed or returned as values, and that's why we have to introduce closures. Closures allow expressions with free variables to travel around with enough information to continue performing their lazy substitutions.

In macro systems, we have exactly the same problem: the substitutions we need to perform (among others) are fresh names for bound variables. We need to substitute these fresh names throughout the bodies of expressions, but it's expensive to revisit syntax. So we close expressions over their substitutions and rename lazily.

But in macro systems, these kinds of closures are even more complicated than in the lambda calculus, because each macro invocation produces more code and mixes it together with other code. What we want is for the renamings to occur only on the code as it appears at the particular time of the current macro invocation. So in Dybvig, Hieb and Bruggeman's macro-expansion algorithm, the solution is to produce a fresh "mark" or "time-stamp" distinguishing code that resulted from a particular macro invocation. This way, renamings are applied only to code that existed at the time that the eager substitution would have been applied. When different code gets mixed together from different macro invocations, these marks keep track of what renamings apply to what pieces of syntax.

There are two salient features required of these marks: 1) every distinct invocation of a macro should have a distinct mark associated with it, and 2) only new code produced by a macro invocation should be marked as new, not code that it received as input. The first requirement is satisfied simply by generating fresh marks on each invocation. The second is satisfied by giving the mark algebra a "toggling" feature: re-applying the same mark twice to a piece of syntax erases the mark. The expander marks both the input and output of each macro invocation, which has the effect of "staining" new syntax generated by the invocation.
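The mark algebra itself is tiny. As a toy sketch (my own, assuming marks are kept as a list on each piece of syntax, outermost first, which is roughly how the algorithm is usually presented):
;; add-mark : mark (listof mark) -> (listof mark)
;; Pushing a mark cancels it if it is already outermost, so marking both the
;; input and the output of a macro invocation leaves pre-existing syntax
;; unmarked and stains only the syntax the macro created.
(define (add-mark m marks)
  (if (and (pair? marks) (eq? (car marks) m))
      (cdr marks)
      (cons m marks)))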

Symbolic evaluation with PLT Redex

PLT Redex is a great tool for testing and debugging a small-step operational semantics (particularly a reduction semantics using Felleisen-style evaluation contexts). I've just learned it's also useful for doing symbolic evaluation, where a program term may contain symbolic placeholders for some of its subterms. This is important when doing proofs, where you want to know how a fragment of a program is going to reduce for an entire class of terms.

For example, in a simulation proof I'm writing, I want to reason about how a program fragment would reduce in any initial continuation K0. So I extended the grammar of continuations to include the form
K ::= ... | (sym variable)
i.e., an s-expression containing the symbol 'sym followed by any variable. Then I could represent the abstract continuation K0 as '(sym K0). That's all it took, and Redex happily reduced my program.

Of course, once the reduction sequence gets to the point where it's returning a value to the continuation K0, it gets stuck, because there are no reduction rules that operate on these symbolic terms. But that's fine, because I'll never need to do such concrete reasoning about abstract terms.

Saturday, January 27, 2007

Universal vs. existential properties of programs

There's an age-old debate about whether abstraction is an absolute necessity or an obstruction to programmer efficiency. It shows up in the static-vs.-dynamic typing war, the abstract datatypes-vs.-concrete representations war, etc. I've always suspected we'd be worse off if we let one side or the other win. Of course I don't claim to know how to resolve it, but I wonder if it would shed light on the issue to think about existential vs. universal properties of programs.

When you're debugging or testing code ("is it possible for this list to be empty?"), or when you're exploring an API at the REPL ("what happens when I pass the result of Foo to Bar?"), you're often looking to observe a single program trace that serves as a witness to an existential property. By contrast, when you're building robust systems, you often want to assert universal propositions about the code ("this function will only ever produce a non-negative integer").

When you're building a system, you want your invariants to be universally true. This is one place where enforcement of abstractions (as with static type systems) is important. Letting the software enforce the abstractions automates the process of verifying these properties.

When you're inspecting a system, though, you're often looking for particular, concrete examples. In my experience, people are better at going from the particular to the general rather than vice versa. I don't think we tend to think well in abstractions; at least not at first. That's why verifiers that work by searching for counterexamples produce such good errors: instead of directly proving ∀x.P(x), they try to prove ¬∃x.¬P(x), and if they fail, they give you a concrete x. Learning API's, interacting with the REPL, and writing test cases are all examples of this mode of programming where you're more interested in concrete paths through a program rather than universal properties.

How would we design software that facilitates both these modes of reasoning about programs?

Friday, January 19, 2007

Non-native shift/reset with exceptions, revisited

Filinski's implementation of shift and reset in terms of undelimited continuations and a mutable cell doesn't interact well with exceptions out of the box, but it turns out we can get the right behavior with only a slight modification. (Ryan, Jacob and I came up with this after John Reppy suggested it to me as a challenge that, if possible, would be far simpler to implement than a native implementation in SML/NJ.)

The problem is that, as Shan and Kiselyov point out, a delimited continuation should only close over the exception handlers (and dynamic parameters in general) bound up to its prompt, and when invoked, it should be invoked in the context of the handlers of the current global continuation extended by the handlers closed over by the delimited continuation. This really is the expected behavior; an exception raised in a delimited continuation should either be handled by an appropriate handler installed within the delimited context, or if there isn't one, by the global context in which the delimited continuation is being invoked.

For example, both of the following programs should evaluate to 100, since the exception inside the delimited continuation is raised within a new context that installs a handler for it:
;; handler installed in body of shift
(reset (lambda ()
         (+ (shift (lambda (k)
                     (with-handlers ([symbol? (lambda (exn) 100)])
                       (k 1))))
            (raise 'boo))))

;; handler installed in new context
(let ([k (reset (lambda ()
                  (shift (lambda (k) k))
                  (raise 'boo)))])
  (with-handlers ([symbol? (lambda (exn) 100)])
    (k 1)))
With Filinski's implementation applied naively to a language with exceptions, however, neither example manages to catch the exception, because the delimited continuations close over the wrong exception handlers. But we can get the right behavior with only a slight modification to the implementation. The trick is to modify the protocol between the captured Scheme continuations so that an exception can be passed along from one context to another to be re-raised.

As before, we store the global continuation (or "meta-continuation") in the single mutable cell:
(define meta-k (lambda (x) (raise 'missing-reset)))
But we'll change the contract of the meta-continuation to accept two different variants of results, those representing a normal return from a delimited continuation, and those representing an unhandled exception that should be re-raised in the context of the meta-continuation's handlers:
(define-struct succeeded (result))
(define-struct failed (exception))
(This is the familiar pattern of representing different return paths in the exception monad.)

Now, reset needs to be modified to ensure that evaluation of the body of the prompt is wrapped in a handler, to prevent uncaught exceptions from escaping to the original global continuation. The wrapped result is sent to the meta-continuation, and the context of the call/cc performs the unwrapping, either returning the ordinary value or re-raising the exception. (The new parts are the handler around the body and the unwrapping of the result.)
(define (reset thunk)
  (let ([result
         (call/cc (lambda (k)
                    (let ([meta-k0 meta-k])
                      (set! meta-k (lambda (r)
                                     (set! meta-k meta-k0)
                                     (k r)))
                      (let ([v (with-handlers
                                   ([(lambda (exn) #t) make-failed])
                                 (make-succeeded (thunk)))])
                        (meta-k v)))))])
    (if (succeeded? result)
        (succeeded-result result)
        (raise (failed-exception result)))))
The only change to shift is that again, where it invokes the meta-continuation, it needs to wrap the result in this variant type. Because the meta-continuation is called here to represent the case where evaluation of the delimited continuation terminates successfully, we wrap the result with make-succeeded.
(define (shift f)
  (call/cc (lambda (k)
             (let ([v (f (lambda (v)
                           (reset (lambda ()
                                    (k v)))))])
               (meta-k (make-succeeded v))))))
That's it! Trying out the test cases above, we get 100 and 100, as we should.

This is also well-typed and implementable in SML, for example as a modification to Oleg's code.

Saturday, January 13, 2007

POPL 2007 Program

I've found at least some version of all but two of the papers being presented at POPL next week. Here's a program with links to the papers I found.