Saturday, August 27, 2005
Here's something that used to bother me when reading Winskel and learning denotational semantics.
We start with a language like the lambda calculus, and everyone looks at λn.(n+1) and knows that this represents the function that takes an integer and increments it. But that's not enough for us: we have to build up this big, complicated mathematical machinery to determine precisely what this function means; we treat it as dumb syntax and give it a translation to some domain that models it.
But when we're defining the model, we allow ourselves to represent meta-functions simply using the lambda calculus, and suddenly λn.(n+1) just means the set-theoretic function that maps integers to their successors. Why do we have to work so hard to give a precise model for our object language, but when we work with the meta-language, we can just say "you know what I mean"?
The reason is simply that the object language is generally a complex language made to look like the lambda calculus, but with other features and effects (recursion and non-termination, mutation, etc.) that can't be modelled in obvious ways and require a precise translation. By contrast, when we work in the model, if it consists of ordinary set-theoretic functions, then we can use a simple meta-language based on the lambda calculus to represent those functions. This meta-lambda calculus is just a convenient shorthand for elements of the model's domains. As long as there's an obvious translation of such notation into sets, a term in the meta-lambda calculus such as λn.(n+1) can be treated as nothing more than convenient notation, no different from { (n, n+1) | n ∈ N }.
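To make this concrete, here's the kind of semantic clause I mean, sketched for a pure language modelled in a single set-theoretic domain D, where the environment ρ maps variables to elements of D:

⟦λx.e⟧ρ = { (d, ⟦e⟧ρ[x ↦ d]) | d ∈ D }

The λ on the left is object-language syntax being translated; any λ we write on the right while defining the model is just shorthand for a set of pairs like this one.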
Wednesday, August 24, 2005
Pointed domains
In a denotational semantics for a language with non-termination, you use pointed domains, i.e., CPOs with a bottom element. But the usual constructions on compound domains such as pairs and sums leave you with multiple bottom elements: a disjoint sum, for instance, contains the bottom elements of both summands, so it has no single least element. Because the existence of a bottom element is what the fixed-point theorem for continuous functions relies on, it's critical that every domain have a single bottom element. That's what the "coalesced sum" and "smash product" constructions are for: they identify the bottom elements of the component domains as a single bottom element in the combined domain. [Abramsky and Jung 94, 3.2]
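Concretely, for pointed domains D and E with bottom elements ⊥_D and ⊥_E, the coalesced sum can be sketched as:

D ⊕ E = { (0, d) | d ∈ D, d ≠ ⊥_D } ∪ { (1, e) | e ∈ E, e ≠ ⊥_E } ∪ { ⊥ }

where the new ⊥ lies below everything and stands in for both ⊥_D and ⊥_E. The smash product D ⊗ E plays the same game with pairs, collapsing every pair that has a ⊥ component into the single new ⊥.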
Trust me
I don't understand how you can get away with leaving out proofs in a publication. Given the constraints on the length of a paper, I could imagine providing external references to more complete proofs, say in a companion technical report. It's risky enough to trust proofs that are provided, since even those are error-prone. But how can you believe a theorem stated without even an indication of a proof? Proof is kind of the point of math.
I'm reading a couple of papers on FreshML and FreshO'Caml, and they're great work. But the ICFP paper from 2003 has a number of unproved theorems, and then they posted an erratum stating that one of their theorems is flawed, but they haven't bothered fixing it because they have a new paper that takes a different approach. This kind of thing makes me generally nervous about the reliability of research and publications.
Tuesday, August 23, 2005
Reasonable Defaults
I went to add a mail filter rule to Apple Mail in Tiger, and the new rule came up with a default condition: "Any Recipient" matching the recipient of the message I was currently looking at. Beautiful! It gets better -- when I added another condition and changed it to a "From" condition, it automatically changed the condition's initial value to match the sender of the current message. I never had to type in a single value, switch window panes, or copy and paste.
This is something library and language designers could learn from HCI--making the common cases easy. (The Perl-mongers know it.) It's up there with progressive disclosure.
Friday, August 05, 2005
alphaCaml
I've written my first Cαml program: a little lambda-calculus term manipulation language and interpreter. Since it's a language for manipulating terms, there's an object language (the lambda calculus) and a meta-language (the term manipulation language). Expressions in the meta-language evaluate to lambda calculus terms; for example, cps e evaluates the expression e to get a term and then converts the term to continuation-passing style.
Here's the data definition and binding specification of the object language datatype:

sort var

type term =
  | Var of atom var
  | Lam of < lambda >
  | App of term * term

type lambda binds var = atom var * inner term

The declaration sort var declares a disjoint universe of atoms that will be used as lambda calculus variables. The type lambda is a binding datatype: it binds atoms of sort var, which are then in scope in the right-hand conjunct of the product type (the keyword inner indicates that all the bindings in lambda are in scope in the term). This binding specification automatically generates a bunch of Ocaml datatype and library definitions, including:
type term =
  | Var of var
  | Lam of opaque_lambda
  | App of term * term
and var = Var.Atom.t
and lambda = var * term
and opaque_lambda

val create_lambda : lambda → opaque_lambda
val open_lambda : opaque_lambda → lambda

By making the contents of the Lam constructor opaque, Cαml forces programmers to use open_lambda whenever they need to get at its contents, which has the side effect of freshening all of its bound names. This relieves the programmer of having to avoid capture explicitly, since bound names are kept unique.
Now here's a term manipulation function written in Cαml. (In fact, this is really just an Ocaml program, since the only extension to Ocaml is the binding specification language. Beyond that, Cαml is just a library.) There's no need to implement capture-avoiding substitution, because every time an abstraction is opened, its bound variable is automatically freshened by open_lambda.

let make_lambda : var → term → term = fun v body →
  Lam (create_lambda (v, body))

(* Substitute arg for the variable x throughout t. No explicit capture
   avoidance is needed: open_lambda freshens each binder as we descend. *)
let rec beta : term → var → term → term = fun t x arg →
  match t with
  | Var x' when Var.Atom.equal x x' → arg
  | Var _ → t
  | App (l, r) → App (beta l x arg, beta r x arg)
  | Lam lam →
      let (x', body) = open_lambda lam in
      make_lambda x' (beta body x arg)

(* One step of call-by-value reduction; raises No_redex when no
   reduction is possible. *)
let rec reduce : term → term = function
  | App ((Lam lam), rand) when isValue rand →
      let (x, body) = open_lambda lam in
      beta body x rand
  | App (rator, rand) when (isValue rator) && (isValue rand) →
      raise No_redex
  | App (rator, rand) when isValue rator →
      App (rator, reduce rand)
  | App (rator, rand) →
      App (reduce rator, rand)
  | _ → raise No_redex
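The interpreter relies on two auxiliary definitions that don't appear above. Here's a minimal sketch of what they presumably look like -- my own reconstruction, not original code:

exception No_redex

(* In call-by-value lambda calculus, variables and abstractions are
   values; only applications can be reduced further. *)
let isValue : term → bool = function
  | App (_, _) → false
  | _ → true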
Tuesday, August 02, 2005
Stupid keywords
I spent like 20 minutes trying to figure out what I was doing wrong with the syntax of pattern matching in Ocaml, only to discover the problem wasn't with pattern matching at all: I was trying to bind the name val, which is a keyword in ML:

let (key, val) = List.find( ... )

Learning a new language is such a pain. I thought I'd grown out of stupid problems like this. You almost can't blame developers for resisting learning new technologies.
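For what it's worth, the fix is just to bind a different name. A self-contained sketch (the association list and the name value are mine, purely for illustration):

(* val is reserved in Ocaml -- it introduces items in module signatures --
   so bind any other name instead. *)
let assoc = [("x", 1); ("y", 2)]
let (key, value) = List.find (fun (k, _) → k = "x") assoc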
Subtleties of getElementsBySelector
I should point out that my implementation of getElementsBySelector corrects a couple of subtle bugs in the original version. For one thing, the original version searches for nodes of the form #foo by using document.getElementById; this finds a matching node anywhere in the entire document, even when only a subtree is supposed to be searched. Since IDs are not guaranteed to be unique in practice, this can produce the wrong node.
An even more subtle bug in the original version is that multiple copies of the same node may appear in the result array. For example, the selector '.foo .bar .mumble', when applied to the following tree:
<div class="foo">
  <div class="bar">
    <div class="bar">
      <div class="mumble">mumble</div>
    </div>
  </div>
</div>

...produces two copies of the mumble node, because it finds it by following two different paths. By contrast, my implementation visits each node in the entire document tree exactly once, so it is guaranteed to produce either 0 or 1 copy of each node in the result array.