If the lambda calculus is a minimalist assembly language for modeling computation, then we can look at a lambda program as a mathematical black box with two possible outputs: halt and loop. These "outputs" of course correspond to whether a reduction in the abstract machine terminates, but we could just as well give them arbitrary labels, say, true and false.
When you encode a datatype and a computation in the lambda calculus, you might want to check that your computation gives the correct result, not just whether it halts. But the theorem will still be stated as a truth-valued proposition: either the computation got the right result, or it didn't. So assuming the encoding includes operations that can compare two results, you can always wrap the computation in an observer that drives the "lambda machine" to produce its true output (halt) if the computation produced the right result, and its false output (loop forever) if it produced anything else.
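In Haskell as a stand-in metalanguage, the observer idea looks roughly like this (a sketch only; the names halt, loop, and observe are mine, and Haskell's built-in equality plays the role of the encoded comparison operation):

    -- 'halt' is any terminating value; 'loop' genuinely diverges, standing in
    -- for a non-terminating reduction (the machine's "false" output).
    halt :: ()
    halt = ()

    loop :: ()
    loop = go 0
      where
        go :: Integer -> ()
        go n = go (n + 1)

    -- Wrap a computation's result in an observer: the "machine" halts if the
    -- computation produced the expected result and loops forever otherwise.
    observe :: Eq a => a -> a -> ()
    observe expected actual
      | actual == expected = halt
      | otherwise          = loop

Evaluating observe 3 (1 + 2) terminates, while observe 3 (1 + 1) runs forever; proving which of the two happens is exactly the kind of observation meant here.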
For example, say we want to show that two different implementations of a function, f and f′, always produce the same result. We could do this directly by comparing their results, but this involves reasoning about the encoded result values. Since the datatype already has operations that simulate the inspection of its values, this would in a sense be a duplication of effort. Instead, we can use the observer operations--an equality test, for example--to compare the results to an expected value and then either halt or loop.
Thus if there is a context in which f and f′ are not equivalent, that is, in which they produce different results, then there is a context in which f causes the machine to halt and f′ causes the machine to loop. The statement of contextual equivalence is usually given as the contrapositive: if there are no such contexts, that is, in all contexts they either both halt or both loop (they "co-terminate"), then f and f′ are equivalent.
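To make the f and f′ story concrete, here is a hedged sketch in the same vein (the two definitions and the name context are my own toy examples, built on the observe sketch above):

    -- Two candidate implementations of the same function (toy examples).
    f, f' :: Integer -> Integer
    f  n = n + n    -- add n to itself
    f' n = 2 * n    -- multiply by two

    -- A context that plugs an implementation into the observer: it always
    -- halts on f itself, and at input n it halts on f' exactly when f'
    -- agrees with f there; otherwise it loops.
    context :: Integer -> (Integer -> Integer) -> ()
    context n g = observe (f n) (g n)

If f and f′ disagree at some input n, then context n halts on f and loops on f′, which is exactly the distinguishing context described above.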
So these two outputs from the lambda machine are sufficient to formulate any proposition about expressions, so long as it depends only on properties that are observable within the lambda calculus.
Friday, May 13, 2005
3 comments:
Interesting. Of course this is the bog-standard trick for reducing particular problems to the halting problem in the context of Turing machines, but for some reason I'd never thought about it in the context of the lambda calculus. Neat!
The odd thing about your choice of black box is that you can't build larger black boxes out of smaller ones, because the terms themselves cannot observe the "false" output (loop).
So the black box is something that you always have to put at the top-level, wrapped around the underlying composition of lambda expressions.
To me, that's not as useful as something that I can build on top of. But I'm probably mixed up and thinking about this as an engineer, and not as a semanticist.
Well, maybe in that sense it's a misleading metaphor. But yes, I meant that it's a mathematical black box, where the observations of the machine's output are made by proving the machine halts or loops, not made within the computational framework itself.
So the black box is something that you always have to put at the top-level, wrapped around the underlying composition of lambda expressions.
I think the natural expression of this generalization is in a process calculus. The lambda calculus is good for reasoning about computation, but when you want to reason about observing effects, process calculi are natural for expressing the relevant notions (see the sketch after the list):
1. observation (reductions are tagged with observable labels)
2. information hiding (transition labels can be masked)
3. composition (processes can be combined in very general ways)
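To show roughly where those three notions live in the syntax, here is a minimal CCS-style datatype (a sketch only; the type and constructor names are mine, not from any standard library):

    -- Observable actions: an input or an output on a named channel.
    data Label = In String | Out String
      deriving (Eq, Show)

    -- A minimal CCS-like process syntax.
    data Process
      = Nil                        -- the inert process
      | Act Label Process          -- observation: a transition tagged with a label
      | Par Process Process        -- composition: run two processes side by side
      | Restrict String Process    -- information hiding: mask transitions on a name
      deriving Show

Process equivalences are then stated in terms of which labels an outside observer can see, rather than in terms of termination alone.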