Monday, August 17, 2009
Quote of the day
"What's surprising to me is that this language ever managed to achieve widespread use - but I guess it's just another example of how you can break a whole bunch of precious rules and the sky doesn't necessarily fall in. Software is full of people declaiming their 'thou shalt not' lists, and right across the street there's another bunch of people breaking those very rules quite profitably."
-- Daniel Earwicker
Tuesday, August 11, 2009
Monday, August 10, 2009
Call for Participation: Scheme Workshop 2009
[Note: only one day left to register! --Dave]
2009 Workshop on Scheme and Functional Programming
Co-Located with the Symposium in Honor of Mitchell Wand
August 22, 2009
Boston, Massachusetts, USA
http://www.schemeworkshop.org/2009
CALL FOR PARTICIPATION
To the delight of all and sundry, the 2009 Scheme and Functional Programming Workshop will be held on August 22nd at Northeastern University, and it is a signal honor for me to be able to invite YOU to the WORLD'S FOREMOST WORKSHOP on the marvelous Scheme language, and to present a program PACKED with contributions from familiar faces and new ones, certain to amaze, delight, and edify. Lend us your ears, and we will widen the space between them.
- John Clements
IMPORTANT DATES
August 11, 2009 - Registration deadline
August 22, 2009 - Workshop on Scheme and Functional Programming
August 23-24, 2009 - Symposium in Honor of Mitchell Wand
VENUE
Northeastern University
Boston Massachusetts
Curry Student Center Ballroom (Building 50)
346 Huntington Ave
Boston, MA 02115
ACCOMMODATION
A limited block of hotel rooms has been reserved for participants of the Scheme Workshop and/or the Mitchell Wand Symposium at hotels in Cambridge and Boston. See the workshop web site for more information, and please note that some of these special rates have already expired.
REGISTRATION
The registration fee will be $40 to help cover the operating costs and lunch accommodations. Please register by August 11, 2009 so that we will have an accurate head count. To register, please send an email to aoeuswreg@brinckerhoff.org with your name and any dietary restrictions for lunch.
PROGRAM COMMITTEE
- John Clements (Cal Poly State University; organizer & chair)
- Dominique Boucher (Nu Echo)
- Abdulaziz Ghuloum (Indiana University)
- David Herman (Northeastern University)
- Shriram Krishnamurthi (Brown University)
- Matthew Might (University of Utah)
- David Van Horn (Northeastern University)
PRELIMINARY PROGRAM
Invited: If programming is like math, why don't math teachers teach programming?
Emmanuel Schanzer
Invited Talk on Future Directions for the Scheme Language
The Newly Elected Scheme Language Steering Committee
The Scribble Reader: An Alternative to S-expressions for Textual Content
Eli Barzilay
World With Web: A compiler from world applications to JavaScript
Remzi Emre Başar, Caner Derici, Çağdaş Şenol
Scalable Garbage Collection with Guaranteed MMU
William D Clinger, Felix S. Klock II
Distributed Software Transactional Memory
Anthony Cowley
Sequence Traces for Object-Oriented Executions
Carl Eastlund, Matthias Felleisen
Keyword and Optional Arguments in PLT Scheme
Matthew Flatt, Eli Barzilay
Fixing Letrec (reloaded)
Abdulaziz Ghuloum, R. Kent Dybvig
Descot: Distributed Code Repository Framework
Aaron W. Hsu
A pattern-matcher for miniKanren -or- How to get into trouble with CPS macros
Andrew W. Keep, Michael D. Adams, Lindsey Kuper, William E. Byrd, Daniel P. Friedman
Randomized Testing in PLT Redex
Casey Klein, Robert Bruce Findler
Screen-Replay: A Session Recording and Analysis Tool for DrScheme
Mehmet Fatih Köksal, Remzi Emre Başar, Suzan Üsküdarlı
Interprocedural Dependence Analysis of Higher-Order Programs via Stack Reachability
Matthew Might, Tarun Prabhu
Get stuffed: Tightly packed abstract protocols in Scheme
John Moore
Higher-Order Aspects in Order
Eric Tanter
Peter J Landin (1930-2009)
Olivier Danvy
Thursday, August 06, 2009
Selectively disabling tail recursion

the stack trace you get back has very little information in it.

(define (fact-iter n acc)
  (if (zero? n)
      (error 'fact "stop!")
      (fact-iter (sub1 n) (* n acc))))
(fact-iter 3 1)
One argument you'll sometimes hear from people like me is that in a properly tail calling language, you can always turn a tail position into a non-tail-position, whereas the reverse is impossible. But in practice, when you want broadly better diagnostics, manually converting lots of individual function calls throughout a program is too hard to be practical, and you then have to go and undo it afterward.
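Concretely, the conversion just means routing a call through (an inlining-proof version of) the identity function. A minimal sketch, using a variant of fact-iter that returns its accumulator instead of raising an error:

```scheme
#lang racket

;; Tail-calling version: each recursive call replaces the current
;; frame, so no call history accumulates on the stack.
(define (fact-iter n acc)
  (if (zero? n)
      acc
      (fact-iter (sub1 n) (* n acc))))

;; Wrapping the recursive call in the identity function moves it out
;; of tail position: the pending `id` application forces one stack
;; frame per call.
(define (id x) x)

(define (fact-iter/traced n acc)
  (if (zero? n)
      acc
      (id (fact-iter/traced (sub1 n) (* n acc)))))

;; Both compute the same result; only the second leaves a frame per
;; recursive call for a stack trace to report.
(fact-iter 3 1)        ; ⇒ 6
(fact-iter/traced 3 1) ; ⇒ 6
```

Doing this by hand at every interesting call site is exactly the impracticality described above.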
So in the spirit of linguistic tools for diagnostics, I've discovered a dirt-simple trick for improving the information in stack traces while debugging.
PLT Scheme lets you hijack the normal meaning of function application by rebinding the #%app macro. So I defined a utility called trace-app that wraps its function application in (roughly) the identity function, forcing the given function call out of tail position. Then all you have to do is add to the top of a module:

(require (planet dherman/tail:4))
(define-syntax #%app trace-app)

and suddenly, all function applications that occur syntactically within the current module body are non-tail calls, and voilà:
Update: I've posted a new version of the package, which exports trace-app in a slightly different way. The initial version had a hazard: client modules that used

(provide (all-defined-out))

would inadvertently re-provide #%app, which would then disable tail recursion for their client modules.
The new way to use it is

(require (rename-in (planet dherman/tail:5) [trace-app #%app]))

or

(require (only-in (planet dherman/tail:5) [trace-app #%app]))
Labels:
debugging,
diagnostics,
macros,
proper tail recursion,
stack traces
Wednesday, July 22, 2009
Linguistic tools for diagnostics
I've written before about how the stack gets misunderstood as a history thanks to non-tail-recursive languages. Of course, stack traces are indispensable: they're essential for debugging, and they're also useful for reporting crash information from live applications.
But this conflates two very different language requirements in one feature: the need for nested (unbounded, if your language isn't broken) function application, and the need for diagnostics of dynamic errors.
The first is what continuations and stacks are all about, of course: function calls can't be nested in compound expressions without the language runtime keeping track of the work it has left to do. But the second is an independent need: there's all sorts of diagnostic information that's potentially useful, and the stack just happens to be one source of information that's already there.
Now, as it turns out, it's a pretty useful source of information. And if you happen not to have proper tail recursion, it's even more useful, since it provides a path through the function call graph. But I've been finding stack traces in Scheme less informative, because with proper tail calls, there's less function-call history in the continuation.
I recognize that sometimes the "complexity budget" in language design necessitates using one feature to satisfy multiple requirements. But I'd like to see more investigation of linguistic diagnostics designed independently of existing language features. For example, it might be useful to provide configurable tracing facilities that record program history during runtime, where the tuning of how much gets recorded is completely independent of the space semantics of the evaluator. So you have a properly tail recursive semantics with a flexible tracing facility grafted on top.
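As a toy illustration of that separation (every name here is hypothetical, not an existing PLT facility), a tracer can record calls into a bounded buffer of its own, so how much history gets kept is tuned entirely independently of the evaluator's properly tail-recursive stack:

```scheme
#lang racket

;; Hypothetical sketch: keep the last N calls in a bounded buffer.
(define trace-limit (make-parameter 16))
(define trace-buffer '())

(define (record-call! who args)
  ;; Prepend the new entry, then truncate to the configured limit.
  (set! trace-buffer
        (take (cons (cons who args) trace-buffer)
              (min (trace-limit) (add1 (length trace-buffer))))))

(define (traced who f)
  (lambda args
    (record-call! who args)
    (apply f args)))

;; A deep tail-recursive loop runs in constant stack space, yet we
;; still get a bounded history of its most recent calls:
(define countdown
  (traced 'countdown
          (lambda (n) (if (zero? n) 'done (countdown (sub1 n))))))
(countdown 100000)
(length trace-buffer) ; ⇒ 16
```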
What classifies a language feature as "purely diagnostic" might be that it is designed not to change the observable behavior of the program, but merely to report some additional information. Contracts-as-projections is an example.
Update: rephrased my rookie characterization of continuations.
Labels:
diagnostics,
proper tail recursion,
stack traces,
tail calls,
tracing
Monday, July 13, 2009
Contractiveness not required for regularity
Yesterday, when I wrote about ensuring termination of subtyping for equirecursive types, I said I thought contractiveness ensures that the (potentially) infinite tree corresponding to a recursive type is regular. That was wrong. Contractiveness just ensures that you can't represent types that don't have a corresponding tree, such as μA.A. The fact that the tree is regular comes from the fact that the syntactic representation is finite and substitution doesn't add anything new.
Sunday, July 12, 2009
Regular => Subtree set finiteness => Termination of memoized algorithm
The termination argument behind the TAPL algorithm for subtyping of equirecursive types is not as scary as it sounds (or as it sounded to me, anyway). It's as simple as this: even though the subtyping rules generate bigger types when they substitute a recursive type into its own body, the fact that the types are regular means that there are only a finite number of distinct subterms in the infinite type tree--that's the definition of regular trees. So as long as you keep a cache of types you've looked at before and never have to look at the same pair of types twice, you can't do an infinite number of checks.
If I'm not mistaken, the syntactic restriction that types are contractive, i.e., recursive type variables have to appear under constructors, is sufficient to guarantee that the corresponding type trees are regular.
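The whole argument fits in a few lines of code. Here is an illustrative sketch (my own toy formulation, not the TAPL code) for a little language with Top, Int, and function types; the `seen` cache of already-assumed pairs is exactly what turns regularity into termination:

```scheme
#lang racket

;; Types: 'Top, 'Int, (-> S T), and (mu A T), where (mu A T)
;; unfolds by substituting the whole mu-type for A in its body.
(define (subst ty a new)
  (match ty
    [(== a) new]
    [(list '-> s t) (list '-> (subst s a new) (subst t a new))]
    [(list 'mu b body) (if (eq? a b) ty (list 'mu b (subst body a new)))]
    [_ ty]))

(define (unfold ty)
  (match ty
    [(list 'mu a body) (subst body a ty)]
    [_ ty]))

;; seen: pairs of types already assumed related. Regularity means
;; only finitely many distinct pairs can ever arise, so the cache
;; bounds the total number of checks.
(define (subtype? s t [seen '()])
  (if (member (cons s t) seen)
      #t
      (let ([seen* (cons (cons s t) seen)])
        (match* ((unfold s) (unfold t))
          [(_ 'Top) #t]
          [('Int 'Int) #t]
          [((list '-> s1 s2) (list '-> t1 t2))
           (and (subtype? t1 s1 seen*)   ; contravariant domain
                (subtype? s2 t2 seen*))] ; covariant codomain
          [(_ _) #f]))))

;; Without the cache this check would unfold forever; with it, the
;; second visit to the same pair of types is an immediate hit:
(subtype? '(mu A (-> Top A)) '(mu B (-> Int B))) ; ⇒ #t
```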
Labels:
equirecursive types,
recursive types,
TAPL,
termination
Thursday, July 09, 2009
Call for Participation: Symposium in Honor of Mitchell Wand
Symposium in Honor of Mitchell Wand
In Cooperation with ACM SIGPLAN
Coordinated with Scheme Workshop 2009
August 23-24, 2009
Boston, Massachusetts, USA
http://www.ccs.neu.edu/events/wand-symposium
CALL FOR PARTICIPATION
IMPORTANT DATES
August 1, 2009 - Registration deadline
August 22, 2009 - Scheme Workshop
August 23-24, 2009 - Symposium in Honor of Mitchell Wand
VENUE
Northeastern University
346 Huntington Ave
Boston, MA 02115 USA
ACCOMMODATION
A limited block of hotel rooms will be reserved for participants of the Symposium and/or the Scheme Workshop at hotels in Boston and Cambridge. More information will be available soon; please check back on the event web site.
REGISTRATION
Registration is free. Please register by August 1, 2009 so that we will have an accurate head count. To register, please send an email to mitchfest-registration@ccs.neu.edu with your name and any dietary restrictions for lunch.
SCOPE
Northeastern University is hosting a special Symposium in celebration of Dr. Mitchell Wand's 60th birthday and honoring his pioneering work in the field of programming languages. For over 30 years Mitch has made important contributions to many areas of programming languages, including semantics, continuations, type theory, hygienic macros, compiler correctness, static analysis and formal verification.
Please join us at Northeastern on August 23rd and 24th as we celebrate this personal milestone and pay tribute to a great computer scientist, researcher, teacher and colleague, Dr. Mitchell (Mitch) Wand.
STEERING COMMITTEE
* Olivier Danvy (University of Aarhus)
* David Herman (Northeastern University)
* Dino Oliva (Bloomberg L.P.)
* Olin Shivers (Northeastern University)
PRELIMINARY PROGRAM
Functional un|unparsing
Kenichi Asai and Oleg Kiselyov and Chung-chieh Shan
A mechanized bisimulation for the nu-calculus
Nick Benton and Vasileios Koutavas
A shallow Scheme embedding of bottom-avoiding streams
William E. Byrd and Daniel P. Friedman and Ramana Kumar and Joseph P. Near
A model of functional traversal-based generic programming
Bryan Chadwick and Karl Lieberherr
The MacScheme compiler: using denotational semantics to prove correctness
William D. Clinger
Eliminating the middle man: Learning garbage collection without interpreters
Gregory H. Cooper and Arjun Guha and Shriram Krishnamurthi
Specializing continuations
Christopher Dutchyn
A Scheme for native threads
R. Kent Dybvig
Trampolining architectures
Steven E. Ganz and Daniel P. Friedman
Finding everything that can happen: Solving authentication tests by computer
Joshua D. Guttman and John D. Ramsdell
A theory of typed hygienic macros
David Herman
The MzScheme machine and bytecode verifier
Casey L. Klein and Matthew Flatt and Robert Bruce Findler
Featherweight X10: A core calculus for async-finish parallelism
Jonathan K. Lee and Jens Palsberg
Subcubic control-flow analysis algorithms
Jan Midtgaard and David Van Horn
A simplified multi-tier semantics for Hop
Manuel Serrano and Christian Queinnec
DDP for CFA
Olin Shivers and Dimitrios Vardoulakis and Alexander Spoon
The design and implementation of Typed Scheme
Sam Tobin-Hochstadt and Matthias Felleisen
Friday, May 15, 2009
Lexical syntax thought
We reserve the single-quote character (#\') in the Scheme lexical syntax so that it can't be used anywhere in an identifier. If you stick it in the middle of an identifier (e.g., foo'bar) it lexes as three tokens ("foo", "'", "bar"). But I don't recall ever seeing anyone make use of this fact; they always stick a space before the quote.
I really miss my precious "prime" character in ML and Haskell identifiers (let val state' = munge state ...). Scheme prides itself on a permissive lexical syntax for identifiers. Why couldn't we allow quote characters in all positions of an identifier other than the first?
P.S. I know I can use the Unicode prime character (U+2032). Sorry, not till I get my Unicode keyboard.
Follow-up on effect masking and backtracking
A follow-up on yesterday's post about backtracking with state: an anonymous commenter reminded me that persistent data structures are a useful and lightweight (compared to e.g. first-class stores) approach. In fact, probably my favorite paper from ESOP 2008 was Conchon and Filliâtre's Semi-Persistent Data Structures, which was addressing exactly this problem. Rather than introducing an entirely transactional semantics for the language, it's just a data structure with a transactional interface.
Continuation marks in exceptions
Rethrowing caught exceptions is an important idiom. In particular, there are lots of cases where it's important to conditionally handle an exception by handling it and then rethrowing it. In ML, I believe it's the actual definition of pattern matching on particular exception types: the pattern matching is syntactic sugar for inspecting the caught exception and conditionally rethrowing it. In standard JavaScript it's the only way to catch particular exception types, because there is no standard conditional handling form.
But a useful feature of some exception systems, like in Java, is the association of a stack trace with an exception. But when you rethrow an exception, this involves a choice: do you want a stack trace to be associated with the original continuation that threw the exception, or the one that rethrew it? There are genuine use cases for both. But in Java, the implementers have to choose one or the other for you.
PLT Scheme gives you the flexibility to choose for yourself. Exceptions are not special values in Scheme; there's an orthogonal way of reifying the control state. The PLT runtime stores stack trace information in a special continuation mark, so if you grab the current continuation mark set, you've got your hands on a value that provides DrScheme's IDE with all the information it needs to visualize the continuation.
The standard exception types in PLT Scheme are records with a field for the current continuation mark set. So when you conditionally rethrow an exception, you can either keep its version of the continuation marks, or functionally update it with the current continuation marks and throw that instead.
Afterthought: I think Java might have chosen something like a policy where an exception value saves its stack trace at the point where it's created. This gives you the ability to do both by either rethrowing the same exception (to keep the original stack trace) or wrapping the exception with a new one (to use the new stack trace). I still prefer the PLT Scheme approach, which makes the stack trace a first-class value and consequently more explicit.
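In code, the choice looks roughly like this (the helper and its keyword argument are mine; exn-message, make-exn:fail, and current-continuation-marks are the real PLT operations):

```scheme
#lang racket

;; Run thunk; on failure, rethrow either the caught exception
;; unchanged (keeping the marks of the original throw) or a copy
;; stamped with the continuation marks of the rethrow point.
(define (run/rethrow thunk #:fresh-marks? fresh?)
  (with-handlers ([exn:fail?
                   (lambda (e)
                     (raise (if fresh?
                                (make-exn:fail (exn-message e)
                                               (current-continuation-marks))
                                e)))])
    (thunk)))

;; Either way the message survives; only the recorded continuation
;; (and hence the stack trace a tool would display) differs.
(with-handlers ([exn:fail? exn-message])
  (run/rethrow (lambda () (error "boom")) #:fresh-marks? #t)) ; ⇒ "boom"
```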
Thursday, May 14, 2009
Effect masking vs. purity
I've been working for a while on an algorithm that involves two effects: mutation and backtracking. Implementation strategies for the two effects seem to have different economics. I'm working in the context of PLT Scheme, here, but I suspect my observations would be similar in other non-pure languages.
For backtracking, I could implement the algorithm in a pure way using two continuations or the list monad. Or I could just use the built-in exceptions and exception handlers of PLT Scheme. For mutation, the choice is between store-passing or the built-in mutation of Scheme.
I've tried abstracting these two effects out into a single monad for a completely pure implementation. There are downsides to this approach, though. For example, debuggers and stack traces are completely useless since most of the program control flow goes through unit, bind, and the myriad intermediate closures created by such a higher-order style of programming. Besides, an excessively higher-order style imposes real readability costs on a program. So for implementing backtracking, I find exceptions to be a more straightforward approach.
But as it turns out, the algorithm also needs to roll back mutations when it backtracks. So I simply can't use mutation. I think this is one of the things that make mutation so unwieldy--there's typically no corresponding "masking" operation for a destructive update. Exceptions have handlers, continuations (sometimes) have delimiters-- even some I/O features come with effect masks: for example, in PLT Scheme you can override the current-output-port with a string port and capture a program's output. All of these allow you to bound the effects a computation can have, to put a sort of dynamic safety net around it. Not so for set!, vector-set!, set-box!, or set-record-field!.
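The output-port mask is a one-liner in PLT Scheme; a sketch (the helper name is mine):

```scheme
#lang racket

;; Mask a computation's output: everything it displays goes into a
;; string port, leaving the program's real output stream untouched.
(define (capture-output thunk)
  (let ([out (open-output-string)])
    (parameterize ([current-output-port out])
      (thunk))
    (get-output-string out)))

(capture-output (lambda () (display "hello") (display ", world"))) ; ⇒ "hello, world"
```

There is no analogous form that lets you run a computation, observe its set! effects, and then discard them.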
So for mutation, I'm sticking with explicit store-passing-- not in monadic style. That seems to be the reasonable middle ground for my purposes.
Monday, May 11, 2009
Representing failure
I'm working on an algorithm that uses backtracking implemented with exceptions and handlers (rather than, say, explicit continuations). I just noticed that, for debugging, it's nice to represent the failure exceptions as records that contain not just the data about the operation that failed but also an optional list of other failure exceptions. In the disjunction cases, I can save the intermediate failures and, if they all fail, include them in the resulting failure record. This provides me with a nice data structure representing the control flow of the failed search tree. It only required modifying about 3 lines of code and the resulting data structure is easier to analyze than a manual debugging sequence. (I hate working with debuggers, even good ones.)
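A sketch of the idea (names illustrative, not the actual code): each failure record carries the operation that failed plus the sub-failures a disjunction accumulated, so a fully failed search hands back its own search tree:

```scheme
#lang racket

;; A failure records the operation that failed and the failures of
;; any alternatives tried along the way.
(struct failure (operation sub-failures) #:transparent)

;; Try alternatives in order; if every one fails, raise a failure
;; carrying all the intermediate failures.
(define (try-each label thunks)
  (let loop ([thunks thunks] [fails '()])
    (if (null? thunks)
        (raise (failure label (reverse fails)))
        (with-handlers ([failure? (lambda (f)
                                    (loop (cdr thunks) (cons f fails)))])
          ((car thunks))))))

;; A failed two-way disjunction yields a record containing both
;; branch failures -- a data-structure trace of the search:
(with-handlers ([failure? (lambda (f)
                            (map failure-operation (failure-sub-failures f)))])
  (try-each 'disjunction
            (list (lambda () (raise (failure 'left-branch '())))
                  (lambda () (raise (failure 'right-branch '()))))))
; ⇒ '(left-branch right-branch)
```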
Wednesday, April 29, 2009
Typed closure conversion
Typed closure conversion is a really lovely explanation of the structure of a closure-converted function. Every closure has its own particular environment structure that binds the particular set of free variables that occur in its body. But this means that if you represent a closure directly as a pair of an environment record and a closed procedure (which is explicitly parameterized over the environment record), then the representations of functions that are supposed to have the same type can end up with incompatible environment types. For example, the program
let val y = 1
in
  if true then
    λx.x+y
  else
    λz.z
end

converts naively to the transparent representation

let val y = 1
in
  if true then
    (λ(env,x).x+#y(env), {y=y})
  else
    (λ(env,z).z, {})
end

The problem here is that the type of the environment has leaked information about the internals of the procedure into the external interface. So Minamide et al use existential types as a nice, lightweight type abstraction for hiding the environment layout as an abstract type. So you instead get the opaque representation

let val y = 1
in
  if true then
    pack {y:int} with (λ(env,x).x+#y(env), {y=y})
    as ∃t.((t × int) → int) × t
  else
    pack {} with (λ(env,z).z, {})
    as ∃t.((t × int) → int) × t
end

Update: fixed the types in the existentials-- since I'm using an uncurried representation the argument type is a pair, not an arrow type.
Saturday, April 18, 2009
PLT System Facilities (Software Components): Module name resolvers
Because of eval, one program's "compile time" is another program's run-time. Even though they are not first-class, this also applies to PLT Scheme modules, since they too can be dynamically compiled, loaded and linked. As I've described before, namespaces are the general construct for managing multiple instances of modules and maintaining their top-level bindings. But I still haven't described how dynamic invocation of modules happens.
Module loading is managed by the system's module name resolver. The primary responsibility of the module name resolver is to map distinct but equivalent references to modules into canonical forms. For example, it's possible to refer to the PLT Scheme string library with either of the two forms:
(lib "string.ss" "scheme")
scheme/string

The standard module name resolver translates these to an absolute path to the module's source file in the filesystem.
The other responsibilities of the standard module name resolver are:
- to ensure that a module is only loaded once,
- to make sure each module is loaded the first time it is needed, and
- to check for and prevent cyclic loading dependencies between modules.
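Both behaviors are observable with dynamic-require: the two references above resolve to the same canonical module, which is instantiated only once (example written against a current PLT/Racket install, where the scheme collection still ships):

```scheme
#lang racket

;; Two spellings of the same module reference; the module name
;; resolver canonicalizes both to the same filesystem path, so the
;; library is loaded a single time and both requests see the same
;; instance.
(define join-a (dynamic-require 'scheme/string 'string-join))
(define join-b (dynamic-require '(lib "string.ss" "scheme") 'string-join))

(eq? join-a join-b)         ; same binding from the single instance
(join-a '("a" "b" "c") "-") ; ⇒ "a-b-c"
```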
Wednesday, April 15, 2009
Matthew Flatt running for SIGPLAN member-at-large
I like Matthew's statement, so I want to call it out here:
As a SIGPLAN member at large, I would be a voice for those who believe that conferences have become too important relative to journals. Our conferences have taken over the prestige that other disciplines reserve for journals, so that our conferences tend toward more conservative acceptances and more complex reviewing processes (including double-blind submission and author-response periods), which work against the goal of broadcasting new research directions and ambitious new ideas. In comparison, our journal system, which offers the time and expertise needed for the careful review of enduring results, is left under-used and under-supported (especially by paper referees) despite the admirable efforts of journal editors. I have no easy fixes, nor do I think that immediate radical change is appropriate, but I think we should do all we can to nudge the balance back in favor of journals.
Tuesday, April 07, 2009
Memo to myself: directions for `svn unfail'
Sorry for this-- I keep losing this information so I'm just sticking this note to myself up here. Who knows, if other people run into this it might be of use to them, too.
If you find you're inexplicably being asked to authenticate when doing an svn up and it won't accept your password (particularly after renaming or moving a file in the repository), here's the solution:
...if "svn up" doesn't work, you can make "svn switch" to the same url, it will update all with no problems.
Wednesday, April 01, 2009
Dear David Herman
Dear David Herman,
We have the same name. You seem to think you have my gmail address. Periodically I get email intended for you. It may be that you're giving out my address to people--or so I imagine, because every so often, I also get a message from Google saying that someone is initiating the process to reset my password. Naturally, I don't proceed. I don't want to reset my password. I'd prefer you stopped trying to.
I have no particular way of contacting you, because all I know is that your name is my name too, and I think you live in San Francisco (based on your mail that I've received). Which is all the more confusing, given that I spend a lot of time in the Bay Area. But I digress.
Mr. Herman, I'm afraid we can't share this gmail account. You seem to like the version without a dot, whereas I prefer the version with a dot, but either way, Google says it's mine. Sorry. I got there first.
Best regards,
David Herman
Tuesday, March 31, 2009
Your lambda-cube is puny.
Aaron just showed me this mind-blowing figure from a paper by van Glabbeek that attempts to abstract the wide variety of notions of equivalence in the concurrency literature:
A moment of respectful silence, please.

Hand-writing derivations
I've come to prefer one particular approach to writing out a derivation tree by hand. I used to try to draw the tree by stacking up rules with horizontal separators à la Gentzen, but I could never judge how much space I would need--I'm not accustomed to filling a page bottom to top--and one page never seemed to be enough anyway.
I've found that writing judgments top to bottom, indenting to represent nesting, is much more scalable. E.g.:
⊢ (λ f.f 1) (λ x.x) : int
  ⊢ (λ f.f 1) : (int → int) → int
    f : int → int ⊢ f 1 : int
      f : int → int ⊢ f : int → int
      f : int → int ⊢ 1 : int
  ⊢ (λ x.x) : int → int
    x : int ⊢ x : int