Tuesday, June 20, 2006

Prophecies: future-generators

I've discovered a neat idiom. I have a web servlet that spawns a separate process and reads a bunch of terms from the process's output. I want to be able to view those terms one at a time in the web page, but I don't want to leave the process hanging indefinitely while it waits on the user's interaction. At the same time, I want it to be safe for cloning and the back button, as all web programs should be. So the program should process the terms lazily and allow backtracking (without having to re-read terms after backtracking), but the actual reading of the terms should happen concurrently, so the separate process can be closed as soon as possible.

This is a sort of combination of generators, which allow programmers to separate producers from consumers, and futures, which are a concurrent variation of laziness, where evaluation of a subexpression can happen concurrently, but evaluation of the containing expression must block if it needs the value of the subexpression.
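
To make the comparison concrete, here is a rough sketch of a single-value future in PLT Scheme, built from a thread and a semaphore (the name future and the touch procedure are just for illustration, not a library API):

;; A rough sketch of a future: start the work in its own thread and
;; block only when the value is actually demanded.
(define (future thunk)
  (let ([result #f]
        [done (make-semaphore 0)])
    (thread (lambda ()
              (set! result (thunk))
              (semaphore-post done)))
    ;; Touching the future: wait for the worker thread, then return.
    (lambda ()
      (semaphore-wait done)
      (semaphore-post done)   ; re-post so repeated touches don't block
      result)))

A generator, by contrast, runs in the consumer's thread and suspends at each yield; a prophecy runs eagerly like a future but produces a whole sequence of values like a generator.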

Let's call these future-generators "prophecies." We create a prophecy with a special syntax whose subexpressions are evaluated immediately but in a separate thread from the creator of the prophecy:
(let ([p (prophecy
           (yield 1)
           (yield 2)
           (yield 3))])
  ...)
The difference between this and traditional generators is that yield does not halt evaluation in the current local evaluation context, which is happening in a separate thread. It simply queues up the value to make it accessible to the rest of the world.
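
One way to picture the mechanism (a minimal sketch, not the actual implementation; I'm assuming PLT's async channels, with spawn-prophecy standing in for what the prophecy form would expand into):

(require (lib "async-channel.ss"))

;; spawn-prophecy : ((any -> void) -> any) -> async-channel
;; Runs the body in its own thread, handing it a yield procedure that
;; just enqueues values; posts a 'done sentinel when the body finishes.
(define (spawn-prophecy body)
  (let ([ch (make-async-channel)])
    (thread
     (lambda ()
       (body (lambda (v) (async-channel-put ch (list 'value v))))
       (async-channel-put ch 'done)))
    ch))

With that reading, the example above amounts to (spawn-prophecy (lambda (yield) (yield 1) (yield 2) (yield 3))).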

We can access the yielded values with a library function prophecy-next, which takes a prophecy and blocks until there is a value available, or raises the sentinel value stop-iteration to indicate that "the prophecy is fulfilled."
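
In terms of the sketch above, prophecy-next just pulls the next value off the queue:

;; prophecy-next : async-channel -> any
;; Blocks until a value is available; raises 'stop-iteration when the
;; prophecy is fulfilled. (Sketch only; assumes the representation above.)
(define (prophecy-next ch)
  (let ([msg (async-channel-get ch)])
    (cond
      [(eq? msg 'done)
       (async-channel-put ch 'done)   ; leave the sentinel for later calls
       (raise 'stop-iteration)]
      [else (cadr msg)])))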

Or we can extract the values lazily with prophecy->stream, which produces a stream, where (streamof a) = (promiseof (union null (cons a (streamof a)))).
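
Continuing the sketch, prophecy->stream wraps the queue in a memoizing lazy stream, so each term is read exactly once no matter how many times a page is revisited:

;; prophecy->stream : async-channel -> (streamof any)
;; Forcing the same cell twice never re-reads from the queue, because
;; delay/force memoizes the result.
(define (prophecy->stream ch)
  (delay
    (with-handlers ([(lambda (e) (eq? e 'stop-iteration))
                     (lambda (e) '())])
      (let ([v (prophecy-next ch)])
        (cons v (prophecy->stream ch))))))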

Now my servlet is easy to write:
(define (start request)
  (let ([p (prophecy
             ... (launch "prog.exe") ...
             (let loop ()
               ... (yield (read-term)) ...))])
    (go (prophecy->stream p))))

(define (go stream)
  (if (null? (force stream))
      `(html (body "DONE"))
      (send/suspend/dispatch
       (lambda (proc->url)
         `(html
           (body
            (p ,(format "~a" (car (force stream))))
            (p (a ([href ,(proc->url (lambda (req)
                                       (go (cdr (force stream)))))])
                  "Next"))))))))
The separate program launches immediately, and all its output is read in as soon as possible. But the terms are packaged up via a prophecy into a lazy stream, which the web servlet reads in one page at a time. Users can safely backtrack, clone windows, and/or refresh any of the pages in the interaction, and the values always appear in the proper sequence, without ever having to regenerate them from the separate process.
Update: Added to PLaneT.

2 comments:

Anonymous said...

Hi Dave--

This looks interesting. Could you explain a little more?

First of all, what do you mean about viewing terms one at a time? Is each one on a separate page? Is it a list of items on one page? Are you intending to start serving the page before all of the terms are ready? A more concrete example would be helpful.

Can you state concisely what your feature does? I know it's "sort of" a combination of generators and futures, but I'm not seeing how they interact.

You say that the subexprs of the prophecy form are evaluated in a separate thread; is each one evaluated in its own separate thread? Or is one background thread generated for that whole prophecy? What happens after yielding--it queues up a value for other threads, but what does the present thread do?

Yours confusedly,
Ezra

Anonymous said...

Do you know Concurrent ML's "events"? Scheme48 (and Scsh) have an implementation of CML's framework; there, CML's "events" are called "rendezvous".