Sunday, July 31, 2005

Element.prototype.getElementsBySelector

All right, y'all had plenty of time to implement the imperative tree traversal, so let's compare notes. Here's what my Javascript version looks like--built to work on actual DOM nodes, of course, using real CSS selectors.

The matching function takes an element and an array of selector paths and returns a new array of selector paths, with an extra boolean field elementMatched indicating whether the node matched one of the given selector paths. I've elided the data definition for selectors and paths, but it should be pretty clear from context. These get constructed from a parser for CSS selector strings.
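(For concreteness, here's one way those selector and path objects might look. This is a minimal sketch of the elided definitions, not the actual parser output; the constructor names are made up.)
// Illustrative sketch only: a selector matches an element by tag name,
// and a path is a chain of selectors for descendant matching.
function TagSelector(tag) { this.tag = tag; }
TagSelector.prototype.matches = function (element) {
    return element.nodeName.toLowerCase() == this.tag;
};

// A path holds a head selector and, if compound, the rest of the path.
// For "div p": new Path(new TagSelector("div"), new Path(new TagSelector("p")))
function Path(selector, tailPath) {
    this._selector = selector;
    this._tailPath = tailPath; // undefined for a simple path
}
Path.prototype.isSimple = function () { return !this._tailPath; };
Path.prototype.selector = function () { return this._selector; };
Path.prototype.tailPath = function () { return this._tailPath; };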
// matchElement : Element * Array<Path>
//     → Array<Path> & { elementMatched : boolean }
function matchElement(element, paths) {
    var childPaths = new Array;
    childPaths.elementMatched = false;
    for (var i = 0; i < paths.length; i++) {
        var path = paths[i];
        if (path.isSimple()) {
            if (path.selector().matches(element)) {
                childPaths.elementMatched = true;
            }
            childPaths.push(path);
        }
        else if (path.selector().matches(element)) {
            childPaths.push(path.tailPath());
            childPaths.push(path);
        }
        else {
            childPaths.push(path);
        }
    }
    return childPaths;
}
Then the main select function takes an element (which can be document.documentElement if you want to start at the root of the HTML document) and a single selector path and searches the entire tree for all matching nodes.
function select(element, path) {
    var continuation =
        new Array(new Task(element, new Array(path)));
    var result = new Array;

    while (continuation.length > 0) {
        var task = continuation.pop();
        var childPaths = matchElement(task.element, task.paths);

        // Add the element's child nodes to the work list.
        var children = task.element.childNodes;
        for (var i = children.length - 1; i >= 0; i--) {
            continuation.push(new Task(children[i], childPaths));
        }

        // If matched, save element in result array.
        if (childPaths.elementMatched) {
            result.push(task.element);
        }
    }

    return result;
}
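The Task constructor isn't shown above; it's nothing more than a pair of an element and the selector paths still pending for its subtree. Something like this would do (the parseSelector helper mentioned in the usage comment is hypothetical):
// A minimal Task constructor, as assumed by select() above.
function Task(element, paths) {
    this.element = element;
    this.paths = paths;
}

// Hypothetical usage, assuming a parseSelector function that turns a CSS
// selector string into a path object:
//   var matches = select(document.documentElement, parseSelector("div p"));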
I daresay my version is easier to understand than Simon Willison's getElementsBySelector implementation or Dean Edwards' cssQuery function. Mine's not quite as fully featured, but the extra features--such as matching of pseudo-elements and arbitrary attributes--shouldn't actually affect the above code. It'll just require additions to the data definition of selector objects.

I've put the code up on my web site; a description page should appear at some point.

I'm back.

Tuesday, July 19, 2005

That's a vacation

Sunshine, cliffside villas, insanely attractive company, unending supply of cold beer, and Introduction to Lattices and Order.

Sunday, July 17, 2005

The Little Calculist goes to the Caribbean

I'm going out of town for my brother's wedding for the next couple of weeks. Cardelli, Wadler, and Milner couldn't decide amongst one another who'd get to be my guest blogger, so I suppose we'll just have to make do with nothing.

Thursday, July 14, 2005

Moderation in all things

Mike Spille blogs about computer folks' tendency to adhere to extremist positions on technologies and methodologies, as well as their tendency to jump to opposite extremes when they get burned. That tired "considered harmful" meme is one of the sounds you hear immediately preceding legions of lemming programmers making the leap.

Monday, July 11, 2005

Deriving an Efficient Tree Traversal, Part III

Previously:

Part I: Overview, specification, CPS
Part II: Contexts, data abstraction

A Small Refolding Simplification

We like our stack representation of contexts (after all, that's the well-known implementation technique we've been aiming for), so let's commit to that from now on. But there's a slight redundancy in the context implementation: notice that the select/c procedure takes a tree, a path, and a context, and the select*/c procedure takes a list of trees, a path, and a context. Those first two parameters for each procedure can also be encoded as context frames. Let's combine the two functions into a single function of just one parameter: the context.

(To distinguish this implementation from the previous version, and to emphasize our commitment to the initial algebra representation, we'll refer to contexts as "work lists" now, and context frames as "work items.")
;; item = (union tree (listof tree)) × path
(define-struct item (task path))

;; initial-work-list : tree × path → (listof item)
(define (initial-work-list tree path)
  (list (make-item tree path)))

;; select/w : (listof item) → (listof tree)
(define (select/w items)
  (if (null? items)
      null
      (let* ([item (car items)]
             [task (item-task item)]
             [path (item-path item)]
             [items (cdr items)])
        (cond
          [(tree? task)
           (cond
             [(null? path) (cons task (select/w items))]
             [(matches? task (car path))
              (select/w (cons (make-item task (cdr path))
                              (cons (make-item (children task) path)
                                    items)))]
             [else
              (select/w (cons (make-item (children task) path)
                              items))])]
          [(null? task) (select/w items)]
          [else (select/w (cons (make-item (car task) path)
                                (cons (make-item (cdr task) path)
                                      items)))]))))
Now that we've folded both tree and (listof tree) arguments into the work list argument, there's just a single function. By now, we've completely gotten rid of all of the appends, so the implementation is linear.

(Update: Ryan found the removal of the appends unclear. The idea is that the context is only being applied to results that are either null or a single-element list, so we can optimize this away to a cons in one case and a simple tail-recursion in the other. Also, I claimed that this implementation is linear, but I don't think that's true: nodes can be visited multiple times since they can reappear in the work list.)

The only problems left are 1) that non-tail recursive call: (cons task (select/w items)), and 2) the use of lists instead of vectors. The first problem is easily solved with an accumulator. The second isn't really a problem, but imperative languages tend to be optimized for array manipulation, so in preparation we'll use an abstract stack datatype:
;; stack : a ... → (stackof a)
;; push : a × (stackof a) → (stackof a)
;; pop : (stackof a) → a × (stackof a)
;; stack-empty? : (stackof a) → boolean
The obvious implementation of stacks is as lists:
(define stack list)

(define (push x s)
  (cons x s))

(define (pop s)
  (values (car s) (cdr s)))

(define stack-empty? null?)

Work Lists with an Accumulator

The accumulator-passing-style implementation converts the recursive calls to tail calls:
(define (initial-work-stack tree path)
  (stack (make-item tree path)))

;; select/a : (stackof item) → (stackof tree)
(define (select/a items)
  (let loop ([items items] [result (stack)])
    (if (stack-empty? items)
        result
        (let-values ([(item items) (pop items)])
          (let ([task (item-task item)]
                [path (item-path item)])
            (cond
              [(tree? task)
               (cond
                 [(null? path) (loop items (push task result))]
                 [(matches? task (car path))
                  (loop (push (make-item task (cdr path))
                              (push (make-item (children task)
                                               path)
                                    items))
                        result)]
                 [else
                  (loop (push (make-item (children task) path)
                              items)
                        result)])]
              [(null? task) (loop items result)]
              [else (loop (push (make-item (car task) path)
                                (push (make-item (cdr task) path)
                                      items))
                          result)]))))))
This version uses bounded control space, making only tail calls for any non-trivial control. It's easy to see how this would translate to a while-loop.

Imperative Implementation

At last, the final imperative implementation is easy to derive. Since each stack is used linearly in the previous version, we can replace it with a destructive update. So we change our stack representation to use a growable vector datatype (thanks to Jacob for the Scheme implementation): 1
(define stack gvector)

(define (pop v)
  (let ([len (gvector-length v)])
    (let ([x (gvector-ref v (- len 1))])
      (gvector-remove! v (- len 1))
      (values x v))))

(define (push x v)
  (gvector-insert! v x)
  v)

(define (stack-empty? v)
  (zero? (gvector-length v)))
We'll represent paths as indices into a fixed path vector, and the child lists of tree nodes will be vectors as well. So we extend work list items to be triples with a task (either a tree or a vector of trees), an optional index into the tree vector (hidden invariants--yuck!), and a path index.
;; tree = symbol × (vectorof tree)
;; path = (vectorof symbol)
;; item = (union (tree × #f × nat) ((vectorof tree) × nat × nat))
And here it is, in all its horrifying glory, our final imperative implementation:
(define-struct item (task index path) #f)

(define (select/! tree path)
  (let ([path-len (vector-length path)])
    (let loop ([items (stack (make-item tree #f 0))]
               [result (stack)])
      (if (stack-empty? items)
          result
          (let-values ([(item items) (pop items)])
            (let ([task (item-task item)]
                  [index (item-index item)]
                  [path-i (item-path item)])
              (cond
                [(tree? task)
                 (cond
                   [(>= path-i path-len)
                    (loop items (push task result))]
                   [(matches? task (vector-ref path path-i))
                    (loop (push (make-item task #f (+ path-i 1))
                                (push (make-item (children task)
                                                 0
                                                 path-i)
                                      items))
                          result)]
                   [else
                    (loop (push (make-item (children task)
                                           0
                                           path-i)
                                items)
                          result)])]
                [(>= index (vector-length task))
                 (loop items result)]
                [else
                 (loop (push (make-item (vector-ref task index)
                                        #f
                                        path-i)
                             (push (make-item task
                                              (+ index 1)
                                              path-i)
                                   items))
                       result)])))))))
Translation to your favorite imperative language is now trivial, and left as an exercise.
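If you want to check your answer, here's one possible rendering in JavaScript--a sketch only, using the simplified tree representation from Part I (assumed fields tag and children) and plain arrays in place of the growable vectors:
// Sketch of the final worklist traversal in JavaScript. Items are triples
// of a task (a tree node or an array of children), an index (null when the
// task is a single node), and an index into the path array.
function selectImperative(tree, path) {
    var items = [{ task: tree, index: null, pathIndex: 0 }];
    var result = [];
    while (items.length > 0) {
        var item = items.pop();
        var task = item.task;
        if (item.index === null) {
            // The task is a single tree node.
            if (item.pathIndex >= path.length) {
                result.push(task);
            } else if (task.tag == path[item.pathIndex]) {
                items.push({ task: task.children, index: 0,
                             pathIndex: item.pathIndex });
                items.push({ task: task, index: null,
                             pathIndex: item.pathIndex + 1 });
            } else {
                items.push({ task: task.children, index: 0,
                             pathIndex: item.pathIndex });
            }
        } else if (item.index < task.length) {
            // The task is a child array: schedule the rest of the array,
            // then the next child on top of it.
            items.push({ task: task, index: item.index + 1,
                         pathIndex: item.pathIndex });
            items.push({ task: task[item.index], index: null,
                         pathIndex: item.pathIndex });
        }
        // An exhausted child array is simply dropped.
    }
    return result;
}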
1 With the simple addition of a gvector-remove! operation.

Deriving an Efficient Tree Traversal, Part II

In the last episode, we tried representing control with continuations. But continuations are in fact just one implementation of an abstract datatype of contexts. To understand the kinds of contexts we encounter in this problem, let's look at all the ways continuations get constructed in the CPS version:
(define (select/k tree path k)
  ...
  [(matches? tree (car path))
   (select/k tree (cdr path)
             (lambda (ts1)
               (select*/k (children tree) path
                          (lambda (ts2)
                            (k (append ts1 ts2))))))]
  [else (select*/k (children tree) path k)]))

(define (select*/k trees path k)
  (cond
    ...
    [else (select/k (car trees) path
                    (lambda (ts1)
                      (select*/k (cdr trees) path
                                 (lambda (ts2)
                                   (k (append ts1 ts2))))))]))
In essence, there are two types of context frames, describing the work we have left to do: frames containing a tree and a path, describing a future application of select/k, and frames containing a list of trees and a path, describing a future application of select*/k. The parts of these continuations that mention those trees and paths are the only parts that rely on dynamic data; the rest is just boilerplate, which can be abstracted into the datatype of contexts.

Abstract Datatype: Contexts

The abstract datatype of traversal contexts includes (as always) the initial context, as well as compound contexts containing a base context and a context frame. Context frames in turn are either tree frames, containing a tree and a path, or tree list frames, containing a list of trees and a path. That is:
context ::= [] | context-frame ⋅ context
context-frame ::= tree × path | (listof tree) × path
We need two operations on contexts: one to build compound contexts, and one to apply an existing context.1
cons-context : context-frame × context → context
apply-context : context × (listof tree) → (listof tree)
Our latest version of the program can now operate on these datatypes independent of their representation:
;; select/c : tree × path × context → (listof tree)
(define (select/c tree path c)
  (cond
    [(null? path) (apply-context c (list tree))]
    [(matches? tree (car path))
     (select/c tree (cdr path)
               (cons-context (make-context-frame (children tree) path)
                             c))]
    [else (select*/c (children tree) path c)]))

;; select*/c : (listof tree) × path × context → (listof tree)
(define (select*/c trees path c)
  (cond
    [(null? trees) (apply-context c null)]
    [else (select/c (car trees) path
                    (cons-context (make-context-frame (cdr trees) path)
                                  c))]))

Final Algebra Representation

We can recover the CPS implementation by representing the constructors of the context datatype (cons-context, make-context-frame, initial-context) as functions, and the observer of the datatype (apply-context) as function application. This is the so-called final algebra representation of an abstract datatype: all the intelligence is in the constructors.
;; context-frame = (context → context)
;; context = (listof tree) → (listof tree)

;; make-context-frame : (union tree (listof tree)) × path → context-frame
(define (make-context-frame work path)
  (cond
    [(tree? work) (lambda (k)
                    (lambda (ts1)
                      (select/c work path
                                (lambda (ts2)
                                  (k (append ts1 ts2))))))]
    [else (lambda (k)
            (lambda (ts1)
              (select*/c work path
                         (lambda (ts2)
                           (k (append ts1 ts2))))))]))

;; cons-context : context-frame × context → context
(define (cons-context frame context)
  (frame context))

;; apply-context : context × (listof tree) → (listof tree)
(define (apply-context k trees)
  (k trees))

(define initial-context identity)

Initial Algebra Representation

Alternatively, we can represent the constructors of the abstract datatype using a simple disjoint union datatype, and put all the intelligence in the observer. This is the initial algebra representation. The new representations of contexts and frames are:
;; context = (listof context-frame)
;; context-frame = (union tree (listof tree)) × path

(define-struct context-frame (work path))

(define cons-context cons)

(define initial-context null)

;; apply-context : context × (listof tree) → (listof tree)
(define (apply-context ls trees)
  (cond
    [(null? ls) trees]
    [(tree? (context-frame-work (car ls)))
     (append trees
             (select/c (context-frame-work (car ls))
                       (context-frame-path (car ls))
                       (cdr ls)))]
    [else
     (append trees
             (select*/c (context-frame-work (car ls))
                        (context-frame-path (car ls))
                        (cdr ls)))]))
This is looking promising: now we can represent contexts with simple tuples and unions, and without the use of higher-order functions.

To be concluded...
1 In general, the return type of applying a context could be polymorphic, but for our purposes, all computations will return lists of tree nodes.

Deriving an Efficient Tree Traversal, Part I

In an attempt to live up to this blog's name, I'm going to spend the next few posts describing a series of transformations that derive an efficient implementation of a tree traversal suitable for imperative languages, starting from a natural, structural recursion in a language that isn't purposefully broken. This is mostly based on material from Mitch's Principles of Programming Languages course that all first-year Northeastern PhD students take--especially data abstraction, contexts, and continuations. This is great stuff, and it's worth learning.

The Problem

The problem we'll solve is to write a traversal of the DHTML DOM tree in a web page. The problem is useful and ubiquitous, but it's hard to solve in a language that doesn't like recursion. It's pretty well-known that you need to maintain an explicit stack to drive the traversal, but it's not obvious how to get it right. I could point out popular libraries that get it wrong, but I'll be polite.

For the sake of example, I'll simplify the problem somewhat: we'll represent DOM nodes as a simple structure:
;; tree = symbol × (listof tree)
(define-struct tree (tag children))
Leaf nodes are represented as trees with empty lists of children. We'll traverse the DOM in the style of CSS selectors, which are represented as an ancestry path of tags:
;; path = (listof selector)
;; selector = symbol
For example, the path '(foo bar) represents the set of all bar nodes that are descendants of foo nodes.

Natural Solution

The clearest implementation of the problem is a structural induction on the DOM tree and selector path (technically, the ordering is lexicographic over both inputs):
;; select : tree × path → (listof tree)
(define (select tree path)
  (cond
    [(null? path) (list tree)]
    [(matches? tree (car path)) ; does node's tag match first tag in path?
     (append (select tree (cdr path))
             (select* (children tree) path))]
    [else (select* (children tree) path)]))

;; select* : (listof tree) × path → (listof tree)
(define (select* trees path)
  (cond
    [(null? trees) null]
    [else (append (select (car trees) path)
                  (select* (cdr trees) path))]))
We have two distinct datatypes to recur on: tree and (listof tree), which leads to two separate functions. When we find a node that matches, we need to test it against the rest of the path as well; otherwise we just search its descendants. We only produce a match when we find a node that has matched the entire selector path.1

CPS

The recursive solution uses the implicit control stack of Scheme to drive the traversal, but we want an explicit implementation. The easiest way to represent control is to CPS a program, so let's try that:
;; select/k : tree × path × ((listof tree) → (listof tree)) → (listof tree)
(define (select/k tree path k)
  (cond
    [(null? path) (k (list tree))]
    [(matches? tree (car path))
     (select/k tree (cdr path)
               (lambda (ts1)
                 (select*/k (children tree) path
                            (lambda (ts2)
                              (k (append ts1 ts2))))))]
    [else (select*/k (children tree) path k)]))

;; select*/k : (listof tree) × path × ((listof tree) → (listof tree)) → (listof tree)
(define (select*/k trees path k)
  (cond
    [(null? trees) (k null)]
    [else (select/k (car trees) path
                    (lambda (ts1)
                      (select*/k (cdr trees) path
                                 (lambda (ts2)
                                   (k (append ts1 ts2))))))]))
This no longer relies on the control stack of Scheme, because every function call is in tail position. However, it isn't a satisfying solution for an iterative language--even one with higher-order functions--because without proper tail calls, it will still use unbounded control space.

To be continued...
1 For the sake of simplicity, I'm not going to worry about duplicate elements in the result list.

Thursday, July 07, 2005

How will AJAX change the landscape of browser platforms?

All this AJAX mania means we're going to be seeing a lot more long-lived single pages on the web, with users interacting indefinitely within a single web page. This means browser scripting engines are going to have to deal with more garbage collection, for one thing. I'm sure that to date, the interpreters bundled with browsers have been built with the implicit assumption that all scripts will be short-lived.

There are currently serious performance issues with web scripting. (This multi-browser experiment at quirksmode demonstrates a few of them. Joel Webber describes some more.) A single developer trying to make a robust web app in Javascript would be up a creek, but with huge industry pressure backed by the craze of an Internet fad, there's incentive for the platforms to change. I have no idea how they'll change, though.

Wednesday, July 06, 2005

Latest Javascript AOP round-up

I've updated a couple of my web pages on Javascript. Check out the latest version of the CEKS machine semantics for "ClassicJavascript," as well as my new Behaviors library. There's still more to come, but they're getting better.

I also just discovered another post on AOP in Javascript, but don't have time to read it at the moment.

Monday, July 04, 2005

For the love of all that's good and holy

This JavaScript stuff is just getting silly. Every object has an internal prototype link, which is a reference to its prototype that, according to the spec, is not accessible to the user program. (The object's prototype is initially available as obj.constructor.prototype, but changing these fields doesn't affect the real prototype link.) There's a very good reason for hiding this link: the dynamic dispatch semantics searches the prototype chain automatically on field dereference (including method calls), and it would be a shame if cycles showed up in the prototype graph. I'm gonna make a rash generalization and guess that the legions of novice scripters would have no clue what was going on if simply dereferencing foo.bar caused their entire web browser to freeze.
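A quick illustration of that parenthetical, with a made-up Point constructor:
// Reassigning a constructor's prototype field does not re-link objects
// that have already been constructed; their hidden prototype link still
// points at the old prototype object.
function Point(x) { this.x = x; }
Point.prototype.dimensions = 1;

var p = new Point(42);
// p.dimensions is 1, found through the internal prototype link.

Point.prototype = { dimensions: 2 };
// p.dimensions is still 1: we changed the constructor's prototype field,
// not p's real prototype link. Only objects constructed from now on see 2.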

But many implementations do expose the internal prototype link, via a non-standard __proto__ property! And what do they do about cycles? My guess was that they'd check for cycles during the dereferencing operation, but it appears they disallow it on mutation of the __proto__ field:
Rhino 1.6 release 1 2004 11 30
js> function Thing(v) { this.value = v; }
js> Thing.prototype = { common : 'blue' };
[object Object]
js> var thing1 = new Thing(1);
js> var thing2 = new Thing(2);
js> thing1.value
1
js> thing2.value
2
js> thing1.common
blue
js> thing2.common
blue
js> thing1.__proto__ == Thing.prototype
true
js> thing2.__proto__ == Thing.prototype
true
js> thing1.__proto__ = thing2;
[object Object]
js> thing2.__proto__ = thing1;
js: "<stdin>", line 12: Cyclic __proto__ value not allowed.
Good gravy, does everything have to be mutable?

Saturday, July 02, 2005

AOP in JavaScript

It's really easy to implement various aspect-oriented programming idioms in JavaScript. This shouldn't come as a big surprise, perhaps because it's a reflective language, but probably even more so simply because it allows you to mutate everything.

Danne Lundqvist implemented a little library that introduces before, after, and around advice to methods. It's very simple: it just replaces the method slot of an object with a wrapper around the original method.
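The core of the trick is small enough to sketch here (this is the general idea, not Lundqvist's actual API):
// The wrap-the-method-slot idea: replace obj[name] with a function that
// runs advice around the original method. (Illustrative names only.)
function addAroundAdvice(obj, name, advice) {
    var original = obj[name];
    obj[name] = function () {
        // The advice receives the original method and the arguments,
        // and decides if and when to proceed.
        return advice.call(this, original, arguments);
    };
}

// Hypothetical usage: wrap greeter.greet without editing its definition.
var greeter = { greet: function (name) { return "hello, " + name; } };
addAroundAdvice(greeter, "greet", function (original, args) {
    // "before" advice would go here
    var result = original.apply(this, args); // proceed
    // "after" advice would go here
    return result;
});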

Another creative aspect-oriented library for JavaScript is Ben Nolan's Behaviours library. The "advice" in this case is simply the standard DHTML event handlers, but the interesting part is the join-point model: CSS selectors. Because a web page is represented roughly as a tree of DOM objects (a graph, actually, because of the back edges), there are all kinds of libraries for encoding the selection of sets of nodes. Because CSS offers a high-level syntax for representing these selectors, Ben's library allows you to specify join points using those selectors and automatically attach new behavior to all the selected nodes.
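The shape of the idea, roughly (an illustrative sketch, not Ben's actual API; the getElementsBySelector helper is assumed to be something like Simon Willison's):
// A behaviour table maps CSS selectors to functions; each function is
// applied to every node the selector picks out. (Illustrative only.)
var behaviours = {
    "ul.menu li": function (element) {
        element.onclick = function () { alert("menu item clicked"); };
    }
};

function applyBehaviours(table) {
    for (var selector in table) {
        var nodes = document.getElementsBySelector(selector); // assumed helper
        for (var i = 0; i < nodes.length; i++) {
            table[selector](nodes[i]);
        }
    }
}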

Noel noted the connection to AOP on the Untyping blog, and I've made some comments about the design of such an AOP facility on Lambda the Ultimate.

Update: Here's a prototype implementation of a variation on Ben's library. I implemented some of the suggestions I made on Lambda the Ultimate.

Friday, July 01, 2005

Haskell's hidden mutation

Here's a clearer version of the Haskell function I described earlier today:
minTree t = t'
  where (n, t') = replaceMin t n

replaceMin (Leaf n) m = (n, Leaf m)
replaceMin (Node t1 t2) m = (n, Node t1' t2')
  where (n1, t1') = replaceMin t1 m
        (n2, t2') = replaceMin t2 m
        n = min n1 n2
In a sense, the binding of a new variable n introduces an implicit box that will get filled in at some point. So because we have letrec binding semantics, we can use the same box as both an output and an input to the replaceMin function. But the box can only be filled once; if, in the process of computing the value to be placed in the box, we end up having to use the value in the box, we end up with a dependency cycle and then loop. This program works because the box gets set at the end of the computation, and doesn't get used until after that point.

We can explicitly encode the order of the computation using continuations. I'll use Scheme now just to show there's no hidden reliance on laziness anymore.
(define (min-tree t)
  (let ([m (box #f)])
    (match-let ([(n . t*) (replace/min t m)])
      (set-box! m (n identity))
      t*)))

(define (replace/min t m)
  (match t
    [($ Leaf n)
     (cons (lambda (k) (k n)) (make-Leaf m))]
    [($ Node t1 t2)
     (match-let ([(k1 . t1*) (replace/min t1 m)]
                 [(k2 . t2*) (replace/min t2 m)])
       (cons (lambda (k)
               (k1 (lambda (n1)
                     (k2 (lambda (n2)
                           (k (min n1 n2)))))))
             (make-Node t1* t2*)))]))
Now you can see exactly when we set the value of the box--after the computation finishes--and that we don't do anything with the returned "min-computations" except embed them in larger min-computations until after the traversal, when we run the final computation.

Circular programming in Haskell

Haskell lets you write some strange-looking programs in a style known as "circular programming." If you remember how you felt the first time you saw a recursive program (usually something in between awe and fear), get ready for this... Imagine transforming a tree with integer leaves into a tree that replaces all the leaves with the minimum leaf value. Normally we'd do this in two passes: first find the minimum leaf, and then traverse the tree a second time, replacing the leaves with the minimum value. In a lazy language you can do this in one go:
data Tree = Leaf Int
          | Node Tree Tree

minTree :: Tree -> Tree
minTree t = t'
  where minTree' (Leaf n) = (n, Leaf n')
        minTree' (Node t1 t2) =
          let (m1, t1') = minTree' t1
              (m2, t2') = minTree' t2
          in (min m1 m2, Node t1' t2')
        (n', t') = minTree' t
What is so absolutely bizarre about this program is that it uses the result of the entire recursion n' in one of the recursive calls, before the result has been computed! But because it isn't actually observed until after the tree has been constructed, it's safe to do so. Another way to think of this is to imagine the value n' as a reference cell or "box" whose contents get set once at the end of the computation; we don't look inside it until after the computation is over, but during the computation we're allowed to pass the box around and put it inside other nodes.

I've seen some pretty weird Haskell programs that use this kind of trick, passing the result of a function in as one of its own arguments, for example. Of course, you can't just do this however you like. For example, we can't possibly imagine this program would give us a useful result:
data Thing = Leaf
           | Node Thing

f :: Bool -> Thing -> Int
f _ (Node v) = i'
  where i' = f (i' == 0) v
f isZero Leaf = if isZero then 1 else 0
The isZero argument is a flag that means something like "will my result be unequal to itself?"