Friday, April 15, 2005

The balance of expressiveness and reasonability

Yesterday during the question and answer period of a talk, Matthias pointed out that there is usually a balance involved in language design between the expressive power of the language and the ability to do local reasoning on programs in the language. This confirmed and crystallized an intuition I've been mulling for a while. It was always clear to me that an increase in expressivity is not inherently "good" (for example, in his expressivity paper, Matthias proves that set! increases the expressive power of a language, but he certainly wasn't advocating the gratuitous use of mutation). But I hadn't quite put my finger on it until Matthias's comment.

It's in fact a direct result of the idea that powerful language constructs require global transformations: when you use a construct that affects your whole program, then you have to reason globally about the changes it makes. Essentially this means you'll need more powerful reasoning tools to be able to tame the power -- which can be both constructive and destructive -- of the new language feature.

As a simple example, the other day I was writing some pattern-matching code in "substitution-passing style," where the matcher returns either a substitution or #f, the latter indicating that the match failed. But before I knew it, I ended up with loads of checks to see whether intermediate matches had failed. The real kicker was when I started writing helper functions (match-list) that themselves had to propagate these checks. I finally decided that there was no need to reimplement a poor man's exception mechanism when my implementation language already has one. So I changed the interface to have matchers raise a match-error exception when matching fails. But the upshot is that the code no longer locally indicates that such an exception can occur; that property now lives in comments and in the proof obligations of clients.
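To make the trade-off concrete, here is a rough sketch of the two interfaces. This is not my actual code: the matchers are simplified, the names match/subst, match/exn, and try-match are made up for illustration, and the exception version assumes PLT Scheme's define-struct, raise, and with-handlers.

    ;; Substitution-passing style: the matcher returns either a substitution
    ;; (here an association list) or #f to indicate failure. Every caller has
    ;; to check for #f before using the result.
    (define (match/subst pat val subst)
      (cond ((symbol? pat) (cons (cons pat val) subst))
            ((and (pair? pat) (pair? val))
             (let ((s (match/subst (car pat) (car val) subst)))
               (if s (match/subst (cdr pat) (cdr val) s) #f)))
            ((equal? pat val) subst)
            (else #f)))

    ;; The same matcher signalling failure with an exception. The intermediate
    ;; checks disappear, but nothing in the code says a match-error can be
    ;; raised; clients just have to know.
    (define-struct match-error ())

    (define (match/exn pat val subst)
      (cond ((symbol? pat) (cons (cons pat val) subst))
            ((and (pair? pat) (pair? val))
             (match/exn (cdr pat) (cdr val)
                        (match/exn (car pat) (car val) subst)))
            ((equal? pat val) subst)
            (else (raise (make-match-error)))))

    ;; A client that wants the old #f behavior back:
    (define (try-match pat val)
      (with-handlers ((match-error? (lambda (e) #f)))
        (match/exn pat val '())))

Note how the failure checks vanish from the second version, and with them any local evidence that failure can happen at all.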

Type and effect systems like Java's checked exceptions can mitigate these non-local changes by checking and maintaining invariants about the effects. This is precisely an example of the kinds of reasoning tools that help tame the power of expressive language features.

Another example is dynamic dispatch in OO languages. I was having a discussion last year with Shriram about local reasoning and AOP, and he made an analogy to dynamic dispatch. Programmers are completely accustomed to looking at a local piece of code and not knowing its behavior, he argued; in OO languages a method call could refer to any arbitrary reimplementation of the method. But this is par for the course, and in practice we assume that the reimplementations will be well-behaved.
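Here's a toy version of the same phenomenon in Scheme, with objects encoded as closures (the shape "classes" are made up for illustration):

    ;; Two "classes" that respond to the same message.
    (define (make-circle r)
      (lambda (msg)
        (if (eq? msg 'area) (* 3.14159 r r) (error "unknown message" msg))))

    (define (make-square s)
      (lambda (msg)
        (if (eq? msg 'area) (* s s) (error "unknown message" msg))))

    ;; Locally, this call site tells us nothing about which implementation of
    ;; 'area will run; shape could be a circle, a square, or anything anyone
    ;; ever writes that answers the 'area message.
    (define (report-area shape)
      (shape 'area))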

Furthermore, we don't use this power with reckless abandon. I found a relevant quote from an aside made by Paul Wilson in the midst of an unrelated Usenet conversation:
It turns out that at most overtly polymorphic call sites in OO programs, the receiver is always of the same class. And of the minority of truly polymorphic call sites, there's only two or three classes of receiver. That makes it sound like OO doesn't do much work for you. But it does, really, because the 10% of sites where you need it, you need it bad.
Now, I was initially unhappy with these "pragmatic" justifications of language features that destroy local reasoning. They seemed to pit abstraction and local reasoning on one side against usefulness and practicality on the other, and I'm simply not prepared to give up on abstraction. I just don't see how anyone could survive a software project without it. But the underlying tension is the one between expressive power and local reasoning. So good language design means finding a sweet spot: features just powerful enough to express the kinds of things you want to express, but not so powerful that programs become too much harder to reason about.1

I believe that the core principles of program design (should) stem from functional programming. But many important design principles require more powerful tools than pure functional programming offers. So more powerful language features are important. On the other hand, any time you introduce effects that can change the behavior of arbitrary parts of a program, you need to be able to mitigate those changes with reasoning tools, whether they are proof techniques or automated reasoning software.

1As Moore's law allows lightweight formal methods to get closer to what used to be heavyweight formal methods, we should be able to make more powerful reasoning tools for our programs, which should allow us to slide the scale further in favor of more expressive language features. Which is all to say that I should really check out Shriram and Kathi's work on formal methods for AOP.

5 comments:

pnkfelix said...

I'm not convinced that your intuition is 100% on target.

More concretely: let's assume that we are choosing to explain the features of our language by describing how you could get the same effect in other languages that we have previously explained. Saying "the only way we can explain the addition of feature X to language L is via a global transformation of elements in L+{X} to L" does not immediately imply, to me, that L+{X} will be harder to reason about than L.

If the only reasoning tool available to us is the above transformation, then sure, I can see how our ability to reason about each program text (and program fragment) is hindered.

But if we were to first prove that the above definition of L+{X} is equivalent to some other semantic definition for our language (perhaps denotational), then we may be able to harness that other definition to be able to reason more effectively.

In short, you are hardcoding a restriction into your analysis technique by assuming a definition via compilation of terms in L+{X} to concrete terms of the sublanguage L. If your compilation target is a totally different language (e.g. the mathy world of denotational semantics, with Scott domains, P_omega, etc.), you may get something that is more readily analyzed.

Or you may not. Hey, I'm not the denotational semanticist here.

Dave Herman said...

The fact that the transformation is global means that a local use of an effect may result in a non-local change to the program. This is exactly what thwarts local reasoning.

I don't think it relies on the programmer thinking of the language feature in terms of a translation to a base language (e.g., Java programmers don't think of exceptions as being "really just a propagated disjoint union"). The important thing is that there's no possible way to do the translation to a base language without global changes to the program. In other words, the formalism expresses the idea that "this code has non-local effects."

pnkfelix said...

The fact that the transformation is global means that a local use of an effect may result in a non-local change to the program. This is exactly what thwarts local reasoning.

So, I'm going to play devil's advocate for a moment.

Is the invocation of a first class procedure a local change to the program? What if we don't have the code for that procedure at hand? (Again, I am talking about reasoning about a fragment of the code, a module; not a fully linked and loaded program.)

In one sense, of course this requires global reasoning; we don't have the program text, so we don't know what code is going to be invoked.

On the other hand, the nature of our programming language allows us to make reasonable statements about what effects any procedure invocation could possibly have on the state of the machine, and use that information to put an upper bound on what might happen.

To me, this corresponds to when we explicitly model the domain of procedural values in our denotational semantics: we are trying to put an upper bound on the possible behaviors we expect. This was the sort of notion that I was trying to get at in my first comment.

pnkfelix said...

I am further digesting the matter, and I suspect that my original comment was wrong, or at least misleading.

I still am not 100% convinced that a proper "advance" in expressiveness leads to a proper "retreat" in analyzability.

But I do concede that when we choose a denotational semantics upon which to model our language, there are constraints built into the domain definitions we choose. These constraints may well then lead to the inability to express particular language features. And those cases may correspond exactly to the cases when we have a proper "advance" in expressiveness.

So my first comment does not provide a convincing argument that a different semantics could provide the analyzability we were lacking before. (Because such a new semantics may then make our old analysis techniques inapplicable).

[[ I'm really just trying to cut this discussion off before it becomes more tiresome than it already is ]]

Dave Herman said...

Is the invocation of a first class procedure a local change to the program?

Not at all! Guess why higher-order procedures are more expressive than first-order procedures?

That's why flow analysis is a useful reasoning tool -- it's a whole-program (or whole-module) analysis that helps mitigate the loss of ability to reason about local procedure calls.
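For instance (a made-up two-line example):

    ;; Looking only at this definition, we can't say what f does:
    (define (apply-twice f x)
      (f (f x)))

    ;; A whole-program flow analysis (0CFA, say) inspects every call site and
    ;; computes the set of procedures that can flow to f -- here add1 and the
    ;; string-appending lambda -- recovering an approximation of the local
    ;; knowledge the higher-order call gave up.
    (apply-twice add1 5)
    (apply-twice (lambda (s) (string-append s "!")) "hi")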
