I'm realizing that using laziness to simulate infinite quantities with finite resources is a fundamental trick in computer science. The obvious examples are coalgebraic structures in Haskell, like streams, or stream processing in any language, for that matter (e.g., Java's InputStream).
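To make the stream case concrete, here's a minimal Haskell sketch (the names `nats` and `firstFive` are my own):

```haskell
-- An infinite stream of naturals, held in finite space because
-- laziness only forces the elements we actually demand.
nats :: [Integer]
nats = 0 : map (+ 1) nats

-- Only the demanded prefix is ever computed; the rest stays a thunk.
firstFive :: [Integer]
firstFive = take 5 nats  -- [0,1,2,3,4]
```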
But you can see this everywhere. A fascinating example: a function denotes an infinite entity, its graph of input/output pairs, yet it fits in finite space because its result on any given argument is computed only when you actually apply it to that argument. (Don't confuse this with call-by-need semantics; I mean simply that a function's result for a given input is calculated only when the function is applied to that input.) In fact, the undecidability of function equality is essentially just the observation that you can't compare two infinite quantities in finite time or space.
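A toy illustration of the function-as-infinite-graph view: store a function on the naturals as the infinite list of its outputs, and "apply" it by indexing, so each entry is computed only when demanded (the names here are mine):

```haskell
-- A function on naturals viewed as its infinite graph: the list of
-- all outputs, indexed by input. Laziness computes an entry only
-- when that particular input is actually demanded.
squares :: [Integer]
squares = map (\n -> n * n) [0 ..]

-- Applying the "function" is just a lookup by need.
apply :: [Integer] -> Int -> Integer
apply table n = table !! n
```

Comparing `squares` against some other table for equality would mean comparing infinitely many entries, which is the undecidability point above in miniature.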
Contract checking for higher-order languages uses the same trick. Since the contract system can't test a function to see whether it adheres to a contract in general, it just tests the particular inputs and outputs that appear at each application.
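A rough sketch of that idea in Haskell, loosely in the Findler/Felleisen style; the `checkFlat`/`checkFun` names and the blame strings are my invention, not any real contract library's API:

```haskell
-- A flat contract is just a predicate; the String names whom to blame.
type Contract a = a -> Bool

-- A flat contract can be checked eagerly, on the value itself.
checkFlat :: String -> Contract a -> a -> a
checkFlat who p x
  | p x       = x
  | otherwise = error ("contract violated, blame " ++ who)

-- A function contract (pre, post) cannot be checked eagerly: we wrap
-- the function and test only the inputs and outputs that actually
-- show up, one application at a time.
checkFun :: String -> Contract a -> Contract b -> (a -> b) -> a -> b
checkFun who pre post f x =
  let x' = checkFlat ("caller of " ++ who) pre x
  in checkFlat who post (f x')

-- Example: a function contracted to map non-negatives to non-negatives.
double :: Integer -> Integer
double = checkFun "double" (>= 0) (>= 0) (* 2)
```

`double 3` passes both checks and yields 6; `double (-1)` raises a blame error at the call site, even though the unchecked `(* 2)` would happily have produced an answer.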
This is also how we describe functions in denotational semantics: a function is the limit of an ω-chain of successive finite approximations of the function. The length function, which denotationally is an infinite set of ordered pairs of inputs and outputs, can be simulated by generating only a finite approximation of recursion depth k for any particular input list of length k. And to avoid calculating all such approximations, which wouldn't terminate, we use the Z-combinator, whose eta-expansion delays each unrolling, in a call-by-value language, or we just use a lazy language.
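Here's a sketch of that approximation chain for `length` in Haskell (a lazy language, so a plain `fix` suffices in place of the Z-combinator); the names `lenF`, `approx`, and `len` are mine, and `approx k` is defined on lists of length below k, diverging elsewhere in the role of bottom:

```haskell
-- The functional whose least fixed point is the length function.
lenF :: ([a] -> Integer) -> ([a] -> Integer)
lenF rec xs = case xs of
  []         -> 0
  (_ : rest) -> 1 + rec rest

-- The k-th finite approximation: correct on lists of length < k,
-- and "bottom" (here, an error) on anything longer.
approx :: Int -> ([a] -> Integer)
approx 0 = \_ -> error "bottom: approximation exhausted"
approx k = lenF (approx (k - 1))

-- In a lazy language the limit of the chain is just the fixpoint,
-- unrolled by need rather than all at once.
fix :: (a -> a) -> a
fix f = f (fix f)

len :: [a] -> Integer
len = fix lenF
```

Applying `len` to a list of length k unrolls `lenF` exactly k+1 times, so only the finite approximation that the input demands is ever materialized.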