When I took COMP210 from Matthias as a freshman at Rice, I didn't understand why he even bothered pointing out the notion of an accumulator, or why it had such a fancy name. If you need to give more information to another procedure, you give it an extra argument -- duh. Why all the commotion?
There are many reasons why accumulators are so fundamental. First of all, accumulator-passing style--a "style" is what the FP cool kids call a design pattern--is a functional way to simulate mutation. It shows up in programming-language semantics, and functional programs use it to preserve referential transparency. The one-sentence justification for functional update: it makes a function's dependencies explicit, which makes the function's invariants clearer, and generally means you can't cheat and mistakenly call the procedure in an invalid state.
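To make that concrete, here's a small sketch (mine, not from the original post, and in Python rather than Scheme for accessibility) contrasting imperative mutation with accumulator-passing style:

```python
def sum_imperative(items):
    total = 0            # a mutable variable
    for x in items:
        total += x       # mutation at each step
    return total

def sum_acc(items, total=0):
    # The "mutation" is simulated by passing an updated accumulator
    # to the recursive call; `total` is now an explicit dependency.
    if not items:
        return total
    return sum_acc(items[1:], total + items[0])
```

Both compute the same sum, but in the second version the running total is visible in the procedure's signature: there's no hidden state you can accidentally observe half-updated.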
Once you become comfortable with accumulator-passing style, you start being able to write iterative loops that are more general and natural than for or while loops, and avoid nasty dependencies on statement order that arise from imperative update. This is one of those things that just drive you crazy after a while in C. What's really nice about the "named let" construct in Scheme is that you can easily write tail-recursive procedures that compile to loops--you can essentially think of a named let as a "while with arguments."
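Here's a rough rendering of that "while with arguments" idea (my sketch, in Python, which lacks both named let and tail-call optimization): Scheme's `(let loop ((i n) (acc '())) ...)` becomes a while loop that rebinds all its loop variables simultaneously, one rebinding per "call."

```python
def iota(n):
    # Build the list [0, 1, ..., n-1].
    # i and acc play the role of the named let's arguments;
    # each iteration is one simultaneous rebinding of both.
    i, acc = n, []
    while i > 0:
        i, acc = i - 1, [i - 1] + acc
    return acc
```

Because both variables are updated in a single tuple assignment, there's no way for a statement-ordering bug to let one update observe the other half-done--the same property the tail call gives you in Scheme.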
The next step, once you've mastered accumulator-passing style and named let, is to understand fold. I know, it makes Guido's head hurt. But never mind the haters. Accumulators are a general pattern of building up a result by successive approximations. And a subset of this pattern is when the construction of the result is driven by the deconstruction of an input, particularly a list. It happens that a whole bunch of the time, we build up some result by adding to it the result of computing something for every item in a list. For example, imagine a Scheme interpreter that adds a bunch of variable bindings to an environment: it starts with an initial environment, and one at a time it goes through the list of new bindings, each time generating a new environment with the added binding.
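The environment example can be sketched directly as a fold (my sketch, not the post's code; the names `extend` and `bindings` are mine, and the environment is just a dict rebuilt functionally):

```python
from functools import reduce

def extend(env, binding):
    # Return a *new* environment with one added binding;
    # the old environment is left untouched.
    name, value = binding
    return {**env, name: value}

initial_env = {"x": 1}
bindings = [("y", 2), ("z", 3)]

# The fold walks the list of bindings, threading the growing
# environment through as the accumulator.
final_env = reduce(extend, bindings, initial_env)
```

The deconstruction of `bindings` drives the construction of `final_env`, one binding at a time--exactly the accumulator pattern specialized to a list input.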
And here's the kicker--just for fun, once you've become a master of functional folds, you are primed to understand fixed-point constructions. The very idea of successive approximations to a result is the key to a fixed-point algorithm; the difference is what drives the recursion. In a fold, it's the simple deconstruction of a list that determines when to stop, whereas in a fixed-point recursion, it's when the approximations stop changing the result that the algorithm terminates.
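A minimal fixed-point loop makes the contrast visible (my sketch; the tolerance test is one standard way to decide that the approximations have "stopped changing"):

```python
def fixed_point(f, guess, tol=1e-10):
    # Keep improving the approximation until it stops changing.
    # Unlike a fold, nothing here deconstructs an input list:
    # the convergence test alone drives the iteration.
    while True:
        nxt = f(guess)
        if abs(nxt - guess) < tol:
            return nxt
        guess = nxt

# Classic example: the golden ratio is a fixed point of x -> 1 + 1/x.
phi = fixed_point(lambda x: 1 + 1 / x, 1.0)
```

Each pass through the loop is one more successive approximation; the stopping condition is the only structural difference from the fold above.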
Finally, once you can write fixed-point algorithms while standing on your head and whistling Jerusalem, you're ready to understand the Y combinator. Remember how when you first learned to write recursive functions, you had to teach yourself not to try to imagine the entire nested computation, but just to trust the recursive call to do its job? Well, you'll never be able to imagine an infinite fixed-point construction, so here this mental laziness is crucial.
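For the brave, here's a sketch of that trust-the-recursion idea in code (mine, in Python): since Python evaluates arguments eagerly, this is the applicative-order variant of Y, usually called the Z combinator.

```python
# The Z combinator: ties the recursive knot without any function
# ever referring to itself by name. The eta-expansion
# (lambda v: x(x)(v)) delays evaluation so eager Python doesn't loop forever.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial, with the recursive call supplied as `self` by Z
# rather than written into the body:
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

You can't picture the infinite tower of approximations Z builds; you just have to trust that `self` behaves like the function being defined--the same mental laziness that got you through your first recursive function.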
When you understand what successive approximation has to do with all of these concepts, you will have achieved enlightenment.