Jack Dahlgren commented in a thread on Critical Chain as a fad: ". . . several of the concepts are rather old indeed. The 'critical chain' is just the critical path through a resource-leveled CPM schedule. TOC's Buffers and commonly used concepts of contingency management are very similar."
While the concepts may seem familiar, the difference boils down to what is done with them. It blows me away how often I go into Fortune 500 organizations and see project plans that do not take resource dependency into consideration, either in individual project plans or in the decisions to launch projects into an already overextended portfolio. If using the term pushes us to explicitly define all critical dependencies, including resources, then it's a plus.
Regarding the relationship between buffers and contingency planning: when I hear "contingency planning," what usually comes to mind are numbers like 10%, 15%, or maybe 20%, applied to budgets or schedules. A typical project buffer in a well-built critical chain schedule is more likely to account for a full 1/3 of the project's lead time. Contingency plans are usually held close to management's vest; after all, we don't want people to rely on contingency, and we don't really want to use it. On the other hand, we expect that buffers will be consumed to some degree during the life of the project. Murphy's Law has not yet been repealed or overturned, as far as I'm aware. We also expect that buffers will occasionally be replenished, due to the behaviors at the core of project performance -- focus and a "relay race" attitude. Sometimes PMs have been known to run "two sets of books," with contingency defining the difference between external promises and internal schedules. Buffers, if they are to be used effectively, are very visible entities. Project buffer status is the key decision driver in a multi-project system, so it is best if everyone knows what the buffers are and what shape they are in.
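That "full 1/3" figure is not an arbitrary percentage; it falls directly out of the most commonly cited sizing rule. Here is a minimal sketch, assuming Goldratt's 50% "cut and paste" heuristic and purely hypothetical task estimates:

```python
# Hypothetical "safe" (padded) task estimates, in days, along the critical chain.
safe_estimates = [8, 10, 9, 18, 11]

# "Cut and paste" heuristic: schedule each task at half its safe estimate,
# then pool half of the removed safety as the project buffer.
focused = [d / 2 for d in safe_estimates]
removed_safety = sum(safe_estimates) - sum(focused)
project_buffer = removed_safety / 2

chain_length = sum(focused)
total_lead_time = chain_length + project_buffer
print(f"chain {chain_length:.0f}d + buffer {project_buffer:.0f}d "
      f"-> buffer is {project_buffer / total_lead_time:.0%} of lead time")
```

Whatever the safe estimates are, this rule makes the buffer half the length of the focused chain, i.e. one third of total lead time -- an order of magnitude bolder than a 10% contingency line.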
This does not seem "very similar" to me.
Another comment I frequently hear is that feeding buffers are "very similar" to the concept of managing "float" or "slack." The differences start with the fact that buffers are calculated entities that reflect the expected variation in the chains feeding them, while slack and float are simply arithmetic outcomes of arbitrarily starting non-critical chains ASAP, bounded by the start of the critical path. Slack and float may be more or less than is needed to do what feeding buffers are designed to do -- isolate critical path/chain activities from the variation of the paths/chains that integrate with them. There is no purpose to float or slack; they simply are. The purpose of designed feeding buffers is to help keep the critical chain critical, thereby providing a consistent focus for management, project management, and the team. I think we can all agree that focus is a good thing.
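The distinction can be made concrete with a toy sketch. The numbers below are hypothetical, and the square-root-of-sum-of-squares sizing is just one common heuristic, not the only way to size a feeding buffer -- the point is that slack is whatever the network arithmetic happens to leave over, while a feeding buffer is deliberately sized from the feeding chain's own variation:

```python
from math import sqrt

# Hypothetical feeding chain merging into the critical chain.
# Each task: (focused duration, safe duration) in days.
feeding_chain = [(3, 6), (4, 9), (5, 8)]
merge_day = 30  # day the critical chain needs this chain's output

# Float/slack: pure arithmetic -- the gap between the chain's ASAP
# finish and the merge point. It could just as easily have been zero.
feeding_finish_asap = sum(f for f, _ in feeding_chain)
slack = merge_day - feeding_finish_asap

# Feeding buffer: sized from the chain's expected variation (SSQ here),
# then the chain is scheduled back from the merge point behind it.
feeding_buffer = sqrt(sum((s - f) ** 2 for f, s in feeding_chain))

print(f"slack happens to be {slack}d; designed feeding buffer is {feeding_buffer:.1f}d")
```

Here the accidental slack (18 days) is far more protection than the chain's variation warrants (about 6.6 days); with different network geometry it could just as easily be far less. The buffer is sized to the need; the slack is not sized to anything.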
Jack concluded . . . "So it is indeed a fad, but is that a bad thing? If you can use the fad momentum to bring a good project management technique to a team then it can be a very good thing. Certainly TOC is a fairly well thought out, easy to explain package of techniques based on solid common sense principles. Nothing really new or revolutionary, but a palatable package to help the medicine go down."
The newness and the revolutionary stuff may be subtle to those who think deeply about project management, but those who actually live in the world it supports certainly feel like they are in a new and revolutionary situation. I like Jack's mention of "solid common sense principles." The question this raises in most project environments is, "How common is common sense?"