“The mind, once stretched by a new idea, never returns to its original dimensions.” – Ralph Waldo Emerson
In this post, I interview Daryl about what “nimble” means, why it is a strategic imperative, and why it seems to be so difficult for organizations to get traction with it.
For full disclosure, I work with Conner Partners, so I do have a bias. However, Daryl’s work of 40+ years speaks for itself—and you can make up your own mind. Please do share your comments below.
There are many definitions of “agile” and “nimble” in the business world. I know that you have a very precise meaning in mind. Would you share it?
Sure. The definition I use is “the organization’s sustained ability to quickly and effectively respond to the demands of change while delivering high performance.”
Some would say, “As long as you win the race you are first,” but I view nimbleness as a sustained, competitive, strategic advantage. It’s not enough just to ask, “Did we accomplish more change than our competitors this year?” Becoming truly nimble requires looking at the amount of energy that goes into accomplishing those changes and saying, “Was it optimized?”
In his own blog series, Nimble Organization, Daryl explores this further. In post 4 of that series, “Characteristics of Nimble Execution”, Daryl outlines the characteristics of organizations that are nimble at strategy execution:
As he notes, two components work together—environment and application:
How important is nimble for leaders today?
I published “Leading at the Edge of Chaos: How to Create the Nimble Organization” in 1998 and I thought then that I was late to the nimble game. But that was wrong. My first book, “Managing at the Speed of Change” (published in 1992), was about understanding how to implement the changes you have in front of you; “Leading at the Edge of Chaos” was about how to prepare for changes you can’t even envision.
The responses to the books, and many of the subsequent conversations I’ve had since their publication, have been pretty consistent. There is an overwhelmingly positive affirmation of the idea of nimbleness. Leaders often say to me, “That’s exactly right. That’s what we have to do.”
I then make the point with them that, if you want your organization to be nimble, you have to treat executing change as a strategic capability. For example, it needs to be something you and your board talk about and take action on. This is when their interest in the idea of nimbleness starts to taper off. When it comes down to actually creating nimble DNA, I’ve found that very few leaders will invest the energy and mindshare that is required. They are so focused on the current change-related challenges that they can’t pick their heads up long enough to attend to a longer view.
Even though I’ve had many such conversations with a wide range of executives, at this point in the discussion, I hear similar views: “Look, we are so overwhelmed with our existing portfolio of changes that you are going to have your hands full just teaching us how to deal with that. Isn’t it possible, Daryl, that if we manage this portfolio better with your help, we’ll automatically be more nimble? Can’t we leave it at that?”
My response is always, “Yes, you probably will be more nimble to an extent, but don’t confuse that with deeply embedding nimble DNA—at the level of personal mindsets and organizational structure—enough for people to be able to handle ongoing transformation as the norm. Will you be better prepared for new transitions after executing the changes you have before you? Of course you will, but that’s different than putting a stake in the ground and declaring, ‘It is imperative to become more intentional about being nimble… On my watch, this is going to happen.’”
I have unsuccessfully made the case for years that being nimble is a crucial strategic advantage, not a luxury. Not that leaders aren’t responsive to the general notion, but actually following through with all the hard work involved in getting there is often not as well received. Getting a leader’s attention, interest, and enthusiasm isn’t that hard, but not many follow through with what it takes to actually build an enduring legacy of nimble operations. They almost always get diverted by the next crisis.
So, why don’t more organizations focus on becoming nimble?
There are many reasons, but one is that they fear they will have to stop what they are doing and pick up a separate task called nimble development.
That’s not really how it works, however. Shaping a nimble culture requires that leaders still do everything normally required of them, but they do it with the clear intention of fostering a nimble enterprise. For example, if an organization is seeking new talent anyway, why not hire people who have a predisposition for operating in a nimble fashion? Leaders know (or can learn) what those capabilities are and can incorporate a filter for nimble predisposition into their hiring criteria. Instead, I typically meet with leaders one week when they declare they are ready to move ahead with fostering a nimble culture (“We’re doing this!”), but by the next week, they meet with the board and there is a new customer service crisis or some other issue and all of their attention goes to that.
I’ve been fortunate over the years to work with several senior executives who were serious about architecting a nimble culture, so I’m not saying it never happens—I’m saying it is rare.
Does that mean organizations are not good at juggling multiple strategic priorities?
I think it’s more the reverse of that. They think they are so good at pursuing a huge number of priorities that they believe they can just add nimbleness to the ever-growing list of initiatives their organization must then endure.
More to come
Daryl shared more insights in the interview than can be covered in a single post, including thoughts on how leaders can manage their multiple strategic imperatives and stay focused on building organizational bandwidth and capability for “quickly and effectively responding to the demands of change while delivering high performance” (i.e., a nimble culture).
Thoughts? Reactions? Please share in the Comments section.
Getting something out of this? Please do share with your network by forwarding this post over email or over social media using the buttons. Thanks!
The “70% failure rate” has been exploited enough already. It’s time to stop beating this dead horse and give it a decent burial.
I get why it resonates with most of us. Strategy execution is hard. Some efforts fall short of objectives and some fail outright. The more transformational the change, the more likely it is to fail in some way. We all abhor failure. Any failure feels like too much. It feels like 70%.
And I get why it is used—fear is a standard sales technique (I am recovering from this nasty habit myself). You know the drill: convince the leaders there is a high risk of failure unless they follow a different prescription. This is used internally to expand execution budgets as well as externally to tout execution services and solutions. Notwithstanding the intentions of either internals or externals, the origin of this legend was never a real statistic. Yes, you read that right. Where did the “70%” come from then? More on this below.
Furthermore, there is better data. In August 2013, The Economist published the first third-party (can anyone say “objective”) survey, “Why good strategies fail: Lessons for the C-suite.” (http://www.pmi.org/~/media/PDF/Publications/WhyGoodStrategiesFail_Report_EIU_PMI.ashx) How about we all do a three-point turn? What are the right questions to ask or conversations to have around success/failure?
First, how did we get here?
As I said above, as far as I can tell, the legendary failure rate is not an actual statistic.
Since I last wrote about this topic, I was referred to an excellent article that attempts to track down the source of this so-called data. The abstract of “Do 70 Per Cent of All Organizational Change Initiatives Really Fail?” (http://www.tandfonline.com/doi/abs/10.1080/14697017.2011.630506#.UoE8F_lJOHc) (Journal of Change Management, Mark Hughes, 2011) hits it hard: “This article critically reviews five separate published instances identifying a 70 per cent organizational-change failure rate. In each instance, the review highlights the absence of valid and reliable empirical evidence in support of the espoused 70 per cent failure rate.”
Let’s review that: “the absence of valid and reliable empirical evidence.” Wow. That’s pretty amazing, don’t you think?
Hughes reviews each of the five instances—you will probably recognize some of them:
For each instance, Hughes provides the original reference verbatim and also provides some context. I highly recommend this article.
The bottom line is that we don’t know, from the sources cited, what the real failure rate was at that time. We only know what a handful of pretty smart and experienced consultants / academics estimated it was based on their limited exposure.
To be fair, many academics and consulting firms have run surveys. Some consulting firms even run them annually. Their data comes pretty close to 70%. The one that comes to mind is “Success Rates for Different Types of Organizational Change,” (http://onlinelibrary.wiley.com/doi/10.1002/pfi.4140410107/abstract) Martin E. Smith, Performance Improvement Journal, International Society for Performance Improvement, 2002.
This is a “meta survey.” In other words, Smith analyzes 49 surveys and derives median success rates (the median failure rate being, obviously, the remainder). What is also useful about this survey is that it breaks out success rates by type of change (e.g., restructuring has a 54% failure rate; culture change has an 81% failure rate). With that, one can arrive at a median 70% failure rate. So maybe one can argue that there is a real 70% failure rate, or that there was one sometime before 2002 (11 years ago).
In March 2013, The Economist’s Intelligence Unit, sponsored by The Project Management Institute, initiated a survey of 587 senior executives globally and then undertook a series of in-depth interviews with additional executives and academics.
The result was a current and objective (non-commercial) touch-point on what executives believe. As noted on the PMI website:
“Key findings include:
So there it is. These executives believe the failure rate on “strategic initiatives” is 44%.
While that’s a fair distance from 70%, it still represents a very high risk.
So let’s review: we have looked at five original references that turned out to be opinions of thought leaders, a meta survey of 49 sources, and a survey of 587+ executives.
It seems odd, in this light, to realize that this is actually not primary research (i.e., real-time tracking of actual change initiatives on their performance against stated objectives).
This would be the real test, wouldn’t it? One hopes that inside of organizations there might be an appreciation for tracking success/failure and for improving strategy execution over time. Project management protocol calls for a Post Implementation Review and Lessons Learned, but this is rarely converted into organizational learning or harvested into enterprise best practices.
Strategy execution is hard, damn hard. It often fails or falls short.
We may disagree on the degree of risk, but no one can predict the probability of success or failure for a particular initiative or organization based on a general survey.
What if we all at least started this conversation from more relevant space? For example, “What is this organization’s experience with strategy execution? Let’s have a look at the data and see how we can help you improve your results.”
Maybe organizations would begin to think about establishing strategy execution as an organizational competency and managing it across the organization, and in working to improve their performance would drive better results to the bottom line. Now, that would be good for all of us.
Earlier this fall, I facilitated a discussion entitled “What if it’s not true that ‘70% of change initiatives fail’?” in the Organizational Change Practitioners Group on LinkedIn, and many experienced and insightful practitioners chimed in—some agreeing and some disagreeing. Around the same time, Jennifer Frahm published an excellent post entitled “70% of change projects fail: Bollocks!” (http://conversationsofchange.com.au/2013/09/02/70-of-change-projects-fail-bollocks1/#!). Much of my thinking above has been informed by their contributions.
“In theory, theory and practice are the same. In practice, they are not.” – Albert Einstein
Kids these days have a phrase: “FAIL.” It means something like “epic failure” and describes scenarios so common or standard that failing at them is all the more astounding. A huge pop culture industry has evolved around those occurrences that are particularly funny. It started with shows like “America’s Funniest Home Videos,” and now “Ridiculousness” (http://en.wikipedia.org/wiki/Ridiculousness_(TV_series)) takes it to the next level.
Seems to me that someone could make a show around “FAIL” in organizational strategy.
How can we all get out of this fail loop?
This little rant is inspired by an excellent post from Bill Fox called, “Jump, Rinse, Repeat. Why do we keep implementing change like this?”(http://leadchangegroup.com/jump-rinse-repeat-why-do-we-keep-implementing-change-like-this/).
Bill starts with, “It’s like going to the edge of a 50-foot cliff and jumping when all you’ve witnessed are others ahead of you jumping away. As a result, you don’t see what they did before they jumped, what the landing area looks like, or what happened when they landed!”
Why do we keep implementing change like this? The cliff-jumping metaphor is a great one and it makes my response obvious: Because it's more fun and there are relatively few consequences. I am only being a little glib. Having spent 20+ years in strategy execution as an internal and external consultant in several dozen organizations, I have come to believe that there are systemic issues, as in “built into the systems and culture of the organization.”
These systemic issues are bigger than individual leaders alone, and changing the system seems like such an impossible challenge that the only option is to keep jumping within the system.
The reality in the back rooms
In back rooms, people will tell you: “This is the third time we are taking a run at this strategy,” “People may get fired, but that’s rare,” or “Those who get fired are the ones charged with executing the strategy, not those who designed it.” There is little connection between the ideal strategy for the organization and the feasible strategy for the organization.
Those in execution are always preparing the contingency decoys. They have to—it’s a survival technique. In other words, the initiative fell short because competitor x changed its strategy, there was change y in the external environment (economic collapse, an FX-rate blindside, etc.), or supplier z failed (delivered late, ran over budget, or shipped an inferior product).
The real problems are more like:
Three suggestions to “jump” the system
Stop leaping from strategy into execution
I know, it feels like plenty of time is spent upfront, yet we still fail, so stay with me.
Invest more time in better planning. Expect that this learning investment can be amortized over several years.
Try a few different things, like:
Jettison “creative tension”
The notion of “creative tension” suggests that when executives have to compete (for ideas, resources, power, etc.), they perform better. This is a fallacy.
The only way to succeed going forward is to collaborate—it produces the best ideas and optimal cooperation within the whole organization. To some extent, this can be accomplished by alignment and tools like Balanced Scorecard but, apparently, there are limitations.
Internal competition is toxic to collaboration. Collaboration is a major culture change; invest in it appropriately.
Stop firing people for failing—fire for incompetence
Failure and incompetence are qualitatively different, but they can be hard to tell apart. You want to keep competent people who failed, so that the organization does not make the same mistake next time.
If we always do what we always did…
The obvious answer is this: if we want different results, we have to do different things.
Alas, what may be obvious is not simple.