You have all heard disaster stories of computer systems going into production that are over budget, over time, and deliver less than the expected scope. And we have all heard of the new mantra: Business Value/Benefits, Benefits Management, Benefits Realization. This is all good and a step in the right direction to carry us forward from the days of the Iron Triangle of Time, Scope and Cost that some of us may feel is like the fabled albatross hanging round our necks.
BUT - what about new systems, whether automated or manual, that when implemented actually damage the business? You can probably think of some, and if you do, please comment. This is a situation where something is implemented and everything goes to that hot place in a handbasket, sometimes costing more to repair than the original system cost in the first place.
Let's consider the recent implementation of a payroll system in a large organization in a somewhat cold country - the warming of which should not be from the heat generated by systems crashing and burning. The system went in, and it didn't even cover the core functionality of the packaged solution. What was that core functionality? Well, to grossly trivialize it, the system was meant to pay people. What does that mean? In most situations there are categories of people you pay, for example employees (Gross Pay - multiple, sometimes complex deduction = Net Pay) and contractors (Hours claimed X hourly rate from timesheets or invoices = Gross Pay - Deductions = Net Pay). As I said, this is a gross simplification, but I often find this approach serves to raise the real issues to the surface.
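The "grossly trivialized" pay rules above can be sketched in a few lines. This is purely illustrative: the deduction names and amounts are invented, and a real payroll engine is of course vastly more complex - which is rather the point.

```python
# A minimal sketch of the simplified pay rules described above.
# Deduction names and amounts are invented for illustration only.

def employee_net_pay(gross_pay, deductions):
    """Employee: Gross Pay - multiple (sometimes complex) deductions = Net Pay."""
    return gross_pay - sum(deductions.values())

def contractor_net_pay(hours_claimed, hourly_rate, deductions):
    """Contractor: hours claimed x hourly rate = Gross Pay, then deductions."""
    gross_pay = hours_claimed * hourly_rate
    return gross_pay - sum(deductions.values())

print(employee_net_pay(5000.00, {"tax": 1200.00, "pension": 250.00}))  # 3550.0
print(contractor_net_pay(80, 75.00, {"withholding": 600.00}))          # 5400.0
```

Even this toy version surfaces the real questions: where do the deduction rules come from, who maintains them, and what happens when they change mid-year?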
What I am really trying to say here is that the technical part of this implementation was, if not a piece of cake, at least very understandable and relatively easy to implement. I mean, really, have we ever paid people before? Have payroll and benefits systems been flogged by vendors for more than a few weeks? Well, of course! When were computers invented? And before computers, hadn't we been paying employees for hundreds of years? This is not rocket science or virgin territory. It takes me back to when I managed the implementation of upgrades to the MSA Payroll system at Nova Corporation in Calgary decades ago. I think we can all agree that the technical solution is quite simple.
So what caused all the issues? Aside from the obvious questions we won't get into (but someone should) like "Was there a parallel run?" and "Was there a backout plan in case it didn't work?", one has to delve deeper into the underlying issues.
First of all, how was the contracting managed for this job? Was it competitive, such that the job went to the lowest bidder? To that I say, "You get what you pay for." Was there a selection algorithm that prioritized the important things over price, such as "Turnkey solution", "Includes comprehensive training", or "Guarantees the system will not be implemented until it is proven to work across the organization, both technically and organizationally"? These sorts of questions seem to be common sense, yet we all know the rarity of that type of sense, despite its description.
And what type of contract was it? Fixed price? If so, was everything known at the time of the bid so that vendors could make a reasonable financial proposal? Or did they have to load their proposals down with change-order-ready assumptions because they didn't know enough to provide a fixed-price bid?
Or was the procurement based upon the reputation of the vendor with some sort of executive order to hand them the work based on how they had performed in the past, and based perhaps on possibly unfounded assertions that it had to be done this way to avoid a lengthy procurement cycle in a "burning platform" situation?
And where did responsibility lie for successful implementation?
Now we get to the crux of the matter. IT vendors are usually very good at the technical solutions, but not so good at the human side of things - organization and process, fear of job loss, future expectations for advancement and so on. Often this is shuffled off to the client. Ever hear of the "train the trainer" solution? You see it in so many proposals, one might say it has become a standard approach.
So far we have talked about the ease of implementing a technical solution and the methods used by large organizations to choose vendors. Now let's talk about the real subject of this article - Organizational Change Management (OCM).
There are many models for change expressed by organizations like ACMP, PMI and Prosci, and from authors like John Kotter and Jeffrey Hiatt. And these are all excellent approaches to OCM, but I have to ask: Are IT companies reading them? Are they putting deliverables and activities into their proposals to account for the steps required to manage change? Or are they weaseling out of it and transferring the responsibility to budget strapped naive clients? And are clients reading these well-founded missives of change management? If so, are they making them an integral part of a bid request? More to the point, are they willing to pay for it?
Change has to come in a package. First we start with the strategic reason for the change. Why are we making the change? What is the change exactly? Who will support the change at various levels (including the top) in the organization? Who will be involved in making the change? Who will be impacted by the change? Who will see change on the receiving end? Who will be "right sized" out of a job as a result? Who will be given completely new activities to do in their job, and what level of expertise will be required? How will they gain that expertise? How will you know if they have actually gained it? How will the change be woven into the fabric of the organization so that it becomes an integral part of it? How will organization structures be altered as a result of this change? Will there be support for the organizational change? Is a distributed function being centralized? Will there be resistance? How will compliance be achieved? Where will the change be implemented? How will it be implemented? Who? What? When? Where? Why? How? Kipling and his serving men come to mind.
If you ask questions like these, you will be led down the road of good Organizational Change Management, and you will take into account all of the human factors involved in such a change. Choose the right projects, consider how you will enlighten the organization about what is coming, how you will persuade all levels of the organization to take part, how you will instruct them in the change and confirm that there was a positive effect, how you will weave it into the organization so it becomes an expected part of organizational life. And above all, how you will ensure the benefits you so diligently defined when you started all this have been or will be realized.
So, if you think of your next big contract going to a vendor to make a substantial change within your organization, what forces do you have to muster? Organizational support from the top, filtering down through all parts of the organization that are impacted. Clear definition of business benefits. How communication will take place throughout the organization. How quality of the result will be ensured. How the PEOPLE in your organization will want to take part in the change to help you succeed.
Think of your next big change as a package. Strategic planning resulting in the right change being implemented. Selecting vendors who know about the technical machinations required to make your vision a reality, but are also keenly aware of the people side of things and will be there to help you through it if they are not going to do it for you. If your vendor shies away from discussions of communication, awareness, training, checking and operational institutionalization.... run in the opposite direction!
Make sure that the entire picture has been painted before you try to make your vision, your change, a successful reality.
Mike Frenette, PMP, I.S.P., CMC, SMC is a very experienced project manager who likes to post on controversial topics. For his paid job, he teaches Agile and PMP certification courses through his company, CorvoProjectManagement.com.
The Core Committee spent a lot of time and effort to produce it, so we owe copious amounts of gratitude to them, and to the twenty-eight content reviewers, the PMI Standards Program Member Advisory Group and the three production staff.
If you haven’t already downloaded it, click here.
You’ll find that this 56-page guide (not including the index and appendices) is written in a familiar way with textual descriptions, contextual and activity diagrams. As stated in the introduction, this new guide serves as a bridge between the PMBOK ® Guide and the recent Business Analysis for Practitioners: A Practice Guide. The PMBOK Guide addresses good practices for requirements management, and the BA for Practitioners Guide describes what a BA does and how to apply requirements development and management skills to project tasks. Intended for PMs and anyone doing requirements work, the Requirements Guide defines processes for requirements development and management.
What is a Requirement? According to PMI, it is “A condition or capability that is required to be present in a product, service, or result to satisfy a contract or other formally imposed specification.”
Requirements Management is about establishing a baseline and then ensuring it is traced (did the project implement everything it was supposed to?), managed through change control (if anything changed from the baseline, was it done in a controlled and approved way?) and updated (did the desired product, service or result of the project change, and if so, were the requirements related to the change appropriately captured in a new baseline?).
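The three parenthetical questions above amount to a simple traceability check. Here is a hypothetical sketch, with invented requirement IDs; this is not anything prescribed by the Guide, just the idea expressed in code.

```python
# Hypothetical traceability check: every baselined requirement should be
# implemented, and anything added since the baseline should be an
# approved change. Requirement IDs are invented for illustration.
baseline = {"REQ-1", "REQ-2", "REQ-3"}          # the approved baseline
implemented = {"REQ-1", "REQ-3"}                # what the project actually built
approved_changes = {"REQ-4"}                    # changes that went through change control
current = {"REQ-1", "REQ-3", "REQ-4"}           # the current requirement set

missing = baseline - implemented                        # not traced to the build
uncontrolled = (current - baseline) - approved_changes  # changed outside change control

print(sorted(missing))       # ['REQ-2']
print(sorted(uncontrolled))  # []
```

A real traceability matrix carries far more (sources, tests, status), but the core questions reduce to set differences like these.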
Requirements Development involves eliciting and identifying requirements, planning, analyzing, documenting and specifying requirements, and the necessary validation and verification.
The activities described in the Guide, paraphrased:
As you might expect, the Guide describes all interactions with the five Process Groups and ten Knowledge Areas. The types of requirements defined are probably familiar to most people – those required by the business, usually expressed at a high level, those required by stakeholders, solution requirements, both functional and non-functional, transition requirements, project requirements, quality requirements and program requirements.
Techniques for eliciting requirements are also in the guide, comprising interviews, workshops, focus groups, brainstorming, questionnaires/surveys, analysis of documents and interfaces, prototypes and observation.
The Guide tells us that good requirements are unambiguous, consistent, correct, complete, measurable, feasible, traceable, precise and testable. In an adaptive life cycle, they must be independent, negotiable, valuable, estimable, small and testable.
It also delves into backlog management and prioritization, and various models, including Scope (context, ecosystem, goals/objectives, features and use cases). It discusses functional decomposition and feature trees, and various process models (process flow, use case, user story) and rule models (business rules, decision trees/tables), and a favorite of mine, data models (ERDs, data flow, data dictionary and state diagrams/tables).
The Guide draws our attention as well to interface models, that is, what occurs between systems and/or users, considering report definition, flow of data between systems, user interfaces, and tools like wireframes, and the tabular N2 model.
There is much value to be found in this Guide. I’ve only very briefly touched on bits and pieces of it here. Armed with it, the PMBOK Guide and the Business Analysis Practitioners Practice Guide, a project team can’t go wrong when it comes to translating business needs to appropriately detailed requirements that can be traced, confirmed and verified - and, of course, translated into that infamous product, service or result required by the business.
Go get it while it is still free!
Early in my career I was fortunate to have a mentor who was very data driven. He believed that data in its purest form would help describe the processes that you might want to implement in a system, but that process analysis alone would not properly define the data and, in fact, might very well define it improperly due to the insular nature of looking at some processes and not all, a necessity given scope and budget constraints.
When I think about this at a high level in terms of typical data entities and processes, I have to believe that the metrics would support such a conclusion. If we look at any particular organization, the number of processes operating on the same, similar or related data will be very high, yet the number of entity types the organization deals with will be very low in comparison.
So let's look at a very simple example, a basic course registration system. Here are the entity types we'll deal with: Student and Course.
Here are some business rules around the data:
From this simple example, I conclude that we have at least the following potential processes:
.... and so on ... and so on... and so on...
All of the processes I listed come directly from the two entities, student and course, along with the attributes each has and the relationships they have, one with the other. I am sure with a little extra thought, the list could be doubled. I am also sure that some of the processes listed have nothing to do with a student registration system, and more to do with other systems, and so could serve to put some rope around both the data (and its attributes) and the processes to be able to plan multiple projects as part of a program.
Furthermore, since we all know there is a many-to-many relationship between Courses and Students, we are probably missing an Entity Type - one that is fairly obvious anyway - something called "Course Registrations" which becomes a one-to-many relationship with each "Kernel" entity type, resolving the many-to-many relationship by adding the "Associative" entity type. And we won't even get into the difference between registrations and transcripts, whether things like marks even belong in this model, etc.
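The model described above can be sketched directly in code: two kernel entity types, with the many-to-many between them resolved by the associative "Course Registration" entity type. The attribute names here are invented for illustration.

```python
# Sketch of the data model described above: Student and Course as kernel
# entity types, with CourseRegistration as the associative entity type
# resolving the many-to-many relationship (one-to-many from each kernel).
# Attribute names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Student:
    student_id: int
    name: str

@dataclass
class Course:
    course_code: str
    title: str

@dataclass
class CourseRegistration:
    student_id: int     # one student...
    course_code: str    # ...registered in one course

registrations = [
    CourseRegistration(1, "BA101"),
    CourseRegistration(1, "PM200"),   # one student, many courses
    CourseRegistration(2, "BA101"),   # one course, many students
]

courses_for_student_1 = [r.course_code for r in registrations if r.student_id == 1]
print(courses_for_student_1)  # ['BA101', 'PM200']
```

Note how the associative entity is where attributes like registration date (or, arguably, marks) would live - they belong to neither Student nor Course alone.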
I obviously invented a lot of the processes I listed, and they would likely be very different if I actually had stakeholders to help me along (or would they?). Working with stakeholders might reveal new business rules, such as those around student registration privacy or restrictions on instructor performance data, or processes that are missing, such as being able to tag other students as friends and see which courses they have registered for.
A very simple example, of course, but from two entities (which eventually became three), I was able to derive at least 18 processes (some actually have multiple processes per line), and I did that as quickly as I could type. Confirmation with the stakeholder would take much longer, of course.
Trying to create a long list of processes and then deriving the data from them, rather than letting the data drive the derivation of processes, is, I feel, much more complex, time consuming and subject to errors and omissions.
What do you think? Are you process driven, or data driven? Were you one and became another at some point in time?
Before you answer, remember the CRUD (or CUDDL) I mentioned in a previous post. Don't know what that is? Look up "Of CRUDs and CUDDLs" :)
People often ponder whether requirements for an IT project should already be in place before a project begins, whether they should be detailed at the front end of a project, or whether they should be refined as the project progresses. The answer is not simple, and has a lot to do with the sort of project you are running.
We would all likely agree that no project can even be chartered without some level of requirements being in place - high level business requirements at the very least. But it is a rare project that will kick off to the resounding thump of a multi-hundred page business requirements document. Most waterfall-type projects have the creation of the BRD as an early step in the project. Most Agile projects will have high level requirements, often in the form of user stories. And then there is everything in between.
But there's the rub. What approach and methods are being used on your project? Is it waterfall? Is it one of the Agile approaches? Can requirements change while the project is in motion? Might such changes alter the direction of the project?
I would hypothesize that requirements can and do change on any project, regardless of the project methods being used. Remember the old requirements freeze? "You must sign here, and we will build what you have asked for. No changes will be permitted from now until project end." Seems almost hilarious now, doesn't it? The only freeze that would really be in effect is the one that freezes you out of any more work with that client.
So - still we are left with the question, "When do requirements need to be completed?" The question itself might be a tad banal. Before we design, before we build, before we test... requirements must be clear. Do we need a 50-pound signed-off document early on? Maybe... if we are designing and building software to run a space shuttle. But do we need it if we are only configuring an ERP package? Or designing a web site? Or might we need only story elaboration, a bit at a time, when we have a fully engaged and knowledgeable Product Owner?
As with most questions in this field, the "It depends" answer pops to mind. It depends on so many factors, including contract types, that the answer will shift like the sand dunes on the Sahara.
It's getting cold in here. Must be those frozen requirements.
Back early in my career, as I moved from a programming role to an analyst role, I was fortunate to be funded for training by the utility company I worked for at the time. The training was conducted by Tom DeMarco of Ed Yourdon and Associates, and had to do with analytical techniques dubbed structured analysis - mainly data flow diagrams (DFDs) and data structure diagrams (DSDs). DSDs fell by the wayside, but DFDs served me well over the years. The basic premise was to model a client's environment using sources, data flows, processes, data stores and sinks. These fanciful terms could be used in diagrammatic form to describe the "current physical" and "current logical" environments, and then the "future logical" and "future physical" environments.
There were many rules and methods one could employ, such as "structured English" to describe processes, and ensuring that a process transformed the input data flow into the output data flow. We learned that if the output data flow was the same as the input data flow, well then, there was no process at all, was there? And if there was something in the output data flow that was not input to the process, there was something awry with the process. Data flows represented data in motion and data stores represented data at rest. With all that flowing going on, wasn't it thoughtful of Ed and Tom to give data a break now and then?
At the time, I was a young buck looking for the silver analysis bullet to help advance my career by doing a stellar job, and I thought this was the ultimate way to model current state and describe future requirements. It worked very well over the years, and still does, for that matter, although I have to admit I may have become somewhat less rigid about applying Ed and Tom's rules.
After a few years with the utility company, I moved half way across the country to join an oil and gas company. I learned about data analysis from a company called LBMS, which if memory serves, stood for Learmonth Burchett Management Systems, a company from London, England. What they were doing in Houston is anyone's guess.
Like the process modeling I learned about from Ed and Tom, data had both logical and physical aspects. I came to the conclusion that data at rest was actually far more important than data in motion, and became somewhat of a data zealot, tossing aside my previous process modelling soapbox for a new data modelling podium, waxing eloquently (well, maybe just waxing) about logical data models, entities, attributes, relationships and obscure things like fourth normal form.

A slightly older buck by that time, I moved into the realm of data management, database administration and data-driven applications development, even helping to develop a tool that would generate code from a data dictionary based on data entity types and their attributes. We called it the prototype system, and it was truly impressive (if I may say so myself), generating a menu structure with "CRUDs" for each entity type (Create, Read, Update, Delete). We didn't get as far as generating reporting modules for each relationship between entities in the data model, but we could have. A colleague I worked with subsequently insisted on calling these generated modules CUDDLs, not only because we added list functionality (hence the L), allowing the display of all instances of the entity, sorted in multiple ways as desired by the user, but also because CUDDL sounds a lot nicer than CRUD. I have to agree. Go ahead - pronounce both and decide for yourself.
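The heart of that generator can be sketched in a few lines: given the entity types from a data dictionary, emit the CRUD modules plus the "L" (list) that turned CRUD into CUDDL. This is a toy reconstruction, not the original tool; the entity names are borrowed from the earlier course registration example.

```python
# Toy sketch of a data-dictionary-driven generator: for each entity type,
# produce the menu of CRUD modules plus the List module (the "L" in CUDDL).
# Not the original system - just an illustration of the idea.
def cuddl_modules(entity_types):
    operations = ["Create", "Read", "Update", "Delete", "List"]
    return [f"{op} {entity}" for entity in entity_types for op in operations]

menu = cuddl_modules(["Student", "Course"])
print(len(menu))   # 10
print(menu[0])     # Create Student
```

The real tool went further, of course, driving the generated screens from each entity's attributes in the data dictionary, but the pattern is the same: the data model dictates the modules, not the other way around.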
I have seen and used many more modelling techniques over the years including use cases, user stories, affinity diagrams, and so on. I am glad to see a mix of both the old and new techniques in current materials, such as PMI's newly minted Business Analysis for Practitioners: A Practice Guide, which you can still download for free at pmi.org.
So what is the message? I guess there are a few. Realize that many modelling techniques are necessary to do a good job in your analysis role - so learn many of them. If you must be a zealot about something, choose modelling. Don't choose a specific hill upon which to die. It's not a war, and there are many hills worth saving.
Now, does anyone know what you call a ... shall we say ... seasoned buck?