Project Management 2.0

New technologies, concepts, and Web 2.0 tools are popping up everywhere. How can you use them to help your project team collaborate, communicate, or just give your project an extra boost?

Critical Path Analysis – Is This How You Do It?

Categories: Estimating, Workshops

Situation: You need a quick description of how you can leverage CPA on your project.

Our Techniques Wiki offers a library of commonly used approaches to tactical challenges on your project. Critical Path Analysis is a pretty widely used technique, so I thought it would be useful to highlight it here and get your take on our "official version". The beauty of posting it as a wiki is that it's community-driven and refined. If you think it should change, you can change it.

Critical Path Analysis

'An analysis technique used to identify the critical (essential) and non-critical (non-essential) activities associated with a business process or work plan, and the amount of float (slack) associated with each non-critical activity.' The result of the analysis defines the critical path: a sequential set of related and essential steps that comprise a value stream or work plan. It is the longest path, in terms of duration, that passes through all the critical steps of a value stream or work plan, and it determines the fastest time to completion. The results of critical path analysis are depicted graphically in a Critical Path Diagram.

Applications

  • To identify the critical and non-critical activities associated with a business process or work plan.
  • To identify non-critical steps which can be eliminated, at minimum cost, to improve the value stream or work plan.
  • To identify the amount of time an activity may be delayed without affecting subsequent, dependent activities or the ending time or date.

Procedures

  • Identify all steps in the business process or work plan.
  • Document the steps in the sequence in which they occur.
  • Identify the relationships between steps, and document the dependencies between them.
  • Determine the earliest and latest allowable start and end time or date at which each step can occur without delaying the next step and, subsequently, the whole value stream or work plan.
  • Calculate the float for each step by subtracting the Early Start time or date from the Late Start time or date, and assign that float value to each task and sub-task. Critical tasks will have zero float; non-critical tasks will have a positive float value representing slack time.
  • Using the information collected above, identify the critical and non-critical tasks and sub-tasks and determine the duration of the value stream or work plan (a minimal sketch of these calculations follows this list).
  • Chart/document the critical path.
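
To make these steps concrete, here is a minimal Python sketch of the forward pass, backward pass, and float calculation. The task names, durations, and dependencies are made up for illustration; a real plan would come from your work plan or project tool.

```python
# Minimal critical path sketch: forward/backward pass over a task graph.
# Task names, durations, and dependencies are hypothetical.
tasks = {            # task: (duration, predecessors)
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: earliest start/finish for each task.
es, ef = {}, {}
for t in tasks:  # assumes predecessors are listed before their successors
    dur, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_end = max(ef.values())  # fastest possible completion time

# Backward pass: latest start/finish without delaying the project.
ls, lf = {}, {}
for t in reversed(list(tasks)):
    dur, _ = tasks[t]
    succs = [s for s in tasks if t in tasks[s][1]]
    lf[t] = min((ls[s] for s in succs), default=project_end)
    ls[t] = lf[t] - dur

# Float = Late Start - Early Start; zero float marks the critical path.
for t in tasks:
    slack = ls[t] - es[t]
    print(t, "float:", slack, "(critical)" if slack == 0 else "")
```

Here the critical path is A-C-D with a total duration of 8; B carries 2 units of float and is the candidate for elimination or delay.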

Instructions

The essence of critical path analysis is to examine all options for reducing the time required to complete the critical steps in a business process or work plan. Tasks, their durations, and their dependency relationships determine the critical path. When applied in business reengineering, critical path analysis addresses issues of quality, efficiency, and cost reduction by standardizing work efforts and eliminating unnecessary steps to reduce the time required to satisfy the customer of the value stream. In project planning, it is applied to determine all options (duration, cost, resource requirements) for reducing the work plan or project duration, and for determining the amount of time an activity may be delayed without affecting subsequent, dependent activities or the project end date. When used in conjunction with Cycle Time Analysis and Dependency Analysis, Critical Path Analysis is an effective tool for measuring the quality of the business process or work plan: analyzing the steps in the path, measuring inefficiencies, and determining which steps can be eliminated to improve a business process redesign or reduce the time required in the work plan.

Critical path analysis begins with the identification of all activities (tasks and sub-tasks) which are part of the business process or work plan. Document the tasks and sub-tasks in sequential order; documentation can be prepared using various diagramming techniques such as block diagrams, work flow diagrams, etc. (see Work Flow Diagramming), in a simple list, or using an automated project management tool for creating work plans.

Once all tasks and sub-tasks have been identified, identify the relationships between the tasks and sub-tasks, using Dependency Analysis. Determine which tasks and sub-tasks are dependent upon one another and establish a predecessor or successor relationship. Document these relationships on the diagram, list, or work plan.

Determine the critical and non-critical activities by determining the float associated with each task and sub-task. Float represents slack time: the amount of time an activity may be delayed without affecting succeeding activities (free float) or the ending duration or date (total float). Critical tasks should have zero float, as there should be no slack time associated with them. Critical tasks must be accomplished sequentially and promptly; when a critical task is delayed, the completion and duration of the business process or the end date of the project is affected. Non-critical tasks and sub-tasks have a numeric float value associated with them, as they can absorb slack time without affecting the end result. This value (e.g., a float of 1 = slack of 1 unit of whatever time measure is being used) represents the amount of delay that can occur without affecting the duration of the business process or work plan. To determine the float associated with each task or sub-task, define the early start and early end time or date for each (i.e., the earliest possible time each task and/or sub-task can begin and end). (See also Cycle Time Analysis.) Calculate the float for each step by subtracting the early start time or date from the late start time or date.

Chart the critical path by identifying all critical steps (those with zero float). The path through all steps or events that have zero float represents the critical path. The non-critical steps (those with associated float) are candidates for elimination from the value stream or work plan.

Although resource constraints do not affect true critical path calculation, critical path analysis may be followed by resource planning, using resource manipulation techniques such as Resource Allocation, Loading, and Leveling to improve project schedules and end dates.

Posted on: August 14, 2013 11:30 AM | Permalink | Comments (0)

Turning Estimating on its Head

Situation: You need to take a fresh approach to estimating.

Estimating tools are always interesting to understand because they reflect what their makers feel are the key inputs to the estimation process.  Many of us go tool-free and estimate based on personal experience and the experience of the SMEs around us.  We recently spoke with J. Chris White of SimBLOX about the pmBLOX™ product.  Whether or not you are interested in a new tool to estimate with, some of the approaches he describes are pretty interesting.


Q.  Can you give us a quick overview on pmBLOX™ and simulation-based project management software in general?  Is the function of the software similar to Monte Carlo simulation?  (What is it? How does it work? Why is it better than traditional estimating?)

A.  What makes pmBLOX truly unique and revolutionary in the field of project management is that its underlying model is completely different from anything that's been done before with simulation-based project management (PM).

Unlike current methods like Monte Carlo (which are based on the CPM/PERT approach requiring task duration as an input), pmBLOX produces task duration and resource utilization as OUTPUTS.  Work backlog, resource availability, productivity, and several other fundamental factors are used as INPUTS.  In effect, pmBLOX turns the traditional CPM/PERT method on its head.

(Note that this is not far off what is already done with current planning tools.  When a user makes an estimate for the duration for a task (an input with the CPM/PERT approach), he/she typically has some assumptions about using particular people for particular amounts of time so that the estimated duration is not a complete guess.  pmBLOX simply starts with these assumptions and makes them explicit so that they can be challenged/defended to ensure a more realistic project plan.)

In the underlying pmBLOX model, a task is represented with a backlog of “work to do”.  For example, a task may require 40 hours of work.  If a single person works on this task for 8 hours/day with 100% productivity (i.e., each hour the person is paid results in an actual hour of task work), then the task will be completed in 5 days (40 hours / 8 hrs/day = 5 days).  (If you are familiar with any simulation techniques, we use system dynamics – a continuous simulation methodology invented at MIT in the late 1950’s that is based on engineering feedback control theory.)
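
As a rough illustration of that hours-based idea (a sketch, not pmBLOX's actual internals), task duration simply falls out of the work backlog, the hours worked per day, and productivity:

```python
# Hours-based duration: duration is an OUTPUT of backlog, staffing,
# hours per day, and productivity. Values are illustrative only.
def task_duration_days(backlog_hours, people, hours_per_day, productivity):
    effective_hours_per_day = people * hours_per_day * productivity
    return backlog_hours / effective_hours_per_day

print(task_duration_days(40, people=1, hours_per_day=8, productivity=1.0))  # 5.0 days
print(task_duration_days(40, people=2, hours_per_day=8, productivity=1.0))  # 2.5 days
```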

This is a very simple example and is straightforward.  In fact, this is actually what current planning tools like MS Project do behind the scenes.  When a user says a task has a 5-day duration and a single resource is assigned to the task at 100% and that resource works 8 hours/day, behind the scenes the software converts that task to 40 hours.  So, when the user adds a second resource, the duration is cut in half to 2.5 days because now 16 hours of work are being done each day.

The difference with pmBLOX is that the user would designate the task as a 40-hour task instead of inputting a 5-day duration.  Assignment of the single resource is the same.  Very little difference in the inputs, but the underlying approach is fundamentally different.  As long as the single resource is available as expected, the final result (i.e., task duration) is the same for both pmBLOX and the CPM/PERT approach:  task duration is 5 days.

However, once we changed the underlying approach from CPM/PERT to hours-based simulation, we found that the door was open to making many more substantial enhancements to bring the simulations even closer to the real-world activities that the simulations are trying to mimic.  For instance, since we moved to an hours-based approach for completing work, we now had access to variables such as resource productivity.  If conditions ever changed in the simulation so that the same situation in the real world would result in productivity losses for a resource, we could now incorporate that effect.

As an example, imagine working overtime.  As you work more and more overtime over an extended period of time, you get more fatigued and "burned out".  This is common knowledge.  If you work 8 regular hours for a day, you come back fresh and productive the next day.  If you work a few hours of overtime for a few days, you come back the next day a little drained and less productive, but in a day or two you are back to normal.  If you work 6 hours of overtime for several weeks in a row, your productivity greatly decreases during that time (which impacts the amount of "good work" you can do), and when you come back the next day you still have not fully recovered.  It takes several days or even weeks to get back to your original level of productivity, even when you are only working 8 hours a day of regular time.  pmBLOX incorporates this burnout effect.  So, as tasks fall behind schedule in the simulation and resources are tapped to work overtime, there may be productivity losses, which make work take longer than expected.
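
To show the shape of such a burnout effect, here is a toy model in Python. The fatigue and recovery rates are invented for illustration; they are not pmBLOX's actual numbers or equations.

```python
# Toy burnout model: sustained overtime erodes productivity, and recovery
# lags even after hours return to normal. All constants are made up.
def step_productivity(productivity, overtime_hours,
                      fatigue_rate=0.03, recovery_rate=0.05):
    productivity -= fatigue_rate * overtime_hours   # each OT hour drains a bit
    if overtime_hours == 0:
        productivity += recovery_rate               # slow recovery on normal days
    return max(0.4, min(1.0, productivity))         # clamp to a plausible range

p = 1.0
for day in range(1, 15):
    ot = 6 if day <= 10 else 0   # two weeks of heavy overtime, then back to normal
    p = step_productivity(p, ot)
    print(f"day {day:2d}: overtime={ot}h, productivity={p:.2f}")
```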

As another example, take a “senior” level designer and a “junior” level designer.  In the real world, these two people will not have the same level of productivity.  For a given amount of time, the senior level designer will produce more work than the junior level designer.  This cannot be accommodated in current PM tools without some manipulation or “gaming” of the software.

At this point, one of two things can happen with current planning tools.  With the most commonly used approach, the user does nothing to account for these productivity differences.  A designer is a designer is a designer.  They are all considered equal.  When the project plan is implemented in the real world, the differences in productivity will change the duration of the task depending on which designer is used.  Typically, because the junior designer is cheaper than the senior designer, the junior designer will be used in the real world and will be asked to keep to the schedule that was estimated based on the allocation of a senior designer (because it made the estimated schedule look better).  This is a recipe for disaster.

With the less-used alternative approach, the user can "game" the planning software and say that the junior designer only works 6 hours/day compared to the senior designer working 8 hours/day (to represent that the junior designer is only 75% as productive as the senior designer).  This may work for the execution of tasks based on hours, but now the user must change the hourly wage rate for the junior designer to reflect that each hour (based on a 6-hour day) is more expensive than the junior designer's actual salary (based on a regular 8-hour day).  While it is possible to do all this with current tools, it is cumbersome and does not reflect reality.  pmBLOX makes this process of assigning productivity levels much easier.
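
Here is a quick sketch of why the hours-gaming workaround distorts cost, using a made-up wage rate and backlog. An explicit productivity factor keeps schedule and cost consistent; the gamed 6-hour day matches the schedule but understates cost until the wage rate is manually inflated.

```python
# Comparing an explicit productivity factor with the "6-hour day" workaround.
# Backlog and wage rate are hypothetical.
backlog_hours = 48
junior_rate = 50.0  # $/hr, the junior designer's actual rate

# Explicit productivity input: junior is 75% as productive as the senior.
days = backlog_hours / (8 * 0.75)        # 8 days
cost = days * 8 * junior_rate            # paid for full 8-hour days: $3200

# Gamed hours: a 6-hour day yields the same 8-day schedule...
gamed_days = backlog_hours / 6           # 8 days
gamed_cost = gamed_days * 6 * junior_rate            # $2400 -- understated
fixed_cost = gamed_days * 6 * junior_rate * (8 / 6)  # $3200 after manual wage fix
print(days, cost, gamed_days, gamed_cost, fixed_cost)
```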

Since you mention Monte Carlo simulation specifically, let me make a few comments about the Monte Carlo approach.  In reality, the Monte Carlo approach is not itself a simulation; it is an analytical approach applied to some form of simulation.  So, for instance, MS Project (or Primavera or any other PM tool on the market) "simulates" a project through the use of a database/spreadsheet methodology.  The Monte Carlo approach simply changes a few input variables and re-runs the "simulation" to get a different set of results.

All of this could be done manually by a user, but a Monte Carlo tool automates this process and allows the simulations to occur thousands of times very rapidly.  The Monte Carlo approach is typically used when there is a fair amount of uncertainty in input parameters (e.g., task duration).  The output of a Monte Carlo analysis is a range of results (e.g., a range of project timelines) with a level of confidence for the most probable results (e.g., the most likely project timeline).

Thus, the Monte Carlo approach could be used with pmBLOX, too.  In fact, we already have that feature scheduled for a future version of pmBLOX.  As stated, current Monte Carlo approaches to CPM/PERT vary the inputs on task durations, which are actually an output of all of the variables accounted for in the pmBLOX model.  Therefore, you are varying real-world outputs, not inputs.

With the pmBLOX approach, Monte Carlo analysis will entail varying true inputs:  resource availability, hours backlog, productivity, etc.  This makes the pmBLOX Monte Carlo analysis far more valuable to the user, because it will show the user which of these input variables have the most impact on a project, allowing the user to foresee pitfalls and construct management policies to minimize them.
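
As a simple sketch of what Monte Carlo over true inputs might look like, the run below varies hypothetical ranges for backlog, productivity, and availability. This is an illustration of the idea, not pmBLOX's implementation.

```python
# Monte Carlo over true inputs (backlog, productivity, availability),
# rather than over the task-duration output. Ranges are hypothetical.
import random

def duration_days(backlog, hours_per_day, productivity, availability):
    return backlog / (hours_per_day * productivity * availability)

random.seed(1)
runs = sorted(
    duration_days(
        backlog=random.uniform(35, 50),         # uncertain work content (hours)
        hours_per_day=8,
        productivity=random.uniform(0.6, 1.0),  # uncertain resource productivity
        availability=random.uniform(0.7, 1.0),  # fraction of the day actually free
    )
    for _ in range(10_000)
)
print("median:", round(runs[len(runs) // 2], 1), "days")
print("80% of runs finish within:", round(runs[int(len(runs) * 0.8)], 1), "days")
```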


Q.  You talk about having “every task embedded with a complete set of logic and rules that define how it is performed”.  That sounds like a lot of overhead – kind of like figuring out what every possible variable might be up front.  How do you make sure you create all of the right rules within a reasonable period of time? 

A.  Actually, the pmBLOX simulation runs extremely fast and has a very small memory footprint.  It’s not a lot of overhead because the underlying simulation models are quite basic, and all variables are not figured out up front.  Variables are calculated only when the user runs a simulation.

When we say that every task is embedded with a complete set of logic and rules, we mean that each task has an operational simulation model that mimics activities and decisions that would typically occur in the real world.  In the opening paragraphs, I mentioned that the underlying approach is hours-based and uses the system dynamics simulation methodology.  The amount of work completed for a particular task in any time step of the simulation (e.g., each hour of the day) is basically the product of the number of people assigned to the task, the number of hours those people work in a day, and the productivity of those people during that time.  Three people working on a task for 6 hours/day at a productivity level of 50% would complete 9 hours of task work (3 * 6 * 0.50 = 9).  This is what we call the task execution simulation.
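
In sketch form, that task-execution loop looks something like the following. This is illustrative only; pmBLOX's system dynamics model is richer than a simple daily loop.

```python
# Task execution sketch: each time step drains the work backlog by
# people * hours * productivity. Inputs are illustrative.
def simulate_task(backlog_hours, people, hours_per_day, productivity):
    day = 0
    while backlog_hours > 0:
        day += 1
        done_today = people * hours_per_day * productivity  # e.g. 3 * 6 * 0.5 = 9
        backlog_hours -= done_today
        print(f"day {day}: completed {done_today:.1f}h, "
              f"remaining {max(backlog_hours, 0):.1f}h")
    return day

simulate_task(45, people=3, hours_per_day=6, productivity=0.5)  # finishes on day 5
```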

A key factor here is that the underlying system dynamics methodology is very “structural” in nature and not incredibly data-intensive.  As a result, the task model in pmBLOX uses data that a good PM should already be using to make an estimate, no matter what tool they are using.  pmBLOX just makes those fundamental inputs explicit and open to discussion.

On top of the task execution simulation, we add real-world management decision making through several “management policies”.  These management policies are a way for the user to incorporate their management style or decision preferences.  For example, we give the user the ability to change the number of hours any resource works on a task and the ability to change the number of resources assigned to a task, or a combination of both, based on “schedule pressure” or “cost pressure” experienced on the project.

This is another area where the system dynamics methodology provides power.  The system dynamics methodology can incorporate feedback and non-linear relationships among variables.  In the real world, these are what drive changes.  A small change in X may lead to a small change in Y.  But a slightly larger change in X may result in a huge change in Y (i.e., non-linearity).  And the change in Y may come back around again to influence X (i.e., feedback).

pmBLOX uses the traditional Earned Value calculations to determine the status of a task.  With EV, the two key parameters are the Schedule Performance Index (SPI, schedule pressure) and the Cost Performance Index (CPI, cost pressure).  The pmBLOX user can set a management policy that says that as a task falls behind schedule (i.e., SPI drops below 1.0), assigned resources will work 1 hour of overtime.  As the task falls very far behind schedule (i.e., SPI drops way below 1.0), the policy may state that the assigned resources will work 3 hours of overtime.

Now, as conditions are experienced in the simulation of the project, whether or not the assigned resources work overtime will depend on how far behind schedule a particular task falls.  If conditions end up being such that the task is on time with assigned resources working regular hours, then no overtime is allocated.  However, if for some reason the task falls behind schedule (perhaps due to certain resources being allocated to other tasks for a portion of the time), then the management policy will “kick in” and the assigned resources will work overtime to try to get the task back on schedule.

Each user can set his/her own policies.  One user may have an approach in which overtime is assigned in proportion to the lateness of a task:  the further behind a task is, the more hours of overtime are assigned.  Another user may have an aggressive approach that immediately works resources several hours of overtime as soon as a task is even just slightly behind schedule.  It is completely the user’s choice.
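
As an illustration of such a policy, here is a sketch in which overtime scales with schedule pressure. The SPI thresholds and overtime amounts are examples of user choices, not pmBLOX defaults.

```python
# Schedule-pressure policy sketch: overtime scales with how far SPI
# (earned value / planned value) has fallen below 1.0.
def overtime_hours(earned_value, planned_value):
    spi = earned_value / planned_value
    if spi >= 1.0:
        return 0   # on or ahead of schedule: no overtime
    if spi >= 0.9:
        return 1   # slightly behind: 1 hour of overtime
    if spi >= 0.75:
        return 2
    return 3       # very far behind: 3 hours of overtime

print(overtime_hours(100, 95))   # 0
print(overtime_hours(90, 100))   # 1
print(overtime_hours(60, 100))   # 3
```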

In addition to these simulation methods, pmBLOX incorporates some basic “common sense” approaches that have been lost in some of the current planning tools.  For instance, in MS Project, suppose a task specifies a resource to be used at 50% level of effort.  If that resource works 8 hours/day, this equates to 4 hours/day.  If that resource is only available 3 hours on a given day, MS Project will not use that resource on the task and the resource will sit idle, which delays the task completion.  In the real world, that resource would be used for the 3 hours that it is available.
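
The difference between all-or-nothing scheduling and the common-sense alternative described above boils down to something like this (behavior simplified for illustration):

```python
# Simplified contrast: idle the resource if the full request can't be met,
# versus working the hours that are actually available.
def hours_worked(requested_hours, available_hours, strict=False):
    if strict:  # all-or-nothing: the resource sits idle and the task slips
        return requested_hours if available_hours >= requested_hours else 0
    return min(requested_hours, available_hours)  # use what's available

print(hours_worked(4, 3, strict=True))  # 0 -> idle resource, delayed task
print(hours_worked(4, 3))               # 3 -> partial progress, as in the real world
```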


Q.  What do you mean when you say that “individual tasks actually manage themselves in relation to the entire project”?

A.  In the PM tools on the market today, the calculations are "static".  That is, they do not change as the PM tool generates a project plan.  With pmBLOX, the activities of the first time step influence the activities of the second time step, which influence the activities of the third time step, and so on.  As a result, activities can change throughout the simulation.  In other words, the simulation is "dynamic".

For example, a specific task may not have a required resource available initially.  In the first few time steps, this may not change the priority or schedule pressure for the task because the due date may be several weeks away.  However, as the simulation progresses through more time steps and the required resource is still not available, the priority and schedule pressure for the task increase to the point where the required resource may be pulled off another activity to come work the high priority task to get it back on schedule.

This dynamic re-allocation of resources occurs in real life, but cannot be simulated with current planning tools.  In current planning tools, the user specifies that the resource is either available or unavailable; the resource cannot dynamically change availability.  So, in the case of the simulation in pmBLOX, the task has “managed itself” to try to achieve its scheduled end date by changing its priority level and pulling resources from other lower-priority tasks.  Note that this is an example of feedback:  the task has a goal and makes changes as necessary to achieve the goal.


Q.  What skills and knowledge enable a user to effectively use pmBLOX?  Do you need a lot of experience with the particular type of project you are facing?  Do you need to be a statistician or math guru?

A.  No special computer skills are needed.  As with current planning tools, only basic PC/Windows skills are necessary.  Users do not have to be statisticians or math gurus.  In fact, the user does not even have to know about the simulation.  We have done our best to make pmBLOX look like any traditional PM software with a Gantt chart, start/end dates, etc.

Because it is a simulation-based tool and there are additional inputs to use the power of the simulation, the interface is slightly different, but not much.  Since our focus is to extend the capabilities of MS Project, any MS Project user should have no trouble navigating and using pmBLOX.  Users can import MS Project files directly and immediately run a simulation.

It should be noted that, as with any project planning tool, actual PM expertise helps.  If someone is new to the field of project management and is not familiar with the estimation process, resource allocation, task dependencies, etc., that person will have a difficult time using pmBLOX to its fullest potential, just as that person would have trouble with any planning tool.  Seasoned PMs who already know how to build a reasonable project plan and manage a project will appreciate the new power and capabilities available to them with pmBLOX.


Q.  Which types of projects are appropriate for this software?  Do they have to be highly iterative so that the modeling can improve over time – or does it help with one-offs as well?

A.  pmBLOX is appropriate for all types of projects, whether they are repetitive or one-time.  The appropriateness of pmBLOX is more tied to the size and complexity of the project.  For small projects, pmBLOX may be a bit much.  pmBLOX can definitely be used on small projects, but the simulation approach will yield no better results than a spreadsheet.

pmBLOX’s “sweet spot” is medium to large projects where traditional PM tools tend to be limited due to overwhelming complexity.  With these larger projects, someone using a traditional PM planning tool would need to “work around” some of the constraints of the tool or even employ a few “tricks” to fool the software into providing a certain result.  The simulation approach in pmBLOX can handle the complexity and provide much more realistic projections of task completion, resource allocation, etc. without the user having to “game” the software.


Q.  Are there particular industries where this software is more effective?  Are there industries where this software doesn’t work well at all?

A.  Most of the attention pmBLOX has received so far has been from the construction/infrastructure industry, but pmBLOX is not limited to that industry at all.  As an example, one of the industries in which pmBLOX has huge potential value is the IT/software development industry.

With software-related projects, there is a common phenomenon of the "mythical man-month" that cannot be accounted for with current planning tools.  PMs who have managed software-related projects can tell the stories of how adding people to late projects only makes the projects even later, because experienced people have to bring new people up to speed, new people make mistakes that require rework, etc.  With the management policies mentioned previously, the user can designate when to add people to a task due to schedule pressure, as well as the productivity losses due to "overmanning".  This is much closer to the real world.  There is no such thing as a free lunch.

Just as making people work large amounts of extended overtime can cause productivity losses due to fatigue and burnout, adding people to a task can also cause productivity losses as everyone "steps on each other's toes".  pmBLOX is the only tool available that incorporates these effects.  And, because of pmBLOX's ability to include non-linear relationships, these types of productivity losses can mirror reality.  When one person is added to a task, the productivity loss may not be much.  However, by adding just one or two more people, the productivity of the whole group may suffer a precipitous drop.


Q.  Can you give me an interesting example of a time when this approach was particularly effective?

A.  We took an example engineering project plan right off of the MS Project website.  It was fairly complex, with several hundred tasks, so it was a good test for pmBLOX.  The total project time was estimated at about 3 years in MS Project.  We imported the file, and the pmBLOX simulation showed the project taking about 5 years.  We then level-loaded the resources for the engineering project in MS Project, and the timeline pushed out to about 8 years after the level-loading.  A very big difference compared to the original file.  We took the level-loaded project and imported it into pmBLOX.  The pmBLOX simulation of the level-loaded project gave exactly the same timeline as the previously imported file:  5 years.

Current PM tools are either overly “optimistic” and provide a short timeline that is completely unattainable but looks good on paper, or current PM tools are overly “pessimistic” and show long timelines that no one would ever accept and in real life would never occur because resources would be shifted along the way.  We like to say that pmBLOX is “realistic”.

A final question that you may still have is:  what makes the SimBLOX company uniquely qualified to challenge the status quo in project management tools with pmBLOX?

The pmBLOX product that is currently available for community preview is the maturation of a concept that two of the SimBLOX partners originally worked on over 10 years ago for the Department of Defense. At that time, we helped create an advanced project management simulation, using system dynamics as the foundation, for DoD aircraft manufacturers and shipyards.  Thus, it was geared toward very experienced PMs in the defense industry.  That tool required a higher level of expertise and, honestly, was priced well beyond what most companies could afford to pay.

After seeing no fundamental changes in PM tools for the last decade, we applied for and received an SBIR (Small Business Innovation Research) grant from DARPA to create a next-generation simulation-based project management tool for a broader commercial market.  Our DARPA customer is actually an ex-Microsoft executive and knows that the field of PM has not advanced much in the last few decades.  pmBLOX is the result of our Phase I SBIR effort, and we have been awarded a Phase II SBIR contract to continue development of pmBLOX.
Posted on: January 06, 2009 05:19 PM | Permalink | Comments (3)

Truth (or Clarity) in Scheduling

Situation: You need a more accurate project schedule (and who doesn't?).

The inspiration for the whole Agile movement hinged on the fact that we all know linear schedules are usually wrong.  The more complex the project, the more that is true.  Agile approaches are one way of dealing with that uncomfortable truth.  Another is to use a really interesting approach offered by the folks at LiquidPlanner.  Recently, we spoke with Charles Seybold, their CEO and founder, who offered us some insight into how the tool works.

Q.  LiquidPlanner uses date ranges and probabilities to deliver a more accurate view of project progress.  It’s pretty clear how that could be more accurate than any single date.  Could you talk a bit about how input from team members affects deadlines?

First off, we don’t actually have an entity called a deadline in LiquidPlanner. Rather, we have expected dates, which we mark with a big [E] on the schedule (these are always flowing), and we have promise dates, which are shown on the schedule with the traditional black diamond of a deadline.  The key is to manage to the [E] but set your promise dates at the end of the bars (which are drawn to the 98% confidence date). Setting the promise date “locks” your commitment, and you will get an alert if any action puts those promise dates at risk. Any item can have a promise date, but they work best on projects and deliverables.

By asking team members to estimate in a range, you are giving them a mechanism that allows them to be honest. Most things have intrinsic uncertainty, so we just cannot be that precise. For instance, can we really say we will be done with exactly 10 days of effort?  If a person says 9-11 days, that tells you they probably have it under pretty good control. If they say 5-15 days, that says something is not well known, and that working to understand the requirements better might pay off.

What really is a single point estimate?  Is it the expected case? Best case? Worst case? A sandbag, perhaps?

It’s fun to note that estimating in ranges pretty much eliminates “sandbagging”.  This phenomenon happens when single point estimation meets experience. The experienced worker knows that they need to give estimates they feel 90% confident in so that they will not get dinged for a miss, but when you estimate at that level, 9 times out of 10 you’ll be early.  When that happens the worker can sometimes fill that time with things that maybe were not part of the plan and… well you know the rest. Single point estimates are just bad for relationships.

The other great thing about a team member capturing uncertainty is that it is inherited down their chain of prioritized work, so the exit dates on later work get a correspondingly higher amount of uncertainty, even if they are small tasks. This makes sense: if the exit date of your first task is uncertain, the start date of your next task is uncertain.
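
A quick way to see that compounding is a small Monte Carlo over two chained tasks with hypothetical ranges. This illustrates the principle rather than LiquidPlanner's actual scheduling math.

```python
# Ranged estimates compound down a chain: even a tight second task
# inherits the first task's uncertainty. Ranges are hypothetical.
import random

random.seed(7)
finishes = []
for _ in range(10_000):
    task1 = random.uniform(5, 15)  # wide range: poorly understood work
    task2 = random.uniform(2, 3)   # small, well-understood follow-on
    finishes.append(task1 + task2)

finishes.sort()
lo = finishes[int(0.02 * len(finishes))]
hi = finishes[int(0.98 * len(finishes))]
print(f"chain finishes between {lo:.1f} and {hi:.1f} days (98% band)")
```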


Q.  I personally like the LiquidPlanner interface from a usability perspective.  What unique steps did you take to test LiquidPlanner before its release? From a usability perspective, how do you think it compares to other Ruby on Rails PM apps like Basecamp? (using specific examples)

I’ll interpret your question broadly. In my previous corporate gig I spent a great deal of time working on planning tools (mostly Excel-based) where we were modeling concepts like ranged estimation and flowing work. From basically the first week of LiquidPlanner’s existence, we started prototyping. I maintained a prototype in PowerPoint that we used to mock up every feature we added, and I kept that prototype up to date with the work the dev team was doing. This allowed us to test designs very early and make very rapid decisions and modifications for the UI. In short, it was try-fail-learn at a very fast, ridiculously low-cost rate. Looking back at my archive, I see over 200 versions of the prototype. This allowed us to narrow in on a design that felt right to us and our friends many months before we put it in the hands of alpha customers.  Even with that, we’re not perfect; we found some things that needed rework in our private alpha and I expect we’ll find and fix some things in the beta.

We are built on Ruby on Rails and drew inspiration from what the 37signals crew accomplished. We like Basecamp and think they did a great job showing the world that web software could be easy. There are many applications out there that are basically Basecamp clones, and we think there is no point in repeating that.  Our goal with LiquidPlanner was to take on a much broader set of objectives for a higher level of professional planning. We wanted to build for a greater scale, with hundreds of projects and thousands of tasks. We wanted to be more like a desktop application.  LiquidPlanner is designed to be a platform for project management which, over time, will grow to serve large enterprises while staying true to our belief that the most important feature is usability.  Since you asked for an example, I’ll point out that many of the lightweight online task management tools are not built to hold a ton of data. LiquidPlanner is built like a data warehouse and uses a rich work breakdown structure as the backbone of your collaboration data, so that your discussions, documents, and reports stay organized as you reorganize your plan.


Q.  I really like the idea of everyone owning the schedule, based on their direct input.  Are there typical ways outside of the tool that PMs motivate team members to give honest input (versus padding their tasks) so that you can maximize accuracy?

None that we know of that work with other tools on the market.

Fundamentally a single point estimate is interpreted as a promise and this means that people will negotiate or obfuscate through them. A pattern we see is that the estimate giver and the estimate taker often do not have the same skill level in negotiation and the estimate giver gets backed into what we call “the least defensible estimate” which lies very close to the optimistic estimate.

Some techniques for getting better estimates from your team are T-shirt sizing, wideband Delphi, and estimating by analogy, but they all embrace notions of uncertainty and calibration.  Group estimating is quite effective, even informally.


Q.  You talk a lot about “one source of truth”.  How do you see requirements playing into that “single big picture” view of the project, when using LiquidPlanner?

Another word for truth is clarity, and any process you can bring more clarity to is what we are talking about.  For example, in my last company we had a full SDLC and wrote specifications full of requirements for development work. We had a system of rating the specs 1 through 4 based on their “readiness” for Dev; level 4 meant it was done. In practice this was a binary state – not ready vs. ready.  I submit there would be real value in a ranged estimate at these stages to capture a meaningful metric for how much uncertainty exists. I think the feedback would be super useful to the person responsible for the requirements, as well as to a manager who wants to be able to direct her efforts to the projects with the most uncertainty.  If you want to take it a step further, you can do what we do, which is spec requirements in LiquidPlanner and let the projects, categories, and work items carry those requirements with them. That way each item can be assigned and estimated as you go, and you can use uncertainty to guide your management actions. It’s the best way to facilitate one of my favorite practices: cut early and cut often.
Posted on: February 25, 2008 04:57 AM | Permalink | Comments (8)

Requirements Start With an IDEA

Situation: You Need a New Way to Deal with Project Requirements.

Requirements are those tricky, slippery things that define the success or failure of your project.  I recently ran across Jama Software, builders of software that seems to take an interesting approach to requirements gathering and management.  In this interview I spoke with Eric Winquist, CEO of Jama Software, a young startup gaining momentum in the collaborative requirements management space. 



Dave:
  At Jama, you talk a lot about “fueling innovation through collaboration”.  Is that just a way of describing ways to easily change and understand the impact of requirements?  Or does your tool actually spur innovation in some way?

Eric Winquist, CEO of Jama Software:  We’re finding it does both.  First and foremost, Contour is about making requirements management easier and helping teams manage the complexity found in developing software applications or systems, designing new products, or whatever their projects might be.

Contour provides companies with a central location to store and collaborate on ideas, research and features which helps them innovate faster and more successfully. 

For 70% of enterprise organizations, innovation via new product development is a top strategic initiative; yet the majority of these projects don’t end successfully.  Why is that?  When we founded Jama a little over a year ago, we looked to answer this question.  From our personal experience managing software development projects and the insights we gained from customers, we consistently saw a gap between the requirements definition phase and the ability to successfully deliver on them.

When we created our tool, we built it on a core philosophy that by unlocking the requirements and getting the entire organization collaborating on the projects through a Web-based environment, project teams will increase their success rates.  When companies adopt this open collaborative approach to RM and innovation overall, it fosters new ideas and a cultural shift toward greater accountability to the goals of the projects across the entire organization.


Dave:
  For most PM software vendors, the focus is at the Enterprise level these days, mostly on cross-departmental projects.  Some of the features of your product address requirements that span projects.  Do they address departmental boundaries in any special way?

Eric:  Yes.  The larger the organization, the greater the risk for information silos to exist.  And when information silos exist, that’s when projects break down – errors get made, leading to expensive rework and defects (you know, all the issues we’ve heard about ad nauseam for years around the failure of projects).   Eliminate the silos, and you eliminate a major point of failure.

The focus is at the Enterprise level because the complexity is magnified.  Some of our enterprise customers have hundreds of projects going on concurrently, with thousands of moving parts within them that are interrelated across people, departments and other projects.  Companies that are successful have adopted an open innovation model that not only breaks down walls internally for greater internal collaboration; it also brings external audiences such as customers and partners into the process as well.  This is where collaborative Web-based tools are making such a difference.  In fact, we’ve found this has become one of the primary benefits customers are seeking in their selection of an RM solution.


Dave:
 “Requirements span projects” – that’s a great point to make.  Other software vendors, like Microsoft, are trying to do a better job of integrating “non-project” work into the overall picture.  So you address operational requirements with your products and integrate that work into an overall picture of “what needs to be done”? 

Eric:
  Absolutely.  Projects don’t live in isolation.  And neither do the tools that people use to manage them.  We believe an effective RM tool has to be built on an open architecture and integrate well with other tools within the larger picture.  We’ve structured our tool around the concept that requirements, use cases, defects, artifacts, ideas, release plans, etc. are all fundamentally items and these items have relationships with other items, and projects are basically the collection of all the specific items toward the completion of specific goal(s).   So when it comes to the management of these items within projects, the tool must make it easy to:

1. Capture – define requirements and other items.
2. Connect – relate them to other items, projects and people.
3. Control – assess impact and manage change when it occurs.
4. Collaborate – keep the entire team in sync and up to date throughout the lifecycle.

For example, we have customers who use our tool as a centralized repository for capturing research documents, usability studies, videos, raw product ideas, etc., essentially leveraging the tool to manage items at the front-end of the innovation funnel, and thus connecting these items up to the requirements definition and management phase later in the product planning & development process – creating a stronger bridge between R&D and Product/Project Management.  In fact, we’re customizing this for a Fortune 100 technology company right now.  So, companies are definitely looking at RM from a much more strategic perspective now and wanting to tie it together end-to-end.
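
To make the items-and-relationships idea concrete, here is a toy sketch of such a model. The class names, item kinds, and traversal logic are invented for illustration; this is not Contour's actual data model.

```python
# Toy "everything is an item" model: items of any kind relate to other
# items, and impact analysis is a walk over those relationships.
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    kind: str                        # "requirement", "use case", "defect", ...
    related: list = field(default_factory=list)

    def connect(self, other):        # Connect: relate items to each other
        self.related.append(other)

    def impact(self, seen=None):     # Control: assess what a change touches
        if seen is None:
            seen = set()
        for item in self.related:
            if item.name not in seen:
                seen.add(item.name)
                item.impact(seen)
        return seen

req = Item("Login requirement", "requirement")
case = Item("Login use case", "use case")
defect = Item("Session bug", "defect")
req.connect(case)
case.connect(defect)
print(req.impact())  # {'Login use case', 'Session bug'}
```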

 
Dave:   So many vendors now are producing very simple, easy to use lightweight apps.  How important is simplicity in your design and where do you focus your efforts if not on adding new features?  Can you describe a new feature that perhaps looked like a good idea at first, but then wasn’t? 

Eric:
  Simplicity is a must.  Web 2.0 as a buzzword has been overexposed, but there is an undeniable movement and demand across many categories of software to be lightweight and Web-based – RM included.  But what is that really all about?  I believe it’s less about technology and more about customers simply wanting tools that work and help them get things done.  The days of big, bulky enterprise suites that take 6-9 months to implement, require months and months of training to learn, and even then get underutilized and abandoned after a year are over.  Who can afford that inefficiency in managing their business? The #1 criticism of traditional requirements management tools is that they’re too difficult to use.

In many ways, Jama and other new vendors are at an advantage because we don’t have 10-year-old code bases to manage.  So we can build our tools on modern, flexible platforms.   Also, we take an iterative approach to our development of Contour, so we’re constantly working on new ways in which we can improve our UI, make tasks easier to accomplish and simply make things more intuitive for our users.  We push things out in new releases every few weeks and see how people respond.  In fact, we did a complete overhaul of our UI in our 2.0 release in December.  Our customers provide as much input into our product roadmap as we do.  Some features are home runs, others don’t stick.  And, when that happens, we go back to the drawing board to figure out a smarter way.

To answer the last part of your question, one of the challenging areas within RM is reporting.  At first glance, companies want the ability to build custom reports, so we offer a flexible and powerful Custom Report Designer in response to that need.  But, in practice, building custom reports is a complex art form in itself.  So we’ve found that some customers who don’t have an internal reporting guru on staff have more success when we actually build the reports for them versus just handing them the keys to this feature.

Dave:  Thanks Eric for the Q&A session.  Appreciate your time and perspective on things.

If you have any specific questions for Eric, you can reach him at ewinquist@jamasoftware.com, or for more information about Jama Software, visit www.jamasoftware.com.

Posted on: January 14, 2008 04:34 PM | Permalink | Comments (0)

What's an "inch pebble"?

Situation: You need more near-term visibility.

It's always easier to estimate how long things will take when you're right about to do them.  That's when you suddenly uncover all of those little steps between here and there that you forgot about completely when building your 6-month project plan.

Inch Pebbles are used to estimate all of those little granular activities that together make up what you'll really have to do to get the job done.  In "Estimating with Inch Pebbles", Johanna Rothman discusses how you can use this extreme programming technique weekly or monthly to get a better handle on where things stand.  The article is a quick read - well worth your time.
Posted on: September 06, 2007 11:46 AM | Permalink | Comments (0)