Easy in theory, difficult in practice
by Kiron Bondale
My musings on project management, project portfolio management and change management.
I'm a firm believer that a pragmatic approach to organizational change, one which addresses process and technology but, primarily, people, will maximize the chances of success.
This blog contains articles which I've previously written and published as well as new content.
Recent Posts
Leading Through Crisis Means Leading Through Context
"It's the end. But the moment has been prepared for." - retirement lessons from the Doctor
Just because they are non-critical, doesn't mean they are not risky!
How will YOU avoid these AI-related cognitive biases?
Categories
Agile, Artificial Intelligence, Career Development, Change Management, Communication, Decision Making, Governance, Hiring, Human Resources PM, Kanban, Lessons Learned, Personal Development, PMO, Portfolios (PPM), Project Management, Risk, Risk Management, Scheduling, Team Building, Time, Tools
I'm midway through reading Jeremy Kahn's book "Mastering A.I. - A Survival Guide To Our Superpowered Future". While I find the title aspirational (can you truly master anything which is evolving as rapidly as A.I.?), the author has done a good job of providing a balanced assessment of some near and longer term benefits and risks of A.I.
What has resonated with me as it relates to project management are the following three cognitive biases:
- Automation bias - the inclination to assume that recommendations or information presented by a computer system are more accurate than those produced by a human being, even when we are presented with contradictory evidence.
- Automation neglect - the tendency to discount and ignore what a computer system is telling us, especially when it runs counter to our beliefs or desires.
- Automation surprise - the tendency to rely on computer systems and to be confused or surprised when they fail.
I've witnessed the impact of the first two biases multiple times over my career with traditional project management applications.
I've seen senior executives trust a Project Portfolio Management solution's sexy dashboard telling them that a particular project was healthy, even when the data used to populate that dashboard had undergone significant green-shifting and it was clear to any stakeholder remotely close to the project that it was on fire.
I've seen a sponsor refuse to accept a project manager's recommendation to push back a milestone date, even though it was backed by a Monte Carlo simulation showing that meeting the desired date had an extremely low probability of success.
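To make the Monte Carlo point concrete, here is a minimal sketch in Python of how such a simulation estimates the odds of hitting a target date. The tasks, duration estimates and target below are made-up assumptions for illustration, not figures from any real project.

```python
import random

# Illustrative three-task schedule; each tuple is
# (optimistic, most likely, pessimistic) duration in days.
tasks = {
    "design": (5, 8, 14),
    "build": (10, 15, 25),
    "test": (4, 6, 12),
}
target_days = 30   # the sponsor's desired completion date, in working days
trials = 10_000

hits = 0
for _ in range(trials):
    # Sample each task's duration from a triangular distribution and sum them
    total = sum(random.triangular(low, high, mode)
                for (low, mode, high) in tasks.values())
    if total <= target_days:
        hits += 1

print(f"Chance of finishing within {target_days} days: {hits / trials:.0%}")
```

The share of simulated outcomes that land on or before the target date is the probability the sponsor was being asked to accept, and when that number comes back low, the data deserves more respect than it often gets.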
I haven't run into automation surprise yet, mostly because many project management applications have the unfortunate tendency of failing regularly as the complexity or volume of data or queries increases.
In the near term, we are unlikely to fall prey to such biases when it comes to A.I.-based project management solutions. It is being well drilled into us to employ techniques such as keeping a human in the loop to verify that A.I.-generated outputs are valid.
But let's fast-forward a few years, to when the growing pains of the current generation of A.I. tools are but distant memories.
As the reliability of the tools improves, our vigilance diminishes. The likelihood of automation bias affecting project managers, team members, and senior stakeholders will increase, especially as it becomes harder to understand how the A.I. tools arrive at their conclusions. This will go hand-in-hand with automation surprise. When A.I. tools fail, we might lack the experience or knowledge to troubleshoot them, and if we have become too reliant on the tools doing what we would have done manually in the past, our ability to take over might have atrophied.
The impacts of automation neglect are likely to remain fairly constant. For stakeholders with a preconceived belief that they don't wish to have challenged, a high-confidence contrary answer from a more reliable A.I. is unlikely to sway them. Mandating that users follow the A.I.'s guidance is not the solution, as it just increases the potential impacts of automation bias and automation surprise.
So as you contemplate your future as a project manager, what will YOU do to reduce the impacts of these biases as A.I.-enabled project management continues to mature?
Posted on: July 18, 2024 09:41 AM
Based on the extensive media coverage, YouTube videos, TED Talks, and books published, many would agree that 2023 was the year of artificial intelligence, at least in terms of mindshare if not market dominance.
Throughout the past year, online project management communities have frequently discussed the potential impact of A.I. tools on the role of project managers. While concerns persist about potential negative effects, such as new project risks and job displacement, there's also optimism. Used appropriately, A.I. tools are seen as potential assistants that will help us deliver projects more efficiently and effectively, just as in other professions.
However, let's maintain perspective. As with previous generations of project management tools, such as schedulers and knowledge management platforms, some aspects of our work won't be affected by A.I. until projects can be completed entirely by machines without human involvement.
Certain challenges will persist:
- Commitments will still be made prematurely: A.I. might provide better reasoning for unattainable completion dates or funding amounts, but it's unlikely to deter senior stakeholders from imposing unrealistic constraints.
- What you don't know will still hurt more than what you do know: In the near term, we won't have sufficiently advanced A.I. capabilities to identify all the possible risks which could impact our projects. And as complexity continues to increase, the likelihood remains that unknown-unknowns will affect our projects to a greater extent than the known-unknowns.
- Stakeholders will continue to surprise us: Provided sufficient context, A.I. tools might be able to improve our forecast of how stakeholders will respond to a given decision or project approach. However, if we've learned anything from The Matrix, even if humans are part of an A.I. system, they'll still find ways to behave unexpectedly.
- More concurrent work than can be effectively delivered: A.I. tools might give us a better understanding of the capacity within our teams and our throughput potential, but with the exception of those who use product-centric delivery models or who embrace the flow guidance of Dr. Goldratt or Don Reinertsen, most organizations will still welcome more work into their system than should be permitted, so multitasking, work overload and the inability to accurately forecast people's availability will persist (see the short Little's Law sketch after this list).
- The single biggest problem in communication: A.I. tools will eventually help us to bridge communication gaps with real-time context sensitive translation and guidance to make better choices about messaging tone, medium and other factors. Nevertheless, some gaps, as demonstrated in 'Star Trek: The Next Generation's' episode 'Darmok,' may remain insurmountable.
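To illustrate the flow point in the fourth item above, here is a minimal sketch of Little's Law (average cycle time equals work in progress divided by throughput), using purely illustrative numbers for the team's throughput and work-in-progress levels.

```python
# Little's Law: average cycle time = average WIP / throughput.
# The throughput and WIP levels below are made-up, illustrative numbers.
throughput_per_week = 4.0  # work items the team actually finishes each week

for wip in (8, 16, 32):
    cycle_time_weeks = wip / throughput_per_week
    print(f"With {wip:2d} items in progress, each takes ~{cycle_time_weeks:.0f} weeks on average")
```

With throughput unchanged, doubling the amount of work in the system simply doubles how long each item takes, which is exactly the multitasking and overload trap described above.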
So as the dawn of 2024 approaches, let's greet it with the confidence that while some things are likely to change in project delivery, most won't.
"The art of progress is to preserve order amid change and to preserve change amid order." - Alfred North Whitehead
Posted on: December 23, 2023 10:19 AM
"Common sense is the collection of prejudices acquired by age 18."
- Albert Einstein