Perhaps I did not understand your question, but I saw you have a status field related to the matter, so it seems to me the way to do this is to check each risk's status when you perform monitoring sessions.
Based on the taxonomy you have provided, you can query your risk register for materialized risks (a better term would be "realized"). By default, all others were not realized.
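If the register lives in a spreadsheet or tool export, that query can be as simple as a filter on the status field. A minimal sketch in Python, assuming a tabular export with hypothetical "risk_id" and "status" fields (substitute whatever your taxonomy actually uses):

```python
# Minimal sketch: filtering a risk register export for realized risks.
# The field names and status values below are assumptions for illustration.
risk_register = [
    {"risk_id": "R-001", "status": "Realized"},
    {"risk_id": "R-002", "status": "Mitigated"},
    {"risk_id": "R-003", "status": "Accepted"},
]

realized = [r for r in risk_register if r["status"] == "Realized"]
not_realized = [r for r in risk_register if r["status"] != "Realized"]

print("Realized:", [r["risk_id"] for r in realized])
print("Not realized:", [r["risk_id"] for r in not_realized])
```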
I do agree with Sergio and Kiron.
"which risks were materialized and which were not" is not the only consideration in lessons learned (LL). The WHY? is critical in order for the LL to have lasting value.
There are a couple of things to keep in mind when undertaking your risk analysis so as to assist later in the lessons learned process.
1) The risk event versus the impact has to be clearly defined. For example, schedule slippage is the impact, not the event. A labour shortage may also be an impact rather than an event. You have to drill down to identify the root cause.
2) Mitigation may be applied to reducing probability or impact (sometimes both).
3) From a lessons learned perspective, one has to know why a risk event materialized or did not occur. Was it due to mitigation? Were the applied probability factors reasonable? Was the probability mitigation sufficient?
4) If the risk event occurred, was the impact as expected? Was the impact mitigation effective? Should other mitigation measures have been considered?
If the intent is to do a post mortem (lessons learned) on the risk management process, you have to set it up during the initial risk analysis.
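One way to set that up is to decide during the initial analysis which fields the post mortem will need and capture them from the start. A hypothetical record structure along the lines of points 1 to 4 above; every field name here is an assumption, not a standard:

```python
# Hypothetical record for setting up lessons learned at risk-analysis time.
# The key ideas: keep the root-cause event separate from the impact, track
# mitigation of probability and impact separately, and leave room for the
# "why" answers needed at post-mortem time. All field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RiskRecord:
    risk_id: str
    event: str                       # the root-cause event, not the impact
    impact: str                      # the consequence if the event occurs
    probability_mitigation: str      # actions taken to reduce likelihood
    impact_mitigation: str           # actions taken to reduce consequence
    realized: Optional[bool] = None  # set at close-out: did the event occur?
    why: str = ""                    # why it did or did not occur (mitigation? bad estimate?)
    impact_as_expected: Optional[bool] = None  # filled in only if realized

# Example entry at initial analysis time; close-out fields stay unset for now.
r = RiskRecord(
    risk_id="R-001",
    event="Key supplier enters bankruptcy",
    impact="Schedule slippage on procurement milestones",
    probability_mitigation="Second-source agreement in place",
    impact_mitigation="Buffer stock held for long-lead items",
)
```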
Having fought against poorly conceived metrics for many years, I recommend always considering two things:
1) How will the data be used to drive performance?
2) How will the data be collected?
That requires thinking through your process before you collect data. Without considering those, organizations often spend a lot of time and money collecting useless data that provides no information of value.
You know more than we do about how the data will be used and whether you're simply counting realized risks or will later use that data to find common themes or trends, for example. This matters because if you do not include the right data from the start, you eventually reach a point where you realize you are missing what you really needed.
How you enter and collect the data often drives how you must structure it. If a risk was realized and then resolved or mitigated, and you overwrite the status column, you just lost data. Something as simple as an extra column for Realized (Yes/No) may fix that, as in the sketch below. The point being, the later data usage drives the data collection and record-keeping requirements.
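One way to avoid the overwriting problem is an append-only status history plus a sticky Realized flag. A sketch under those assumptions; the field names are illustrative, not from any particular tool:

```python
# Sketch: an append-only status history so that resolving or mitigating a
# risk never destroys the fact that it was realized first.
from datetime import date

risk = {
    "risk_id": "R-001",
    "realized": False,
    "status_history": [],  # list of (date, status) tuples, never overwritten
}

def set_status(risk, status, on=None):
    """Append a new status instead of overwriting the current one."""
    risk["status_history"].append((on or date.today(), status))
    if status == "Realized":
        risk["realized"] = True  # sticky flag: survives later status changes

set_status(risk, "Open")
set_status(risk, "Realized")
set_status(risk, "Mitigated")

print(risk["realized"])             # True, even though the latest status is "Mitigated"
print(risk["status_history"][-1])   # the current status, with its date
```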
Sometimes people try to avoid those questions by collecting every data point they can imagine in case it might be useful later. Most of it turns out to be worthless, and people then spend far too much time collecting data rather than managing risks.
Are you able to modify the taxonomy to support the reporting you are looking for? Basically, you'd want to differentiate the following four cases (a classification sketch follows the list):
1. Identified risks which were just accepted but were not realized
2. Identified risks which were just accepted and were realized
3. Identified risks which were responded to (e.g. avoid/transfer/mitigate/escalate) and were not realized
4. Identified risks which were responded to (e.g. avoid/transfer/mitigate/escalate) and were realized
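For what it's worth, here is a sketch of that four-way split, assuming each risk record carries a response type and a realized flag as in the earlier examples; the response names mirror the list above and are not tied to any specific tool:

```python
# Sketch of the four-way classification. Case numbers match the list above;
# the split is "just accepted" versus an active response, crossed with
# whether the risk was realized.
ACTIVE_RESPONSES = {"Avoid", "Transfer", "Mitigate", "Escalate"}

def classify(response: str, realized: bool) -> int:
    """Return the case number (1-4) for a risk."""
    responded = response in ACTIVE_RESPONSES
    if not responded and not realized:
        return 1  # accepted, not realized
    if not responded and realized:
        return 2  # accepted, realized
    if responded and not realized:
        return 3  # responded to, not realized
    return 4      # responded to, realized

assert classify("Accept", False) == 1
assert classify("Accept", True) == 2
assert classify("Mitigate", False) == 3
assert classify("Transfer", True) == 4
```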