Ignore this article
…if your company knows everything about the product it's developing. It has done this before, multiple times, and all market factors, as well as technical and organisational issues, are known, trusted and have been incorporated into the plan.
…if it does not need to respond to feedback, from the market as well as from internal teams, because it is a monopoly or operates in a vertical that surely will not be disrupted by its competitors.
Hopefully, you realised that this was a rhetorical proposition, and you are welcome to continue reading.
The company you work for has made a multi-year investment of human effort and resources to develop a certain project, and it is your responsibility to oversee its progress.
Funding for the project is on an annual or quarterly basis, and you want to make sure it stays within its allotted budget and time constraints.
You are not directly involved with the project's development, and thus lack detailed information about its development process, and sometimes even about why the company is developing it.
Because the process is opaque, you ask for status meetings, as they are the only source of information made available to you.
Despite process and reporting, however, the project incurs cost and schedule overruns, mismatched deliveries, and issues that people thought were known turn out to be unknown in nature.
Often you feel helpless, having no influence over the information that was collected. After collation and analysis, your role ends with escalation, without the executive ability to influence the project's scope or direction.
Learning from that experience, risk qualification is reassessed, resulting in front-loading more risk items for subsequent projects. Unwittingly, this is carried out on the assumption that the system is not complex, and that all aspects of the problems are known.
Frustrated, you apply yourself to overseeing a new development effort, yet nothing seems to change.
It seems evident that we need to revise our understanding of how to assess risk and investment management in light of a modern market environment. To quantify risk, and qualify investment and funding, we need to be able to apply effective metrics from which we can extract meaningful reports.
With this data we can reassess PMO/CAB functions to be able to respond to a disruptive landscape.
- Assessing risk and investment funding in light of market feedback
- PMO/CAB responsibilities in a disruptive landscape
…constitute what I see as the basis for product development “governance”.
Reassessing Risk Assessment
Current risk assessments deal with known “knowns” and account for “known unknowns” with unsubstantiated assumptions at best. Their premise is that systems are static and simple, as opposed to interdependent and complex. So current risk assessment is, in fact, applying its science to product development in another universe.
Metrics currently measure "actual", and report deviation in relation to "planned". My only logical deduction is that those who collect this kind of information (I'll discuss PMO below) are self-serving, and are, in fact, observing the original plan, with a deliberate disregard for what is happening in the real world where the product is, or is about to be, deployed.
Put differently: the current measurements enable a comparison between "planned" and "actual" tasks of a predefined plan, hence yielding a qualification of the plan itself, and not much more. I labelled it "self-serving" because it's often the same organization that created the plan that oversees it.
Smug when "actual" beats "planned", patting themselves on the back when they have proof that the estimates match reported data, wringing their hands when delays are discovered, as the team is beaten into shape to make up for its apparent loss in productivity.
Another critique of this method is that it assumes the plan is viable. From the sheer effort exerted to monitor and quantify "progress" against the plan, we can deduce that it's quite a large one. That, in turn, assumes that an equal amount of effort was put into making sure all aspects of the product were analyzed. That, in turn, signals that the team that analyzed the product offering and created the plan also assumed it knew everything about it, and that it would not be building a dynamic, complex system. Anyone adhering to these flawed measurements imperils their company as they desperately attempt to correlate them (again, self-serving) with empirical findings once the product is live. A personal and blunt example: I was a project manager on a project that was spot-on the plan and had all tests passing, yet failed completely in production when we went live. Unfortunately, I have more than one example of this reality.
In summary, we’re applying risk assessment based on measurement of the wrong things. A futile and useless endeavour.
Time to breathe.
An alternative could be to measure value at risk. If we want to avoid a lengthy definition of what “value” means, I suggest that we define it, for the purpose of this discussion, as “investment”. I see unrealized investment as risk.
This definition states, in other words, that the larger the unrealized investment, or un-measured value, the riskier the endeavour.
Assessing risk becomes comparing unrealized investment to realized value at any arbitrary moment.
Assessing risk by these means is greatly simplified because we can attach a number to it and measure it with simple arithmetic. It is closely related to sunk cost, but is dynamic: how large is the debt we carry by not having released the product, and thus not having proved whether it is indeed debt? The moment the product is in production, that debt is potentially offset by its value proposition. The more unrealized value, the greater the debt. Conversely, we now have an indication that the product is not viable if the realized value does not balance or exceed our debt.
As value is realized, the debt decreases through standard accounting mechanisms. With this, we have a metric that exposes our current risk level.
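As a minimal sketch of this arithmetic (the figures and the `Ledger` name are my own, purely illustrative), value at risk is simply cumulative investment minus cumulative realized value:

```python
from dataclasses import dataclass


@dataclass
class Ledger:
    """Hypothetical running ledger of investment versus realized value."""
    invested: float = 0.0   # cumulative investment (cost to date)
    realized: float = 0.0   # cumulative value realized in production

    def invest(self, amount: float) -> None:
        self.invested += amount

    def realize(self, amount: float) -> None:
        self.realized += amount

    @property
    def value_at_risk(self) -> float:
        # Unrealized investment: the debt we carry until production proves value.
        return self.invested - self.realized


ledger = Ledger()
ledger.invest(500_000)       # months of development, nothing released yet
print(ledger.value_at_risk)  # 500000.0 -- the entire investment is at risk

ledger.realize(150_000)      # first release starts delivering value
print(ledger.value_at_risk)  # 350000.0 -- risk decreases as value is realized
```

The point is not the bookkeeping itself but that the metric is computable at any arbitrary moment, rather than being a status-meeting opinion.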
Therefore, it is imperative to realize value as soon as possible to avoid being in a "sunk cost" situation. Answering how teams can do this is a segue to the second element of governance.
Reassessing PMO and CAB operations
The PMO came to be when the multitude of projects became too hard to view as a whole, and a broader view describing current activities needed to be presented to upper management. It has since evolved into trying to govern and optimise product development using economies of repetition in the execution of projects.
Riiight. Again, the fallacy of presuming that everything is known, and that thanks to that, the process can be, and will be, repeatable. And yet, they are not involved in day-to-day delivery. They usually belong to another department (PMO) reporting the aforementioned self-serving metrics in green, amber and red, while the economics of product development elude them.
In our times, the thought of handing over control to an externally governing body by fiat is nothing short of ludicrous.
The case with the CAB is even worse, since its members have little or no context for the project whose changes they are tasked to approve. It is usually a rubber-stamp meeting that all participants dread, not for the time that it wastes, but for the embarrassment of all those involved, including the members themselves. In most companies I've worked with, those meetings existed basically to approve budgets for contractors and had little to do with product engineering. Those few times when the discussion did revolve around actual changes, the members were quickly assured that, yes, all impacts had been analysed and that there would be no consequences as part of the change. Choosing to ignore that these are mere guesses, without a single line of code to prove them, the CAB approves, after careful deliberation.
This description of the function and operations of PMOs and CABs compels us to reassess their role in modern product development.
As part of the democratisation of product delivery, where intimate collaboration between cross-functional team members is the norm, reporting functions should be included within the teams themselves, under the term "meta-project" information and monitoring. Value at risk, for example, is a meta-project monitor. Eliminating steering committees and CABs, and replacing them with gathering information about what real customers want, is more effective than rubber-stamp meetings.
These functions should be treated just as functional stories or features are, and should have acceptance criteria just as stories do. Failing the acceptance criteria would effectively mean that the product is out of touch with its customers, and would lead to more value at risk, requiring corrective measures from within the team.
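Such an acceptance criterion can be expressed as an automated check, just like a functional story's. As a hedged sketch (the function names, figures and threshold are all assumptions, not an established practice), the "value at risk" meta-project story might carry a criterion that fails when unrealized investment grows beyond an agreed bound:

```python
def value_at_risk(invested: float, realized: float) -> float:
    """Unrealized investment: what we stand to lose if value is never proved."""
    return invested - realized


def check_value_at_risk(invested: float, realized: float, limit: float) -> bool:
    """Acceptance criterion for the hypothetical 'value at risk' meta-project story.

    Returns False (the criterion fails) when unrealized investment exceeds
    the agreed limit, signalling the team to take corrective action from within.
    """
    return value_at_risk(invested, realized) <= limit


print(check_value_at_risk(invested=400_000, realized=250_000, limit=200_000))  # True
print(check_value_at_risk(invested=400_000, realized=100_000, limit=200_000))  # False
```

A failing check here is not an escalation to a steering committee; it is a signal consumed by the team itself.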
An alternate framework
I think I've argued that managing value at risk is the most valuable governance operation we have, and that it requires up-to-date measurements from production.
From here, it's a small step to see that realisation of value (i.e. the reduction of investment risk) is more important than a report, based on burn-down charts, that shows that people are busy.
To keep project data fresh and relevant, we need to reduce lead time as much as we possibly can, allowing us to accelerate the collection, analysis and action on feedback from products in our users' hands.
Reducing batch size, coupled with an efficient delivery pipeline, will convert risk to realized value from the first deployment to production.
Thus the alternative to current governance is to value the above, to work from within the teams to assure a healthy delivery pipeline by keeping stories small, and to constantly reassess the value stream map (VSM).
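To illustrate the effect of batch size on value at risk, here is a toy comparison (all figures invented) of one big-batch release versus monthly small releases of the same scope, where each small release realizes its value one month after the investment is made:

```python
def value_at_risk_over_time(invest_per_month, release_schedule, months):
    """Track unrealized investment month by month.

    release_schedule maps a month number to the value realized that month.
    """
    invested = realized = 0.0
    risk = []
    for month in range(1, months + 1):
        invested += invest_per_month
        realized += release_schedule.get(month, 0.0)
        risk.append(invested - realized)
    return risk


# Same investment rate; only the batch size differs.
big_batch = value_at_risk_over_time(100, {12: 1200}, 12)            # one release at month 12
small_batch = value_at_risk_over_time(100, {m: 100 for m in range(2, 13)}, 12)  # monthly, one-month lag

print(big_batch[5], small_batch[5])        # risk at month 6: 600.0 vs 100.0
print(max(big_batch), max(small_batch))    # peak risk: 1100.0 vs 100.0
```

With small batches, value at risk stays bounded by roughly one batch of investment; with the big batch, it grows until it equals the entire unreleased project.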
- Dan North and Associates
- Sunk Cost
- Change Management