Weighted Average of Dimensions for Initiatives

I have long advocated for and attempted to practise dimensionalising competing development requests. Generally, two dimensions are sufficient (value and effort), but I’ve created three-dimensional models by further breaking out customer value and business value.

Each of these axial dimensions is further dimensionalised into, say, ten subdimensions. On the effort axis, I might have items such as:

  • Does this build on an existing capability?
  • Does this have an executive sponsor in place?
  • Has this already been budgeted?
  • Is there an existing governance structure in place?
  • Does this require new cross-functional processes?
  • Is there already a change management approach for this feature?
  • Are there partner or vendor dependencies?
  • Does this involve new technologies?
  • Does this touch multiple platforms or channels?

On the value axis, I might include these items:

  • Revenue growth
  • Revenue protection
  • Cost reduction
  • Customer acquisition
  • Customer retention
  • Investor relations (PR)
  • Number of users
  • Frequency of use
  • Efficiency play
  • Cost of delay
  • Alignment with strategic goals
  • Brand differentiator

Each enterprise is different. The goal is to create dimensions relevant to the organisation. These items are then assigned a weight. Generally speaking, it doesn’t matter how the weights are assigned so long as they express relative importance. I like to have the weights sum to 100 per cent, but whatever values you choose will be normalised by dividing by their sum anyway.
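As a minimal sketch of how this might be implemented in Python (the subdimensions, weights, and ratings below are illustrative placeholders, not a prescribed set), the normalisation is simply a division by the weight total:

```python
# Weights need not sum to 100; they are normalised by their sum anyway.
VALUE_WEIGHTS = {
    "revenue_growth": 30,
    "cost_reduction": 25,
    "customer_retention": 25,
    "strategic_alignment": 20,
}

# Effort subdimensions, rated so that higher means more effort.
EFFORT_WEIGHTS = {
    "capability_gap": 40,         # effort to build missing capabilities
    "funding_and_governance": 35, # effort to secure budget and sponsorship
    "new_technology": 25,
}

def weighted_score(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of ratings (say, 0-10 per subdimension),
    normalised by dividing by the sum of the weights."""
    total = sum(weights.values())
    return sum(ratings[name] * weight for name, weight in weights.items()) / total

# One hypothetical initiative, rated on each axis.
value = weighted_score(
    {"revenue_growth": 8, "cost_reduction": 2,
     "customer_retention": 6, "strategic_alignment": 7},
    VALUE_WEIGHTS,
)
effort = weighted_score(
    {"capability_gap": 3, "funding_and_governance": 9, "new_technology": 5},
    EFFORT_WEIGHTS,
)
print(f"value={value:.1f}, effort={effort:.1f}")  # value=5.8, effort=5.6
```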

Once this data structure is implemented, it can be represented on a scatterplot. You have a little leeway in how you choose to assess winners, losers, and also-rans, but in general the upper-right quadrant is populated with winning initiatives. Do them first. If you plot value on the Y-axis and effort on the X-axis (inverted, so that the rightmost items require the least effort), then choose the topmost, rightmost item first. Any items in the lower-left quadrant offer little value and are difficult to do. Just stop. Don’t do these.
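Continuing the sketch above, quadrant placement might look like the following, where the scale and midpoint are assumptions chosen to suit the 0-10 ratings:

```python
SCALE = 10.0    # ratings run 0-10
MIDPOINT = 5.0  # dividing line between quadrants

def plot_position(value_score: float, effort_score: float) -> tuple[float, float]:
    """X is inverted effort, so low-effort initiatives land on the right;
    Y is simply the value score."""
    return (SCALE - effort_score, value_score)

def quadrant(value_score: float, effort_score: float) -> str:
    x, y = plot_position(value_score, effort_score)
    vert = "upper" if y >= MIDPOINT else "lower"
    horiz = "right" if x >= MIDPOINT else "left"
    return f"{vert}-{horiz}"

print(quadrant(5.8, 5.6))  # the initiative scored earlier lands upper-left
```

An upper-left item like this one is valuable but effortful; reducing its effort scores (see below) can nudge it rightward.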

The good thing is that if the weighting changes over time, the initiatives will be reshuffled in concert with the change. For example, in year one perhaps the focus is on revenue growth, so it gets a heavy weight. The next year, cost savings are all the rage. By amending the weights, the revenue-growing items will be deprioritised relative to the cost-cutting projects.
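To see the reshuffle concretely, rerun the same ratings against an amended weight set. Continuing the sketch (all figures made up), the year-two weights shift the emphasis from revenue growth to cost reduction:

```python
# Reuses weighted_score from the earlier sketch; the ratings are unchanged.
initiatives = {
    "self-service portal": {"revenue_growth": 8, "cost_reduction": 2,
                            "customer_retention": 6, "strategic_alignment": 7},
    "invoice automation":  {"revenue_growth": 2, "cost_reduction": 9,
                            "customer_retention": 4, "strategic_alignment": 5},
}

VALUE_WEIGHTS_Y2 = {  # year two: cost savings are all the rage
    "revenue_growth": 10,
    "cost_reduction": 45,
    "customer_retention": 25,
    "strategic_alignment": 20,
}

# Only the weights move, so every initiative's score (and its position
# on the chart) is recomputed from the same ratings.
ranked = sorted(initiatives,
                key=lambda name: weighted_score(initiatives[name], VALUE_WEIGHTS_Y2),
                reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(initiatives[name], VALUE_WEIGHTS_Y2):.2f}")
```

Under the year-one weights, the portal outranks the automation project (5.80 versus 4.85); with the year-two weights, invoice automation jumps ahead (6.25 versus 4.60).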

Moreover, any new initiatives will use the same weighting and will be prioritised accordingly. There are challenges to this model. Most notably, it biases tactical projects over strategic ones. Sometimes enabling groundwork needs to be performed to bring the effort dimension into line. In this case, you can either add a strategic value measure or simply make that decision outside of this system. You may also wish to combine related initiatives to gain relative value and nudge your initiative up the priority ladder.

Although I advised not to attend to the lower left-hand items, you may be able to reduce the amount of effort by, say, getting a sponsor or funding, creating a change management approach, or addressing any of the other effort measures.

To be honest, in most organisations I’ve found value measures to be seriously lacking, and where they are required, there is no governance to determine whether the estimate was made in good faith or the value target was achieved. Rarer still is a figure to reflect the cost of doing nothing, an important but often overlooked measure. I’ve discussed this before.

I’ve used this approach in the past. In fact, I’ve held live scoring sessions where each requester presented a summary of their pitch and a crowd of upwards of 30 peers each voted along the dimensional scales. It was interesting to see some favoured projects fall into the red zone of the lower left. The good thing is that it was transparent: people could see why their pet initiative was not getting any traction. They could also see why another project, one they may not have seen the value in, was beating theirs.

In the end, I offered to tweak the weightings. We didn’t. Once the dimensions and weights were known, people also had clues as to how to elevate their initiatives. It does take a fair amount of effort to suss out some of the values, and unless you instrument and measure the actual-versus-expected value and govern the process, you are likely to get inflated numbers submitted just to gain a higher ranking.

In closing, I have found that most people have little idea what value a request might bring. Sadly, this is even true of product managers. And if you are not in a data-driven, value-seeking organisation, you are not likely to get any better practice than raising a damp finger to the wind.
