Overstating the obvious is not always overstated nor obvious. In this segment, I discuss shifting perspective from outputs to outcomes.
It’s easy in a development environment to become obsessed with output at the expense of outcomes. So how can we shift the focus from the number of features implemented or lines of code written to value realisation? There are several challenges, almost all of them cultural.
If you take one and only one thing from this segment, it should be: don’t give a user or customer something just because s/he asks for it, and don’t assume you know what the customer needs. I’ll use the terms customer and user interchangeably throughout.
If you are just giving the customer what s/he requests, you are providing little value as a business analyst or architect, beyond being an order taker. If this is the case, just create a form to accept requests, and have the project manager or development manager queue it for you.
Oftentimes—and more often than you might think—the customer doesn’t know what feature s/he wants. S/he only knows what s/he needs done—what the problem is.
As economist and Harvard University professor, the late Theodore Levitt, reminded us, ‘People don’t want a quarter-inch drill. They want a quarter-inch hole.’ The problem or need is the quarter-inch hole. A quarter-inch bit is only one possible solution. Depending on context, it may be the best solution or a poor one.
Your goal is to solve problems, not just implement suggested solutions. Perhaps the suggested solution is perfect, but validate that it solves a problem. And don’t get caught up in edge and corner cases. Moreover, make sure the solution adds value, and quantify that value, because it will become important in prioritising requests. All else being equal, if you have two competing requests and only have time to develop one for the next release, you need a way besides HiPPO—the highest-paid person’s opinion—to decide which one to implement.
Speaking of value, what is the value proposition for this feature? And how many people will it affect? If a feature saves $10 a day per user and affects 10 users, that’s $100 a day. If it affects 1,000 users, that’s $10,000 a day. This is important perspective: it would be harder to justify spending $10,000 on development in the first case, but easier in the second.
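To make the arithmetic concrete, here is a minimal sketch of the value and payback calculation. The per-user savings and user counts come from the example above; the $10,000 development cost is an assumed figure for illustration only.

```python
def daily_value(savings_per_user: float, users: int) -> float:
    """Daily value realised by a feature across its user base."""
    return savings_per_user * users

def payback_days(dev_cost: float, savings_per_user: float, users: int) -> float:
    """Days needed for realised value to cover the development cost."""
    return dev_cost / daily_value(savings_per_user, users)

# The $10-a-day example from the text, with an assumed $10,000 build cost.
print(daily_value(10, 10))              # → 100
print(daily_value(10, 1_000))           # → 10000
print(payback_days(10_000, 10, 10))     # → 100.0 (days to break even)
print(payback_days(10_000, 10, 1_000))  # → 1.0
```

A hundred-day payback versus a one-day payback is the difference between a marginal proposition and an obvious one, which is why audience size matters as much as per-user value.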
In a case study I was recently presented, I saw a request for salespeople to be able to see marketing activity related to potential customers and for the records to somehow be colour-coded to reflect where they reside in the sales funnel. Given that these were presented as solutions, I can conceive of some theoretical value in each. For the marketing activity history, perhaps the salesperson could review this information and use it to help determine specific interest and perhaps increase the probability of closing a deal. If this is the case, I would want to understand several things.
First, is this something most salespeople are apt to use? Is this something that the best salespeople are apt to use? Perhaps the new salespeople would use this information. So use case and segmentation information would be helpful to the value assessment.
Second, how would we measure the success of this feature? We’d need some sort of baseline. If we know what the historical close or conversion rate is, we could compare against it to see if it increases with this new feature. Perhaps it’s not the conversion itself, but deal size or customer lifetime value. In any case, we need to establish a value hypothesis before much additional effort is expended.
If we don’t have a baseline, perhaps we A/B test the feature, specifying a control group for comparison purposes. Of course, we’d need to control for other factors that could explain higher performance in the treatment group.
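As a sketch of what evaluating such a test might look like, here is a standard two-proportion z-test comparing close rates between a control and a treatment group. The deal counts are hypothetical, and this assumes conversions are simple independent counts; a real analysis would also handle the confounders mentioned above.

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: control closes 80 of 1,000 deals; treatment 104 of 1,000.
z = two_proportion_z(80, 1_000, 104, 1_000)
print(round(z, 2))  # |z| > 1.96 would suggest a real difference at the 5% level
```

Note that even a 30 per cent relative lift can fall short of significance at this sample size, which is exactly why the value hypothesis and measurement plan need to be established up front.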
This brings us to attribution. How do we know that some feature, say marketing history, is responsible for improved performance by whatever measure? Moreover, just because a new feature is made available doesn’t mean it’s being utilised. In this case, I’d want to see the relationship between who is accessing this information, how long they are interacting with it, and any performance metrics. I could imagine a scenario where access to these data becomes a distraction and slows the sales process, resulting in fewer sales or lower performance. If a dozen new performance-enhancing features were just added, which ones are realising value? If performance increases, perhaps they all contributed to some degree or another, but effective attribution measurement tells us which.
To put a finer point on it, let’s imagine that nine features increase performance and three detract, but on balance performance is net positive. You’re ahead of the game, so that’s good. Right? Not so fast. If three are detracting, we want this to be known so we can pull those features or revise them. And if no one adopts these features, training and awareness issues aside, we’ve misallocated our resources developing them. And you don’t want people thinking, ‘I remember when we added X to our platform and sales increased’, when X was actually a detractor.
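The masking effect is easy to demonstrate. In this sketch the feature names and per-feature lift figures are entirely hypothetical stand-ins for what an attribution analysis would estimate; the point is that a positive total can hide negative contributors.

```python
# Hypothetical per-feature contributions to performance (percentage-point
# lift in close rate, as an attribution analysis might estimate them).
feature_lift = {
    "marketing_history": 1.5,
    "colour_coding": -0.6,
    "quick_quote": 0.9,
    "auto_followup": 2.1,
    "inline_chat": -1.1,
}

net_lift = sum(feature_lift.values())
detractors = sorted(f for f, lift in feature_lift.items() if lift < 0)

print(f"net lift: {net_lift:+.1f} points")            # positive overall...
print(f"candidates to pull or revise: {detractors}")  # ...yet two features hurt
```

Looking only at the aggregate, every feature appears to have earned its keep; per-feature attribution is what surfaces the two candidates for removal or revision.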
The second request was colour-coding contacts based on their pipeline or funnel status. It’s obvious that someone wants their contacts to be visually differentiated, but what does the colour-coding do that some other solution wouldn’t? If I am reviewing a screen with mixed contacts, what information am I gaining? Why am I looking at mixed contacts? Is there a reason I need to see contacts of different statuses in the same view? What’s the use case? Maybe this colouration does have value. If so, what is it, and how will you instrument it? ‘I think it will help me work 20 per cent faster’ is a possible hypothesis, but make sure you can evaluate the claim. You might even be able to test this in a clickable prototype with mock data before devoting any development time.
When users request feature functionality, they are probably not aware of the cost and effort necessary to deliver the item. Even so, implementing a low-cost, low-value feature is not usually a good idea. As the saying goes, ‘Follow the money.’ No money? Stop following. Chase something else. How requests get prioritised relative to competing requests in the first place is a topic for another segment.
The point I’m making here is that outcomes are more valuable than output. A challenge some organisations face is having development resources dedicated to a single product or application, leaving us with several ‘local’ applications within a global context. Imagine you have four applications to manage. It’s conceivable that all of the highest expected value resides in only one of them. There are backlogs of feature requests in all four, each promising to deliver value, but from an enterprise or programme portfolio perspective, all effort should be concentrated on the one. Of course, this is just a hypothetical situation, but it is not inconceivable that at least one product has no value propositions that rise to the level of competing products. Transparency provides a solution. No product manager or owner wants their product to languish, but this visibility can serve as a signal that additional value needs to be found, or one must simply wait for diminishing marginal returns to set in on the other products in the portfolio. It’s also good to remember that there is a cost of doing nothing.
Whilst the first point was to never accept a feature request from a user at face value, the other is to not presume you know the solution out of hand. I may have shared this story before, but I was working with a lead user-experience designer for a bank that offered home mortgages. We had already researched the top customer needs and were designing page content. This UX designer had recently purchased a home and had experienced a certain problem. I don’t recall the details, but it was not high among the aggregated customer wants, and it did not appear as a common support request either online or at the call centre. As a UX expert, she should have known better, but emotions got the best of her, and she insisted that this feature be added. In her defence, it could have been a solution everybody wanted but didn’t know to ask for, but I’m going with the more obvious explanation: no one was asking because it wasn’t a common need.
Don’t presume you have the answer if you haven’t fully captured the problem. I’ve worked with many product managers who want to be the next Steve Jobs. ‘Customers don’t know what they want’, they protest. And whilst this may be true—we’ve already mentioned this—it doesn’t follow that you know the solution either. We’ve heard the quip attributed to Henry Ford that if he had asked people what they needed prior to the advent of the automobile, they’d have said faster horses. Let’s ignore that some people may have come to this conclusion; it’s unlikely to have been the consensus view. And if we come from a problem perspective rather than a solution-fishing expedition, we’ll have better luck anyway. In the early days, autos were neither reliable nor durable. They were finicky and didn’t have a very stable infrastructure to operate on. Without diverging into a segment on socialism, this is where government investment in roadways yielded profits for Mr Ford. And so it goes.
Before I end, let’s return to culture. Culture is what buys into the value perspective or leans toward kneejerk requests. Culture is what provides a wide global view versus narrow local views; cross-functional teams versus silos. Culture is what values measurement, incremental learning, and risk tolerance versus seat-of-the-pants, all-or-nothing propositions and risk aversion. I’d be willing to argue that most companies reside squarely on one side or the other.