In the introduction to the Win/Loss Analysis in FinTech interview series, “It’s Not What You Think!”, we launched a discussion about what win/loss analysis is, who it is for, why FinTech is different, and how to run an effective analysis program in that market.  In this interview, Richard Case and I talk about data-driven win/loss analysis, ROI issues, and how asking the right questions during the interview shapes the study results.

Dan: Richard, quants love data.  So, let’s talk about how to make win/loss interviewing data-driven. How does win/loss analysis turn the unstructured content from an interview with a buyer into actionable data?

Richard: Well, not all win/loss analysis out there turns qualitative interview responses into quantitative, analytical data to support decision-making.  It depends on how you do it.  But let’s back up. When you say “quants,” who do you mean?

Dan:  Unlike fundamental investors, who look for individual stocks that provide the best returns, quants try to design entire portfolios that maximize returns, using mathematical and statistical modeling to understand the patterns of tradable instruments.

Richard: OK. And, what data do they use?

Dan:  Pretty much anything, from prices of financial instruments to fundamental and economic data to alternative data to “Big Data.” And it’s not just numbers, it’s also text.  For instance, social sentiment moves financial markets.

[For more on big data, alternative data, and the “data deluge gap,” see Moore’s Law is Dead, Again]

Richard:  So, just as quants turn numbers, text, and alternative data into insights that earn them the best returns, win/loss analysis should turn qualitative interview responses into quantitative, analytical data.  To be worth doing at all, win/loss should deliver insights and recommendations for changes to the business that will improve future competitive outcomes.  Win/loss that merely tells you what happened in the past is insufficient.

Dan:  Right. Just as a quant tries to forecast possible future price changes, anyone conducting win/loss analysis needs to forecast the revenue impact of competitive changes.  The analysis should distinguish between findings that are statistically significant and those that are merely anecdotal – detecting the signal in the noise.  That’s valuable because it gives business managers the information they need to make significant investment decisions.  Not investments in the financial markets – I mean investments in their own businesses: new products, improved operational processes, and the like.  Improvements that result in increased revenues.

Dan: Given that, how do you get there from a set of interview questions?

Richard:  That’s important – the questions can’t be canned, and the interviewer can’t merely “stick to a script.”  Overly scripted interviews can cause the respondent to shut down, and they can waste a lot of time reviewing issues irrelevant to the deal at hand.  You need to focus on what mattered most to the customer, which means being flexible enough to respond to what they tell you and spending most of the interview time investigating WHY the customer rated vendors as they did – especially where the buyer perceived large gaps between the vendors.  You have to uncover, in detail, how and why these relative strengths and weaknesses became apparent to the buyer during the competing vendors’ sales cycles.  Understanding respondents’ perceptions in that detail is what gives you concrete recommendations on how to improve.

Dan:  Yeah, but that sounds anecdotal. How do you know you can rely on that?

Richard: It would be if you conducted only one interview, but with a representative sample, it comes down to the statistical significance of the data resulting from the interviews.  You’re not just looking for common comments that respondents make.  The only way this works is by building decision-making models on independent criteria where important differences were perceived.  However much you try to structure the discussion, the best interviews are the ones where the respondent provides these criteria first unprompted, and then when prompted.  So you have to be reasonably expert in the domain, as an interviewer, to pick this up in an unstructured way.  Then you have the respondents provide numerical comparative vendor ratings – on the criteria they provided in the unstructured part of the discussion.

[For more on the perils of robotic interviews, and why you need a domain expert instead, see Win/Loss Interviewing: It Takes an Expert]
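
[To make that concrete, here is a minimal sketch of how a single interview’s findings might be captured as structured data.  The schema and field names are illustrative assumptions, not a prescribed format:]

```python
# A minimal, illustrative record of one win/loss interview.
# Criteria come from the respondent (first unprompted, then prompted);
# ratings are the respondent's numerical comparisons of the vendors.

from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str               # e.g., "customer service", "API latency"
    unprompted: bool        # did the respondent raise it before being asked?
    importance: int         # respondent's importance rating, say 1-10
    vendor_ratings: dict    # vendor -> rating on this criterion, say 1-10
    why: str                # verbatim comments explaining the ratings

@dataclass
class Interview:
    deal_id: str
    outcome: str            # "win" or "loss"
    segment: dict           # e.g., {"region": "EMEA", "product": "Payments"}
    criteria: list = field(default_factory=list)

interview = Interview(
    deal_id="2024-017",
    outcome="loss",
    segment={"region": "EMEA", "product": "Payments"},
    criteria=[
        Criterion("customer service", unprompted=True, importance=9,
                  vendor_ratings={"Us": 5, "CompetitorA": 8},
                  why="Their support team answered in hours, not days."),
    ],
)
```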

Dan:  So, you’re structuring the data “on the fly.”

Richard:  Right.  And that’s at the heart of the interview: the discussion of key decision criteria.  This is where your FinTech expertise makes a difference in the final analysis – from having asked the right questions based on what the respondent said, not necessarily from sticking to the prepared criteria, though those are grounded in market knowledge too.

Dan: Sounds like the point is to follow the respondent and let the discussion go where it needs to go – which takes an understanding of the industry.  Then the key criteria show up in the interview results, and you have to capture those.  But then you have the next problem: how do you turn the insights into numerical data to analyze?  If you’re swimming in data that you can’t consume and analyze, you’re stuck in the plight of the Ancient Mariner: “Water, water, everywhere, [but not a] drop to drink.”

Richard:  You solve that by assigning importance ratings to the criteria – that gives you the weightings – and then by rating the competitors against those criteria.  Finally, you have to ask WHY the vendors were rated differently.  Those comments are generally the most important part of the interview, because they reveal exactly why a vendor won or lost and what actions might counter a weakness or sustain a strength.
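
[As an illustration, a weighted scoring pass over records like the one sketched above might look like this.  The criteria, weights, and ratings are made-up numbers:]

```python
# Importance-weighted rating gaps: for each criterion, weight the
# rating difference between us and a competitor by how much the
# respondents said that criterion mattered.

# (criterion, importance weight, our avg rating, competitor avg rating)
# aggregated across interviews; all numbers illustrative
ratings = [
    ("customer service",  9, 5.2, 8.1),
    ("feature set",       6, 7.8, 7.5),
    ("price",             8, 6.9, 7.2),
]

total_weight = sum(w for _, w, _, _ in ratings)
for name, w, us, them in ratings:
    gap = us - them
    print(f"{name:18s} weighted gap: {w * gap / total_weight:+.2f}")
# Large negative weighted gaps flag the criteria most worth fixing;
# the WHY comments behind them say what, exactly, to fix.
```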

Dan:  Sounds like a big-data approach.  Do you need a data lake, a virtual sea of interview data, to produce statistical significance in the results?  It would be very difficult to consume all of that data.

Richard:  Not really – you just need to know when you have enough.  You can use Student’s t-test to determine confidence in the findings, and you can use those statistics to examine the respondent quotes in a prioritized way.  I would also suggest a Monte Carlo analysis.
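
[For example, a paired t-test on per-interview ratings can tell you whether a perceived gap between vendors is statistically significant or likely just noise.  A sketch, assuming SciPy and illustrative ratings:]

```python
# Is the gap on "customer service" between us and CompetitorA real,
# or could it be sampling noise? A paired t-test on per-respondent
# ratings gives a p-value to prioritize findings by.

from scipy import stats

# per-respondent ratings on one criterion (illustrative)
ours   = [5, 6, 4, 5, 7, 5, 6, 4, 5, 5]
theirs = [8, 7, 8, 9, 7, 8, 6, 8, 7, 8]

t_stat, p_value = stats.ttest_rel(ours, theirs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (say < 0.05) means the gap is unlikely to be noise,
# so the quotes behind it deserve attention first.
```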

Dan: To determine what, exactly?  Quants would use Monte Carlo for options pricing.

Richard:  Well OK, but for win/loss, Monte Carlo analysis is used to determine a statistical range of outcomes in predictive, what-if scenario analysis.  Meaning, you can easily query the data set with “what-if” questions and model the range of possible outcomes.  You can also segment the data by product line, competitor, region, and other custom demographics – or any combination.  It helps if you have software to do this.
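
[A sketch of what that Monte Carlo step might look like: sampling the uncertain inputs many times to get a range of outcomes instead of a single point estimate.  The distributions, and the logistic link from rating gaps to win probability, are assumptions for illustration:]

```python
# Monte Carlo what-if: instead of a single predicted win rate,
# sample the uncertain inputs (importance weights, rating gaps)
# many times and report the range of plausible outcomes.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Uncertain inputs, with spreads estimated from the interviews (illustrative)
service_gap = rng.normal(loc=-2.5, scale=0.8, size=N)     # us minus them
service_weight = rng.normal(loc=0.40, scale=0.05, size=N)

# What-if: closing half the service gap; a toy logistic link from
# weighted gap to win probability (the model form is an assumption)
baseline = 1 / (1 + np.exp(-service_weight * service_gap))
scenario = 1 / (1 + np.exp(-service_weight * service_gap * 0.5))

lift = scenario - baseline
lo, mid, hi = np.percentile(lift, [5, 50, 95])
print(f"win-rate lift: 5th {lo:.1%}, median {mid:.1%}, 95th {hi:.1%}")
```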

Dan: OK, so you know all of this – so what?

Richard:  Well, there really are two main outcomes.  If you conduct a thorough analysis of your wins and losses and find out why you won and lost, but just leave it at that, you still don’t know where to take your organization from there.  It’s like driving your car in reverse – you’re looking over your shoulder the whole time rather than at the road ahead.  What you need is a forward-looking analysis, which gives you the ability to run what-if scenarios on your offering.

Dan:  Meaning, if you tweak something that made a big difference in your wins and losses, what’s the impact?

Richard: Yes, in terms of additional revenues, built on the model you constructed from the interviews.  For instance, if you add features to your product, the model could warn you that those improvements would likely generate only an additional $500k in revenues per year, because the feature set wasn’t as influential in competitive deals as you thought.  Whereas if you improve your customer service, you might find that the model predicts an extra $10M in revenues.  And since you know the cost of making each improvement, you can calculate the bottom-line impact.

[For more on establishing the ROI of a win/loss analysis program, see Win/Loss Best Practice #2: Know your ROI]
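
[A back-of-the-envelope version of that what-if, reproducing the hypothetical $500k and $10M figures above.  The pipeline size and win-rate lifts are illustrative assumptions, not outputs of a real model:]

```python
# Revenue what-if: translate a modeled win-rate lift into dollars
# against the competitive pipeline, then net out the cost of the fix.

pipeline = 100_000_000  # annual revenue contested in competitive deals (assumed)

scenarios = {
    # improvement: (modeled win-rate lift, cost to implement) - illustrative
    "add product features":     (0.005, 300_000),
    "improve customer service": (0.100, 2_000_000),
}

for name, (lift, cost) in scenarios.items():
    extra_revenue = pipeline * lift
    print(f"{name:25s} +${extra_revenue:,.0f}/yr, cost ${cost:,.0f}, "
          f"net ${extra_revenue - cost:,.0f}")
# Features: +$500,000/yr; customer service: +$10,000,000/yr - matching
# the hypothetical figures above and making the prioritization obvious.
```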

Dan: OK, so that’s the first of the two main outcomes.  What’s the second?

Richard:  Well, now the findings won’t be merely anecdotal; you can act on them.  And that’s the key. Execution is everything.