Researchers develop technologies through a process of trial and error. Whether a technology works is not assessed based on opinions, but by producing evidence that it actually works. However, when it comes to business, our experience shows that these highly skilled individuals switch to a quite different rationale: planning instead of doing, relying on expert opinions instead of facts, and reasoning from highly abstract market research publications instead of showing that something really works with a single customer.
Likewise, it is sometimes hard to understand how funding and resource allocation decisions are made for R&D projects. For most larger funding programs, some kind of committee or jury receives proposals, judges them and decides. There are several issues with this decision-making method:
Lack of actual data: Although many professionally managed calls demand a comprehensive description of the technology, the business idea or business model, and the project team, all of this data describes a venture still in the ideation phase and is thus imaginary.
Incomplete data: Many committees rely upon a one-time, one-dimensional decision. How much does a written proposal tell you about the team’s passion? How much will a 10-minute pitch help you really get your mind around the idea, its potential and its challenges? What actual, valid data do you have for your decision?
Group decision issues: Discussions among committee members are often biased by political issues, and extroverted members expressing strong opinions sometimes steer decisions to their personal advantage. The downside of the democratic nature of most committee decisions is the propensity to agree. Or as Carl Spetzler puts it: “Most people confuse agreement with decision quality. What we want to have is a high quality decision, not just agreement around a bunch of nonsense.”
Definition of success: When tracking the progress of projects and linking tranches of funding to milestones, how exactly do these committees define success or progress? Is a team that delivered a full business plan and a prototype with built-in customer feedback within a three-month incubation program more successful than a team that showed within the same amount of time that the business idea is not worth pursuing?
The same is true, by the way, for many decision-making processes within venture teams. The following quote is my favorite excerpt from a must-read book:
“In the field of medicine, I told Dalio, there’s widespread consensus among experts that the quality of evidence can be classified on a scale of strength from one to six. The gold standard is a series of randomized, controlled experiments with objective outcomes. The least rigorous evidence: ‘the opinion of respected authorities or expert committees.’ The same standards are part of the growing field of evidence-based management and people analytics, in which leaders are encouraged to design experiments and gather data instead of relying solely on logic, experience, intuition, and conversation.” – Originals: How Non-Conformists Move the World by Adam M. Grant
How much does an expert, a human being, actually know about the future? To what extent can human brains conceive and predict the success of an innovation project? There are many famous quotes pointing out the inability of humans to anticipate the future value of an innovation. And the more of an expert you become, the less suited you are to recognizing the non-obvious potential of game-changing innovations. I always tell my teams: “You are highly respected experts in your field, but when it comes to your venture project, you are not an expert anymore.” Accepting this perceived loss of status and shifting accordingly into a learning mindset lies at the heart of developing innovation teams.
Given that insight, we must change how we structure decision-making processes for innovation projects of all kinds. Stop sharing opinions, start collecting evidence: within teams, and likewise for funding decisions by committees. In practice, this means the following:
Base decisions on actual data: When a team tells you that customers are interested in buying their product, ask them what evidence they can provide to prove that customers will actually buy. When someone in a meeting says that she believes in this team, what evidence do we have regarding, for example, team cohesion? Do not invest in stories, ideas and opinions, but in facts and in people who know how to collect evidence. Sometimes that evidence may only be qualitative data, which is still far better than no data at all.
3-dimensional maturity: The stages of technology readiness levels (TRL) are well defined. However, the understanding of maturity levels is rather abstract when it comes to market readiness or team readiness. You cannot manage what you do not measure, and perhaps the lack of criteria for market and team readiness leads to exactly those opinion-based judgements. Markets and teams are more qualitative, fuzzy and thus subjective. This demands even more effort from committees and from the teams themselves to assess their readiness along all three dimensions: technology, market and team. We are currently collaborating with the University of St. Gallen on developing criteria and scales similar to the TRL levels.
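The idea of tracking maturity along three dimensions can be made concrete in a minimal sketch. Note that only the TRL scale (1–9) is standardized; the market and team scales here are hypothetical placeholders, not the criteria being developed with the University of St. Gallen:

```python
from dataclasses import dataclass

@dataclass
class ReadinessProfile:
    """A venture's maturity, tracked along three dimensions (illustrative only)."""
    technology: int  # TRL: 1 (basic principles) .. 9 (proven in operation)
    market: int      # hypothetical market readiness level, 1 .. 9
    team: int        # hypothetical team readiness level, 1 .. 9

    def weakest_dimension(self) -> str:
        """A venture is only as mature as its least-developed dimension."""
        scores = {
            "technology": self.technology,
            "market": self.market,
            "team": self.team,
        }
        return min(scores, key=scores.get)

# A technically advanced venture that has barely touched its market:
venture = ReadinessProfile(technology=6, market=2, team=4)
print(venture.weakest_dimension())  # -> market
```

A profile like this makes the imbalance visible at a glance: a committee looking only at the TRL of 6 would miss that market evidence is nearly absent.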
Redefining the CEO: The notion of a so-called “chief experimentation officer” has emerged over recent years, alluding to the need for new leaders and decision makers in innovative environments. Instead of leaders and committees that evaluate business proposals based upon opinions and amount of expertise, these key roles should be filled by people who provide guidance from an innovation-process perspective: how to collect evidence by experimenting. Decision makers with an “I know it all” mindset are obsolete; those with an “I know the way to know it all” mindset become key.
Innovation accounting: A crucial factor in shifting mindsets towards a more evidence-based and experimental approach is the redefinition of success and progress. Of course you need some kind of controlling, KPIs and measures of success in order to assess processes and hold innovators accountable. However, innovation accounting follows different rules than standard management approaches. Since innovation happens in environments of uncertainty, progress means learning, and success is gathered evidence. A team stating after a 12-week acceleration program that they were completely naive when they started their venture and that they will stop their efforts right there is just as successful as a team that won a customer contract in the same time.
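The accounting rule above — progress means learning, and an invalidated hypothesis is worth as much as a validated one — can be sketched as a hypothetical ledger. The names and structure are illustrative assumptions, not a standard innovation-accounting API:

```python
from dataclasses import dataclass

@dataclass
class LearningLedger:
    """Counts tested hypotheses instead of revenue (illustrative sketch)."""
    hypotheses_validated: int = 0
    hypotheses_invalidated: int = 0

    def record(self, validated: bool) -> None:
        """Log the outcome of one experiment."""
        if validated:
            self.hypotheses_validated += 1
        else:
            self.hypotheses_invalidated += 1

    @property
    def progress(self) -> int:
        # Progress = learning: invalidated hypotheses count exactly as much
        # as validated ones, because both are gathered evidence.
        return self.hypotheses_validated + self.hypotheses_invalidated

ledger = LearningLedger()
ledger.record(validated=False)  # "customers won't pay for feature X" - learned
ledger.record(validated=True)   # "users return weekly" - learned
print(ledger.progress)  # -> 2
```

Under this measure, the team that disproves its core assumptions and stops shows the same progress as the team that signs a customer, which is exactly the point.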
Experimentation as a strategy, assessing experiments for investment decisions and designing experiments for a single project are things most organizations and people are not used to. However, this is what researchers basically do all the time when developing technologies. You can develop markets and business models the same way. Watch our workshop series to learn more about evidence-based innovation beyond the lean startup. We also recommend the LinkedIn post by Zevae Zaheer.
//by Thorsten Lambertus