This image was created with the help of Microsoft Designer
As the saying often attributed to Lewis Carroll goes, “If you don’t know where you are going, any road will get you there.”
I recall this wisdom every time I listen to the gospelers of the “fail-fast-fail-often” creed. I suspect that the ease with which these guys accept failure — and then rush to celebrate it — stems, at least in part, from their inability to define success.
And if you don’t know what success is, every attempt is a failure. (Worse yet, as our politicians regularly show us, when you don’t know what you’re doing, every attempt can be hailed as a success. But I don’t want to go there.)
As Andrew Binns and Andreas Brandstetter write in Chapter 1 of the book that Andrew and I have recently co-edited, innovation starts with a clearly articulated goal: a North Star that lays out the firm’s strategic ambitions and guides its subsequent actions.
Success (and failure) is then defined not by the sheer number of attempts but by the number of steps that bring you closer to the established goal.
As Andrew likes to say, it’s not about how often we fail, but how much we learn — and, unfortunately, one doesn’t guarantee the other. Many people and firms fail often — and repeatedly! — simply because they don’t learn from their previous failures. Nothing to celebrate here, if you ask me.
Speaking of learning. A 2019 paper in Nature Communications examined how the difficulty of training affects the rate of learning. The paper shows that learning is fastest when training accuracy (a measure of difficulty) is about 85% or, conversely, when the rate of training error is around 15%. In other words, to learn successfully, one should be roughly five times more right than wrong.
So much for failing often!
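The arithmetic behind that “five times” figure is worth making explicit. A minimal sketch, taking the paper’s 85% optimal training accuracy at face value:

```python
# Sanity check of the arithmetic behind the "85% rule".
# Assumption: the paper's optimal training accuracy of 0.85, taken at face value.

optimal_accuracy = 0.85                     # fraction of correct answers during training
optimal_error_rate = 1 - optimal_accuracy   # ~0.15

# How many right answers per wrong answer at the optimum?
right_to_wrong = optimal_accuracy / optimal_error_rate

print(f"error rate: {optimal_error_rate:.2f}")        # 0.15
print(f"right-to-wrong ratio: {right_to_wrong:.1f}")  # ~5.7
```

So the optimum is, strictly speaking, closer to six right answers per wrong one — either way, a far cry from “failing often.”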
We ought to realize that many contemporary “rules” of the innovation process originate from the daily routines of Agile development. Sure, when you design a software product, you have neither the time nor the money to run extensive customer research for every imaginable feature. You run an A/B test instead, and — bingo! — in no time you know which option the majority of end users prefer.
In this case, yes, progress can be measured by the number of tested pairwise combinations — the more, the better. And the less time you spend on rejecting the inferior options, the better too. Dude, you “fail” faster, good for you!
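For readers who haven’t run one: an A/B test boils down to comparing two conversion rates and asking whether the difference is larger than chance alone would produce. A minimal sketch using a standard two-proportion z-test — the conversion numbers below are hypothetical, not from any real product:

```python
import math

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert identically.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical experiment: 120/1000 conversions for A vs. 160/1000 for B.
z, p = ab_test(conv_a=120, n_a=1000, conv_b=160, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: B likely beats A
```

The point is the speed: one such comparison settles one question in days, which is exactly why counting tested options is a sensible progress metric here — and a misleading one elsewhere.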
But not all areas of innovation are like software development. In my previous article, I pointed out that in drug development, the ultimate proof that a candidate drug has clinical benefits (is a success, in other words) comes only in the Phase III clinical trial — and that running a Phase III clinical trial costs about $1 billion.
Given that the failure rate of Phase III clinical trials exceeds 50%, do we have any reason to celebrate a billion-dollar failure, even if we learn from it?
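The back-of-the-envelope arithmetic makes the stakes concrete. Using the figures above — and noting that 50% is the optimistic bound, since the text says the failure rate *exceeds* 50%:

```python
# Expected Phase III spend per successful drug.
# Assumptions: $1B per trial and a 50% success rate, both taken from the
# figures above (the actual failure rate is stated only as "exceeds 50%",
# so the real expected cost is higher).

cost_per_trial = 1_000_000_000   # dollars
success_rate = 0.5               # optimistic upper bound

expected_cost_per_success = cost_per_trial / success_rate
print(f"${expected_cost_per_success:,.0f} per approved drug")  # $2,000,000,000
```

At two billion dollars (or more) per success, “failing fast and often” stops being a slogan and starts being a bankruptcy plan.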
Moreover, not all areas of creative activity can even benefit from customer feedback.
Take, for example, creative writing. When writing a book, a writer can’t share its early versions with future readers. No, he or she writes it to the very end, publishes it, and then — and only then! — learns whether the book will be nominated for a Pulitzer or begin collecting dust on the shelves of a warehouse.
There is another area of human creative activities that doesn’t measure success by the number of failures: experimental science.
As a former bench scientist, I’ll tell you how this works.
A scientist begins by formulating a hypothesis, which articulates his or her vision of a problem. The scientist then designs an experiment that tests the validity of the hypothesis. If the experiment confirms that the hypothesis is correct — always the preferred outcome, make no mistake! — the scientist formulates a new, more advanced vision of the problem based on the newly acquired knowledge. And the process repeats.
If the experiment shows that the hypothesis is incorrect, the scientist returns to the drawing board and tries to formulate another, better hypothesis, the one that will get support in the next round of experimentation.
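The loop just described — hypothesize, test, refine — can be sketched in code. The example below is purely illustrative (the “experiment” is a toy stand-in: narrowing down a hidden threshold), but it shows the structure: each round either advances the hypothesis or sends it back to the drawing board, and progress is measured by how close we get to the truth, not by how many rounds “failed”:

```python
# A toy sketch of the hypothesis-driven loop described above.
# All names and the toy problem are illustrative assumptions, not from the book.

def hypothesis_loop(run_experiment, lo=0.0, hi=1.0, rounds=20):
    """Refine a hypothesis (here, an interval) by running one decisive
    experiment per round."""
    for _ in range(rounds):
        hypothesis = (lo + hi) / 2              # current best guess
        confirmed = run_experiment(hypothesis)  # designed to give a clear yes/no
        if confirmed:
            lo = hypothesis   # hypothesis holds: build on it
        else:
            hi = hypothesis   # refuted: back to the drawing board
    return (lo + hi) / 2

# The "truth" the experiments probe; unknown to the scientist.
hidden_truth = 0.37
estimate = hypothesis_loop(lambda h: h <= hidden_truth)
print(f"{estimate:.3f}")  # converges on 0.370
```

Note what counts as progress here: every round, confirmed or refuted, shrinks the space of possible answers. A refuted hypothesis is informative only because the experiment was designed to be decisive.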
Sure, failures happen here too. But in this case, a failure is either a mistake in the experimental design or a human screwup in the execution of a correctly designed experiment. A failure is an embarrassment, something you want to hide from your boss and colleagues, not something to celebrate with the rest of the civilized world.
A good experimental scientist is a person who develops better, more perceptive hypotheses; designs experiments that result in 100% clarity about the correctness of those hypotheses; and makes few, if any, mistakes when running the experiments. And, yes, a good experimental scientist celebrates successes, not failures.
Innovation managers can take a page or two from the science textbooks and place hypothesis-driven experimentation in the center of the innovation process.
Coming back to what Andrew Binns and Andreas Brandstetter wrote: a few things must precede experimentation.
First and foremost, we need an innovation strategy.
We also need innovation processes, metrics, training, and incentives.
This is what will make our innovation process predictable and repeatable — or at least more predictable and repeatable than winning the lottery.