I must give credit to the vibrant Lean Startup community, which has articulated and grown awareness around powerful techniques for helping new businesses identify and match solutions to the changing, and at times completely invisible, needs of customers in new and developing marketplaces. As larger businesses become familiar with concepts like validated learning, business model generation, and pivots, they are asking how they can implement Enterprise Lean Startup. Are there practices and values they can take from this movement to help them better answer their own questions about what products and solutions their customers would want?
The answer is yes. Enterprise Lean Startup can be a reality, given the proper context. Many startups operate in the space of online services sold to customers, making frequent delivery very achievable. This kind of environment also offers the opportunity to collect many data points in the form of acquisition, activation, retention, referral, and revenue (AARRR), dubbed the “pirate metrics” by Dave McClure. Conversely, many large organizations are in the business-to-business market, where sales are much larger in scale and lower in frequency. Global enterprises frequently have internal IT and service providers that are larger than many independent companies and have a captive audience, denying them traditional market feedback. Additionally, publicly traded companies are frequently accountable to financial markets and to the demands of strategic plans that crave a perception of certainty. In such a place, saying “I don’t know” or “Let’s try an experiment that may fail” can be met with outright hostility.
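To make the AARRR idea concrete, here is a minimal sketch of how a team might roll raw user events up into pirate-metrics funnel counts. The event names and sample data are invented for illustration; a real product would pull these from its analytics pipeline.

```python
# Hypothetical sketch: computing AARRR "pirate metrics" funnel counts
# from a raw event log of (user_id, stage) pairs.

AARRR_STAGES = ["acquisition", "activation", "retention", "referral", "revenue"]

def funnel_counts(events):
    """Count how many distinct users reached each AARRR stage."""
    reached = {stage: set() for stage in AARRR_STAGES}
    for user_id, stage in events:
        if stage in reached:
            reached[stage].add(user_id)
    return {stage: len(users) for stage, users in reached.items()}

# Invented sample data: three users acquired, progressively fewer downstream.
events = [
    ("u1", "acquisition"), ("u2", "acquisition"), ("u3", "acquisition"),
    ("u1", "activation"), ("u2", "activation"),
    ("u1", "retention"),
    ("u1", "revenue"),
]

print(funnel_counts(events))
# {'acquisition': 3, 'activation': 2, 'retention': 1, 'referral': 0, 'revenue': 1}
```

Counting distinct users per stage, rather than raw events, is what makes the funnel comparable stage to stage: each number answers "how many people got this far?"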
I don’t mean to imply that these barriers are insurmountable, but rather that we must understand the nature of the large organization and the character of its challenges. A startup has a very high threshold for risk and uncertainty; indeed, it wants something unexpected to happen, since the status quo would mean remaining a very small business venture. Large organizations, on the other hand, have a vested interest in preserving the markets and positions they have established. To implement these practices effectively, let’s look at four key behaviors that large organizations need to address in order to maximize the effectiveness of a build-measure-learn loop:
- Make it safe to experiment
- Identify measures for learning
- Accelerate delivery
- Align feedback with strategy
This is a pretty large topic, so let’s take each one as an individual entry. Today we’ll start with making it safe to experiment.
The Value of Certainty & Enterprise Lean Startup
Several years ago, when I first started exploring the rationality (or, more appropriately, the lack thereof) of the mind, I found a very simple survey that some behavioral economists had conducted regarding people’s preferences between two possible outcomes, which I’ve illustrated in the diagram to the left. First, we see option A, which yields a fairly consistent value of about 5. Then we see option B, which offers a range of outcomes, most of which return a higher value, in some cases more than 50% more. Notice, though, that a few possibilities return a lower amount. Averaging the returns, option A comes to 4.9, while option B comes to 5.8. If I asked you which option you want, logic would dictate that option B is preferable. However, people were found to prefer option A 80% of the time. Simply stated, we are conditioned to dislike risk.
In their pioneering work on loss aversion, Amos Tversky and Daniel Kahneman found that, on average, people value avoiding losses at twice the rate they value possible gains. Consequently, people and organizations can quickly become very risk averse, especially when they feel they have something to lose. Most established organizations do have something to lose: their current business. This has very profound implications when we ask a business to begin conducting experiments whose outcomes are unknown and some of which may fail. Most companies today operate under a model where variability is shaken out by a planning process, project buffering, and other management controls. We are now asking them to embrace uncertainty on at least some level, and this can be very scary.
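The "losses weigh twice as much as gains" finding can be sketched as a tiny calculation. The 2.0 weight comes from the text above; the specific gamble below is a made-up example, not from Tversky and Kahneman's data.

```python
# Sketch of simple loss aversion: losses are weighted roughly twice as
# heavily as equivalent gains. The gamble below is an invented example.

LOSS_WEIGHT = 2.0  # losses "hurt" about twice as much as gains feel good

def subjective_value(outcome):
    """Perceived value of a gain or loss under simple loss aversion."""
    return outcome if outcome >= 0 else LOSS_WEIGHT * outcome

def perceived_expected_value(gamble):
    """Average subjective value of a list of (probability, outcome) pairs."""
    return sum(p * subjective_value(x) for p, x in gamble)

# A coin flip: win 6 or lose 5. Objectively a positive-expected-value bet...
gamble = [(0.5, 6), (0.5, -5)]
objective_ev = sum(p * x for p, x in gamble)
perceived_ev = perceived_expected_value(gamble)

print(objective_ev)  # 0.5  -> a rational actor takes this bet
print(perceived_ev)  # -2.0 -> it *feels* like a losing bet, so people refuse
```

The gap between the two numbers is the point: an organization weighing the possible loss of its current business at double strength will reject experiments that are, on paper, clearly worth running.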
Part of this fear of failure is ingrained in how many organizations run projects today. Most people who work on traditional projects are trained and habituated to believe that there is one true answer, and that the most efficient way to solve a problem is to identify and implement that one true solution. Consequently, we see numerous projects featuring a “big bang” implementation, where we get one chance to launch a product or system. This makes the stakes very high, so high that people won’t want to do anything that isn’t a sure thing.
If we want to create space for the experiments associated with Enterprise Lean Startup, we must change the way we operate so that we can run many small tests that are “safe to fail.”
Let me offer an example. At one point in my career, I was working for a large financial services company that took great pride in its reputation. The company was trying to build a new web presence and was quite fearful that if it put something onto its site that didn’t work, or was poorly received, it would adversely impact the brand. Getting to production in this environment was a herculean task, and it went way beyond simply proving your code was solid. In the end, some enterprising people on the team proposed an A/B model, placing a router between users and the systems. This router could be configured to direct a specific subset of users to the new site, giving the team a safe “sandbox.” They could then show new features in a controlled manner to this small, select audience.
This capability has since been used by numerous organizations, and some get quite sophisticated in how they select the users allowed into their test area, ranging from random selection to advanced logic based on profiles and invitations to select groups. In this situation, the team running the system has complete control over how much of the public they expose the experiment to. It also gives them a platform to compare the impact of the new feature being tested against the legacy system. With many users still going to the current site, teams can compare what users do while effectively controlling for all other variables. This technique of controlled comparison was powerfully demonstrated by Team New Zealand in the America’s Cup, when their two-boat testing strategy allowed them to build a better-designed boat than the Americans and other teams with much larger budgets.
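The router idea from the story above can be sketched in a few lines. This is a minimal, assumed implementation, not the one the team built: hashing the user ID gives each user a stable bucket, so the same person always sees the same site, and the rollout percentage controls how much of the public sees the experiment.

```python
# Minimal sketch of percentage-based A/B routing: send a fixed,
# deterministic subset of users to the new site, everyone else to legacy.
import hashlib
from collections import Counter

def route(user_id: str, new_site_percent: int) -> str:
    """Return which site this user should see ("new" or "legacy")."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket 0..99 per user
    return "new" if bucket < new_site_percent else "legacy"

# Expose the experiment to roughly 5% of users; the other ~95% remain
# on the legacy site and act as the control group.
assignments = Counter(route(f"user-{i}", 5) for i in range(10_000))
print(assignments)  # roughly 5% "new", 95% "legacy"
```

Because the hash is deterministic, the split needs no stored per-user state, and dialing `new_site_percent` up or down gradually widens or narrows the experiment.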
This model of having two versions goes beyond just websites. Remember when United Airlines and Delta tried to compete with the low-cost airline model championed by JetBlue: they didn’t risk their current brands. Rather, they launched new brands, Ted and Song respectively. Another good example comes from Eric Ries, who retells the story of Intuit reengineering QuickBooks to support multiple versions of the application on a user’s desktop. Users could keep their data on the older, stable platform while looking at new features on an alpha instance that could not actually change the data. This way, people could experiment with new features on their own computers with their own data, with no risk of their sensitive financial data being damaged or lost. Sometimes the fear around experiments is less about actual impact than about a general feeling of lost control or lack of rigor. This can be addressed with more formal planning around the experimentation process.
Find the Cheapest Test Possible
A willingness to experiment is not enough. Many traditional organizations build in a way that makes iteration and experimentation very expensive. As Joe Justice nicely demonstrated when explaining why most cars have improved so slowly in fuel efficiency, the cost of components like door molds is so high that it can take up to ten years to pay them off. Companies operating like this require years, if not decades, to develop new ideas. When change costs this much, people are not in a position to experiment. So organizations face a two-fold challenge when they seek to make their environment safe for experimentation: first, they must mitigate the risks of failed experiments; second, they must lower the cost of experiments in general so that they can be incorporated into a reasonable timeline and budget. This is where Agile software development practices and Enterprise Lean Startup come in. Techniques like Scrum help us build working software in very short increments so that we can go to market often and get rapid feedback. But sometimes even that is not fast enough.
Let me give you an example. A while ago, my colleague David Bland was investigating the idea of building out a dedicated office space as a demonstration area where clients and coaches could be immersed in Agile practices. The idea was pretty cool: a facility completely set up to provide a model for agile practices. Clients would temporarily locate a team in this space to get the benefits of a specifically designed facility, hands-on coaching, and being surrounded by other agile enthusiasts. From a brand point of view, there was no real risk to BigVisible, but there was a question of whether people would really be interested in something like this. While we could have rented a place and seen whether people would pay to use it, David had a much cheaper preliminary experiment in mind. Within a few hours, he had launched a simple WordPress website with a sign-up list where people could express their interest. This provided incredibly cheap and rapid feedback.
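The kind of feedback a landing-page test yields boils down to one number: of the people who saw the page, how many signed up? A sketch of reading that number follows; the visitor and sign-up figures and the 5% target are invented for illustration, not from David's actual experiment.

```python
# Sketch of evaluating a cheap landing-page "smoke test": compare
# visitors against sign-ups to gauge interest before building anything.

def conversion_rate(visitors: int, signups: int) -> float:
    """Fraction of visitors who expressed interest by signing up."""
    if visitors == 0:
        return 0.0
    return signups / visitors

# Decide in advance what "enough interest" means, then read the result.
TARGET = 0.05  # assumed threshold: at least 5% of visitors sign up

rate = conversion_rate(visitors=1200, signups=90)
print(f"{rate:.1%}")  # 7.5%
print("build it" if rate >= TARGET else "shelve the idea")
```

Setting the target before the page goes live matters: it turns "people seemed interested" into a pass/fail answer that was agreed on when no one had a result to defend.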
Another colleague of mine was deeply involved in business process redesign, and he would talk about how he would invariably reach a point where they needed to build reports for executives, who were infamously fickle about exactly what they wanted and how it should look. For the first few months, rather than build anything in an analytics engine, he simply created the dashboards manually in Excel. Each time, people had requests to tweak and adjust. Only after a few months, when they had settled on a few concepts, did they engage a development team to build production versions of the dashboards. I don’t mean to denigrate Agile practices, but we should remember that sometimes even working software is too expensive, and we should always look for opportunities to probe our domain as cheaply as possible.
Planning for Failure and Success
So far we have talked about how experiments need to be sufficiently cheap and approached in a way that makes them safe to fail. However, this alone is not enough. We are naturally inclined to validate what we believe; this confirmation bias can cause people to look for validating evidence and simply not see contradictory results. The bias is powerful, and it can even manifest in the design of experiments, where people may select only experiments that prove their beliefs and fail to identify ones that would seek to disprove possible theories. If all of a team’s experiments validate its theories, then chances are the team is not really pushing the boundaries of what it can learn.
When instructing businesses on how to deal with complexity, David Snowden points to the safe-to-fail experiment as a critical tool for learning more about complex problems. However, not all experiments are created equal. If an organization is very comfortable with large plans that show ideas have been thought out, he advocates embracing that bureaucracy rather than fighting it. For topics and concepts to be explored, he offers templates and rules for how people define experiments: each must be clearly articulated, with an identified plan for what to do if the experiment succeeds and a plan for what to do if it fails. Organizations may even be given guidelines such as planning for some number of experiments that will fail. This serves two purposes: first, it makes some failures actually part of the plan, reducing the stigma; second, it broadens the range of options we explore. We may assume that an experiment offering users fewer choices will result in fewer orders, but imagine how it would impact our business if it showed us the exact opposite. Thus, the experimentation process does not mean abandoning discipline or structure. In fact, clearly defined experiments within a well-controlled experimental environment are critical to successful learning.
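The template idea above can be sketched as a small data structure: every safe-to-fail experiment is written down with its hypothesis, a plan for success, and a plan for failure. The field names and the example experiment are my own illustration, not Snowden's actual template.

```python
# Sketch of a safe-to-fail experiment record, inspired by the template
# rules described above. Field names and the sample experiment are invented.
from dataclasses import dataclass

@dataclass
class SafeToFailExperiment:
    hypothesis: str         # clearly articulated claim to test
    success_plan: str       # what we will do if it succeeds
    failure_plan: str       # what we will do if it fails
    expected_to_fail: bool  # some planned experiments *should* fail

experiment = SafeToFailExperiment(
    hypothesis="Offering users fewer choices will reduce orders",
    success_plan="Keep the full catalog on the main site",
    failure_plan="Pilot the simplified catalog with a larger audience",
    expected_to_fail=False,
)

# A portfolio review can then check that not everything is a sure bet:
portfolio = [experiment]
planned_failures = sum(e.expected_to_fail for e in portfolio)
print(planned_failures)  # 0 -> this portfolio isn't pushing boundaries yet
```

Forcing every experiment through the same fields gives the bureaucracy something familiar to approve, while the `expected_to_fail` flag makes the "plan some failures" guideline checkable rather than aspirational.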
From Factory to Laboratory
The final concern that many teams and organizations have about running tests to validate learning comes from a poor analogy, in which people conceive of product development as a factory line. In this mental frame, any motion that does not add to the construction of the final product is seen as waste. Tests are seen as a poor substitute for “good analysis” and a waste of motion, and iterations of the product based on learning are considered “rework” that was not put into the plan.
In these cases, I find that what we need to address is the frame of reference. Rather than considering the team’s work to be running a production line, where we simply produce a predefined product over and over again, we should look at the work of the team as more like that of a scientific study. We have informed hypotheses, and we are now seeking to test and validate them. Basically, we’re looking to apply the scientific method to our project. While we can’t guarantee what we will find, with some thought and preparation we can look for it in a safe way. Of course, that planning also needs to help us figure out exactly how we are going to observe the results of our experiments, which we will talk about in Part 2.