This potential exclamation can be rephrased as a statistical hypothesis and tested on a small percentage of users. We aimed to prove that our latest features did no harm: that all pre-existing functionality worked as before and produced the same revenue. As the results came in and our confidence grew, this activity began to produce diminishing returns, and a new activity started to dominate.
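Rolling a feature out to "a small percentage of users" is commonly done with deterministic bucketing, so the same user always lands in the same group. The sketch below is illustrative only, not our actual implementation; the function name, salt, and percentage are hypothetical:

```python
import hashlib

def in_experiment(user_id: str, percent: float, salt: str = "feature-x") -> bool:
    """Deterministically assign a user to the experiment bucket.

    Hashing salt + user_id gives a stable pseudo-uniform value in [0, 1];
    users whose value falls below percent/100 see the new feature.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return bucket < percent / 100.0
```

Because assignment depends only on the user ID and a salt, the experiment population stays consistent across sessions, and changing the salt reshuffles users for the next experiment.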
Our next activity was to measure any improvement we could attribute to the new features, such as increased revenue per user. We usually released the product to larger percentages of our users and ran different statistical tests. This activity also eventually faded away as we gained confidence for the full user-base rollout. We were then faced with the last test: Could we use the improved product and the statistical proof of improvement to attract more paying customers?
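The "different statistical tests" for measuring improvement can be as simple as a two-proportion z-test comparing a conversion metric between the control and treatment groups. This is a minimal sketch with made-up numbers, not the tests we actually ran:

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Group A is the control, group B the treatment; returns the z statistic
    and the two-sided p-value under the pooled-proportion null hypothesis.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical example: 120/10000 control conversions vs 150/10000 treatment
z, p = two_proportion_ztest(120, 10_000, 150, 10_000)
```

For a revenue-per-user metric (a continuous quantity rather than a proportion), a t-test on per-user revenue would be the analogous choice.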
Figure 2: Example of a measure-learn process
Just as David Anderson describes in his knowledge discovery process blog post, the key points in our process were not the handoffs between individuals, teams, or departments, but rather the changes in the dominant activity. For example, our initial experiments typically involved a core product engineer (usually me), the vice president of advertising operations, a senior advertising campaign manager, and an operations guy. When we moved on to quantifying improvement, we needed to create new ads that weren’t even possible with the old system. Therefore, we brought in additional collaborators, including another campaign manager, a UI programmer, a graphic designer, and the creative director. In the final stages, the core engineer’s role diminished greatly, but the same campaign managers and the creative staff worked more closely with sales—collaboration and no handoffs!