
Vanity Metrics vs. Actionable Metrics – Guest Post by Eric Ries

Vanity metrics: good for feeling awesome, bad for action. (photo source: UK Guardian)

This is a guest post by serial entrepreneur Eric Ries. He was most recently co-founder and CTO of IMVU, which has more than 20 million registered users and generates $1,000,000+ in revenue per month. Eric is also a venture advisor to Kleiner Perkins.

How do you get to $1,000,000 per month in sales? By testing the right things. Eric is a metrics man.


Here is just one business-changing example, taken from the outstanding “How IMVU Learned its way to $10M a year” on Venture Hacks:

IMVU learned its way to product/market fit. They threw away their first product (40,000 lines of code that implemented an IM add-on) as they learned customers didn’t want it. They used customer development and agile software development to eventually discover customers who would pay for 3D animated chat software ($10M in revenue in 2007). IMVU learned to test their assumptions instead of executing them as if they were passed down from God.

Enter Eric Ries…

Vanity Metrics vs. Actionable Metrics

The only metrics that entrepreneurs should invest energy in collecting are those that help them make decisions. Unfortunately, the majority of data available in off-the-shelf analytics packages are what I call Vanity Metrics. They might make you feel good, but they don’t offer clear guidance for what to do.

When you hear companies doing PR about the billions of messages sent using their product, or the total GDP of their economy, think vanity metrics. But there are examples closer to home. Consider the most basic of all reports: the total number of “hits” to your website. Let’s say you have 10,000. Now what? Do you really know what actions you took in the past that drove those visitors to you, and do you really know which actions to take next? In most cases, the answer to both questions is no, and that makes the number nearly useless for deciding what to do next.

Now consider the case of an Actionable Metric. Imagine you add a new feature to your website, and you do it using an A/B split-test in which 50% of customers see the new feature and the other 50% don’t. A few days later, you take a look at the revenue you’ve earned from each set of customers, noticing that group B has 20% higher revenue per-customer. Think of all the decisions you can make: obviously, roll out the feature to 100% of your customers; continue to experiment with more features like this one; and realize that you’ve probably learned something that’s particularly valuable to your customers.
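To make the mechanics concrete, here is a minimal sketch of that experiment in Python. Everything in it is illustrative rather than IMVU’s actual code: the hash-based bucketing, the customer IDs, and the shape of the revenue data are all assumptions for the example.

```python
# A minimal 50/50 split-test sketch: assign each customer to a bucket
# deterministically, then compare average revenue per customer.
import hashlib

def bucket(customer_id: str) -> str:
    """Deterministically assign a customer to 'A' (control) or 'B' (sees the feature)."""
    digest = hashlib.md5(customer_id.encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

def revenue_per_customer(revenue_by_customer: dict) -> dict:
    """Average revenue per customer in each bucket."""
    totals = {"A": 0.0, "B": 0.0}
    counts = {"A": 0, "B": 0}
    for customer_id, revenue in revenue_by_customer.items():
        b = bucket(customer_id)
        totals[b] += revenue
        counts[b] += 1
    return {b: totals[b] / counts[b] for b in totals if counts[b]}

# Example: revenue_per_customer({"cust-1": 4.99, "cust-2": 0.0})
# If bucket B comes back ~20% above A, roll the feature out to everyone.
```

Hashing the customer ID (rather than flipping a coin per pageview) matters: it keeps each customer in the same bucket for the life of the experiment.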

Unfortunately, most analytics packages are configured by default to report mostly vanity metrics. That makes sense, since they are the easiest to measure and they tend to make you feel good about yourself.

For example, here’s a pattern I’ve witnessed in companies large and small. The company launches a new feature or new product, and a few days later, traffic (or revenue, or customers) starts going up. Everyone involved with that product celebrates. In fact, I’ve noticed that people tend to believe that whatever they were working on that preceded the metrics improvement probably caused the improvement itself. So the product guys think it’s the new feature, the sales guys think it’s that new promotion — I’ve even seen customer service reps be convinced it’s due to a new customer-friendly policy. In many cases the fluctuations are random or caused by unrelated external events. Unfortunately, the same mental trickery doesn’t apply when the numbers come back down. Human beings have an unfortunate bias to take credit for positive results and pass the blame for negative results.

Take the example of a product that has a weekly seasonality pattern. Products “on the Disneyland calendar” see higher usage on weekends and holidays. As a result, new initiatives that are launched on a Thursday or Friday are likely to be judged a success when people come to work on Monday. Yet products unfortunate enough to be launched on a Sunday may be judged a failure by Tuesday or Wednesday — unless the company is focused on Actionable Metrics.

Here are some tips for getting to more actionable metrics:

1. Split-tests.

A/B experiments produce the most actionable of all metrics, because they explicitly refute or confirm a specific hypothesis. Either way, you can use split-tests to take action on anything from minor copy tweaks to major changes in the product or its positioning. However, not all split-tests are created equal. Linear-optimization tests have some value as a tactic for growing conversions. But the real value of split-tests comes when you integrate them into your decision loop: the process of putting your ideas into practice, seeing what happens, and learning from the results to shape your next set of ideas. The tests that drive the most learning are the ones to focus on. A good rule of thumb is to ask yourself, “if this test turns out differently from how I expect, will that cast serious doubt on what I think I know about my customers?” If not, try something bigger.

Good third-party tools for A/B testing are hard to come by — most are too complex for everyday use. If you don’t have an A/B system, you can use Google Website Optimizer or — if you have a software development team — build your own (for more implementation details, see “The one-line split-test, or how to A/B all the time” and “Getting started with split-testing”).
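To give a feel for what “one line” means in practice, here is a rough sketch of the same idea; the helper names and the logging stub are hypothetical, not the API from the posts linked above.

```python
# Sketch of a one-line split-test helper: one call at the point where
# the code branches, which both buckets the customer and records the
# exposure so conversions can later be joined to the experiment.
import hashlib

def log_exposure(experiment: str, customer_id: str, in_treatment: bool) -> None:
    # Stub: in practice, persist this to your analytics store.
    pass

def ab_test(experiment: str, customer_id: str) -> bool:
    key = f"{experiment}:{customer_id}".encode()
    in_treatment = int(hashlib.md5(key).hexdigest(), 16) % 2 == 1
    log_exposure(experiment, customer_id, in_treatment)
    return in_treatment

# Usage at the decision point, the "one line":
# if ab_test("new-signup-copy", customer_id):
#     render_new_copy()
# else:
#     render_old_copy()
```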

2. Per-customer metrics.

It’s important to remember, “Metrics are people, too.” Vanity metrics pull us away from this reality by focusing our attention on abstract groups and concepts. Instead, look at what is happening on a per-customer or per-segment basis. For example, instead of looking at the total number of pageviews in a given month, consider looking at the number of pageviews per new and returning customer. Those metrics should be relatively constant — unless something interesting is happening with your product. So even a big rush of new customers shouldn’t change how many pages they each view on average, unless you’re getting a new kind of customer.
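As a rough illustration, here is how that per-customer breakdown might be computed from raw records; the field names are made up for the example.

```python
# Average pageviews per customer, split into new vs. returning,
# instead of a single aggregate pageview total.
from collections import defaultdict

def pageviews_per_customer(rows: list) -> dict:
    """rows: one dict per customer for the period, e.g.
    {"customer_id": "c1", "is_new": True, "pageviews": 12}"""
    totals = defaultdict(lambda: [0, 0])  # segment -> [pageviews, customers]
    for r in rows:
        segment = "new" if r["is_new"] else "returning"
        totals[segment][0] += r["pageviews"]
        totals[segment][1] += 1
    return {seg: pv / n for seg, (pv, n) in totals.items()}
```

A flat total can soar during a traffic spike while both per-customer averages stay constant; this view tells you whether behavior changed, not just volume.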

Similarly, if you’re increasing the engagement of customers with your product, that will tend to show up in the data for the returning customers. But if you just look at their aggregate data, you can miss important trends. I’ve often observed the following pattern: a big spike of customers joins thanks to a Digg or Slashdot mention. If a product has an average customer lifetime of two months, then after that period elapses, a huge number of customers can be expected to churn out at around the same time. But these effects are hard to keep track of, since customers are coming and going all the time. If you focus only on the number of pageviews, even if you limit it to returning customers, you might mistake a positive product change for something negative, because you launched it during a churn-dominated period.

Many analytics packages, including the much-maligned Google Analytics, have the ability to break down aggregates into per-customer or per-segment analyses. These can help make reports more actionable if you combine them with the Goal Tracking feature. For example, if you can tell which web referrers are driving the most traffic, that’s moderately useful. But if you can tell which are driving the most conversions, then you can start to make ROI-based decisions on where to invest your time in getting more traffic.

3. Funnel metrics and cohort analysis.

The best kind of per-customer metrics to use for ongoing decision making are cohort metrics. For example, consider an ecommerce product that has a couple of key customer lifecycle events: registering for the product, signing up for the free trial, using the product, and becoming a paying customer. We can create a simple report that shows these metrics for subsequent cohorts (groups) over time. Let’s say we create a weekly report. For each week, we then report on what percentage of customers who registered in that week subsequently went on to take each lifecycle action. If these numbers are holding steady from cohort to cohort, then we get clear feedback that nothing significant is changing. If one suddenly shifts up or down, we get a rapid signal to investigate.
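Here is a minimal sketch of that weekly cohort report; the lifecycle event names and record shapes are illustrative, not from any particular product.

```python
# For each week's registration cohort, the fraction of customers who
# went on to take each lifecycle action.
from collections import defaultdict

LIFECYCLE = ["trial", "used_product", "paid"]  # illustrative event names

def weekly_cohort_report(customers: list) -> dict:
    """customers: [{"week_registered": "2009-W14",
                    "events": {"trial", "paid"}}, ...]"""
    cohorts = defaultdict(list)
    for c in customers:
        cohorts[c["week_registered"]].append(c)
    report = {}
    for week, members in sorted(cohorts.items()):
        n = len(members)
        report[week] = {step: sum(step in m["events"] for m in members) / n
                        for step in LIFECYCLE}
    return report
```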

The best thing about funnel metrics is that they allow you to boil down a large amount of information into a handful of numbers. If you don’t have the software to build these reports automatically, consider doing it by hand.

This is easy to do if the number of conversion events is relatively small — even if the number of customers is very large. For example, a typical website will have a 1% registration-to-purchase conversion rate. So even if you are registering 1000 new customers every day, those customers are going to result in something like 10 new purchases over their lifetime. So instead of getting fancy, use good old index cards. At the end of each day, create an index card with that day’s date on it and the number of people who registered that day. Then, for each conversion that comes in, make a tally mark on the index card of the date that person registered, not the date they purchased. For most products, this only requires you to maintain a week or two’s worth of index cards, since most products have customers that make purchase decisions relatively quickly. Then, on a weekly or monthly basis, gather up all the cards for a given cohort and compute the conversion rate of the customers who registered in that period. That’s the number you want to focus on driving up.
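If you would rather script it than keep physical cards, the same bookkeeping fits in a few lines; the field names here are illustrative.

```python
# The index-card method: group purchases by the buyer's *registration*
# date, not the purchase date, then compute each cohort's conversion rate.
from collections import Counter

def cohort_conversion(registrations: dict, purchases: list) -> dict:
    """registrations: signups per day, e.g. {"2009-04-01": 1000}
    purchases: one entry per purchase, e.g. [{"registered_on": "2009-04-01"}]"""
    tally = Counter(p["registered_on"] for p in purchases)  # the tally marks
    return {day: tally[day] / count for day, count in registrations.items()}

# cohort_conversion({"2009-04-01": 1000},
#                   [{"registered_on": "2009-04-01"}] * 10)
# -> {"2009-04-01": 0.01}, the ~1% rate mentioned above
```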

4. Keyword (SEM/SEO) metrics.

SEM (Search Engine Marketing) and SEO (Search Engine Optimization) are great customer acquisition tactics, but they can also reveal important and actionable insights about customers, if we treat customers who were acquired with a given keyword as a segment and then track their metrics over time. For example, early on at IMVU we tried advertising for AdWords phrases that contained the name of a competitor’s product plus “chat.” We’d then take a look at key statistics for the cohort of customers that registered from each separate campaign. What we found were striking differences in signup and conversion rates depending on which competitor we brought the customer in from. That information is moderately useful in directing a marketing campaign. But it’s far more useful as an indicator of who the customers behind the numbers are. We eventually found that the highest conversion rates came from products that are primarily used by teenagers and young adults — a very different demographic than we thought we were serving. As a result, we started to adjust the mix of customers we were bringing in for usability tests, with dramatic results.
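Mechanically, this is the same cohort bookkeeping as before, keyed by acquisition campaign instead of registration date. A hedged sketch, with made-up campaign names and fields:

```python
# Signup and paid-conversion rates per acquisition campaign (keyword).
from collections import defaultdict

def campaign_report(customers: list) -> dict:
    """customers: [{"campaign": "competitor-x chat",
                    "signed_up": True, "paid": False}, ...],
    each tagged at acquisition time with the keyword that brought them in."""
    stats = defaultdict(lambda: [0, 0, 0])  # campaign -> [seen, signups, paid]
    for c in customers:
        s = stats[c["campaign"]]
        s[0] += 1
        s[1] += c["signed_up"]
        s[2] += c["paid"]
    return {camp: {"signup_rate": s[1] / s[0], "paid_rate": s[2] / s[0]}
            for camp, s in stats.items()}
```

For concrete examples of user feedback and testing, see the below video from an interview with Mixergy: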

Here is a small sample transcript from the above video:

And so out of complete desperation, we were like, “Okay, fine, we’ll introduce a simple chat now feature.” It was a matching thing where you could push a button and you would be randomly matched with somebody else from around the world – the only thing you have in common is you both pushed that button at the same time.

And we did that, and all of a sudden people were like, “Oh, this is fun.” And then – then here’s what happened. So we bring them in and they do the Chat Now, maybe they meet somebody new who they thought was kind of cool. They’d be like, “Hey, that guy was neat, I want to add him to my Buddy List. Where’s my Buddy List?”

And we say, “Oh, no, no. You don’t want your own Buddy List. You want to use your regular AOL Buddy List” because that’s interoperability, network effects, all this nonsense.

And the customer’s looking at us like, “Well, that doesn’t make sense. What do you want me to do exactly?”

And we said, “Well, just give that stranger you just met your AIM Screen Name so you can put them on your Buddy List.”

And you can see the eyes go wide – they’re like “Are you kidding me?! A stranger on my AIM Buddy List?”

And we said, “But – but otherwise you’d have to download a whole new instant messaging client! And then you’d have to have your separate Buddy Lists.”

They’re looking at us like, “Do you have any idea how many instant messaging clients I already run?”

We said, “No, what, like two or three?”

And the teenager responds, “Duh! I run eight!”

There were already, like, fifty clients out there! I mean, I had no idea how many instant messaging clients there were in the world. And we had this preconception like, “Oh, it’s a challenge to learn new software, and it’s tricky to move your friends over to the new Buddy List,” and all this other nonsense sitting in our heads, and our customers just looked at us like we were crazy.

Conclusion and Challenge

A common theme across all of these actionable metrics is the lack of really good action-oriented third-party tools.

So I’d like to issue this challenge to all of you reading this post today: share your stories of actionable metrics and how you track them. If there are good tools that you have used, let us know. Most importantly, let us know how you customized off-the-shelf tools like Google Analytics to get more action-oriented. We’ll share the results in a future post. We’re looking for stories that embody these three principles:

1. Measure what matters. It’s tempting to think that, because some metrics are good, more metrics are better. That’s why vendors routinely list the thousands of reports they are capable of generating as a feature. The truth is, the key to actionable metrics is having as few as possible. Detailed reports are useful when we’ve diagnosed a problem and are looking for clues as to what’s gone wrong. But where does that diagnosis come from in the first place? Actionable metrics help us realize we have a problem and point us in the right direction to start solving it.

2. Metrics are people, too. Great metrics tools allow us to audit their accuracy by tracing reports back to the individual people who generated their data. This improves accuracy, but its more important effect is that it lets us use the same customers for in-depth qualitative research. Not sure what the numbers mean? Get the customers on the phone and ask them.

3. Measure the Macro. Lastly, even when we’re split-testing the impact of a minor change, like a wording tweak or a new button, it’s important not to get distracted by intermediate metrics like the click-through rate of the button itself. We don’t care about click-through rates; we only care about the customer behaviors that lead to something useful, whether purchase, retention for advertising CPM, or some other measurable “success” particular to your business model.

[From Tim: Here are a few options to get the juices flowing: The Better Google Analytics Firefox plug-in and six other tools for specific Google Analytics feature enhancement.]

###

Metrics are just one component of a new vision for entrepreneurship that I call the “lean startup”. You can learn more on the Startup Lessons Learned blog. For those that want to explore these concepts in comprehensive depth, including more real-world examples, there will be two all-day Lean Startup seminars sponsored by O’Reilly on May 29 and June 18 in San Francisco.

