How We Accelerated Growth with Product Onboarding

Not everyone who tries your product will purchase and stay engaged. If you offer a free version, you’ll often see a significant drop in engagement shortly after sign up. The question then becomes, “How do I improve the onboarding experience to increase engagement and revenue?”

I’ll break this down into 5 steps, and show how product onboarding contributed to 500% revenue growth at my company Loggly. Loggly offers cloud-based application and system monitoring, and thousands of software developers and system operations people sign up for a free trial each month.

1. Understand Your Users

Even if a good number of your users are successful after signing up, what about the ones who bounce soon after or struggle to make progress? Do you really understand what problem they are trying to solve at that point in time? What motivates them to take action, and what is their ideal outcome? Clayton Christensen describes this in more depth as the “job to be done.” Remember that a user’s first job when trying a product is only part of their full set of jobs. It’s likely they first want to satisfy some immediate need, or learn more about your product so they can plan for the future.

At Loggly we created a SurveyMonkey survey that asks about each user’s first use case. We sent invitations through our Marketo email program several days after each customer signed up. I listed the top 10 use cases I heard on sales calls and asked respondents to pick up to 3. Over 150 users responded. Below you can see that the top use case was finding the root cause of errors, which was selected by 60% of our users. We also created separate surveys for users who successfully activated the product and those who did not. Next, I’ll show you what to do with this information.

[Chart: First use case]
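
If you’re curious how to turn a raw export of a pick-up-to-3 survey into percentages like these, a few lines of R will do it. This is only a hypothetical sketch: the file name and the use_cases column are made up, and your survey tool’s export format will differ.

responses <- read.csv("survey_export.csv", stringsAsFactors=FALSE)
# Each respondent picked up to 3 use cases, stored as a comma-separated string
picks <- trimws(unlist(strsplit(responses$use_cases, ",")))
# Percentage of respondents selecting each use case, highest first
sort(round(100 * table(picks) / nrow(responses), 1), decreasing=TRUE)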

2. Analyze the Customer Journey

Once you understand your user’s current situation and their ideal outcome, you can create a map of all the steps they go through from beginning to end. This is also known as a customer journey map. Start building your map by adding what you already know. You can begin as early as lead gen by including how users recognize the problem and how they find your company. For onboarding, include all the steps they go through in your app, and even common friction points.

Kick it up a notch by using behavioral data to enhance your journey map. A good map should be highly predictive of who purchases your product. Analyzing our data using regression and decision tree analysis, we identified 3 key behaviors that our paid users were highly likely to demonstrate, and our free users were unlikely to. Putting all the stages together in a funnel allowed us to build a model that predicted about 70% of our paid users. You can read more about the analysis we did in my post Predicting Customer Conversions in R.

The “success stages” we identified were:

  1. Signed Up – They signed up for a trial account
  2. Activated – They sent some log data to Loggly
  3. Operationally Activated – They sent a significant amount of data for several days
  4. Engaged – They logged in to our web portal on several days
  5. Invested – They added a team member, and personalized their settings

The invested stage is the most predictive of who is going to purchase a paid plan. Generally, user investment is “any activities in which users spend time or effort interacting with a product in a way that ultimately makes that product more valuable to them”. In the chart below, you can see that a user meeting our definition of invested is 10 times more likely to purchase than one who is only engaged. I removed the scale on the Y axis for privacy. The high prediction rate validates that our journey map is correct, and it tells us to encourage as many users as possible to reach the invested stage.

[Chart: Likelihood of purchasing]
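
As a rough sketch of how a chart like this can be computed, group accounts by the furthest success stage they reached and take the purchase rate in each group. The accounts data frame and its columns here are hypothetical placeholders, not our actual schema.

# Hypothetical columns: furthest_stage (factor) and purchased (logical), one row per account
aggregate(purchased ~ furthest_stage, data=accounts, FUN=mean)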

3. Measure User Success

Next turn your customer journey map into a conversion funnel, and measure the user success rates at each step. This helps you identify where the biggest drop-offs are, indicating either lack of motivation or too much friction.

Common tools for measuring user behavior include custom events in Google Analytics or KissMetrics, but they limit you to the included reports. We track behavioral data using Loggly, which is a low-cost way to store lots of data points. It also allows us to easily export data to an S3 bucket, where we can do custom analysis in MapReduce or a relational database. We hook these data stores up to Tableau for visualization and reporting, and R for predictive analytics.

Below, I’ve plotted an example conversion funnel showing the percentage of trials in each stage. These numbers have been altered for privacy, but this shows a typical funnel. We can see that about 75% of accounts that sign up become activated. Also, only a small percentage of trials have reached the invested stage, and our goal is to increase this percentage.

[Chart: Percent of trials by stage]
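
For reference, here is a minimal sketch of how funnel percentages like these could be computed, assuming one row per account with a logical column per stage. The accounts data frame and column names are illustrative, not our production schema.

stages <- c("signed_up", "activated", "operationally_activated", "engaged", "invested")
# Fraction of accounts that reached each stage, shown as a percentage
funnel <- sapply(accounts[stages], mean)
round(100 * funnel, 1)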

It may require some creativity to track things done by users outside of your software. For example, at Loggly we previously asked users to manually configure their operating system or server environment to send logs to us. We knew they were having trouble, but we didn’t know why. As a result, we created tools that automatically verify the user’s setup. In addition, they report the success rate so we can track it.

If you’re trying to figure out what contributes to a given success rate, break it down into components. For example, there are many ways to configure a system for logging, including Linux, Windows, Java and more. We tracked success rates on each of them separately, so we could see which was the most challenging for our users.
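
If you log one row per setup attempt with the platform and whether it succeeded, the breakdown is a one-liner. Again, the setups data frame and its columns are hypothetical:

# Success rate per platform (e.g. Linux, Windows, Java); the lowest rates show the most friction
sort(tapply(setups$succeeded, setups$platform, mean))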

4. Test New Product Improvements

Next, look at your conversion funnel and pick one metric you want to drive first. You can get ideas for increasing user success by reducing friction or adding more value. Look for steps in your journey map to simplify, or enhance existing steps with a bigger “wow” factor. Take a look at the design of the user experience: have you planned out a great first-run experience? Samuel Hulick’s book The Elements of User Onboarding has tons of great design ideas.

Rank ideas by the impact an improvement would have and how quickly you can ship it. The low-hanging fruit is often in metrics earlier in the funnel or in newly discovered stages. You can do a before/after comparison if the improvement is obvious; A/B testing might be better if you are unsure which version performs better.
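
For a simple before/after comparison, a two-proportion test tells you whether a change in a success rate is bigger than you’d expect from noise. The counts below are made up purely for illustration:

activated <- c(before=450, after=540)    # accounts that activated in each period
signups   <- c(before=1000, after=1000)  # accounts that signed up in each period
prop.test(activated, signups)            # tests whether the two rates differ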

We decided to focus on our activation rate first since too many people had trouble setting up their computer to use our service. We defined our activation rate as the percentage of signed up users who successfully sent logs to Loggly. Most of the improvement came from reducing friction in the setup process. Below, you can see an example of our setup instructions for Linux, which previously had 3 complicated steps to install and verify it was working. We made this into one easy step with a script that does all the steps automatically. When we saw it working in Linux, we created more scripts to automatically configure Mac OS X, Apache Web Server, and more.

[Screenshots: setup instructions before and after]

We also shared our insights with marketing and sales so all our company’s touch points are improved. For example, we made our customer success stages available in Salesforce, so our sales team can offer tips when doing outreach. We also integrated them into Marketo to personalize our introductory email program. We got some great results with our personalized email, but I’ll save that for another post.

Altogether, we were able to lift our activation rate by 35%, and now the large majority of our customers successfully send data. Plotting by month or quarter smooths out some of the variance and lets us see long-term trends. The activation rate grew quickly at first and then settled into a steady climb as we ironed out the biggest friction points.

[Chart: Activation rate]

5. Continuous Improvement Leads to Big Growth

Everyone loves quick wins and breakthroughs in high-level metrics like revenue, but most often we see the biggest improvements in low-level metrics. For example, a bug fix we made in our Java instructions instantly lifted our success rate for Java. All these small improvements add up over time to make a big impact on higher-level metrics like activation rate and revenue. You can read more about continuous improvement in the Kaizen management philosophy that led to Toyota’s success.

Improving our activation rate each month leads to compounding growth over time. You can see evidence of this in our aggregate activations. We launched our product with strong growth, and since then 35% of our activated accounts have come from optimizations that accelerated our growth. In total, we saw a 500% increase in revenue last year.

[Chart: Aggregate activations]

What about improving the rest of our user’s job to be done? For people who want to learn more about log management, we’re launching some great new guides in a few weeks. For people who are engaged and debugging problems, we have a lot of work to do in offering better insight into errors and their root causes. Delivering this insight when and where people want it is part of our product roadmap for this year.

I love sharing and learning from the community. Does your company look at customer success in the same way? What have you tried to improve user onboarding? Please add comments below so we can all learn from you.

Inside Product Management: David Breger of LinkedIn

This article was originally published on Startup Product on 8/29/14.

David Breger is the Tablet Mobile Product Lead at LinkedIn and a mentor at 500 Startups. Mobile now accounts for 43% of LinkedIn’s visits and will soon make up the majority. Previously, he created Endorsements, the fastest-growing product in LinkedIn history.

I reached out to David to learn more about Product Management at leading companies. I was lucky enough to interview him about his top goals and toughest challenges.

What are your top goals?

Our top goal is to bring the power of LinkedIn wherever you need it. People move between devices throughout the day: they use their desktops at work, mobile phones go with them everywhere, and tablets are used most between 6 and 11 at home. I unlock the power of the tablet by understanding how people use it differently than other devices. For example, it can be weird to search for jobs during the day on your work computer, but many people scan through interesting jobs at home on their tablets. Some key metrics I monitor are how often people use LinkedIn to apply for a job, how often they read posts written by other members or our Influencers, and how long their sessions last.


What are your biggest challenges?

Mobile changes really fast, much faster than web. How many web apps can you think of that haven’t changed much in the past 4 years? On the other hand, when iOS 7 came out everyone expected apps with the new style right away. It used to be that you shipped software once; now there is a new release every month. There is also a proliferation of devices. The iPad had just come out when I started, and now we have tons of Android devices, watches, and even Google Glass. It’s a challenge to achieve feature parity across platforms. It’d be even harder for a startup with fewer people and resources.

How do you decide which features make the cut?

Picking features for each platform is a bit of an art. We have to ask ourselves questions like “How fast is it growing? What can we do better on that platform over the current web experience? Will users be happy or content with a given choice?” When we launched our iPad app we started with the basic functionality our members needed and then we grew from there. Our head of product splits development into three buckets: core, strategic, and venture. For example, our content play started out as venture, then became strategic as it started working and became more important to the business, and now it’s part of our core to be a publishing platform of record. Our tablet app started out as venture and has now moved up.

What other interesting challenges are you working on?

Another one is that people can have a given version of the app forever without upgrading. We want to be able to experiment and test new things, but you can’t easily revert something on mobile. That’s why we do lots of A/B testing. In fact, our XLNT Platform allows us to run many tests in parallel. When we’re confident that a given version is the best, then we can lock it in.

What are you most proud of?

I’m really happy how quickly the company has shifted to “get” mobile. This transition has involved designers, sales, marketers, etc. and everyone has to know what that means. Our iPad app is evolving nicely since the interaction, navigation, and presentation are all unique to the tablet platform. I’m also really proud about the huge adoption of the endorsement product that I helped create. It’s been our fastest growing product ever, and has generated some really great insights.

[Screenshot: Endorsements]

What do you like most about working at LinkedIn?

It’s exciting that my work impacts hundreds of millions of people. We’ve gained a lot of penetration in the professional market, and we’re serving as a source of identity that almost everyone uses. We’ve become a standard and an important part of people’s lives. There’s so much innovation across areas like monetization and features, and we’re growing really fast. Additionally, there’s a great sense of ownership. There aren’t that many PMs at LinkedIn compared to the number of engineers and others, so I get to be responsible for a lot. I have the privilege of working with really great people, and that makes it a great place to work.

Building A Table For My Awesome Girlfriend

Part of love just magically happens, and part of it is something we build. Telling my girlfriend I care is a good start, but what better way to show it than building something by hand?

I started by creating a Pinterest board for inspiration. Luckily, her favorite also included a how-to guide for building it from pallet wood. She picked her own colors, how well sanded it would be, even the type of legs. We did a sample showing at her apartment so we could see it in its natural environment. It took almost three months of working on the weekends to finish. I hope she enjoys using it as much as I enjoyed building it.

I’ll share some tips in case you’re inspired to build something too. First, I rented my saws which saved hundreds over buying new tools at Home Depot. You can also get reclaimed wood on Craigslist which has more character than new wood. Also, our recycling facility offers free stains if you want to try out a bunch of colors.

The hardest part of this project was doing the finish. My Dad lent me his book Understanding Wood Finishing by Bob Flexner. Finishing is more of an art than a science, but this guy has tested every combination and tells you what works best.

[Photos of the table build]

Top Use Cases for Business Analytics

Business analytics helps you serve your customers better through a deeper understanding of their needs. It helps you optimize your team for efficiency and focus your efforts on the things most valuable to your customers.

What are your biggest goals and how would you like analytics to help you?  Here are popular examples for several departments:

Marketing

  • Attribute and optimize marketing spend for what drives purchases
  • Target content creation on topics that drive purchases
  • Personalize content to users based on their behavior and stage in the buying cycle

Sales

  • Score sales leads so you can focus on accounts that are likely to close or important ones that are at risk. Read my post describing how to predict conversions using R.
  • Identify the most successful tactics for engaging leads and prospects
  • Make personal recommendations based on a customer’s interactions on your site or trial

Customer Success

  • Prevent churn by identifying at-risk accounts early
  • Track a customer happiness index, learn what makes customers happy or unhappy, and figure out how to win over the unhappy ones
  • Proactively reach out to customers stuck at a given stage of setup or implementation

Product Management

  • Prioritize development on features that help or hurt conversion rates the most
  • Understand behavioral patterns in different segments of users
  • Reduce friction points that decrease retention, and encourage positive behaviors that increase purchases

How To Predict Customer Conversions Using R

At a recent Analytics Hackathon I was able to use R to double our prediction percentage for which customers converted to paid plans. An accurate prediction can generate lead scores so sales can focus their effort on promising opportunities or customers who are at risk. A good predictive model also hints at which features to improve and positive behaviors to encourage.

The problem with existing analytics tools like KissMetrics or Tableau is that you have to build your own models, which is a guessing game. SAS and SPSS can do it, but they cost well over $10,000. I was looking for a better way and found R, which will build optimal models (almost) automatically. While statistics is hard to learn, R makes it relatively easy because all the packages are basically plug-and-play. This turns it from a hard scientific problem into more of an engineering/hacker one. Once I learned R, I was able to do it in just a few hours. In this guide I will show you how to build a decision tree, and then use it to predict which customers will convert.

Step 1: Get Started With R

It might be a pain to learn a new language, but it’s worth it. The impressive package library turns a regular hacker into a statistical superman. R also lets you do set calculations, so your code will be short and sweet. There are many guides to learn the basics of the language. I really enjoyed R for Everyone, which I could quickly flip through on Kindle. I also looked at R courses on Udemy; I thought the pace was too slow, but if you can play content at 2-3X speed it might be a good option.

The most popular IDE is RStudio, and it’s free. Inside RStudio, make a new project to save your work. Also, remember to save your environment regularly, because RStudio only does it when you exit. I like to try out new commands on the console. Once I get a bit of code working, I copy it into an R script so I can step through it and experiment with new algorithms.

The hardest part is learning about the useful packages, but below I’ll discuss ones to get started with. You can have RStudio load your packages automatically when it opens your project. Just create a .Rprofile file and paste this in for now, then run it. If any packages need to be installed, you can do so through the Tools menu.

require(ggplot2)           # plotting
require(boot)              # bootstrap utilities
require(ROCR)              # prediction/performance objects for ROC-style curves
require(OptimalCutpoints)  # choosing classification cut points
require(caret)             # confusion matrices and modeling helpers
require(plyr)              # data manipulation
require(rpart)             # decision trees
require(rattle)            # fancyRpartPlot for nicer tree plots
require(reshape2)          # melt for reshaping data before plotting

Step 2: Load And Format Your Data

Next you’ll want to load data about what features or behaviors your customers are exhibiting, and whether they convert to a paid plan.  Here are types of features that we found valuable to analyze:

  • Size of their need at signup
  • Fully activated by completing the onboarding flow
  • Engagement
  • Adding more team members
  • Investments into the product by configuring it to save time
  • Feature usage
  • Retention

You can load data from a database or a CSV file. If you load it from the database, you’ll need to set up an ODBC connection and have the right drivers. I found it easier to not load strings as factors both because it loads faster, and because factors are more difficult to work with. The data I’m using in this post has been modified in order to protect privacy.

require(RODBC)  # provides odbcConnect and sqlQuery
# Option 1: query a database over an ODBC connection
db <- odbcConnect("DatabaseName")
f <- sqlQuery(db, "select * from table", stringsAsFactors=FALSE)
# Option 2: read from a CSV file instead
f <- read.csv("~/table.csv", stringsAsFactors=FALSE)

Here are some examples of ways to make your data easier to build models with.  Working with NA data, or empty data, is really challenging in R and can throw your calculations off. I just strip out any rows with NA data using the complete.cases command.  Hopefully you’re left with a decent representative sample to work with.

f <- f[complete.cases(f),]

Next you’ll want to select the data and features to use in your analysis. You can get an overall picture by looking at all the data, or zoom into a particular point in your funnel and look at conversion from one stage to the next. It’s better to select a small number of variables that account for a large portion of the variance in your data. This will help avoid building a model that overfits your data. If you have a limited supply of data, this will reduce the number of model parameters to learn. You may also want to combine dependent or closely related features into a single metric. If you later choose to do linear regression, it works best with linearly independent features.

f$additionalUsers <- with(f, count_admins + count_nonadmins)

I have anonymized the data set used in this example by modifying the data and replacing the variable names with numbers. I hope you’ll be able to imagine your own variables in each of these formulas.

Step 3: Visualize Your Data

When I get started with a new data set, I often like to visualize it first.  If there is a significant difference between paid and free users, then our prediction is likely to be a good one.  One way to do this is to see how the averages vary based on whether the account is paid or not. The aggregate function can calculate the means split by whether they are paid.

vars <- c("var1","var2","var3","var4","var5","var6","var7","var8")
tierSeg <- aggregate(f[vars], by=list(f$pdStat), mean)

We can then plot each of these variables in a bar chart. A popular charting package is ggplot2. It lets you specify a data frame, and then you can layer visualizations on top of it. In this case, I’m including the geom_bar for the bar chart, as well as facet_wrap to create one chart for each variable.

colnames(tierSeg)[1] <- "IsPaid"
tiers.m <- melt(tierSeg, id.vars='IsPaid')
ggplot(data=tiers.m, aes(IsPaid, value)) + geom_bar(aes(fill = IsPaid), stat="identity", position = "dodge") + facet_wrap(~ variable, nrow=2, scales="free_y")

[Chart: Variables by tier]

Step 4: Build Your Models

Whether a customer is paid or not is a binary variable, so common model choices include logistic regression or a decision tree. It’s somewhat tricky to interpret the coefficients of logistic regression, especially with dependent input variables. I’m going to choose a decision tree because it gives me a better clue about variable importance and sets easy-to-interpret thresholds. This model will predict the paid column using several input columns I created about feature and behavior usage.

m <- rpart(paid ~ var1 + var2 + var3 + var4 + var5 + var6 + var7 + var8, data=f, method="class")

You can visualize the tree using the fancyRpartPlot function from the rattle package. I prefer it over the standard plot because it’s easier to read. The nodes and cutoff points allow me to understand how different segments perform.

fancyRpartPlot(m)

[Plot: Decision tree]

It’s also useful to see the importance of various variables. Here’s how you can visualize it with a graph using the ggplot2 package. I’m sorting the rows in decreasing variable importance to make it easier to view.

a <- data.frame(importance = m$variable.importance)
a$variable <- factor(rownames(a), levels = rownames(a))
a$variable <- reorder(a$variable, -a$importance)  # sort bars by decreasing importance
ggplot(data=a, aes(x=variable, y=importance)) + geom_bar(stat="identity", fill="lightblue")

[Chart: Variable importance]

Step 5: Predict Your Conversions

Now you can predict the probability that someone will convert to a paid plan. This might be useful to your sales team directly, so they can focus on ones with a high probability of closing.

f$prob <- predict(m, f)[,"TRUE"]  # class probabilities; the column name matches the paid level "TRUE"

This prediction can also be thought of as a lead score, and your sales team might set a threshold for which leads are worth reaching out to. Different thresholds will have different rates of true positives and false positives. One way to visualize these tradeoffs is to plot a performance curve with the ROCR package (a classic ROC curve plots sensitivity against the false positive rate; here I’m choosing the measures that matter most to me). I care most about positive predictive value (ppv), which is the probability that an account the model classifies as paid will actually be paid. I also care about sensitivity, which is the percentage of actually paid accounts correctly recognized as paid. Here you can see that this model has about a 65% sensitivity and a 70% positive predictive value at a cut point near the middle. It’s a fairly good model, but it could probably be improved with more data.

pred <- prediction(f$prob, f$paid)
perf <- performance(pred, measure="sens", x.measure="ppv")
plot(perf, col=rainbow(10))

[Plot: Sensitivity vs. positive predictive value]

You can also pick a cut point that optimizes a given criterion. Here I’m choosing to maximize Kappa, which measures agreement between the predicted and actual paid plans. Once you have a threshold, use it to classify each account as paid or not.

cp <- optimal.cutpoints(X="prob", status="paid", tag.healthy=TRUE, methods=c("MaxKappa"), data=f, direction=">", control=control.cutpoints())
summary(cp)                # reports the optimal cut point (about 0.2 in this example)
f$predict <- f$prob > 0.2  # classify each account using that cut point

You can calculate the accuracy of your classifier at this cut point using a confusion matrix. It will also show you the positive predictive value and sensitivity. While an accuracy of 92% is quite good, the positive predictive value is lower because there is a higher prevalence of accounts that don’t go paid.
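
One way to produce a table like the one below is the confusionMatrix function from the caret package loaded earlier. Treat this as a sketch rather than the exact call behind these numbers; it assumes the predict and paid columns created above are logical.

# Compare predicted vs. actual paid status; "TRUE" is the class of interest
confusionMatrix(factor(f$predict), factor(f$paid), positive="TRUE")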

           Reference
Prediction FALSE TRUE
     FALSE  1060   62
      TRUE    73  202

               Accuracy : 0.9207672
                 95% CI : (0.9478534, 0.971242)
    No Information Rate : 0.9442023
    P-Value [Acc > NIR] : 0.006509636

                  Kappa : 0.7303789
 Mcnemar's Test P-Value : 1.000000000

            Sensitivity : 0.6512676056
            Specificity : 0.95876270
         Pos Pred Value : 0.6745454545
         Neg Pred Value : 0.93966728

Step 6: Interpret Your Results

Does the model make sense given what you know about your customers and your product? Are there enough data points that you can be confident in the model? Does it seem like the model could be overfitting the data? If you’re not satisfied with the answers, you might want to experiment with different models or variable transformations.

I already knew that the more days that someone uses our product and the more data they send to it, the more likely they are to convert. However, I learned that adding more users is also correlated with conversion. I could conduct an experiment to determine if an increase in users will cause an increase in conversions.  For example, we could A/B test offers to add more team members to the account. This is a new and potentially valuable key to increasing our conversion rate. Additionally, the higher accuracy lead scores will help our sales team be more efficient.

If you’ve worked in analytics before, what are your suggestions on how to make even better predictions? I’m always looking to learn from the best.

About The Author

Jason Skowronski is currently a Product Manager at Loggly. He studied machine learning in grad school and enjoys attending hackathons in the SF Bay Area. If you want to learn more or would like help, please contact him directly.

Brand New Blog

Today an online presence is the best way to communicate at scale to the world who you are and what you stand for. It’s a way to be part of discussions and be a live participant both inside and outside our usual circles.

I’m most interested in startups, product management, online marketing, software development, and tackling big problems. This will mainly be content I’ve written myself, and a great complement to the articles I share on Twitter (@mostlyjason). Please contact me with any questions!
