Growth Hacking: The Results Are In. Kind Of.


Wah, what happened?  How is it July, and not even early July but mid-July?  How have I not posted in so long?

Well, here’s the truth: I’ve been posting every week, but the damn zombie post has been eating them.  Because they were smart posts, and thus full of brains.

(You: “Doesn’t that mean this post is dumb?” Me: “Probably. Let’s move on.”)

Anyways, people mostly ignore everything I write, but I have gotten questions about the growth hacking experiment.  “How did it turn out?” they want to know.  “Have you disappeared off the face of the earth because you’ve been so very busy hacking away?”

Kind of?

Unfortunately for a data-driven marketer, I can only bring anecdata to the table, and the reason why illustrates the difficulties of testing things without true split conditions.  I quieted down on the blog for a couple of weeks, unlinked the blog from the LinkedIn profile, and generally tried to control things so that changes in views and incoming requests could be attributed to the title change and nothing else.

But at the same time, I had been working on a piece on the KISSMetrics blog about landing page optimization, which was a great opportunity, and one that I didn’t have that much control over. They ran the piece when they ran the piece, and I couldn’t really say, “Hey, could you hold off for six weeks?  I’m doing an experiment.”  So the piece ran right around the same time I Growth Hacked my job title.

So– yes, I did get a lot of traffic to my profile, and I did get a lot of inbound requests from people wanting hacking.  But did it come from the title change?  Did it come from the KISSMetrics piece?  Did it come from putting my Lean slides on SlideShare, which also produced traffic?

It’s hard to say.  My gut– which even a data-driven marketer has– says that yes, the title change did produce some traffic and some inbound opportunities.  I admit, this is not a resounding answer.

Now if you’ll excuse me, I’m off to carefully cull Andrew Chen’s posts and Twitter so I can get ahead of the curve on the next buzzword opportunity.

News You Can Use: An Infographic Of Every Walking Dead Zombie Kill

Infographics, while they are a bit overused these days, still represent a great way to package information in a visually engaging way.

That’s the wrapper I’m using to vaguely explain why this awesome infographic of every Walking Dead zombie kill should be on a marketing blog.  Because… content?  Also, statistics?  Anyways, this is sooper important, so just look:

Walking Dead Zombie Killers

I’m pretty surprised that Hershel has out-slaughtered Carl, and that Carl ranks ahead of Maggie.  Step it up, girl.

And a person with light OCD tendencies such as myself can’t help but love this careful cataloging of every weapon used in a kill, color-coded by season AND presented in alphabetical order.  (Although not sure how “scythe” got where it is… maybe it was a hand-mower at some point?)

Walking Dead Tool Use

 

Is there more? Oh yes, there’s more.  See the whole thing and marvel.

In conclusion: statistics.

Growth Hacking: Does It Bring All The Boys To The Yard?

I'm really not sure how this particular shot of Willy Wonka became a meme. He actually looks pretty friendly here.

The first time I encountered the term “Growth Hacking” was on the LinkedIn profile of my Simplee colleague, Aaron Ginn.  “Ha-ha, that is some buzzword BS,” I thought dismissively, because buzzwords give me hives.  But soon enough, Aaron was writing a series on growth hacking in TechCrunch.  Apparently it is a thing now. (And apparently using cliches doesn't give me hives.)

“Get with the program,” Aaron told me. “Growth hacking is the new black.”

So what is growth hacking?  Allow entrepreneur and blogger Andrew Chen to explain:

Growth hackers are a hybrid of marketer and coder, one who looks at the traditional question of “How do I get customers for my product?” and answers with A/B tests, landing pages, viral factor, email deliverability, and Open Graph. On top of this, they layer the discipline of direct marketing, with its emphasis on quantitative measurement, scenario modeling via spreadsheets, and a lot of database queries.

Huh, I thought when I read that.  That just sounds like the sort of smart, scrappy marketing every startup should be doing.  But whatevs, I guess if we need a new title to take its place alongside “Social Media Guru” (15,157 results on LinkedIn) and “Viral Ninja” (312 results, a growth opportunity! Although sounding somewhat like an aggressive case of Japanese encephalitis), then “Growth Hacker” works fine.

Cut to today; I was chatting with another marketing nerd and he mentioned that after he finally broke down and put the term “Growth Hacker” on his LinkedIn profile, the opportunities came pouring in like gravy at a Southern buffet.

“Seriously,” he said. “Give in. Change your title.  Belly up to the trough.”

Okay, I thought, I’ll do it.  BUT I’LL ONLY DO IT FOR SCIENCE.  I realized this is an exciting chance to– say it with me– test!  I’m going to change my title and see how it affects how many people look at my profile.

This is a good example of the kind of experiment that can’t really be tackled with a split test.  That means I have to try to control what other factors I can.  And pretty much the only factor I can control is that for the last month I’ve been running all my blog posts through LinkedIn, as well as participating in some groups there.  So I disconnected my LinkedIn from the blog and– I’m sorry to do this to you, Lean Startup Group– I’ll also refrain from commenting in groups.  I’ll let the control conditions stand for a couple of weeks, and then I’ll change the title.

And then I’ll sit back with a napkin around my neck and a piece of white bread in each hand, waiting to sop up that sweet, sweet gravy.

It’s exciting, I know.  Try to sleep well at night, regardless.  I will keep all The World apprised of results.

Significance

two coconuts

This is what came up when I searched for “compare.” This and lots of pictures of a meerkat which is apparently named Compare.

So you’re running a marketing campaign, because you are awesome and know that testing is your path to improved performance and general hosannahs.  You’ve got a couple of different banner ads with conversion rates of 4% and 5%.  (Hahaha, I know no banner ad ever in the history of the Internet has had that kind of conversion rate, but stay with me for a minute.)  Time to declare Mr. 5% the winner and move on, right?

Not necessarily.

You’ve probably noticed that your conversion rates don’t stay very stable– one week they’re down, one week they’re up, even if you haven’t done a thing to your campaign.  So how do you know that the one that looks to be the winner today won’t take a nosedive tomorrow?

By testing the significance of the difference between the two ads.

Results are considered statistically significant if it is unlikely that they occurred by chance.  Statistical significance is often expressed as a confidence level– the higher the confidence, the less likely it is that the difference between two results is due to random chance.  A confidence level of 95% is usually considered the threshold for declaring a winner, but you may settle for less if you’re trying to move through testing options quickly.

If you’re hungry for the stats– and who isn’t?– you can take a look here to see the specifics of how you can compare two different proportions (which is what a conversion rate is) to see if the difference between them is significant enough.
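If you’d rather see that math as code than as formulas, here’s a minimal sketch of the two-proportion comparison in Python, using just the standard library. (The numbers are made up to match the 4% vs. 5% example above.)

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates (successes / trials) with a two-sided z-test.

    Returns (z, p_value, confidence), where higher confidence means you can be
    more sure the difference isn't just random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the assumption that both ads actually convert the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value, 1 - p_value

# Hypothetical example: ad A converted 40 of 1,000 viewers (4%), ad B 50 of 1,000 (5%)
z, p, confidence = two_proportion_z_test(40, 1000, 50, 1000)
print(f"z = {z:.2f}, p = {p:.3f}, confidence = {confidence:.0%}")
```

Run that and you’ll find 4% vs. 5% at 1,000 impressions apiece only gets you to roughly 72% confidence– which is exactly why you don’t crown Mr. 5% just yet.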

If you’d just like to skip to the part where you check your results, there are a couple of online tools you can use (here or here).

But if you’re like me you’ll quickly find that using the online tool gets tedious.  That’s why I created a spreadsheet that lets you input the impressions and conversions of the winning and losing ads and from that calculates the degree of confidence in the result.  You can find it here:

Split Testing Results Calculator

Having it in spreadsheet form makes it easier to use it for your own glorious purposes– for example, I created a different version that lets me paste in downloaded AdWords results and mark the winner and loser, and it automatically picks out the right stats and throws up my results.  Magic.

Quick note on the inputs

I mostly use this for PPC ad tests, although you can use it for banners, emails, and any old thing with a response rate.  You need two stats:

  • Population stat: this is going to be something like impressions, opened emails, etc. Basically, it’s how many people saw your thing.
  • Success stat: this is the thing you wanted to happen. God willing you’ll make it a conversion event and not clicks.

For PPC ads, some people just use conversion rate, meaning conversions over clicks.  However, there could easily be a situation where an ad converted better but had a lower click-through rate, so that you end up getting proportionally fewer of the people who originally saw the ad.  Therefore I prefer to take it all the way from impressions.  Which of course means always having to calculate test results by hand, because Google doesn't even offer conversions over impressions as a stat.
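If you want to skip some of the hand math, a few lines of Python will pull conversions over impressions straight out of an exported report. A rough sketch– the file name and column names here are guesses at what an AdWords export might look like, so swap in whatever yours actually uses:

```python
import csv

# These column names are assumptions -- adjust them to match your actual export.
AD_COL = "Ad"
IMPRESSIONS_COL = "Impressions"
CONVERSIONS_COL = "Conversions"

with open("ad_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        impressions = int(row[IMPRESSIONS_COL].replace(",", ""))
        conversions = float(row[CONVERSIONS_COL].replace(",", ""))
        # Conversions over impressions: the stat Google won't hand you directly
        rate = conversions / impressions if impressions else 0.0
        print(f"{row[AD_COL]}: {rate:.3%} "
              f"({conversions:.0f} conversions / {impressions} impressions)")
```

Drop the impression and conversion counts for your two ads into the significance test above (or the spreadsheet) and you’ve got your conversions-over-impressions comparison.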

Now go be all scientific.

Photo by thienzieyung via Flickr.

Split Testing In A World Of Tiny Traffic

As you know, I think split tests rock and you should definitely do them.  However, over at TechCrunch, Robert J. Moore brings up a great point about A/B testing:

…What if, like most start-ups, you only have a few thousand visitors per month? In these cases, testing small changes can invoke a form of analysis paralysis that prevents you from acting quickly.

Consider a site that has 10,000 visitors per month and has a 5 percent conversion rate. The table below shows how long it will take to run a “conclusive” test (95 percent confidence) based on how much the change impacts conversion rate.

A/B Testing Populations

His point: if you’re a startup with low traffic, you don’t have as much opportunity to cycle through tests as a site with more visitor flow does, so you want to make sure the tests you do run will have a big impact.  Change only something small about the home page, and you may find yourself needing to let the test run for weeks before you reach significance.  Some implications of this:

  • Take traffic into account when designing a test plan.  If you’re doing a banner ad campaign with several different segments, it probably only makes sense to run tests in the segments big enough to get reasonable traffic.  If you’ll only get a few conversions from a segment, you likely won’t have enough volume to generate significant results.  On the principle of minimizing the resources you spend on projects, only spend time preparing and tracking tests if you are likely to see results.
  • Start big, go small.  If you’re in the early days of testing, start at the concept level– different tones, different layouts, different messages.  Test things that are likely to have a bigger impact, and refine from there.  That being said…
  • Small changes can have big impacts.  Once I tested search ads that were completely identical except that one had a period after the text and one didn’t.  The period generated a surprisingly big lift. I just heard from a friend that changing one word in the call to action on their home page lifted response by 20%.  On the other hand I’ve run tests where two ads were completely different and didn’t really get a significantly different result.

If you’ve got tiny traffic you will have fewer test cycles available to you.  You don’t always know ahead of time what’s going to move the needle, so check your results regularly and end tests that don’t seem to be going anywhere.
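If you want to ballpark test duration yourself before committing, the standard sample-size formula for comparing two proportions gets you close. Here’s a rough sketch– it assumes a two-sided test at 95% confidence and 80% power, which may not be exactly what Moore’s table used, so treat the output as an estimate:

```python
from math import sqrt, ceil

def visitors_needed(base_rate, relative_lift, z_alpha=1.96, z_power=0.84):
    """Rough visitors needed per variant to detect a relative lift in conversion
    rate at ~95% confidence (two-sided) with ~80% power."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

monthly_visitors = 10_000   # total traffic, split across the two variants
base_conversion = 0.05      # 5% baseline, as in the example above
for lift in (0.05, 0.10, 0.25, 0.50):
    n = visitors_needed(base_conversion, lift)
    months = (2 * n) / monthly_visitors
    print(f"{lift:.0%} relative lift: ~{n:,} visitors per variant (~{months:.1f} months)")
```

Even eyeballing the output, small lifts at 10,000 visitors a month take months to confirm– which is the whole argument for testing big swings first.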

Lean Your Marketing: Marketing Is Not Rotisserie Chicken

Marketing Isn't Rotisserie Chicken

Wanna hear a sad story about some orphans?

Pretty much every company I’ve helped out with paid search marketing has been running tests– in the sense that there were a couple ads per ad group, and they were running on some sort of rotation.  But in general, that’s as far as the tests have gone.  No one has ever come back to check them.  On and on they rotate, through the months and years, speaking their results out into an unhearing universe.  So common is this, in fact, that Google introduced a new setting for ads where they rotate evenly for 90 days and then start to optimize (previously, when you picked ads to rotate evenly they would do so indefinitely).  They were clearly tired of leaving money on the table because companies weren’t tending to their sad, losing orphan ads.

I know, it’s heart-breaking.  Take a moment.  Get a tissue.

Marketing Is Not Rotisserie Chicken – You Can’t Set It And Forget It.

It should go without saying, but if you’ve got a test underway you should regularly check the results, promote the winners, and set up new tests.  That’s really the only way testing is helpful, in fact.

Even if you’re not testing, you want to make sure that your programs aren’t on autopilot.  Running banner ads? Keep your eye on the numbers and make sure they’re still performing.  Sending outreach emails?  Look at the conversion rates (and of course, test to make them better).  Running an affiliate program?  Try out different program descriptions and experiment with what kind of offers get the best uptake.

You Think You’ll Remember, But You Won’t.

The other way to make testing helpful?  Write down what you’re testing along with the results.  It seems fresh to you now, but as the tests and the learnings pile up in your big brain, stuff from a couple of months ago will start to fade away.  Plus, writing down the results of your tests means that you will notice when there aren’t actually any results, meaning that you haven’t completed your experiment cycle and gotten to the most important part– learning.

In Lean Startup, Eric Ries describes using a kanban approach to product development; projects are either planned, in progress, built, or validated.  Each column is only allowed a certain number of projects, so to move more in, some have to be moved out.  This system ensures that features aren’t being built without being validated.

So it should be with the things you are testing with your marketing.  Use a spreadsheet or even a Word doc; write down what you’re testing, what the results were, and what conclusions you drew from it.  If you find yourself with a lot of open tests and no results, go back and close those down.
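If a spreadsheet feels like too much ceremony, even a scrap of code (or just its structure) works as a test log. A minimal sketch– the fields, entries, and limit below are just suggestions:

```python
from datetime import date

# A bare-bones test log; keep whatever fields your process actually needs.
test_log = [
    {"test": "Landing page headline A vs B", "started": date(2013, 5, 1),
     "status": "validated", "result": "B won", "conclusion": "Use headline B"},
    {"test": "CTA button copy", "started": date(2013, 6, 10),
     "status": "in progress", "result": None, "conclusion": None},
]

MAX_OPEN_TESTS = 3  # kanban-style limit: close something out before starting more

open_tests = [t for t in test_log if t["status"] == "in progress"]
if len(open_tests) > MAX_OPEN_TESTS:
    print("Too many open tests -- close some out before starting another:")
    for t in open_tests:
        print(f"  {t['test']} (running since {t['started']})")
```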

Pump The Good, Dump The Bad

And in the spirit of keeping your marketing programs alive and fresh– use the baseline performance metrics you got from starting early to critically examine each program you’re running.

Are you finding PPC doesn’t come within shouting distance of your target CPA? DUMP IT.

The traffic you’re getting off of PR placements doesn’t seem to justify your $10,000/month retainer? DUMP IT.

Preparing ads for re-targeting takes you a lot of time, yet the volume it produces is tiny relative to other things you’ve tried? DUMP IT.

The traffic you’re getting from your guest blogging spot is solid and it converts like a dream? DO MORE OF THAT.

And of course, record it all in your learnings document so that six months later the replacement who was hired because you got promoted because you’re so awesome isn’t like, “Why on earth aren’t we doing PPC?  So dumb.  I’ll start right away!”

Hey, wait a minute… what’s this?

Success Baby

It’s Success Baby! We’re done!  Congratulations, you now know everything you need to know to become a flawless marketing stud.  People will whisper your name in the halls as you pass, and this time it won’t be because of your horrid yellow socks.  Go forth and conquer.

(Photo credit for savory chickens: tomkellyphoto via Flickr. Photo credit for handsome young Clint: some movie studio. Please don’t sue me, movie studio. Photo credit for Success Baby: who knows? That baby is a meme. He’s all over the internet.)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20

Lean Your Marketing: If You Can’t Do A Split Test, Control What You Can

Control What You Can

As a certified control freak, I loves me some split tests.  So clear-cut, so reliable!  But the world is the world, and sometimes you find yourself with questions that can’t be answered with a split test. Shocking, I know!  But it happens.

Say, for example, that you want to figure out whether it makes sense to bid on your brand terms in PPC when they already rank well in SEO.  This isn’t an easy thing to test with a split.  You could control your ads so that they only show to 50% of search traffic, but it will be hard to parse out which SEO traffic wasn’t served against a search ad.

What is a marketing nerd to do?

The same principle applies to this test design as to your split tests– control what you can to minimize outside factors.  For something like the PPC question, you might run your ads every other day so you know which days didn’t have ads running and can split out those SEO results.
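To make that concrete, here’s a rough sketch of how you might split out the SEO results afterwards, assuming you’ve exported daily organic brand-term traffic to a CSV and that ads ran on even-numbered days of the month. (The file name and column names are made up– swap in whatever your analytics export actually calls them.)

```python
import csv
from datetime import date
from statistics import mean

# Hypothetical export with columns "date" (YYYY-MM-DD) and "organic_brand_visits".
ads_on_days, ads_off_days = [], []

with open("daily_brand_traffic.csv", newline="") as f:
    for row in csv.DictReader(f):
        day = date.fromisoformat(row["date"])
        visits = int(row["organic_brand_visits"])
        # The every-other-day schedule: ads ran on even days of the month
        if day.day % 2 == 0:
            ads_on_days.append(visits)
        else:
            ads_off_days.append(visits)

print(f"Organic brand visits, ads on:  {mean(ads_on_days):.1f}/day")
print(f"Organic brand visits, ads off: {mean(ads_off_days):.1f}/day")
# If organic traffic drops noticeably on the ad days, the PPC ads are probably
# cannibalizing clicks you would have gotten for free anyway.
```

From there you can decide whether the paid clicks are adding traffic or just paying for what SEO already delivers.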

(Some of these slides truly had about 15 seconds of material behind them… this was one.  We are done for today, class.)

Next up: Why marketing is not like rotisserie chicken.

(Photo credit: pixyliao via Flickr)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20