Growth Hacking: The Results Are In. Kind Of.


Wah, what happened?  How is it July, and not even early July but mid-July?  How have I not posted in so long?

Well, here’s the truth: I’ve been posting every week, but the damn zombie post has been eating them.  Because they were smart posts, and thus full of brains.

(You: “Doesn’t that mean this post is dumb?” Me: “Probably. Let’s move on.”)

Anyways, people mostly ignore everything I write, but I have gotten questions about the growth hacking experiment.  “How did it turn out?” they want to know.  “Have you disappeared off the face of the earth because you’ve been so very busy hacking away?”

Kind of?

Unfortunately for a data-driven marketer, I can only bring anecdata to the table, and the reason why illustrates the difficulties of testing things without true split conditions.  I quieted down on the blog for a couple of weeks, unlinked the blog from the LinkedIn profile, and generally tried to control things so that changes in views and incoming requests could be attributed to the title change and nothing else.

But at the same time, I had been working on a piece on the KISSMetrics blog about landing page optimization, which was a great opportunity, and one that I didn’t have that much control over. They ran the piece when they ran the piece, and I couldn’t really say, “Hey, could you hold off for six weeks?  I’m doing an experiment.”  So the piece ran right around the same time I Growth Hacked my job title.

So– yes, I did get a lot of traffic to my profile, and I did get a lot of inbound requests from people wanting hacking.  But did it come from the title change?  Did it come from the KISSMetrics piece?  Did it come from putting my Lean slides on SlideShare, which also produced traffic?

It’s hard to say.  My gut– which even a data-driven marketer has– says that yes, the title change did produce some traffic and some inbound opportunities.  I admit, this is not a resounding answer.

Now if you’ll excuse me, I’m off to carefully cull Andrew Chen’s posts and Twitter so I can get ahead of the curve on the next buzzword opportunity.

News You Can Use: An Infographic Of Every Walking Dead Zombie Kill

Infographics, while they are a bit overused these days, still represent a great way to package information in a visually engaging way.

That’s the wrapper I’m using to vaguely explain why this awesome infographic of every Walking Dead zombie kill should be on a marketing blog.  Because… content?  Also, statistics?  Anyways, this is sooper important, so just look:

[Infographic: Walking Dead zombie kills, by character]

I’m pretty surprised that Hershel has out-slaughtered Carl, and that Carl ranks ahead of Maggie.  Step it up, girl.

And a person with light OCD tendencies such as myself can’t help but love this careful cataloging of every weapon used in a kill, color-coded by season AND presented in alphabetical order.  (Although not sure how “scythe” got where it is… maybe it was a hand-mower at some point?)

[Infographic: every weapon used in a kill, color-coded by season and in alphabetical order]

Is there more? Oh yes, there’s more.  See the whole thing and marvel.

In conclusion: statistics.

Growth Hacking: Does It Bring All The Boys To The Yard?

[Image caption: I’m really not sure how this particular shot of Willy Wonka became a meme. He actually looks pretty friendly here.]

The first time I encountered the term “Growth Hacking” was on the LinkedIn profile of my Simplee colleague, Aaron Ginn.  “Ha-ha, that is some buzzword BS,” I thought dismissively, because buzzwords give me hives.  But soon enough, Aaron was writing a series on growth hacking in TechCrunch.  Apparently it is a thing now.  (And apparently using clichés doesn’t give me hives.)

“Get with the program,” Aaron told me. “Growth hacking is the new black.”

So what is growth hacking?  Allow entrepreneur and blogger Andrew Chen to explain:

Growth hackers are a hybrid of marketer and coder, one who looks at the traditional question of “How do I get customers for my product?” and answers with A/B tests, landing pages, viral factor, email deliverability, and Open Graph. On top of this, they layer the discipline of direct marketing, with its emphasis on quantitative measurement, scenario modeling via spreadsheets, and a lot of database queries.

Huh, I thought when I read that.  That just sounds like the sort of smart, scrappy marketing every startup should be doing.  But whatevs, I guess if we need a new title to take its place alongside “Social Media Guru” (15,157 results on LinkedIn) and “Viral Ninja” (312 results, a growth opportunity! Although sounding somewhat like an aggressive case of Japanese encephalitis), then “Growth Hacker” works fine.

Cut to today; I was chatting with another marketing nerd and he mentioned that after he finally broke down and put the term “Growth Hacker” on his LinkedIn profile, the opportunities came pouring in like gravy at a Southern buffet.

“Seriously,” he said. “Give in. Change your title.  Belly up to the trough.”

Okay, I thought, I’ll do it.  BUT I’LL ONLY DO IT FOR SCIENCE.  I realized this is an exciting chance to– say it with me– test!  I’m going to change my title and see how it affects how many people look at my profile.

This is a good example of the kind of experiment that can’t really be tackled with a split test.  That means I have to try to control what other factors I can.  And pretty much the only factor I can control is that for the last month I’ve been running all my blog posts through LinkedIn, as well as participating in some groups there.  So I disconnected my LinkedIn from the blog and– I’m sorry to do this to you, Lean Startup Group– I’ll also refrain from commenting in groups.  I’ll let the control conditions stand for a couple of weeks, and then I’ll change the title.

And then I’ll sit back with a napkin around my neck and a piece of white bread in each hand, waiting to sop up that sweet, sweet gravy.

It’s exciting, I know.  Try to sleep well at night, regardless.  I will keep all The World apprised of results.

Significance

[Photo: two coconuts]
This is what came up when I searched for “compare.” This and lots of pictures of a meerkat which is apparently named Compare.

So you’re running a marketing campaign, because you are awesome and know that testing is your path to improved performance and general hosannas.  You’ve got a couple of different banner ads with conversion rates of 4% and 5%.  (Hahaha, I know no banner ad ever in the history of the Internet has had that kind of conversion rate, but stay with me for a minute.)  Time to declare Mr. 5% the winner and move on, right?

Not necessarily.

You’ve probably noticed that your conversion rates don’t stay very stable– one week they’re down, one week they’re up, even if you haven’t done a thing to your campaign.  So how do you know that the one that looks to be the winner today won’t take a nosedive tomorrow?

By testing the significance of the difference between the two ads.

Results are considered statistically significant if it is unlikely that they occurred by chance.  Statistical significance is also sometimes referred to as a confidence level– the higher the significance, the more confident you are that differences between two results aren’t due to random chance.  Often a confidence level of 95% is considered the threshold for declaring a winner, but you may accept a lower threshold if you’re trying to move through testing options quickly.

If you’re hungry for the stats– and who isn’t?– you can take a look here to see the specifics of how you can compare two different proportions (which is what a conversion rate is) to see if the difference between them is significant enough.

If you’d just like to skip to the part where you check your results, there are a couple of online tools you can use (here or here).

But if you’re like me you’ll quickly find that using the online tool gets tedious.  That’s why I created a spreadsheet that lets you input the impressions and conversions of the winning and losing ads and from that calculates the degree of confidence in the result.  You can find it here:

Split Testing Results Calculator

Having it in spreadsheet form makes it easier to use it for your own glorious purposes– for example, I created a different version that lets me paste in downloaded AdWords results and mark the winner and loser, and it automatically picks out the right stats and throws up my results.  Magic.
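(If you’d rather script it than spreadsheet it: the math underneath is a standard two-proportion z-test.  Here’s a minimal Python sketch of the same calculation– the function name and sample numbers are mine, for illustration, not pulled from the spreadsheet.)

    import math
    from statistics import NormalDist

    def split_test_confidence(imps_a, convs_a, imps_b, convs_b):
        """Two-proportion z-test: the (two-sided) confidence that the
        difference between two conversion rates isn't just chance."""
        p_a, p_b = convs_a / imps_a, convs_b / imps_b
        p_pool = (convs_a + convs_b) / (imps_a + imps_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
        z = abs(p_a - p_b) / se
        return 2 * NormalDist().cdf(z) - 1

    # The 4% vs. 5% ads from above, at 10,000 impressions each:
    print(f"{split_test_confidence(10000, 400, 10000, 500):.1%}")  # ~99.9%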

Quick note on the inputs

I mostly use this for PPC ad tests, although you can use it for banners, emails, and any old thing with a response rate.  You need two stats:

  • Population stat: this is going to be something like impressions, opened emails, etc. Basically, it’s how many people saw your thing.
  • Success stat: this is the thing you wanted to happen. God willing you’ll make it a conversion event and not clicks.

For PPC ads some people just use conversion rate, meaning conversions over clicks.  However, there could easily be a situation where an ad converts better per click but has a lower click-through rate, so that you end up converting proportionally fewer of the people who originally saw the ad.  Therefore I prefer to take it all the way from impressions.  Which of course means always having to calculate test results by hand, because Google doesn’t even offer conversions over impressions as a stat.
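(A made-up illustration of why the denominator matters– every number here is hypothetical:)

    # Ad A looks better per click, but Ad B converts more of the
    # people who actually saw the ad.
    ads = {
        "A": {"impressions": 10000, "clicks": 200, "conversions": 20},
        "B": {"impressions": 10000, "clicks": 400, "conversions": 32},
    }
    for name, s in ads.items():
        per_click = s["conversions"] / s["clicks"]
        per_imp = s["conversions"] / s["impressions"]
        print(f"Ad {name}: {per_click:.1%} per click, {per_imp:.2%} per impression")
    # Ad A: 10.0% per click, 0.20% per impression
    # Ad B: 8.0% per click, 0.32% per impression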

Now go be all scientific.

Photo by thienzieyung via Flickr.

Split Testing In A World Of Tiny Traffic

As you know, I think split tests rock and you should definitely do them.  However, over at TechCrunch Robert J. Moore brings up a great point about A/B testing:

…What if, like most start-ups, you only have a few thousand visitors per month? In these cases, testing small changes can invoke a form of analysis paralysis that prevents you from acting quickly.

Consider a site that has 10,000 visitors per month and has a 5 percent conversion rate. The table below shows how long it will take to run a “conclusive” test (95 percent confidence) based on how much the change impacts conversion rate.

[Table: how long a test takes to reach 95% confidence, by size of the conversion-rate change]

His point: if you’re a startup with low traffic, you don’t have as much opportunity to cycle through tests as a site with more visitor flow does, so you want to make sure the tests you do run will have a big impact.  Change only something small about the home page, and you may find yourself needing to let the test run for weeks before you reach significance.  Some implications of this:

  • Take traffic into account when designing a test plan.  If you’re doing a banner ad campaign with several different segments, it probably only makes sense to run tests in the segments big enough to get reasonable traffic.  If you’ll only get a few conversions from a segment, you likely won’t have enough volume to generate significant results.  On the principle of minimizing the resources you expend on projects, only spend time preparing and tracking tests if you are likely to see results.
  • Start big, go small.  If you’re in the early days of testing, start at the concept level– different tones, different layouts, different messages.  Test things that are likely to have a bigger impact, and refine from there.  That being said…
  • Small changes can have big impacts.  Once I tested search ads that were completely identical except that one had a period after the text and one didn’t.  The period generated a surprisingly big lift. I just heard from a friend that changing one word in the call to action on their home page lifted response by 20%.  On the other hand I’ve run tests where two ads were completely different and didn’t really get a significantly different result.

If you’ve got tiny traffic you will have fewer test cycles available to you.  You don’t always know ahead of time what’s going to move the needle, so check your results regularly and end tests that don’t seem to be going anywhere.
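(If you want to run Moore’s kind of math for your own traffic, here’s a rough Python sketch using the standard two-proportion sample-size formula.  The traffic, baseline, and lift numbers below are illustrative– they’re not taken from his table.)

    import math
    from statistics import NormalDist

    def days_to_significance(daily_visitors, base_rate, rel_lift,
                             alpha=0.05, power=0.8):
        """Per-variant sample size needed to detect a relative lift,
        translated into days given the traffic both variants share."""
        p1 = base_rate
        p2 = base_rate * (1 + rel_lift)
        z_a = NormalDist().inv_cdf(1 - alpha / 2)
        z_b = NormalDist().inv_cdf(power)
        p_bar = (p1 + p2) / 2
        n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
              + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)
        return 2 * n / daily_visitors

    # 10,000 visitors/month (~333/day), 5% baseline, 10% relative lift:
    print(f"{days_to_significance(333, 0.05, 0.10):.0f} days")  # ~190 days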

Lean Your Marketing: Marketing Is Not Rotisserie Chicken


Wanna hear a sad story about some orphans?

Pretty much every company I’ve helped out with paid search marketing has been running tests– in the sense that there were a couple ads per ad group, and they were running on some sort of rotation.  But in general, that’s as far as the tests have gone.  No one has ever come back to check them.  On and on they rotate, through the months and years, speaking their results out into an unhearing universe.  So common is this, in fact, that Google introduced a new setting for ads where they rotate evenly for 90 days and then start to optimize (previously, when you picked ads to rotate evenly they would do so indefinitely).  They were clearly tired of leaving money on the table because companies weren’t tending to their sad, losing orphan ads.

I know, it’s heart-breaking.  Take a moment.  Get a tissue.

Marketing Is Not Rotisserie Chicken – You Can’t Set It And Forget It.

It should go without saying, but if you’ve got a test underway you should regularly check the results, promote the winners, and set up new tests.  That’s really the only way testing is helpful, in fact.

Even if you’re not testing, you want to make sure that your programs aren’t on autopilot.  Running banner ads? Keep your eye on the numbers and make sure they’re still performing.  Sending outreach emails?  Look at the conversion rates (and of course, test to make them better).  Running an affiliate program?  Try out different program descriptions and experiment with what kind of offers get the best uptake.

You Think You’ll Remember, But You Won’t.

The other way to make testing helpful?  Write down what you’re testing along with the results.  It seems fresh to you now, but as the tests and the learnings pile up in your big brain, stuff from a couple of months ago will start to fade away.  Plus, writing down the results of your tests means that you will notice when there aren’t actually any results, meaning that you haven’t completed your experiment cycle and gotten to the most important part– learning.

In Lean Startup, Eric Ries describes using a kanban approach to product development: projects are either planned, in progress, built, or validated.  Each column is only allowed a certain number of projects, so to move more in, some have to be moved out.  This system ensures that features aren’t being built without being validated.

So it should be with the things you are testing with your marketing.  Use a spreadsheet or even a Word doc; write down what you’re testing, what the results were, and what conclusions you drew from it.  If you find yourself with a lot of open tests and no results, go back and close those down.

[Slide: Pump The Good, Dump The Bad]

And in the spirit of keeping your marketing programs alive and fresh– use the baseline performance metrics you got from starting early to critically examine each program you’re running.

Are you finding PPC doesn’t come within shouting distance of your target CPA? DUMP IT.

The traffic you’re getting off of PR placements doesn’t seem to justify your $10,000/month retainer? DUMP IT.

Preparing ads for re-targeting takes you a lot of time, yet the volume it produces is tiny relative to other things you’ve tried? DUMP IT.

The traffic you’re getting from your guest blogging spot is solid and it converts like a dream? DO MORE OF THAT.

And of course, record it all in your learnings document so that six months later the replacement who was hired because you got promoted because you’re so awesome isn’t like, “Why on earth aren’t we doing PPC?  So dumb.  I’ll start right away!”

Hey, wait a minute… what’s this?

[Image: Success Baby]

It’s Success Baby! We’re done!  Congratulations, you now know everything you need to know to become a flawless marketing stud.  People will whisper your name in the halls as you pass, and this time it won’t be because of your horrid yellow socks.  Go forth and conquer.

(Photo credit for savory chickens: tomkellyphoto via Flickr. Photo credit for handsome young Clint: some movie studio. Please don’t sue me, movie studio. Photo credit for Success Baby: who knows? That baby is a meme. He’s all over the internet.)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20

Lean Your Marketing: If You Can’t Do A Split Test, Control What You Can


As a certified control freak, I loves me some split tests.  So clear-cut, so reliable!  But the world is the world, and sometimes you find yourself with questions that can’t be answered with a split test. Shocking, I know!  But it happens.

Say, for example, that you want to figure out whether it makes sense to bid on your brand terms in PPC when they already rank well in SEO.  This isn’t an easy thing to test with a split.  You could control your ads so that they only show to 50% of search traffic, but it will be hard to parse out which SEO traffic wasn’t served against a search ad.

What is a marketing nerd to do?

The same principle applies to this test design as to your split tests– control what you can to minimize outside factors.  For something like the PPC question, you might run your ads every other day so you know which days didn’t have ads running and can split out those SEO results.

(Some of these slides truly had about 15 seconds of material behind them… this was one.  We are done for today, class.)

Next up: Why marketing is not like rotisserie chicken.

(Photo credit: pixyliao via Flickr)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20


Lean Your Marketing: Control The Conditions With Split Tests



Once you’ve got a hypothesis in hand, it’s time to get testing.

Split testing, aka A/B testing, means testing two options against each other at the same time and against the same randomized population.  This is pretty easy to do in a lot of digital marketing:

  • The major paid search providers allow for different ads to be rotated evenly against your keywords
  • Many banner ad networks also provide for testing, although direct banner buys may not
  • If your email marketing provider doesn’t give an automatic way to test emails, you can pretty easily split your list yourself and send out two different messages
  • Services like Optimizely or Unbounce make landing page testing easy

Split testing is a great way for you to feel comfortable that the result you observe is a result of the thing you are testing and not some other random factor.  For example, if you ran one search ad for two weeks with a conversion rate of 3%, then another for two weeks with a conversion rate of 5%, could you really be sure that the second ad won because it was better?  Maybe it was because it was two weeks closer to Christmas.  Or maybe it was because you had a site outage the first two weeks. Or you had some good PR the second two weeks.

But if you test the two ads against each other in a split test, alternating them evenly to a randomized population, then all those other factors don’t matter, and you feel comfortable saying the second ad was just better.

Keep It Under Control

So setting up split tests helps you minimize extraneous factors; you can minimize them even further by paying careful attention to how you set up your tests. Make sure you limit the differences in your two marketing assets to just the thing you want to test.

Say you want to test whether having a call to action in your search ad improves response.  What would be wrong with these two ads as your test subjects?

[Example: two “Monkey Chow” search ads that differ in headline, message, length, and call to action]

The problem is, although one ad has a call to action (“Try It Now!”) and one ad doesn’t, there are also other differences between them.  The headlines are different, the key messages are different, one is longer than the other.  If what you really want to learn about is how effective a call to action is, you’re not going to get at it this way.  You might see a difference in performance, but you won’t be able to confidently attribute it to the factor you’re looking at.

And then you’ve gone and wasted time and effort by setting up a test that didn’t teach you anything.  Those happy scientists in the last slide?  That makes them cry.  EVERY TIME.

Don’t make the scientists cry.

Instead, make your ads largely the same except for what you want to test:

[Example: two “Monkey Chow” search ads, identical except for the call to action]

The first ad wins?  We say, “Yay! Calls to action work!  I will try this in other ads!”

This doesn’t mean, though, that all split tests must involve tiny little tweaks– it depends on what you’re testing.  If you’re designing banner ads and you really don’t have a sense for whether a playful overall effect works better than a serious one, you’ll be putting out two pretty different ads.  Once that larger question is answered you might move on to smaller changes of the concept, until eventually you’re changing background color or testing different pictures.  And then you might just want to test the whole concept again against another concept.

The key is to control non-test factors as much as possible, so that you feel confident that you are learning something real.

Next up: what to do when split tests aren’t possible.

(Photo credit: shar ka via Flickr)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20


Lean Your Marketing: Create A Hypothesis



Did you ever do science fair in school?  Creating that yearly experiment taught me the scientific method:

  1. Create a hypothesis
  2. Test it in a controlled way
  3. Gather results
  4. Draw conclusions that either confirm or invalidate the hypothesis

You’ll hear Lean Startup aficionados talk a lot about validated learning, and a key step in that is to first have something to validate– a hypothesis.  In the context of developing product, a hypothesis is basically that customers will like this service or that feature and will be willing to pay for it.  To test the hypothesis you develop and launch the feature and record whether it succeeds or not.

Test Why, Not Just Which

In the context of marketing, most people recognize that they should do some kind of testing of search and banner ads and the like.  However, often their method for doing this is to throw up two different search ads, promote the winner, and stop the loser– basically, they go right for steps 2 and 3 of the scientific method.  This is fine, and better than doing nothing at all, but your testing program will be even better if you add steps 1 and 4 into the mix.  Then you’re not just answering the question “Which ad is better?” but, more importantly, “Why is it better?”  You’re testing motivations for behavior, and that can enhance all your marketing.

For example, say you want to test the background color of two different banner ads.  If you just randomly pick blue and red, and blue wins, you’ve really just learned that blue is better.  Instead, if you were testing whether warm or cool colors work best, and blue (as a representative of cool) wins, you can further test different cool colors against each other, or test another warm/cool pair to see if it was the shade or the intensity.

With search ads, you don’t always have ad groups big enough to support a robust testing program– if you have 12 tightly-organized ad groups, for example, only 2 or 3 of them might have enough traffic to return results quickly.  If you just throw up different ads in those groups, you learn what works for those groups.  But if you test a theory such as “A call to action will increase response,” then once you validate that hypothesis you can add calls to action to the lower-volume groups.  You can also test more things at once– three different hypotheses in three different ad groups, learning in parallel.  You might find that a dynamic headline, a call to action, and at least one exclamation mark each work better in your search ads, and then combine them into one ad to see if that’s even better.

In short, creating a hypothesis before you test is a key part of learning, not just optimizing.

(Photo credit: peaceplusone via Flickr)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20


Lean Your Marketing: Everything’s Trackable



Some things lend themselves to easy tracking– paid search, for example, generates lots of data and, at least in the case of Google, runs it through a nice dashboard with lots of reports.  But what about other kinds of marketing?  Some may be harder to wrestle into shape, for sure.  But everything can at least be approached with a bit more discipline.

Tracking Digital Campaigns With Google Analytics

Google tracks a lot of stuff for you automatically– referral traffic vs. paid search vs. organic search, for example.  But you can take this even further and track a wide range of digital efforts by creating custom campaigns.

Now, when you say “create a custom campaign,” people tend to picture a place in Google Analytics where they click a “+” icon and enter details.  But in reality, creating a custom campaign in Google is both easier and harder than that.  Basically, all you need to do is append every URL you want to track with some parameters, and it will be tracked for you automatically.

Um, what’s a parameter?

If you already know what a URL parameter is, you can skip to the next section (sort of like a Choose Your Own Adventure).  If you don’t, read on!

A parameter is a part of a URL that passes information to the browser or to code within the page for tracking purposes.

[Diagram: the parts of a URL]

Parameters are separated from the first part of the URL by a question mark, and each parameter has both a name and value.  Different parameters are connected with an ampersand.  In this example, the page “example.html” would have some sort of code on it that would tell it to scan all incoming URLs for parameters called “source” and “type,” then store the value it finds– in this case, “google” for “source” and “banner” for “type.”
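(For the code-inclined, here’s what pulling those parameters back out looks like in Python– I’m assuming the full example URL from the diagram was something like the one below:)

    from urllib.parse import urlparse, parse_qs

    # Reconstructing the example URL from the diagram above (an assumption):
    url = "http://www.example.com/example.html?source=google&type=banner"
    params = parse_qs(urlparse(url).query)
    print(params["source"])  # ['google']
    print(params["type"])    # ['banner']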

What Parameters Does Google Analytics Track?

Any page that you have coded with your GA code will track up to five parameters.  From GA’s custom campaign page:

“We recommend you always use utm_source, utm_medium, and utm_campaign for every link you own to keep track of your referral traffic.  utm_term and utm_content can be used for tracking additional information:

  • utm_source: Identify the advertiser, site, publication, etc. that is sending traffic to your property, e.g. google, citysearch, newsletter4, billboard.
  • utm_medium: The advertising or marketing medium, e.g.: cpc, banner, email newsletter.
  • utm_campaign: The individual campaign name, slogan, promo code, etc. for a product.
  • utm_term: Identify paid search keywords. If you’re manually tagging paid keyword campaigns, you should also use utm_term to specify the keyword.
  • utm_content: Used to differentiate similar content, or links within the same ad. For example, if you have two call-to-action links within the same email message, you can use utm_content and set different values for each so you can tell which version is more effective”

So to track up to five different things for any one link, just add the variable name and a value to the end of your URL, putting a “?” before the first parameter and a “&” between each one after that.  Here’s how that might look:

http://www.marketingnerdistry.com/?utm_source=linkedin&utm_medium=outreach&utm_content=slides&utm_campaign=midwinter
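(If you’d rather have code assemble these than type them by hand, here’s a minimal Python sketch.  The helper name is mine, and it assumes the base URL doesn’t already have a query string.)

    from urllib.parse import urlencode

    def tag_url(base, source, medium, campaign, term=None, content=None):
        """Append Google Analytics custom campaign parameters to a URL."""
        params = {"utm_source": source, "utm_medium": medium,
                  "utm_campaign": campaign}
        if term:
            params["utm_term"] = term
        if content:
            params["utm_content"] = content
        return f"{base}?{urlencode(params)}"

    print(tag_url("http://www.marketingnerdistry.com/",
                  source="linkedin", medium="outreach",
                  campaign="midwinter", content="slides"))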

You can use these customized links on anything you might like to track– links in an outbound email, links to your site that you put on Twitter or Facebook, links on Slideshare presentations; anything that links back to your site can have custom campaign variables on it so that you can better see how people are getting to your site– and how your marketing is performing.

At one of the places I worked, our outbound email program didn’t track specifically what people clicked on.  So we tagged each link in our emails with different custom campaign parameters so we could see what content got a better response.  That told us there was one section of our newsletter that no one ever clicked on; we dropped that section and added more content people liked, driving up overall response.

Of course putting all those variables on a link can make it quite long; use a shortener like Bit.ly to make your links more Twitter-friendly.

How do you see results?

In GA’s standard reporting, look at the Traffic Sources report.  “All Traffic” gives you a report that defaults to a view of source/medium; you can also select source, medium, or “other,” which allows you to select campaign or content.

[Screenshot: the All Traffic report, with the source/medium selector]

But of course, who likes to stick with standard reporting when custom reports are at your fingertips?  You can set up a custom report in Google Analytics to look at any numbers you want (visitors, conversions, costs, etc) and to drill down by the factors you’re tracking: source, medium, etc.  This gives you the power to analyze exactly the data you wish.  If you’ve never used custom reports, log into your Google Analytics account and download this sample report I’ve prepared: GA Custom Report Parameter Tester.

This has only one number, unique visitors, but five levels of drill down.  Once you’ve copied it to your own GA account you can change it and play around with it to see how it works.

Track Your Tracking

The reason that creating custom campaigns by just sticking something on the URL is easier than setting up a campaign inside Google Analytics is that you can do anything you want, on the fly.  On the other hand, unless you are pretty careful it can get hard to remember what custom variables you have out there, because Google doesn’t track what you WANT to track, only what actually comes in.  Also, it can be a little tedious to build all these URLs by hand.

I usually handle this by creating a spreadsheet that tracks all the different custom URLs we are using; that way when we look at our Google report, if there is a source or a campaign we don’t know right away we can just check the spreadsheet as a reminder.  As an added benefit, the spreadsheet can be coded to create the links for you, which cuts down on user error, and you can also store the matching shortened link instead of creating it fresh each time.

I’ve created a sample tracking spreadsheet in Google Docs that you can copy and use: Google Analytics Custom Campaign Tracker

Okay, but what about stuff that you don’t control?  What about PR?

This is all well and good for links we are putting out into the universe on our own; but what about something like PR or blog outreach, where we just hope for a mention at all and can’t really ask them to use our custom-tagged URL?

I’ve worked at a couple of different places that were managing PR efforts, and let me tell you one metric that isn’t super useful: number of placements.  This, to me, falls squarely in the realm of bullshit metrics, because it doesn’t help you learn anything about your marketing efforts.  Instead, you should see which placements are driving traffic.  Here custom reports will help you again.  When you know you’ve gotten a placement, check out your source/medium report to see what the referring URLs from the placement look like.  Then create a custom report just for tracking your PR by using another key custom report feature: filters.

You can set up custom reports to filter for a whole range of things, but in this case we’ll have it filter by just the sources of our PR traffic by using regular expressions, aka regex.  (Don’t be scared, it’s easier than it sounds.)  Regular expressions mean that the filter won’t look only for things that exactly match what we give it; it will pick up similar things, too.  So if we tell it to look for traffic from wsj.com, it will also give us the traffic from m.wsj.com.  Set it up like this:

[Screenshot: a custom report filter matching source wsj.com]

Now, here’s where it gets a little tricky.  Next time you get a placement, you’ll want to add it to the report so you’re looking at your overall PR efforts.  However, for some reason the filters in GA are set up to only be AND, so if you try to add another filter it will look for traffic from wsj.com AND forbes.com.  Then your report will return nothing, because that doesn’t make sense.  You want it to look for traffic from wsj.com OR forbes.com.

The answer is to use a regex character, the pipe, which means the same thing as OR.  In other words, set up your filter like this:

[Screenshot: the filter using the pipe– wsj.com|forbes.com]

Any time a new source comes in, add another pipe and throw it on the report.
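(That pipe is doing exactly what it does in any regex– OR.  A quick Python demonstration; techcrunch.com is a made-up third source for illustration:)

    import re

    # One pattern, pipes between sources– the same OR logic as the GA filter.
    pr_sources = re.compile(r"wsj\.com|forbes\.com|techcrunch\.com")

    for referrer in ["m.wsj.com", "forbes.com", "facebook.com"]:
        print(referrer, "->", bool(pr_sources.search(referrer)))
    # m.wsj.com -> True   (regex matches substrings, so subdomains count)
    # forbes.com -> True
    # facebook.com -> False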

(Here is a good explanation of the regular expressions you can use with Google Analytics.)

Tracking PR like this may give you some surprises– you may be amazed at how much traffic you get from someplace you’ve never really heard of, while disappointed at how little you get from a Big Name Placement.  But surprises are the stuff of learning.  Form some hypotheses about why some placements are working better, then test it by going after more like that.  Soon your PR efforts will be humming.

Huh, long post. I could have done a whole presentation on just this slide, apparently!  Next up: Learning.

(Photo credit: Michael Kappel via Flickr)

Related:

Lean Your Marketing: The Slide Deck

Slide 1 | Slide 2 | Slide 3 | Slides 4, 5, 6 | Slide 7 | Slide 8 | Slides 9 & 10 | Slides 11 & 12 | Slide 13 | Slide 14 | Slide 15 | Slide 16 | Slide 17 | Slides 18, 19 & 20

