The problem with email benchmarks

By Karlyn Morissette - Wed, Dec 10, 2008

Email

Preface: So Kyle tells me it's been a month since I've posted.  I'll be honest - between conferences/live blogging, guest bloggers/contests and new writers, I've kind of been sitting back and letting the madness settle.  But to make up for it, here's a nice little 1,000 word post on email marketing that I'm sure no one will read all the way through.  Enjoy!

K

=============

Benchmarks, benchmarks, benchmarks.  It seems to be all I hear about from people when they start seriously venturing into email marketing.  "How do we compare to everyone else???"  It seems like a reasonable question.  And it would be easy for me to give you education-industry benchmarks to compare your stats against…but I just don't feel comfortable doing that.  In fact, I'm going to recommend that you IGNORE industry email marketing benchmarks.  Why? Because they are faulty on several levels:

They assume that all email marketing plans are created equal

When you look at email benchmarks, you inherently accept that really crappy email programs are going to be included in the results.  And in education, there’s a lot of REALLY bad email going on.  According to a recent paper by DemandEngine, colleges seem to ignore email best practices more than they practice them.

Now, if you're looking for benchmarks, I hope that you're not one of those colleges.  I hope you're the variety that has put a lot of thought and effort into your program.  But the reality is that you don't know what types of institutions these stats are pulled from.  Even if you do know the specific schools they come from, you don't know whether their plans are good or bad, as it's doubtful you'd be able to study them in depth.  Really, it's just a bunch of meaningless numbers.

They assume all audiences are created equal

I always recommend that people manage their expectations about what their email will achieve based on the specific audience they're mailing to.  Not all audiences are going to respond the same to your email campaigns, and once you lump them all together, the results will be completely skewed.

For example, in my office, once you make your annual gift, you stop being solicited by email.  Contrast that with an admissions office that actually ramps up email (at least it should) to students as they work their way through the funnel and become more interested.  Both are in the education industry. One is only targeting people who have been non-responsive to previous requests.  The other is (or, again, should be) primarily targeting people who have demonstrated responsiveness.  Comparing these stats is like comparing apples to oranges.  At face value, you can predict that the admissions audience is going to be more responsive since it has been in the past, whereas the fundraising audience is going to be terribly unresponsive since nothing has worked with them yet.  Meshing them together to create an arbitrary benchmark just makes a huge mess.  Additionally, it's rare that you'd be privy to the actual makeup of the benchmarks and what types of communications they're sending, which complicates the problem further.

They assume all reporting is created equal

If someone told you that their average open rate was 70%, you'd probably be impressed.  But when someone gives me a stat that impressive, a red, flashing warning light goes off in my head.  It's not that the person reporting that number is lying - they firmly believe it.  The problem is that their tool is lying to them.

Something many beginning email marketers don’t realize is that there’s a way to track both the unique number of times an email you sent is opened and the total number of times that email is opened: 

  • Unique opens: The number of unique recipients who opened one of your emails.
  • Total opens: The total number of times an email you sent has been opened.

So say a user gets one of your emails and it's so awesome that he goes back to it five times.  That counts as one unique open, but five total opens.  The same applies to click-throughs.  The problem is that there are email tools out there that don't differentiate between the two, so their users have no idea how their rates are calculated.  I've seen more than a few of these tools use the total number of opens to calculate the open rate.  This is a problem, since open rates are traditionally calculated using unique opens (again, same with click rates).
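To make the distinction concrete, here's a minimal sketch (the open-event log and the delivered count are made up for illustration - no particular email tool's data model is assumed):

```python
# Hypothetical open-event log: one entry each time a tracking pixel fires,
# labeled by recipient. "alice" keeps coming back to the same email.
open_events = ["alice", "bob", "alice", "carol", "alice", "alice", "alice"]

total_opens = len(open_events)        # counts every open: 7
unique_opens = len(set(open_events))  # counts each recipient once: 3

delivered = 10  # messages that actually reached an inbox

# The conventional open rate uses unique opens...
open_rate = unique_opens / delivered      # 3/10 = 30%
# ...but a tool that divides total opens by delivered reports something rosier.
inflated_rate = total_opens / delivered   # 7/10 = 70%
```

A tool quietly reporting that second number is exactly how a "70% open rate" ends up in someone's presentation.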

Additionally, I've stumbled across more than a few email marketers who don't accurately calculate their click-through rates, instead doing the calculation for click-to-open rates:

  • Click-through rate: Your unique click-throughs divided by the number of messages that were delivered.
  • Click-to-open rate: Your unique click-throughs divided by the unique number of messages that were opened.

Oftentimes, I see people calling a click-to-open rate a click-through rate (I'm pointing at you, Brad J. Ward!).  It's an unintentional mistake, but when it happens it amounts to comparing apples to oranges.  Personally, I believe that click-to-open is a much better rate to look at, but we need to make sure we're all using the same nomenclature so we understand what we're comparing.
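With some illustrative numbers (hypothetical, chosen only to show the formulas), the gap between the two rates is easy to see:

```python
# Hypothetical campaign numbers, for illustration only.
delivered = 1000     # messages delivered
unique_opens = 250   # recipients who opened at least once
unique_clicks = 50   # recipients who clicked at least once

# Click-through rate: unique clicks against everything delivered.
click_through_rate = unique_clicks / delivered     # 50/1000 = 5%

# Click-to-open rate: unique clicks against openers only.
click_to_open_rate = unique_clicks / unique_opens  # 50/250 = 20%
```

Reporting that 20% figure as a "click-through rate" quadruples the apparent performance of this (made-up) campaign - which is the apples-to-oranges problem in a nutshell.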

So what do you do?

Instead of comparing your program to generic and meaningless benchmarks, start by looking at your own data.  Assess the numbers you’re hitting right now (whether you think they’re good or bad) and put mechanisms in place to periodically track them.

Next, take a hard look at your email plan.  Is it perfect?  I think if you're honest, you'll say there are areas you could improve.  What do you think your rates could be if you made improvements?  Work towards that goal.  Marketing is about continual assessment and refinement.  If you aren't meeting your goals, then what other schools are doing really doesn't matter.  On the other hand, if you are satisfied with your results, do benchmarks really matter?

I'll leave you with one last thought: benchmarks are averages.  Are you really striving for your email program to just be average?  I find that oftentimes when you're looking at benchmarks, average suddenly becomes good enough for the director-types.  You should always be striving to be better than that, and if you're putting together a solid email plan, you will achieve more than an arbitrary open or click rate will tell you.







This post was written by:

Karlyn Morissette

Karlyn Morissette - who has written 45 posts on .eduGuru

Karlyn Morissette is a thought leader and innovator in higher education. With over 12 years of web experience (half spent working exclusively on higher education web marketing initiatives), she helped pioneer many of the web strategies considered best practice today.

Today as the Director of Marketing Communications at Fire Engine RED, Karlyn works with colleges around the world to execute integrated marketing campaigns as a part of student search. She also teaches courses on Internet marketing and strategy at Champlain College as adjunct faculty. She holds a Bachelor of Science in Communication from Boston University and a Master of Business Administration from Norwich University.

To quote a friend of hers: "Karlyn is a super rad ninja marketing genius who will make your target demographic submit to your every whim through sheer willpower. Oh, and she's smarter than you."  We're not sure about the smarter part, but "super rad ninja" is true enough.

Compulsory disclaimer: The views expressed in Karlyn's posts are hers and hers alone, and do not represent those of any company she's affiliated with. Yes, it's true - the girl has a mind of her own. 




6 Responses to “The problem with email benchmarks”

  1. Bradjward Says:

    Thanks for this post and for correcting me on the CTR. I will get it right next time, promise. :)


  2. Karlyn Says:

    hahaha, you know i’m just giving you a hard time ;-)



  3. Paul Prewitt Says:

    Let's not forget the (trackable) S2F numbers. I've seen a few companies lump them in with the original target list… I hope you can see how that is poor at best.

    This number is a great assessment tool for tracking the "viral" effect of your campaign. After all, we can pitch till we're blue in the face, but the brand promoters are where the big numbers come from.

    The world is social and people are more likely to do something if a friend/family member suggests it rather than a company (yes Higher Ed = a company).


  4. Scott Hardigree Says:

    Here are a few more free email benchmark studies to throw into the mix: https://emailcritic.com/2009/09/29/how-does-your-email-marketing-program-stack-up/

