Lesson #9

Testing and Optimizing Your Lead Nurturing Campaign

Welcome back to our Lead Nurturing Masterclass. This lesson will focus on how to optimize your campaign after it is launched because a lead nurturing campaign should never be considered finished. Quite the contrary actually—a campaign should be in a constant optimization process that ensures it does not fall victim to viewer fatigue, changing industry trends, and shifts in your target personas. So if you want the best results, testing and tweaking your lead nurturing campaign post-launch is critical.

Lesson #9 overview



Digging for Data: Backbone of the Optimization Process

An actionable optimization strategy starts with cold, hard data. But what kind of data is available, and how can it be systematically and efficiently collected? We begin with a distinction between two main varieties: quantitative and qualitative.

 

What is Quantitative Data? 

Quantitative data, simply put, is characterized by structured, often numerical criteria that can be analyzed with statistical methods. This type of data focuses on the “what,” “when,” and “how” of customer and lead behavior and is often collected through research methods like surveys and experiments, to which a set of procedures can be applied systematically.

Two main types of quantitative variables are continuous and discrete. Continuous variables can take any value (including decimals) over a range and are measured in units like hours and seconds, fractions of a dollar, or percentage rates. Discrete variables, on the other hand, are generally counts of things that can only take whole-number values, such as referrals, site views, new customers, or rating-scale responses in surveys.

Some examples of potential insights provided by quantitative research include:

  • Where website visitors enter and exit your site
  • How long visitors tend to spend on particular pages 
  • Which pages are most popular
  • How page performance has been increasing or decreasing over time
  • Email marketing rates (click-throughs, unsubscribes, etc.)

How to Collect Quantitative Data 

KPI Analytics 

KPIs, or key performance indicators, tend to be best suited for quantitative data research. They are, in a sense, the most straightforward way to collect this type of data, as they can be tracked and statistically analyzed directly—with the help of the right software of course. They are also more directly relevant than other methods to testing for lead nurturing, as they essentially provide variables for the tests. 

A large variety of tools are at marketers’ disposal for data collection and compilation, suited to every budget and company type. From Google Analytics to Crazy Egg to KissMetrics, the range of features and options available can be overwhelming, so time should be taken to make a well-researched decision about the option that fits your company best. 

Heat Maps and Click Analysis 

Heat maps are visual representations of data that provide information on how site visitors interact with specific pages and elements on your website or emails. A color or monochrome gradient is used to distinguish areas of higher activity. Examples of variations on heat maps include hover maps, scroll maps, mouse movement maps, attention maps, and more. 

Click maps, similarly, demonstrate which page elements are attracting the most attention and are actually being clicked on, such as CTA buttons, links, and interactive features. Outside of analytics that track metrics like click-through rates, supplemental click analysis can help to identify which site elements are being visually perceived by viewers as clickable or click-worthy.  

 


Surveys 

Surveys can come in the form of a single question or a series of questions. They can be embedded directly on a page of your website as a pop-up, or sent out as a full survey via email. In the case of quantitative data, it is important to create surveys with the right type of variables in mind, most often discrete. The nature of the questions in the survey, and the answer choices available, should reflect this.

Consider examples such as: 

  • Radio buttons - Respondents select one option from a list of possible choices
  • Checkboxes - Allows respondents to choose multiple options from a list
  • Slider scales - Responses range either numerically or categorically (e.g., “extremely unlikely” to “extremely likely”)
  • Net promoter scores (NPS) - An index ranging from -100 to 100 that measures customers’ willingness to recommend a company to others (see the sketch after this list)
  • Star rankings - Ranking is indicated by stars (usually 1-5), with more stars indicating higher quality
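
For illustration, here is a minimal Python sketch of how an NPS is calculated from 0-10 survey responses. The scoring thresholds follow the standard NPS convention (9-10 promoters, 7-8 passives, 0-6 detractors); the response data itself is hypothetical.

# Minimal sketch: Net Promoter Score from 0-10 survey responses.
# Scores 9-10 count as promoters, 7-8 as passives, 0-6 as detractors.
# The response data below is hypothetical.
responses = [10, 9, 9, 8, 7, 10, 6, 5, 9, 10, 8, 3, 9, 7, 10]

promoters = sum(1 for r in responses if r >= 9)
detractors = sum(1 for r in responses if r <= 6)

nps = (promoters - detractors) / len(responses) * 100
print(f"NPS: {nps:.0f} (index ranges from -100 to 100)")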

What is Qualitative Data? 

Qualitative data is unstructured information that is intended to be analyzed subjectively. Unlike quantitative research, qualitative research is exploratory and focuses on the “why,” aiming to gain insight about a particular topic. It can also help answer questions about the results of quantitative research, providing explanations and illustrations that bring clarity and direction to your next business move.

This data is commonly collected through focus groups, surveys, and observation. (Note that surveys can be applied to both quantitative and qualitative data, depending on the questions asked and how the results are analyzed.)

Two of the most popular methods for gathering qualitative data are interviews, which allow for direct communication with customers about where optimization strategy may be lacking, and customer surveys, which gather feedback on a larger scale. Examples of insights which qualitative data can provide include:

  • What kind of problems leads are facing
  • What their motivations and goals are
  • What kind of experience visitors have on your company site
  • Where customers’ pain points have been, both in and outside experiences with your company

How to Collect Qualitative Data

Open-Ended Surveys

Open-ended surveys involve asking open-ended questions, which means minimizing multiple-choice options and instead requesting descriptive, essay-style responses. Because the goals of qualitative surveys are broader, and the results more open to interpretation, it can be difficult to determine which types of questions to include.

Fortunately, there are several universally applicable key topics to cover, in addition to more specific questions that may be relevant to your company’s industry, products, or services. These include: 

  • The customer’s intentions
  • The user experience and the behaviors customers engaged in throughout the nurturing and buying process 
  • Points of friction experienced 
  • The identities of the customers

In-Person Interviews 

Interviews are one of the most flexible ways to gather data, while also building trust and strengthening relationships with respondents. Making the process more “human” can elicit more authentic and insightful responses. In addition, in-person conversations can yield information indirectly through the interviewees’ tone of voice and body language.

It’s also possible to put an existing feedback system—customer help chats and phone conversations—to use. If your software keeps accessible logs of these, they can prove to be a small goldmine of insight into common complaints, usability of site features, areas of highest satisfaction, and more.


Reviews and Secondary Feedback Sources 

Last but not least, another example of the potential of recycled content in data collection is the use of existing customer reviews and secondary channels that allow for client feedback. These include blog pages, social media platforms, and forums. Reviews can offer insights not only into customers’ experience with your products, but with the company itself. 

If you have been observing a consistently low turnout of comments and reviews, consider incentivizing customers with something like a rewards point scheme, or at least simplifying the response process. While it will not always fit neatly into your data-collection objectives, customer feedback makes the best and worst parts of customers’ experiences with your company loud and clear.

What to Optimize and Test in Your Lead Nurturing Campaign 

To evaluate the areas in which a lead nurturing campaign can be optimized, let’s review the key components of lead nurturing:  

  • Lead scores and segments 
  • Email marketing campaigns
  • Landing pages and CTAs  
  • Nurturing content pieces
  • Social media and other channels of outreach
  • Workflows 

Each of these components has specific metrics that can be analyzed to assess its performance and, likewise, be the focus of our optimization efforts. But how can we figure out which parts need optimization? The first step is to compare your observed metric to an established benchmark.

Sources for benchmarks include:

  1. Historical performance (best option)
  2. Industry benchmarks 
  3. Campaign goals

Once we have decided what we are comparing our rates to, we want to know whether the metric we observed is significantly better (or worse) than those benchmarks. Now how do we do that? Enter the chi-square goodness of fit test. You can learn how to use this test in the video below.
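
If you prefer to work through the numbers in code, here is a minimal sketch of the same idea in Python, using scipy’s chi-square goodness of fit test. The send count and benchmark open rate below are hypothetical placeholders; swap in your own observed metric and benchmark.

# Minimal sketch: chi-square goodness-of-fit test comparing an observed
# email open rate against a benchmark. All numbers are hypothetical.
from scipy.stats import chisquare

sends = 2000             # emails delivered in the period being evaluated
observed_opens = 380     # opens actually recorded
benchmark_rate = 0.22    # e.g., your historical (benchmark) open rate

observed = [observed_opens, sends - observed_opens]               # [opened, not opened]
expected = [sends * benchmark_rate, sends * (1 - benchmark_rate)]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The observed rate differs significantly from the benchmark.")
else:
    print("No significant difference detected at the 95% confidence level.")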

 

 

Once you have found your poor performers, you will need to prioritize your optimization activities because you can’t fix everything at once. We suggest applying a prioritization model, such as ICE (Impact, Confidence, Ease) or PIE (Potential, Importance, Ease), to your list to identify the highest-potential improvements first, as shown in the sketch below.
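
As a quick illustration, here is a minimal Python sketch of ICE scoring. The candidate improvements and their scores are hypothetical, and note that some teams multiply the three scores rather than averaging them; either convention works as long as you apply it consistently.

# Minimal sketch of ICE prioritization (Impact, Confidence, Ease), each
# scored 1-10. Items and scores below are hypothetical examples.
candidates = [
    {"item": "Rewrite nurture email #3 subject line",    "impact": 6, "confidence": 8, "ease": 9},
    {"item": "Shorten landing page form to 3 fields",    "impact": 8, "confidence": 7, "ease": 6},
    {"item": "Suppress unengaged leads from most sends", "impact": 7, "confidence": 6, "ease": 5},
]

for c in candidates:
    # Simple-average variant of the ICE score.
    c["ice"] = (c["impact"] + c["confidence"] + c["ease"]) / 3

# Work the list from the highest-scoring opportunity down.
for c in sorted(candidates, key=lambda c: c["ice"], reverse=True):
    print(f"{c['ice']:.1f}  {c['item']}")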

It can be helpful to begin by outlining a few key metrics by which to measure your lead nurturing as a whole. From here, you can move into determining more specific metrics for the pieces listed above. 

Top Lead Nurturing Metrics


Click-Through Rate 

This measures the proportion of the audience who click on one or more links contained in a message, on a page, etc. Click-through rates help businesses better understand the relevance and engagement of their content. Low rates can indicate problems in your ability to align content and offers with customers’ needs. 


Time to Customer Conversion/Sales Cycle Time 

The length of time it should take for a lead to become a customer can be a difficult measure to determine, though industry-specific averages can help. Regardless, the company’s main goal should be to continuously shorten the sales cycle while staying cognizant of realistic constraints, for example, that B2B companies inevitably tend toward longer cycles than B2C.


Customer Lifetime Value 

Customer lifetime value (LTV) measures how much a customer is worth to a company, capturing both the current value of the customer relationship and its growth potential over time. Taking both acquisition cost and return on investment into account can be useful when trying to decide how much to budget for acquiring new customers. An LTV that falls below your acquisition cost can indicate an inefficient acquisition approach, or misguided investment in the wrong lead segments. 
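
LTV can be estimated in several ways; here is a minimal Python sketch of one common simplified formula based on average order value, purchase frequency, customer lifespan, and gross margin, compared against acquisition cost. All of the figures are hypothetical.

# Minimal sketch of a simplified customer lifetime value (LTV) estimate.
# Formula variants differ by business model; all figures are hypothetical.
avg_order_value    = 120.0   # average revenue per purchase ($)
purchases_per_year = 4       # average purchase frequency
customer_lifespan  = 3       # expected years a customer stays active
gross_margin       = 0.60    # fraction of revenue kept after cost of goods
acquisition_cost   = 350.0   # average cost to acquire one customer (CAC)

ltv = avg_order_value * purchases_per_year * customer_lifespan * gross_margin
net_ltv = ltv - acquisition_cost

print(f"LTV: ${ltv:,.2f}   Net of acquisition cost: ${net_ltv:,.2f}")
print(f"LTV-to-CAC ratio: {ltv / acquisition_cost:.1f}")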


Conversion Rate 

This is perhaps the most important staple metric of lead nurturing optimization. Conversion is the process through which prospects are converted to leads, and/or leads are converted to customers. 

Thus, your conversion rate is the percentage of recipients who complete a desired action, designated as an indicator of their progress into the next phase. This could be a form submission, CTA click, or email open. To be more specific, “convertees” can be broken down into three categories: Leads, Marketing Qualified Leads, and Sales Qualified Leads.

Leads demonstrate some interest in your content, particularly top-of-funnel offers, and may fill out a form requesting their personal information. Marketing qualified leads (MQLs), meanwhile, engage more actively, taking interest in middle and perhaps even bottom-of-funnel offers. 

Finally, sales qualified leads (SQLs) have thoroughly interacted with your company and its offers and content. They are all but ready for a sales pitch! Lead nurturing focuses on the span of the conversion cycle, tracing the progress of leads to MQLs to SQLs. 

In B2C marketing, conversions can be relatively fast and simple, as they are connected to more instantaneous actions. In the B2B sales cycle, on the other hand, the conversion process requires longer investment and is more complex, based on a series of smaller conversions. Your conversion strategy will, of course, need to be optimized to best fit your company’s needs. 

The answer to what constitutes a “good” lead conversion rate is rather complex and subjective. Average rates vary between B2B and B2C, industries, individual companies, and more—numbers cited by marketing sources range from 2% to 10%. Industry-specific reports can be good resources for establishing ballpark benchmarks, but remember that consistent growth often matters more than surpassing a specific number.

Now, let’s dive into some specific lead nurturing elements including the most common metrics associated with each, ways to improve those metrics, and aspects to test.

 

Testing and Optimizing Emails 

In Lesson 5, we covered lead nurturing emails in detail, including testing strategies and the major goals you should be pursuing as you improve your campaigns. Here is a brief overview (you can also return to Lesson 5 for a more in-depth reference). 

Email Metrics

  • Delivery Rate
  • % Contacts Lost 
  • % Hard Bounces 
  • % Marked as Spam
  • % Unsubscribes
  • Open Rate
  • Click-through Rate 
  • Click to Open Rate
  • Clicks to Conversion Rate 
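
To make these concrete, here is a minimal Python sketch that computes most of the metrics above from raw counts. The counts are hypothetical, and exact definitions (for example, whether a rate is taken against sends or deliveries) vary between email platforms, so adjust the formulas to match your own tool’s reporting.

# Minimal sketch of common email metrics computed from raw counts.
# All counts are hypothetical; definitions vary slightly between platforms.
sent         = 5000
delivered    = 4820
hard_bounces = 110
spam_reports = 12
unsubscribes = 25
opens        = 1060   # unique opens
clicks       = 215    # unique clicks
conversions  = 43     # e.g., form submissions attributed to the email

metrics = {
    "Delivery rate":        delivered / sent,
    "% Hard bounces":       hard_bounces / sent,
    "% Marked as spam":     spam_reports / delivered,
    "% Unsubscribes":       unsubscribes / delivered,
    "Open rate":            opens / delivered,
    "Click-through rate":   clicks / delivered,
    "Click-to-open rate":   clicks / opens,
    "Clicks to conversion": conversions / clicks,
}

for name, value in metrics.items():
    print(f"{name:<22}{value:7.1%}")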

How to Improve Email Metrics

Email Deliverability 

Signs that you are experiencing issues with email deliverability include a high number of bounced emails and/or a high number of emails being marked as spam.

Ways to reduce your bounce rate include:

  1. Frequent list updates
  2. Removing hard bounced email contacts
  3. Sending emails from your own domain
  4. Ensuring a new contact’s email address was typed correctly in your database 

Ways to reduce your spam complaint rate include:

  1. Reducing the number of emails you are sending 
  2. Sending emails from your own domain for legitimacy
  3. Avoiding spammy subject lines
  4. Including straightforward unsubscribe options in all emails 
 
Email Open Rates

If your open rates are performing significantly worse than your benchmarks, here are some ways to improve them:


  1. Utilize email subject lines that are short, actionable, relevant, and pique a user’s curiosity. 
  2. Send your emails from a person (e.g., tammy@campaigncreators.com instead of info@campaigncreators.com). Emails that come from a human often have a higher open rate.
  3. Write persuasive “preview text” in order to give the recipient email context and improve open rate.
  4. Use personalization in subject lines to connect with your recipients and increase authenticity.

 

The most useful and critical method for increasing open rates is constant testing, testing, testing. That’s because what works for another audience may or may not work for yours. Examples of useful tests to run to raise open rates include: 

  1. Sender Tests - Sending emails from a brand name or an email address with company info in its name, from different representatives’ accounts, or from different email addresses (info@ vs. offers@ vs. tammy@).
  2. Subject Line - Length, word choice, content, offer type and placement, etc.
  3. Preview Text - Editing for length and content offers another opportunity to optimize the open rate of your emails.
  4. Personalization - Placing a first name or even a company name in the subject line, preview text, or greeting of your email.
  5. Time and Day - Determining optimal time and day of the week to send your email to a particular database.
  6. Segmentation - Exploring numerous ways to create unique segments in your database.
  7. Suppressions - Suppressing the segment of unengaged subscribers from the majority of email sends. 

 

Email Click-Through Rates

If your click-through, click to open, and/or clicks to conversion rate is underperforming, here are some tips for improving these metrics: 


  1. Include a signature from a real person as well as their contact information with your email. This will improve trust, click-through rate, and email engagement.
  2. Include bullet points, bolded text, and white-space to improve email skim-ability and the reader’s understanding of the email’s core message.
  3. Include an image of your offer to give the reader a tangible connection to whatever it is you are trying to promote.
  4. Include alt text with your images to ensure that recipients who have switched off images in their email provider still understand what value they provide and are encouraged to click.
  5. Ensure each email has a single objective to improve clarity, decrease cognitive load, and increase click-through rate.
  6. Include various CTA styles in your emails. Use a healthy mix of buttons, images, hyperlinks, and contextual links that lead to the same landing page.
  7. Leverage an email testing tool to see how emails render in different email platforms (e.g., Outlook, Gmail, Apple Mail).

Again the best way to determine what works best for your audience is to test it. There are several test variables associated with click-through rates including:

  1. Design - Tweaking elements including the header banner, columns, and heavily designed vs. stripped-down layouts (see Lesson 5 for a standard example format).
  2. Text-to-Image Ratio - Keep in mind the general rule that higher ratios of text to images tend to produce better-performing emails.
  3. Length - Moderating the length of your emails and optimizing the design of longer ones to guide readers who are short on attention or time.
  4. CTAs - Type, color, placement, number on a page, etc.
  5. Social Sharing - Deciding whether it is best to include social sharing links to engage your audience.
     

Testing and Optimizing Landing Pages

Landing pages are a primary way of delivering your offers or hook to leads, and are the way in which you collect a visitor's contact information—so optimizing their performance is time well spent.

Landing Page Metrics

  • Form submission rate
  • Form submission by source
  • New contacts rate 
  • Average page loading time

Improving Landing Page Metrics

If your landing pages are not converting at an acceptable rate compared to benchmarks, here are a few tips for improving them:

  1. Ensure your form is above the fold when viewed on ANY device. Landing pages should be mobile, tablet, and desktop friendly. 
  2. Be certain the number of required form fields is in line with the offer you are providing. For a top-of-the-funnel offering, simply requesting an email should do. 
  3. If your landing page feels long, try including more than one form or CTA on the page. The user might forget about a form at the top after scrolling for a minute or two.  
  4. Add a short video to explain complex offers. PRO TIP: Try using a turnstile (embedded form) halfway through your video to boost video conversion rate. 
  5. Create urgency by adding words on the page that will encourage your viewers to act sooner (on this visit) rather than later. Phrases such as “Limited Time Offer” or “Exclusive Deal” can achieve this. 
  6. Figure out what your desired landing page action is and reinforce it numerous times with your CTA and supporting copy. 
  7. Add visual and directional cues towards a part of your landing page, such as your form, that you would like to highlight. This will draw a visitor’s eyes and make sure they complete your desired action.
  8. To speed up your page, scale your images appropriately for the space they occupy, minify your CSS files, enable browser caching, turn on compression at the server level, delete or deactivate any website plugins that are no longer in use, and reduce the number of redirects associated with your site. Redirects generate extra HTTP requests and increase load time.

The best way to determine what works to convert your landing page visitors is to test it. There are several test variables associated with landing pages including:

  1. Main headline
  2. Main image (size, placement, people vs. no people, icon vs. photo)
  3. Button (Text, placement, design, size, color)
  4. Form (Length, color, layout, placement)
  5. Long copy vs. short copy
  6. Header image/banner vs. none
  7. Bullets lists vs. paragraph copy
  8. Testimonials vs. none
  9. Call-to-action

Testing and Optimizing for Social Media Platforms and Blogs 

Social and Blog Metrics

Social media and blog metrics are fairly straightforward, and in some regards, easier to track than those of other campaign elements.

These metrics include:

  • Post engagement rate 
  • Average post reach
  • Click-through rate
  • Share rate
  • Clicks to conversion

Improving Social Metrics

The best way to find out what your audience will respond to (and what the social algorithms will like) is to test variables like:

  1. Headlines and copy
  2. Posting time
  3. Posting frequency
  4. Type (image, link, text, video, audio)
  5. Content (question, opinion, news, stats, quote, poll, tip, etc.)
  6. Calls to action
  7. Hashtags

Improving Blog Metrics

Tips for improving the conversion rates of blogs include:

  1. Use a mix of buttons and in-text hyperlinks.
  2. Write your blogs with your persona in mind first, and Google second.
  3. Choose the right font: large and easy to read. This will keep your visitors from getting frustrated and leaving the page. 
  4. Make sure your blog is responsive when viewed on mobile.
  5. Only insert links to your campaign in blog posts that are relevant to that campaign.
  6. Match the tone and style of your blog, CTA, and landing page language. This will provide consistency and boost conversion rates.

You can also try testing the following elements on your blog:

  1. Headline
  2. H2’s (sub-headings)
  3. Font (size, typeface, and color)
  4. Images along with their frequency and placement
  5. Call-to-action (position, color, text)
  6. Adding multimedia like audio or video

Testing and Optimizing Workflows

In a lead nurturing campaign, workflows are used to tie the pieces of your campaign together in order to progress your lead from one stage of their journey to the next. The key metric associated with your workflows is goal completion rate: the percentage of contacts enrolled in your workflow who met the designated goal.

To improve the performance of a lead nurturing workflow you can test:

  • Wait time between actions
  • Enrollment and suppression criteria (who is included in the workflow)
  • Number of communications (adding or subtracting)
  • Offer/content of communication
  • Form of communication (e.g., swapping an email for an SMS message)

How to Test Your Lead Nurturing Campaign 

Now that we know what we want to optimize and test, it’s time to actually do it!

 

A/B Testing for Lead Nurturing Campaigns

A/B testing is the cornerstone of lead nurturing optimization. Also known as split testing, A/B testing allows you to test variations of an element of a campaign alongside one another. The results will enable you to determine which version is the most effective option. Standard A/B testing begins with creating two versions of a piece of content, which are then randomly presented to similarly sized audiences.

The responsiveness and conversion rates of the test groups are recorded and analyzed with testing and/or analytics software, which often offers testing tools in tandem with tracking and analysis of metrics and KPIs. 

But as our friend Peep Laja from CXL said,

“Using an A/B testing tool does not make you an optimizer. Using a scalpel does not make you a surgeon.” - Peep Laja (@peeplaja), March 10, 2014 

 

The steps for conducting an A/B test on your lead nurturing campaign are:

  1. Formulate a hypothesis.
  2. Create a ‘control’ and a ‘variation’.
  3. Determine sample size and test duration.
  4. Go forth and test.
  5. Evaluate the results.
  6. Decide what to do with a ‘failed’ test.

Now, let’s break each step down in greater detail.

1. Formulate a Hypothesis

The key to finding success with A/B testing is having solid hypotheses. A hypothesis is a prediction you create prior to running a test. It states clearly what is being changed, what you believe the outcome will be, and why you think that’s the case. Running the experiment will either prove or disprove your hypothesis. 

A complete hypothesis has three parts—the variable, desired result, and rationale—which should be researched, drafted, and documented prior to building and setting an A/B test live.

So a hypothesis is essentially a change and effect statement that often follows a simple established formula:

“If [variable], then [result], because [rationale].” 

Let’s break these elements down a bit more.

The Variable

This is an element that can be modified, added, or taken away to produce a desired outcome.

As with the scientific method, you want to isolate one "independent variable" (i.e., element) to test. If you want to test multiple aspects at once, you will need to deploy multivariate testing.

The Result

The predicted outcome. Essentially you need to choose the "dependent variable” for your test and how you expect it to change.  As discussed above, a number of conversion metrics can be relevant to every component in a campaign. Take time to find the indicators most relevant to the specific piece being tested. This could be more landing page conversions, clicks or taps on a button, email opens, or another KPI/metric you are trying to affect.

The Rationale

The last part of a hypothesis is the “why”. This demonstrates that you have informed your hypothesis with research. What do you know about your visitors from your qualitative and quantitative research that indicates your hypothesis is correct?

A thoroughly researched hypothesis doesn’t guarantee a winning test. What it does guarantee is a learning opportunity, no matter the outcome (winner, loser, or inconclusive experiment).

Another consideration is the desired statistical significance of your results. Setting your confidence level to a higher percentage (say, 95% instead of 90%) reduces the risk of acting on a false positive, but it also requires a larger sample to reach a conclusion.

 

2. Create a 'control' and a 'variation.'

You now have your independent variable, your dependent variable, and your predicted outcome. Use this information to set up the unaltered version of whatever you're testing as your "control". If you're testing a web page, this is the unaltered web page as it exists already. If you're testing an email subject line, this would be the subject line copy you are already using.

From there, build your variation—the website, landing page, or email you’ll test against your control. For example, if you're wondering whether including an emoji in your subject line will increase open rates, set up your control email with no emojis in the subject. Then, create your variation email with an emoji in the subject line. 

3. Determine Your Sample Size and Test Duration

Your sample size depends on three factors: 

  • Baseline conversion rate - Your control group's expected conversion rate.
  • Minimum detectable effect - The minimum relative change in conversion rate you would like to be able to detect.
  • Statistical significance - The confidence threshold you require before declaring a result significant, based on your risk tolerance. Demanding higher significance gives greater certainty in your results, but it takes a larger sample to detect the same difference.

There are a number of sample size calculators available that will determine the sample size you need per variation based on these three factors.

For A/B testing emails, you just need to ensure that each variation is sent to the calculated sample size. For landing pages and website A/B testing, you'll translate sample size into the estimated time you need to run your test with two calculations:

Calculation #1: Sample size  × 2 = Total # of visitors needed

Calculation #2: Total # visitors needed ÷ Average # of visitors per day = Test duration (in days)
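
If you want to see this math end to end, here is a minimal Python sketch that estimates the per-variation sample size with statsmodels and then applies Calculation #1 and Calculation #2. The baseline rate, minimum detectable effect, and traffic figures are hypothetical.

# Minimal sketch of A/B test sample size and duration estimation.
# Inputs below are hypothetical; plug in your own baseline and traffic.
from math import ceil
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate    = 0.05   # control group's expected conversion rate
min_effect       = 0.20   # minimum detectable effect: a 20% relative lift
alpha            = 0.05   # significance level (95% confidence)
power            = 0.80   # probability of detecting the lift if it exists
visitors_per_day = 400    # average daily traffic to the page being tested

target_rate = baseline_rate * (1 + min_effect)
effect_size = abs(proportion_effectsize(baseline_rate, target_rate))

sample_per_variation = ceil(NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power,
    ratio=1.0, alternative="two-sided"))

total_visitors = sample_per_variation * 2                    # Calculation #1
duration_days  = ceil(total_visitors / visitors_per_day)     # Calculation #2

print(f"Sample size per variation: {sample_per_variation}")
print(f"Total visitors needed:     {total_visitors}")
print(f"Estimated test duration:   {duration_days} days")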

 

4. Go Forth and Test 

Though your variations should be tested simultaneously, there is nothing wrong with selecting testing times strategically. For instance, well-timed email campaigns will deliver results more quickly. Determining these times requires some research into your subscriber segments. As mentioned, depending on the nature of the piece, your site traffic, and the statistical significance that needs to be achieved, the test could take anywhere from a few hours to a few weeks. 

If you are interested in gaining some additional insight into the reasoning behind your visitors’ reactions, consider asking for qualitative feedback. Exit surveys and polls can quite easily be added to site pages for the duration of the testing period. This information can add value and efficiency to your results.  

 


 

5. Evaluate the Results 

Using your pre-established hypothesis and key metrics, it's time to interpret your findings. Keeping confidence levels in mind as well, it will be necessary to determine statistical significance with the help of your testing tool or another calculator. If one variation proves statistically better than the other, congratulations! You can now take action appropriately to optimize the campaign piece. 
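
If your testing tool does not report significance directly, here is a minimal Python sketch of one common approach: a chi-square test on the conversion counts of the control and the variation. The visitor and conversion counts are hypothetical.

# Minimal sketch: checking whether a variation beat the control using a
# chi-square test of independence on conversion counts (hypothetical data).
import numpy as np
from scipy.stats import chi2_contingency

control_visitors,   control_conversions   = 4100, 189
variation_visitors, variation_conversions = 4080, 241

table = np.array([
    [control_conversions,   control_visitors   - control_conversions],
    [variation_conversions, variation_visitors - variation_conversions],
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Control conversion rate:   {control_conversions / control_visitors:.2%}")
print(f"Variation conversion rate: {variation_conversions / variation_visitors:.2%}")
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at 95% confidence.")
else:
    print("The result is inconclusive at 95% confidence.")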

 

6. In Case of “Test Failure” 

If your test failed to achieve a statistically significant result—that is, the test was inconclusive—several options are available. 

For one, it can be reasonable to simply keep the original variation in place. You may also choose to reconsider your significance level or re-prioritize certain metrics. Finally, your next test may call for a more dramatically different variation, or a larger sample to give the test more statistical power. 

Most importantly, if your A/B test “failed”, do not be afraid to try again. After all, the adage “practice makes perfect” fully applies to testing methods. 

 

A/B vs. Multivariate Testing

Multivariate testing is founded on the same key principle as its A/B counterpart. The difference is the larger number of variables being tested. The goal is to determine which particular combination of variations performs best, examining the “convertibility” of each variation in the context of the other variables rather than as a standalone change. In many ways, it can be a more sophisticated practice. 

This type of testing is a great way to examine more complex relationships between optimizable elements. In theory, it is possible to test hundreds of combinations side by side! Notably, multivariate tests have their disadvantages, particularly with regard to the greater amount of time and number of site visitors needed to conduct them effectively.

 

Testing for B2B vs. B2C Lead Nurturing 

It is important to note that the significant differences between your B2B and B2C nurturing campaigns extend into optimization and testing. This means you will need to tailor your strategies to your sales cycle lengths and database sizes (as well as, of course, other factors relevant to your unique campaigns).

B2B 


With longer sales cycles, B2B marketing often takes quite a while to collect enough data to gain insights into many optimization opportunities, and the smaller databases often lack sufficient data for more complicated testing, like multivariate testing. 

However, it is entirely possible to begin with simpler testing, which is why A/B testing is perfect for this type of campaign. Page layouts, email subject lines, and offer types can all be tailored for maximum effectiveness, provided you put in the time to gather enough data to determine whether, and which, changes are making an impact.


B2C

B2C marketing’s shorter sales cycles are conducive to fast-paced data collection to gain insights into many optimization opportunities. The flip side of this is that constant optimization is a virtual must to keep up with the (occasionally intimidating) pace of the market.

Larger customer databases also open the door for more complex testing and iterations. While there is still room for A/B testing in a B2C campaign, it is more efficient to incorporate multivariate testing into your approach. 

As you can see, the launch of a lead nurturing campaign is only the beginning. If you want to experience success, you have to continually seek opportunities for improvement.  As Tom Peters aptly pointed out,

“Excellent firms don’t believe in excellence - only in constant improvement and constant change.”

-Tom Peters

Feel like you have a better understanding of testing and optimizing your lead nurturing campaigns? Take your learning to the next level by applying your knowledge in the Lesson 9 Exercise below.

 


 

Lesson 9 Exercise

Identify and Prioritize Optimization Opportunities

Select one element of your lead nurturing campaign to evaluate for optimization (e.g., an email, landing page, blog post, social post, or workflow). Under this lesson’s section What to Optimize and Test in Your Lead Nurturing Campaign, find the subsection dedicated to your chosen campaign element. Use it to guide you through the following steps:

  1. Gather the observed metrics listed for that element.
  2. Determine your benchmarks for each metric.
  3. Compare your observed metrics to your benchmarks.
  4. If this element is underperforming, create a list of optimization activities or tests you could run to improve this struggling metric.
  5. Prioritize your list using a prioritization model such as PIE or ICE.

 

Next up in Lesson 10 of the Lead Nurturing Masterclass, we’ll wrap things up with the best practices for a winning lead nurturing strategy.

Download the print copy to take the Lead Nurturing Masterclass with you!