Welcome back to our Lead Nurturing Masterclass. This lesson will focus on how to optimize your campaign after it is launched because a lead nurturing campaign should never be considered finished. Quite the contrary actually—a campaign should be in a constant optimization process that ensures it does not fall victim to viewer fatigue, changing industry trends, and shifts in your target personas. So if you want the best results, testing and tweaking your lead nurturing campaign post-launch is critical.
An actionable optimization strategy starts with cold, hard data. But what kind of data is available, and how can it be systematically and efficiently collected? We begin with a distinction between two main varieties: quantitative and qualitative.
Quantitative data, simply put, is characterized by structured, often numerical criteria that can be analyzed with statistical methods. This type of data focuses on the “what,” “when,” and “how” of customer and lead behavior and is often collected through research methods like surveys and experiments, to which a set of procedures can be applied systematically.
Two main types of quantitative variables are continuous and discrete. Continuous variables can take any value (including decimals) over a range, and are measured in units like hours and seconds, fractions of a dollar, or percentage rates. On the other hand, discrete variables are generally counts of things that can only take a whole value like referrals, site views, new customers, or rating scales in surveys.
Some examples of potential insights provided by quantitative research include:
KPIs, or key performance indicators, tend to be best suited for quantitative data research. They are, in a sense, the most straightforward way to collect this type of data, as they can be tracked and statistically analyzed directly—with the help of the right software of course. They are also more directly relevant than other methods to testing for lead nurturing, as they essentially provide variables for the tests.
A large variety of tools are at marketers’ disposal for data collection and compilation, suited to every budget and company type. From Google Analytics to Crazy Egg to KissMetrics, the range of features and options available can be overwhelming, so time should be taken to make a well-researched decision about the option that fits your company best.
Heat maps are visual representations of data that provide information on how site visitors interact with specific pages and elements on your website or emails. A color or monochrome gradient is used to distinguish areas of higher activity. Examples of variations on heat maps include hover maps, scroll maps, mouse movement maps, attention maps, and more.
Click maps, similarly, demonstrate which page elements are attracting the most attention and are actually being clicked on, such as CTA buttons, links, and interactive features. Outside of analytics that track metrics like click-through rates, supplemental click analysis can help to identify which site elements are being visually perceived by viewers as clickable or click-worthy.
Surveys can come in the form of a single question or a series of questions. They can be embedded as a pop-up directly on a page of your website, or sent out in full via email. In the case of quantitative data, it is important to create surveys with the right type of variables in mind, most often discrete. The nature of the questions in the survey, and the answer choices available, should reflect this.
Consider examples such as:
Qualitative data is more unstructured information that is intended to be analyzed subjectively. Unlike quantitative research, qualitative research is exploratory and focuses on the “why”, aiming to gain insight about a particular topic. It can also help answer questions about the results of quantitative research, providing explanations and illustrations that bring clarity and direction to your next business move.
This data is commonly collected through focus groups, surveys, and observation. (Note that surveys can be applied to both quantitative and qualitative data, depending on the questions asked and how the results are analyzed.)
Two of the most popular methods for gathering qualitative data are interviews, which allow for direct communication with customers about where optimization strategy may be lacking, and customer surveys, which gather feedback on a larger scale. Examples of insights which qualitative data can provide include:
Open-ended surveys involve asking open-ended questions, which means minimizing multiple-choice options and instead requesting descriptive, essay format responses. As the goals for qualitative surveys are broader, and the results more ambiguous in terms of interpretation, it can seem difficult to determine what type of questions to include.
Fortunately, there are several universally applicable key topics to cover, in addition to more specific questions that may be relevant to your company’s industry, products, or services. These include:
Interviews are one of the most flexible ways to gather data, while additionally gaining trust and strengthening a relationship with the respondents. By making the process more “human”, it may be easier to receive more authentic and insightful responses. In addition, in-person conversations can yield information indirectly through the interviewees’ tone of voice and body language.
It’s also possible to put an existing feedback system—customer help chats and phone conversations—to use. If your software keeps accessible logs of these, they can prove to be a small goldmine of insight into common complaints, usability of site features, areas of highest satisfaction, and more.
Last but not least, another example of the potential of recycled content in data collection is the use of existing customer reviews and secondary channels that allow for client feedback. These include blog pages, social media platforms, and forums. Reviews can offer insights not only into customers’ experience with your products, but with the company itself.
If you have been observing a consistently low turnout of comments and reviews, consider incentivizing customers with something like a rewards point scheme, or at least simplifying the response process. While it will not always fit neatly into your objectives for the kind of data you consider most valuable to collect, customer feedback will send the message loud and clear about the best and worst components of their experiences with the company.
To evaluate the areas in which a lead nurturing campaign can be optimized, let’s review the key components of lead nurturing:
Each of these components has specific metrics that can be analyzed to assess its performance and, likewise, be the focus of our optimization efforts. But how can we figure out which parts need optimization? The first step is to compare your observed metric to an established benchmark.
Sources for benchmarks include:
Once we have decided what we are comparing our rates to, we want to know whether the metric we observed is significantly better (or worse) than those benchmarks. How do we do that? Enter the chi-square goodness-of-fit test. You can learn how to use this test in the video below.
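To make the mechanics concrete, here is a minimal sketch of a chi-square goodness-of-fit test with two categories (e.g., opened vs. not opened), using only the Python standard library. All of the numbers below (500 sends, 85 opens, a 20% benchmark) are hypothetical.

```python
import math

def chi_square_gof_2cat(observed, expected):
    """Chi-square goodness-of-fit for two categories (1 degree of freedom).

    Returns the test statistic and its p-value. With one degree of
    freedom, the p-value can be computed from the complementary error
    function: P(chi2 > x) = erfc(sqrt(x / 2)).
    """
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical numbers: 85 opens out of 500 sends vs. a 20% benchmark.
sends, opened, benchmark = 500, 85, 0.20
observed = [opened, sends - opened]               # opened vs. not opened
expected = [sends * benchmark, sends * (1 - benchmark)]

stat, p = chi_square_gof_2cat(observed, expected)
print(f"chi2 = {stat:.4f}, p = {p:.4f}")
# Here p > 0.05, so the observed open rate of 17% is not significantly
# different from the 20% benchmark at the 95% confidence level.
```

With more categories you would need the chi-square distribution for higher degrees of freedom, which is where a statistics package or your analytics tool takes over.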
Once you have found your poor performers you will need to prioritize your optimization activities because you can’t fix everything at once. We suggest applying a prioritization model, such as ICE (Impact, Confidence, Ease) or PIE (Potential, Importance, Ease), to your list to identify the highest potential improvements first.
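As an illustration of how ICE scoring might look in practice, here is a small sketch. The candidate fixes and their 1-10 ratings are invented, and averaging the three factors is one common variant (some teams multiply them instead).

```python
# Hypothetical ICE scoring sketch: each candidate fix is rated 1-10 on
# Impact, Confidence, and Ease; the score is the average of the three.
candidates = [
    {"fix": "Rewrite landing page headline", "impact": 8, "confidence": 6, "ease": 9},
    {"fix": "Redesign email template",       "impact": 9, "confidence": 5, "ease": 3},
    {"fix": "Shorten signup form",           "impact": 7, "confidence": 8, "ease": 9},
]

for c in candidates:
    c["ice"] = (c["impact"] + c["confidence"] + c["ease"]) / 3

# Highest ICE score first: that is where optimization effort goes.
for c in sorted(candidates, key=lambda c: c["ice"], reverse=True):
    print(f"{c['ice']:.1f}  {c['fix']}")
```

In this invented example the signup form wins despite its moderate impact rating, because high confidence and ease pull its average up.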
It can be helpful to begin by outlining a few key metrics by which to measure your lead nurturing as a whole. From here, you can move into determining more specific metrics for the pieces listed above.
Click-through rate (CTR) measures the proportion of the audience who click on one or more links contained in a message, on a page, etc. Click-through rates help businesses better understand the relevance and engagement of their content. Low rates can indicate problems in your ability to align content and offers with customers’ needs.
Sales cycle length, the time it takes for a lead to become a customer, can be a difficult measure to pin down, though industry-specific averages can help you navigate it. Regardless, the company's main goal should be to continuously shorten the sales cycle while staying cognizant of realistic constraints; for example, B2B companies inevitably tend toward longer cycles than B2C companies.
Customer lifetime value (LTV) measures how much a client is worth to a company, and also determines the current value of the customer relationship as well as its growth potential over time. Taking both cost of procurement and return on investment into account can be useful when trying to decide how much to budget for acquiring new customers. A negative LTV can indicate an inefficient acquisition approach, or misguided investment in the wrong lead segments.
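As a sketch, one common simplified LTV formulation multiplies average purchase value, purchase frequency, and customer lifespan, then nets out acquisition cost. All figures below are hypothetical, and real models vary (margin-based and churn-based versions are also common).

```python
# Simplified LTV sketch using one common formulation; every input here
# is hypothetical.
avg_purchase_value = 120.0   # average order value in dollars
purchases_per_year = 4       # average purchase frequency
years_retained = 3           # average customer lifespan
acquisition_cost = 200.0     # cost to acquire the customer (CAC)

ltv = avg_purchase_value * purchases_per_year * years_retained
net_ltv = ltv - acquisition_cost
print(f"LTV = ${ltv:,.2f}, net of acquisition cost = ${net_ltv:,.2f}")
# A negative net value would signal spending more to acquire a customer
# segment than that segment returns.
```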
Conversion rate is perhaps the most important staple metric of lead nurturing optimization. Conversion is the process through which prospects are converted to leads, and/or leads are converted to customers.
Thus, your conversion rate is the percentage of recipients who complete a desired action, designated as an indicator of their progress into the next phase. This could be a form submission, CTA click, or email open. To be more specific, “convertees” can be broken down into three categories: Leads, Marketing Qualified Leads, and Sales Qualified Leads.
Leads demonstrate some interest in your content, particularly top-of-funnel offers, and may fill out a form requesting their personal information. Marketing qualified leads (MQLs), meanwhile, engage more actively, taking interest in middle and perhaps even bottom-of-funnel offers.
Finally, sales qualified leads (SQLs) have thoroughly interacted with your company and its offers and content. They are all but ready for a sales pitch! Lead nurturing focuses on the span of the conversion cycle, tracing the progress of leads to MQLs to SQLs.
In B2C marketing, conversions can be relatively fast and simple, as they are connected to more instantaneous actions. In the B2B sales cycle, on the other hand, the conversion process requires longer investment and is more complex, based on a series of smaller conversions. Your conversion strategy will, of course, need to be optimized to best fit your company’s needs.
The answer to what constitutes a “good” lead conversion rate is rather complex and subjective. Average rates vary between B2B and B2C, industries, individual companies, and more—numbers cited by marketing sources range from 2% to 10%. Industry-specific reports can be good resources for establishing ballpark benchmarks, but remember that consistent growth often matters more than surpassing a specific number.
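The arithmetic itself is simple. Here is a sketch of computing a conversion rate and checking it against the 2% to 10% ballpark range mentioned above; the traffic numbers are hypothetical.

```python
# Conversion rate = conversions / total recipients, as a percentage.
# The numbers and the 2-10% band are illustrative, not prescriptive.
recipients = 2400
conversions = 96      # e.g., form submissions

rate = conversions / recipients * 100
print(f"Conversion rate: {rate:.1f}%")

low, high = 2.0, 10.0  # ballpark range cited by marketing sources
if rate < low:
    print("Below the typical range; prioritize for optimization.")
elif rate > high:
    print("Above the typical range; document what is working.")
else:
    print("Within the typical range; keep testing for growth.")
```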
Now, let’s dive into some specific lead nurturing elements including the most common metrics associated with each, ways to improve those metrics, and aspects to test.
In Lesson 5, we covered lead nurturing emails in detail, including testing strategies and the major goals you should be pursuing as you improve your campaigns. Here is a brief overview (you can also return to Lesson 5 for a more in-depth reference).
Signs that you are experiencing issues with email deliverability include a high number of bounced emails and/or a high number of emails being marked as spam.
Ways to improve bounce rate include:
Ways to improve spam rates:
If your open rates are performing significantly worse than your benchmarks, here are some ways to improve them:
The most useful and critical method to increase open rates is by constantly “Testing, Testing, Testing”. That’s because what works for another audience may or may not work for yours. Examples of useful tests to run to raise open rates include:
If your click-through, click to open, and/or clicks to conversion rate is underperforming, here are some tips for improving these metrics:
Again the best way to determine what works best for your audience is to test it. There are several test variables associated with click-through rates including:
Landing pages are a primary way of delivering your offers or hook to leads, and are the way in which you collect a visitor's contact information—so optimizing their performance is time well spent.
If your landing pages are not converting at an acceptable rate compared to benchmarks, here are a few tips for improving them:
The best way to determine what works to convert your landing page visitors is to test it. There are several test variables associated with landing pages including:
Social media and blog metrics are fairly straightforward, and in some regards, easier to track than those of other campaign elements.
These metrics include:
The best way to find out what your audience will respond to (and what the social algorithms will like) is to test variables like:
Tips for improving the conversion rates of blogs include:
You can also try testing the following elements on your blog:
In a lead nurturing campaign, workflows are used to tie the pieces of your campaign together in order to progress your lead from one stage of their journey to the next. The key metric associated with your workflows is goal completion rate—the percentage of contacts enrolled in your workflow who met the designated goal.
To improve the performance of a lead nurturing workflow you can test:
Now that we know what we want to optimize and test, it’s time to actually do it!
A/B testing is the cornerstone of lead nurturing optimization. Also known as split testing, A/B testing allows you to test variations of an element of a campaign alongside one another. The results will enable you to determine which version is the most effective option. Standard A/B testing begins with creating two versions of a piece of content, which are then randomly presented to similarly sized audiences.
The responsiveness and conversion rates of the test groups are recorded and analyzed with testing and/or analytics software, which often offers testing tools in tandem with tracking and analysis of metrics and KPIs.
But as our friend Peep Laja from CXL said,
Using an A/B testing tool does not make you an optimizer. Using a scalpel does not make you a surgeon. - Peep Laja (@peeplaja) March 10, 2014
Now, let’s break each step down in greater detail.
The key to finding success with A/B testing is having solid hypotheses. A hypothesis is a prediction you create prior to running a test. It states clearly what is being changed, what you believe the outcome will be, and why you think that’s the case. Running the experiment will either prove or disprove your hypothesis.
A complete hypothesis has three parts—the variable, desired result, and rationale—which should be researched, drafted, and documented prior to building and setting an A/B test live.
So a hypothesis is essentially a change and effect statement that often follows a simple established formula:
“If [variable], then [result], because [rationale].”
Let’s break these elements down a bit more.
This is an element that can be modified, added, or taken away to produce a desired outcome.
As with the scientific method, you want to isolate one "independent variable" (i.e., element) to test. If you want to test multiple aspects at once, you will need to deploy multivariate testing.
The predicted outcome. Essentially you need to choose the "dependent variable” for your test and how you expect it to change. As discussed above, a number of conversion metrics can be relevant to every component in a campaign. Take time to find the indicators most relevant to the specific piece being tested. This could be more landing page conversions, clicks or taps on a button, email opens, or another KPI/metric you are trying to affect.
The last part of a hypothesis is the “why”. This demonstrates that you have informed your hypothesis with research. What do you know about your visitors from your qualitative and quantitative research that indicates your hypothesis is correct?
A thoroughly researched hypothesis doesn’t guarantee a winning test. What it does guarantee is a learning opportunity, no matter the outcome (winner, loser, or inconclusive experiment).
Another consideration is the desired statistical significance of your results. Setting your confidence level to a higher percentage is equivalent to investing in the accuracy of results.
You now have your independent variable, your dependent variable, and your predicted outcome. Use this information to set up the unaltered version of whatever you're testing as your "control". If you're testing a web page, this is the unaltered web page as it exists already. If you're testing an email subject line, this would be the subject line copy you are already using.
From there, build your variation—the website, landing page, or email you’ll test against your control. For example, if you're wondering whether including an emoji in your subject line will increase open rates, set up your control email with no emojis in the subject. Then, create your variation email with an emoji in the subject line.
Your sample size depends on three factors:
There are a number of sample size calculators available that will determine the sample size you need per variation based on these three factors.
For A/B testing emails, you just need to ensure that each variation is sent to the calculated sample size. For landing pages and website A/B testing, you'll translate sample size into the estimated time you need to run your test with two calculations:
Calculation #1: Sample size × 2 = Total # of visitors needed
Calculation #2: Total # visitors needed ÷ Average # of visitors per day = Test duration (in days)
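The two calculations above can be sketched directly. The per-variation sample size would come from one of the calculators mentioned earlier; both inputs below are hypothetical.

```python
# The two duration calculations above, with hypothetical inputs.
sample_size_per_variation = 1_900   # from a sample size calculator
avg_visitors_per_day = 250          # observed site traffic

# Calculation #1: sample size x 2 = total visitors needed
total_visitors_needed = sample_size_per_variation * 2

# Calculation #2: total visitors / visitors per day = duration in days
test_duration_days = total_visitors_needed / avg_visitors_per_day

print(f"Total visitors needed: {total_visitors_needed}")
print(f"Estimated test duration: {test_duration_days:.1f} days")
```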
Though your variations should be tested simultaneously, there is nothing wrong with selecting testing times strategically. For instance, well-timed email campaigns will deliver results more quickly. Determining these times requires some research of subscriber segments. As mentioned, depending on the nature of the piece, your site traffic, and the statistical significance that needs to be achieved, the test could take anywhere from a few hours to a few weeks.
If you are interested in gaining some additional insight into the reasoning behind your visitors’ reactions, consider asking for qualitative feedback. Exit surveys and polls can quite easily be added to site pages for the duration of the testing period. This information can add value and efficiency to your results.
Using your pre-established hypothesis and key metrics, it's time to interpret your findings. Keeping confidence levels in mind as well, it will be necessary to determine statistical significance with the help of your testing tool or another calculator. If one variation proves statistically better than the other, congratulations! You can now take action appropriately to optimize the campaign piece.
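If your testing tool does not report significance, a two-proportion z-test is one common way to check whether two conversion rates differ. This is a sketch only, with hypothetical A/B numbers; other tests (such as chi-square) are equally valid here.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Uses the pooled proportion under the null hypothesis; returns the
    z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Hypothetical A/B result: control converts 100/2000, variation 140/2000.
z, p = two_proportion_z_test(100, 2000, 140, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Variation beats control at the 95% confidence level.")
else:
    print("Inconclusive at the 95% confidence level.")
```

In this invented example the variation wins; with smaller samples the same observed lift could easily be inconclusive, which is exactly why sample size planning matters.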
If your test failed to achieve a statistically significant result—that is, the test was inconclusive—several options are available.
For one, it can be reasonable to simply keep the original variation in place. You may also choose to reconsider your significance level or re-prioritize certain metrics. Finally, a more powerful or more dramatically different variation may be in order for your next test.
Most importantly, if your A/B test “failed”, do not be afraid to try again. After all, the adage “practice makes perfect” fully applies to testing methods.
Multivariate testing is founded on the same key principle as its A/B counterpart; the difference is the higher number of variables being tested. The goal is to determine which particular combination of variations performs best, examining the “convertibility” of each variation in the context of the other variables rather than in isolation. In many ways, it is a more sophisticated practice.
This type of testing is a great way to examine more complex relationships between optimizable elements. In theory, it is possible to test hundreds of combinations side-by-side! Notably, multivariate tests have their disadvantages, particularly with regards to the greater amount of time and number of site visitors needed to conduct them effectively.
It is important to note that the significant differences between your B2B and B2C nurturing campaigns extend into optimization and testing. This means you will need to tailor your strategies to your sales cycle lengths and database sizes (as well as, of course, other factors relevant to your unique campaigns).
With longer sales cycles, B2B marketing often takes quite a while to collect enough data to gain insights into many optimization opportunities, and the smaller databases often lack sufficient data for more complicated testing, like multivariate testing.
However, it is fully possible to begin with simpler testing, which is why A/B testing is perfect for this type of campaign. Page layouts, email subject lines, and offer types can all be tailored for maximum effectiveness, provided you put in the time to gather enough data to determine whether, and which, changes are making an impact.
B2C marketing’s shorter sales cycles are conducive to fast-paced data collection to gain insights into many optimization opportunities. The flip side of this is that constant optimization is a virtual must to keep up with the (occasionally intimidating) pace of the market.
Larger customer databases also open the door for more complex testing and iterations. While there is still room for A/B testing in a B2C campaign, it is more efficient to incorporate multivariate testing into your approach.
As you can see, the launch of a lead nurturing campaign is only the beginning. If you want to experience success, you have to continually seek opportunities for improvement. As Tom Peters aptly pointed out,
“Excellent firms don’t believe in excellence - only in constant improvement and constant change.”
Feel like you have a better understanding of testing and optimizing your lead nurturing campaigns? Take your learning to the next level by applying your knowledge in the Lesson 9 Exercise below.
Select one element of your lead nurturing campaign to evaluate for optimization (i.e., an email, landing page, blog post, social post, or workflow). Under this lesson’s section, What to Optimize and Test in Your Lead Nurturing Campaign, find the section dedicated to your chosen campaign element. Use this section to guide you through the following steps:
Next up in Lesson 10 of the Lead Nurturing Masterclass, we’ll wrap things up with the best practices for a winning lead nurturing strategy.