Put Your Audience Before Your Intuition: Four Simple Steps for Message Testing

“Our intuition has been right so many times, it doesn’t recognize when it’s wrong,” said Steve Wendel, head of Behavioral Science at Morningstar, Inc., during a discussion on message testing at the Design for Action conference.

Wendel urged attendees to consider not only their message testing results but also their methods, posing the question, “Why is it that we start with the most complicated tests?”

He offered four sequential tests (and a bonus) for improving message effectiveness:

  1. Impact/Null Testing — “Does the message make any difference at all?” Test your message in one group and compare it to a group that receives no message at all. This is a classic two-group design.
  2. “Kitchen Sink” Testing — “Does any part of the message make a difference?” Test a message that is completely different from the original message (i.e., the control). If the message as a whole doesn’t make a difference, it’s unlikely that parts of that message will either.
  3. Archetype Testing (a.k.a. Concept Testing) — “Which concept resonates best with your audience?” Test key message concepts against each other. Let’s say you’re encouraging your audience to access behavioral health services. You’ve heard from your audience, extracted what you can from existing research, or otherwise have reason to believe that trustworthiness and effectiveness are both factors that influence the behavior. Create and test two messages: one that focuses solely on trustworthiness and another that focuses solely on effectiveness. Take the most effective concept to the next phase.
  4. The Microscope Test — “Which message differences actually promote behavior change?” We commonly refer to these tests as A/B, A/B/C, or multivariate (testing multiple combinations of message differences). These tests are meant to isolate the effects of individual message differences to show what specifically works for an audience (a sketch of the underlying group comparison follows this list).
  5. Machine Test — “Wait, are we implementing our message delivery consistently?” This test, sometimes called an A/A test, entails sending the same message to two different groups. If the groups’ outcomes differ significantly, that may be a sign of inconsistent implementation. Of course, this only works if you randomize the groups.
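
All five designs come down to the same comparison: response rates between randomized groups. As a minimal sketch of that comparison (not something from Wendel’s talk), here is a pooled two-proportion z-test in Python; the function name, group sizes, and response counts are all hypothetical.

```python
import math

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Compare response rates between two randomized message groups.

    Returns (z, p): the z statistic and a two-sided p-value from a
    pooled two-proportion z-test, a common choice for A/B-style tests.
    """
    rate_a, rate_b = hits_a / n_a, hits_b / n_b
    # Pool the rates under the null hypothesis that both groups respond alike.
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p

# Hypothetical counts: 120 of 1,000 message recipients acted vs. 90 of 1,000 controls.
z, p = two_proportion_ztest(120, 1000, 90, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z = 2.19, p = 0.0287
```

The same comparison applies whether the second group is a no-message control (step 1), a competing concept (step 3), or an identical message (step 5).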

Does this sound like a lot of work? Well, it can be if you don’t know what questions to ask. So remember these key rules to ensure the research process produces meaningful, actionable results.

  • Put your audience before your intuition. Let your audience speak to you through the data to ensure you’re meeting their needs.
  • Only answer questions that further your learning. If a test won’t produce additional insights into your overarching research question, don’t do it.
  • Randomize when possible. Randomization is the practice of using chance (a coin flip, a random number generator, etc.) to determine which message each audience member receives. Because chance alone decides group membership, differences in outcomes can be credited to the message itself rather than to pre-existing differences between groups, which also makes the results far easier for non-statisticians to interpret. A minimal assignment sketch follows this list.
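
As a minimal sketch of that practice, here is one way to randomize message assignment in Python; the member IDs and variant labels are hypothetical.

```python
import random

def assign_messages(audience, variants, seed=None):
    """Randomly assign each audience member to one message variant.

    `audience` is a list of member IDs; `variants` is a list of message
    labels. A fixed `seed` makes the assignment reproducible.
    """
    rng = random.Random(seed)
    return {member: rng.choice(variants) for member in audience}

# Hypothetical usage: split six members between a control and a test message.
groups = assign_messages(["m1", "m2", "m3", "m4", "m5", "m6"],
                         ["control", "message_a"], seed=42)
print(groups)
```

Per-member random draws keep groups roughly equal for large audiences; for exactly balanced groups, shuffle the audience list and split it in half instead.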

Finally, the golden rule of research: Remain humble enough to recognize your intuition isn’t always right.

I’m reminded of the old adage, “The only constant in the world is change.” Draw on your past experiences to guide you in developing research questions, while remembering that context is everything. The only way to know if your message matters to your unique audience in a specific situation is to test it.

That’s behavioral science we can all get behind.

