There’s no avoiding it now. After successfully laying the groundwork for evaluation by establishing goals and objectives and creating a logic model, you are on your way to really implementing a communication evaluation plan.
Before you do, it’s important to understand that there are really two types of evaluation — those that occur before and during a campaign (formative evaluation) and those that occur as you gather and process campaign data to sum up results (summative evaluation).
Think of formative evaluation as communications recon. It produces information that is fed back during the development of a campaign to help improve it (Scriven, 1967). It can begin with fact-finding missions using literature reviews, community focus groups and interviews with key stakeholders.
The core elements of formative evaluation are planning and analysis. We know. We told you about goals and objectives earlier, but these are such critical elements of effective evaluation, we want to reinforce the point: You can’t evaluate without a vision for what you’re evaluating. Set long-term goals that describe the outcomes you want to achieve and define the measurable steps — objectives — that will allow you to track progress toward that goal.
This is when you begin to understand the needs and motivations of your target audience (Bauman, 2004) and to learn what affects their behavior. Formative evaluation can help you appreciate where your audience falls on the change continuum and can establish the baseline for future evaluation. Did you move your audience from pre-contemplation to contemplation? Are they ready to begin considering a behavior change when previously they were averse to the issue?
Going on guesses or gut instinct is likely to send your whole communication effort in the wrong direction. Analysis is the communicator’s lifeline to the science of evaluation. When we launch into a communication effort, we have assumptions about what our audience thinks and feels, and how they might act in response to our campaign efforts. You will have captured these subtleties in your logic model. The formative evaluation stage is your chance to test your hypotheses and determine whether your assumptions are right. This fine-tuning is what it takes to put you on the path to successful communications.
Examples of formative measurements
Remember the first time you decided to stick your toe in the dating pool? While some people like to just dive right in, others are more cautious, assessing their own weaknesses and strengths, then determining what they might need in a potential mate. Those are the folks who understand the importance of a needs assessment to the success of a campaign.
A method of formative research, a needs assessment might begin with an audience analysis or environmental scan to learn about an audience’s behaviors and become familiar with events and trends in the environment (Lauzen, 1995). This can be thought of as the intelligence-gathering phase.
Other strategies to consider are exploratory research, which seeks to identify both enablers of and barriers to action, and pretesting, where piloted messages are tested to obtain audience reactions prior to finalizing a campaign (Atkin and Freimuth, 1989).
Tools on this front include:
- Surveys: Conducted online or offline, surveys can be powerful tools for asking an audience directly about, for example, opinions on a social issue, an organization’s message, the structure of web content being presented or the value of a community created in a digital space.
- Focus groups: Traditional focus groups, conducted in person or online, more informal discussion groups and one-on-one interviews are terrific for gathering information directly from an audience. You can learn whether the outreach channels identified in the research phase turned out to be legit. For example, women are the most likely demographic group to get health information online. However, you may find that the women who comprise your audience are less likely to participate in discussions of health issues on social networks and more likely to engage in such discussions through more private channels.
Here are some other common tools that would be considered part of a formative evaluation:
Media analysis: Also known as content analysis, this technique is a broad and systematic analysis of the nature of traditional and digital media reporting on a particular topic. The aim is to determine what specific issues are being discussed and how they are being framed (i.e., in a positive or negative light) (Hughes, 2011). (In dating terms, this is the stage where you’d Google that person who just winked at you on Match.com to make sure they aren’t already married or wanted by the FBI.) Media analyses can involve the collection of either quantitative or qualitative data.
Message audit: Similar to a media analysis, a message audit examines the source, tone and content of specific communication messages, as opposed to a broader examination of media (McCarthy, 2008). (This is where you might check out a prospective date’s Facebook page to see what kinds of comments their friends write [are they witty, or blah?], see if you find any embarrassing pictures, etc. It’s all about knowing what you’re getting into and how you should communicate going forward.) Message audits can help an organization evaluate and improve the effectiveness of its conversations about an issue (Hargie and Tourish, 2000).
Integrated media audit: The role of an integrated media audit is to assess the ability of an organization’s many communications channels to successfully convey a message. This might include an analysis of multiple media, including websites, databases, mailing lists and conferences. (Here is where you might direct message your date’s followers on Twitter to unearth any “skeletons”.) These audits usually provide organizations with a quantitative analysis, a validation of whether a campaign is reaching its intended target (Buchanan, 1998).
While formative evaluation does just what it says it does — helps to form a campaign — summative evaluation also does what it says it does — it sums up what’s happened. Summative evaluation typically occurs when a campaign reaches certain benchmarks, and it is usually broken into three levels: process, outcome and impact evaluation, each described below. It gives us the detail we need to persuade boards of directors, funders and decision makers that communications is worth the investment. It provides information to help us improve our campaign approaches and determines the effectiveness of the campaign overall.
Process evaluation is where most of us start and stop our evaluation. Counting is fun! We can count the number of brochures we distributed at a health fair. The number of media hits we earned and the millions of people we reached with our message. The number of followers we have on Twitter and the number of likes on Facebook. Thousands, millions, even billions of people know about our work thanks to the power of communications. Woo hoo! Now that’s something to celebrate … but we secretly know that’s only a piece of the evaluation puzzle.
In plain terms, process evaluation assesses the extent to which a communications campaign is being delivered as it was intended — that is, how true the program is to its blueprint (GAO, 2011). When you look back to your campaign logic model, process evaluation responds to the success indicators for your inputs and outputs. It is also where you can determine the influence of external factors and any missed assumptions. In this phase, evaluators seek to know what exactly is happening during the course of a communication campaign. What is being delivered? How much traffic was there to a website? How many views did a video receive on YouTube? We’re determining volume and measuring the noise generated by a campaign, not the action the campaign motivated its audience to take — that comes later.
As with a formative evaluation, data collection for a process evaluation is often conducted using a mix of quantitative and qualitative methods. Qualitative strategies such as interviews, focus groups, usability testing and surveys with program staff are particularly useful in that they provide information that is difficult to tease out of quantitative findings: How is the communication campaign being delivered? Are there barriers to implementation, according to the interview subjects? Those delivering the message can often provide unique perspectives on the program — what’s working, what isn’t working and how it can be adjusted. In addition, direct observation of a campaign in action can again reveal insights about an initiative that quantitative data simply cannot provide.
It’s time for a bit more analysis. We’re moving beyond counting to assessing awareness. Through qualitative research and analysis, we begin to determine whether we’re realizing the outcomes we laid out in our logic model.
Outcome evaluation determines short-term effectiveness and is often directly connected to changes in personal knowledge, attitudes and beliefs. This is where you start to see some progress on the social change continuum. For example, local residents might previously have been uninterested in and unsupportive of local agriculture programs. Now there is interest in exploring whether farmers markets are effective tools for increasing local food consumption in the state.
You can decide what to measure by revisiting the campaign logic model. What were your success indicators for short-term outcomes? What research questions did the campaign team agree would prove the efficacy of communications?
Here are some ideas for measurement tactics that can help you evaluate your outcomes:
- Use surveys, focus groups and media analyses to determine changes over time. If you’re updating your brand, these methods can help you find out whether it is effectively communicating and delivering on its mission and promise. Is the brand easily recognizable among target audiences? Have there been any negative reactions to the brand? (A simple pre/post comparison is sketched just after this list.)
- Create a feedback loop to measure the success of an event. Feedback opportunities should capture how likely attendees are to apply what they learned in other areas of their work, and feedback should be gathered immediately following the event and again at later intervals to determine whether the outcomes last. Tools for creating feedback opportunities include online surveys, social media conversations and direct mail inquiries.
- Conduct media analyses to determine the outcomes of a training program. Following spokesperson training, monitor coverage to determine whether the type and tone of reporting have changed.
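To make “changes over time” concrete, here is a minimal sketch in Python that compares hypothetical pre- and post-campaign survey waves on brand recognition. The sample sizes, counts and two-wave design are illustrative assumptions, not data from any actual campaign.

```python
# Hypothetical pre/post survey results -- illustrative numbers only.
pre = {"respondents": 400, "recognize_brand": 112}    # baseline wave, fielded before the campaign
post = {"respondents": 380, "recognize_brand": 167}   # follow-up wave, fielded after the campaign

pre_rate = pre["recognize_brand"] / pre["respondents"]
post_rate = post["recognize_brand"] / post["respondents"]
change_in_points = (post_rate - pre_rate) * 100

print(f"Brand recognition before: {pre_rate:.1%}")
print(f"Brand recognition after:  {post_rate:.1%}")
print(f"Change: {change_in_points:+.1f} percentage points")
```

The same pattern works for any short-term outcome indicator in your logic model: capture a baseline during formative evaluation, repeat the measure after the campaign and report the change.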
Impact evaluation is the Big Kahuna — the most difficult and most expensive of all. It also gives us the answers that fill in the final missing pieces of the measurement puzzle. Impact evaluation looks far beyond the objectives you have set for your campaign and seeks to answer the question, “Was your goal realized?” For example, have there been major improvements in infant mortality rates? Are standardized test scores increasing? Have policies been implemented that improve water quality?
Not only is impact evaluation the hardest to take on, but it’s also the hardest to prove. It might take decades to recognize a seismic shift in how society thinks or acts. Because social change happens in the real world, researchers caution that it is difficult to know what influenced the results. How can communicators take credit for a societal shift that may have been influenced by any number of factors? The only way we can is by relying on data. Impact evaluation can’t be the only evaluation we perform to measure effectiveness. It is built on the formative, process and outcome evaluations that show ongoing progress, shifts in strategy and campaign impact over time. By gathering the facts, you can demonstrate that communications played a role in shifting societal norms.
Cost-Benefit and Cost-Effectiveness Analysis
Cost analysis is an economic evaluation method that helps program designers and evaluators determine the relationship between their inputs (monetary resources) and their outcomes (program results) — that is, it helps organizations identify the least costly method of obtaining a desired level of output (CDC, 2004). This is your response to the board’s inquiry about ROI, or return on investment. In addition, cost analyses encourage communicators to become knowledgeable about program costs, something very salient to stakeholders and funders (Rossi, Lipsey and Freeman, 2004).
Cost-effectiveness compares the cost of a program to its effectiveness as measured in communications outcomes. Cost-effectiveness is expressed as a ratio, which represents the cost per outcome (e.g., cost per change in attitudes, or cost per 5 percent increase in high school graduation rates in Washington, D.C., public schools). Relative to a cost-benefit analysis, cost-effectiveness can be less time- and resource-intensive and easier to understand.
A cost-benefit analysis requires evaluators to estimate both the tangible and intangible benefits of a campaign and the direct and indirect costs of delivering it. Because all program outcomes are framed in monetary units, decision makers can directly compare the outcomes of different types of communication campaigns. A cost-benefit analysis assigns dollar values to the outcomes attributable to the program.
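To make the arithmetic behind both methods concrete, here is a minimal sketch in Python. Every figure is a hypothetical assumption for illustration; swap in your own campaign’s costs, outcome measures and monetized benefits.

```python
# Hypothetical campaign figures -- illustrative only, not drawn from any real campaign.
total_cost = 50_000.00          # direct + indirect costs of delivering the campaign
outcome_units = 10              # e.g., a 10-point gain in awareness among the target audience
monetized_benefits = 80_000.00  # tangible + intangible benefits, expressed in dollars

# Cost-effectiveness: cost per unit of communications outcome (no need to monetize the outcome).
cost_per_outcome_unit = total_cost / outcome_units
print(f"Cost per point of awareness gained: ${cost_per_outcome_unit:,.2f}")

# Cost-benefit: compare monetized benefits with costs.
net_benefit = monetized_benefits - total_cost
benefit_cost_ratio = monetized_benefits / total_cost
print(f"Net benefit: ${net_benefit:,.2f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

A benefit-cost ratio greater than 1 means the campaign returned more, in monetized terms, than it cost. Notice that the cost-effectiveness ratio never requires you to put a dollar value on the outcome itself, which is one reason it tends to be the less resource-intensive of the two approaches.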
Putting It All Together: The Evaluation Plan
Now you have a whole lot of information, but it’s only useful if you put it to work. The word “plan” can be overwhelming — especially for communicators who are already buried in strategic plans, media plans, marketing plans, retirement plans…. So use any word you’d like to describe how you’re going to proceed with evaluating your campaign.
Since we know that a good evaluation can make you a rock star, we’re going to call this approach evaluation’s “all-access pass.” Here are the steps to get backstage:
- Gather the people who matter most to your campaign.
- Together, propose a goal and objectives.
- Apply your goal and objectives to the development of a logic model.
- Use your logic model success indicators to determine key evaluation questions.
- Identify how you will measure or assess each of those questions.
- Determine timeline, budget and staffing.
- Conduct formative and summative evaluation.
- Party like the rock star you are!
Some Parting Advice
There’s no question that you are now ready to head out and conquer the world with your evaluation skills. Just be careful not to conquer yourself in the process. There are many, many options for evaluating your communication efforts. You don’t have to do them all. Part of your job will be getting the group to agree on what’s “good enough.” What can we afford? What can we accomplish with the staff we have? What can we prove?
Don’t be scared off by the buffet of options. It’s better to choose innovative, low-cost measures that get the job done than not to evaluate at all. Through practice, you’ll learn what works. Make sure that your proposed evaluation matches the scope of your communication effort. Don’t propose an impact assessment for a short-term, low-budget campaign.
Avoid trying to accomplish all of it alone. There are these great people out there called “grad students” who really know their stuff (thanks, John!). They need projects. You need help. They have access to research libraries. You need access to best practices that will help you narrow your search for measures.
Finally, calm yourself. The prospect of an evaluation carries with it a certain degree of anxiety, because it poses questions whose answers you may not be prepared to hear. Has your campaign made a difference? Have you spent your budget wisely? Will the Washington Nationals replace the New York Yankees as Major League Baseball’s winningest franchise?
OK — so an evaluation can’t tell you everything about everything.
As long as you go into the process prepared to learn and not just to prove, there’s really nothing to fear. Share this philosophy at the outset of an evaluation with those who are in a decision-making position. Get them on board about what an evaluation can and should do, so that no matter the answers to those frightening questions, you can celebrate what was learned, together.
So what if you don’t need more shelf space this year for the latest communications award? Even the campaigns that carry no hardware have incredible value through learned best practices — if they are evaluated.
It’s like a blind date. You might have some idea of what to expect from an evaluation, but you won’t really know what is possible until you stick your neck out there. And the most successful people don’t give up on the entire process as the result of one, or even a few, bad dates.
Be courageous. Evaluate!