Measuring Behavior Change Effectively

Does advertising work? Put it this way: if it didn’t work, would companies still invest in it? But how do we prove that it works? John Wanamaker, a US businessman born in Philadelphia and considered the father of modern advertising, once said: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

In the social development field, behavior change communication (BCC) is used to achieve a similar kind of influence. But instead of promoting brands, aid organizations sell ideas. These ideas typically focus on how beneficiaries can improve their livelihoods or protect themselves from harm. Unlike commercial enterprises, aid organizations have no sales figures to evaluate. As a result, proving impact becomes a real challenge, since the desired outcome is often some form of behavior change.

Aid organizations often resort to qualitative evaluation methods with anecdotal results, as opposed to quantified measures of impact. When collecting qualitative data, it is important to understand that the results are not representative and that an inference to the overall target population cannot always be made. Qualitative measurement is often best applied when we do not yet understand the nature of an issue and need to explore different possibilities. Further, qualitative case study material tends to focus on individual cases rather than common practice, potentially highlighting a non-representative view. For all these reasons, qualitative evaluations can be subjective and less convincing when it comes to showing impact on behavior change.

Another option is to verify behavior change through observation. Observational studies are certainly powerful, as they can link an intervention directly to actual behavior. The challenge is that many interventions will not have the opportunity to do this; in some cases it is nearly impossible. We can observe helmet wearing and school attendance quite easily, but what about wildlife trafficking or safe migration? Moreover, we cannot understand what beneficiaries are thinking simply by watching what they do. Observation alone will not provide insight into why beneficiaries behave the way they do, and from a program perspective, this is often what really needs to be understood.

One of the better ways to understand behavior change is to conduct some form of quantitative study. Knowledge, attitude and practice (KAP) surveys are commonly used for this purpose. Their effectiveness will, of course, depend on how well they are executed, but some fundamental guidelines should be followed when using a KAP survey to measure behavior change. First, apply an appropriate methodology: ideally, the sample should be representative of the target group. Second, pre- and post-intervention measures are recommended so that impact can be verified with some level of accuracy; post-evaluation measures can also be carried out using a control sample, provided the control is drawn from a population similar to the target group. Third, develop KAP measures that go to the heart of what the intervention is trying to achieve. Finally, consider barriers to change (e.g. financial constraints, access, availability) as well as enforcement factors (e.g. regulation awareness, law enforcement). Social marketing and BCC campaigns can bring beneficiaries to the point where they intend to change; actual behavior change, pushing beneficiaries over the threshold, often results from removing barriers or from the help of law enforcement. Negative attitudes can also be a barrier to change, making it important to measure attitudes as well.
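The pre/post-with-control design described above is often analyzed as a difference-in-differences: the change observed in the target group minus the change observed in the control group. A minimal sketch of that arithmetic is below; the function name and all figures are hypothetical illustrations, not real survey results.

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Difference-in-differences estimate: the change in the treatment
    group minus the change in the control group. Inputs are proportions
    of respondents reporting the desired practice."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Illustrative figures only: 40% of the target group reported the safe
# practice before the campaign and 62% after, while a comparable control
# population moved from 41% to 48% over the same period.
effect = diff_in_diff(treat_pre=0.40, treat_post=0.62,
                      control_pre=0.41, control_post=0.48)
print(f"Estimated campaign effect: {effect:.2f}")  # prints 0.15
```

Subtracting the control group's change is what separates the campaign's contribution from the background trend; without it, the target group's 22-point rise would overstate the effect.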

At the end of the day, a KAP survey cannot prove causality, but done correctly it can generate strong evidence that some level of behavior change has taken place. Methodology and execution are key.

Daniel Lindgren is the Founder of Rapid Asia, a monitoring & evaluation consultancy based in Bangkok, and can be contacted