Learning Note 4: Measuring impacts of Makutano Junction

Follow-up guest post by James Habyarimana and William Jack of Georgetown University Initiative on Innovation, Development and Evaluation (gui2de)

In 2013-14, gui2de and Twaweza collaborated on a research project which asked a question fundamental to Twaweza’s theory of change: does compelling mass communication contribute to positive behavior change among citizens? The research was situated in a very poor neighborhood of Nairobi, Kenya. This post follows on a previous write-up which introduced the research and some of the first-line findings on whether a high-quality televised program could persuade viewers to undertake civic engagement actions (see Learning Note 3). The post below examines in more detail the measures used in the research – a crucial and difficult choice in much of social science. For instance, are attitudes and stated preferences acceptable outcomes (acceptable in that they can suggest whether an intervention will have impact, and in what direction)? And, moreover, how does one accurately measure attitudes and preferences? Read below about how the gui2de team tackled this issue, and what they found. A PowerPoint presentation summarising the results of the same study is also available - see here.

Previously we wrote about a study we conducted in a Nairobi slum, in which we attempted to identify and measure the impact of exposure to a Kenyan drama called Makutano Junction – a show promoting issues such as income generation, mental and physical health, and the rights and responsibilities of good citizens by incorporating them into the storyline.  The action on which we hoped to see some movement was women’s willingness to sign a public petition demanding improved sanitation in the slum.  As we reported, we found little impact of the videos on this outcome, except amongst women who were exposed to a sub-treatment in which they were explicitly encouraged to sign.  The good news was that the encouragement did change women’s behavior; the bad news was that it made them less, not more, likely to sign.

In this post we report on the impacts of the intervention, and of the placebo – a show called Mother in Law, with little in the way of implicit or explicit social commentary aimed at inducing behavior change – on a number of other dimensions.  The advantage of the signature campaign was that it provided a quantifiable and objective outcome; the measures we report here are more subjective and open to a wider degree of interpretation.  Nonetheless, they provide added insight into the potential impacts of the interventions we assessed.

So what did we measure?  First, we presented participants with a series of vignettes that described problems that might be solved by some kind of individual initiative or action – a drainage problem, a crime problem, a water supply problem, a teacher absenteeism problem, and a gender issue that might be addressed through political participation.  For each scenario we asked the women a number of Yes/No questions – a Yes answer being interpreted as showing more pro-activity or initiative – and aggregated the responses.
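To illustrate the aggregation step, the sketch below shows one way such Yes/No vignette responses could be pooled into a per-respondent, per-scenario score; the coding scheme and data here are hypothetical illustrations, not the study’s own:

```python
# Hypothetical illustration of aggregating Yes/No vignette responses.
# "Yes" is coded 1 (more initiative) and "No" is coded 0; a respondent's
# score for a scenario is the mean of their coded answers, so it falls
# between 0 and 1 – comparable to the 0.45-0.90 control-group range
# reported below.

def scenario_score(answers):
    """Mean of binary-coded responses for one scenario (No = 0, Yes = 1)."""
    coded = [1 if a == "Yes" else 0 for a in answers]
    return sum(coded) / len(coded)

# One hypothetical respondent's answers to a four-question scenario:
drainage_answers = ["Yes", "Yes", "No", "Yes"]
print(scenario_score(drainage_answers))  # 0.75
```

Averaging binary items this way keeps each scenario score on the same 0-1 scale, so differences between treatment arms can be read directly as percentage-point gaps.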

In the control group, scores for individual scenarios ranged from about 0.45 to about 0.90, depending on the question.  Those in the group exposed to Makutano Junction mostly scored higher, but only seldom was the difference statistically significant, and the point estimates were at most 0.05 – five percentage points.

However, women who were exposed to the placebo consistently scored lower than the control, often at high levels of significance and by large margins, up to 15 percentage points in some cases.  Thus, exposure to Mother in Law appears to have dramatically reduced the willingness or capability of women to act – or at least to say they would act – under certain hypothetical circumstances.

These observations have flipped our experiment on its head.  It’s possible that the placebo did its job well, and that simply by getting together to watch videos, women were induced to be less socially active.  The net impact of Makutano Junction would then be the combination of this negative effect and an offsetting, more or less equal, positive effect associated with the different content of the video material.

An alternative hypothesis, however, is that getting together to watch videos did nothing in itself to affect behavior, that the content of Makutano Junction was also ineffective, and that the content of Mother in Law actively demotivated women.  It’s impossible to tell from the data we have, but we are inclined to lean towards the latter.

The other measures we collected were less definitive.  For example, we asked a series of questions designed to measure an individual’s “locus of control,” related to a positive outlook and expectations for the future, and a belief that they could overcome obstacles to achieve more.  Here we found no effect of the treatment or placebo compared with the control group, either on average or in distribution.

Next we asked if exposure to the videos might have affected the way people think about local public goods, and in particular the priority they attached to certain goods and services.  Along eight dimensions, including for example health, education, garbage collection, and police corruption, there were only a handful of statistically significant impacts of exposure to the videos.  And across all the point estimates, the ratio of negative to positive effects was 3 to 1.  We calculated a standardized index of preference for public goods and here found no effect of the treatment video, but a negative and statistically significant impact of the placebo, which appeared to nudge people to be less concerned about public good provision, not more.
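A common way to build such a standardized index is to z-score each component outcome against the control group’s mean and standard deviation and then average the z-scores within each respondent.  The sketch below illustrates that general construction with made-up numbers; the study’s exact index construction may differ:

```python
# Hypothetical sketch of a control-standardized summary index:
# z-score each component outcome using the CONTROL group's mean and SD,
# then average the z-scores within each respondent.  Data are invented.
from statistics import mean, pstdev

def standardized_index(respondent, control_group):
    """Average of control-standardized z-scores across component outcomes."""
    zs = []
    for j in range(len(respondent)):
        control_vals = [r[j] for r in control_group]
        mu, sd = mean(control_vals), pstdev(control_vals)
        zs.append((respondent[j] - mu) / sd)
    return mean(zs)

# Made-up data: each row is one control respondent's scores on three outcomes.
control = [[0.5, 0.6, 0.4], [0.7, 0.8, 0.6], [0.6, 0.7, 0.5]]
treated = [0.8, 0.9, 0.7]
print(standardized_index(treated, control))
```

Because every component is expressed in control-group standard deviations, the index is unit-free, and a treatment effect on the index can be read as an effect in standard-deviation units.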

Finally, we asked respondents about their levels of community participation – including knowledge of group leaders, membership of groups, and participation in meetings.  Again, the only significant effects were negative, especially for the placebo.  Those exposed to it were 9 percentage points less likely to know the identity of their community leaders (compared to 62% in the control) and 7 points less likely to be a member of a community group (39% for the control).  On the other hand, reported attendance at meetings was uncorrelated with treatment status – if attendance is what is needed for meaningful participation, the videos might have been more benign (if not positively effective) than we might otherwise fear.

All in all our results are a little disappointing, but it might have been too much to expect to see meaningful changes in behavior a few months after a short, 6-week exposure to the videos.  Other studies, like the work of Tanguy Bernard, Stefan Dercon, Kate Orkin and Alemayehu Taffesse in rural Ethiopia, have had more positive results, so maybe the slum environment was not conducive to producing big effects.  On the other hand, Berg and Zia’s work in South Africa, mostly in informal settlements, showed some impact on financial behavior.  That study might have been more robust because the exposure took place not in somewhat contrived settings – churches converted to video halls – but on free-to-air TV.

Even so, if the experimental setting we engineered made it difficult to see results, the negative impact of the placebo on a variety of outcomes is all the more intriguing.  Maybe we should just switch the treatment and placebo labels and make this a paper about the pernicious effects of soap operas?!


Pernicious effects of soap operas notwithstanding, this post highlights the difficulty in measuring the process of behavior change. Many of us agree that bringing about behavior change is hard and takes time (though as with much science, evidence is mixed), and so many researchers opt for the measurement of antecedents, or determinants, of such change – i.e., indicators that might tell us change is forthcoming. This includes attitudes and preferences, but also knowledge, skills, self-efficacy, beliefs, stated intent, and similar. Messy business. In our opinion, the gui2de team’s choice of presenting vignettes, followed by a series of questions, was an ingenious way to attempt to capture respondents’ attitudes and preferences, vis-à-vis the outcomes of interest (civic action). The vignettes were translated and also thoroughly pretested in a similar setting before they were applied. For illustration, one of the vignettes is presented below. As far as results go – in this round, we admit defeat in achieving the impact we were after. But that doesn’t mean we didn’t learn along the way.  

Vignette #1

Mike lives in a house that, like others in his area, has lots of solid waste dumped outside. In particular, there is a drain outside his house that floods occasionally, creating a very unhealthy environment for children in the area. Mike and his neighbors would really like to have the drain cleared so rainwater can flow away from their homes. If you were Mike’s neighbor, would you take any of the following actions?

a) Ask someone in your household to help clear up the drain? Yes/No
b) Contribute to a fund to pay for the civil works required? Yes/No
c) Talk to the village chairman to do something about the drainage? Yes/No
d) Start looking for alternative accommodation? Yes/No

Read more: Georgetown University
