We measure to be accountable but, equally importantly, to learn. We want to learn because we certainly don’t have all the answers on how to achieve our vision of an East Africa with active, engaged citizens and responsive, accountable governments. If there were an agency-and-responsiveness vaccine, we could simply roll out vaccination campaigns. In its absence, it’s all about educated trial and error, learning from the experience, and trying again.
On the accountability side, we have policies and guidelines covering all of our major work areas, internal systems to track contracts, budgets, expenditures, and outputs produced, and an ethic of output-based (not input-based) performance. Independent evaluation has confirmed that our systems are of high quality and that our processes ensure value for money.
But the higher-order purpose of measurement is to learn: to track and describe how change happens. Our vision and strategic statements set the bar high: we outline core areas (problem statements) in basic education and open government which we want to change. These are outlined in our strategy document. To get there, we design and implement specific initiatives (often a set of inter-linked initiatives); these are elaborated in considerable detail in each annual plan (see for example the plan for 2015).
For every initiative we implement, we develop a mini-theory of change, in which we define:
1. Which overarching problem it is addressing
2. Who it is targeting
3. What desired change it is contributing to (intermediate outcomes), and via which hypothesized pathways
The Learning, Monitoring and Evaluation (LME) unit leads and guides this work. But engagement and curiosity are expected and encouraged across the organization.
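The three elements of a mini-theory of change can be captured as a simple structured record. The sketch below is purely illustrative: the class and field names are our own shorthand, not Twaweza's actual template, and the sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MiniTheoryOfChange:
    """Illustrative record for an initiative's mini-theory of change."""
    problem: str                      # 1. which overarching problem it addresses
    target_groups: list[str]          # 2. who it is targeting
    intermediate_outcomes: list[str]  # 3. desired changes it contributes to
    pathways: list[str]               # ...via which hypothesized pathways

# Hypothetical example (not a real Twaweza initiative record):
toc = MiniTheoryOfChange(
    problem="Low learning outcomes in basic education",
    target_groups=["parents", "teachers", "district officials"],
    intermediate_outcomes=["Shift in norms around checking children's learning"],
    pathways=["Radio shows prompt household conversations about schooling"],
)
```

Keeping the record this small makes it easy to compare an initiative's stated pathways against what monitoring later observes.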
Monitoring is (much) more than bean-counting
We control an initiative up to a point (say, the point of production), but then we release it into the real world. In order to understand what actually happens “out there,” we monitor. Monitoring helps us be accountable, but more than that, we are curious: who is the initiative reaching? In what volume? What do people think of it? This applies equally to materials we produce and to broadcasts; it also applies to engagement inputs and strategies. It is relevant for single products (e.g. just radio shows), as well as for a package of products that go together (e.g. radio shows, together with a televised interactive campaign, print material, and liaising and engaging with major stakeholders in government). In monitoring terms, this means tracking:
· Delivery/distribution: where was the product sent, how far down the pipeline did it get, and in what volume?
· Coverage: of all the potential users that could have received it, what is the proportion that actually did?
· Quality: what do we think of the quality? What do the end users think of the quality? How about experts, do they have anything to say?
· Feedback from users: In addition to perceived quality, what other feedback do we get? For example, is the product new, interesting, useful? If it’s useful, then how? Has it been used?...
Monitoring gives us considerable insight. But it stops short of asking (and answering) the ultimate question: did the initiative contribute to change, and what kind of change? This is the question that is asked by evaluation.
Evaluation = curiosity
What actually works, and how? Evaluation at Twaweza is designed to answer a set of four core questions.
1. What are the effects of a major Twaweza platform (such as Sauti za Wananchi) on specific outcomes?
2. What are the effects of a Twaweza-led “campaign” (or set of interlinked interventions) on specific outcomes?
3. What is the effectiveness (and what are the effects) of a particular Twaweza partner on specific outcomes?
4. What are the changes in an overarching area (problem statement)? This includes assessing Twaweza’s contribution and, where plausible, also attribution.
Evaluation outcomes are categorized according to our theory of change, which specifies 10 types of intermediate outcomes (e.g. policies, budgets, norms, attitudes, etc.; see page 11 in our strategy document). Guided by these categories, we develop specific measures for each evaluation – e.g. which policy we want to change, which component of the budget, which norm or attitude, held by whom, etc. For all our evaluations, methods follow purpose; we use and mix qualitative and quantitative methods; we strive to include more than one source of information and more than one round of data collection.
Some evaluations are conducted internally, i.e. Twaweza’s LME unit is in charge of the entire process: drafting terms of reference, selecting implementing partners as needed (consultants, data collection firms, etc.), developing and vetting tools and methodologies, and finalizing the results and reporting. Other evaluations are conducted in partnership with external high-quality research partners, in which the research design, tools and products are the result of a collaborative process. These partnerships are selected based on the overlap of interest in an evaluation (research) question between the research team and Twaweza, and they tend to fill a dual role: providing Twaweza with an evaluation, while providing the research team with raw material that furthers knowledge in their field.
All our evaluations are characterized by a commitment to transparency. We mean transparency of the results – whether they are positive or negative assessments of Twaweza’s initiatives. We also mean transparency of methods, tools, and the raw data.
It’s got to be linked to organizational learning
Unless you learn from it, you might as well not do it. Monitoring and evaluation data and findings are fed into regular organizational processes. Mid-year, we review our progress to date and forecast the rest of the year; this includes implementation plans, monitoring and evaluation plans, and financial forecasts/revisions. Towards the end of each year we hold a retreat followed by an annual planning exercise, where we again review progress against the successes (objectives) set for the year, and also consider higher-order questions. For example:
a. Are our tactics successful? Do we need to shift gear, and how?
b. Have we made sufficient progress during the year that we believe we are still on the right track, and at the right speed? And if not, what adjustments need to be made?
c. What are we learning from the evaluation results and insights? How do they affect our strategies, and indeed our overall direction?
d. Given the political environment we work in, is the overarching problem statement still relevant? How does it need to be adjusted?
Institutional learning is key, but it is a big bonus if we also generate evidence that advances the field. This is often the case in evaluations designed in partnership with external researchers.
Specifically, what are we evaluating now?
For the 2015-18 strategic period, our evaluation plans will be developed around the core areas listed below. As our implementation plans develop, and sometimes shift, we will also tailor the evaluation plans accordingly; therefore the list below may be adjusted as needed.
1. How does the introduction of Sauti za Wananchi into the Kenyan (and Ugandan) media space affect the use of data and citizens’ opinions in public dialogue? In Tanzania, how do we assess Sauti’s contribution to the media space?
2. What are the effects of Twaweza’s campaigns developed around elections in each of the three countries (2015 in Tanzania, 2016 in Uganda, and 2017 in Kenya) on citizens’ perceptions of responsive, engaged authorities? On perceptions of public dialogue and accountability?
3. What are the effects of innovative media, such as the “MP reality show,” on citizens’ perceptions of responsive, engaged authorities? On perceptions of public dialogue and accountability? Are there also effects on the MPs?
4. How effective are our core communications partners, such as Minibuzz, Rockpoint 256, and Makutano Junction, in getting key messages (on education, governance, accountability) across to their audience? Can we link exposure to changes in norms, attitudes, perceptions?
5. What contribution is Twaweza making to selected policy dialogues (e.g. on open data, education, etc.), and can we trace our inputs and engagement through to how the policy is designed? Can we trace how it is implemented?
6. What is the effect of Twaweza’s work with local CSOs and district-based initiatives on sub-national government responsiveness? Can changes be traced to school management? How about to citizen engagement?
More details on monitoring types, methods, logic, and responsibilities are outlined in the Monitoring Framework.