Imagine that after a bumpy ride, you arrive in a sizeable village in Magomeni Ward, Bagamoyo District, Eastern Tanzania. You get off the daladala and look around; you have never been here before. Your task is to find ten specific households in this village: you have a list with the names of the heads of the households, though for one of them there is no name recorded (it just says “mother of the house”) and for another, you have only a first name and an initial for the last name. The list is five years old. Not only do you have to find the households, you also need to try to find the person interviewed five years ago and interview them again today. In addition, you must find Magomeni Primary School and the Magomeni Health Post and conduct interviews there as well. The best way to get started is to find the village chairman – the office cannot be far from the daladala stop, and this person is usually quite knowledgeable about the families in the village. In any case, you have to interview her or him as well. You have just one and a half days to do all this, and it looks like it’s about to rain.
In the previous Learning Note we described the rationale for the nationally representative panel survey that Twaweza is currently fielding. This Note describes some of the intricacies of fielding a follow-up panel survey. A panel design is a study that provides longitudinal data on a group of people, households, or other social units, termed ‘the panel’, about whom information is collected over a period of months, years, or decades. In other words, the same people are visited and interviewed at set intervals. Now, these people and households are not obligated to stay in the same place for the duration of the panel: indeed, some panel studies run for decades, and panel participants live normal lives during this time. This means that the burden is on the researchers to find the participants – whether they stay in the same place, join another household through marriage or divorce, move to another town for work, and so forth.
The Twaweza follow-up survey focuses on telling the story of the entire communities surveyed – that is, the households together with the schools, health services, and community leadership. Therefore, while we endeavor to find the same households – and, to the extent possible, the same individuals – we are not tracing households and individuals beyond the boundaries of the original communities. This makes data collection slightly easier: once every reasonable effort has been made to find the original household from the baseline, and it is established that the household is not available (because the family has moved to another town, for example), we can select another household as a replacement. And yet, it’s not as simple as it sounds…
Twaweza and the team from EDI, the agency Twaweza has engaged to conduct the survey, had been working hard for weeks before the actual data collection began. First, we had to carefully review the files from the baseline survey, conducted in 2010. We were checking whether villages and neighborhoods were clearly named, whether GPS coordinates were available for the households, whether the names of respondents were clearly recorded, and so forth. No dataset is ever perfect (sometimes two villages have very similar-sounding names, often a household and a family are identified by the names of several different individuals, and spelling mistakes are common), but still, we had a decent starting list.
Then, we put together intricate protocols – that is, rules about whom to select for interviews and how. For instance, the rule for households is: find the original household and interview the original respondent; if the original respondent is not available, select another eligible respondent in the original household; if the original household is no longer in its original location but still within the boundaries of the original community, find and interview the original household there; if the original household has moved outside the boundaries of the original community, replace it with another randomly selected household. And so on – for households, schools, health facilities, and community leaders.
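As an illustration, the household branch of this protocol can be sketched as a small decision function. This is a hypothetical sketch only: the status labels and return values below are our own shorthand, not the actual field system used by Twaweza or EDI.

```python
def select_household(status: str, respondent_available: bool) -> str:
    """Sketch of the household selection rule described above.

    status: where the baseline household is found today, one of
      "original_location", "moved_within_community",
      "moved_outside_community".
    respondent_available: whether the baseline respondent can be
      interviewed.
    """
    if status in ("original_location", "moved_within_community"):
        # The original household is still within the community:
        # interview it, preferring the original respondent.
        if respondent_available:
            return "interview the original respondent"
        return "interview another eligible respondent in the household"
    # The household has left the community entirely, so after every
    # reasonable effort it is replaced.
    return "replace with a randomly selected household"
```

Field teams record which branch was taken for each household, so that replacement rates can be checked later.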
Designing the protocols involves playing the “what if…” game a lot. That is, we try to put ourselves in the shoes of the data collection teams and imagine how we would solve problems likely to be encountered during data collection. For example, in addition to interviewing adult members of the household, we also want to test one child per household aged 6–16 in basic numeracy and literacy skills (using our Uwezo test). What if the chosen child is at school: how long do we wait to see if the child returns, or can we select another child? What if, in a health facility, when we have to fill out the medicine in-stock section, we cannot observe the stock room because it is locked: can we look at the records, or is the nurse’s recall sufficient? What if, in the school we are meant to survey, neither the head teacher nor the assistant head teacher is available?
Each type of respondent comes with an intricately developed set of rules that all survey teams must follow uniformly, recording their choices along the way. These protocols, together with the questionnaires, are piloted and revised many times before they are considered final. Piloting is essential – it allows the team to see how the protocol and the questions work in a real-life scenario, and each pilot sheds additional light on potential problems or areas that can be improved. After careful piloting and multiple reviews, the tools are ready, the teams are trained, and the data collection begins.
As a snapshot of this week: the teams have so far visited 49 communities (about one-fifth of the total) and completed 455 household surveys. The vast majority of the households found were indeed the same households as in the baseline, and in about half of them, the same respondent as at baseline was interviewed again. However, the Uwezo test could be conducted in only about half of the households, the main reason being that the selected child was at school during the survey visit. Most school and facility surveys were also conducted; the majority of the schools were the same as at baseline, but interestingly, only about half of the health facilities were. We need to talk to the teams in greater detail to understand this better. These are just initial results from the first couple of weeks of data collection and should not be over-interpreted at this point.
The fieldwork will continue like this for weeks to come; there will invariably be problems to solve and issues to follow up on. As with any survey, the data is thoroughly checked as it comes in, and a portion of the sample will be re-surveyed for quality assurance. We’ll return with another update on the survey when the data is all in and we have a good idea of the overall success of the follow-up survey round.