
INNOVATIONS IN IMPACT MEASUREMENT

Lessons using mobile technology from Acumen’s Lean Data Initiative and Root Capital’s Client-Centric Mobile Measurement.


AUTHORS

Tom Adams is Acumen’s Director of Impact and is based in London, heading Acumen’s global work on Impact.

Rohit Gawande is based in Nairobi and leads the planning and delivery of Acumen’s work on Impact across Africa.

Scott Overdyke supports the design and execution of strategic initiatives at Root Capital, including the mobile technology pilots now underway in Latin America and Africa. He is based in Boston.

The case studies on Labournet and KZ Noir were co-authored respectively with Ben Brockman and Daniel Gastfriend of IDinsight.

ACKNOWLEDGEMENTS

This work has benefited from the generous funding and intellectual support of several organizations. Principal funding for the projects detailed in the report was provided by the ANDE Research Initiative, which in turn was generously funded by the Rockefeller Foundation, the Bernard van Leer Foundation and the Multilateral Investment Fund. Acumen would also like to thank Omidyar Network for its continued support of Lean Data, and Root Capital is similarly grateful to the Ford Foundation for its wider support of the Client-Centric Mobile Measurement work. Both Acumen and Root Capital are especially grateful to Saurabh Lall for his advice and support of our work, as well as to Steve Wright, Rachel Brooks, Grameen Foundation India and various folks at IDinsight for the many ideas they have shared, which have informed our thinking and approaches.


PARTICIPATING ORGANIZATIONS

Acumen is changing the way the world tackles poverty by investing in companies, leaders and ideas. We invest patient capital in businesses whose products and services are enabling the poor to transform their lives. Founded by Jacqueline Novogratz in 2001, Acumen has invested more than $88 million in 82 companies across Africa, Latin America and South Asia. We are also developing a global community of emerging leaders with the knowledge, skills and determination to create a more inclusive world. This year, Acumen was named one of Fast Company’s Top 10 Most Innovative Not-for-Profit Companies. Learn more at www.acumen.org and on Twitter @Acumen.

Root Capital is pioneering finance for high-impact agricultural businesses in Africa, Asia and Latin America. We lend capital, deliver financial training, and strengthen market connections so that businesses aggregating hundreds, and often thousands, of smallholder farmers can grow rural prosperity. Since our founding in 1999, Root Capital has disbursed more than $900 million in loans to 580 businesses and improved incomes for more than 1.2 million farm households. Root Capital was recognized with the 2015 Impact Award for Renewable Resources – Agriculture by the Overseas Private Investment Corporation, the U.S. government’s development finance institution. Learn more at www.rootcapital.org and on Twitter @RootCapital.


CONTENTS

INTRODUCTION
ABOUT THE PROGRAMMES IN THIS PAPER
ACUMEN’S LEAN DATA
ROOT CAPITAL’S MOBILE IMPACT MEASUREMENT
PART 1: LESSONS TO DATE
EFFECTIVENESS OF MOBILE TECH FOR SOCIAL PERFORMANCE MEASUREMENT
GETTING STARTED: SEVEN STEPS
PART 2: CASE STUDIES
LEAN DATA: SOLARNOW
MOBILE IMPACT MEASUREMENT: UNICAFEC
LEAN DATA: LABOURNET
LEAN DATA: KZ NOIR, PILOTING A “LEAN EVALUATION”
LOOKING FORWARD


INTRODUCTION

Impact investing is booming. The sector is attracting greater attention and more capital than ever, with an estimated USD 60 billion under management in 2015.[1]

Yet as impact investing grows, quality data collection on social performance remains the exception rather than the norm. While nearly all impact investors — 95%[2] — say that they measure and report on impact, current practice is, on the whole, limited to output measures of scale: number of people reached, number of jobs created.

While this is disappointing, it is also understandable. The prevailing wisdom within the sector is that collecting data about social performance is burdensome and expensive, and some impact investors and social entrepreneurs would assert that it is a distraction from the ‘core’ work of building a financially sustainable social enterprise.[3]

Practitioners believe this because we’ve allowed ourselves to be convinced, incorrectly, that the tools we inherited from traditional monitoring and evaluation (M&E) methodologies are the only way to gather social performance data. This is no longer the case.

Thanks to the ubiquity of cellphones and dramatic improvements in technology – inexpensive text messaging, efficient tablet-based data collection and technologies like interactive voice response – we are in a position to quickly and inexpensively gather meaningful data directly from end customers of social enterprises. Moreover, we can collect this data in ways that delight and engage customers and provide the information social enterprises need to optimize their businesses. Used with care and creativity, these tools can open up new channels for investors and enterprises to cost-effectively ask questions at the core of social performance measurement, such as the poverty profile of customers and changes in their wellbeing.

Building on the encouraging results of early pilots,[4] Acumen and Root Capital have expanded their adoption of mobile technologies to enhance our capacity to collect, analyze, and report on field-level data. Each organization has independently launched parallel initiatives: Acumen’s Lean Data Initiative and Root Capital’s Client-Centric Mobile Measurement.

Whilst distinct – Root Capital has tended to use tablet technology to improve the efficacy of in-person surveying, whereas Acumen has largely adopted remote surveying using mobile phones – both have applied data collection innovations to right-size our approaches to understanding social and environmental performance for and with our investees. In the process, we have discovered how these data collection tools can also inform business-oriented decisions. Collecting data on social performance opens up a channel to communicate with customers, thus also providing an opportunity to gather consumer feedback, data on customer segmentation, and market intelligence.

This paper focuses on synthesizing the lessons from Acumen’s and Root Capital’s experiences. Our aim is to describe our two complementary approaches to mobile data management, in the hope that these descriptions will both generate feedback from other organizations already engaged in similar efforts and be useful to organizations with similar goals and challenges. And whilst this paper describes the efforts of two impact investors, we believe this work has implications beyond impact investing, including for foundations, governments and NGOs.

1. Global Impact Investing Network’s Investor’s Council, 2015. http://www.thegiin.org/impactinvesting/need-to-know/#s8
2. J.P. Morgan and the Global Impact Investing Network, Spotlight on the Market: The Impact Investor Survey, 2014. http://www.thegiin.org/binarydata/2014MarketSpotlight.PDF
3. Keystone Accountability, “What Investees Think”, 2013, p. 11.
4. See the following paper for discussion of Acumen’s pilot using Echo Mobile: http://www.thegiin.org/binary-data/RESOURCE/download_file/000/000/528-1.pdf



The paper is divided into two sections. The first section discusses our lessons to date, covering both technical (data quality) and operational (ease of implementation) considerations. It also highlights some of the current limitations of implementing such measurement initiatives. The second section provides a range of case studies that bring this work to life. We hope these real-life examples will provide encouragement and inspiration for others to try these approaches.

Our biggest finding in rolling out Lean Data and Client-Centric Mobile Measurement is that these approaches allow us to focus on our original purpose in supporting social enterprises. These enterprises exist to make a meaningful change in the lives of low-income customers as well as suppliers (i.e., smallholder farmers). Lean Data and Client-Centric Mobile Measurement put power into the hands of these customers, giving them voice to share where and how social enterprises are improving their lives. In so doing, we can finally see – in close to real-time and at a fraction of the cost of traditional research studies – the data we need to understand if we are achieving our purposes as agents of change.

A DIFFERENCE BETWEEN IMPACT & SOCIAL PERFORMANCE MEASUREMENT

The term “impact” is a tricky one. It often means different things to different people. This is not helpful. Within the impact evaluation profession, to state that an intervention has “impact” usually requires a high degree of certainty of attribution, based on the existence of a relevant control group against which to judge a counterfactual (i.e. what would have happened anyway without the intervention).

Because of this definition of “impact,” we have found it more helpful to use the term “social performance measurement.” Lean Data and Client-Centric Mobile Measurement collect reported data on social change to understand trends and patterns. This data gives indications of social change, and the data gathered should consider a counterfactual, even if imperfectly measured. However, in most cases these data gathering exercises do not have formal control groups, largely because we have found it impractical in the context of the social enterprise.

The decision to structure our data-gathering in this way reflects a core principle of prioritizing efficiency and the business realities of a fledgling social enterprise, and the belief that while the data collected in this way will have limitations, it is nevertheless much more useful for informing decisions than sporadic or no data at all.

Our principal objective is not to know with certainty that impact can be attributed to a particular action or intervention. Our objective is to collect data with an appropriate degree of rigor that gives voice to our customers, including a more objective window into their experiences of a given product or service, and helps the businesses we invest in use this data to keep an eye on their social metrics and manage toward ever improving levels of social performance.

To avoid confusion in this report, we use the term social performance measurement rather than impact measurement as a more accurate description of the data we collect and use to assess the social change we believe both we and our respective investees make. The only exception is in the case study by Acumen of KZ Noir, where the methodology in question involves a control group. To the best of our knowledge, this is a first attempt at a “Lean Evaluation” in the context of a social enterprise.



ABOUT THE PROGRAMMES IN THIS PAPER

This paper presents results from two initiatives: Acumen’s Lean Data and Root Capital’s Client-Centric Mobile Measurement. Both programmes received generous support from the Aspen Network of Development Entrepreneurs (ANDE) through the Mobile Innovations grant. The discussion and case studies presented in this paper focus predominantly on specific projects funded through this grant. However, the report also benefits from both Acumen’s and Root Capital’s wider experience and expertise in implementing such projects.

TECHNOLOGY TESTED

Specifically, the paper draws from fifteen separate data collection projects across our respective investees (ten from Acumen in Africa and India, and five from Root Capital in Latin America). Through these projects, Acumen and Root Capital have surveyed more than 7,000 customers using four different types of data collection methods across nearly ten different technology or service providers. These include the following technology platforms: Taroworks, Echo Mobile, mSurvey, Laborlink, SAP’s rural sourcing platform, iFormBuilder, Lumira, Enketo Smart Paper and Open Data Kit. Projects ranged from four weeks to six months, depending on the intensity and complexity of the engagement.



ACUMEN’S LEAN DATA

In early 2014, Acumen created the Lean Data initiative. Lean Data is the application of lean experimentation principles to the collection and use of social performance data. The core philosophy behind Lean Data is to build, from the ground up, a data collection mindset and methodology that works for social enterprises. Inspired by “lean” design principles, Lean Data is an approach that involves a shift in mindset away from reporting and compliance and toward gathering data that drives decisions. Lean Data uses low-cost technology to communicate directly with end customers, generating high-quality data both quickly and efficiently.

Applying both a new mindset and methodology, Lean Data aims to turn the value proposition for collecting social performance data on its head. Rather than imposing top-down requests for data, we work collaboratively with social enterprises to determine how Lean Data can generate valuable, decision-centric data that drives both social and business performance. We start by asking our entrepreneurs one simple question, “What does successful social change look like to you?”, and work with them throughout the collection process to ensure both their and our data collection needs are met.

Because our portfolio is heterogeneous by sector and business model, individual theories of change relating to social performance can vary starkly by investee. As a consequence, each implementation of Lean Data involves a different set of questions to answer, metrics to gather, technologies to deploy and methodologies to use. Nonetheless, the three core building blocks of Lean Data remain:

Lean Design
We tailor our measurement and collection approach to the unique context of each company, utilizing existing company-customer touch points where possible.

Lean Surveys
By keeping our surveys focused and tailored to the company’s needs, we gather meaningful information on even challenging social performance questions without requiring much of the customer’s time.

Lean Tools
By using technology, typically leveraging mobile phones, Lean Data enables our companies to have quick, direct communication with customers even in the most remote areas.



ROOT CAPITAL’S CLIENT-CENTRIC MOBILE MEASUREMENT

Root Capital’s mobile measurement, like all of our monitoring and evaluation research, strives to evaluate social performance in ways that create value for researchers and research participants alike.

Too often, data collection for impact evaluations, regardless of the intent, feels extractive to the research participants. Such evaluations reinforce real and perceived imbalances in power and opportunity between the people doing the research and the people being studied. In contrast, like many practitioners of market-based approaches to development, Root Capital has come to see impact evaluations as one among many touchpoints in the customer, employee, or supplier relationship.

For lack of a better term, and because our partners in these studies are in fact our lending clients, the internal term we use for this approach is “client-centric” evaluation. Our intent, however, is not to coin a new term or invent a new impact methodology. It is simply to honor the rights of the agricultural businesses and small-scale farmers that participate in our impact evaluations, and find creative ways to deliver more value to them. By doing so, we hope to significantly increase the value of the research, notably to participants, without proportionately increasing the cost.

Root Capital began conducting social performance studies with clients and their affiliated farmers in 2011, and began using tablets and mobile survey software for field-based data gathering in 2012. Interestingly, a number of clients began asking for Root’s assistance in developing similar data-gathering capabilities to inform their own activities on the ground. We have since piloted a variety of different mobile technology platforms with more than twenty-five clients in Latin America and Africa.

Today, Root Capital uses Information and Communication Technology (ICT) for data gathering and analysis to fulfill three objectives:

1. Capture social and environmental indicators at the enterprise and producer level to better understand the impact of Root Capital and our agricultural business clients on smallholder farmers;
2. Capture producer- and enterprise-level information to better inform the business decisions of Root Capital clients; and
3. Capture and aggregate data relevant to Root Capital for the monitoring and analysis of credit risk and agronomic practices.

This report describes the ‘mobile-enabled’ component of Root Capital’s client-centric approach. While the majority of our experiences using mobile technology in data gathering have been in Africa, the technologies profiled by Root Capital in this report are exclusive to our Latin American operations and focus largely on tablet-based data gathering for detailed processes (e.g., deep-dive impact studies, internal farm inspections, and monitoring of agronomic practices). These examples differ from the Acumen case studies in that they require personal interactions with producers, and have been intentionally selected to demonstrate use of one particular approach applied in a variety of settings.

For more information about Root Capital’s client-centric approach, including general principles, practical tips, and case studies, please see our working paper “A Client-Centric Approach: Impact Evaluation That Creates Value for Participants.”

CONNECTION TO METRICS 3.0

The approaches of Lean Data and Client-Centric Mobile Measurement share much in common with each other. However, we recognize that they are also very much in line with a growing number of progressive movements to change the way measurement is conducted. In particular, our efforts in this paper aim to build on the work of “Metrics 3.0” (http://www.ssireview.org/blog/entry/metrics_3.0_a_new_vision_for_shared_metrics), which intends to advance the conversation among investors and companies from accountability-driven measurement to a performance management approach.


PART 1: LESSONS TO DATE



The explosion of creative data collection techniques that leverage mobile phones has opened new opportunities to communicate with and learn from low-income customers. Indeed, following our respective experiences described in this paper, Acumen and Root Capital feel more certain than ever that these data collection methods will remain a core element of how both organizations gather social performance data.

However, we’ve also learnt that asking questions and collecting data via mobile phones takes considerable thought and attention, reinforcing the importance of collectively building our knowledge base in this area. Compared with traditional in-person surveying, remote, mobile-phone based methods to collect social data are in their infancy. Even applying tablet-based technology, ostensibly a relatively simple upgrade of old-fashioned pen and paper surveys, requires careful integration of new technologies and training of new processes.

Through our projects, both Acumen and Root Capital are learning about the best ways to implement mobile data collection, every bit as much from our failures as from our successes. And despite bumps in the road, both organizations are discovering that with thought, practice and perseverance these technologies are helping to transform our ability to collect and use social performance data.



EFFECTIVENESS OF MOBILE TECH FOR SOCIAL PERFORMANCE MEASUREMENT

A first question when considering the technologies and approaches described in this paper is, “so how well do they work?” Despite some challenges, the headline answer appears to be “encouragingly well”.

If you are currently considering mobile technology and lean design principles for your own data collection efforts, you’re likely reading these cases with two fundamental questions in mind:

Q1. Is the data accurate, representative, and useful?
Q2. What are my options and what do they cost?

In this section we share what we’re learning with respect to both of these questions.
to both of these questions.



Q1. Is the data accurate, representative, and useful?

Garbage in, garbage out. This age-old maxim of research is as relevant to our work to measure social performance as it is to any other data collection and impact modelling exercise. Indeed, the entire exercise of social performance data collection is wasted if the data that is collected is not accurate, representative, and useful for decision-making purposes.

The methods profiled in this report do not allow exact apples-to-apples comparisons – the technologies tested allow for different numbers of questions and types of questions, answered in different levels of detail. However, each of the methods does produce results that are on the whole accurate, inclusive, and useful in the following ways:

Data Accuracy

The question we spend most time and energy considering is that of accuracy. After all, if we cannot gather reliable, quality data, the whole exercise is undermined. Our experience shows that it is possible to collect data with enough accuracy to help social enterprises better understand social performance, as well as make more informed business decisions. We are experimenting with different methods to validate and improve accuracy, and we have summarized our main findings below.

One way to get a sense of the accuracy of mobile surveys is to back-check questions with a small sub-sample (typically 5-10%) using an alternative, more-established survey method. Large variations in responses suggest something may be awry. Based on this approach, our data suggests that, whilst accuracy rates appear in general to be good, the accuracy of responses can vary for multiple and sometimes unexpected reasons (e.g. the complexity of the question being asked, the length of the survey, and even the mood of the respondent at the moment they receive the survey).

We don’t yet know enough about all the reasons for such variability, but it does suggest that significant care must be taken to collect reliable responses, and that repeated testing to find out what works and what does not is required.

However, we have learnt that rather than one tool being universally more or less accurate, the degree of accuracy usually depends more on matching the right question to the right technology. For example, questions asking about relatively static variables, such as family size or occupation, are well-suited to SMS, as respondents typically do not need clarification or further information. On the other hand, questions about spending habits are well-suited to in-person, tablet or call centre interviews, given they can require further probing, explanation or nuance to get a full picture. We are also finding evidence that sensitive or confidential information may be best collected by remote technologies, such as Interactive Voice Response (IVR) or SMS: this gives respondents increased anonymity and may lead to more accurate responses.[5]

Our observations on variability rates are shown in Table 1 below, categorized by technology and question type. To date, our data suggest that rather than one technology being ‘better’ than another, the most important cause of variability between answers is question type. Responses to static questions, asking about generally stable indicators such as household size, land size or education levels, are pretty robust via simple SMS or IVR. By contrast, what we describe as ‘dynamic’ variables – metrics that may have high variability over short time periods (e.g. fuel spending, litres of milk consumed, daily wages) – can be effectively collected by call centre and in person. And whilst it is by no means universally impossible to measure these dynamic variables by SMS or IVR, it requires greater care.

Table 1: Observed variability rates by collection method.

Technology     Static questions (range)    Dynamic variables (range)
SMS            3-5%                        6-100%
IVR            13-23%                      17-44%
Call centre    5-17%                       8-12%

All data points shown are from individual data collection projects at an Acumen portfolio company. A range represents two separate data collection projects falling under the same category.

5. Schober MF, Conrad FG, Antoun C, Ehlen P, Fail S, Hupp AL, et al. (2015) Precision and Disclosure in Text and Voice Interviews on Smartphones. PLoS ONE 10(6): e0128337.
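The back-check comparison described above boils down to a simple computation: for each back-checked respondent, compare the remote answer with the re-surveyed answer and report the share that differ. The sketch below illustrates one way this could be done; the function, tolerance parameter and data are hypothetical, not taken from the studies in this report.

```python
# Sketch: estimating response variability between a mobile survey and an
# in-person back-check of the same respondents. All data are hypothetical;
# the tolerance threshold is illustrative, not from the actual studies.

def variability_rate(primary, backcheck, tolerance=0.0):
    """Share of back-checked respondents whose answers differ.

    primary / backcheck: dicts mapping respondent ID -> numeric answer.
    tolerance: relative difference treated as agreement (0.0 = exact match).
    """
    shared = primary.keys() & backcheck.keys()
    if not shared:
        raise ValueError("no overlapping respondents to compare")
    mismatches = 0
    for rid in shared:
        a, b = primary[rid], backcheck[rid]
        denom = max(abs(a), abs(b), 1e-9)  # avoid dividing by zero
        if abs(a - b) / denom > tolerance:
            mismatches += 1
    return mismatches / len(shared)

# A 'static' question (household size) answered by SMS, back-checked in person.
sms_household = {"r1": 4, "r2": 6, "r3": 3, "r4": 5}
backcheck_household = {"r1": 4, "r2": 6, "r3": 3, "r4": 4}

print(f"{variability_rate(sms_household, backcheck_household):.0%}")  # 25%
```

For a ‘dynamic’ variable such as weekly fuel spending, a non-zero tolerance may be more appropriate, since small differences between survey rounds reflect genuine fluctuation rather than reporting error.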



Moreover, just because responses to survey questions asked remotely vary from those asked in the traditional way, it does not necessarily follow that the remote survey response is less accurate. It is true that in-person surveying allows skilled enumerators to weed out incorrect answers (e.g. it’s hard to respond incorrectly about the material or size of your house if the enumerator is standing in front of it). However, surveying in person also brings its own potential biases, and the apparent anonymity or increased remoteness of a text may elicit a more honest answer in certain instances.

The introduction of new biases based on different survey methods underscores the importance of continuously testing questions to discover which metrics are best suited to a particular collection method. We also believe that the success of asking particularly complex questions remotely may ultimately come down to the way a question is phrased. Over time, we’ve learned how to incorporate changes in question phrasing to yield more accurate results.

GROUP OR <strong>IN</strong>DIVIDUAL ACCURACY<br />

Determining “accuracy” of various mobile data collection<br />

tools is not as straightforward as it seems. In typical<br />

social science surveys “accuracy” refers to how close an<br />

observation comes to the truth. Typically there is always<br />

some amount of error involved with any survey tool<br />

because we’re dealing with human subjects who tend to<br />

misreport or misremember data points about their own<br />

lives – especially when it involves something complicated<br />

like agricultural production, health or education.<br />

This becomes more problematic for mobile data collection<br />

tools because we are testing the accuracy of the survey<br />

in addition to the tool itself. Therefore, we think about the<br />

“truth” as the result of a more robust, previously tested<br />

data collection technique. Accuracy can be considered<br />

along two dimensions:<br />

+ Individual-level<br />

We are confident that, if a respondent reported they<br />

live in district Y, the respondent actually lives there.<br />

+ Group-level<br />

We are confident that, if 50% of our respondents<br />

reported living in district Y, 50% of the total<br />

population lives there.<br />

Some questions require only group-level accuracy,<br />

while others require individual-level accuracy.<br />

For example, a firm looking to estimate the average<br />

satisfaction levels of its customer base needs only worry<br />

about group-level accuracy. The satisfaction of any given<br />

customer is not important: what matters is whether<br />

customers are satisfied or dissatisfied on average as a<br />

customer-base. By contrast, a firm looking to identify<br />

individuals living below the poverty line to qualify them<br />

for additional services requires individual-level accuracy:<br />

improperly classifying a poor individual as above the<br />

poverty line will result in a wrong decision. Acumen and<br />

Root Capital do not see one ‘accuracy type’ as inherently<br />

better than the other. Rather, we posit that the correct<br />

accuracy type to report depends on the core question<br />

a social enterprise is trying to answer.<br />
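
The two accuracy types can be made concrete with a small calculation. A hypothetical sketch (the district answers below are invented for illustration):<br />

```python
# Hypothetical check of a remote survey against an in-person "truth"
# survey for the question "Do you live in district Y?" (answers Y or X).
remote    = ["Y", "Y", "X", "Y", "X", "X", "Y", "X"]
in_person = ["Y", "X", "X", "Y", "Y", "X", "Y", "X"]

# Individual-level accuracy: the share of respondents whose remote
# answer matches their own in-person answer.
individual = sum(r == t for r, t in zip(remote, in_person)) / len(remote)

# Group-level accuracy: how close the aggregate shares are, regardless
# of which individual gave which answer.
group_gap = abs(remote.count("Y") / len(remote)
                - in_person.count("Y") / len(in_person))

print(f"individual-level accuracy: {individual:.0%}")        # 75%
print(f"group-level gap in district-Y share: {group_gap:.0%}")  # 0%
```

Here the remote survey is perfect at the group level (both surveys put half of respondents in district Y) while misclassifying a quarter of individuals – exactly the distinction that matters when choosing which accuracy type to report.<br />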

The authors would like to acknowledge IDinsight for their valuable<br />
contribution on this subject.


Innovations In Impact Measurement<br />

Part 1: Lessons To Date<br />

15<br />

Building a representative sample<br />

Clearly if no one responds to your request for data, you<br />

won’t get any data, and your ambitions to measure social<br />

performance will be scuppered before they’ve started. As<br />

a result, we carefully track response rates to capture a<br />

sufficiently large, robust and representative sample in our<br />

data.<br />

It is relatively straightforward, when conducting in-person<br />

surveys, to manage response rates and to plan surveying to<br />

ensure an unbiased sample. People generally respond well to<br />

being asked politely and professionally to give their opinion.<br />

By contrast, with remote mobile surveys it can be harder<br />
to get the sufficiently large, representative sample<br />
you need for high-quality, unbiased responses (just think<br />
how rarely you respond to the surveys you are sent on<br />
your phone or online). Low response rates are by no means a<br />

uniform experience, and several important factors can boost<br />

the number of people who respond.<br />

Response rates are tied to customers’ knowledge and<br />

perception of the company asking the question, and in some<br />

cases these effects are strong. For example, we’ve seen that<br />

micro-finance institutions (MFIs) that meet with customers<br />

every month to discuss their loans can have response rates<br />

to remote surveys that are regularly above 50 percent and<br />

frequently reach 80 or 90 percent.<br />

Of course, a low response rate in and of itself is not necessarily<br />

a problem. It may simply mean that you need to send your<br />

remote surveys to a larger initial population. Given the<br />

lower costs of remote surveys this is far less of a challenge<br />

than when administering surveys or requesting feedback<br />

in person. Where more care is often needed is if there is a<br />

systematic reason why only some people will respond to<br />

your surveys – response bias. This is common to many data<br />

collection projects, and in our experience, even more so for<br />

remote methods. For example, in an SMS survey, perhaps<br />

those who respond are particularly excited or agitated, and<br />

a large proportion of those who do not respond may simply<br />
lack a phone or be unable to use theirs at the time. In such<br />

instances, response bias becomes a significant factor.<br />

While this doesn’t have to mean your data collection is<br />

useless, it does mean you should treat the findings of any<br />
data collection exercise with more caution (where possible,<br />
factoring in the strength and direction of any bias). Where you<br />

suspect bias you might also consider repeating your survey in<br />

person or with another mobile tool to see if there are major<br />

variations in the data you gather.<br />
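
The arithmetic of “send to a larger initial population” is simple enough to sketch. A hypothetical helper (the response and bad-number rates are illustrative assumptions, not benchmarks):<br />

```python
import math

def required_outreach(target_responses, response_rate, bad_number_rate=0.1):
    """Estimate how many customers to contact to end up with
    target_responses completed surveys, given an expected response
    rate and a share of out-of-date phone numbers."""
    reachable = 1.0 - bad_number_rate
    return math.ceil(target_responses / (response_rate * reachable))

# e.g. 380 completed SMS surveys at an assumed 15% response rate,
# allowing for roughly 1 in 10 dead numbers in the database
print(required_outreach(380, 0.15))  # 2815 customers to contact
```

None of this corrects for response bias, of course – a bigger send only buys sample size, not representativeness.<br />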

Table 2: Representative survey response rates across technology<br />
type and nature of customer relationship<br />
<br />
Technology    | Indirect | Direct<br />
SMS           | 12-15%   | 35-65%<br />
IVR           | 6-9%     | N/A<br />
Phone Centre  | 41-42%   | 62-80%<br />
<br />
All data points shown are from individual data collection projects at<br />
an investee. A range represents two separate data collection projects falling<br />
under the same category.<br />

Not surprisingly, companies with more direct relationships<br />

get higher responses to surveys overall. The conclusions<br />

around technology type are less definitive. We’ve not yet<br />

tested IVR enough to draw any firm conclusions but it is<br />

increasingly clear that both SMS and call centres can drive<br />

high response rates from customers.



TIPS AND CHEATS: RESPONSE RATES<br />

Tips for constructing SMS surveys that encourage higher response rates:<br />
<br />
1. Sensitize customers, or let them know that a survey is coming, a day or so before you send it. A little anticipation can be effective.<br />
2. Double check that the literacy rate of your customer base is high, and that the language you use is most appropriate for SMS. For instance, many Kenyans use English or Kiswahili for SMS, rather than their native or regional language.<br />
3. Start with a compelling introduction text, clearly explaining who the sender is and including a statement of confidentiality.<br />
4. Keep the number of questions to a maximum of seven or eight. We have observed that response rates drop off significantly after seven or eight questions.<br />
5. If you have a longer survey, offer an incentive as compensation (if possible with the local network carrier), or split the survey into two parts and send them over two days.<br />
6. Take time to ensure your questions are crystal clear. As soon as someone is confused they’ll stop responding. Test your questions on a small group first, and if you’re experimenting with ‘harder’ questions, put them at the end after the easier ones.<br />
<br />
Tips for constructing IVR surveys to encourage higher response rates:<br />
<br />
1. Try to sensitize customers or let them know that a survey is coming, and that the company would value their response. LaborLink typically hands out ‘hotline cards’ to their respondents beforehand. These cards carry a phone number that they ask the respondent to call on their own time, thus giving more agency to the respondent.<br />
2. Make an in-person connection. Voto Mobile has found that IVR works best as a supplement to in-person interactions, rather than a substitute.<br />
3. Keep it short. Like SMS, IVR respondents tend to drop steadily with each additional question (see LabourNet case study).<br />
<br />
Tips for constructing call-centre surveys to encourage higher response rates:<br />
<br />
1. Time your survey; we aim for less than seven minutes.<br />
2. Send a text beforehand to warn that the survey is coming, and afterwards to ask about the experience.<br />
3. Create a compelling introduction, clearly stating who the survey is from and why it is important for the customer to respond. We find that communicating that responses will be used to improve their service, or to act on specific feedback, works particularly well.<br />
4. Select a time of day when your customers are highly likely to be home and near their phone. For instance, many smallholder farmers are in the field during the mornings and may not have mobile network coverage.



RESPONSE BIAS<br />

An avenue for further research around remote surveying<br />

is to compare whether the characteristics of those who<br />

do not answer mobile data collection surveys differ drastically<br />
from those who are willing and able to be surveyed<br />

remotely. In the case study on LabourNet below, Acumen<br />

and IDinsight found early evidence that respondents<br />

to the mobile surveys may have differed from those<br />

surveyed in-person.<br />

The Center for Global Development (CGD) has published<br />

preliminary work that shows that those who answer IVR<br />

calls often systematically differ from the population as<br />

a whole, with respondents being more urban, male, and<br />

wealthier than the population at large. 6 Their results<br />

show that with additional analysis, a representative<br />

sample can be constructed from these phone surveys;<br />

however, it can require between 2 and 12 times as many<br />

calls to be made to achieve a representative sample<br />

similar to an in-person survey.<br />
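
One standard correction for this kind of skew is post-stratification: reweighting respondents so the sample’s demographic mix matches known population shares. A minimal sketch with invented urban/rural numbers (25/75 population vs 50/50 respondents):<br />

```python
# Known population mix (e.g. from a census) vs the mix observed among
# phone-survey respondents -- both invented for illustration.
population = {"urban": 0.25, "rural": 0.75}
sample     = {"urban": 0.50, "rural": 0.50}

# Weight each respondent by how under- or over-represented their group is.
weights = {g: population[g] / sample[g] for g in population}

# Apply to a toy set of yes/no responses (1 = yes).
responses = [("urban", 1), ("urban", 0), ("rural", 1), ("rural", 1)]
weighted_yes = (sum(weights[g] * v for g, v in responses)
                / sum(weights[g] for g, _ in responses))

print(weights)       # {'urban': 0.5, 'rural': 1.5}
print(weighted_yes)  # 0.875, vs an unweighted 0.75
```

Rural answers now count for more and urban answers for less, which is exactly why under-represented groups need many more completed calls.<br />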

6. Leo, Ben. (2015) Do Mobile Phone<br />

Surveys Work in Poor Countries?<br />

Working Paper 398: Center for Global<br />

Development.



Q2. What are my options and what<br />

do they cost?<br />

Perhaps the most obvious appeal of mobile-based surveying<br />

is its potential for significant cost savings. Traditional pen<br />

and paper surveys are often prohibitively expensive for<br />

social enterprises, especially those that have dispersed or<br />

remote customers.<br />

Our experience to date suggests that, while costs are<br />
typically cheapest with SMS and most expensive in person<br />
– hardly surprising – costs can vary<br />
significantly based on geographical location. This is driven<br />

by the relative ubiquity of the technology to the market in<br />

question. For example, in India where call centres are well<br />

established, costs for call centres can be as cheap as SMS<br />

based surveys in Kenya (see chart 1).<br />

Chart 1: Average full survey cost by technology type<br />
(based on a 10-question survey delivered to 1,000 respondents)<br />
<br />
In-Person Tablet - Rwanda: $8,770<br />
In-Person Tablet - Peru*: $8,000<br />
Phone Center - Kenya: $6,500<br />
Phone Center - Uganda: $3,480<br />
Phone Center - Kenya: $1,950<br />
Phone Center - India: $1,780<br />
IVR - India: $2,600<br />
SMS - Kenya: $2,360<br />
<br />
*Cost represents the set-up and implementation of Year 1 of a 4-year survey, where annualized costs are about $4,800.
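
Headline costs like those in Chart 1 are only half the story: what often matters is cost per completed response, which depends on response rates. A hypothetical comparison (the response rates below are assumptions for illustration, not measured figures):<br />

```python
def cost_per_complete(total_cost, sent, response_rate):
    """Direct survey cost divided by the expected number of completes."""
    return total_cost / (sent * response_rate)

# Chart 1 costs, paired with assumed (not measured) response rates
print(f"SMS - Kenya: ${cost_per_complete(2360, 1000, 0.40):.2f} each")
print(f"Phone Center - India: ${cost_per_complete(1780, 1000, 0.70):.2f} each")
print(f"IVR - India: ${cost_per_complete(2600, 1000, 0.08):.2f} each")
```

On these assumptions, a cheap-looking IVR survey becomes the most expensive per answer once its low response rate is factored in.<br />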



The analysis above considers the direct out-of-pocket costs<br />
associated with using these technologies, excluding<br />
staff time. In addition, the speed with which such surveys<br />

can be implemented can also mean a significantly lower<br />

opportunity cost with respect to time. Many traditional M&E<br />

projects can take many months and often years to complete.<br />

The average time for data collection activities featured here<br />

was typically counted in weeks, including time for planning<br />

and analysis. Though in-person tablet-based technology is<br />

more expensive than remote methods, it can be a significant<br />

time saver because companies benefit from not having to<br />

hire and train data entry analysts to convert paper forms to<br />

digital. In addition, tablets improve data quality if done well,<br />

which can save future cost and time in re-collecting poor<br />

quality data.<br />

Notwithstanding that some measures of social performance<br />
simply take a long time to manifest (e.g. the impact of improved<br />
nutrition), for companies that want to make decisions<br />
quickly, or that may pivot their business model between baseline<br />
and endline surveys taken years apart, the question of time<br />
to deliver data may be as pressing as cost.<br />

The question of cost is ultimately a question of value: is<br />

the benefit of a particular mobile tool or process worth the<br />

costs in terms of licensing fees, time, training, and added<br />

complexity? Similarly, which tools offer the best ‘bang for<br />

the buck’ and ensure that results are both accurate and<br />

actionable?



GETTING STARTED: SEVEN STEPS<br />

With the considerations of responsiveness, cost and accuracy<br />

in mind, how might you get started with mobile-based data<br />

collection? Our first advice is to start small, start with care,<br />

but do start. Here we lay out a range of practical steps based<br />

on our learning that we hope encourages you to take the<br />

plunge.<br />

A NOTE OF CAUTION:<br />

CONSIDER WHAT AND WHY,<br />

BEFORE GETTING TO HOW.<br />

Before diving into the Seven Steps, we highly<br />

recommend that you first define what you want to<br />

measure and why before defining the how. Ask yourself<br />

what is the fundamental question (or questions) that<br />

you are trying to answer? What is the bare minimum<br />

you’d like to know, and what information is needed to<br />

get there? What will you do with it once you have the<br />

data you seek? A common mistake we’ve seen is to<br />

start collecting data without a clear answer to what and<br />

why from the beginning. If these are clear, the how can<br />

become your main focus.<br />

STEP 1: CHOOSE YOUR TOOL<br />

Given the discussion above regarding the importance<br />

of context as well as the varying reliability of different<br />

questions over different technologies, a natural first question<br />

is “which technology is right for me?” The good news is that<br />

there is no fixed answer that applies across all surveys and<br />

organizations. This means that judgement and experience<br />
are required; however, the following questions might be<br />
instructive in narrowing down the field of opportunities:<br />

Data gathering: Remote vs In-person<br />

Consider the following chart when determining whether you<br />

can primarily engage in remote survey collection. If any of<br />

the needs listed below the “In-person” category hold true,<br />

it’s likely you will need to consider tools and technologies<br />

appropriate for in-person surveys.<br />

1. In-person<br />
+ Large number of questions necessary (>15 questions)<br />
2. Remote<br />



Once you determine whether you need in-person or remote<br />
data collection, there are multiple options available to collect<br />

data from customers. The below flow-chart is based on our<br />

experiments with varying types of remote survey collection<br />

methods.<br />

It is also worth stressing that you don’t have to use a single<br />

tool for any data collection exercise. Indeed we’ve also found<br />

that mixing and matching methods can be highly effective,<br />
especially for complex social performance metrics. For<br />

example, when trying to measure net crop income, a remote<br />

survey close to crop harvest can provide good agricultural<br />

yield and price data, combined with a focus group better<br />

able to record input costs in the context of poor records and<br />

high recall bias (in a focus group the collective memory may<br />
prove more accurate than individual recall).<br />

[Flow chart: choosing a remote survey tool. Start here: Are literacy rates very low? Do you need to ask over 10 questions? Do you need qualitative, detailed responses? Are there sensitive questions? Is your budget less than or more than $5k? Depending on the answers: go with SMS, go with Phone, or go with IVR.]<br />



Think Local<br />

The applicability and even availability of technology for<br />

undertaking mobile data collection will vary depending on<br />

where you work. For example, despite cataloguing over 80 ICT<br />

tools in use by African farmers and agricultural enterprises,<br />

we have seen far fewer tools with resources and interfaces<br />

translated into Spanish and available in Latin American<br />

markets. In Kenya, where mobile penetration is high and<br />

integrated into many parts of life through platforms such as<br />

mPesa, using SMS for multiple purposes is part of everyday<br />

life for most. In neighbouring Ethiopia, mobile penetration<br />

rates are considerably lower, making some of the remote<br />

technologies potentially less attractive. However, whilst<br />

national mobile penetration rates are a good guide to<br />

identifying the sorts of tools you might use, they can be<br />

misleading. For instance, Acumen predicted that remote<br />

Rwandan coffee farmers, many of whom are living in<br />

extreme poverty, would have low access to mobiles. But after<br />

completing an in-person tablet survey, we discovered that<br />

nearly two-thirds of farmers have access to a mobile phone,<br />

leading us to reconsider SMS.<br />

Software Selection<br />
In selecting the right technology, there are any number of<br />
factors you might consider (see some of the selection criteria<br />
mentioned in the following cases for examples). For field-based<br />
data collection, you might also consider the following<br />
elements when selecting your software application:<br />
<br />
+ If you have limited connectivity, find software that<br />
allows you to collect data offline, with enough local<br />
storage to accommodate the number of surveys you’ll do<br />
before having access to the internet for syncing.<br />
+ Find software that allows you to easily customize<br />
your survey yourself (including changing the wording,<br />
structure, and sequence of questions), rather than having<br />
to rely on an external programmer.<br />
+ Make sure that the software you choose runs well on the<br />
tablets you plan to use; while some software programs are<br />
device-agnostic, some only run on Android devices or iPads.<br />
<br />
STEP 2:<br />
GATHER MOBILE NUMBERS<br />
(FOR REMOTE SURVEYING)<br />
<br />
While some companies may have a database of mobile<br />
numbers collected as part of their business model (e.g. this<br />
is common with micro lenders), it’s likely that you’ll have<br />
to gather customer mobile phone numbers prior to starting<br />
your survey. This can seem challenging, but there are several<br />
clever and cost-effective ways to start gathering mobile<br />
contact information. Any face-to-face customer interaction<br />
is an obvious place to start (e.g. a company’s sales force),<br />
but even for B2B companies, including those that design<br />
or manufacture products but don’t sell directly to end<br />
customers, there are lots of good options: placing a number<br />
to register your product on your packaging; holding a radio<br />
campaign to encourage listeners to register their interest<br />
through SMS; even just handing out flyers. 7<br />
<br />
Even where a list of numbers already exists, due to<br />
mobile number change and attrition – often due to customers<br />
taking advantage of deals from other carriers – it is worth<br />
periodically refreshing your database and/or checking that<br />
customers still have the same numbers. We’ve regularly<br />
found that close to 1 in 10 numbers in company databases<br />
are incorrect. One potential solution is to give customers<br />
a “hotline” card with a talk time incentive paid if the user<br />
updates the company with a new phone number. Ensuring an<br />
accurate set of phone numbers is especially important where<br />
panel data or baseline and endline surveys are concerned.<br />
<br />
7. These methods may be prone to higher<br />
response bias, however (see above<br />
section on Response Bias for tips)
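
The database refresh described in Step 2 can be partly automated. A sketch of normalizing and de-duplicating numbers before a send; the +254/07xx rules are an assumption based on Kenyan mobile formats and would need adapting to your market:<br />

```python
import re

def normalize_ke(number):
    """Return a Kenyan mobile number in +254... form, or None if it
    cannot be valid. The format rules are illustrative, not exhaustive."""
    digits = re.sub(r"\D", "", number)   # strip spaces, dashes, plus signs
    if digits.startswith("254") and len(digits) == 12:
        return "+" + digits
    if digits.startswith("07") and len(digits) == 10:  # local 07xx form
        return "+254" + digits[1:]
    return None  # too short, wrong prefix, etc.

raw = ["0712 345 678", "+254 712 345 678", "712345", "0712-345-678"]
cleaned = {n for n in map(normalize_ke, raw) if n}
print(cleaned)  # {'+254712345678'}: duplicates and junk numbers dropped
```

Running a pass like this before each send flags the dead and duplicate entries that otherwise silently depress response rates.<br />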



STEP 3:<br />
DESIGN YOUR SURVEY<br />
<br />
Survey design takes practice, but it is also not as hard as<br />
often thought. Since there are many quality texts that<br />
describe best-in-class survey design techniques, we won’t<br />
attempt to provide any further general guidance here. 8<br />
However, we have discovered some general rules of thumb<br />
relating to surveys implemented through mobiles:<br />
<br />
+ Less is more: we have found over and over again that<br />
asking fewer questions, especially when using SMS and<br />
IVR, is best. Beyond 5-7 questions we see steady fall-off<br />
rates.<br />
+ If you need to ask a lot of questions, rather than doing it<br />
all in one blast, try asking half your questions one day<br />
and then ask if people will opt into a second batch the day<br />
after.<br />
+ Customers respond well to open-ended questions even<br />
by SMS – they feel ‘listened to’ and the quality of the<br />
responses is higher than one might typically expect.<br />
+ Consider which language is most appropriate. This may<br />
not always be the same as the spoken language. For<br />
example, in parts of Kenya we’ve found people like call-centre<br />
questions in Swahili and SMS in English.<br />
+ Include a question that checks for attention, e.g. “reply 4<br />
if you are paying attention” (this is especially important if you’ve<br />
added a financial incentive to answer your survey, which<br />
can prompt some people to get through your survey as<br />
quickly as possible to access the reward).<br />
<br />
STEP 4:<br />
PROTOTYPE IN PERSON<br />
AND ADAPT DESIGN<br />
<br />
As with all surveying, it is essential to test that your<br />
questions are phrased properly and that they are well-understood<br />
by respondents. Concepts and phrases you think<br />
are unambiguous may be interpreted differently by others –<br />
especially over SMS, where the text characters are limited, as<br />
well as tablets or Interactive Voice Response (IVR), where the<br />
phrasing is fixed. We have also found that, because remote<br />
methods such as SMS and IVR remove the opportunity for<br />
survey respondents to ask for clarification to a question,<br />
testing should always be done in person, either as an in-person<br />
focus group or with individual respondents.<br />
<br />
Similarly, if you are using a particular hardware or software<br />
application for the first time (or simply a new survey within<br />
a known instrument), be sure to test uploading and syncing<br />
information in different scenarios using different devices.<br />
The introduction of photos, signatures, video, or even new<br />
surveys can have unforeseen impacts on data storage and<br />
the ability to sync, so testing (and potentially retesting) is<br />
crucial.<br />
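
The “split a long survey” rule of thumb from Step 3 is easy to operationalize. A minimal sketch (the seven-question cap reflects the fall-off we describe above; the question list is invented):<br />

```python
def plan_sms_batches(questions, max_per_day=7):
    """Split a question list into daily batches, so no single day's
    send exceeds the point where response rates drop off."""
    return [questions[i:i + max_per_day]
            for i in range(0, len(questions), max_per_day)]

questions = [f"Q{i}" for i in range(1, 13)]   # a 12-question survey
batches = plan_sms_batches(questions)
print([len(b) for b in batches])  # [7, 5]: day one, plus an opt-in day two
```

The second batch is then only sent to respondents who opt in at the end of day one.<br />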

STEP 5:<br />

PREPARE FOR<br />

IMPLEMENTATION<br />

Even implementing a simple-looking survey remotely is not<br />
as simple as it seems. For example, for the Progress out<br />
of Poverty Index® (PPI), 9 which is one of our favourite<br />

surveys, we’ve found we need to set aside anywhere between<br />

half a day and three days of training for enumerators and<br />

potentially more time for those who have never conducted<br />

a survey. Despite being only ten questions, the PPI remains<br />

nuanced.<br />

8. E.g. Research Methods;<br />

Graziano and Raulin<br />

9. The Progress out of Poverty Index (PPI)<br />

is a 10 question poverty measurement<br />

tool developed by Grameen<br />

Foundation and Mark Schreiner:<br />

http://www.progressoutofpoverty.org/<br />

If the survey is to be performed in-person on mobile devices,<br />

we will typically offer a group training session of 1-3 days in<br />

which enumerators first review all questions in the survey<br />

and very quickly begin using the tablet or handheld device<br />

for the majority of the training. Both Root and Acumen have<br />

found that enumerators have been able to quickly learn to<br />

use both the hardware and mobile survey apps, even when<br />

they have no relevant technological experience.



STEP 6:<br />

SURVEY<br />

Let the fun begin! Having undertaken careful planning and<br />

iteration of your survey design, it’s time for implementation.<br />

If performing in-person data collection, we have found the<br />

following activities helpful:<br />

+ Ensure that all tablets and handhelds are fully charged<br />

or have enough back-up power, and carry paper forms as<br />

backup.<br />

+ Include optional notes sections throughout the survey in<br />
case respondents have additional comments to<br />
include.<br />

Surveyors should bring a notebook to take notes and write<br />
out open-ended questions instead of typing these into the<br />

tablet during the survey. Surveyors usually write faster than<br />

they type on a tablet or smartphone, and with a notebook<br />

the respondent doesn’t feel ignored or disengaged as the<br />

surveyor types.<br />

STEP 7:<br />

BACK-CHECK<br />

We recommend that, if you are new to remote surveying or<br />
are collecting a new metric or survey for<br />
the first time, you prepare to back-check a proportion<br />

of responses. A back-check refers to returning – or ‘going<br />

back’ – to respondents after they’ve been surveyed and<br />

confirming that their initial response is accurate. Depending<br />

on your sample size, we recommend that you repeat<br />
5-10% of your surveys in person to see if there are any<br />

material differences. Of course you shouldn’t expect all<br />

answers to be the same, especially where you’re asking for<br />

subjective views. But at the same time this is a chance to<br />

highlight inconsistent data, and potentially reconsider how<br />

you’ve asked questions. Once a survey is more established<br />

over multiple uses you may consider back-checking less<br />

frequently.<br />
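
The back-check subsample is worth drawing randomly rather than by convenience, so the re-surveyed group is itself representative. A minimal sketch (the 8% share and respondent IDs are illustrative):<br />

```python
import random

def draw_backcheck_sample(respondent_ids, share=0.08, seed=7):
    """Randomly select a share of respondents to re-survey in person.
    A fixed seed keeps the draw reproducible for auditing."""
    k = max(1, round(len(respondent_ids) * share))
    return sorted(random.Random(seed).sample(respondent_ids, k))

ids = list(range(1, 201))            # 200 remote-survey respondents
to_revisit = draw_backcheck_sample(ids)
print(len(to_revisit))  # 16 respondents, i.e. 8% of 200
```

Comparing the back-check answers against the original remote answers then gives a per-question discrepancy rate to act on.<br />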

INTEGRATION INTO<br />

COMPANY PROCESSES<br />

When implementing a new data system it’s easy to<br />

ignore established company systems. Most companies<br />

have the capability to generate and collect data from<br />
existing processes – think of how many times or in<br />

what ways a social enterprise interacts with their<br />

customers. Root Capital’s early mobile pilots in Latin<br />

America were actually a response to client demand.<br />

These early pilots involved taking the paper-based<br />

internal inspection surveys mandated by certification<br />

programs and translating them into mobile surveys<br />

for completion by enumerators on a tablet computer.<br />

For enumerators, the only change was moving from<br />

paper-based forms to tablet-based forms. When<br />

clients were able to successfully digitize the form,<br />

collect information, and perform the analysis, Root<br />

helped support the creation of additional surveys for<br />

social, environmental, or agronomic data collection.<br />

Data points such as these not only help streamline<br />

an operational process – better understanding supply<br />

chains, target underserved demographics, or evaluate<br />

sales staff performance – they can also provide a<br />

window into learning more about customers.<br />

Of course creating a ‘data-driven culture’ doesn’t<br />

happen overnight. Though we have successfully<br />

implemented mobile data collection projects that led<br />

to better informed decision making, we have found<br />

that sustaining these practices takes a shift in the<br />

culture of an enterprise. It is not enough to simply<br />

introduce a new technological fix or mandate better<br />

data collection practices across an organization.<br />

Successful management teams can show the value of<br />

data to the field staff collecting and entering data to<br />
get their buy-in.<br />




PART 2: CASE STUDIES<br />


Innovations In Impact Measurement<br />

Part 2: Case Studies<br />

26<br />

This second section highlights four case<br />

studies, three from Acumen’s Lean Data<br />

and one from Root Capital’s Client-Centric<br />

Mobile Measurement.<br />

Each case study is split into four sections:<br />

a description of the background, detail<br />

on how the data was collected including<br />

technologies used, a snapshot of the actual<br />

data gathered, and lastly a discussion of<br />

how it helped the social enterprises make<br />

specific, data-driven decisions relating to<br />

their business and social performance.<br />

We hope they help bring the insights from<br />

the first section to life.<br />

The examples are drawn from:<br />

SolarNow, Solar Energy, Uganda.<br />

SolarNow provides finance to help rural Ugandans<br />

purchase high-quality solar home systems and<br />

appliances. Acumen helped SolarNow run phone<br />

centre surveys to gather customer feedback,<br />

understand segmentation, and track a range of<br />

social performance indicators.<br />

Unicafec, Agriculture, Peru.<br />

Unicafec is a coffee cooperative in the Peruvian<br />

Andes. Root Capital helped Unicafec adopt a<br />

tablet-based internal inspection system, digitizing<br />

organizational records and creating management<br />

dashboards to guide decision-making and<br />

communication with the cooperative membership.<br />

LabourNet, Vocational Training, India.<br />

LabourNet is a vocational training company based<br />

in Bangalore that provides job-training skills to<br />

informal sector labourers. Acumen implemented<br />

a customer profiling and segmentation survey<br />

comparing the performance of IVR, Call Centre and<br />

SMS. We also aimed to measure social performance<br />
– wage increases and employability – for LabourNet<br />

trainees. This Lean Data project was implemented<br />

in partnership with IDinsight, who co-authored the<br />
case study.<br />

KZ Noir, Agriculture, Rwanda.<br />

KZ Noir is a premium coffee aggregator and<br />

exporter. Using in-person tablet surveys Acumen<br />

helped establish and evaluate a Premium Sharing<br />

Program to track and incentivise higher grade coffee<br />

production. This Lean Data project was implemented<br />

in partnership with IDinsight, who co-authored the<br />
case study.



LEAN DATA: SOLARNOW<br />

Headlines<br />

+ Lean Data can be integrated into existing<br />

company processes and resources.<br />

+ Qualitative data is often as valuable as<br />

quantitative data – especially concerning<br />

customer satisfaction.<br />

+ Social enterprise management teams can<br />

act quickly on analysis provided through<br />

Lean Data, creating a rapid feedback loop.<br />

+ Asking what consumers themselves say<br />

is meaningful can generate surprising<br />

results of high relevance to enterprise<br />

impact and business performance.<br />

Description<br />

Willem Nolens, CEO of Uganda-based energy company<br />

SolarNow, is determined to succeed where others have<br />

failed. He aims to light one of Africa’s most under-electrified<br />

geographies: just 5% of rural Ugandans are connected to the<br />

national grid, which frequently suffers brownouts and blackouts.<br />

Those not connected have to make do with expensive and<br />

often dangerous alternatives such as kerosene.<br />

Achieving these goals will be no mean feat; challenges<br />

abound, including lack of awareness, low population<br />

density, and limited capacity to pay amongst his target<br />

market consumers. Yet Willem saw such challenges as a<br />

business opportunity to apply the know-how gathered from<br />

his microfinance background along with innovations in<br />

solar technology to spread solar home systems across rural<br />

Uganda. To date the company has sold systems to more than 8,500<br />

households.<br />

Unlike already established solar players who concentrate<br />

on solar lanterns and small home systems, SolarNow<br />

sells a larger system that can be upgraded over time. This<br />

allows customers to start with a few lights and mobile<br />

chargers, and progress to larger appliances like fans,<br />

radios, TVs and refrigerators. In order to help customers<br />

move up this ‘energy ladder’, SolarNow also provides<br />

financing at affordable rates over extended periods of<br />

18-24 months. But despite the company’s success, Willem,<br />

whose business strategy builds on word of mouth, craved<br />

more and better-quality data about the company’s customers. This would<br />

allow him to understand who was buying the company’s products<br />

and their experience of solar, so that he could provide more<br />

appropriate loans and services that could both widen and<br />

deepen access to the company’s products.



Detail<br />

Given the distributed nature of SolarNow’s customers,<br />

Acumen decided to use a phone centre to conduct customer<br />

profiling, impact and satisfaction interviews. Though<br />

SolarNow’s field staff has regular interaction with their<br />

customers to collect loan payments, we aimed to separate<br />

this type of data collection from the loan collection staff<br />

to prevent potential bias. Separation allows SolarNow’s<br />

customers to feel more comfortable providing any critical<br />

feedback, which might be less likely to happen if asked by a<br />

staff member, in-person.<br />

To better understand customer profiles, Acumen turned<br />

to the Progress out of Poverty Index (PPI) survey. SolarNow<br />

would be able to use this information to gauge its<br />

penetration into rural markets, where the majority of the population lives<br />

in poverty, and the effectiveness of its financing<br />

in reaching the poorest. To assess customer satisfaction,<br />

Acumen helped the Company create a simple customer<br />

satisfaction survey, soliciting mainly qualitative feedback<br />

about the solar home systems and the company’s customer<br />

service. Lastly we included some short questions on<br />

household energy expenditure patterns.<br />

SolarNow already has an active call center, set up to receive<br />

incoming customer service calls. Acumen trained the call<br />

center staff to administer the surveys to a random sample<br />

of SolarNow customers. Given that the PPI and customer<br />

satisfaction surveys are relatively simple questionnaires, Lean<br />

Data was able to save the time and costs of hiring external<br />

enumerators by using the company’s existing resources and<br />

systems.<br />
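
The sampling step above can be sketched in a few lines. This is a minimal illustration, not SolarNow’s actual procedure; the customer register and sample size below are hypothetical:<br />

```python
import random

def draw_survey_sample(customer_ids, n, seed=42):
    """Draw a simple random sample of customers for call-centre surveying.

    A fixed seed makes the draw reproducible, so the same sample can be
    re-contacted in follow-up survey rounds.
    """
    rng = random.Random(seed)
    return rng.sample(customer_ids, n)

# Hypothetical customer register; SolarNow's actual records are not shown here.
customers = ["CUST-%04d" % i for i in range(1, 1001)]
sample = draw_survey_sample(customers, n=200)
```

Because the sample is drawn randomly rather than taken from, say, the most recent buyers, the resulting satisfaction and poverty figures can be read as representative of the customer base.<br />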

And it wasn’t just SolarNow that learnt through this<br />

experience. By listening to customers we learnt that some<br />

of our preconceptions about the social value that we aimed<br />

to create through our investment needed refinement. Whilst<br />

our initial theory of change for the investment centred on the<br />

health effects of switching away from traditional fuels, when<br />

we asked consumers what was<br />

meaningful to them they emphasized cost savings, but also,<br />

to our surprise, increased security and the brightness of the light.<br />

Since then we’ve been focusing carefully on what it takes<br />

to ask, and hear from customers, with respect to their<br />

own interpretations of “meaningful” impact. Asking about<br />

meaningfulness requires care, but we are increasingly finding<br />

that, asked the right way, people are consistent in how they<br />

report what is meaningful to them, and that these answers<br />

are highly correlated with changes in outcome-based<br />

indicators of wellbeing.<br />

Data<br />

SolarNow collected data from over 200 customers over a<br />

period of two months. The data show that 49% of SolarNow<br />

customers are likely to be living on less than $2.50 per<br />

person per day, indicating a strong reach into even the<br />

poorest rural communities.<br />
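
For context on how a figure like the 49% is typically produced: the PPI maps each household’s ten-question score to a poverty likelihood via a country-specific lookup table, and the average likelihood across respondents estimates the share below the line. The sketch below uses a made-up, simplified lookup table purely for illustration; real PPI tables differ by country and poverty line:<br />

```python
# Hypothetical, simplified mapping from PPI score bands to the likelihood of
# living below the $2.50/day line. Real PPI lookup tables are country-specific.
LIKELIHOOD_BY_BAND = {
    (0, 19): 0.90,
    (20, 39): 0.70,
    (40, 59): 0.45,
    (60, 79): 0.20,
    (80, 100): 0.05,
}

def poverty_likelihood(score):
    """Look up the poverty likelihood for a single household's PPI score."""
    for (lo, hi), p in LIKELIHOOD_BY_BAND.items():
        if lo <= score <= hi:
            return p
    raise ValueError("score out of range: %r" % score)

def estimated_poverty_rate(scores):
    """Average likelihood across respondents = estimated share below the line."""
    return sum(poverty_likelihood(s) for s in scores) / len(scores)

scores = [15, 34, 47, 52, 68, 80]  # illustrative survey scores
rate = estimated_poverty_rate(scores)
```
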

The survey also captured basic information on how<br />

customers were using the SolarNow system. Almost<br />

all customers reported an increase in hours of available<br />

lighting, with the average customer experiencing an<br />

increase of 2 hours of light per day. The data also show that<br />

customers are replacing other, dirtier fuels with SolarNow’s<br />

clean energy – moving from 6 hours of light from non-SolarNow<br />

sources to only 1 hour per day.<br />


SolarNow learned that customers generally have high regard<br />

for SolarNow’s products, but was also surprised to hear that some<br />

customers had experienced problems with faulty parts and<br />

installation. The company also learned that many customers<br />

would pay extra for more and varied types of appliances,<br />

providing feedback and confirmation of the management<br />

team’s strategy to further expand its product line.<br />

Decisions<br />

[Chart: Hours of light per night by source, now vs. before SolarNow. Now: 7.3 hours from SolarNow plus 1.0 hours from non-SolarNow sources; before: 6.1 hours, all from non-SolarNow sources.]<br />

Following its first experience of Lean Data the company<br />

has transformed its approach to data collection on customer<br />

profiling and feedback. The Company now repeats the<br />

customer service surveys designed by Acumen every<br />

quarter, and intends to track progress over time across<br />

their distribution network. Using Lean Data, SolarNow will<br />

eventually be able to collect more detailed and targeted<br />

information that allows it to better serve and understand<br />

its customers’ needs, track its effectiveness at serving the<br />

poorest Ugandans, and gather ever increasing insights into<br />

the value customers derive from the company’s product.<br />




CLIENT-CENTRIC MOBILE IMPACT<br />

MEASUREMENT: UNICAFEC<br />

Headlines<br />

+ Small, agricultural businesses (including<br />

cooperatives) are increasingly seeking out<br />

technology tools and processes that allow<br />

for improved data gathering, analysis, and<br />

visualization to inform decision-making.<br />

+ The integration of a new technology (e.g.,<br />

tablets and a mobile survey platform) can<br />

actually reduce the complexity of data<br />

gathering and analysis through automation<br />

and standardization.<br />

+ Simple charts and visual depictions of data<br />

trends are useful in communicating the<br />

value add of data analysis to cooperative<br />

leadership and the broader membership.<br />

Description<br />

Late in 2014, Root Capital was conducting household surveys<br />

with members of Unicafec, a coffee cooperative in the<br />

Peruvian Andes, when the cooperative general manager,<br />

Alfredo Alarcon, made a simple request. Noting the tablets<br />

and digital surveys used by Root Capital’s research team,<br />

the cooperative manager was curious to know if Root<br />

Capital could teach Unicafec’s own staff how to use similar<br />

technology when performing annual farm inspection audits<br />

– a mandatory exercise for all organic certified cooperatives<br />

in the region. 10<br />

Despite a significant annual investment of time and energy<br />

on the part of the cooperative, Root Capital has found<br />

that these farm inspections are typically perceived as an<br />

onerous external requirement rather than an opportunity for<br />

continuous improvement or to generate actionable market intelligence.<br />

+ Costs of tablet-based data gathering are<br />

frontloaded by the acquisition of hardware<br />

and the initial time spent training, but low<br />

operating costs going forward make this an<br />

attractive investment if the program<br />

is continued over multiple years.<br />

10. Smallholder cooperatives<br />

holding organic certifications<br />

are typically required to<br />

perform annual, farm-level<br />

inspections of associated<br />

producers to track compliance<br />

with practices.<br />

The surveys used by auditors<br />

take quite a bit of time and effort,<br />

ranging from 30 to 80 questions and<br />

covering such topics as agronomic<br />

practices, farm income, and<br />

general demographic data.



For example, a typical organic inspection survey may<br />

contain as many as 75 questions, requiring an in-person<br />

visit by a trained enumerator to each farm, where producers<br />

are asked about production, agronomic practices, social and<br />

economic concerns, and finally forecasts for the coming<br />

year. While all of that information could be extremely<br />

valuable from a business planning perspective, very little of<br />

it makes it into the hands of a cooperative’s leadership in an<br />

aggregated form, much less an actionable set of analyses.<br />

Alfredo Alarcon saw Root Capital’s tablet-based<br />

questionnaire as an opportunity to transform an otherwise<br />

cumbersome inspection process into a powerful engine for<br />

insight into the coop’s farmers’ practices and perspectives.<br />

Better understanding of their farm-level conditions might<br />

help him to improve his targeting of technical assistance<br />

and internal credit (i.e., loans the cooperative makes to its<br />

members for farm inputs). At the very least it would enable<br />

him to effectively capture data in aggregate form to serve<br />

as a baseline to measure changes in future years. In January<br />

2015, Root Capital partnered with Unicafec and two other<br />

Peruvian coffee cooperatives – Sol y Café and Chirinos – to<br />

put Alfredo’s theory to the test.<br />

Detail<br />

For the past four years, Root Capital’s impact teams have<br />

used tablets and digital survey platforms to administer<br />

household-level surveys when gathering information about<br />

farmers’ socioeconomic situation, production practices,<br />

and the perceived performance of Root Capital’s client<br />

businesses. As such, the organization was already familiar<br />

with a number of software options for conducting surveys<br />

and thus selected a known vendor for the internal inspection<br />

pilots based on the following criteria:<br />

1. Quality of survey platform and demonstrated track<br />

record of success for the provider<br />

2. Applicability to demonstrated need of cooperatives<br />

3. Affordability (



Data<br />

As of the writing of this publication, the final program<br />

evaluation is still in process and the cost / benefit analysis<br />

still underway. However, results and observations to date<br />

include the following:<br />

+ Improved data quality<br />

The margin of error for all participating cooperatives was<br />

less than 1% (even before data cleaning), versus up to 30%<br />

for those cooperatives using the previous, paper-based<br />

inspection process.<br />

+ Time saving<br />

In less than four hours, each group was able to efficiently<br />

transfer and spot-check data to a readily accessible<br />

Excel table. Previously, the process of transferring select<br />

information from paper to computer required upwards<br />

of two months for two inspectors, with high variation in<br />

the quality and quantity of information uploaded. The<br />

mobile platform allowed for a saving of approximately<br />

thirty person-days in data inputting, a time-saving that<br />

helps allay the upfront costs of hardware acquisition and<br />

software licensing.<br />

+ Data usefulness<br />

All participating cooperatives were able to record,<br />

analyse, and visualize producer-level and enterprise-level<br />

information – very little of which was previously captured<br />

(including some advanced features like geo-mapping). The<br />

visualization component involved simple graphs produced<br />

via SAP’s free visualization software, Lumira. This step<br />

was particularly effective at demonstrating important<br />

trends in the data.<br />

+ Operational independence<br />

Each group was able to<br />

digitize a new survey (monitoring agronomic practices at<br />

the producer level) in less than two hours. After the final<br />

training workshop, we expect participating cooperatives to<br />

lead in the design and execution of all further inspections<br />

with only minimal offsite support from Root Capital.<br />

+ Scalability of infrastructure<br />

The digital survey platform can be utilized for various<br />

types of inspection and data gathering. Administrators<br />

can also leverage the internal inspections process to add<br />

other operational and business-oriented questions to the<br />

farm level surveys.<br />

+ Producer acceptance<br />

Farmers across all three cooperatives were generally<br />

curious about the tablets and proud of the professionalism<br />

of the process.<br />

+ Costs<br />

The primary direct costs associated with each cooperative<br />

intervention, beyond cooperatives’ staff time, include (1)<br />

tablets (~$250 each), (2) software licensing fee (~$1,000/<br />

year), (3) consultant fees for inspector training (~5-10 days<br />

per cooperative). For a 400-member cooperative, the average direct<br />

costs equate to approximately $5,600 in the first year<br />

(excluding enumerator time, as that was already incurred<br />

in the previous process) and approximately $1,300 for<br />

software licensing fees every year thereafter.<br />
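
The cost figures above imply a simple amortization: the frontloaded first year is offset as the programme continues. The sketch below just re-applies the report’s numbers ($5,600 in year one, ~$1,300/year thereafter, 400 members) to show how per-member cost falls over a multi-year horizon:<br />

```python
def total_direct_cost(years, first_year=5600, recurring=1300):
    """Cumulative direct cost (USD) over the programme horizon."""
    if years < 1:
        raise ValueError("need at least one year")
    return first_year + (years - 1) * recurring

def cost_per_member_per_year(years, members=400):
    """Average annual cost per cooperative member over the horizon."""
    return total_direct_cost(years) / (members * years)

year_one = cost_per_member_per_year(1)   # 5600 / 400 = 14.0 USD per member
three_yr = cost_per_member_per_year(3)   # 8200 / 1200, roughly 6.8 USD per member per year
```
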

Decisions<br />

Now that all three cooperatives are able to digitize surveys<br />

for inspections and monitoring, clean and analyse the data,<br />

and create simple charts and graphs to identify trends,<br />

Root Capital’s goal is to continue to build capacity among<br />

the cooperative leadership to use the data to (1) inform<br />

internal decision-making, and (2) communicate back to the<br />

cooperative membership. Beyond the time and cost savings<br />

identified above, these are perhaps the primary potential<br />

benefits of the digital surveys, and they remain as yet not fully<br />

captured by the participating cooperatives. Cooperative<br />

managers have noted the following ways they would like to<br />

use this data going forward:<br />

+ Create producer-level projections and segment coffee<br />

collections by origin and quality.<br />

+ Assess weaknesses in production related to quality or<br />

quantity of coffee at the producer level and better target<br />

technical assistance by need.<br />

+ Assess social and environmental weaknesses to prioritize<br />

impact-oriented investments by the cooperative (e.g., how<br />

to allocate the Fairtrade premiums).<br />

+ Identify financing needs of producers to inform the<br />

provision of internal credit by the cooperative.



Noting the success of these pilots in Peru and the strong<br />

demand from other clients in the region, Root Capital is<br />

now planning to scale up from five Latin American pilots to<br />

twenty client engagements in the coming year. Additionally,<br />

we are leveraging this same technology suite for three new<br />

purposes:<br />

1. Root Capital is currently using this same technology to<br />

create modular surveys for the dynamic monitoring of<br />

agronomic practices at the farm level – particularly for<br />

farmers engaged in full-scale coffee farm renovation and<br />

rehabilitation. This should enable greater transparency<br />

into the agronomic processes of producers and help these<br />

businesses to quickly identify problems as they arise. As<br />

producers hit key milestones, additional financing from<br />

Root Capital is then unlocked.<br />

2. Root Capital is exploring new ways to use<br />

the internal inspection data to inform our own credit<br />

underwriting, risk monitoring, and ongoing social and<br />

environmental performance measurement. We can<br />

even envision a future in which these small agricultural<br />

businesses are able to more effectively monitor and<br />

report on key indicators of interest to some of their larger<br />

commercial buyers.<br />

3. We are currently exploring how our clients can<br />

use mobile technology to create attractive employment<br />

opportunities for youth.<br />

DATA VISUALIZATION<br />

The “ah-ha” moment for many of the cooperative<br />

managers and inspectors was often linked to the<br />

simple graphs and charts that helped data-sets come<br />

alive. Microsoft Excel remains the primary analytical<br />

tool used by enterprises within Root Capital’s portfolio,<br />

and so it was important that any new tool for analysis<br />

or visualization complement that system and use<br />

similar keystrokes and functionality. For these early<br />

pilots, Root Capital used SAP’s Lumira software (free,<br />

online version) to easily and effectively transform<br />

Excel data tables into striking graphics with no<br />

additional technical knowledge required.<br />

These graphs have now become the primary vehicle<br />

by which the inspection data is analysed by<br />

cooperative management as well as the cooperative<br />

membership during the course of each group’s general<br />

assembly. Each organization can now see a clear<br />

visualization of production volumes and quality by<br />

member according to region, elevation, gender of the<br />

member, plant type, farm size, agronomic practices<br />

used and more. They can begin prioritizing the stated<br />

social and environmental needs of the cooperative<br />

membership and communicate back to each group<br />

the actual findings of the surveys as a basis for the<br />

plan forward.



LEAN DATA: LABOURNET<br />

(CO-AUTHORED BY IDINSIGHT)<br />

Headlines<br />

+ Collecting customer profile data through<br />

IVR technology can influence a social<br />

enterprise’s customer support services.<br />

+ We were not yet able to track quality data<br />

on complex social performance metrics<br />

such as income or wages through IVR.<br />

Description<br />

Sunitha speaks with as many as 50 people a day from the<br />

middle of a call center in Bangalore, India’s technology hub.<br />

However, she’s not answering customer service complaints<br />

for Microsoft or Flipkart, India’s upstart rival to Amazon.<br />

The customers at the other end of Sunitha’s phone calls are<br />

a collection of migrant construction workers, prospective<br />

retail sales clerks, and freshly trained beauticians. Sunitha<br />

works for LabourNet, a rapidly expanding vocational training<br />

company with a mission of creating sustainable livelihoods<br />

for India’s vast pool of informal workers.<br />

The company would like to use the data Sunitha collects to<br />

understand and improve its business and social performance<br />

based on trainee feedback and, eventually, to market to<br />

new clients by showing its track record. In line with these<br />

goals, Sunitha calls back trainees three and six months post<br />

training and asks questions about their current employment<br />

status, wages, and satisfaction levels with LabourNet’s<br />

training. Sunitha and her colleagues are expected to try<br />

calling back every one of the people who have been trained<br />

by LabourNet – now totalling over 100,000 people.<br />

LabourNet had set up the calls to collect better data on its<br />

trainees, but the system turned out to be inefficient due<br />

to over-surveying, and the data collected did not seem<br />

believable – trainees regularly reported 5x income increases<br />

post-training, and near-100% satisfaction rates. The company<br />

talked to Acumen and IDinsight about wanting better data<br />

on outcomes. LabourNet’s senior management was especially<br />

interested in understanding changes in real wages and<br />

employability post-training, which would help them gauge<br />

their impact and identify areas for strategic improvement<br />

going forward.



Detail<br />

Collecting data from LabourNet customers represented a<br />

significant challenge. LabourNet interacts most with its<br />

trainees during courses. Following that the trainees, who<br />

are typically either migrant construction workers or working<br />

elsewhere in the informal economy, move on to different job<br />

sites or new cities.<br />

The team tested two possible solutions to these challenges:<br />

IVR calls using LaborLink, a product of the California-based<br />

Good World Solutions, alongside revamping the company’s<br />

existing call-centre based impact surveys. Specifically we<br />

edited caller scripts to reduce bias and ‘leading questions’,<br />

created uniform translated survey scripts for surveyors,<br />

shifted to Enketo Smart Paper for data entry rather than<br />

Excel, and instituted a random sampling approach to reduce<br />

the number of calls needed by 80%. After training, the new<br />

phone-based survey was administered by just two LabourNet<br />

staff. To gauge the pros and cons of these two approaches,<br />

the team followed up in person with 200 respondents (100<br />

from each method), covering greater depth and allowing<br />

comparison of survey performance.<br />

Data<br />

The surveys asked questions across several categories<br />

including wages and employment status before and after<br />

LabourNet training; poverty levels using Grameen’s Progress<br />

out of Poverty Index as well as other demographic questions;<br />

and lastly general customer feedback to determine what<br />

additional services may be desirable.<br />

Of the 1,817 respondents contacted by IVR, an encouraging<br />

fifth (340) picked up after three attempts and<br />

answered the first question. However, only 32% of those<br />

who picked up completed the full survey, leading to a 6%<br />

completion rate overall. The chart below shows the drop-off<br />

rate per question asked. By contrast, phone-based surveys<br />

showed higher response rates (45%) and considerably higher<br />

completion rates (91% of answered calls) despite their<br />

longer duration (15 minutes on average). Across both IVR<br />

and phone-based surveys, mobile populations (migrant<br />

construction workers) responded to surveys at much lower<br />

rates than trainees in other sectors (retail, beauty and<br />

wellness, etc).<br />
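
The rates above follow directly from the counts given in the text; a quick arithmetic check (the 32% and 91% completion-of-answered figures are taken from the prose):<br />

```python
# IVR: 1,817 trainees contacted; 340 picked up and answered the first
# question; 32% of those who picked up completed the full survey.
ivr_contacted = 1817
ivr_answered = 340
ivr_answer_rate = ivr_answered / ivr_contacted      # ~0.19
ivr_completion_rate = 0.32 * ivr_answer_rate        # ~0.06 overall

# Phone: 45% of the 788 trainees called answered; 91% of answered calls
# were completed in full.
phone_answer_rate = 0.45
phone_completion_rate = 0.91 * phone_answer_rate    # ~0.41 overall
```
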

[Chart: Survey question number (1–10) vs. question response rate, calculated for the 19% of respondents who started the survey.]<br />

Response and Completion Rates by Technology<br />

IVR (N=1,817): 19% answered the first question; 6% completed the entire survey.<br />

Phone Calls (N=788): 45% answered the first question; 41% completed the entire survey.<br />



Assessing whether people responded differently over the phone<br />

and IVR versus in person, we found that data quality<br />

variability increased as questions became more complex. Yes<br />

or No questions showed the highest agreement with<br />

in-person surveys, followed by multiple choice questions.<br />
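
Percent agreement here is simply the share of paired responses that match exactly, after dropping pairs where the remote answer was “Don’t Know” (as the chart footnote describes). A minimal sketch with hypothetical paired answers:<br />

```python
def percent_agreement(remote, in_person):
    """Share of respondents whose remote answer matched their in-person answer.

    Pairs where the remote answer was "Don't Know" are dropped, mirroring the
    treatment described in the chart footnote.
    """
    pairs = [(r, p) for r, p in zip(remote, in_person) if r != "Don't Know"]
    if not pairs:
        raise ValueError("no usable pairs")
    matches = sum(1 for r, p in pairs if r == p)
    return matches / len(pairs)

# Hypothetical Yes/No responses for five respondents.
remote    = ["Yes", "No", "Yes", "Don't Know", "No"]
in_person = ["Yes", "No", "No", "Yes", "No"]
agreement = percent_agreement(remote, in_person)  # 3 of 4 usable pairs match
```
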

% Agreement Between Remote and In-Person Data Collection by Technology*<br />

[Chart: agreement with in-person answers, for IVR and phone surveys, on four questions: “Are you currently working?” (Y/N); “What type of ration card do you have?” (multiple choice, 4 options); “How many income earners are in your house?” (continuous); “How many non-income earners are in your house?” (continuous).]<br />

Decisions<br />

The pilot highlighted both the opportunities and some of<br />

the challenges with respect to data collection. The results<br />

provided an opportunity for reflection on the company’s<br />

data requirements. LabourNet is now working with a<br />

Bangalore-based consultant to map their data needs against<br />

the company’s overall strategy. The number, frequency,<br />

and content of Sunitha’s follow-up phone calls are also<br />

under consideration as part of this data mapping project,<br />

which should save the company much of her team’s<br />

time. However, both the Lean Data team and the company<br />

discovered just how challenging it can be to measure<br />

informal sector incomes and employability, likely because of<br />

the irregularity of the employment and earnings patterns.<br />

This highlights an example of where more survey design and<br />

prototyping is required before we can confirm that we can<br />

(or cannot) collect this kind of dynamic data remotely.<br />

Data omitted for respondents who indicated “Don’t Know” on the remote technology.<br />

*100% agreement indicates no variation between what respondents answered in person versus over a remote method.<br />

Findings in this case study support the suggestions in Part 1<br />

that it is important to keep remote surveys simple and short,<br />

clearly define ambiguous terms, and pilot the questions in<br />

person first. In this instance both IVR and call-centre calls<br />

were effectively cold calls; depending on the company<br />

relationship, we now often precede phone or IVR<br />

calls with an SMS to warn respondents that the survey is coming.



LEAN DATA: KZ NOIR, TESTING A “LEAN<br />

EVALUATION” (CO-AUTHORED BY IDINSIGHT)<br />

Headlines<br />

+ Social enterprises may be able to adapt<br />

traditional impact evaluation methods<br />

to make business decisions, while<br />

maintaining a Lean approach.<br />

+ Training operations staff to conduct in-person<br />

surveying using mobile devices<br />

can be cost-effective compared to hiring<br />

an external data collection firm. Field<br />

staff picked up tablet technology very<br />

quickly.<br />

+ The act of data collection itself has the<br />

potential to affect customer relationships<br />

in positive ways (e.g. by creating<br />

opportunities for staff and customers<br />

to interact) and negative ways (e.g. by<br />

excluding some customers from data<br />

collection activities and giving the<br />

impression of selective treatment).<br />

Description<br />

Celestin Twagirumuhoza ascends the side of a hill,<br />

stepping over bushes and under branches to find a small hut<br />

atop a hillside farm. Introducing himself as the manager<br />

of Buliza, a nearby coffee washing station, he asks the<br />

farmer if she would be willing to participate in a short<br />

survey. After explaining further and obtaining the farmer’s<br />

consent, he takes out a Samsung tablet, opens an electronic<br />

questionnaire, and proceeds to ask about her farming<br />

practices and her livelihood.<br />

Celestin works for KZ Noir, 11 which launched in 2011 when<br />

its parent company, Kaizen Venture Partners, purchased<br />

and integrated three existing Rwandan coffee cooperatives.<br />

The company produces high-grade coffee, providing<br />

smallholder farmers with higher earnings in the process.<br />

Yet it faces a challenging market environment: of more than<br />

180 coffee washing stations in Rwanda selling specialty<br />

coffee, most operate below 50% capacity, and almost 40%<br />

are not profitable. High costs, competition, quality control<br />

challenges, and under-resourced supplier farmers all make<br />

business challenging for premium-grade coffee brands.<br />

11. KZ Noir is an Investee of both<br />

Acumen and Root Capital.



To build its competitive advantage and increase its social<br />

performance, KZ Noir launched a premium-sharing program<br />

for the 2015 coffee season. The program trains supplier<br />

farmers to cultivate, harvest and process premium grade<br />

coffee with the goal of passing back a portion of the price<br />

premium obtained on the export market to the farmers.<br />

Harvesting high quality coffee requires farmers to carefully<br />

pick the ripest coffee cherries and care closely for trees with<br />

weeding, pruning, pest control and fertilization.<br />

KZ Noir hoped that the incentive of an end-of-season<br />

price premium would drive higher quantity and quality of<br />

production. To the company’s knowledge, this is the only<br />

program in Rwanda seeking to directly track the quality of<br />

smallholder farmers’ groups’ coffee yields for the purpose of<br />

sharing premiums. 12<br />

Detail<br />

As KZ Noir built this program, it sought to set up an<br />

improved data system that could track the volume and<br />

quality of coffee sold by each farmer group, gather more<br />

information on its supplier farmers to learn how to better<br />

support their growing practices, and also importantly<br />

make an informed decision on the success (or failure) of the<br />

premium sharing program from both a financial and social<br />

perspective.<br />

Given the complexity of the questions at hand, Acumen<br />

partnered with IDinsight to design a Lean Evaluation: a<br />

formal evaluation with a control group, designed to directly<br />

inform a decision while minimizing cost, time and any<br />

imposition on the company. Additionally, the evaluation<br />

needed to be proportionate, as it would not make sense to<br />

spend more on an evaluation than the potential profits or<br />

social impact generated by the premium sharing program.<br />

Based on these considerations, KZ Noir opted for a statistical<br />

matching approach using a new farmer registration survey<br />

at baseline and its own operational data at endline.<br />
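In outline, such a matching estimator pairs each participating farmer with the most similar non-participant on baseline characteristics from the registration survey, then compares endline outcomes across the matched pairs. A minimal nearest-neighbour sketch with made-up field names and records (a real analysis would standardize covariates first and typically use more robust matching, such as propensity scores):

```python
# Minimal nearest-neighbour matching sketch. Field names and records are
# illustrative; in practice covariates should be standardized before
# computing distances so no single variable dominates.

def nearest_neighbor_match(treated, controls, covariates):
    """Pair each treated farmer with the closest control on baseline covariates."""
    def dist(a, b):
        return sum((a[c] - b[c]) ** 2 for c in covariates) ** 0.5
    return {t["id"]: min(controls, key=lambda c: dist(t, c))["id"] for t in treated}

def matched_effect(treated, controls, pairs, outcome):
    """Average endline difference between treated farmers and their matches."""
    by_id = {c["id"]: c for c in controls}
    diffs = [t[outcome] - by_id[pairs[t["id"]]][outcome] for t in treated]
    return sum(diffs) / len(diffs)

# Baseline covariates plus an endline outcome (kg of coffee sold):
treated = [{"id": "t1", "hh_size": 5, "trees": 300, "kg_sold": 220}]
controls = [{"id": "c1", "hh_size": 5, "trees": 290, "kg_sold": 200},
            {"id": "c2", "hh_size": 2, "trees": 80, "kg_sold": 90}]

pairs = nearest_neighbor_match(treated, controls, ["hh_size", "trees"])
effect = matched_effect(treated, controls, pairs, "kg_sold")
```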

To support the premium sharing program and the proposed<br />

evaluation, KZ Noir needed a system capable of tracking the<br />

volume of coffee sold by individual farmers, the quality of<br />

coffee sold by farmers’ groups throughout the season, 13 and<br />

the household characteristics and farming needs, such as<br />

access to inputs, of its supplier farmers.<br />

To meet these goals, the team launched a farmer registration<br />

survey to collect data on farmers eligible to join the<br />

premium sharing program and a control group. It asked<br />

a series of short questions on household size, assets and<br />

agronomic practices, creating a profile for each farmer that<br />

could later be matched to coffee sales data and analysed<br />

by KZ Noir leadership in an interactive management<br />

information system. The team adopted EchoMobile, which<br />

allows both in-person tablet surveying and SMS-based<br />

data collection. This allowed KZ Noir to link its registration<br />

data (by tablet) to additional data points (by SMS) throughout<br />

the season.<br />
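The linkage itself is conceptually simple: the tablet registration record and later SMS responses share a key, so season-long data points can be appended to each farmer's baseline profile. A sketch assuming the farmer's phone number is that key (the report does not specify EchoMobile's internal identifiers, and all records below are invented):

```python
# Sketch of linking tablet-collected registration records to SMS responses.
# Assumes phone number is the shared key; all example records are invented.

def link_sms_to_registrations(registrations, sms_responses, key="phone"):
    """Attach SMS data points to the matching registration record."""
    by_key = {r[key]: dict(r, sms=[]) for r in registrations}
    unmatched = []                      # responses with no registered farmer
    for resp in sms_responses:
        record = by_key.get(resp[key])
        if record:
            record["sms"].append(resp)
        else:
            unmatched.append(resp)
    return list(by_key.values()), unmatched

registrations = [{"phone": "+250700000001", "name": "Farmer A", "hh_size": 5}]
sms = [{"phone": "+250700000001", "question": "fertilizer_used", "answer": "yes"},
       {"phone": "+250700000099", "question": "fertilizer_used", "answer": "no"}]

linked, unmatched = link_sms_to_registrations(registrations, sms)
```

Tracking unmatched responses matters operationally: shared or re-registered SIM cards are a common source of broken links in phone-keyed datasets.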

With some initial training from Acumen, EchoMobile, and<br />

IDinsight on data collection practices (avoiding leading<br />

questions and piloting surveys with an out-of-sample group),<br />

KZ Noir utilised its own staff to conduct the registration<br />

survey using mobile tablets during the off-season,<br />

minimizing labor costs. IDinsight and Acumen then spent a<br />

week monitoring data collection in the field, finding that KZ<br />

Noir staff quickly learned to use the mobile tablets. Moreover,<br />

although company staff, with limited experience, struggled<br />

to adopt all data management best practices, such as<br />

conducting random back-checks from day one of the survey<br />

to catch mistakes early, at the time of writing (prior to the<br />

endline survey) we are confident that the savings (against<br />

hiring external enumerators) will more than offset any<br />

issues of data quality.<br />

12. While there are other organizations<br />

that share premiums with supplier<br />

farmers, we do not know of any<br />

others that trace the quality of coffee<br />

supplied by specific farmers’ groups<br />

and calibrate the bonus shared with<br />

the group to their quality level.<br />

13. Ideally in this situation the company<br />

would want to track individual<br />

farmers, rather than farmer groups.



EVALUATION OPTIONS FOR THE PREMIUM SHARING PROGRAM<br />

Evaluation designs<br />

Randomized Trial<br />
+ Description: Randomly determine which farmers are invited to join the program and which serve in a control group. Compare the groups at end of season.<br />
+ Value of information provided: High. Randomization eliminates potential sources of bias in measuring impact.<br />
+ Cost: Medium. Requires interviewing a “treatment” and comparison group at baseline.<br />
+ Operational burden: High. Requires denying access to eligible farmers; risks damaging supplier relationships.<br />

Statistical Matching (selected method)<br />
+ Description: Identify farmers similar to those invited to join the program. Compare participating farmers to similar non-participating farmers at end of season.<br />
+ Value of information provided: Medium. Controls for many confounding variables influencing coffee quality and supply.<br />
+ Cost: Medium-High. Requires interviewing a “treatment” and comparison group at baseline. A larger sample size is needed to attain the same statistical power as a randomized trial.<br />
+ Operational burden: Low. Requires interviewing more farmers at baseline, but KZ Noir already intended to interview farmers for informational purposes.<br />

Pre-post<br />
+ Description: Assess participating farmers before and after joining the premium sharing program, without comparing them to non-participants.<br />
+ Value of information provided: Low. Controls for few confounding variables influencing coffee quality and supply.<br />
+ Cost: Low. Requires interviewing only a “treatment” group at baseline.<br />
+ Operational burden: Low. No programmatic changes or additional surveying required beyond what KZ Noir already intended.<br />

Sources of outcomes data<br />

Household surveys<br />
+ Description: Extensive surveys investigating farmers’ agronomic practices, expenditures, and revenues.<br />
+ Value of information provided: High. Provides estimates of farmer profits and practices.<br />
+ Cost: High. Accurately tracking farm profits and practices requires extensive surveys, potentially with multiple touch points throughout a season.<br />
+ Operational burden: High. Requires supervising extensive data collection; demands additional time from respondent farmers.<br />

Operational data (selected data source)<br />
+ Description: Transaction records and coffee quality monitoring data.<br />
+ Value of information provided: Medium. Provides data on the volume and quality of coffee sold. Possibly less accurate.<br />
+ Cost: Low. KZ Noir was already planning on investing in the requisite internal data systems, as these were required for the premium sharing program to function.<br />
+ Operational burden: Low. Requires ensuring that the data systems KZ Noir already planned on installing function properly.



EXAMPLES OF QUESTIONS AND RESULTS<br />
FROM KZ NOIR’S FARMER REGISTRATION SURVEY<br />

Operational question: Is KZ Noir reaching extremely poor populations?<br />
Data source/survey questions: “Progress Out of Poverty Index”<br />
Answers received: KZ Noir supplier farmers exhibit high levels of extreme poverty (59% live below the $1.25 per day purchasing power parity poverty line).<br />

Operational question: Are there areas of higher poverty that KZ Noir should target with additional social services?<br />
Data source/survey questions: “Progress Out of Poverty Index”<br />
Answers received: Poverty levels are roughly equal across KZ Noir’s washing stations, with only one washing station exhibiting significantly higher poverty levels than the average.<br />

Operational question: Where are farmers receiving agricultural inputs from, and do they need additional help sourcing inputs?<br />
Data source/survey questions:<br />
+ Did you use manure/chemical fertilizers on your trees?<br />
+ If yes, where do you get manure/fertilizer from?<br />
+ Did you get free pesticides from the National Agricultural Export Development Board (NAEB) this year?<br />
+ Did you also purchase pesticides this year?<br />
Answers received: Most farmers acquired chemical fertilizer (88%) last season, with smaller numbers (but still majorities) acquiring pesticides (66%) and using manure (64%). Most farmers interviewed received chemical fertilizer from their coffee washing stations (64%), pesticides from NAEB (62%), and manure from their own production (60%). These results suggest that KZ Noir may be successfully promoting fertilizer use and could raise use of other inputs by providing them as well.<br />

Operational question: Can KZ Noir reach most of its supplier farmers via mobile phones?<br />
Data source/survey questions:<br />
+ How many mobile phones does your household own?<br />
+ What are your phone numbers?<br />
Answers received: 67% of households own at least one mobile phone, indicating that KZ Noir can reach large numbers of households via phones, but not all.<br />

Operational question: Can KZ Noir disburse end-of-season premium payments to farmers via bank transfers?<br />
Data source/survey questions:<br />
+ Do you have a bank account?<br />
+ If yes, which bank do you bank with?<br />
Answers received: 80% of households interviewed owned at least one bank account, indicating that bank transfers could be a feasible method of post-season payment for most farmers.<br />

Operational question: Does the premium sharing program raise the quantity and quality of coffee sold to KZ Noir?<br />
Data source/survey questions: KZ Noir’s internal coffee purchase and quality tracking data<br />
Answers received: Season has not yet finished.



Data<br />

KZ Noir has not yet completed its first purchasing season<br />

using this system, but the farmer registration survey has<br />

provided useful baseline information on its supplier farmers.<br />

It found that KZ Noir farmers are extremely poor, potentially<br />

one of the poorest customer bases Acumen has in its<br />

portfolio: 59% of farmers interviewed live below the $1.25/<br />

day poverty line, slightly fewer than the 2011 national rate<br />

of 63%; 14 and 51% have not completed primary school. On<br />

the other hand, 80% of interviewed households hold a bank<br />

account and 67% own at least one mobile phone, indicating<br />

that KZ Noir can capitalize on high levels of financial<br />

inclusion and connectivity by, for example, offering payments<br />

by bank transfer rather than cash.<br />
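For context on how a figure like the 59% is produced: the Progress Out of Poverty Index scores a short set of survey answers, converts the total (0 to 100) into a poverty likelihood via a published, country-specific lookup table, and averages those likelihoods across respondents to estimate the share below the line. The score bands below are invented for illustration, not the real Rwanda table:

```python
# Illustrative PPI-style estimate. The bands and likelihoods below are
# invented; a real analysis would use the published Rwanda lookup table.
SCORE_TO_LIKELIHOOD = [    # (max score in band, P(below $1.25/day PPP))
    (20, 0.90),
    (40, 0.70),
    (60, 0.45),
    (100, 0.15),
]

def poverty_likelihood(score):
    """Map a 0-100 PPI score to a poverty likelihood."""
    for max_score, likelihood in SCORE_TO_LIKELIHOOD:
        if score <= max_score:
            return likelihood
    raise ValueError("score out of range")

def estimated_poverty_rate(scores):
    """Estimated share of respondents living below the poverty line."""
    return sum(poverty_likelihood(s) for s in scores) / len(scores)

rate = estimated_poverty_rate([15, 35, 35, 55])   # four farmers' scores
```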

Decisions<br />

KZ Noir plans to use the lessons learned from the baseline<br />

data to better inform how it could distribute resources and<br />

services for next year’s growing season. In addition, the<br />

Lean Data system helps the company better demonstrate<br />

transparency throughout its supply chain, an important<br />

feature for international coffee importers. Much to<br />

everyone’s surprise, KZ Noir also noticed that competitors<br />

seemed to launch farmer registration surveys of their own<br />

in response to our survey. These reactions were potentially<br />

an effort to pull farmers out of KZ Noir’s orbit and draw their<br />

loyalty elsewhere. These observations serve as a reminder<br />

that data collection does not exist in a vacuum, but rather<br />

interacts with a company’s competitive environment just as<br />

any other operational function does.<br />

14. http://data.worldbank.org/indicator/SI.POV.DDAY



LOOKING FORWARD



We’ve had fun using mobile technologies<br />

to gather data and we’ve learnt a great deal:<br />

not only in terms of the new data we’ve<br />

gathered across our respective investment<br />

portfolios, but also around how best to use<br />

these technologies. So what comes next?<br />

Seeing how these techniques have performed, both Root<br />

Capital and Acumen aim to expand their use to larger<br />

proportions of our respective portfolios. We will also expand<br />

our use of technologies and tools that align with Lean Data<br />

and Client-Centric Mobile Measurement approaches. For<br />

example, Acumen is currently prototyping its first data<br />

collection sensor.<br />

By deepening and widening our work, we believe that<br />

our companies will see tremendous value from collecting<br />

social performance data. They will learn more about their<br />

customers, about their social impact, and be able to use this<br />

data to make better decisions and grow their companies.<br />

In the end, this is our ultimate goal: to enable the collection<br />

of data that drives both social and financial return.<br />

Through spreading the use of these techniques and<br />

developing an increased understanding of our social impact<br />

sector-wide, we can all begin to address what Jed Emerson<br />

rightly describes as a metrics myth: 15 a lack of data on social<br />

impact in our sector despite the stated commitment to<br />

its collection.<br />

Consequently, we encourage others to adopt these<br />

techniques. Whether you’re an investment manager or an<br />

enterprise, if you have or can get mobile phone numbers<br />

for your consumers, send out a survey. We urge you to open<br />

up channels of communication with your customers, ask for<br />

their feedback, and listen to how they believe a product<br />

or service has impacted their life.<br />

As we’ve reiterated throughout this paper, all you need<br />

to understand your social performance is a mobile phone<br />

and the will to start. What could be simpler?<br />

LEARN MORE, TAKE OUR<br />

LEAN DATA COURSE<br />

15. Jed Emerson, “The Metrics Myth,” BlendedValue, Feb. 10, 2015.<br />

http://www.blendedvalue.org/the-metrics-myth/<br />

If you’re interested in learning more about Lean Data or the<br />

principles of social impact measurement in general, why<br />

not take one of our short online courses on these topics?<br />

The +Acumen platform hosts courses on impact and<br />

many other key topics relevant to impact investing and<br />

social enterprise. Visit http://plusacumen.org/courses/<br />

social-impact-2/ for a course on measuring impact and<br />

http://plusacumen.org/courses/lean-data/ for a course<br />

on Lean Data.


Acumen<br />
Website: www.acumen.org<br />
Twitter: @Acumen<br />

Root Capital<br />
Website: www.rootcapital.org<br />
Twitter: @RootCapital
