
By: Yvonne Pinto, Managing Director, Aline Impact, Ltd. / GREAT MLE Partner

 

Evaluation methods for projects that aim to build human capacity and change behaviors and attitudes can be complex. The Organisation for Economic Co-operation and Development has produced a document that outlines impact assessment, which is only one of the ways such programs may be evaluated. We spoke with Yvonne Pinto, Director of ALINE Impact Limited, for her insights on measuring the outcomes from a project such as GREAT.

“In simple terms, an impact evaluation provides information about the impacts produced by an intervention — positive and negative, intended and unintended, direct and indirect. This means that an impact evaluation must establish what has been the cause of observed changes (in this case ‘impacts’), referred to as causal attribution,” Pinto said. “If an impact evaluation fails to systematically undertake causal attribution, there is a greater risk that the evaluation will produce incorrect findings and lead to incorrect decisions. Usually you have to use a counterfactual or comparison group to establish attribution, which in capacity building efforts is really very difficult and fraught with a lack of credibility and sometimes inherent bias.”

Instead of pursuing an impact evaluation approach for GREAT, ALINE Impact is using a Measurement, Learning, and Evaluation (MLE) framework that is based on mixed methods and tracks output, outcome, and process changes.

Why is evaluation challenging for capacity building projects such as GREAT, and why does it matter for development?

Traditional impact assessment is used to measure long-term sustained changes from interventions. However, in complex environments, it is difficult to isolate changes due to interventions from broader change processes. Capacity building for individuals, institutions, and the enabling environment encompasses both interventions and complex processes, and the purpose of the capacity building may be interpreted differently by the different stakeholders involved.

More recently, capacity building has come to be acknowledged as more than a set of interventions; it also includes intangible, fluid, and iterative processes.

Often, changes, and changes in processes, are presented against a counterfactual — an ‘alternative scenario’ that could have happened in the absence of the capacity building effort. Counterfactuals in complex systems are nearly impossible to construct, and the GREAT program relies on an experimental model that is constantly changing, which means data collection must be designed to inform the direction of change.

There is a growing awareness that many efforts at assessing impact fail because of a lack of sufficient funds to do them well, particularly using participatory designs that build in the views and needs of different stakeholders. The question of who assesses the impact is also important, as staff or stakeholders may feel threatened if the results lead to sanctions or disincentives.


Left to Right: Brenda Boonabaana, Yvonne Pinto, and Peace Musiimenta during a 2017 GREAT monitoring and evaluation learning event and curriculum development workshop at Cornell University.

 

What are common mistakes projects make when measuring capacity building? How can these pitfalls be avoided/resolved?

Too often, traditional impact evaluation is treated as the only valid method, as though a reliance on comparison groups, which are nearly impossible to construct effectively in complex environments, were somehow more valid than process tracking, outcome monitoring, triangulation, and qualitative analysis drawn from the people engaged in the process or capacity building effort. Whilst capacity building is universally accepted as useful, there are arguments and methodological challenges to measuring the effectiveness and impact of complex development interventions. These may be further complicated by changing priorities or by an evolution of the capacity building intervention(s) over their lifetime.

The most challenging issue is attributing a change in a whole system to one or more particular inputs, which creates obstacles to effective performance improvement and organisational learning. In all programs where there are multiple interventions, it is hard to identify direct causal links. The changes at different levels may be due to multiple and complex influences, particularly at country, institutional, political, societal, and cultural levels, that are quite separate from the capacity building interventions.

Whilst there is often a reliance on pre-defined indicators, they frequently fail to illustrate the process of capacity building that moves recipients towards changed patterns of behaviour and practice. Furthermore, they are often perceived as subjective and reliant on individual perceptions, judgement, and interpretation.

Stakeholders in capacity building interventions commonly have very different theories of how change occurs, and interventions can suffer from contradictory objectives. Aspirations for data and evidence also vary: from identifying what changes have occurred, to communicating achievements, to understanding operational performance, to supporting learning processes between participants, to generating data and evidence that suggest a specific model is successful. These aims can become conflated and confused, particularly over time.

How have you approached the GREAT MLE design based on this knowledge and experience? What is GREAT doing differently to ensure success?

We have committed to an approach designed to be simple and pragmatic but sufficiently flexible to enable GREAT to evolve over time and adequately reflect the changes that have occurred. GREAT is experimental in its design and requires data collection that regularly informs choices and programmatic adjustments:

  • Work with program implementers has enabled mapping out an understanding of the change process (a theory-based approach) and its underlying assumptions.
  • The key questions posed relate to how the program facilitates change in individuals and in turn creates enabling environments for change to take place.
  • MLE is an integral part of the program design. MLE participates as a partner, brings data to the fore when discussing how to refine capacity building interventions, provides insights on early indicators of what is working and what is not, and, through continuous reflection on data, builds the program’s capacity to learn.
  • The approach is multi-phased: first assessing knowledge acquisition and satisfaction among participants, then assessing their actual outputs (e.g., do they meet minimum standards), and assessing their experiences (e.g., how does their engagement across disciplines prepare them to apply the learning in their work). This is the first stage – have they left the course adequately equipped?
  • Engagement with the course and reported behaviour changes are then tracked. The collection of longitudinal data — a unique feature — yields four years of data from fellows.

Built into the design of the MLE is significant support for self-evaluation processes, particularly participatory self-assessment methods involving internal and external stakeholders. There is also attention given to documenting the process by which Makerere University owns the program and systematically develops its excellence. To address the concern that there may be no external reference or that the process is subjective and biased, triangulation is pursued. Triangulation is the most pragmatic way of measuring complex changes through the use of quantitative and qualitative tools, balancing subjectivity with objectivity.

Furthermore, the approach seeks to capture and document relationships between the different components of the GREAT capacity building program, capture changes in relations with outside institutions, explain why changes occurred, and remain aware of contextual influences, particularly when they are strong.

To ensure MLE delivers quality information that is of value and therefore credible, there is an emphasis on being systematic in MLE implementation, with adequate investment of funds and time.

What can the MLE model developed for GREAT teach us about doing assessment well?

The MLE approach for GREAT is constructed using a Theory of Change approach (developed with the participation of multiple stakeholders) with clear objectives that identify the different components of the capacity building intervention. The course seeks to bring about improved knowledge, and new skills and attitudes, in the field of gender and agricultural research. If the participants have been able to implement this learning, then change may also be plausible at an institutional or field level. This could result in changes in the partner communities and ultimately for the end beneficiaries. The use of plausible association can assist in understanding whether changes at one level can lead to changes at a wider level within GREAT.

The assumptions about how change happens in this plausible association are identified and actively tested along the way to see if they hold true. Pretested and refined tools are both quantitative and qualitative in nature and connect the activities of fellows to outputs and outcomes.

Fellows are assessed before and after training, individually and as part of a team; tutors and mentors are interviewed to assess their experiences and performance; and fellows’ project proposals are produced as outputs that are ranked against standards of quality. Independent assessments are also carried out in the form of key informant interviews during the theoretical training to document process and adaptive management efforts to refine the delivery of the training components.

Following this, fellows are mentored through to practical application of knowledge and assessed on the implementation of projects through submitted progress reports and interviews with mentors. Further analysis and assessment is carried out to determine the extent to which knowledge was applied within projects and to identify any obstacles to implementing it.

Changes in fellows’ career development, opportunities, and outputs are also tracked within a community of practice (peer-to-peer learning and tracking of specific outputs to understand the interactions between fellows and opportunities to apply their knowledge in their regular jobs). Changes that are self-reported by trained fellows are triangulated with individuals in their institutions.

The processes and changes in staff associated with the host institution are documented and reflected upon regularly.

Verification and validation work is carried out through a number of case studies, which investigate in greater detail how and where fellows have applied knowledge, changed their attitudes and practices, been supported by the environment in their institutions, and are continuing peer-learning (or influencing) efforts in their normal working activities.