Participatory Monitoring & Evaluation
by Gretchen B. Rossman
Center for International Education
University of Massachusetts at Amherst
Participatory monitoring and evaluation (PM&E) is a process of self-assessment,
knowledge generation, and collective action in which stakeholders in a
program or intervention collaboratively define the evaluation issues,
collect and analyze data, and take action as a result of what they learn
through this process (Jackson & Kassam, 1998). It is fundamentally
about sharing knowledge among beneficiaries of the program, program implementers,
funders, and often outside evaluation practitioners. Monitoring calls
for ongoing documentation of the specifics of program implementation
so that results can be explained in light of program processes. Evaluating
calls for judgments about the effectiveness and sustainability of the
program. Philosophically, participatory monitoring and evaluation seeks
to honor the perspectives, voices, preferences, and decisions of the least
powerful and most affected stakeholders: the local beneficiaries. All too
often, evaluation is something done to beneficiaries; participatory
approaches argue that evaluation should be done with these key
groups.
Development practitioners identify several benefits associated with PM&E.
First, by involving those directly affected, a clearer picture can be
drawn of what is actually happening in a program, both its successes and its failures.
Second, key stakeholder groups may feel empowered through participating
in the process; they share responsibility for the evaluation processes
and results. Third, there is potential to develop capacity and skills
in evaluation generally; these can then be applied to other programs and
activities. Fourth, when information is generated as a routine part of
program operations, there is greater likelihood that this information
will be used directly to make mid-course corrections and modifications
as the program is implemented. Fifth, there is substantial benefit for
team building and creating commitment through collaborative inquiry. And,
finally, the learning associated with participating in such a process
is experiential and can bring a deep sense of meaningfulness to the work.
PM&E is grounded in five general principles (IDS, 1998). The
first is participation: creating structures and processes that include
those most directly affected by the program, often the very people who are
powerless or voiceless in program design and implementation. The second
is negotiation: a commitment to working through different views
(with the potential for conflict and disagreement) about what the evaluation
should focus on, how it should be conducted and used, and what actions
should result. The third is that these participatory processes lead to
learning among all participants, which, when shared, leads to corrective
action and program improvement. The fourth is flexibility: the circumstances,
people, and skills available for the process will change over time.
As they do, those involved in and affected by the evaluation
should be committed to modifying their strategies to achieve the desired result:
knowledge that will shape effective and sustainable programs. The fifth principle
is that PM&E is, at its core, methodologically eclectic.
Practitioners can draw on a wide variety of methods to generate information.
Beneficiaries can invent their own methods or adapt local processes that are relevant
and heuristic. PM&E is not, however, just a bag of tricks or tools;
it is a philosophy, an overall approach to organizational learning that
fosters the involvement of those most directly affected.
PM&E can be used effectively to meet development agencies' needs for
accountability (Jackson, 1998). The shift in many aid agencies to results-oriented
management of programs provides an opportunity to implement PM&E while
remaining mindful of both external and internal contexts. Accountability, from this
perspective, is defined as accepting responsibility for the conduct and
results of a specific program. This entails awareness of and responsiveness
to demands emanating from the external context (a funding agency's strategic
objectives, for example) as well as to demands voiced by program beneficiaries
for improvement in their living circumstances. Program managers and participants
are responsible to those who fund programs but are equally responsible
to themselves for the achievement of results articulated by beneficiaries.
We conceptualize PM&E as occurring within an accountability field
or arena. Within this field are many voices, sometimes speaking in concert
and at other times in opposition. The challenge of PM&E is to negotiate
these differences so that the data gathered are relevant, timely,
valid, and heuristic for the various stakeholder groups.
The fundamental processes of PM&E are as follows:
1. To collectively and collaboratively identify key objectives or outcomes for the program to achieve;
2. To identify relevant indicators that document changes in a specific condition and signal progress towards the objective;
3. To identify and gather data that measure or describe the condition and can give evidence of progress;
4. To identify baseline conditions and benchmarks of progress towards the achievement of the objective;
5. To collectively gather those data, analyze and interpret them, and draw conclusions based on those interpretations; and
6. To take corrective action to better achieve the objectives.
This process can also be conceptualized as a cycle
of inquiry. Paulo Freire's (1970) praxis cycle of action-reflection-action
is a close cousin, as are action research and other forms of collaborative
inquiry (see, especially, Cousins & Earl, 1992; Fetterman, 1996).
Throughout the process, the goal is to achieve a balance of power
and voice among the various participant groups. Negotiating differences
and honoring human resources and cultural knowledge are central to this
goal.
Given this conceptualization of PM&E, the role of the outside evaluator
shifts. No longer is this person cast as the expert who conducts the evaluation
on program beneficiaries or who extracts information from them. The outside
evaluator becomes a coach, a facilitator, a critical friend (Rallis &
Rossman, 2000). The skills demanded of this role are not merely technical,
although the participatory evaluator must have technical skills. More
important are interpersonal skills, including skill in negotiating difference
and resolving conflict.
Examples abound of participatory evaluations conducted with sustained
participation by those most affected by the program. Various websites,
moreover, discuss the issues surrounding PM&E and offer examples of
successful as well as problematic participatory evaluations. International
case examples can be found in Jackson and Kassam (1998); excellent websites
include that of the Institute of Development Studies (www.ids.ac.uk).
References
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418.
Fetterman, D. M. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3-46). Thousand Oaks, CA: Sage.
Freire, P. (1970). Pedagogy of the oppressed. New York: Seabury.
Institute of Development Studies. (1998, November). Participatory monitoring and evaluation: Learning from change. IDS Policy Briefing, Issue 12. URL: www.ids.ac.uk/ids/bookshop/briefs/breif12.html (accessed 10/12/00).
Jackson, E. T. (1998). Indicators of change: Results-based management and participatory evaluation. In E. T. Jackson & Y. Kassam (Eds.), Knowledge shared: Participatory evaluation in development cooperation (pp. 50-63). West Hartford, CT: Kumarian Press.
Jackson, E. T., & Kassam, Y. (Eds.). (1998). Knowledge shared: Participatory evaluation in development cooperation. West Hartford, CT: Kumarian Press.
Rallis, S. F., & Rossman, G. B. (2000). Dialogue for learning: Evaluator as critical friend. In R. K. Hopson (Ed.), How and why language matters in evaluation. New Directions for Evaluation, 86, 81-92.
For more information, please contact us at:
Center for International Education
School of Education
University of Massachusetts
285 Hills House South
Amherst, MA 01003
Telephone: (413) 545-0465 | Fax: (413) 545-1263
Web: http://www.umass.edu/cie
E-mail: cie@educ.umass.edu