Health Researcher

I love research

Monday, June 30, 2014

What is Monitoring & Evaluation (M&E)?


This section provides a brief introduction to what M&E is, together with a selection of recommended reading and further links to help you get started.

Monitoring

...is the systematic and routine collection of information from projects and programmes for four main purposes:
  • To learn from experiences to improve practices and activities in the future;
  • To have internal and external accountability of the resources used and the results obtained;
  • To take informed decisions on the future of the initiative;
  • To promote empowerment of beneficiaries of the initiative.
Monitoring is a periodically recurring task that begins already in the planning stage of a project or programme. It allows results, processes and experiences to be documented and used as a basis to steer decision-making and learning. In short, monitoring is checking progress against plans, and the data acquired through monitoring are used for evaluation.

Evaluation

...is assessing, as systematically and objectively as possible, a completed project or programme (or a phase of an ongoing project or programme that has been completed). Evaluations appraise data and information that inform strategic decisions, thus improving the project or programme in the future. 

Evaluations should help to draw conclusions about five main aspects of the intervention:
  • relevance
  • effectiveness
  • efficiency
  • impact
  • sustainability
Information gathered in relation to these aspects during the monitoring process provides the basis for the evaluative analysis.

Monitoring & Evaluation

M&E is an embedded concept and a constitutive part of every project or programme design (a “must be”). It is not a control instrument imposed by the donor, nor an optional accessory (a “nice to have”) of a project or programme. M&E is ideally understood as a dialogue between all stakeholders on development and its progress.

In general, monitoring is integral to evaluation. During an evaluation, information from previous monitoring processes is used to understand how the project or programme developed and stimulated change. Monitoring focuses on measuring the following aspects of an intervention:
  • The quantity and quality of the implemented activities (outputs: What do we do? How do we manage our activities?)
  • Processes inherent to the project or programme (outcomes: What effects or changes occurred as a result of the intervention?)
  • Processes external to the intervention (impact: Which broader, long-term effects were triggered by the implemented activities in combination with other environmental factors?)
The evaluation process is an analysis or interpretation of the collected data that delves deeper into the relationships between the results of the project/programme, the effects it produced and its overall impact.

The Logical Framework Approach (LFA)

The Logical Framework Approach (LFA) is a management tool mainly used for designing, monitoring and evaluating international development projects. It is also widely known as Goal Oriented Project Planning (GOPP) or Objectives Oriented Project Planning (OOPP).


Background
The Logical Framework Approach was developed in 1969 for the U.S. Agency for International Development (USAID), based on a worldwide study performed by a team from Fry Consultants, Inc. headed by Leon J. Rosenberg. Throughout 1970 and 1971, the tool was implemented across 30 countries under the guidance of Practical Concepts Incorporated (PCI). The method is widely used by bilateral and multilateral donor organizations such as AECID, GTZ, Sida, NORAD, DFID, UNDP, the EC and the Inter-American Development Bank. Some non-governmental organizations (NGOs) offer training in the method to ground-level field staff.
In the 1990s, use of the method in project proposals was often mandatory for aid organisations, but in recent years it has become more optional. Terry Schmidt, who was involved in PCI's worldwide training programs, is now extending its use to the private sector. Dr Robert Madams, from ADH Training & Consulting, has also used a modified version of the method in the corporate sector in a number of countries.
The Logical Framework Approach is sometimes confused with the Logical Framework (LF, or Logframe). Whereas the Logical Framework Approach is a project design methodology, the Logframe is a document.

Description

The text below describes the Logframe document, not the overall methodology of project design. For a thorough description of the LFA as a design methodology, see, for example, AusGuideline 3.3, The Logical Framework Approach.
The Logical Framework takes the form of a four-by-four project table. The four rows describe four different types of events that take place as a project is implemented: the project Activities, Outputs, Purpose and Goal (from bottom to top on the left-hand side). The four columns provide different types of information about the events in each row:
  • The first column gives a Narrative description of the event.
  • The second column lists one or more Objectively Verifiable Indicators (OVIs) of these events taking place.
  • The third column describes the Means of Verification (MoV), where information on the OVIs will be available.
  • The fourth column lists the Assumptions: external factors believed capable of influencing, positively or negatively, the events described in the narrative column.
The list of assumptions should include those factors that can potentially affect the success of the project but which cannot be directly controlled by the project or programme managers. In some cases these include so-called killer assumptions, which, if proved wrong, will have major negative consequences for the project. A good project design should be able to substantiate its assumptions, especially those with a high potential for negative impact.
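The four-row, four-column structure above can be sketched as a simple data structure. This is only an illustration: the project entries below (a hypothetical immunization project, with invented indicators and sources) are not from the original text.

```python
# A minimal sketch of a Logframe as a data structure. Rows run from
# Activities (bottom) up to Goal (top); the four columns are the
# Narrative description, the OVIs, the MoV and the Assumptions.
# All project-specific entries are invented for illustration.
logframe = {
    "Goal": {
        "narrative": "Reduced child mortality in the district",
        "ovis": ["Under-5 mortality rate falls by 20% within 5 years"],
        "mov": ["National demographic and health survey"],
        "assumptions": ["No major epidemic or conflict in the district"],
    },
    "Purpose": {
        "narrative": "Increased immunization coverage",
        "ovis": ["90% of children under 1 fully immunized by year 3"],
        "mov": ["Clinic immunization registers"],
        "assumptions": ["Families remain willing to attend clinics"],
    },
    "Outputs": {
        "narrative": "Functioning cold chain and trained health staff",
        "ovis": ["10 clinics equipped and 40 staff trained by year 1"],
        "mov": ["Training records", "Equipment inventories"],
        "assumptions": ["Trained staff stay in post"],
    },
    "Activities": {
        "narrative": "Procure refrigerators; run training courses",
        "ovis": ["Adherence to budget and schedule"],
        "mov": ["Project accounts", "Activity reports"],
        "assumptions": ["Supplies clear customs on time"],
    },
}

# The matrix is read bottom-up: each row's events, together with its
# assumptions, should lead to the row above.
for row in ["Activities", "Outputs", "Purpose", "Goal"]:
    print(f"{row}: {logframe[row]['narrative']}")
```

Note how each assumption names something outside the managers' control (epidemics, customs, staff retention), exactly as the description above requires.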

Temporal logic model

The core of the Logical Framework is the "temporal logic model" that runs through the matrix. This takes the form of a series of connected propositions:
  • If these Activities are implemented, and these Assumptions hold, then these Outputs will be delivered.
  • If these Outputs are delivered, and these Assumptions hold, then this Purpose will be achieved.
  • If this Purpose is achieved, and these Assumptions hold, then this Goal will be achieved.
These are viewed as a hierarchy of hypotheses, with the project/program manager sharing responsibility with higher management for the validity of hypotheses beyond the output level. Thus, Rosenberg brought the essence of scientific method to non-scientific endeavors.
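The hierarchy of hypotheses can be sketched as a toy boolean chain: each level is reached only if the level below was reached and that level's assumptions held. The function and its inputs are invented for illustration, not part of the LFA itself.

```python
# A toy sketch of the temporal logic model: Outputs follow from
# Activities plus activity-level assumptions, Purpose from Outputs
# plus output-level assumptions, and Goal from Purpose plus
# purpose-level assumptions.
def logframe_chain(activities_done, assumptions_hold):
    """assumptions_hold maps each level to whether its external
    assumptions held; returns which higher levels were achieved."""
    achieved = {}
    achieved["Outputs"] = activities_done and assumptions_hold["Activities"]
    achieved["Purpose"] = achieved["Outputs"] and assumptions_hold["Outputs"]
    achieved["Goal"] = achieved["Purpose"] and assumptions_hold["Purpose"]
    return achieved

# Example: activities implemented and lower-level assumptions hold,
# but the purpose-level assumptions fail, so the goal is not attained
# even though outputs were delivered and the purpose was achieved.
result = logframe_chain(
    True, {"Activities": True, "Outputs": True, "Purpose": False}
)
print(result)  # {'Outputs': True, 'Purpose': True, 'Goal': False}
```

The example makes the shared-responsibility point concrete: everything within the managers' span of control succeeded, yet the goal still failed because an external assumption did not hold.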
The "Assumptions" column is of great importance in clarifying the extent to which project/programme objectives depend on external factors. This clarification of "force majeure" was of particular interest when the Canadian International Development Agency (CIDA), at least briefly, used the LFA as the essence of its contracts.
The LFA can also be useful in other contexts, both personal and corporate. When developed within an organization, it can be a means of articulating a common interpretation of the objectives of a project and how they will be achieved. The indicators and means of verification force clarifications, as in a scientific endeavor: "you haven't defined it until you say how you will measure it." Tracking progress against carefully defined output indicators provides a clear basis for monitoring progress; verifying purpose- and goal-level progress then simplifies evaluation. Given a well-constructed logical framework, an informed skeptic and a project advocate should be able to agree on exactly what the project attempts to accomplish and how likely it is to succeed, in terms of programmatic (goal-level) as well as project (purpose-level) objectives.
One of its purposes and early uses was to identify the span of control of project management. In some countries with less-than-perfect governance and managerial systems, it became an excuse for failure: externally sourced technical-assistance managers were able to say, "We have implemented all the activities foreseen and produced the outputs required of us, but because of the sub-optimal systems in the country, which are beyond the control of the project's management, we have not achieved the purpose(s), and so the goal has not been attained."