Personal View

Lessons for health program monitoring and evaluation in a low resource setting

AUTHORS

Emma Field
1 Master of Applied Epidemiology, Epidemiologist *

Mafu Vila
2 Diploma in Health Service Management, Health Information Officer

Laina Runk
3 Master of International Public Health, Data Manager

Fiona Mactaggart
4 Master of the Control of Infectious Diseases, Monitoring and Evaluation Manager

Alexander Rosewell
5 PhD, Senior Lecturer

Sally Nathan
6 PhD, Senior Lecturer

CORRESPONDENCE

* Emma Field

AFFILIATIONS

1 Menzies School of Health Research, Spring Hill QLD 4000, Australia; Abt Associates, Level 2, 5 Gardner Close Milton, Brisbane, QLD 4064, Australia; and UNSW Australia, Botany Street UNSW Kensington Campus, Australia

2 Abt Associates, Papua New Guinea Governance Facility, Level 1, Ravalian Haus, Harbour City, Port Moresby, P. O. Box 591, Waterfront 125, National Capital District, Papua New Guinea

3, 4 Abt Associates, Level 2, 5 Gardner Close Milton, Brisbane, QLD 4064, Australia

5, 6 UNSW Australia, Botany Street UNSW Kensington Campus, Australia

PUBLISHED

12 October 2018 Volume 18 Issue 4

HISTORY

RECEIVED: 20 September 2017

REVISED: 22 May 2018

ACCEPTED: 8 June 2018

CITATION

Field E, Vila M, Runk L, Mactaggart F, Rosewell A, Nathan S. Lessons for health program monitoring and evaluation in a low resource setting. Rural and Remote Health 2018; 18: 4596. https://doi.org/10.22605/RRH4596

AUTHOR CONTRIBUTIONS

This work is licensed under a Creative Commons Attribution 4.0 International Licence


ABSTRACT

Numerous guidelines outline best practices for health program monitoring and evaluation (M&E). However, health programs are often implemented in less than ideal circumstances where these best practices may not be resourced or feasible. This article describes how M&E has been conducted for a health service delivery improvement program in remote Papua New Guinea and outlines lessons learned. The lessons learned were to integrate M&E into every aspect of the program, strengthen existing health information data, link primary data collection with existing program activities, conduct regular monitoring and feedback for early identification of implementation issues, involve the program team in evaluation, and communicate M&E data through multiple mediums to stakeholders. These lessons could be applied to other health programs implemented in low resource settings.

KEYWORDS

health service delivery, low resource setting, monitoring and evaluation, Papua New Guinea.

FULL ARTICLE

Context and issues

Monitoring and evaluation (M&E) of health programs in low resource settings can be challenging for many reasons: limited human resource capacity, weak information systems, inadequate financial resources, and limited demand for M&E1. However, governments and donors increasingly require rigorous M&E to transparently report whether programs are implemented as planned and are achieving the expected outcomes. This article describes M&E conducted for the Community Mine Continuation Agreement (CMCA) Middle and South Fly Health Program in Papua New Guinea and offers practical solutions as lessons learned from the experiences in this context.

CMCA Middle and South Fly Health Program aims and activities

This comprehensive health program aims to improve health service delivery to remote communities in the Western Province of Papua New Guinea2. The program coordinates support through a partnership with existing health service providers, covering all aspects of health service delivery and primary health care (Figure 1). Full details of the program are available in an online report of the midline evaluation3.

The program activities are implemented by a multidisciplinary team of about 20 staff, in collaboration with existing health service providers in the program area. The program supports health facilities servicing 50 000 people. The geography of the program area is challenging, with transport to villages and health facilities often by boat.

Description of the M&E system

The program design was based on a program logic (Figure 1). The design also outlined the following principles for the M&E system: use existing data appropriately; use national targets where available, or set realistic targets; focus on the needs of users and encourage use of data; and ensure M&E is integrated into implementation rather than treated as a separate activity. Each year, annual activity plans were developed based on the program logic.

The program design also outlined guiding principles for indicator selection: use the minimum number of indicators to track performance, as each additional indicator requires additional resources for collection and analysis; link indicators to inputs, processes, outputs, outcomes and impacts in the program logic and annual activity plan; and, wherever possible, use existing data sources for indicators.

To enable regular progress monitoring and reporting of indicators, an M&E system incorporating both primary and secondary data was established (Figure 2). Secondary data came from the National Health Information System (NHIS), a paper-based system through which aggregate healthcare presentations and healthcare services (eg immunisations, antenatal care) are reported monthly. The NHIS data were used to calculate long-term outcomes (eg immunisation coverage). The program team used existing NHIS forms to record immunisations given (or any other activity recorded in the NHIS), providing one copy for the program M&E database and one for reporting through the existing NHIS processes. This enabled the program's direct contribution to the overall indicators (eg immunisation coverage) in the program area to be attributed. Where no existing data source covered other program indicators, program-specific M&E forms were developed based on the annual activity plan. Designated staff completed these forms monthly (eg the infrastructure officer reported on all infrastructure-related activities in the annual activity plan) and entered them into the M&E database. The data from the M&E system were used for monthly progress reporting, annual activity planning and periodic evaluation.
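
As a minimal illustration of how such a system can attribute program activity within an overall indicator, the sketch below (in Python) computes immunisation coverage from NHIS-style monthly aggregates and the share recorded on the program's copies of the forms. All facility names, dose counts and the target population are hypothetical, not the program's actual data.

```python
# Hypothetical sketch: immunisation coverage from NHIS-style monthly
# aggregates, with the program's contribution attributed via copies of
# the NHIS forms held in the M&E database. All figures are invented.

# (facility, month, measles doses given) as reported through the NHIS.
nhis_reports = [
    ("Obo Health Centre", "2015-01", 24),
    ("Obo Health Centre", "2015-02", 31),
    ("Suki Aid Post", "2015-01", 12),
]

# Copies of NHIS forms completed by the program outreach team.
program_reports = [
    ("Suki Aid Post", "2015-01", 12),
]

target_population = 120  # hypothetical target cohort for the indicator

total_doses = sum(doses for _, _, doses in nhis_reports)
program_doses = sum(doses for _, _, doses in program_reports)

print(f"Coverage to date: {total_doses / target_population:.0%}")
print(f"Share attributable to program outreach: {program_doses / total_doses:.0%}")
```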

Figure 1:  Simplified program logic for the CMCA Middle and South Fly Health Program.

Figure 2:  CMCA Middle and South Fly Health Program M&E system.

Lessons learned

1. M&E should be integrated into every aspect of the program

While it is best practice to incorporate an M&E plan into the program design, these plans need to be sufficiently detailed and feasible for M&E to commence alongside the program. However, it is not unusual for M&E plans to take substantial time to finalise while program implementation has already started4. Furthermore, M&E is often seen as the responsibility of the M&E officer or team, rather than of the entire program team.

For the program, a detailed M&E plan was developed in the program design and was integrated across all program operations. The M&E plan included a program logic, an M&E framework (detailing indicators and their link to the program logic, source of data and frequency of reporting) and a reporting framework. There was regular monitoring of program progress, and baseline, midline and endline evaluations. M&E was funded in the budget with specific personnel: a part-time M&E manager, a part-time data manager and a full-time health information officer. A lesson from the program that enhances M&E best practice knowledge is that M&E-specific activities, such as monitoring, were integrated into each team member’s terms of reference. This ensured that M&E was integrated into daily activities and annual performance reviews. Overall, this integration resulted in adequate resourcing of M&E activities, meant M&E was initiated at program commencement, and ensured M&E was viewed not as an activity separate to a staff member’s duties but as a core responsibility, from management to field staff.

2. Existing health information data need strengthening

Using secondary data from existing information systems is usually the most cost-effective approach to M&E. However, health information systems in low resource settings may have issues with timeliness, completeness and accuracy; in Papua New Guinea, these issues are acknowledged to exist with the NHIS5. Health information systems are one of the six building blocks of health systems outlined by WHO, forming a critical function in ensuring ‘the production, analysis, dissemination and use of reliable and timely information on health determinants, health system performance and health status’6. As with all the health system building blocks, health information systems are not standalone systems, and the ability to generate complete and accurate data has ramifications for all the building blocks7. Programs that aim to support or strengthen one or more building blocks of the health system should also invest in strengthening the health information system, although in practice this does not always occur8.

An element of support for strengthening the health information system in Western Province was integrated into the program design as part of the work of a health information officer. This officer worked with their government counterpart in the Provincial Health Office to improve the timeliness, completeness and accuracy of the NHIS. This support led to an improvement in both the completeness and quality of the data (completeness increased from 91% in 2012 to 99% in 2015). Furthermore, the review of the NHIS data led to the detection of substantial under-reporting, which was subsequently corrected.
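
As an aside on the metric, reporting completeness is conventionally calculated as the proportion of expected monthly facility reports actually received. The short sketch below illustrates that calculation; the facilities and the missing report are hypothetical.

```python
# Hypothetical sketch: NHIS reporting completeness as received reports
# over expected reports (one report per facility per month).
facilities = ["Obo Health Centre", "Suki Aid Post", "Balimo Hospital"]
months = [f"2015-{m:02d}" for m in range(1, 13)]

# (facility, month) pairs actually received; one invented gap.
received = {(f, m) for f in facilities for m in months}
received.discard(("Suki Aid Post", "2015-07"))

expected = len(facilities) * len(months)
print(f"Reporting completeness: {len(received) / expected:.0%}")
```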

3. Primary data collection should be linked with existing program activities 

While using existing data is preferred, health information systems in developing countries do not always collect the information required for program M&E. However, travel to program sites, often in remote locations, for primary data collection adds substantial costs for transport and personnel time. In low resource settings, travel costs can consume scarce funds better spent on implementing activities. The program staff regularly travelled to program sites to implement activities, which provided an opportunity to combine primary data collection with program implementation, thereby reducing costs.

The program was launched with an outreach clinic provided to every village in the program area. This was a huge logistical undertaking, given the remote location of many villages. It was, however, a prime opportunity for primary data collection for the baseline evaluation. The program outreach clinic team were trained in data collection and were able to carry out health facility assessments, interviews with health workers and focus group discussions with community members alongside their outreach clinics. Baseline data collected by the outreach clinic team were extremely valuable in informing specific activities for the first annual activity plan. Additionally, involvement of the program team in the baseline built their capacity for M&E and provided them with a deeper understanding of the health services and community expectations.

4. Regular monitoring and feedback are vital for early identification of issues

Process monitoring is the foundation of M&E: it enables evaluators to distinguish between a failure in program design and a failure in implementation9. If what has been done is not sufficiently recorded, it is very difficult to evaluate the outcomes and impacts of programs; for example, poor outcomes may be attributed to the program when implementation was actually insufficient. Furthermore, for transparency, it is important to communicate to donors and beneficiaries what the program has done.

Process monitoring was integrated into all program staff reporting requirements, which were linked to the annual activity plan. Each month, progress against the annual activity plan was discussed with the team, identifying whether activities were on track or delayed. This was critical for timely implementation: when delayed activities were identified in these monthly meetings, the team discussed what could be done to overcome the delay, often leading to allocation of additional resources and closer monitoring to ensure the activity was completed. Changes to activities were documented during these discussions, and this documentation was used to inform the client and stakeholders about the status of program activities and the issues surrounding delayed activities.
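
To illustrate the kind of tracking this involves, the sketch below flags delayed items in an annual activity plan ahead of a monthly meeting. The activities, due dates and completion figures are invented, not drawn from the program's plan.

```python
# Hypothetical sketch: flagging delayed annual-activity-plan items for
# discussion at a monthly progress meeting.
from datetime import date

# (activity, planned completion date, percent complete) - invented.
activity_plan = [
    ("Refurbish health centre ward", date(2015, 6, 30), 100),
    ("Train community health workers", date(2015, 8, 31), 40),
]

today = date(2015, 9, 15)
for activity, due, pct in activity_plan:
    if pct < 100 and due < today:
        print(f"DELAYED: {activity} (due {due}, {pct}% complete)")
    else:
        print(f"On track: {activity} ({pct}% complete)")
```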

5. The program team must be involved in evaluation

The program team were actively involved not only in monitoring but also in evaluation data collection for the program. Periodic evaluations served to assess whether the longer term outcomes and impacts of the program were being achieved, and informed alterations to implementation.

Evaluations can be conducted by either the organisation implementing the program (internal evaluators) or an external organisation or consultants (external evaluators). In a review of case studies of influential evaluations of development programs, the use of external consultants was seen as more independent, with the ability to explore sensitive political issues10. On the other hand, involving an internal evaluator may provide better access to data and key stakeholders, and opportunities for fostering program ownership and learning through team involvement10,11. This was certainly the case for the program baseline evaluation, where the team’s involvement in focus group discussions with communities, health worker interviews and health facility assessments allowed them to gain a deep insight into the issues for planning and implementing program activities (lesson 3). A third approach is a joint evaluation with internal and external evaluators, as a way of ensuring both independence and contextual knowledge10.

For the midline evaluation of the program, it was no longer appropriate for the program outreach team members to conduct data collection, given their now established relationships with health workers and community members and their role in program implementation. A joint evaluation approach was therefore used, combining independent evaluators with the program M&E team, who had not had contact with the health workers or communities.

The role of the independent evaluators was specifically to seek the perspectives of health workers, communities and program partners on changes since program commencement, and on future directions. The program M&E team designed the overall midline evaluation methodology and data collection tools, based on the evaluation questions developed through a meeting with program stakeholders. Independent evaluators conducted key informant interviews and focus group discussions with program team members, program partners, health workers and community members. The M&E team collated and analysed quantitative data from the NHIS. The results from the qualitative and quantitative data were synthesised into a report by the program M&E team and independent evaluators together. This combination of internal and external evaluators provided the advantage of the M&E team’s in-depth knowledge of the program and its context, while the independent evaluators ensured evaluation participants felt comfortable raising concerns about the program and contributed to the transparency of the evaluation findings.

6. M&E data should be communicated through multiple mediums 

The results of the program M&E needed to be communicated to various audiences. The team communicated results in multiple formats: monthly reports to the program team, which served to inform and improve program implementation in real time; quarterly reports to the donor and program partners; quarterly feedback posters and information sessions via the outreach clinic team to beneficiaries, the communities and health workers; and the program website for the wider public.

A key component of the program was equitable distribution of program benefits to the health facilities and communities, which was difficult to present in standard reporting templates (eg tables and graphs). This distribution was instead demonstrated by mapping program activities by village and health facility. The data visualisation software Tableau v2018.1.2 (Tableau Software; https://www.tableau.com) was used to produce the maps, which were embedded in the program website. The maps were interactive, allowing users to map different indicators and compare baseline results with current status. Furthermore, this ensured that stakeholders could access non-confidential program data without having to request data from the program M&E team or wait for routine reports (Figure 3).
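
The program produced its maps in Tableau; purely as an illustration of the data preparation such mapping requires, the sketch below aggregates an activity log into a per-village table that a mapping tool could consume. The villages, coordinates and activities are hypothetical.

```python
# Hypothetical sketch: aggregating program activities by village into a
# flat table (village, lat, lon, activity, count) for a mapping tool.
import csv
import sys
from collections import Counter

# (village, activity type) log entries - invented examples.
activity_log = [
    ("Obo", "outreach_clinic"), ("Obo", "water_supply"),
    ("Suki", "outreach_clinic"),
]
coords = {"Obo": (-7.58, 141.32), "Suki": (-8.05, 141.72)}  # invented

writer = csv.writer(sys.stdout)
writer.writerow(["village", "lat", "lon", "activity", "count"])
for (village, activity), n in sorted(Counter(activity_log).items()):
    lat, lon = coords[village]
    writer.writerow([village, lat, lon, activity, n])
```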

Prior to the program, data on the health facilities were neither available to health service providers at this level of detail nor regularly updated. The program sought to increase demand for data for decision-making through the M&E system, strengthening the NHIS, and regular presentation of data analyses at the program partnership meetings. However, use of data by program partners for decision-making, for example in annual activity planning, remains limited. Enhancing the utility of information products generated from the M&E system, through seeking feedback from users, may improve data use for decision-making12.

Figure 3:  Example of data from the CMCA Middle and South Fly Health Program M&E system displayed on the program website13.

Conclusion

This article has outlined lessons from M&E for the CMCA Middle and South Fly Health Program in Papua New Guinea. Integrating M&E into all aspects of the program, from program design to implementation, assisted in having a solid plan for M&E, an adequate budget, appropriate human resources and buy-in from the entire team. Furthermore, the program team can contribute to primary data collection while travelling to sites for program activities, and can improve the contextualisation of M&E through participating in joint evaluations with independent evaluators. In low resource settings, contributing to the strengthening of existing health information systems, whose data are often used for M&E indicators, benefits both program M&E and the health information systems themselves. Regular monitoring, feedback to the program team and discussion of M&E data assisted in identifying issues and improved implementation. Lastly, results from the M&E system were reported to stakeholders in multiple formats, including maps. These lessons may be applicable to health programs in other low resource settings.

REFERENCES

1 Kusek JZ, Rist RC. Ten steps to a results-based monitoring and evaluation system: a handbook for development practitioners. 2004. Available: web link (Accessed 16 May 2018).
2 CMCA Middle and South Fly Health Program. CMCA Middle and South Fly Health Program. 2013. Available: http://www.cmsfhp.org/about-us (Accessed 16 May 2018).
3 Abt JTA. Strengthening health services in Western Province, Papua New Guinea: progress report of the North Fly Health Services Development Program and the CMCA Middle and South Fly Health Program. 2016. Available: web link (Accessed 28 April 2018).
4 Mathis J, Senlet P, Topcuoglu E, Kose R, Tsui A. Best practices in monitoring and evaluation: lessons from the USAID Turkey Population Program. 2001. Available: web link (Accessed 2 June 2017).
5 Ashwell HE, Barclay L. Problems measuring community health status at a local level: Papua New Guinea’s health information system. Rural and Remote Health 2010; 10(4): 1539. Available: web link (Accessed 24 September 2018).
6 World Health Organization. Everybody’s business: strengthening health systems to improve health outcomes: WHO’s Framework for Action. Available: web link (Accessed 24 September 2018).
7 Alliance for Health Policy and Systems Research and World Health Organization. Systems thinking for health system strengthening. 2009. Available: web link (Accessed 24 September 2018).
8 Warren AE, Wyss K, Shakarishvili G, Atun R, de Savigny D. Global health initiative investments and health systems strengthening: a content analysis of global fund investments. Globalization and Health 2013; 9(1): 30.
9 Bamberger M, Rao V, Woolcock M. Using mixed methods in monitoring and evaluation: experiences from international development. Available: web link (Accessed 16 May 2018).
10 World Bank. Influential evaluations: evaluations that improved performance and impacts of development programs. 2004. Available: web link (Accessed 16 May 2018).
11 Cole DC, Aslanyan G, Dunn A, Boyd A, Bates I. Dilemmas of evaluation: health research capacity initiatives. Bulletin of the World Health Organization 2014; 92(12): 920-921.
12 Geers E, Nghui P, Ekirapa A, Rop V, Mbuyita S, Patrick J, et al. Information products to drive decision making: how to promote the use of routine data throughout a health system. 2017. Available: web link (Accessed 2 June 2017).
13 CMCA Middle and South Fly Health Program. Map of key program indicators. Available: web link (Accessed 15 February 2017).
