News

Monitoring and evaluation in ECD

In social development, everyone knows that good intentions are simply not enough. Development can be messy, with uncertain, complex settings and multiple partners with different interests, goals and capacities. At the same time, development workers are under increasing pressure to demonstrate impact. We have to show that our projects have made a real change in people's lives and that donor money hasn't been wasted. This is where monitoring and evaluation studies come in: they aim to help development workers make the best choices, to improve programmes as they roll out, to estimate whether and how particular aims were achieved, and to judge whether this was better than other courses of action.

This year, BRIDGE has identified monitoring and evaluation as an important cross-cutting theme infused into all its Communities of Practice (CoPs). This theme was the focus of the most recent Early Childhood Development (ECD) CoP meeting, held on 29 May 2018.

Thulile Makofane, Penreach's Programme Director for ECD and Teacher Development, gave a presentation showcasing Penreach's involvement in ECD. The presentation illustrated the activities through which Penreach supports ECD, including projects such as toy libraries, development workshops and natural playgrounds, which use nature for toys and play and teach practitioners how to use natural resources. Natural playgrounds are low cost and low maintenance, and they help parents understand how to use the environment for play. Thulile also shared key learnings, which serve as an initial overview of Penreach's M&E findings.

Kulisa Management Services' Leticia Tiamo then took participants through a number of activities designed to demystify aspects of monitoring and evaluation. One activity asked participants to choose from a selection of photographs depicting typical ECD scenarios; the discussion then focussed on what evidence for monitoring and evaluation could be gleaned from each scenario. From the feedback, a set of categories and issues for M&E was identified.

Leticia concluded the discussion with a quick overview of the basic elements of M&E.

Following on from Leticia, Linda Biersteker of Innovation Edge presented the Early Learning Outcomes Measure (ELOM). The ELOM is an early learning assessment instrument that does not assess the individual child, but rather measures the effectiveness of a programme. Linda outlined the ELOM measures and explained how the instrument could be used. The tool includes a number of resources linked to a rating scale for task orientation and can also be used to assess behavioural skills.

The tool has a number of possible applications, such as programme monitoring and support for programme improvement, descriptions of learner performance for targeting purposes, and area-based population-level surveillance. The ELOM can be used to track the quality of Early Learning Programmes (ELPs) as they scale up, to establish what works by comparing different ELP types, to establish what is needed for ELPs to shift performance, and to compare ELP children with their quintile reference group.

After Linda's discussion, Cotlands' Monitoring and Evaluation Coordinator, Mealang Van Heerden, spoke about her experience of applying the ELOM and noted a number of ways in which the instrument has assisted her organisation. She emphasised the value of a tool that helps organisations assess and review their own programmes, and that provides a comparative measure through which to evaluate the usefulness of their own internal M&E systems. Once the results from the first ELOM assessment were received, Cotlands gained insight into different aspects of their programmes, and the ELOM learning community was supportive in the process of interpreting the data. This led Cotlands to review their programmes and rework their curriculum, and ten weeks later a significant shift was seen in the results.