One of the perennial challenges of social policy, social work and social service is stating categorically what the direct effect of an intervention is. Equally challenging is describing specifically the behaviour against which improvements are to be measured. We also know that there can be improvements in areas that are not measured. The aim, then, is to find a way to attribute the measured improvements and the specified change directly to the intervention. To be frank, social work, as part of the social sciences, is not the only discipline that faces this challenge, although it is probably more conservative in claiming credit for its professional work.
So let’s examine the thorny issues in evaluating interventions. In essence, the debate is about how, and to what extent, it is possible to establish causation in social programs or interventions.
It must be acknowledged that expectations from donors and funders often create an immediate tension: funders expect concrete results and evidence of impact to justify the funding or its renewal, while providers and NGOs struggle to show proof of the efficacy of their interventions.
Neither expectation is unjustified. The question to ask is how we can strike the right balance, ensuring that providers can account for outcomes while taking into consideration the complexity of the real world. We are always confronted with the challenge of measuring causation in widely varying and frequently complex contexts, and with limited resources. (How much of our resources we should devote to evaluation itself warrants a separate discussion.)
We all know that there are many interpretations of causation. Many people use the terms contribution and attribution to describe causation, but these terms are not used systematically. For example, some use attribution to imply that the change is 100% caused by the intervention, while others use it as a precise measure of the degree to which the intervention has contributed to the change, assuming they are able to describe the behavioural change in the first place.
But what about other factors that contribute to the change? Some would therefore prefer contribution as the main model for understanding causation, arguing that it better reflects the complexity of the real world. The majority of evaluators and practitioners in the social service field favour understanding contribution and would advise against trying to measure attribution. (Many pitches to donors and funders, however, continue to root for attribution.)
All said and done, it is worth examining causality in evaluations despite the difficulties of operating in a complex environment. So how do we examine it? One way is to determine probable causality by looking at the different types of interactions and making our assumptions explicit. For example, we know with some certainty that it is the full combination of good curriculum and program design, quality of staff implementation, and the nature and length of the intervention that determines results. Each on its own will not produce the desired outcome. Each of these factors should be informed by research and what is already known, and the whole process needs to be rigorous and well debated. (The common mistake is to think that any one of the three factors is sufficient, when what is required is to examine the interaction among the three to determine the optimal calibration. The questions to examine would concern the type of program design, the amount and type of training for staff, and the length of the intervention.)
To recap, the authenticity of a program depends on how closely its implementation follows what has been agreed in three elements: (i) the content of the design; (ii) dosage standards; and (iii) the delivery approach.
The field of evaluation is growing fast, and there are various methods and approaches that evaluators, commissioners and managers of evaluations can consider. What is necessary is to start by asking good questions even before asking what to evaluate. These include: What has been achieved through current efforts? How effective have they been in meeting the needs of the people they serve? What do current data show? How can we develop and improve the service, demonstrate accountability to funders, or both? Who has experienced the impact of what has been done, and what has that experience been like?
Finding answers to these questions can reveal insights about what needs to be done, which then prompts the question of what should be evaluated. There should be an appropriate logic, a hypothesis about change or improvement, and alignment between what the provider intends to do and the results it aims to achieve. In essence, there should be a clear statement of how the program will effect change (outputs) and of what that change will look like (outcomes). This provides the balance between the funder’s or donor’s need for evidence of impact and the provider’s need not to be overly burdened. With a mutual understanding, the evaluation framework then becomes a shared responsibility.
Director-General of Social Welfare
Ministry of Social and Family Development