In my last letter on setting Key Performance Indicators (KPIs), I elaborated on the various types of KPIs. As
the term indicates, KPIs measure how a programme or service performs following a process or standard
procedure. In this letter, I will discuss important factors to consider when designing and choosing KPIs so
that they are meaningful in telling us more about the results of an intervention and how it has contributed to
the desired outcomes.
The first consideration in determining what indicators to use is to acknowledge that there is a cost to
measurement. It is important that the data collected becomes information that can help with decision
making and with understanding the effect of an intervention or service. To achieve this, it is critical to make
reference to research or past learning in order to determine what data will be useful and how to collect them
without too much strain on the procedure or system. Collecting and analysing data therefore requires time
and resources.
As we begin to brainstorm about what would make good KPIs, we should start by deciding what to measure
(e.g. waiting time, change in condition, knowledge, behaviour and/or attitude, client satisfaction) and how to
measure them. It is helpful to keep the measures simple, observable and clear (e.g. use a description if
necessary to obtain consistency in interpreting a term).
Ideally, a programme or service should be designed using research because it helps to determine the
level of change or prevention of deterioration that can be expected over time. The findings of existing
research should therefore help in deciding the specificity of the data that can or should be collected.
Equally important is to be clear about the target group (i.e. who will benefit and who will not benefit from
the programme or service). Being clear about the target group can help with data collection.
One useful tool in identifying what to measure is the logic model. The logic model is a tool that provides an
overview of how a programme is supposed to work and is often used in programme evaluation. It gives the
story of how the components of a programme are intended to meet the identified need or produce the
desired outcomes.
Logic models are helpful for identifying disparities between a programme's intended design and its actual
operation, and for evaluating
the attribution and contribution of a programme to outcomes. They are also useful to determine what one
should measure in order to evaluate the usefulness of the programme.
An example of a logic model is as below:
Inputs: the raw materials required for the programme
Activities: the initiatives that are organised using the inputs
Outputs: the tangible and measurable products of the activity
Outcomes: the impact that the activity hopes to achieve (the change that occurred or the difference that was made)
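As an illustration only, the four components above could be laid out for a hypothetical parenting-skills workshop (the programme and every entry below are invented for the sketch):

```python
# A minimal sketch of a logic model for a hypothetical parenting-skills
# workshop; all entries are illustrative, not from any actual programme.
logic_model = {
    "inputs": ["trainers", "curriculum", "venue", "funding"],
    "activities": ["weekly workshops", "home practice assignments"],
    "outputs": ["number of sessions run", "number of parents who attended"],
    "outcomes": ["improved parent-child communication reported at follow-up"],
}

# Print the model component by component, in order.
for component, items in logic_model.items():
    print(f"{component}: {', '.join(items)}")
```

Laying the components out this way makes it easier to see which items are merely counts of activity (outputs) and which describe the change the programme hopes to bring about (outcomes).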
Next, it is important to think about when to measure (i.e. which part of the process or procedure to collect
the data) as it can determine the quality of the data that is collected. For example, the more intuitively the
data collection fits into an existing workflow, the more likely its quality and integrity will be preserved. One
should aim to collect
the data at the most natural point of everyday activities. It is also crucial to obtain commitment from those
collecting the data by explaining to them why they are doing it and how it will be used to make the
programme or service more effective.
KPIs need to be interpreted on the basis of the quality of the data and the definitions that
constitute the KPI. If the definitions are not explicitly stated or there are no checks to verify
the quality of the data, then organisations may not be accurately recording the activity and
this makes benchmarking impossible. This can be overcome by ensuring that there are explicit
definitions for each KPI and built-in data quality checks to verify that the required data is accurately
captured.
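As a rough sketch of what a built-in data quality check might look like, the snippet below validates a single attendance record against an explicit definition (the field names, statuses and rules are invented for illustration):

```python
# Invented, explicit definition of one KPI's required record.
REQUIRED_FIELDS = {"participant_id", "session_date", "attendance_status"}
VALID_STATUSES = {"attended", "absent"}

def check_record(record: dict) -> list:
    """Return a list of data-quality problems found in a KPI record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("attendance_status") not in VALID_STATUSES:
        problems.append("attendance_status outside the agreed definition")
    return problems

# A complete, valid record produces no problems.
print(check_record({"participant_id": "P01", "session_date": "2024-01-05",
                    "attendance_status": "attended"}))  # → []
```

Checks like this, run at the point of entry, catch incomplete or inconsistently coded records before they distort the KPI or any benchmarking built on it.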
How the indicators will be measured depends on what resources are available. Examples of possible
approaches to collect data include using surveys, pre- and post-activity questions, and goal
attainment checklists. It is also useful to use qualitative observations to complement the measures
as they provide fresh insights that may not have been captured by the checklists.
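A minimal sketch of how pre- and post-activity questions might be compared, using invented quiz scores for three hypothetical participants:

```python
# Invented pre/post knowledge-quiz scores for the same three participants.
pre_scores  = {"P01": 4, "P02": 6, "P03": 5}
post_scores = {"P01": 7, "P02": 6, "P03": 8}

# Change per participant: post score minus pre score.
changes = {pid: post_scores[pid] - pre_scores[pid] for pid in pre_scores}
improved = sum(1 for delta in changes.values() if delta > 0)
print(f"{improved} of {len(changes)} participants improved")
```

Even a simple tally like this is more informative when read alongside qualitative observations, which may explain why some participants (here, "P02") showed no measured change.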
It is always useful to consider what aspect of the service or intervention was most or least useful, and what
other factors were important in achieving a change. At the heart of this is recognising that it may not always
be possible to establish cause and effect, or to attribute a change entirely to the programme.
A useful cluster of KPIs to put together would usually comprise the output, the characteristics of the
participants and the outcome. For example, in the area of training, common data collected are outputs such
as the number of training hours clocked and the number of participants who attended. However, these data
are insufficient to determine the effectiveness of training. The quality of the curriculum, the delivery and the
profile of the trainees are more essential in determining the effectiveness of the training programme. These
are the components that need to be considered when choosing indicators in order for them to provide good
insights into the effectiveness of a programme. Further analysis of the data can also reveal more learning
points.
As a general rule, about three comprehensive KPIs with good data serve the purpose of monitoring the
results from a programme or service. They could comprise two KPIs that relate to performance or output and
one that relates to the quality of the results or outcome. Quite typically, a cluster could comprise the volume,
pace and extent of the outreach to the right target group and a measure of the conversion rate (e.g. the
number of participants who moved from being passive recipients to active participants or the number of
non-return or discharged participants).
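As a rough sketch, the conversion-rate measure mentioned above could be computed from participation records (the record format, statuses and figures are invented for illustration):

```python
# Hypothetical participant records: (participant_id, status), where the
# status marks whether the person moved from passive to active participation.
records = [
    ("P01", "active"), ("P02", "passive"), ("P03", "active"),
    ("P04", "active"), ("P05", "passive"),
]

active = sum(1 for _, status in records if status == "active")
conversion_rate = active / len(records)  # proportion who became active
print(f"Conversion rate: {conversion_rate:.0%}")  # → Conversion rate: 60%
```

The same pattern extends to the other measures in the cluster (e.g. non-return rate), provided "active", "passive" and "discharged" are explicitly defined so that everyone records them consistently.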
There may be limitations in collecting data or in determining what a good measure is. At times, we may have
good measures, but not good data. Take the case of tracking the well-being of children, for example. Data
on how many children have certain health conditions and how many repeated a grade in school may not be
available in a way that helps to draw a contributory causal relationship.
At other times, we may have data but lack a good measure. Take the case of religiosity or religious
beliefs and practices. Data from national surveys can tell us about citizens' attendance, but
attendance at religious institutions is an imperfect indicator of religious beliefs and practices.
Other areas where good measures are lacking include parent-child communication and adolescent
development.
KPIs, however, can often complement social indicators to provide a strong composite picture of the effects of
a programme or a policy. It is worth devoting time to determine what is good to measure and how to collect
the data. After all, “what gets measured, gets done.” Conversely, the same adage carries a caution:
measurement can encourage an agency to focus narrowly on the activity being measured, to the detriment
of the programme or service as a whole.
When referring to KPIs, it is constructive to be clear about the meaning of each indicator, the purpose of
collecting the data and how the analysis contributes to the evaluation of the programme or service. A
meaningful set of KPIs should be derived with a shared understanding, where ideas and concepts are kept
simple, contributory factors are clear and claims about the attainment of results are kept circumspect.
More information on the logic model can be found here:
Community Tool Box. (n.d.). Chapter 2, Section 1: Developing a logic model or theory of change. Retrieved from https://ctb.ku.edu/en/table-of-contents/overview/models-for-community-health-and-development/logic-model-development/main
Director-General of Social Welfare
Ministry of Social and Family Development