1. Foreword

The UK Statistics Authority strategy, Statistics for the Public Good, sets out the need for high quality data to inform the UK, improve lives and build for the future. As set out in the strategy, the data revolution continues at pace and there is a big prize for the statistical system, the Civil Service and the UK – if statisticians and analysts have access to the best evidence and can communicate effectively, helping to inform the country while reducing the potential for misrepresentation. 

The Office for National Statistics (ONS) has a lead role to play in this revolution. It benefits from the statutory independence of the UK Statistics Authority, supported by the Code of Practice for Statistics, key ethical principles and its ability to convene experts and analytical resources. The ONS also produces important reference statistics that show how the country is changing. We are responsible for important aspects of the Digital Economy Act, which provides access to data from across the UK in support of our remit. The ONS also has an important role in ensuring that the country's evidence base is inclusive and reflective of the full characteristics of the UK.

The development of effective programmes to deliver the UK Statistics Authority strategy depends on timely and accurate monitoring and evaluation to assess progress against objectives, understand what difference has been made and determine whether it represents value for money. Without continuous monitoring and evaluation, it is not possible to understand how far programmes are achieving their goals, or to generate the evidence needed to understand successes and learn from failures to improve the future delivery of our statistics.

This strategy reflects our strong commitment to maintaining and developing a robust evidence base across the ONS, providing a clear pathway to embedding monitoring and evaluation throughout the organisation. It also recognises our contribution to the cross-government evaluation landscape through the leadership of the Analysis Function and development of the Integrated Data Service, which will provide a secure, integrated data system, allowing analysts from across government access to monitor, evaluate and help design policy.

Professor Sir Ian Diamond

National Statistician, UK Statistics Authority

2. Executive summary

Our vision of the future of evaluation at the Office for National Statistics (ONS) and across government is one of inclusive learning, radical improvements, and ambitious and sustainable programmes. It is of a future where analysts have the capability to deliver robust evaluation, there is clear ownership of standards for evaluation, and evaluation is built into departmental analysis practices. All projects, programmes, and workstreams will be evidence based and continually improved based on their evaluations.

We must overcome fundamental barriers, including:

  • the resourcing of evaluations

  • technical barriers associated with the timing of evaluations and the impact on their scope to influence programme design

  • cultural barriers to ensure a positive and open approach to introducing evaluation practices and publishing reports

This document outlines the ONS evaluation strategy, across four pillars of activity, to address the barriers that the ONS and wider government face in achieving our vision.

The four pillars

  • Pillar one: developing the hub and spoke model for evaluation at the ONS. 

  • Pillar two: embedding evaluation into intervention design and governance processes at the ONS.

  • Pillar three: facilitating a culture of continual learning among colleagues at the ONS. 

  • Pillar four: the role of the Analysis Function and Integrated Data Service in cross-government evaluation. 

Pillar one: developing the hub and spoke model for evaluation at the ONS

This pillar is central to delivering ambitious and sustainable programmes, ensuring that evidence from previous programmes is used to support the design of new projects and programmes.

We will continue to develop the hub and spoke model to strengthen evaluation capability across the organisation and deliver an increasingly coordinated and coherent approach to stakeholder and user engagement for evaluation purposes.

Pillar two: embedding evaluation into intervention design and governance processes

To ensure that our programmes and workstreams are evidence based and continually improved, we need to embed evaluation within the design and delivery of projects and programmes from the initiation stage. This is because the scope to influence programme design and to incorporate and adequately fund evaluations reduces significantly if evaluation is not included from the outset.

We will strengthen the evaluation governance that has been established within existing processes to inform future learning and decision making. The development of the ONS-wide Theory of Change will provide a framework for all projects and programmes to understand their overarching impacts, align themselves to the ONS strategy and allow us to identify potential opportunities to centrally coordinate research that demonstrates those impacts. The development of regular dashboard reporting will provide critical insight to inform decision making and support the creation of the annual impact report.

Publishing evaluation reports in line with our evaluation workplan will ensure that we are transparent. It will also support our commitment to the production of evaluations that include robust methodological designs for evaluating public goods and non-policy programmes.

Pillar three: facilitating a culture of continual learning among colleagues at the ONS

We will continue to develop an inclusive learning environment to upskill colleagues, raising the level of evaluation capability at the ONS to achieve radical improvements to the evaluations of our programmes. Building upon our existing channels, we will seek to facilitate a culture of continual learning and champion the value of evaluation across the organisation.

Pillar four: the Analysis Function and Integrated Data Service

The ONS will contribute to the cross-government evaluation landscape through the leadership of the Analysis Function. The Analysis Function leads on capability building through knowledge sharing and development of guidance, partnering with departments with less-mature evaluation functions to improve capabilities, and providing oversight and direction of The Magenta Book. In addition, the Integrated Data Service will provide a secure, integrated data system, allowing analysts from across government to monitor, evaluate and share evaluation techniques and outcomes.

The strategy, and progress against it, will be reviewed at regular intervals.

3. Introduction

The UK statistics system and architecture of evaluation

The UK Statistics Authority is an independent body at arm's length from government, which reports directly to the UK Parliament, the Scottish Parliament, the Welsh Parliament and the Northern Ireland Assembly. The Statistics and Registration Service Act 2007 (SRSA) established the Authority with the statutory objective of "promoting and safeguarding the production and publication of official statistics that serve the public good". The public good includes:

  • informing the public about social and economic matters
  • assisting in the development and evaluation of public policy
  • regulating quality and publicly challenging the misuse of statistics

Our strategy Statistics for the public good (PDF, 1.09 MB) is an ambitious call to action. It sets out our aims, priorities, mission and values to make the case for change. This strategy covers the principal elements of the UK official statistics system for which the Authority has oversight, including the Government Statistical Service (GSS), the Office for National Statistics (ONS) and the Office for Statistics Regulation (OSR).

The ONS is the Authority's statistical production function and is part of the GSS. Led by the National Statistician, the ONS is the UK's internationally recognised National Statistical Institute and largest producer of official statistics. The ONS produces data, statistics and analysis on a range of key economic, social and demographic topics.

This evaluation strategy outlines the ONS's vision and approach to embedding best practice evaluation as well as its pivotal role in supporting evaluations across government through the Analysis Function, Integrated Data Service and provision of data.

Context

Evaluation has been established as part of the policy cycle for some time as a means of collecting evidence to understand the effectiveness of an intervention and the outcomes it achieves. Several recent developments have highlighted the need for improving evaluation capability in government, including the recent National Audit Office review of government provision and use of evaluation, which included a focus on the Analysis Function's role in integrating quality evaluation into policy development, and the associated Public Accounts Committee inquiry.

There has also been a substantial movement within government to embed evaluation into policy making, for example through the requirements for clear evaluation plans and evidence bases associated with Spending Review bid settlement letters and in departmental Outcome Delivery Plans, and through increasing emphasis on improving transparency of evaluation outcomes. To enable the delivery of this ambitious agenda, the Evaluation Taskforce and Analysis Function Evaluation Support team were established. These teams collaborate with existing networks such as the Cross-Government Evaluation Group to enable a positive change in how evaluation is delivered, built into policy making, and open to scrutiny from the public.

The ONS's Central Evaluation Function was set up to address the changing landscape, meet central government's evaluation requirements of the ONS, and raise the profile and importance of timely and robust evaluations within the department.

Our vision

Our vision of the future of evaluation at the ONS and across government is one of inclusive learning, radical improvements, and ambitious and sustainable programmes. It is of a future where analysts have the capability to deliver robust evaluation, there is clear ownership of standards for evaluation, and evaluation is built into departmental analysis practices. All projects, programmes, and workstreams will be evidence based and continually improved based on their evaluations.

Impact evaluation is central to improving the programmes we deliver at the ONS by facilitating collective learning and accountability. Impact evaluations are now also required by the Cabinet Office and HM Treasury for all large expenditure. The two key aims of delivering evaluations are:

  • learning – evaluations can provide the evidence with which to manage risk and uncertainty, especially in areas that are breaking new ground; early learning can also illuminate what works and what does not, and how this can be improved.

  • accountability – government makes decisions on the public's behalf and spends tax collected from individuals and businesses (as it has a responsibility to maximise public value and outcomes, and ensure policies are effective), and these decisions should be evidence based.

Evaluation is defined by The Magenta Book as "a systematic assessment of the design, implementation and outcomes of an intervention." Delivering a successful evaluation, or one that is fit for purpose, involves understanding how an intervention is being, or has been, implemented and what effects it has, for whom, and why. It identifies what can be improved and estimates its overall impacts and cost-effectiveness.

The importance of monitoring and evaluation

The Green Book (PDF, 1.45 MB) states that monitoring and evaluation are "approved thinking models and methods to support the provision of advice to clarify the social or public welfare costs, benefits, and trade-offs of alternative implementation options for the delivery of policy objectives". Evaluation is important as it provides "objective analysis to support decision making". The Analysis Function Standard more widely sets expectations for the planning and undertaking of analysis across government and provides a high-level overview of the role of appraisal and evaluation within this.

Critical to understanding why the ONS has prioritised monitoring and evaluation, Sir Ian Diamond, the National Statistician and Head of the Analysis Function Board, notes in his foreword to The Magenta Book that: "High quality monitoring and robust evaluation of our programmes and projects will provide the evidence that the data and analysis can enable decision-makers to better target their intervention; reduce delivery risk; maximise the chance of achieving the desired objectives; and increase understanding of what works".

Sir Ian Diamond added that "without robust, defensible evaluation evidence, government cannot know whether interventions are effective or even if they deliver any value at all. Routine, high-quality evaluation is part of a culture of continual improvement and should be core to the work of all government departments".

Monitoring and evaluating programmes, projects and policies will also assist the ONS in achieving its organisational strategy, which is to produce data that will inform the UK, improve lives and build the future (Statistics for the public good: 2020 to 2025).

A comprehensive monitoring and evaluation framework will help the ONS track its progress towards the delivery of transformation, assess the plan outcomes, evaluate the public good impact (planned, unplanned, positive or negative) and offer learnings.

Monitoring and evaluation principles

The ONS principles for delivering fit-for-purpose evaluations are those outlined in The Magenta Book. For an evaluation to be "fit for purpose", it should be:

  • useful – designed to meet the needs of the many stakeholders involved and produce usable outputs at the right point in time

  • credible – ensuring transparency and objectivity

  • robust – well designed, with an appropriate evaluation approach and methods, and well executed, using independent peer review and independent steering to help quality-assure the design and execution of an evaluation where possible

  • proportionate – using a criteria-based approach to determine the appropriate level of monitoring and evaluation

Case study on proportionate evaluation approaches used in the ONS: the Analytical Hub 

The Analytical Hub was set up to bring analysts together from across the ONS to provide a centre of multi-disciplinary analytical and methodological capability dedicated to cross-cutting analysis. 

Its main functions are to: 

  • deliver towards the ONS Strategic Business Plan's ambition to deliver a "radically increased level of cross-theme analysis that cuts across government and societal boundaries" 

  • respond flexibly and in a timely way to the uncertainties and challenges of coronavirus (COVID-19), the nation's recovery from it and the ongoing economic and public policy priorities of the day

  • address cross-cutting issues and produce analysis to inform and engage governments, policy makers and the wider public

The purpose of the evaluation is: 

  • to understand the impacts of our work, and to evaluate to what extent ONS statistics support users to make better decisions for the public good 

  • to share lessons learnt and continuously improve our ways of working 

We will gather evidence by building case studies and using qualitative interviews, surveys, sources of user engagement and management information relating to the volume of queries, outputs and associated response times. We are using contribution analysis to understand the likelihood that the intervention has contributed to outcomes identified within the Theory of Change. The evaluation is being resourced internally, including the use of ONS Evaluation Champions, rather than commissioning external evaluators.

The first annual evaluation report is forthcoming.

4. Implementing the monitoring and evaluation strategy

Establishing monitoring and evaluation at the Office for National Statistics (ONS)

The Central Evaluation Function was created in October 2020, following the 2020 Spending Review. Activities that have taken place since then within the evaluation space have enabled us to make significant progress towards overcoming fundamental, technical and cultural barriers to evaluation, including:

  • establishing the hub and spoke model to support the resourcing of evaluations

  • establishing evaluation governance and integrating evaluation best practice into business case development to address issues with the timing of evaluations, increase the scope to influence project and programme design, and develop robust methodologies from the outset

  • leveraging senior management support to communicate the importance of evaluation across the ONS and establish the Evaluation Champions Network

The implementation of the ONS evaluation strategy will be critical to building upon that progress and embedding evaluation across the organisation.

Pillar one: developing the hub and spoke model for evaluation at the ONS

The hub and spoke model

The ONS operates a hub and spoke model for evaluation capability where guidance and support are provided by the hub while evaluation activities are performed in the spokes.

Following the Spending Review 2021 (SR21), the National Statistics Executive Group (NSEG) approved plans to allocate a minimum of 1% of a programme’s budget to evaluation. This currently enables the direct resourcing of evaluation spokes within the programmes and will support development of the existing hub over time to enable the organisation to deliver more effectively and efficiently on the main commitments that support the ONS vision for evaluation.

Programmes at the ONS share the challenge of measuring the impact of our statistics and analysis, and the need for a coordinated and coherent approach to stakeholder and user engagement for evaluation purposes. Plans to develop the capability of the hub will deliver against these needs, including:

  • coordinating reputational research

  • joint procurement exercises

  • creating a dataset of evaluation activity

  • upskilling colleagues through training and guidance

  • supporting best practice

  • working with a broader portfolio of projects and programmes

  • maintaining a strong Evaluation Champions Network and stakeholder relations with other departments, the Cross-Government Evaluation Group, and the Evaluation Taskforce

Pillar two: embedding evaluation into intervention design and governance processes at the ONS

Creating strategic alignment with the ONS-wide Theory of Change

An Organisational Theory of Change (OToC) will be developed to support the alignment of programme monitoring and evaluation plans at an organisational level. The OToC will highlight our organisation-wide impacts and the metrics we will use to measure performance against the strategic objectives, and it will form the basis of our impact report and impact dashboard.

We have also recently established a Task and Finish Group with the aim of developing the impact dashboard, which will support learning and decision making at different levels across the organisation. We will use the dashboard, alongside programme evaluation reports, to inform the future ONS annual impact report.

Alignment of evaluation planning with investment appraisal and governance

The planning of monitoring and evaluation for spending proposals should follow Her Majesty's Treasury Better Business Case guidance for both programmes and projects, and this guidance has been integrated into business case development at the ONS. This allows us to:

  • use a wide range of analytical and logical thinking tools when considering potential intervention options

  • bring evaluation to the forefront of programme and project design

  • determine a proportionate approach and accountability for delivery of the evaluation

Evaluation best practice is integrated into ONS business case templates and ONS programme and project management lifecycle guidance, aligned to Her Majesty's Treasury's five case methodology and Magenta Book guidance on the evaluation planning expected at each appraisal stage: strategic outline business case, outline business case and full business case.

Practices outlined within the ONS programme evaluation cycle section naturally support the development of the business case; namely, the Theory of Change, which provides an important foundation and focus for development of the strategic case. Subsequently, the detailed monitoring and evaluation framework requirements are integrated into the management case as programmes and projects develop their management arrangements from outline business case through to full business case.

The size and complexity of new projects or programmes determine the evaluation requirements and the level of service to be provided by the Central Evaluation Function.

Monitoring and evaluation plans (including Theory of Change) are approved by the Programme Evaluation Group as part of the ONS investment approval process.

Resourcing evaluations

Planning and provision of resources for monitoring and evaluation should be proportionate when judged against the costs, benefits and risks of a proposal both to society and the public sector. Prescriptive guidance on how much to budget for an evaluation is difficult given the varying degrees of existing evidence and the size of different programmes. Following the Spending Review 2021 (SR21), the National Statistics Executive Group (NSEG) approved plans to allocate a minimum of 1% of a programme's budget to evaluation.

Overview of the ONS programme evaluation cycle

The ONS programme evaluation cycle comprises three main phases: business case development, evaluation set up and monitoring. Each phase is described in more detail in the following sections. The Central Evaluation Function currently supports programmes to undertake the activities and develop the relevant products within each of these phases.

Programme management approaches are essential to successful delivery of change, benefits and outputs within the ONS, and ensuring that value for money is achieved. ONS programmes follow a standard lifecycle based on the government's project delivery functional standard definition of phases:

  1. Identifying

  2. Initiating

  3. Managing

  4. Closing

  5. Reviewing

The ONS programme evaluation cycle has been aligned to the programme management lifecycle to establish evaluation best practice within existing programme design and governance processes.

The initial focus of the Central Evaluation Function is to support portfolio programmes with impact evaluations. Impact evaluation focuses on "what difference has an intervention made", or the changes caused by an intervention. These will be measurable achievements that either are, or contribute to, the objectives of the intervention.

The next iteration of the ONS evaluation strategy will explore the use of process evaluation ("What can be learned from how the intervention was delivered?") and value for money evaluation ("Is this intervention a good use of resources?") in greater detail.

Business case development

Stakeholder mapping

Stakeholder maps are used to identify all the teams or individuals interacting with a project or programme, and they help with planning stakeholder management strategies. A stakeholder map grades stakeholders by their importance to the programme and recommends how to manage them according to their influence over and interest in the work. Influence diagrams can also be used.
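As a purely illustrative sketch (not an ONS tool or template), the following Python snippet shows one way a simple influence and interest grid could be held in code to suggest a management strategy for each stakeholder; the stakeholder names, scores and thresholds are hypothetical.

```python
# Illustrative sketch only: a minimal influence/interest grid for stakeholder mapping.
# The thresholds, stakeholder names and recommended strategies are hypothetical.

def stakeholder_strategy(influence: float, interest: float) -> str:
    """Return a suggested management strategy from a simple 2x2 influence/interest grid."""
    if influence >= 0.5 and interest >= 0.5:
        return "manage closely"      # high influence, high interest
    if influence >= 0.5:
        return "keep satisfied"      # high influence, low interest
    if interest >= 0.5:
        return "keep informed"       # low influence, high interest
    return "monitor"                 # low influence, low interest

stakeholders = {
    "Programme board": (0.9, 0.8),
    "Data suppliers": (0.7, 0.3),
    "External researchers": (0.2, 0.9),
}

for name, (influence, interest) in stakeholders.items():
    print(f"{name}: {stakeholder_strategy(influence, interest)}")
```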

Influence diagram

Influence diagrams show the key elements within an intervention, how they influence one another, and the dependencies. For example, one key aspect of a programme might be the working relationship between the programme team and the customers. A second aspect might be the funding available for the programme. The second may influence the first by affecting the resource available to dedicate to the relationship between the programme team and the customers.

Within any intervention, there are multiple elements or aspects, which can affect one another in many different and complex ways. Influence diagrams aim to capture all these in one place to inform the delivery of a project or programme.
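The sketch below, again purely illustrative and with hypothetical element names and links, shows how the dependencies in an influence diagram could be captured as a small directed graph and traversed to see which elements a single aspect of a programme ultimately affects.

```python
# Illustrative sketch only: an influence diagram represented as a directed graph.
# Element names and links are hypothetical, not taken from an ONS programme.

influences = {
    "Programme funding": ["Team resource"],
    "Team resource": ["Relationship with customers"],
    "Relationship with customers": ["Uptake of outputs"],
}

def downstream(element, graph):
    """List every element that a given element ultimately influences."""
    reached, frontier = set(), list(graph.get(element, []))
    while frontier:
        node = frontier.pop()
        if node not in reached:
            reached.add(node)
            frontier.extend(graph.get(node, []))
    return sorted(reached)

print(downstream("Programme funding", influences))
# ['Relationship with customers', 'Team resource', 'Uptake of outputs']
```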

Theory of Change

A Theory of Change is a framework that outlines why a project or programme is needed, what it aims to achieve, and how it will achieve its aims. In more detail, it shows why the chosen approach will be effective and how change happens in the short, medium, and long term to achieve the intended aims. It is a project or programme's "theory" about how their work will change the current status.

A Theory of Change draws on existing literature and past interventions to predict the outcomes and impacts that a piece of work will have, as well as capturing the "need" or rationale for the piece of work by assessing the current status of the situation. This clarity of thought is crucial for not only the evaluation but the programme as a whole.
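As a hedged illustration only, a Theory of Change could be captured as a simple structured record, as in the Python sketch below; the rationale, activities, outcomes and impacts shown are hypothetical placeholders rather than content from any ONS programme.

```python
# Illustrative sketch only: a Theory of Change captured as a simple data structure.
# The content of each stage is hypothetical and would come from the programme team.

theory_of_change = {
    "rationale":  "Users lack timely statistics on topic X",
    "inputs":     ["Analyst time", "Survey platform"],
    "activities": ["Collect data", "Publish quarterly bulletin"],
    "outputs":    ["Quarterly bulletin", "Open dataset"],
    "outcomes":   ["Policy teams cite the bulletin in advice"],
    "impacts":    ["Decisions on topic X are better evidenced"],
}

for stage in ("inputs", "activities", "outputs", "outcomes", "impacts"):
    print(f"{stage}: {', '.join(theory_of_change[stage])}")
```

Holding the stages in one record like this makes the causal chain from inputs through to impacts explicit, which is the clarity of thought referred to above.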

Logic framework

A log frame (logical framework) is an important document that provides an overview of a project or programme. Logic frameworks at the ONS include the aims, measures of success, stakeholders, risks, assumptions and dependencies, and a timeline for delivery. The framework operates as a matrix so that teams can identify how the planned activities, aims and impacts map onto plans for stakeholders, measures of success, strategic goals, and programme timelines. This provides the foundation for the delivery workplan for a project or programme.
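The following minimal sketch, with hypothetical field values, shows how one row of a logic framework might be held as a structured record so that aims, measures of success, stakeholders, risks and timelines stay mapped to one another.

```python
# Illustrative sketch only: one row of a logic framework (log frame) held as a record.
# Field values are hypothetical placeholders.

logframe = [
    {
        "aim": "Improve timeliness of statistics on topic X",
        "activity": "Automate the data pipeline",
        "measure_of_success": "Bulletin published within 6 weeks of period end",
        "stakeholders": ["Policy users", "Data suppliers"],
        "risks_and_assumptions": "Supplier data arrives on schedule",
        "timeline": "Q3 2023",
    },
]

for row in logframe:
    print(f"{row['aim']} -> {row['measure_of_success']} (by {row['timeline']})")
```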

Evaluation set up

Monitoring and evaluation framework

The monitoring and evaluation framework identifies the indicators required to monitor the progress of an intervention and evaluate its impact. Its purpose is to support programme managers in continuously monitoring the results of the intervention, making informed decisions at important points in the delivery of the intervention, based on timely data, and providing effective departmental reporting on the progress of the intervention. It will also ensure that the information that will support the final evaluation is being gathered at the appropriate times.

Each output and outcome (immediate, intermediate, and final) identified in the Theory of Change should have a corresponding performance indicator that should be used for day-to-day programme monitoring as well as for evaluation purposes.

The list of important performance indicators should be relatively brief to remain proportionate and relevant. The purpose of these data should be explained to those collecting the data and monitoring the indicators.

There are two types of indicators:

  • quantitative performance indicators are made up of a number and a unit – examples include "number of mentions in academic literature" or "number of people accessing a website"

  • qualitative indicators represent qualitative assessments (for example, "excellent", "average", "below average") – these should stay consistent over time to allow for comparability

The criteria for good important performance indicators are:

  • reliability – would the data collected be the same if collected repeatedly under the same conditions and at the same point in time?

  • affordability – is the data collection cost-effective?

  • availability – are the data necessary for the indicator readily available for multiple collections?

  • relevance – does the indicator clearly track back to the output and/or outcomes of the intervention?

Performance data can only be used for monitoring and evaluation if there is something to which the data can be compared. The baseline serves as the starting point for comparison while the target serves as the end point for comparison. Thus, the baseline and the target allow us to assess the contribution of the intervention. This highlights the importance of developing a monitoring and evaluation plan during project initiation to ensure that a baseline can be gathered before the start of any activities.  

Just like the important performance indicators, the baseline data and target data can be either quantitative or qualitative. The sources that will be used for data collection can be identified at the same time as the important performance indicators.
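As an illustrative sketch only, the Python snippet below shows how a quantitative indicator with a baseline, target and latest value could be compared to give a simple measure of progress (qualitative indicators would be assessed descriptively); the indicator names and values are hypothetical.

```python
# Illustrative sketch only: performance indicators with baselines and targets.
# Indicator names, baseline and target values are hypothetical.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float
    latest: float

    def progress(self) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        return (self.latest - self.baseline) / (self.target - self.baseline)

indicators = [
    Indicator("Mentions in academic literature", baseline=10, target=40, latest=25),
    Indicator("Monthly website visits (thousands)", baseline=50, target=80, latest=56),
]

for ind in indicators:
    print(f"{ind.name}: {ind.progress():.0%} of the way to target")
```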

Possible data sources include:

  • administrative data

  • secondary data (information that has been collected for other purposes)

  • primary performance data (data obtained through collection exercises tailored to the intervention, such as stakeholder focus groups or stakeholder surveys)

When assigning the responsibility of data collection, it is important to answer the following questions:

  • Will the data collection process be conducted by an internal or external team or individual?

  • If the responsible party will be internal, which parties have the easiest access to the data sources identified?

  • What systems need to be put in place to facilitate data collection?

Monitoring

Mid-term evaluation

At the programme planning stage, you should have identified a suitable point to hold a mid-term evaluation. A mid-term evaluation usually takes place either mid-way through the duration of the programme or mid-way through the completion of your important milestones, whichever is most appropriate to your area of work, to check whether the programme is on course to deliver the expected impacts.

Case study: mid-term evaluation of the COVID-19 Infection Surveillance Programme 

The emergence of the coronavirus (COVID-19) pandemic in early 2020 led to the Department of Health and Social Care (DHSC) commissioning the Office for National Statistics (ONS) to develop and deliver a UK-wide surveillance programme of COVID-19 infection at pace. Outputs from the COVID-19 Infection Survey (CIS) and Schools Infection Survey (SIS), alongside wider activities, formed valuable sources of information that have helped the Government make informed decisions about how to manage the coronavirus pandemic. 

The programme developed a Theory of Change that identified important outcomes and impacts ultimately aligned to the production of "Statistics for the Public Good" and a monitoring and evaluation framework that supported their measurement. 

The mid-term evaluation focused on the ONS's role in the programme and was informed by internal data gathering, analysis and research carried out with delivery staff and study participants, social media activity and media interest. These sources of evidence were used to provide a high-level test of the Theory of Change that was developed for the programme, with the intention that an external evaluation would fully test the anticipated outcomes and impacts of the ONS's role in the programme.

Recommendations from the mid-term evaluation were presented around the following themes:

  • programme resourcing

  • upskilling staff through partnerships

  • stakeholder analysis and engagement

  • communications

  • setting tolerance levels for deviating from standard survey processes

  • the development of a playbook for future programmes in response to health emergencies 

A full independent external evaluation of the ONS's role in the COVID-19 Infection Surveillance Programme is also planned.

End-term evaluation

A final evaluation is usually conducted once a project or programme has completed, or afterwards, depending on when you can reasonably expect your outcomes and impacts to be realised.

Dissemination of evaluation results

All evaluations conducted at the ONS will be published and their outcomes will be stored centrally in a "lessons learned" log to be used as evidence of what works for new programmes in the future. All projects and programmes are encouraged to further disseminate their evaluation reports and findings via communications with their stakeholders, colleagues, and at conferences. All ONS mid-term and final evaluations will be published on the ONS's website and on GOV.UK via the Evaluation Task Force, where appropriate and not considered sensitive.

Publication

Final evaluation reports should be completed within three months of programme closure, and reports and associated data sources should be published within six months of programme closure. The ONS maintains an open, honest, transparent approach to publication of evaluation findings aligned to the Government Social Research Publishing Protocol, which presents principles for the publication of all government social research. This includes outputs from the evaluation of policy and delivery initiatives, and pilots and trials.

Programmes will publish evaluation reports on the ONS website with the support of the publishing, content and design teams, aligned to the timings within the ONS evaluation workplan.

Pillar three: facilitating a culture of continual learning at the ONS

Cultural and capability-based barriers are being addressed through training, communication, setting up guidance and templates, and managing a growing Evaluation Champions Network. We are doing this to embed evaluation across the department and ensure evidence and theories of change are used to support programme development and inform decision making.

Communication is important in demonstrating the value of evaluation tools (for example, Theory of Change) to support collaborative planning beyond the portfolio and evaluation.

Embedding evaluation into ONS culture

The perception of evaluation across the ONS is becoming increasingly positive. Providing colleagues with evaluation tools, training, and open dialogue has resulted in positive cultural shifts. Communications and positive engagement between the Central Evaluation Function and the Evaluation Champions Network, alongside support from the Senior Leadership Team and the National Statistician, have encouraged ONS staff to work towards our vision of the future of evaluation at the ONS.

Improving capability

Intranet and the Evaluation Hub

The Central Evaluation Function has developed, and will continue to maintain, the evaluation homepage on the planning and project delivery community hub to communicate news, evaluation standards, guidance, tools, templates, and training across the ONS.

Evaluation Champions Network

Established in April 2021, the Evaluation Champions Network, managed by the Central Evaluation Function, is currently a group of over 60 colleagues from all grades and professions across the ONS with experience or an interest in evaluation.

The purpose of the Evaluation Champions Network is to deliver a connected, consistent approach to evaluation across the ONS by:

  • promoting evaluation best practice across the business

  • identifying areas for improvement

  • reducing duplication of evaluation work across the ONS

  • improving evaluation capability and capacity across the ONS

  • establishing a pro-evaluation culture at the ONS

  • sharing knowledge and providing support and resource across the ONS

Case study: establishing the ONS Evaluation Champions Network 

We are currently conducting an impact evaluation to assess the extent to which the Evaluation Champions Network has made a difference to evaluation practice and culture across the ONS.  

As a result of the comparatively low profile and small scale of the intervention, we will conduct the evaluation on a small scale, focusing on the important questions of whether the network has achieved its intended outcomes and impacts. 

Evaluation activities have included a skills audit, output and performance monitoring, surveys and network analysis. We will collect additional data via in-depth interviews with programmes that have developed monitoring and evaluation plans. All data collection outputs will feed into a mixed methods contribution analysis and pre- and post- analysis. 
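As a purely illustrative sketch of the pre- and post- strand of such an analysis, the snippet below compares hypothetical survey scores collected before and after an intervention; the scores are invented for illustration and are not findings from the network evaluation.

```python
# Illustrative sketch only: a simple pre/post comparison of survey scores, as one
# strand of a mixed methods analysis. The scores below are hypothetical.

from statistics import mean

pre_scores = [3, 2, 4, 3, 3]    # e.g. agreement that "evaluation is useful", before
post_scores = [4, 4, 5, 3, 4]   # the same question asked after the intervention

print(f"Pre mean:  {mean(pre_scores):.2f}")
print(f"Post mean: {mean(post_scores):.2f}")
print(f"Change:    {mean(post_scores) - mean(pre_scores):+.2f}")
```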

The evaluation has, so far, highlighted improvements attributable to the introduction of the Evaluation Champions Network. Firstly, champions made new connections across the ONS and valued being a part of the network, and the professional development opportunities available to champions helped to progress their career goals. There was an increase in pro-evaluation attitude within ONS teams and team members perceiving evaluation as useful for learning and accountability. Information, knowledge and learning regarding evaluations were also shared across the ONS. 

Training

To facilitate continual learning and increase capability, the Central Evaluation Function runs a regular programme of evaluation training and held the ONS's first evaluation month in February 2022.

The Evaluation Champions Network meets every month for two hours. The first hour consists of talks and training sessions that are open to all ONS colleagues, while the second hour is reserved for the Evaluation Champions Network and includes an information cascade, tasks, and show-and-tell sessions.

Ad hoc training sessions also take place, particularly on topics colleagues have requested via surveys or feedback forms.

The ONS's evaluation month included further training, talks from guest speakers across government and showcases of evaluation practice at the ONS. An evaluation log has been established and regularly maintained with approved monitoring and evaluation plans stored centrally and shared across the network.

All ONS training sessions are recorded and stored on the hub with their presentations and transcripts, so colleagues can complete the training at a time that suits them or refer back to sessions they have previously attended.

Colleagues are also encouraged to attend other evaluation training and events outside the ONS.

Communications

The Central Evaluation Function communicates regularly via an evaluation newsletter and on the ONS's intranet via news and blogs, and cascades information via the private office, the Evaluation Champions Network, all-staff emails, and during information cascade meetings by directorates and all-staff meetings. The Central Evaluation Function uses multiple communication channels to ensure that everyone at the ONS has the chance to understand new evaluation requirements, can see guidance and templates, understands the requirements, and has a direct channel to respond to the Central Evaluation Function with questions and feedback.

Openness in communication, and explaining evaluations, their purpose, and the importance of evaluation at the ONS, has helped tackle the fear of negative outcomes and consequences from publishing reports, and has improved the perception of evaluation at the ONS.

Introducing evaluation into inductions

To embed the cultural change, in 2022 the ONS will introduce a module on the basics of evaluation within its inductions for new staff. This module will cover the why, when and how of evaluation at the ONS, with links to the hub and training, and explain how to contact the Central Evaluation Function for help or join the Evaluation Champions Network.

Pillar four: the Analysis Function and Integrated Data Service

The Analysis Function's approach

The Analysis Function supports the government's 17,000-strong community of analysts and works in close collaboration with the member professions to deliver our mission. The Analysis Function strategy, which will be updated shortly, sets out our vision for analysis in government, our priorities for the function and the support that will be provided to analysts. In 2021, the Analysis Function Strategy and Delivery (AFSD) division was established within the ONS to address the top priorities across government and ensure core functional responsibilities are being met. The new division works closely with partners across the ONS and serves the wider government analysis community. The division's leadership has agreed the important priorities of the function with Departmental Directors of Analysis, aligned to the Cabinet Office's Government Functional Standard (PDF, 365 KB).

Evaluation was identified as a cross-cutting topic requiring AFSD support. This was highlighted by the National Audit Office's (NAO) Evaluating government spending report (PDF, 469 KB) published in 2021, which identified the need to further strengthen the central coordination and leadership of government analysis to set consistent standards and solve cross-system problems. The AFSD will work with partners across the ONS including the Central Evaluation Function and the Analysis Function to deliver improvements to the system in which evaluations are developed and implemented, including the governance of standards and the provision of support to analysts across government. The AFSD will draw on best practice in the ONS where possible and look to share this across the analytical community where appropriate. The division's work on evaluation covers:

  • developing the Analysis Function standard for analysis, and accompanying assessment framework, including guidance on evaluation and appraisal (NAO recommendations 27 c and d)

  • establishing an Analysis Function Standards steering group to provide governance and oversight of The Magenta Book appraisal (NAO recommendation 27 c)

  • providing guidance, support and best practice on priority topics for the evaluation community such as integrating Theory of Change into policy development and value for money evaluation appraisal (NAO recommendations 29 h)

  • working with departments that have less-mature evaluation functions to ensure they have the right tools and support to deliver effective evaluations

  • undertaking a review of analytical capabilities of the policy profession, including on evaluation, and developing products to improve analytical capabilities in priority areas (NAO recommendations 30 k)

  • working with the Cross-Government Evaluation Group to assess the needs of evaluation practitioners across government and develop products accordingly (NAO recommendations 30 j)

The Integrated Data Service

Evaluation will require cross-cutting analysis to improve knowledge of what works. Analysis through the Integrated Data Service can streamline the available evidence and make evaluation easier to do efficiently for departments. The potential benefits of a secure, integrated data system, allowing analysts from across government access to monitor, evaluate and help design policy, are huge.

Such an integrated approach can support better policy formation, delivery and evaluation, and it can facilitate a better and more common understanding of priority government policy areas. It can make data available to researchers in an infrastructure that is safe and secure. To give some comparison, the census is due to provide over £5 billion worth of benefits, broadly built on one large survey giving near 100% coverage. A new Integrated Data Service enabling deeper and ongoing analysis will allow benefits to accrue in terms of better policymaking, granular assessment of what works and evaluation, efficient allocation of public funds, efficient public services, and better interventions tailored to population characteristics and locality.

ONS evaluation workplan

The department's evaluation workplan is outlined below, setting out the programmes to be evaluated over the Spending Review period, including their plans for evaluation, types of evaluation and indicative publication dates.

Programme

Integrated Data Programme (IDP):

The Integrated Data Programme is responsible for delivering the Integrated Data Service (IDS) for Government that will bring together ready-to-use data to enable faster and wider collaborative analysis for the public good.

The IDS remains the best means to maximise the impact of data analysis and data science in the UK public sector. The IDS provides a secure analytical environment enabling cross-government access to linked versions of the UK’s richest datasets. It provides the cutting-edge cloud-based tools (such as Python and R) to analyse those data safely, securely and with ethical and legal considerations fully managed.

The IDS enables a leap forward in efficiency, making it possible to run more comprehensive policy evaluations earlier, and to share prepared data across departmental analytical teams so that we can solve multiple problems at once (rather than repeating the same analysis in silos). It enables us to tackle cross-cutting policy problems by drawing together data covering multiple disciplines (for example, bringing together data on health, crime, and the labour market).

The IDS will aim to improve decision making through evidence while making government more effective. Two broad measures of successful outcomes are:

  • IDS improves UK-wide decision making; aiding policy development and, crucially, policy impact evaluation, which will take place earlier in, and continue throughout, the policy life cycle

  • IDS is a value-for-money service

The key benefits and impacts for the programme, in line with the Theory of Change, are:

  • increased speed and lower costs for government departments in identifying, acquiring, accessing and processing data to undertake analysis and reduce platform costs, via the integrated data service and virtualised data-sharing cloud facility

  • social benefits through better statistical and analytical outputs, delivering enhanced public policy outcomes

  • increased culture of cross-government statistical and analytical collaboration, for the greater public good, that builds greater trust in statistics

For more information visit the Integrated Data Service website.

Evaluation type

Impact, process, value for money

The programme will be taking the following approach to evaluating the outcomes of the programme:

  • develop quantifiable metrics, together with deeper qualitative insight from users, to ensure the production of the high-quality data and analysis the UK needs and an inclusive, trusted, and engaging narrative on our social fabric and trends; metrics include web metrics, media reach from Prime Research, sentiment analysis on social media, citations in policy documents, business reports and speeches, and the monitoring of errors and breaches

  • evaluate outcomes from analytical projects and develop case studies throughout the programme as they help to inform and shape the IDP’s rollout

  • undertake surveys of Government senior analysts and policy makers on the availability, use and impact of the integrated data service

  • provide externally commissioned independent, value-for-money reviews

Evaluation Report Publication

  • Annual Report: Quarter 1 (Jan to Mar) 2023

  • Annual Report: Quarter 1 2024

  • End-Term Evaluation: Quarter 3 (July to Sept) 2024

Programme

Coronavirus (COVID-19) Infection Surveillance (CIS) Programme

The Programme aims to be the trusted source of detailed and timely data and analysis on the coronavirus (COVID-19) pandemic, providing regular insights on incidence, prevalence and transmission of infection, on antibody prevalence and an understanding of the social impacts of the pandemic across the UK population. It aims to enable informed decision making by citizens, services and Government, and to be recognised as playing an important role in the UK’s battle against the virus, while striving to offer a lasting legacy for ONS’s future health monitoring.

The impacts identified within the Theory of Change for the programme include:

  • decision making about the coronavirus pandemic is evidence based

  • the ONS is more responsive to user needs – informing the wider debate on COVID-19 with a trusted, high-quality, timely, open and collaborative service – so policy becomes more evidence based for the public good

Evaluation type

Impact, process

The CIS Programme was set up to respond to the urgent requirements of the pandemic. These requirements have continued to change over the course of the programme as the disease and its effects have evolved. It was not possible to design or build in evaluation before Programme activities began nor to plan any experimental evaluation methods or collection of a counterfactual.

The evaluation methods proposed fall within Section A5 of the Magenta Book – Generic research methods used in both process and impact evaluation.

Evaluation Report Publication

End-term evaluation: Quarter 4 (Oct to Dec) 2022

Programme

Census and data collection transformation programme (CDCTP)

The core objectives of the programme are to:

  • deliver a successful Census 2021

  • provide evidence to support a decision, to be made in 2023, about the future provision of population statistics after this Census

  • transform how we collect, process, and analyse data

The outcomes and impacts identified within the business case for the programme include:

  • better-informed decisions made by central Government – improved statistics used by central Government for resource planning, service planning, targeting investment and policy making and monitoring

  • better-informed decisions made by local Government – improved statistics used by local Government for resource planning, service planning, targeting investment, policy making and monitoring

  • better-informed decisions made by commercial organisations – improved statistics used by commercial organisations for targeting investment, market research and statistical benchmarking

  • better-informed decisions made by society – improved statistics used by society informing public debate, for example on health of carers, changes in religious affiliation or migration

  • improved efficiency across the ONS and the Government Statistical Service (GSS) through reuse of services developed by the Census

Evaluation type

Impact, process, value for money

The programme was in place and a substantial way through its life cycle when the Central Evaluation Function was established in 2020. However, the following evaluation activity was planned throughout its life cycle:

  • testing the Census – the Census underwent a small-scale test in 2016, a large-scale test in 2017 and rehearsal in 2019, which contributed to understanding the best way to implement the design of Census 2021, and helped reduce risk and uncertainty

  • user research – conducted on the questions included in the Census questionnaire, Central Digital and Data Office (CDDO) assessment of the electronic questionnaire and Assisted Digital offering, behavioural insight analysis and tests on communications

  • assessment of benefits of Census 2021 published – evidence of how the benefits of the programme are being realised will begin to be captured once the Census outputs become available via a dedicated project; the benefits realisation project will evaluate whether the forecast benefits have been realised and will include engagement with stakeholders in local and central government and the private sector, using a variety of methods to communicate with those stakeholders and to collect evidence for the evaluation

  • National Statistician’s Recommendation about the future approach to the Census and population statistics – the 2023 National Statistician’s Recommendation will also cover recommended approaches to future Census and data collection, evaluating the evidence produced by the output projects within the programme; consultations will take place over the coming years alongside this research ahead of publication

  • general report – the final review of the programme will come via the general report, taking a similar approach to the 2011 Census General Report; the report will require parliamentary sign off ahead of being published to inform the public on how the Census programme performed

  • major programme assurance – we report on a quarterly and annual basis to the Infrastructure and Projects Authority (IPA), and data is published within each IPA annual report; regular gateway reviews are undertaken on the programme

Evaluation Report Publication

  • Census General Report: Quarter to be confirmed 2023

  • National Statistician's Recommendation: Quarter 4 2023

  • Assessment of the Benefits of Census 2021: Quarter 4 2024

Programme

Ambitious, Radical and Inclusive Economic Statistics Programme (ARIES)

The aim of the ARIES programme is to provide clear, insightful economic, social and environmental statistics and analysis to inform decision making across the UK in an inclusive, timely and sustainable way.

In doing so, the programme scope included within the business case has three clear spending objectives:

  • improve our core statistical offering, maintaining international standards and comparability, in line with user needs

  • exploit new data sources and innovative methods to inform better quality, more timely and relevant statistics

  • mitigate the risk to existing economic statistics and Environmental Group outputs

The impacts identified within the Theory of Change for the programme include the provision of access to more granular, timely, and accurate economic, social, and environmental statistics, which:

  • inform evidence-based decisions and debate by Government, the private and third sectors, and the public

  • provide better policy evaluations for policy makers

  • offer increased user satisfaction

Evaluation type

Impact, process

Process evaluation will measure the progress of activities and outputs within the Theory of Change against plan implementation.

Outcome/Impact evaluation will provide evidence of the contribution of the programme towards the impacts identified within the Theory of Change.

Surveys and administrative data will be used to understand how the programme meets the needs of users and the level of satisfaction.

Evaluation Report Publication

  • The timing of Annual reports (undertaken after the end of each financial year) and the mid-term evaluation of the programme are to be confirmed

  • End-term evaluation: Quarter 3 2025

5. Next steps

This strategy has outlined the department's sustained ambition for monitoring and evaluation. It has also highlighted the ONS's pillars of activity designed to address barriers to good evaluation, alongside the role of the Analysis Function and Integrated Data Service.

This strategy has two versions – an outward facing document and an inward facing document linked to guidance and training on our intranet. We encourage discussion and feedback from our stakeholders. Please direct any questions or comments to evaluation@ons.gov.uk.

We will renew this strategy by 2027, and we will update it more regularly as evaluation develops at the ONS.

6. ONS organisational theory of change

The ONS Organisational Theory of Change is used to ensure that new business cases and monitoring and evaluation plans align with the organisation's current five-year strategy. The model will be used in organisation-level monitoring and evaluation activities, including the annual impact report. The model is updated annually in line with the organisation's strategic business plan and future organisational strategies.

ONS organisational theory of change model
