1. Main points

  • From 26 March 2024, annual public service productivity estimates will include improvements to the measurement of quantity output, quality adjustment and inputs for healthcare.

  • The estimates will also incorporate improvements to quantity output and the quality adjustment of education.

  • Changes in healthcare data and methods will have a minimal impact on the final output measure, reflecting the fact that the components being updated have small cost weights relative to hospital and community health services.

  • Improvements in education quantity output will result in minor revisions of education output series compared with the previous method.

  • Changes in quality adjusted output linked to the methodological challenges in capturing attainment during and after the coronavirus (COVID-19) pandemic are proposed for education.


2. Overview of the improved methods for public service productivity

The Chancellor of the Exchequer commissioned the National Statistician in June 2023 to review and improve how public service productivity is measured.

This article presents the second wave of improvements to public service productivity measures since the beginning of the Public Services Productivity Review. In November 2023, the Office for National Statistics (ONS) published an overview of UK annual public service productivity between 1997 and 2020, alongside a new experimental measure of the path of annual UK public service productivity in 2021 and 2022.

The first phase has been to review measurement improvements for England, but the ONS will also work with the devolved administrations to share best practice and identify the data sources required to improve the estimates for Scotland, Wales and Northern Ireland, with a view to incorporating UK-wide measurement improvements into the national accounts in future. More information about the review's governance and contacts is available. We have recently published the first article on progress toward making improvements to public service productivity measures as part of the Public Services Productivity Review.

Today's article describes changes in:

  • quantity output, quality adjustment and inputs of healthcare

  • quantity output across the data time series for education

We also propose to improve the education quality adjustment for the years affected by the coronavirus (COVID-19) pandemic.

These changes will be included in the March 2024 updated versions of the previous releases Public service productivity, healthcare, England: financial year ending 2021 and Public service productivity: total, UK, 2020. They are in accordance with the Code of Practice for Statistics and follow discussion with government departments, devolved administrations, academics and experts.

As in previous years, we will continue to develop and improve our methods for estimating public service inputs, output and quality adjustment, and more data might become available in the future. Both of these factors may lead to revisions of these estimates.

Therefore, we welcome feedback to PSP.Review@ons.gov.uk. We will take this into consideration in the 2024 to 2025 development plan for the measures and methods of public service productivity.

The main components of public service productivity are output, quality adjustment and inputs.

Volume output in public service sectors, such as healthcare, is measured using a cost-weighted activity index (CWAI). This calculates the change in the number of activities undertaken, weighting each activity by its cost such that an increase of one unit of activity for a high-cost activity has a greater effect on the output than an increase of one unit of activity for a low-cost activity.

The volume output measure is produced using the Laspeyres approach:

\[ I_t = \frac{\sum_i a_{i,t}\, u_{i,t-1}}{\sum_i a_{i,t-1}\, u_{i,t-1}} \]

where:

I = index value

a = activity count

u = unit cost

t = year

i = activity type
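As an illustrative sketch of this calculation (using hypothetical activity counts and unit costs, not ONS data), the cost-weighted activity index can be computed as follows:

```python
def laspeyres_cwai(activity_prev, activity_curr, unit_cost_prev):
    """Laspeyres cost-weighted activity index: current-period activity
    valued at previous-period unit costs, relative to previous-period
    activity valued at the same costs."""
    numerator = sum(a * u for a, u in zip(activity_curr, unit_cost_prev))
    denominator = sum(a * u for a, u in zip(activity_prev, unit_cost_prev))
    return numerator / denominator

# Two activity types: one high-cost, one low-cost (hypothetical values).
prev_activity = [100, 1000]   # counts in year t-1
curr_activity = [110, 1000]   # counts in year t
unit_costs    = [500, 10]     # unit costs in year t-1

# A rise of 10 units in the high-cost activity moves the index far more
# than the same rise in the low-cost activity would.
print(laspeyres_cwai(prev_activity, curr_activity, unit_costs))
```

Because the high-cost activity carries the larger cost weight, its growth dominates the index, which is the intended behaviour of the CWAI.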

Where data are available and relevant, output measures are quality adjusted. A quality adjustment is, in its simplest terms, a statistical estimate of the change in the quality of a public service. This provides a more accurate picture of the link between output and the desired outcomes.

Inputs comprise volume estimates of labour, goods and services (intermediate inputs), and capital assets used in delivering public services. These series are aggregated together to form an overall estimate of the volume of inputs used to provide each of the public services identified in the total public service productivity articles.

More information can be found in our Sources and methods publication and in the Quality and Methodology Information article.


3. Healthcare output and quality adjustment improvements

Current method for healthcare quantity output

The current measures of primary and preventive care output, for which the changes are described in the following Improvements for healthcare quantity output sub-section, are summarised here. Primary care relates to services that are the first point of contact in the healthcare system, while preventive care represents a broad range of activities intended to stop illness or injury before onset, or before conditions deteriorate. This article only describes aspects of healthcare output for which improvements have been made. For more information on healthcare output overall, please refer to our 2022 Sources and methods article.

Dental services

This element measures dental practice activities, weighting activities according to complexity using dental bands from April 2006 onwards. Before April 2006, activities are not weighted separately.

Ophthalmic services

This element measures output based on the following activities:

  • sight tests at an opticians

  • home-based sight tests (individual test)

  • home-based sight tests (group tests)

  • optical vouchers issued

NHS phoneline and website

Activity is measured as the number of web hits and telephone calls associated with National Health Service (NHS) 111 and NHS 111 online services.

Preventive care (public health)

In England, local authorities have a range of public health responsibilities in the community, and commission a range of different services. Many services are provided by the NHS, including children's public health services, covered in our estimates of hospital and community health services.

Improvements for healthcare quantity output

Dental services

We have reviewed our measures of NHS dentistry, using an alternative source of data to estimate activity growth from 1996 until 2006, and are implementing an improvement in linking different activity data sources.

Ophthalmic services

The unit cost used for sight tests is the NHS sight test fee that optometrists and ophthalmic medical practitioners can claim. An average unit cost is estimated for all types of optical voucher, calculated by dividing the expenditure on ophthalmic services, once sight test expenditure has been deducted, by the number of vouchers issued. We have improved the expenditure measure we use to ensure overall ophthalmic output is weighted appropriately to represent total ophthalmic spend.

NHS phoneline and website output

We have reviewed our measures of NHS 111 and NHS 111 online activity to improve the coherence between national accounts and public service productivity estimates. We have also reviewed unit costs for these services and now update them annually, rather than using a fixed historic unit cost, to capture changes in the cost of provision. These changes have minimal impact on output growth.

Preventive care (public health activities)

We have reviewed the coverage of preventive healthcare in our existing output measures, in particular public health activities provided by local authorities. Following the review, we are introducing new activity measures to capture the growth in the volume of:

  • local authority commissioned treatments for alcohol misuse (excluding NHS providers)

  • local authority commissioned treatments for drug misuse (excluding NHS providers)

  • local authority commissioned smoking cessation services

The activity of NHS providers is already accounted for elsewhere in healthcare output.

We have made the following considerations:

  • pharmacological treatments are already captured within our measure meaning that we are only introducing measures for other types of intervention such as behavioural support

  • drug and alcohol misuse output is estimated using the total number of psychosocial interventions; we do not distinguish between the intervention setting, as there are not enough data to disaggregate unit costs by setting

  • for smoking cessation, we use the number of quit attempts as our activity measure

No equivalent activity or expenditure data for alcohol and drug misuse services could be identified for the devolved administrations. However, suitable activity data for smoking cessation programmes have been identified and included in the measure for Scotland. Equivalent expenditure data could not be identified for the devolved administrations, so the unit costs produced for England are used as a proxy.

Current method for healthcare quality output

We currently quality adjust healthcare output to account for two important facets of quality:

  • whether a service succeeds in delivering intended outcomes

  • whether the service is responsive to user needs

An outcomes-based adjustment is applied to an element of hospital and community health services and to general practice output. The outcomes quality adjustment applied to general practice is based on the care management of patients on a GP list who have been diagnosed with hypertension, coronary heart disease, chronic kidney disease or as having had a stroke.

We also apply adjustments to account for the patient experience during their care, using results from the following patient satisfaction surveys to measure the change in experience:

  • Adult Inpatient Survey

  • Urgent and Emergency Care Survey

  • Community Mental Health Survey

  • Outpatient Survey (discontinued in 2011)

  • GP element of the National Patient Survey Programme (discontinued in 2008)

The results from each survey are aggregated and weighted together to give us Overall Patient Experience Scores (OPES). Patient satisfaction measures are weighted differently for different healthcare services to reflect the notion that the importance of patient experience varies by the service provided. Where surveys are discontinued, growth is forecast for a maximum of five years and then held constant.

Improvements for healthcare quality output

GP outcomes quality adjustment

We use the NHS England Quality and Outcomes Framework (QOF) as the data source for the quality adjustment measure for GP output.

We have improved the quality adjustment by:

  • using available indicators to substitute for the discontinued indicators used in the existing quality adjustment

  • expanding the type of indicators used to measure changes in outcomes for the health conditions identified in the existing quality adjustment

  • increasing the number of health conditions to expand the coverage of the quality adjustment to also cover diabetes and asthma

We have improved how we combine and weight together different indicators into a single outcomes-based quality adjustment that can be applied to general practice output. Specifically, we have changed:

  • weights used to combine multiple indicators for a specific health condition

  • weight used to factor for the proportion of patients with one or more of the health conditions measured in the quality adjustment as a share of the overall GP patient register

Given that QOF is an incentive scheme where higher-value indicators are allocated more points, we use the allocation of QOF points to weight the indicators within each condition rather than giving them an equal weighting, to better reflect the value of that indicator. For example, an indicator measuring healthy blood pressure in patients aged below 80 years is allocated a greater number of QOF points, and therefore a larger weighting, than the indicator for measuring healthy blood pressure in patients aged 80 years and over.

While we use prevalence weights to account for the share of patients with a specific health condition, patients can often have more than one health condition. Therefore, weighting by the prevalence alone would overweight the quality indicators for the selected conditions, relative to the wider GP patient population, as the prevalence weights are not simply additive.

To counter this, the proportion of patients with one or more of each of the identified health conditions can be used to dampen the combined impact of the quality adjustment. In the absence of robust information regarding comorbidities, we assume a midpoint of the value range of potential comorbidity from prevalence rates presented in the QOF. That is the midpoint between the minimal possible prevalence of these health conditions (total comorbidity among the selected health conditions) and the maximum possible prevalence (no comorbidity).
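A minimal sketch of this midpoint assumption, using hypothetical prevalence rates rather than published QOF figures:

```python
def one_or_more_share(prevalence_rates):
    """Midpoint assumption for the share of patients with at least one
    of the selected conditions, given only per-condition prevalence.

    Maximum possible share: no comorbidity, so prevalences are additive.
    Minimum possible share: total comorbidity, so everyone with any of
    the conditions also has the most prevalent one."""
    upper = min(sum(prevalence_rates), 1.0)  # no comorbidity
    lower = max(prevalence_rates)            # total comorbidity
    return (upper + lower) / 2

# Hypothetical prevalence rates for three conditions on the GP register.
print(one_or_more_share([0.14, 0.07, 0.06]))
```

The resulting share is then used to dampen the combined quality adjustment, so that indicators for the selected conditions are not overweighted relative to the wider GP patient population.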

Patient experience quality adjustment

We have reviewed our patient experience-based quality adjustment to look for alternative data sources to replace surveys that have been discontinued. We identified the GP Patient Survey (GPPS) as a viable option to provide a new measure of GP patient satisfaction. The GPPS offers patient feedback on various aspects of primary care services, providing insight into the overall patient experience. It provides a single experience metric based on the question "Overall, how would you describe your experience of your GP practice?", eliciting responses spanning from "Very poor" to "Very good".

To evaluate patient experience, we concentrate on the percentage of positive responses, encompassing those categorised as "Fairly good" and "Very good". The change in the sum of these responses as a share of overall responses becomes the metric used to determine changes in patient satisfaction year-on-year. It represents the change in the positive sentiment expressed by respondents, similar to the approach used to calculate Overall Patient Experience Scores (OPES).
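The positive-response metric can be sketched as follows, with hypothetical response counts standing in for GPPS survey data:

```python
def positive_share(responses):
    """Share of responses classed as positive
    ('Fairly good' or 'Very good')."""
    positive = responses["Fairly good"] + responses["Very good"]
    return positive / sum(responses.values())

def satisfaction_growth(responses_prev, responses_curr):
    """Year-on-year change in the positive-response share."""
    return positive_share(responses_curr) / positive_share(responses_prev)

# Hypothetical response counts for two survey years.
year1 = {"Very poor": 50, "Fairly poor": 100, "Neither": 150,
         "Fairly good": 300, "Very good": 400}
year2 = {"Very poor": 40, "Fairly poor": 90, "Neither": 150,
         "Fairly good": 310, "Very good": 410}

print(satisfaction_growth(year1, year2))
```

Only the year-on-year movement in this share feeds into the quality adjustment, not the level of satisfaction itself.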

The GPPS also captures patient satisfaction with community dentistry. Using this enables us to expand our patient satisfaction measure to cover an additional healthcare service sector (dentistry) for which we have previously been unable to capture the changes in the quality of the service provided. The outpatient survey has now been removed from our quality adjustment, as this was discontinued in 2011.

Impact on healthcare volume output

Figure 1 shows the impact of these changes on the quality adjusted healthcare output volume index. The overall impact is minimal, reflecting the fact that the quantity output components being updated have small cost weights relative to hospital and community health services, and quality changes are incremental improvements to existing quality adjustments.

Healthcare inputs changes

A review of the inputs, specifically the value of goods and services used in the provision of healthcare activities (intermediate consumption), showed that we are using the best existing data. Therefore, only small improvements are required, such as the inclusion of legal and audit services. We have also updated the weights used to calculate the labour inputs, to better align them with our measure of full-time equivalent (FTE) staff. The overall impact of these changes is minimal.


4. Education output and quality adjustment improvements

Current method for education quantity output

The direct output method measures output using a cost-weighted activity index (CWAI). For education, activity is calculated as the number of pupils enrolled in a particular school phase multiplied by the attendance rate. This gives a truer picture of activity than enrolment alone, as those who are enrolled but do not attend will not contribute to volume output. The expenditure for each phase is then employed as the cost-weight, to give a value of per-pupil expenditure. A unit cost can then be calculated for each phase (for each year), which will capture the relative effect that an increase or decrease in activity will have on output.

The CWAI is produced using the Laspeyres approach.

Unit costs are defined as:

\[ u_{i,t} = \frac{e_{i,t}}{a_{i,t}} \]

The output index is defined as:

\[ I_t = \frac{\sum_i a_{i,t}\, u_{i,t-1}}{\sum_i a_{i,t-1}\, u_{i,t-1}} \]

where:

u = unit cost

e = expenditure

a = activity

I = index value

t = year

i = school phase
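Putting these definitions together, a single hypothetical phase can be sketched as follows (illustrative figures, not ONS data):

```python
def unit_cost(expenditure, activity):
    """Per-unit cost for a school phase: expenditure divided by activity."""
    return expenditure / activity

def education_activity(enrolment_fte, attendance_rate):
    """Activity for a phase: FTE enrolment scaled by the attendance rate,
    so enrolled pupils who do not attend add nothing to volume output."""
    return enrolment_fte * attendance_rate

# Hypothetical single-phase example.
a_prev = education_activity(1000, 0.95)   # pupil-units in year t-1
a_curr = education_activity(1020, 0.96)   # pupil-units in year t
u_prev = unit_cost(5_700_000, a_prev)     # per-pupil cost in year t-1

# For one phase alone, the Laspeyres index reduces to activity growth;
# the unit costs matter when several phases are aggregated.
print((a_curr * u_prev) / (a_prev * u_prev))
```

With multiple phases, each phase's activity growth would be combined using these unit costs as weights, exactly as in the healthcare CWAI.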

Improvements for education quantity output

Academies split

In England, when the academies programme was expanded in 2010, many academies were secondary schools. The proportion of schools converting to academies has increased over time: in the 2022 to 2023 academic year, just over 40% of primary schools, 80% of secondary schools and just under 45% of special schools in England were academies. With such a large change in the education landscape, it is important to capture the contribution of each phase of education to productivity as accurately as possible, so that any further changes in the landscape will give a more accurate picture of overall education productivity.

Currently, compulsory education in England is captured through five categories. Two of these are pre-primary phases and will be discussed further in the following Pre-primaries combined sub-section. The remaining three categories are primary, secondary and special. These will now be split into a further five categories, splitting out primary, secondary, and special academies from their local authority (LA)-maintained counterparts, as well as splitting out alternative provision (education for students who cannot go to mainstream schools) into two separate categories (academies and LA-maintained). Updates to the expenditure weights will allow for the cost-weighting of each of these phases individually and allow the contribution of each phase to overall education productivity to be accurately captured.

Pre-primaries combined

Pre-primary education is currently captured through pre-primary (all LA-maintained pre-primary schools plus pre-primary classes in primary schools) and private, voluntary and independent (PVI) pre-primary schools. However, expenditure data for PVIs are not available separately from LA-maintained pre-primary expenditure. Similarly, for Scotland and Wales, there are no PVI enrolment data.

Because of the scarcity of the PVI data allowing for a consistent LA-maintained and PVI split in pre-primary education across the nations, LA-maintained pre-primary schools and PVI pre-primary schools will be included as a single pre-primary figure. This will allow a more accurate cost-weighting for pre-primary and will therefore give a clearer contribution to productivity.

Pre-primary age and full-time equivalence (FTE)

Currently, for England, pupils in primary schools who are of pre-primary age are captured as primary rather than pre-primary pupils. All pupils under 4 years old as of 1 September of the year prior to the school census (which is taken in January) will now be considered pre-primary, reducing the enrolment for primary pupils. These pupils will then contribute to the LA-maintained pre-primary figure.

Another consideration in the current method is that 2-year-olds enrolled in PVI pre-primaries are not currently included, despite there being funding for such pupils since 2013 to 2014. The data are available for these 2-year-olds and will be included from the 2013 to 2014 academic year onwards to capture this extra output.

Alongside this, currently only those receiving 15 hours of free childcare are being included in productivity output. In 2017, the extended entitlement of 30 hours was introduced for families meeting the eligibility criteria. We will be implementing improvements to capture the extended childcare entitlement in output. Note that the pre-primary inclusion for Wales, Scotland and Northern Ireland will not be affected by these changes.

For enrolment, full-time equivalence (FTE) is used to calculate activity. FTE is calculated as 50% for part-time pupils and 100% for full-time pupils. This has a small impact on primary, secondary and special schools, where there are very few part-time pupils; however, it is more of a consideration for pre-primary pupils. Where part-time pupils are not identifiable, pre-primary enrolment (headcount) figures are currently given a factor of 0.5 to proxy FTE (that is, pre-primary pupils are assumed to attend school 50% of full time).

Where part-time pupils are identifiable (predominantly, the England school census), the FTE factor is higher and growing. Therefore, where part-time pupils are not identifiable, the pre-primary figure is now given a factor equal to that of the calculated FTE factor for each year, instead of the current constant factor of 0.5.
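A sketch of this FTE factor calculation, using hypothetical pupil counts:

```python
def fte_enrolment(full_time, part_time):
    """FTE where part-time pupils are identifiable:
    part-time pupils count as 50%."""
    return full_time + 0.5 * part_time

def fte_factor(full_time, part_time):
    """Ratio of FTE to headcount, used as the proxy factor for sources
    where part-time pupils cannot be identified."""
    headcount = full_time + part_time
    return fte_enrolment(full_time, part_time) / headcount

# Hypothetical pre-primary counts from a source that identifies part-timers.
factor = fte_factor(full_time=600, part_time=400)
print(factor)

# Applying that year-specific factor to a headcount-only source,
# instead of the previous constant factor of 0.5:
headcount_only = 10_000
print(headcount_only * factor)
```

Because the calculated factor is recomputed each year, growth in full-time attendance among pre-primary pupils now feeds through to the headcount-only sources as well.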

Phasing out Initial Teacher Training (ITT) and healthcare training

Towards the end of the 1990s, most teacher training colleges were absorbed into universities and have therefore been picked up through higher education measures. Similarly, when nursing degrees became compulsory in 2009, the number of nurses training via non-degree routes declined. By 2012, there were very few nurses left in training outside higher education (HE). Nursing degrees, which would have been attained through universities, are included within the UK National Accounts through existing methods for higher education.

From 2012, ITT and healthcare training were already included in the further and higher education categories, so there was a risk of double counting, although these components made up only a small proportion of total output. Improvements in the treatment of ITT and healthcare training have therefore been made to prevent these components from being double counted in government sector non-market output.

Impact on education volume output

Figure 2 shows that the overall impact of these changes on the education output volume index is minimal. ITT and healthcare training were already contributing little to overall growth and the inclusion of more pre-primary pupils makes up just a small proportion of activity and therefore has only a small effect.

The other changes improve the cost-weighting of activity, better capturing the contribution to growth of each phase, while activity itself remains largely unchanged (with the exception of the two aforementioned changes). These changes to cost-weighting, while methodologically important, have little impact on the overall series.

Current method for education quality output in the years of the coronavirus pandemic

The education productivity index is quality adjusted, as described in our Sources and methods article.

The coronavirus (COVID-19) pandemic caused widespread disruption to society, including the closure of schools and cancellation of examinations. Teacher-assessed grades (TAG) were provided for GCSE results in place of typical attainment grades, but this approach resulted in grades becoming inflated. Inflated grades cannot be incorporated into the quality adjusted measure, as this could overestimate output and thus productivity.

Our current approach has been to apply a "learning lost" metric that represents students' performance in reading and mathematics relative to the performance expected had the disruption caused by the pandemic not happened. Data from the Renaissance Learning report (PDF, 746KB), commissioned by the Department for Education (DfE), have served as the source of the learning loss metric for primary and secondary schools, and were included in the Public service productivity: total, UK, 2020 article. Information on the "learning lost" metric commissioned by the DfE was available only for the academic year 2019 to 2020, as the assessments took place in autumn 2020.

Proposed changes for education quality output in the years of the coronavirus pandemic

The Office for National Statistics (ONS) has conducted additional research to capture the best quality measures for education in the years affected by the coronavirus pandemic.

Discussions with experts in this area have highlighted that the National Reference Test (NRT) is a more appropriate measure than the "learning lost" metric, because it is an indicator of academic performance at GCSE level. Taken by year 11 students who are due to sit their GCSEs that year, the NRT is a short test reflecting the questions that would be faced in GCSE Mathematics and English examinations.

The NRT was first implemented in 2017 in England to monitor student performance over time without the accountability pressures of GCSEs, and to serve as additional evidence in the awarding of GCSEs. Approximately 300 schools participate (composed of local authority (LA)-maintained schools, academies and independent schools), and are selected based on size and previous GCSE achievement. Students are also randomly selected to participate. 

The outcomes of 2017 GCSEs were mapped to the NRT 2017 to establish baseline values of how NRT performance correlates with grades 4, 5 and 7. In subsequent years, NRT outcomes are compared with the 2017 baseline measure to assess trends in academic performance over time.

We propose to include the NRT in our series for the years affected by the coronavirus pandemic. As primary and secondary school attainment are included within our statistics, the NRT (grade 4 and above) will inform our secondary school measure. Standard Assessment Tests (SATs) growth rates for primary schools will be kept constant until new attainment data become available.

Following discussions with experts, it has been agreed that the NRT is the best measure available for our purpose. However, we recognise that, as with every metric, the NRT also has some limitations.

Firstly, although the NRT is formally recognised as a powerful indicator of student performance, it is not a final attainment measure consistent with our typical attainment metrics, and there are limited data to compare trends between the NRT and typical attainment measures.

In addition, the sample size of the NRT changes over time, capturing 333 schools in 2020 and 215 in 2021.


5. Deflators

The Office for National Statistics (ONS) is working to improve the use of deflators in public service productivity statistics as part of the Review. While the majority of improvements will be included in future estimates, we have updated the weight of some deflators in our statistics.


6. Data sources in national accounts

In the forthcoming annual release in March 2024, we will be using government expenditure data consistent with Blue Book 2023.

We have also made some more general systems improvements to maintain best practice and improve consistency across the different aspects of our processing. The additions in this article, and in the forthcoming publication, would not have been possible without these improvements.


7. Acknowledgements

We would like to acknowledge the help and guidance we have received so far from the Department of Health and Social Care (DHSC), the Department for Education (DfE) and other experts in the Public Services Productivity Review.

We are grateful to colleagues at the Office for National Statistics (ONS) and academics for providing helpful comments. Developing measures for public service productivity is recognised as particularly challenging (since the goods and services produced are free at the point of delivery), and continuing the discussion with experts and stakeholders will be crucial to improving our statistics.


Contact details for this Methodology

Public Service Productivity Review
psp.review@ons.gov.uk
Telephone: +44 3456 013034