1. Overview of improved methods for public service productivity

The Office for National Statistics publishes annual estimates of public service productivity, which are National Statistics. We produce estimates of inputs and output growth by service area and the final aggregated index, which is an estimate of the productivity of all public services. We measure productivity in nine service areas, four of which are adjusted for quality.

This article updates users on methodological improvements to the annual estimates that will be implemented in the article Public service productivity: total, UK, 2018. Our changes are in accordance with the Code of Practice for Statistics and follow discussion with government departments, devolved administrations and relevant experts.

Comments to productivity@ons.gov.uk are always welcome and will be taken into consideration as we develop the measures and methods of public service productivity.

This article covers improvements to the quality adjustment of education and restates the changes applied to primary care output in healthcare (published in Methodological developments to public service productivity, healthcare: 2021 update). It also describes changes from the previous publication in adult social care, explains the new data used for the inputs of education and police, and sets out revisions to deflators and our plans for future improvements to our measures. The additions in this article and in the forthcoming publication would not have been possible without a general systems improvement, started last year, which has enhanced the consistency of our processing.

This work covers the UK as a whole and recognises that there are differences in the way services are delivered and administered in the four countries of the UK.

Details of the definitions of output and inputs for each service area, as well as the methodological approach, are available in our guide; concepts, statistical properties and the application of quality adjustment are also summarised there.

Please be aware that some of the resources cited in this article require a log-in, and some journals will require registration or payment of a fee for access.

2. Education: new quality adjustment using the “cohort split” approach

Quantity output

Education is the second largest service area in public service productivity (after healthcare), accounting for 17.7% of total public service output in 2017. Consequently, its contribution to year-on-year changes in the overall quality adjustment has always been among the highest.

Education output consists of an estimate of quantity, which is then adjusted for quality. Quantity output is the sum of full-time equivalent (FTE), publicly funded pupil and student numbers from pre-school to further education, adjusted for attendance rates. Based on this definition, if the number of pupils in schools increases, the education system will look more productive (assuming inputs remain the same).
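As an illustration, a minimal sketch of the quantity calculation under these definitions (the stage names, figures and function are hypothetical, not the production system):

```python
# Sketch of education quantity output: attendance-adjusted, full-time
# equivalent (FTE) publicly funded pupil numbers, summed across stages.
# Stage names and figures are illustrative.

def quantity_output(fte_pupils: dict[str, float],
                    attendance_rate: dict[str, float]) -> float:
    """Sum attendance-adjusted FTE pupil numbers across stages."""
    return sum(fte * attendance_rate[stage] for stage, fte in fte_pupils.items())

fte = {"pre_school": 800_000, "primary": 4_700_000,
       "secondary": 3_300_000, "further_education": 1_100_000}
attendance = {"pre_school": 0.95, "primary": 0.96,
              "secondary": 0.94, "further_education": 0.90}

print(f"Quantity output: {quantity_output(fte, attendance):,.0f} FTE pupils")
```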

However, if pupils are leaving school with fewer (or worse) qualifications, then schools are producing poorer outcomes. Therefore, we use a quality adjustment measure that accounts for the qualifications pupils achieve at the end of Key Stage 4; measures of attainment (at GCSE or equivalent level) have been included in the education productivity index since 2007.

Quality adjusted output – current and proposed methods

Education was one of the first service areas for which quality adjustment was developed. The initial adjustment was followed by further improvements published in 2015, 2017 and 2019. The quality adjustment was based on the recommendations published in the Atkinson Review (PDF, 1.1MB), which is the foundation of our public service productivity estimates.

When attainment data were first introduced as a proxy for change in the quality of education, the GCSE (or equivalent) results for a given year were used to quality adjust the output of primary and secondary education for that year. This method was changed for our 2019 publication, which introduced a new "cohort split" approach to apportion the increase in attainment results over the five years of secondary education.

Using this method, the GCSE attainment data published for each academic year reflect the quality of teaching from Year 7 to Year 11. As such, the "cohort split" approach applies percentages of the new attainment data back to previous years, subject to weighting. Specifically, based on the national and international literature and the structure of secondary education, we decided to weight Year 7 and Year 8 at 10% each, Year 9 at 20%, and the last two years (Year 10 and Year 11) at 30% each.

However, this did not properly disentangle the role of primary education in final GCSE results, excluding an important part of the education process. Research shows that primary schools continue to influence students' longer-term academic outcomes at secondary school (see The Influence of Secondary and Junior Schools on Sixteen Year Examination Performance: A Cross-classified Multilevel Analysis). Furthermore, measures of primary school academic effectiveness significantly predicted later academic attainment in mathematics and science even three years after transferring to secondary school (see Influences on students' GCSE attainment and progress at age 16).

Additional GCSE analyses show that primary school continues to influence later academic attainment up to the end of Year 11. Furthermore, students who had attended a primary school that was more academically effective for mathematics had significantly better grades in GCSE mathematics than students who had attended a mathematically less effective primary school (see Influences on students' GCSE attainment and progress at age 16; Final Report from the Primary Phase: Pre-school, School and Family Influences on children's development during Key Stage 2).

Supported by this literature and following the recommendations in the Atkinson Review, which recognised that GCSE results "are the outcome of 11 years of compulsory schooling", we propose a new approach that also takes primary education output into consideration. In doing so, we have created a new weighting distribution that covers the 11 years of compulsory schooling. The updates presented in this article are in this spirit.

Table 1 presents the new weighting distribution and should be read as follows: every time an academic year's attainment results are released, 15% will be retrospectively applied to the years spent in primary school and the remaining 85% to secondary school. A worked sketch of the apportionment follows the table.

Table 1: Proposed "cohort split" weights over the 11 years of compulsory schooling

  School year        Weight
  Year 1 to Year 6   2.5% each (15% in total)
  Year 7             5%
  Year 8             10%
  Year 9             15%
  Year 10            25%
  Year 11            30%
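To make the apportionment concrete, here is a minimal sketch in Python (the attainment figures and function name are illustrative, not the production system):

```python
# Sketch of the proposed "cohort split": each cohort's GCSE attainment
# is apportioned back across its 11 years of compulsory schooling,
# using the weights in Table 1. Figures are illustrative.

COHORT_WEIGHTS = [0.025] * 6 + [0.05, 0.10, 0.15, 0.25, 0.30]  # Years 1-11
assert abs(sum(COHORT_WEIGHTS) - 1.0) < 1e-9

def cohort_split(attainment_by_exam_year: dict[int, float]) -> dict[int, float]:
    """Reallocate each exam year's attainment to the years in which the
    cohort was taught: Year 11 falls in the exam year itself, Year 1
    falls ten years earlier."""
    contribution: dict[int, float] = {}
    for exam_year, score in attainment_by_exam_year.items():
        for years_before, weight in enumerate(reversed(COHORT_WEIGHTS)):
            year = exam_year - years_before
            contribution[year] = contribution.get(year, 0.0) + weight * score
    return contribution

# Example: illustrative attainment scores for three exam cohorts
print(cohort_split({2017: 44.6, 2018: 46.5, 2019: 46.7}))
```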

We acknowledge that our measures must reflect differences in the structure of the education system across the four countries of the UK. Reception in England and Wales is the equivalent of Year 1 in Northern Ireland; as such, our cohort split for Northern Ireland starts in Year 2 (at 2.5%) and ends in Year 12 (at 30%), which is equivalent to Year 11 in England and Wales.

Further, we recognise that secondary school starts at Year 7 in England and Wales, with GCSE exams taken in Year 10 and Year 11 (Key Stage 4). For Scotland, the majority of GCSE-equivalent exams are taken in S4 and S5, which are equivalent to Years 10 and 11 in England and Wales. As such, we think it appropriate to keep the proposed weights the same for all four countries.

The main objective is to evaluate the long-term causal effect of primary education on pupils' attainment at age 16. Figure 1 shows the application of the "cohort split" approach, using the weights in Table 1.

The effect of this change to the “cohort split” approach can be seen in Figure 1. The proposed “cohort split” approach has further smoothed the exam attainment data and shifted it to the left. Since more of the attainment score is attributed to earlier years of schooling, the higher attainment in recent years is shown to be attributed further back in the time series than in the current “cohort split” approach.

3. New methods for education quality adjustment – bullying

There are several definitions of bullying in the literature; however, there is general agreement that bullying is behaviour by an individual or group, repeated over time, that intentionally hurts another individual or group either physically or emotionally.

In its different forms, bullying is often motivated by prejudice against particular groups, based on race, religion, gender, sexual orientation, special educational needs or disabilities, or because a child is adopted, in care or has caring responsibilities. It might be motivated by actual differences between children, or perceived differences.

The government has given considerable attention to this issue, since there is an awareness that bullying, especially if left unaddressed, can have a devastating effect on individuals. Bullying that takes place at school does not only affect an individual during childhood; it can have a lasting effect on their life well into adulthood.

Research has shown that bullying can lead to several mental, social and emotional disorders across the life course: poor school achievement, greater loneliness, poor health, depression, anxiety, self-harm and suicidal thoughts (see Bullying by peers in childhood and effects on psychopathology, suicidality, and criminality in adulthood; Bullying in schools: the state of knowledge and effective interventions; Bullying, social support and adolescents' mental health: Results from a follow-up study).

Therefore, by preventing and tackling bullying, "schools can help to create safe, disciplined environments where pupils are able to learn and fulfil their potential" (DfE, 2017). Official documentation (Bullying at school; Scottish Government, 2017; Welsh Government, 2019) highlights that staff must act to prevent discrimination, harassment and victimisation within schools, and that it is a legal obligation for schools to tackle bullying (DfE, 2017; Department of Education, 2016).

Data and methods

Data

Our data on bullying in schools come from the British Household Panel Survey (BHPS) and its successor survey, Understanding Society (USoc). Understanding Society is a longitudinal survey of members of approximately 40,000 households (at Wave 1) in the United Kingdom. Many design features, instruments and questions from the BHPS live on in Understanding Society, and data collection from eligible BHPS sample members continues as part of Understanding Society, offering opportunities to exploit the two studies jointly as a long panel of data.

The BHPS (1991-2008) asks children how often they worry about being bullied while Understanding Society (2009 onwards) asks children how often they are physically bullied or bullied in other ways. Both surveys also ask children how happy they are with their "life as a whole" on a scale of 1 to 7, where 1 represents "completely happy" and 7 represents "not at all happy", which we take as a measure of the child's life satisfaction.

Methods

We create a bullying index from these data which incorporates both the prevalence of bullying in schools and the impact of bullying on children's wellbeing. For prevalence, we take the percentage of children who reported being bullied (from the USoc survey) or worrying about being bullied (from the BHPS).

We approximate the severity of bullying by comparing the mean life satisfaction of students who reported being bullied (or worried about being bullied) with the mean life satisfaction of students who did not. From these data we derive an index combining prevalence with this severity measure.
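A minimal sketch of one plausible form of such an index, assuming it scales prevalence by the life-satisfaction gap between bullied and non-bullied children (the functional form, names and figures are illustrative assumptions, not the published equations):

```python
# Illustrative sketch only: one plausible bullying index combining
# prevalence with a life-satisfaction severity gap. The functional
# form and all figures are assumptions, not the published method.

def bullying_index(prevalence: float,
                   mean_ls_not_bullied: float,
                   mean_ls_bullied: float) -> float:
    """Higher = better quality. Life satisfaction runs 1 ('completely
    happy') to 7 ('not at all happy'), so bullied children score higher
    and severity is their excess unhappiness, scaled to 0-1."""
    severity = (mean_ls_bullied - mean_ls_not_bullied) / 6.0
    return 1.0 - prevalence * severity

# Example: 30% report bullying; bullied children average 3.1 on the
# 1-to-7 scale against 2.2 for children who report no bullying.
print(f"{bullying_index(0.30, 2.2, 3.1):.4f}")  # 0.9550
```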

Figure 2 shows the bullying index derived from the BHPS and Understanding Society data. The bullying index falls (gets worse) over time, so any decrease in the prevalence of bullying in schools has been offset by an increasing impact of bullying on the life satisfaction of children.

The bullying index is then incorporated into the current quality index by combining its growth rate with the growth rate of the current attainment index, as in the equation below.
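Expressed generally, with $g_t$ denoting a year-on-year growth rate and the weights $w_t$ as described in the next paragraph:

$$
g_t^{\text{quality}} = w_t^{\text{att}}\, g_t^{\text{att}} + w_t^{\text{bully}}\, g_t^{\text{bully}}, \qquad w_t^{\text{att}} + w_t^{\text{bully}} = 1
$$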

To reflect the importance of attainment in our education statistics, for the period 1997 to 2005 we have chosen to weight the growth rate of the attainment index at 95% and the growth rate of the bullying index at 5%. From 2006 onwards, we increase the weight of the bullying index to 10% of the quality index and weight attainment at 90%. This reflects the introduction of the Education and Inspections Act 2006, which we consider a clear signal of the government's intent to take action against bullying.

The impact of including a bullying adjustment into the current education output statistic can be seen in Figure 3. This is shown alongside the impact of adjusting the cohort split method (described in section 2). Since the bullying index rises by less than the attainment index (the existing quality adjustment), the combination of both reduces the overall growth rate of the quality adjustment.

Discussion around this index has raised concerns about the possibility of double-counting in our measures when including a bullying adjustment in our statistics. It was argued that because bullying may have an impact on the exam results of bullied children, as is often noted in the literature, then bullying is already accounted for in our current methods.

However, our intention in including bullying in our quality adjustment is to widen our definition of quality in education services away from being a pure measure of exam attainment; we view the reduction of bullying in schools as an explicit measure of quality in and of itself, particularly given schools' legal requirement to address it.

Future improvements to education quality adjustment

We suggest that the proposed methods of quality adjustment will improve understanding of the quality of education as a service area, disentangling the role of primary education in school-leaver attainment and capturing bullying, an important issue that schools have an obligation to tackle.

Our measures have shown that bullying is a persistent feature of children's experience in schools. The data therefore suggest that bullying is important and, as such, we believe it appropriate to include it in the public service productivity statistics.

From 2006, more regulations have been implemented in schools to tackle bullying, and this might have increased children's awareness of bullying. The longitudinal design of the BHPS and USoc means that the same people respond over time, which partially mitigates the effect of changes in the understanding of bullying on the quality adjustment. Subjective measures (like bullying) are difficult to quantify but, to the best of our knowledge, the bullying data available in the BHPS and USoc are recognised for their high quality. In addition, future data and indicators might help us to control for changes in perceptions.

The proposed measures provide a strong foundation for future developments to the education quality adjustment. We understand that attainment is a key indicator of life outcomes and of improving productivity, and that tackling bullying helps to improve mental and emotional wellbeing. However, these are not the only desirable outcomes of education services. Schooling should help students to develop social skills, support their general wellbeing and prepare them for the labour market.

Additional data on wellbeing, mental health and employment-related indicators should be considered. Ofsted data could be a good starting point for understanding the standards of schools as an environment in which to learn, while wellbeing data could show the potential negative effects of education on students' health.

It is unlikely that any single indicator provides a full picture of education quality, but combining these data with additional measures of education quality would help to develop a more sophisticated and holistic quality adjustment.

We have also begun to consider the impact of the coronavirus (COVID-19) pandemic on our education output measures and how policy decisions such as the awarding of teacher predicted grades at GCSE and equivalent level will impact our measures of quality in the education services. Further information on how the ONS has recorded the impact of COVID-19 on education output in our GDP measures can be found in an ONS blog post from October 2020, an ONS blog post from March 2021 and from this ONS article published in March 2021.

Developing measures of public service productivity is recognised as particularly challenging, since the goods and services produced are free at the point of delivery. In line with recommendations in the Atkinson Review, and consistent with the basic principles of national accounting, continuous improvement and development of these methods are important. As we take this approach, it will be crucial to continue the discussion with education experts and colleagues from the Department for Education and the devolved administrations to understand the responsibilities of schools and their most important goals.

4. Healthcare output improvements

The main methodological development in public service productivity: healthcare is a set of improvements to the collection of services known as primary care output (referred to as Family Health Services (FHS) in previous publications). This section summarises the changes included in Public service productivity, healthcare, England: financial year ending 2019.

The new primary care output measure has been developed for England. We have separate data on primary care output for Scotland and Northern Ireland, although data on general practice consultations have not been available for Scotland since financial year ending 2014 and we currently do not have data on primary care output for Wales. For more information, please refer to our Public service productivity: healthcare methodological developments release.

Primary care output is one of four components of healthcare output used in measuring public service healthcare productivity. It includes output for general practice (GP) and dental and ophthalmic services, as well as the smaller elements of NHS web and phone services. This new primary care output methodology has not yet been incorporated in the National Accounts, but it will be considered for inclusion when the opportunity to make changes allows.

General practice

Measures of GP output in previous releases were modelled on the relationship between data collected between financial year ending (FYE) 1996 and FYE 2008 and changes in the population of different age groups in the years since. This model has been replaced by new methods from FYE 2008 onwards.

Between FYE 2008 and FYE 2018, growth in GP output has been based on a methodology developed by the Centre for Health Economics at the University of York in their Productivity of the English NHS series. This method is based on survey data drawn from the General Lifestyle Survey up to FYE 2010 and from the GP Patient Survey from FYE 2011 to FYE 2018.

From FYE 2018, a relatively new publication by NHS Digital, Appointments in General Practice, is used to calculate growth in GP activities. The NHS Digital data are collected from a large sample of general practices (approximately 95% of practices are included in the data collection). The number of consultations (attended appointments) is then upscaled by the estimated patient coverage to give a total number of consultations for England.

The estimated number of consultations is divided into consultations led by General Practitioners (GPs) and those led by other practice staff. The GP-led appointments are further split by type of consultation. This allows us to distinguish between the cost and complexity of different types of activity and to apply different cost weights accordingly. This is an important aspect of measuring volume output: growth in activities that are more common and/or have a higher cost results in greater output growth than growth in activities that are less common and/or have a lower cost.
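As a sketch of how cost weighting translates activity growth into output growth (a generic cost-weighted activity index; the appointment categories, volumes and unit costs are illustrative):

```python
# Sketch of a cost-weighted activity index: output growth is the
# cost-share-weighted average of growth in each activity type, so
# common and/or expensive activities contribute more. Figures are
# illustrative, not NHS Digital data.

def cost_weighted_growth(prev: dict[str, float],
                         curr: dict[str, float],
                         unit_cost: dict[str, float]) -> float:
    """Period-on-period output growth, weighting each activity's growth
    by its share of total cost in the previous period."""
    total_cost = sum(prev[a] * unit_cost[a] for a in prev)
    return sum((prev[a] * unit_cost[a] / total_cost) * (curr[a] / prev[a] - 1)
               for a in prev)

prev = {"gp_led": 150e6, "other_practice_staff": 100e6}   # consultations
curr = {"gp_led": 156e6, "other_practice_staff": 101e6}
cost = {"gp_led": 39.0, "other_practice_staff": 18.0}     # £ per consultation

print(f"Output growth: {cost_weighted_growth(prev, curr, cost):.2%}")
```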

Dental services

In previous years, total dental activity was cost-weighted to estimate output growth. However, dental treatments vary widely in both cost and complexity. Using data available from FYE 2007 onwards, we have therefore increased the granularity of the activity data so that different treatments can be cost-weighted separately; for earlier years, total dental activity continues to be used. Dental activity is aggregated into five different courses of treatment, then weighted by Units of Dental Activity (UDA) to account for differences in the complexity of each type of treatment. We then apply a cost element by dividing net dental expenditure by total weighted activity to give an average cost per unit of weighted activity. The UDA ratios can then be used to estimate a unit cost for each treatment type.
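A sketch of that unit-cost derivation, assuming a fixed UDA weight per course of treatment (the treatment bands, UDA weights and expenditure figure are illustrative):

```python
# Sketch of the dental method: activity is weighted by Units of Dental
# Activity (UDA), net expenditure over total weighted activity gives a
# cost per UDA, and UDA ratios then give a unit cost per treatment
# type. Bands, weights and figures are illustrative assumptions.

activity = {"band_1": 12.0e6, "band_2": 9.0e6, "band_3": 2.0e6,
            "urgent": 3.0e6, "other": 1.0e6}     # courses of treatment
uda_weight = {"band_1": 1.0, "band_2": 3.0, "band_3": 12.0,
              "urgent": 1.2, "other": 1.0}
net_expenditure = 2.0e9                          # £, illustrative

total_weighted_activity = sum(activity[t] * uda_weight[t] for t in activity)
cost_per_uda = net_expenditure / total_weighted_activity

unit_cost = {t: cost_per_uda * uda_weight[t] for t in activity}
for treatment, cost in unit_cost.items():
    print(f"{treatment}: £{cost:.2f} per course of treatment")
```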

Ophthalmic services

Previously, our method for estimating ophthalmic output growth was based on a cost-weighted measure of all sight test activity. We have now expanded the coverage and granularity of the activity data to more accurately determine growth in ophthalmic services, using NHS Digital's General Ophthalmic Services activity statistics. Ophthalmic services output is now calculated by disaggregating activity by type: sight tests and optical vouchers. Sight test activity is then disaggregated further, for example into sight tests that take place at home and sight tests performed on site.

Developments to the NHS cost inflation index

The NHS Cost Inflation Index (NHSCII) was introduced into public service healthcare productivity statistics in 2020 to replace a previous, discontinued healthcare cost deflator; it is used to deflate input costs to produce a volume measure of inputs. The NHSCII draws on a range of data sources to deflate different elements of inputs, and the methodology of the index is detailed in Methodological developments to public service productivity: healthcare, 2020 update.
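In outline, deflation converts current price expenditure into a volume measure component by component; a minimal sketch (the component names, spend and index values are illustrative, not the NHSCII itself):

```python
# Sketch of component-wise deflation: each element of input spending
# is deflated by its own price index and the volumes are summed.
# Components and figures are illustrative, not the NHSCII.

expenditure = {"staff": 60.0e9, "agency_staff": 2.5e9,
               "goods_and_services": 35.0e9}     # £, current prices
deflator = {"staff": 1.031, "agency_staff": 1.052,
            "goods_and_services": 1.018}         # base period = 1.0

volume = {c: expenditure[c] / deflator[c] for c in expenditure}
print(f"Inputs volume (base-period prices): £{sum(volume.values())/1e9:.1f}bn")
```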

The NHSCII has been further developed over 2020, with the main change being the introduction of a specific deflator for agency staff costs, which were previously deflated using a deflator produced from NHS employee data.

More information regarding changes in the NHSCII can be found in our Public service productivity: healthcare methodological release.

5. Adult social care

A quality adjustment is applied to Adult Social Care (ASC) output to account for changes in service quality. The adjustment measures how well care clients' needs are met across the domains of quality of life that are affected by social care, while taking into account factors outside the control of ASC services that may also affect quality of life. The quality adjustment is applied to both residential and nursing care and community services, for different age groups and for groups with or without a learning disability.

The quality adjustment is produced using data from the NHS Digital Adult Social Care Survey, which collects data across 151 Local Authorities in England in both a standard questionnaire and an easy read questionnaire targeted at social care clients with learning disabilities. However, because we discount survey returns that are missing essential components needed for our quality adjustment, in some years not every local authority has usable questionnaire results for each age group and questionnaire type.

To better account for this in our measure, we have implemented an improvement to how the results for different local authorities are weighted together using population data. This improved method has been implemented across the time series back to the introduction of the quality adjustment in the output growth between financial year ending (FYE) 2011 and FYE 2012. This improvement does not result in a change to the trend growth of the quality adjustment but does mean that the quality adjustment growth is stronger in some years and weaker in others.
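A sketch of the kind of population weighting involved, assuming local authorities without usable returns simply drop out of the weight base for that year (the weighting scheme and figures are illustrative, not the production method):

```python
# Sketch of population-weighting local authority (LA) survey results:
# LAs with usable returns are combined using population weights; LAs
# whose returns are unusable in a year drop out of the weight base.
# The scheme and all figures are illustrative assumptions.

def weighted_quality(la_scores, la_population):
    """Population-weighted mean score over LAs with usable results
    (score is None where a return lacks essential components)."""
    usable = {la: s for la, s in la_scores.items() if s is not None}
    total_pop = sum(la_population[la] for la in usable)
    return sum(s * la_population[la] / total_pop for la, s in usable.items())

scores = {"la_a": 0.74, "la_b": None, "la_c": 0.69}   # la_b unusable this year
population = {"la_a": 250_000, "la_b": 400_000, "la_c": 180_000}

print(f"Quality adjustment input: {weighted_quality(scores, population):.3f}")
```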

6. Education inputs

The measure of education inputs is an aggregate of labour, goods and services, and consumption of fixed capital, with data taken from both local authorities and central government. Several sources, mostly obtained from government departments and devolved administrations under an ONS agreement, are used to create the final inputs (see our sources and methods document).

Starting from this year, actual workforce and salary data for Wales for 2017 and 2018 are provided by the Welsh Government; therefore, we no longer need to forecast them as in previous publications.

These changes are consistent with the recommendations set out in the Atkinson Review (PDF, 1.1MB), where it is specified that the measurement of inputs should be as comprehensive as possible.

7. Police inputs

We have adjusted police expenditure data from 2013 onwards to account for structural changes, such as the creation of Police Scotland, which had resulted in some historic double counting in local government and central government expenditure data.

Police inputs are an aggregate of labour, goods and services, and consumption of fixed capital which are estimated by deflating their current price expenditure. The exception to this is local government labour inputs, which are measured directly using data on full-time equivalent employees (FTEs) and relative salaries for different groups (the volume of central government labour inputs is measured indirectly). More information on the sources and methods can be found in our methodological article.
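A sketch of this mixed approach, with the direct labour measure built from FTEs and relative salaries and the other components deflated (the staff groups, weights, spend and deflators are all illustrative):

```python
# Sketch of mixed inputs measurement: local government labour is
# measured directly (FTEs weighted by relative salaries); central
# government labour, goods and services, and capital consumption are
# measured indirectly by deflating current price expenditure.
# All groups, figures and index values are illustrative.

ftes = {"officers": 120_000, "pcsos": 9_000, "civilian_staff": 60_000}
salary_weight = {"officers": 1.00, "pcsos": 0.62, "civilian_staff": 0.71}
direct_labour_volume = sum(ftes[g] * salary_weight[g] for g in ftes)

expenditure = {"cg_labour": 1.1e9, "goods_and_services": 3.2e9,
               "capital_consumption": 0.9e9}      # £, current prices
deflator = {"cg_labour": 1.028, "goods_and_services": 1.015,
            "capital_consumption": 1.021}         # base period = 1.0
indirect_volume = sum(expenditure[c] / deflator[c] for c in expenditure)

print(f"Direct labour volume (weighted FTEs): {direct_labour_volume:,.0f}")
print(f"Indirect volume (deflated spend): £{indirect_volume/1e9:.2f}bn")
```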

8. Data sources in National Accounts

In the forthcoming release, we will be using government expenditure data consistent with Blue Book 2020.

We have also made some more general systems improvements to maintain best practice and improve consistency across the different aspects of our processing. In particular, we have improved the consistency of the construction of our deflators in our processing system, leading to minor revisions to our deflators (as well as the deflator development described elsewhere in this methods article).

Contact details for this Methodology

Rhys Humphries and Sara Zella
productivity@ons.gov.uk
Telephone: +44 (0)1633 455759