This paper fulfils ONS's development commitment, made in 2010, to review, alongside external evidence, the use of GCSEs and equivalent qualifications to quality-adjust primary and secondary education. We review evidence on GCSEs and other measures of pupil attainment in England, and conclude that further development work on quality adjustment is needed.
This paper fulfils ONS’s development commitment made in the last education productivity article:
“[To] review, alongside external evidence, the use of GCSEs and equivalent qualifications, and Standard Grades, to quality-adjust primary and secondary education quantity” (ONS, 2010).
The paper is written in the light of the publication of the Wolf Report into vocational education in England (Wolf, 2011) and the Government’s response published in May 2011 (Department for Education, 2011). It focuses on English data sources, and we recognise that the analysis presented is only directly relevant to the English context.
We find evidence that all measures of pupil attainment in England have improved over the last ten to fifteen years, although the extent of the change varies. For example, our current measure, uncapped Average Point Score (APS), rose by 35 per cent between 2003/04 and 2010/11, compared with a 19 per cent rise in capped APS over the same period. The threshold measure of 5 A*-C GCSEs or equivalents increased by 49 per cent over the same period, compared with 35.7 per cent for the threshold measure which also includes English and Maths.
We report recent trends identified in the Wolf Report on the rapid increase in the awards of GCSE equivalent qualifications, and the impact this has on the achievement of the 5 A*-C performance standard. We also find that the overall GCSE grade distribution has become more concentrated towards higher grades since 1997/98, with the greatest shift occurring in the six years between 2003/04 and 2009/10.
We report changes in student attainment based on other measures and systems of pupil assessment used in the UK and within the OECD. In contrast to APS, these systems find little overall improvement in the level of pupil attainment in the UK, although their data are less up to date and are based on much smaller sample sizes than national performance data.
We present a review of a selection of the academic literature, which suggests there is a positive link between school inputs, such as expenditure and teaching quality, and pupil attainment after controlling for a wide range of student and other background effects. Further exploration of these relationships could provide the basis for development of the quality adjustment method.
Finally, we present an initial review of how far the current method of quality adjustment meets the recommendations in the Atkinson Review (2005) and whether this is affected by the results of the triangulation evidence presented in this paper. We conclude that the current measure of quality adjustment for public service education based on APS requires further development. ONS will therefore consider the options for alternative quality adjustment methods in consultation with data providers, users and stakeholders.
This triangulation article summarises the findings of an initial ONS review into the use of GCSEs and equivalent qualifications as a measure of quality adjustment for public service education output. This is in line with meeting the development commitment made in the last Education Productivity article:
“[To] review, alongside external evidence, the use of GCSEs and equivalent qualifications, and Standard Grades, to quality-adjust primary and secondary education quantity” (ONS, 2010)
In March 2011, the Wolf Report, which considered evidence on vocational education for 14-19 year olds in England, was published. The review made a number of recommendations, and the Department for Education published the Government’s response in May 2011 (DfE, 2011). An important change arising from the response is a reform of the 14-16 Performance Tables from 2014, announced in January 2012 (DfE, 2012a). In particular, this reform sets new criteria for qualifications to count as equivalent to GCSE grades, and will only count qualifications that are the same size as or larger than a GCSE. Eligible equivalent examinations will count as no more than one GCSE.
This reform will have an impact on the current method of quality adjustment that is used by ONS in its estimates of public service education productivity because it will affect the way Average Point Score (APS) per pupil is calculated. The purpose of this triangulation paper is therefore to summarise relevant evidence on pupil attainment in England and review how this affects the use of the current method of quality adjustment.
This paper draws on material and analysis presented in the Wolf Report and national statistics published by the Department for Education on GCSE and equivalent results in England. Discussion focuses primarily on maintained schools, which include City Technology Colleges and, from 2003, (City) Academies.
The remainder of this triangulation article is structured as follows:
section 3 provides information on the current measure of quality adjustment that is applied to public service education output. We show how it has changed over time, compared to alternative measures of attainment such as capped APS, and A*-C threshold measures
section 4 explores how examination entry volumes and pass rates for academic and equivalent vocational qualifications have been changing in England over recent years, drawing on evidence from the Wolf Report and the Department for Education
section 5 considers other national and international evidence of UK pupil attainment as additional context for GCSE and equivalent-based measures of quality adjustment
section 6 presents some further triangulation evidence using the academic literature on schools’ effects on pupil attainment, which focuses on teaching standards and input effects such as pupil:teacher ratios and school expenditure
section 7 discusses some wider issues regarding how the current method of quality adjustment fits the criteria for quality adjustment suggested in the Atkinson Review (2005) and the scope for further development
section 8 draws some conclusions on quality adjustment from the evidence presented in the article
The current method of quality adjustment for public service education output uses annual Average Point Score results for each country from academic year 1995/96 and multiplies the change in these scores by the number of pupils (adjusted for attendance) each year.
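The mechanics described above can be sketched as follows. All figures here are hypothetical and for illustration only; they are not ONS data, and the sketch abstracts from the per-country detail of the published series.

```python
# Illustrative sketch of the current quality-adjustment method: the volume of
# pupils (adjusted for attendance) is scaled by the change in Average Point
# Score (APS). All numbers below are hypothetical.

pupils = {2009: 600_000, 2010: 605_000, 2011: 610_000}  # attendance-adjusted pupils
aps    = {2009: 330.0,   2010: 340.0,   2011: 347.0}    # uncapped APS per pupil

def quality_adjusted_output(year, base_year=2009):
    """Index of quality-adjusted output relative to the base year (= 100)."""
    quantity = pupils[year] / pupils[base_year]  # growth in pupil volume
    quality = aps[year] / aps[base_year]         # APS-based quality factor
    return 100 * quantity * quality

for y in sorted(pupils):
    print(y, round(quality_adjusted_output(y), 1))
```

The sensitivity noted in the following paragraph is visible here: any change to how APS points are awarded feeds directly into the output index through the quality factor.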
The resulting output measure for public service education is therefore sensitive to the system of point measurement in operation and to how many qualifications are included in the measure. At present, for England, ONS uses a time series of movements in uncapped GCSE Average Point Scores; since 2003/04 the measure has also included equivalent qualifications.
There are therefore both volume and GCSE-equivalent effects within the current measure. There is evidence that take-up of equivalent subjects has been increasing alongside continued entry rates for GCSEs, which means that the national Average Point Score per pupil increases in line with the number and equivalence of the examinations achieved. One of the main drivers of increased point scores since the mid-2000s has been the rise of BTEC Firsts at Level 2, which have been counted as either two or four GCSE A*-C grades in school performance tables.
The volume effects within the APS measure can be alleviated by using the capped APS, which takes the best eight results of each pupil. Improvements in this measure of attainment at a national level therefore isolate the effect of improving grades of GCSEs or equivalents.
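As a minimal sketch, the difference between the two measures is simply whether all results or only the best eight are summed. The point values used here are hypothetical, standing in for the actual points tariff:

```python
# Capped vs uncapped point scores for a single pupil (hypothetical point values).
# Uncapped APS sums points across all qualifications; capped APS keeps only the
# best eight results, removing the effect of simply entering more examinations.

def point_scores(results):
    """Return (uncapped, capped) totals for a list of point values."""
    uncapped = sum(results)
    capped = sum(sorted(results, reverse=True)[:8])  # best eight results only
    return uncapped, capped

# A pupil with eleven results: the three weakest results raise the uncapped
# score but leave the capped score unchanged.
results = [58, 52, 52, 46, 46, 46, 40, 40, 34, 34, 28]
print(point_scores(results))
```

At a national level, this is why capped APS isolates grade improvement, while uncapped APS also reflects growth in the number and equivalence of examinations taken.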
Figure 1 shows the difference between capped and uncapped GCSE and equivalent scores between 2003/04 and 2010/11 in index form, using 2003/04 as the base year for both series.
As can be seen in Figure 1, the capped APS grows at a slower rate than the uncapped APS over this time period. Capped scores rise by 19 per cent over the period, compared to 35 per cent in the uncapped APS. This difference in growth rates has a more pronounced effect from 2007/08 onwards.
The period since 2007/08 has been identified by the Wolf Report and elsewhere as a period during which there was a rapid rise in the number of completions of BTEC Firsts at Level 2 and other non-GCSE subjects which have counted within the APS and other school performance measures. For example, a BTEC First Certificate at Level 2 counts as two GCSE qualifications at Grade A*-C and a BTEC First Diploma at Level 2 counts as four. See Section 4 for more discussion.
An alternative measure of pupil attainment used in school and national performance tables is the achievement of five or more GCSEs or equivalents at grade A*-C. Data are also published for a version of this measure which must include English and Maths.
Figure 2 below shows the difference in the percentage of pupils within maintained schools that achieve the 5 A*-C threshold with and without the constraint of including English and Maths. Constraining the measure to include at least two GCSEs in core academic subjects reduces the proportion of the pupil cohort who have achieved the threshold measure.
The English Baccalaureate (EBacc) was also included in performance tables from 2009/10. This requires pupils to achieve A*-C grades in a specified group of GCSE subjects: maths, English, the sciences, history or geography, and a language. The Department for Education reports a survey of almost 700 maintained secondary schools which shows that 47 per cent of the cohort who will take GCSEs in 2013 are now entered into examinations that would meet the criteria for the English Baccalaureate. This percentage has slightly more than doubled since 2010, when the entry rate was 22 per cent (DfE, 2012b).
These effects coupled with the recently announced changes to the equivalents to GCSEs from 2014 (DfE 2012a) are likely to have a significant impact (positive or negative) on the trend in APS and threshold measures of pupil attainment.
The number of GCSE entries per pupil has remained broadly stable since 1997/98 at around 7.8 per pupil. Within the number of GCSE entries in 2010/11, applied GCSEs which include subjects such as applied science, catering and health and social care are relatively small at around 2 per cent of the 4.4 million GCSE entries.
However, as discussed in the Wolf Report, the volume of entries and completions of Level 2 courses has rapidly increased since their inclusion as equivalences in performance tables in 2003/04. Completions of Level 2 BTEC Firsts, for example, have roughly doubled between 2008/09 and 2010/11, from around 185,000 to 364,000 across all settings. Around 80 per cent of these qualifications (291,983) are taken in schools (Pearson, 2011). Some Level 3 BTEC Nationals are also accredited for pre-16 pupils in schools, and are included as equivalent to GCSEs. The Department for Education has announced a significant change to the criteria and list of vocational qualifications that will be included in the school performance tables from 2014 onwards (DfE, 2012a).
The Wolf Report presents a table showing the contributions of equivalencies towards the 5 GCSE A*-C Key Stage 4 measure in England from 2004/05 to 2009/10. Note that this table covers all schools, not just maintained schools in England. See Table 1.
| Percentage point contribution made by: | 2004/05 | 2005/06 | 2006/07 | 2007/08 | 2008/09 | 2009/10 |
| --- | --- | --- | --- | --- | --- | --- |
| GCSE only (including short course) | 51.7 | 52.1 | 52.6 | 54.7 | 55.7 | 56.3 |
| GCSEs in vocational subjects | 1.3 | 1.6 | 1.7 | 1.8 | 1.8 | 1.5 |
| All other qualifications | 0.3 | 0.8 | 2.0 | 4.4 | 5.8 | 7.4 |
| Percentage of pupils achieving 5 A*-C GCSE and equivalent | 56.8 | 59.0 | 61.4 | 65.3 | 70.0 | 75.4 |

Columns may not sum due to rounding
The table illustrates the significant rise in the contribution to the Key Stage 4 performance measure made particularly by BTECs and all other qualifications from 2007/08 onwards. In 2009/10, including all equivalent qualifications increased the proportion meeting the A*-C threshold by 17.6 percentage points, compared to 3.8 percentage points in 2004/05.
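The contribution figures quoted above can be reproduced from Table 1 by subtracting the GCSE contributions (academic and vocational) from the overall threshold percentage. Treating GCSEs in vocational subjects as part of the GCSE total is our reading of the table, not an official definition:

```python
# Reproducing the equivalents' contributions quoted above from Table 1.
# GCSEs in vocational subjects are treated as GCSEs rather than equivalents;
# this interpretation reproduces the 3.8 and 17.6 percentage point figures.

def equivalents_contribution(total, gcse_only, vocational_gcse):
    """Percentage point contribution of equivalent qualifications."""
    return round(total - gcse_only - vocational_gcse, 1)

# year: (total achieving 5 A*-C incl. equivalents, GCSE-only pp, vocational-GCSE pp)
table1 = {"2004/05": (56.8, 51.7, 1.3), "2009/10": (75.4, 56.3, 1.5)}

for year, row in table1.items():
    print(year, equivalents_contribution(*row))
```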
The effect of equivalents on the GCSE A*-C Key Stage 4 measure is significantly reduced if the measure must also include English and Maths GCSEs. Using provisional data from 2010/11 for maintained schools only, the effect is cut by two-thirds. Equivalents only contribute around 10 per cent of the threshold compared to 30 per cent if the threshold does not have to include English and Maths. See Figure 3.
In addition to the increasing popularity of equivalent qualifications and their effect within the performance tables, the distribution of GCSE grades itself has been shifting towards higher grades.
Figure 4 shows the cumulative distribution of GCSE grades, from A* to G, U and X, in all schools at six-year intervals since 1997/98; the most recent year’s provisional result is also included. This distribution has been shifting outwards over time: by 2010/11, 8 per cent of GCSE entries were awarded an A* compared with 4 per cent in 1997/98, and 73 per cent of entries achieved a grade C or above compared with 54 per cent in 1997/98. Similarly, at lower grades, 97 per cent of entries were awarded a Grade F or above in 2010/11 compared with 93 per cent in 1997/98.
The most marked shift in the GCSE distribution shown here appears to have occurred in the six years between 2003/04 and 2009/10.
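A cumulative distribution of the kind plotted in Figure 4 is built by running a cumulative sum over the grade shares. The shares below are illustrative, chosen only to be consistent with the 2010/11 figures quoted above (8 per cent A*, 73 per cent at grade C or above, 97 per cent at grade F or above); grade X (no result) is omitted for simplicity.

```python
# Sketch of building a cumulative grade distribution from per-grade shares.
# Shares are hypothetical apart from being consistent with the quoted 2010/11
# cumulative points (A* = 8, C-or-above = 73, F-or-above = 97).

from itertools import accumulate

grades = ["A*", "A", "B", "C", "D", "E", "F", "G", "U"]
shares = [8, 15, 22, 28, 10, 8, 6, 2, 1]  # per cent of entries at each grade

# Per cent of entries at each grade or above.
cumulative = dict(zip(grades, accumulate(shares)))
print(cumulative["C"], cumulative["F"])
```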
An analysis of GCSEs in applied subjects (double and single awards) in England shows a rapid increase in the pass rate at Grade A*-C, from 34 per cent in 2003/04 to 50 per cent in 2009/10. Within the total, there is a high degree of variation in A*-C pass rates by subject area. In 2010/11, for example, in the most popular vocational single award GCSE, additional applied science (approximately 29,000 entries), the A*-C threshold was met by 35.8 per cent of candidates. This compares with an A*-C pass rate of 62 per cent in the next most popular single vocational GCSE, catering studies (nearly 16,000 entries).
In addition to the changes in absolute measures of GCSE performance published by the Department for Education, performance results have also been published on whether pupils are making their “expected progress”.
This measures how far pupils achieve at least their expected grade at GCSE or equivalent given their scores achieved at the end of Key Stage 2. Figure 5 shows how in both English and Maths, the percentage of eligible pupils making the expected level of progress has increased consistently since 2007/08.
Alternative evidence on the performance of British students is available through national and international standardised tests of ability. A well-known international programme measuring educational achievement is PISA (the Programme for International Student Assessment). Within the UK, there are test systems called YELLIS (Year 11 Information System) and ALIS (A Level Information System), administered by the University of Durham for GCSE and A level students respectively.
Recent academic work designed to produce internationally comparable measures of education outcomes could, in principle, offer potential as a source of quality adjustment.
The OECD’s Programme for International Student Assessment (PISA) set out to develop a method of measuring ‘how far students approaching the end of compulsory education have acquired some of the knowledge and skills essential for full participation in the knowledge society’. As national examination systems present significant difficulties for international comparison, the PISA researchers set a random sample of fifteen-year-old students a single examination which tested reading skills, numerical ability, and scientific knowledge and know-how.
Four waves of the PISA tests have been carried out to date in countries across the OECD. While each subject area was covered in each set of tests, each wave had a specific focus. In 2000, the initial set of PISA examinations focused on students’ reading ability; in 2003, the follow-up set of tests focused on mathematical ability; in 2006, a further set of examinations focused on scientific knowledge; and the 2009 PISA tests were once again focused on reading. In principle, these offer snapshots of academic attainment in each OECD country which allow researchers to track educational achievement in each subject area over a nine-year period. Two further waves of PISA tests are planned, for 2012 and 2015.
In the UK, while all four waves of the PISA tests have taken place, the response rate for the 2000 and 2003 exams was not sufficient for the results to be used with confidence. Consequently, the UK PISA results are limited to just two data-points for each subject area. Table 2 shows the mean scores for the UK, England and the OECD as a whole in 2006 and 2009.
Relative to other countries within the OECD, the 2006 results place the UK around the average in Reading and Mathematics, and above average in Science. In 2009, the results largely confirm the earlier findings. However, the limited time-horizon of the data and the potential for error induced through variations in sample selection make analysis challenging. Jerrim (2011) also points out difficulties in comparing test scores for England over time and relative to other countries, due to changes in the way the survey was implemented.
While in principle the PISA results could be used to quality-adjust education output, several factors make the OECD data series impracticable. Firstly, earlier waves of the PISA survey in the UK suffered from bias arising from selective non-response (OECD, 2010). While school response is likely to become mandatory in the future, which will help to reduce this problem, a long time series cannot be constructed for the UK at present. The three-year gap between publications also makes the series too infrequent for the annual national accounts and productivity work.
As discussed in the last Education productivity article (ONS 2010), Coe (2007) considers how a student with an average YELLIS score of 45 later went on to perform in their GCSEs. Across 26 subjects, the report finds an improvement of around half a grade between 1997 and 2006. This differs across subjects, with maths increasing by almost half a grade over the period, and science remaining relatively flat.
For 18 year old students preparing for A levels, Coe (2007) shows declining standardised scores on the optional International Test of Developed Abilities (ITDA) since 1988, with a break in the time series in 2002. Since 2002, candidates’ performance in the test has remained broadly constant. In contrast, the percentage of higher passes at A level has increased significantly, with a student with an ITDA score of 50 achieving an improvement of around two grades between 1988 and 2006. Again, this differs between subjects, with mathematics showing the most change.
Coe (2007) offers a number of possible reasons for these trends which include improved teaching and learning, the increased prevalence of re-sittable modular exams, or changes in education content. A real decline in education standards or “grade drift” is also a possibility but one which is very difficult to test for.
In terms of the regulation of England’s examination system and awarding bodies, Ofqual (Office of Qualifications and Examinations Regulation) has a role in maintaining public confidence in the examination system and assessments. It achieves this by monitoring grading standards in examinations and monitoring the quality of marking by qualification awarding organisations (Ofqual 2011a).
Specifically, Ofqual runs a comprehensive programme of monitoring and auditing, including reviews of standards for the main GCSEs and A levels over time and across awarding bodies. The reports from these reviews are published on the Ofqual website. They provide an in-depth discussion of the requirements placed on candidates by different examinations and of the levels of performance required to achieve a particular grade, and consider how these two elements relate to each other.
Ofqual also commissions and publishes an annual survey of public attitudes to GCSEs and A levels. The most recent survey, from 2010, reports that confidence in the GCSE system is high overall and unchanged from 2009. All audiences are reported to remain largely confident that most GCSE candidates are awarded the right grades (Ofqual, 2011b).
Some further triangulation evidence regarding the factors influencing academic attainment can be considered to provide more context for the improvements seen in GCSE or equivalent attainment measures in England. Measurable improvements or gains in factors such as teaching quality or expenditure levels which have a positive relationship with attainment may help corroborate the improvement in measured attainment seen in England over recent years.
Many academic studies using US and UK data have highlighted the importance of teaching in explaining differences in attainment levels across schools; see, for example, Rivkin, Hanushek and Kain (2005), Rockoff (2004) and Gibbons et al (2011). These studies use statistical methods and panel data which take into account the effects of pupils’ family characteristics, prior ability, neighbourhood and school choice, and estimate coefficients for teacher effects on pupils’ attainment levels.
The Sutton Trust (2011) quotes evidence that having a very effective teacher (defined by the Sutton Trust as one at the 84th percentile, or one standard deviation above the mean, in terms of value added scores) rather than an average teacher raises each pupil’s attainment by a third of a GCSE grade.
The Sutton Trust (2011) also reports that the impact of an effective teacher, as opposed to an average one, is the same as the effect of reducing class size by ten students in Year 5 and by thirteen or more students in Year 6.
Teaching quality effects can also be found in studies of lifetime earnings. For the US, Hanushek (2011) finds that a teacher one standard deviation better than the average, with a class size of 30, generates over $460,000 in present value. The Sutton Trust finds that bringing a poorly performing teacher up to the average would increase the lifetime earnings of a class of 30 by £240,000-430,000.
The literature (e.g. Aaronson et al, 2007; Rivkin, Hanushek and Kain, 2005) also finds that direct measures of teacher experience and qualifications do not explain differences in teaching performance very well. However, authors such as Koedel and Betts (2007), quoted in the Sutton Trust (2011), find difficulties of instability and potential bias in the use of value added test scores to measure teacher performance. Personal evaluation, on the other hand, is found to be highly correlated with future pupil learning.
In practice, the inspection regime offers a way to assess teaching and leadership standards in schools on a consistent basis. Evidence from the most recent, time-consistent period of inspection in England, between 2005/06 and 2008/09, shows an improvement of 10 percentage points in the overall effectiveness of primary schools, from 58 per cent to 68 per cent rated as “good” or above. The equivalent improvement in secondary schools is 14 percentage points, from a lower base of 49 per cent (Ofsted annual reports).
There are, however, difficulties with using an inspection measure as a reliable measure of overall quality for England, due to relatively frequent changes to the inspection framework, selection criteria for undertaking an inspection which are increasingly based on risk rather than a random sample, and changes to the scoring systems. For example, current policy in England is for outstanding schools not to be subject to routine inspection unless risk assessment raises concerns about performance (Ofsted, 2011).
The last Education productivity article (ONS, 2010) discussed the results of some recent studies of education production functions. Some of these have shown a positive relationship between expenditure in schools, pupil teacher ratios (or class size) and attainment levels after controlling for student background and other effects. Where found, class size effects are typically small, or statistically insignificant.
Table 3 summarises a selection of the evidence on teacher quality and other input/school effects such as class size and expenditure.
| Study | Subject and data | Key findings | Assessment |
| --- | --- | --- | --- |
| Aaronson, Barrow and Sander (2007) | Teachers and student achievement (US data) | Increase in teacher quality of one standard deviation above the mean is associated with a 0.15 sd increase in maths test score | Supports positive teacher effects on attainment |
| Blatchford et al (2004) | Class size (UK data) | Found positive effects of class size on first year of pupils’ attainment (age 4-5) in literacy and mathematics. Effects did not persist strongly in literacy or mathematics | Supports limited effects of reducing class size on attainment |
| Chowdry, Greaves and Sibieta (2010) | The Pupil Premium: assessing the options | Reviews four studies from UK National Pupil Database on expenditure and school attainment measures. Find mostly positive effects of expenditure on attainment at KS2 and KS3 | Concludes that increasing resources is likely to have a direct impact on pupils’ attainment. Also some support for reducing pupil-teacher ratios with extra resources, although effect varies between schools |
| Gibbons, McNally and Viarengo (2011) | Does additional spending help urban schools | Find large effects of expenditure on educational attainment at the end of primary school. £1000 per student extra raises KS2 test scores by 0.25 standard deviations. Effects tend to be higher in schools with more disadvantaged pupils | Supports a positive relationship between school expenditure and pupil attainment |
| Hanushek (1986) | The economics of schooling (US data) | Summarises other studies on education attainment. Meta data cautiously supports no strong relationship between spending and attainment. Introduces idea of (un-measurable) skill differences among teachers, leading to different outputs | More cautious on link between expenditure and attainment |
| Holmlund, McNally and Viarengo (2009) | Does money matter for schools? | Find positive effect of expenditure on national tests taken at end of primary school. Find bigger effect for more disadvantaged students | Supports positive relationship between spending and attainment at primary level |
| Jenkins, Levačić and Vignoles (2006) | Estimating the relationship between school resources and pupil attainment at GCSE | Using instrumental variables, find small but significant effects of resources (expenditure and pupil/teacher ratio) on GCSE attainment in science, but not in maths or English. Find stronger effects for bottom 60% of pupils at KS2 for maths and science | Supports positive relationship between resources and attainment. Measure uses capped GCSE score |
| OECD (2011) | Reforming education in England | Spending per pupil rising, but international attainment relatively flat. Report student teacher ratios appear to have larger impact on outcomes than spending (Chowdry et al, 2010). More time spent on instruction and homework associated with higher PISA scores in UK | Recommends refinements to GCSE based quality adjustment. Also more emphasis on support for disadvantaged students in terms of expenditure and teacher support |
| Rivkin, Hanushek and Kain (2005) | Teachers, Schools and Academic Achievement | There are important gains in teacher quality in the first year, and smaller gains over the next few years. Class size has a modest but statistically significant effect on mathematics and reading achievement growth that declines as students progress through school. More experienced teachers should be encouraged to remain in the classroom of disadvantaged students | Confirms a generally positive relationship between input quality, input mix/class size and attainment |
| Rockoff (2004) | Impact of teachers on student attainment (US data) | Find large statistically significant positive teacher fixed effects. Find a statistically significant positive effect of teaching experience on reading scores | Supports positive relationship between teacher quality and attainment |
| Slater, Davies and Burgess (2009) | Do teachers matter? (English data) | Being taught by a higher quality rather than low quality teacher adds 0.425 of a GCSE point per subject to a student (25% of the SD of GCSE points). Identifying good teachers ‘ex ante’ is difficult | Supports positive relationship between teacher quality and test scores for GCSE (high stakes) exams |
The Atkinson Review (2005) discussed quality adjustment for public sector output and how it should be approached. Atkinson suggests there are at least three ways in which to measure quality in the National Accounts:
by differentiating the services by quality characteristics, e.g. a punctual bus service compared to a delayed bus service
by repackaging the unit of output by quality into the measure of output, e.g. a higher grade petrol gives 10 per cent more miles to the litre, so one unit is equivalent to 1.1 units of the lower grade. This applies when we are sure that 100 per cent of the outcome is attributable to the supplier
Within the latter approach to quality-adjustment, there are a number of different methods that could be employed. The issue of how to reconcile the timing of changed outcomes with the pattern of change in inputs – which may be regarded as an investment good in early years of education - is also raised by Atkinson. Schreyer (2010) also warns of the need to establish a factor of proportionality between the change in scores and the change in output - a 10 per cent rise in scores does not automatically translate into a 10 per cent rise in the volume of output.
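Schreyer's point about proportionality can be sketched simply. The factor `alpha` below is purely illustrative, not an established ONS or Schreyer parameter; it stands for whatever share of the change in scores is judged attributable to education output.

```python
# Sketch of a proportionality factor between score growth and output growth:
# a 10 per cent rise in attainment scores need not translate one-for-one into
# a 10 per cent rise in output volume. `alpha` is an illustrative assumption.

def output_growth(score_growth, alpha):
    """Scale attainment score growth into quality-adjusted output growth."""
    return alpha * score_growth

score_growth = 0.10                          # a 10 per cent rise in scores
full = output_growth(score_growth, 1.0)      # full attribution to output
half = output_growth(score_growth, 0.5)      # 50 per cent attribution
print(full, half)
```

Under full attribution the whole 10 per cent rise flows into output; under 50 per cent attribution only a 5 per cent rise does.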
Ideally, to measure quality adjustment, a schools-only effect would be identified each year and applied to the level of activity provided by schools. In practice, such value-added methods have been complex and difficult to implement at a national level and across the four countries of the UK. As a proxy, the current method attributes all of the change in GCSE and equivalent examination performance at the end of Key Stage 4 to the whole volume of pupils from age 4 to 18 each year. This is in contrast to the proposed approach in Denmark, which suggests attributing only 50 per cent of the change in the quality measure to estimates of education output (Deveci, 2009).
The evidence in this triangulation paper on trends in alternative methods of measuring attainment suggests that estimates of education output will be sensitive to the choice of performance measure used, e.g. APS, capped APS, A*-C thresholds, or PISA scores. When breaks in time series occur, as with the recent changes to the definition of what examinations qualify within Average Point Scores, this will cause significant difficulties for this method of quality-adjustment.
Alternative quality-adjustment measures suggested as worth consideration by Atkinson include the quality of teaching as measured by school inspections, and quality of resources based on class size or pupil/teacher or adult ratios. Also, the value to the individual of gaining a set of qualifications in terms of future earnings could be explored, which would be expected to rise over time in line with real average earnings in the economy. Studies such as McIntosh (2004) and literature reviews such as Dickson and Smith (2011) into measuring the returns to education and obtaining different types of qualification could be used as a starting point for this work. These studies generally find strong positive wage rate effects of around 20 per cent for men for those obtaining qualifications at age 16 compared to none at all.
Finally, within this area of public sector output measurement, ONS will be bound by the forthcoming ESA 2010 regulation from 2014 onwards, which will affect how quality adjustment of public service output is permitted to be included in core National Accounts. Once implemented, this will have a bearing on the type of quality adjustment ONS applies to education output within National Accounts and productivity articles.
Given previous changes to the method of calculating APS by including a wide range of equivalent qualifications, and the recent announcement by the Government regarding equivalencies within English school performance tables at GCSE, ONS has reached the view that the current method of quality-adjustment requires further development. ONS will therefore consider the options for alternative quality-adjustment methods outlined in this article, in consultation with data providers, users and stakeholders.
This triangulation article has considered recent trends in various performance measures of attainment at GCSE level in England such as capped and uncapped Average Point Scores, and threshold measures of 5 A*-C GCSEs and equivalents.
It has analysed the underlying trend in the volume of GCSE equivalent examinations taken in recent years, highlighted in the Wolf Report (Wolf, 2011), and the upward movement of the overall distribution of GCSE grades in England over the last 13 years. Independent evidence from other tests of student ability and knowledge, such as YELLIS and PISA, shows much less improvement over recent periods than Average Point Scores. On the other hand, there are improvements in the expected progress that students are making, and some academic evidence linking input effects to higher attainment levels. Until recently, Ofqual had not recommended changes to exam criteria; some tightening of the coverage of examinations in English literature, geography, maths and history has now been announced (Ofqual, 2012).
There is a body of academic evidence that finds important school-level effects on pupil attainment once the difficulties of study design, such as controlling for endogenous resource allocation policies, levels of deprivation, prior attainment levels and socio-economic background, are overcome. This leads us to conclude that there are probably statistically significant positive effects on attainment in the UK from higher teacher quality, and from expenditure per pupil. The evidence is generally weaker on the relationship between class size or pupil-to-teacher ratios and attainment, and the effects often seem to vary by subject or by ability level, rather than being present across the board.
In addition, there is the alternative concept of assigning values based on the returns that can be achieved in the labour market to obtaining different types of qualification as a measure of the quality of education. ONS will explore the options for an alternative method of quality adjustment that takes into account the issues raised in this article and by the Atkinson Review. We will do this in consultation with data providers, users and stakeholders.
Details of the policy governing the release of new data are available by visiting www.statisticsauthority.gov.uk/assessment/code-of-practice/index.html or from the Media Relations Office email: email@example.com
Atkinson, T (2005) Atkinson Review: Final Report. Measurement of Government Output and Productivity for the National Accounts. Palgrave Macmillan, UK.
Blatchford, P., Bassett, P., Brown, P., Martin, C., and Russell, A (2004) The effects of class size on attainment and classroom processes in English Primary Schools (Years 4 to 6) 2000-2003. Research Brief Number RB605. DfES (ISBN 1 84478 3731) (http://www.dfes.gov.uk/research/data/uploadfiles/RBX13-04.pdf)
Department for Education (2012a) Reform of 14–16 Performance tables from 2014. Available at http://www.education.gov.uk/schools/teachingandlearning/qualifications/otherqualifications/a00202523/reform-of-14-to-16-performance-tables
Deveci, N (2009) Quality of public health care and education services. ONS http://www.ons.gov.uk/ons/media-centre/events/ukcemga-and-niesr-conference/ukcemga-and-niesr-international-conference-papers/index.html
Wolf, A (2011) Review of vocational education – the Wolf Report. Available at https://www.education.gov.uk/publications/standard/publicationDetail/Page1/DFE-00031-2011