Revisions to Official Statistics
Statistics are most often revised for one of two reasons:
1. For certain statistics such as Gross Domestic Product (GDP), migration statistics, and the Retail Sales Index (RSI), initial estimates are released with the expectation that they may be revised and updated as further data becomes available.
2. Revisions may also be made when methods or systems are changed.
Planned revisions of either kind should not be confused with errors in released statistics, which are genuine mistakes. Despite our best efforts and quality control procedures, mistakes can happen. When they do, corrections are made in a timely manner, announced and clearly explained to users in line with the Code of Practice for Official Statistics (principle 2, practice 7).
Further information about these and other reasons why revisions are made is outlined below.
Revisions made due to further data being made available
Many of the regular economic statistics are first released as initial estimates, enabling timely political and commercial decisions to be made based on the most up-to-date data available at the time.
One of the most common reasons for revising statistics is therefore to replace previously released initial estimates. It should always be clear from the announcement of a first release of statistics (usually in the form of a Statistical Bulletin) whether the figures are initial estimates that may be subject to revision later.
Initial estimates are often released before all data has been collected, but you can be sure that, whenever we publish a statistical output, it draws on all the relevant source information available at the time of release. For example, the first monthly release of the latest Retail Sales Index is based on about 65 per cent of respondents to the relevant survey, which is all that is available in the first 10 days after the end of any given month. The figures are either confirmed or revised when we receive more responses.
If we subsequently make revisions, they are generally small and increase the precision of the initial estimates, although very occasionally new information may lead to unexpectedly large revisions.
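The mechanics of this kind of revision can be sketched as follows. This is a simplified, hypothetical illustration: the grossing-up rule, the figures, and the response rate are all invented for the example and do not reflect how any real series is actually compiled.

```python
# Hypothetical sketch: an initial estimate from partial survey returns,
# revised once the remaining responses arrive. All figures are invented.

def estimate_total(responses, response_rate):
    """Gross up the sum of received responses by the assumed response rate."""
    return sum(responses) / response_rate

# First release: roughly 65 per cent of respondents have replied.
early_responses = [120.0, 95.0, 110.0]           # returns received so far
initial = estimate_total(early_responses, 0.65)  # grossed-up initial estimate

# Later release: the remaining responses are in, so no grossing-up is needed.
all_responses = early_responses + [88.0, 102.0]
revised = estimate_total(all_responses, 1.0)

revision = revised - initial  # usually small; occasionally larger
```

In practice the grossing-up is far more sophisticated, but the principle is the same: the initial estimate is the best that can be produced from the data available at the time, and the revision reconciles it with the fuller picture.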
Benchmarking also refines the precision of statistics and is a longer-term reason for revising a statistical series. Some short-term, in-year statistics may be produced using data sources that are readily available but of a lower quality than other sources, or they may be based on smaller sample sizes than would be the case for a large annual survey.
When more reliable data sources become available, short-term statistics can be benchmarked against them and appropriate adjustments made. For example, estimates of employee jobs are compiled from quarterly business surveys and revised when they are benchmarked to estimates from an annual business survey. The annual estimates are more detailed and accurate, but less timely.
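A minimal sketch of the benchmarking idea, assuming a simple pro-rata adjustment: the quarterly figures are scaled so their annual total matches the more accurate annual result. Real benchmarking methods (such as Denton proportional adjustment) also preserve the smoothness of the series; the figures here are invented for illustration only.

```python
# Simplified pro-rata benchmarking: scale timely quarterly estimates so
# their annual total matches a more accurate annual benchmark. Invented data.

quarterly = [240.0, 250.0, 255.0, 255.0]    # timely but less accurate
annual_benchmark = 1020.0                   # detailed annual survey result

factor = annual_benchmark / sum(quarterly)  # 1020 / 1000 = 1.02
benchmarked = [q * factor for q in quarterly]
# The benchmarked quarters now sum exactly to the annual figure.
```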
All statistics retain an element of statistical uncertainty. Statistics are often based on samples and approximations of the 'true' value of the activity being measured. As a consequence of this normal statistical process, all statistics are subject to statistical errors. The International Passenger Survey, for example, is a sample survey and therefore the results are subject to sampling errors and non-sampling errors.
The terms 'statistical error' or 'sampling error' should not be confused with errors arising from human mistakes or system failures. Statistical sampling errors are inherent in the processes of sampling and estimation.
Non-sampling errors may also arise. These may be because of problems with coverage, non-response, respondent error (for example, failure to understand questions), or the use of proxy measures when direct measurement is not possible. Human error and system failures may also lead to non-sampling errors.
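The notion of sampling error described above can be illustrated with a standard worked example: the standard error of a sample mean shrinks as the sample grows, but never reaches zero, which is why some statistical uncertainty always remains. The sample values below are invented.

```python
# Illustrative sketch of sampling error: the standard error of a sample
# mean quantifies the uncertainty inherent in sampling. Invented figures.
import math
import statistics

sample = [2.1, 3.4, 2.8, 3.0, 2.5, 3.2, 2.9, 3.1]
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

# An approximate 95 per cent confidence interval for the 'true' value:
interval = (mean - 1.96 * se, mean + 1.96 * se)
```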
When new information becomes known, revisions are incorporated to reduce statistical errors in previous figures.
Revisions due to methods or systems
New methods, techniques or systems
While users naturally want statistics to be comparable over time, the things we measure and our sources of information are continually changing. The methods, techniques and systems we use must keep pace.
For example, new methodology on measuring drinking habits introduced in 2007 reflected the fact that the average glass of wine now contains more alcohol (a glass of wine is now considered to be 2 units rather than 1). If we simply went on assuming that the alcoholic strength of wines never changes, or that the popularity of a large glass of wine hasn't grown over the years, then statistics on drinking habits would become inaccurate.
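The arithmetic behind that change is simple but consequential: the same reported consumption implies twice as many units under the revised assumption. A toy illustration, with invented figures:

```python
# Toy illustration of the 2007 unit-conversion change: identical reported
# drinking implies more units under the new assumption. Invented figures.

glasses_per_week = 7
old_units = glasses_per_week * 1   # pre-2007 assumption: 1 unit per glass
new_units = glasses_per_week * 2   # revised assumption: 2 units per glass
```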
While stable methods, systems and techniques ensure comparability over the life of a statistical series, revisions ensure that the statistics more accurately reflect reality in a rapidly changing world.
Before we make changes that affect, for example, coverage, definitions or methods, we consult users in line with the Code of Practice for Official Statistics (protocol 1, practice 7). If a proposed revision affects time series data, we make consistent historical data available where possible (principle 4, practice 7).
When a statistical index is rebased, it can be because the relative importance of the individual component items has been re-evaluated and the 'weight' attributed to each has changed. It can also be because the index reference period or price reference period has been updated.
The Producer Prices Index (PPI), for example, has 2005 as its base year (2005=100) from which movements in the index are measured. The previous base year was 2000. The 2005 rebasing exercise also included changes to the relative weights allocated to each industrial sector that contributes to the index, taking account of changes in the preceding five years.
Rebasing does not mean that the underlying data from which the re-referenced index is calculated have been revised. It simply means that the previously published statistical series will be made available on the new basis. Rebasing allows an index to remain relevant and accurate.
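Re-referencing an index is a simple rescaling, as the following sketch shows. The index values are invented for illustration; the point is that the movements in the series are unchanged, and only the reference point moves.

```python
# Sketch of re-referencing an index so that a new base year equals 100.
# The underlying movements are unchanged; only the reference point moves.
# Values are invented for illustration.

index_2000_base = {2000: 100.0, 2005: 112.0, 2010: 125.4}  # old base: 2000=100

new_base_value = index_2000_base[2005]
index_2005_base = {year: value / new_base_value * 100
                   for year, value in index_2000_base.items()}
# 2005 now equals 100; growth rates are identical in both series.
```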
Unlike corrections, most revisions are planned. For those outputs that are subject to scheduled revisions, we publish revisions policies, in line with the Code of Practice for Official Statistics (principle 2, practice 6). These policies describe how we will communicate to users the nature of the revised data and the reasons for revisions, the internal documentation required, and the circumstances in which historical data will be revised. When we publish revised statistics, we include an explanation of the nature and extent of the revisions.