
National Statistics Methodology Advisory Committee 13th meeting 27 November 2007


Paper 1: Quantifying and Measuring Revisions in Time Series Estimates

This paper discusses a decomposition approach being developed to 1) help users better understand movements and revisions and their impact on different time series estimates and 2) provide a framework for wider research. The extra information will enable ONS to better quality-assure its own processes and outputs, to improve the targeting of development resources, and to provide users with more detailed and transparent information on which to base decisions.

Committee conclusions:

  • The decomposition approach should be used as part of other ONS revisions systems, particularly at an aggregate level

  • Suggested improvements to the plots included using standard definitions for box and whisker plots and adding correlation (r) values between components

  • Report the numerator and denominator of the MSR ratio separately, rather than the ratio itself

  • Contextual information should be provided to help users interpret the trend

  • Future work should continue to develop visual representations, considering adding in-depth analysis of data, sensitivity analysis, detection and treatment of systematic bias and providing more analysis of serial correlation in the irregular component

  • The Mainwaring and Skipper work on presenting a system for recording and quantifying revisions would benefit from the inclusion of the root MSR measures

  • If possible, these methods should be applied across the Government Statistical Service (GSS)
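
A minimal sketch of the revision measures referred to above, assuming revisions are defined as a later estimate minus an earlier one (the sign convention and choice of vintages are assumptions; the Mainwaring and Skipper system may define them differently):

```python
import math

def revision_stats(preliminary, final):
    """Simple revision statistics between two vintages of a series.

    Revisions are taken here as final minus preliminary. The mean
    revision indicates systematic bias; the mean squared revision (MSR)
    and its root summarise the overall size of revisions.
    """
    revisions = [f - p for p, f in zip(preliminary, final)]
    n = len(revisions)
    mean_revision = sum(revisions) / n
    msr = sum(r * r for r in revisions) / n
    return {"mean_revision": mean_revision,
            "msr": msr,
            "root_msr": math.sqrt(msr)}

# Illustrative figures only
stats = revision_stats([100.0, 102.0, 101.0, 104.0],
                       [100.5, 101.5, 102.0, 104.5])
```

Reporting the numerator and denominator of an MSR-based ratio separately, as the committee suggests, amounts to publishing quantities like `msr` alongside the ratio's denominator rather than only their quotient.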

Paper 2: Winsorisation for estimates of change and outstanding issues with the implementation of Winsorisation for level estimates

This paper describes issues surrounding the implementation of a value modification treatment for outliers in business surveys, known as Winsorisation. It outlines the basic theory of one-sided Winsorisation for estimates of levels, which has been implemented in many ONS business surveys. A number of outstanding issues with this method are discussed. The main body of the paper describes recent research work to develop Winsorisation methods that are optimised for estimates of change rather than estimates of level. Two methods are discussed along with an evaluation of their effectiveness.
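
The one-sided Winsorisation for level estimates outlined above can be sketched as follows. The cutoff and weight below are illustrative assumptions; in practice the cutoff is derived by approximately minimising the estimated MSE of the level estimate, and ONS's exact formulation may differ:

```python
def winsorise_one_sided(y, weight, cutoff):
    """One-sided Winsorisation of a reported value.

    Values at or below the cutoff are left unchanged; a value above it
    keeps the cutoff plus only a 1/weight share of the excess, shrinking
    extreme observations towards the cutoff. This is one common form of
    one-sided Winsorisation, given here only as an illustration.
    """
    if y <= cutoff:
        return y
    return cutoff + (y - cutoff) / weight

small = winsorise_one_sided(50.0, 10.0, 100.0)    # unchanged: 50.0
large = winsorise_one_sided(1000.0, 10.0, 100.0)  # shrunk: 190.0
```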

Committee conclusions:

  • Sensitivity analysis should be done on estimation procedures for the mean

  • Data should be pooled from several years to obtain an estimate of the bias instead of calculating the bias on several survey occasions and averaging them later

  • To obtain consistency between the estimate of change and the change in the level estimates, consider minimising the sum of the MSE of the level estimate on occasion 1, the MSE of the level estimate on occasion 2 and the MSE of the change in level, as a compromise to minimising only the MSE of the change in level

  • Method 2 was not preferred, as it does not guarantee consistency between estimates of change and changes in level estimates if the level estimates are Winsorised separately

  • A comparison of the proposed estimators with other types of estimator (robust alternatives) could be carried out

  • Consider modelling the sample with different probability distributions, estimating the parameters of interest using M-estimators under a log-normal or other skewed distribution, to see which approach is more efficient
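
The compromise criterion in the conclusions above, minimising the sum of three MSEs rather than the MSE of change alone, can be expressed as a single objective over candidate Winsorisation cutoffs. The MSE figures below are invented purely for illustration; in practice each component would be estimated from the survey data:

```python
# Invented MSE components for three hypothetical candidate cutoffs:
# (MSE of level at occasion 1, at occasion 2, and of the change estimate)
candidate_mses = {
    "low cutoff":    (4.0, 4.2, 1.0),
    "medium cutoff": (2.5, 2.6, 1.4),
    "high cutoff":   (2.0, 2.1, 3.0),
}

def compromise_objective(mse_level_1, mse_level_2, mse_change):
    # Sum of all three MSEs: trades a slightly worse change estimate
    # for consistency between level and change estimates.
    return mse_level_1 + mse_level_2 + mse_change

best = min(candidate_mses, key=lambda c: compromise_objective(*candidate_mses[c]))
```

With these illustrative numbers the medium cutoff wins (total 6.5), even though the low cutoff has the best change MSE on its own.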

Paper 3: Measuring and Monitoring Response Burden in Business Surveys

This paper provides an overview of the research, question sets and findings from two Eurostat-funded projects which developed methods for measuring and monitoring perceived response burden in business surveys. The paper also provides a comparison with ONS's implementation of the Standard Cost Model (SCM), currently the preferred European method for measuring objective administrative burden. The paper recognises that ONS's implementation of the SCM meets the data requirements for monitoring and measuring the economic costs to businesses, in line with the Prime Minister's Instructions on the Control of Statistical Surveys. However, the approach does not measure perceived response burden. This matters because perceptions that a specific survey is burdensome can lower response rates, reduce data quality and increase errors.

Committee conclusions:

  • This work is important and relevant to all countries

  • A quality driven approach to response burden is a viable objective, but a clear distinction must be made between actual and perceived burden, remembering that there is a relationship between them

  • Measuring perceived burden could be done during the testing phase

  • A number of suggestions were made regarding the core perceived response burden question set

  • The task of measuring response burden should itself be kept to a minimum by keeping these exercises to a reasonable size

  • A change in questionnaire design can initially raise the response burden placed on businesses; this then decreases as respondents become familiar with the new design

Paper 4: Improving Migration Statistics: A Review of Survey Data Collection at UK Ports

In December 2006, the Interdepartmental Task Force on Migration Statistics published a review which proposed a number of recommendations to improve the quality and coverage of migration statistics. One recommendation proposed a review of the way in which ONS collects information on passenger-flows through UK ports. This paper presents the planned changes to the International Passenger Survey (IPS) and further work planned to improve estimates of international migration flows in and out of the UK. As a result of the first phase of a Port Survey Review, extra 'filter shifts' to boost the sample size of migrants will be introduced from April 2008 onwards at Manchester, Stansted and Luton airports, among other changes to the IPS. The interim report also sets out some areas for further detailed development work to be undertaken during 2008, including the feasibility of using administrative data to complement a port survey. The intention is that this detailed work will recommend more significant changes to port data collection, which may be implemented from 2009 onwards.

Committee conclusions:

  • The confidence interval on the estimates was not large in relation to the overall size of the population; in assessing fitness for purpose, each use of the estimates would need to be judged on its own merits

  • There is also a need to look at other sources of error, for example non-response bias (such as more refusals in some groups of people, or language issues) and questionnaire design issues

  • The feasibility of some form of bespoke survey should be investigated. It was suggested that the feasibility of data collection by immigration agency staff working at border points should be assessed

  • The current sample design of the IPS is based on past passenger-flows. A more dynamic sampling design should be considered, for example using predictive modelling of where increases or decreases in passenger-flows will take place. ONS acknowledged that the current approach is limited by being retrospective; this is one reason why the methods are being reviewed

  • Information from passport scanning could help produce a sampling frame, and the feasibility of this approach should be assessed. It is likely this would also require information on intent and purpose of travel; some ideas as to how this could be done were made

  • The feasibility of the use of information from other countries should be investigated

Paper 5: Overcoverage in the 2011 Census

This paper lays out the background to the problem of overcoverage in the census, international practice and developments in this area, and the areas of research that will be addressed. Early ideas for measuring overcount are discussed, such as survey- and matching-based methods. In particular, there is a question of whether a methodology can be developed that integrates overcoverage assessment within the undercoverage assessment framework.

Committee conclusions:

  • Undercoverage is the major problem, but overcoverage also needs to be assessed

  • The assumption that there would be no overcounting in the Census Coverage Survey (CCS) was questioned; this assumption is critical to the dual system estimation (DSE) methodology for undercount

  • The use of DSE demonstrated a clear rationale for the use of the Micro approach. An overcount estimate could be obtained from the difference between the pre- and post-adjustment DSE

  • The rationale for the use of the Macro approach for overcount was not clear. The case for using it may hold if over- and under-counts are assumed to be uncorrelated

  • It would be difficult to use both an E-sample and a CCS approach; if both were used, there would be a critical need to follow up cases in the Census, but not in the CCS

  • Recommended that any E-sample survey be small scale and high quality

  • An E-sample is important to both the Macro and Micro approaches
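
The dual system estimator (DSE) underlying the conclusions above is, at its core, a capture-recapture calculation. A minimal sketch, ignoring the stratification and adjustments used in practice:

```python
def dual_system_estimate(census_count, ccs_count, matched):
    """Capture-recapture (dual system) estimate of a true population size.

    census_count: people counted by the Census in an area or group
    ccs_count:    people counted there by the Census Coverage Survey
    matched:      people found in both sources
    Assumes the two counts are independent and, crucially, that neither
    list contains overcount -- the assumption about the CCS that the
    committee questioned.
    """
    return census_count * ccs_count / matched

estimate = dual_system_estimate(900, 800, 720)  # 1000.0 with these figures
```

Under the Micro approach discussed above, an overcount estimate could then be obtained from the difference between the DSE computed before and after adjusting for erroneous census records.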

Content from the Office for National Statistics.
© Crown Copyright applies unless otherwise stated.