The Census Quality Survey (CQS) is a voluntary telephone survey that we carried out across England and Wales after the Census 2021 data collection. The CQS allows us to assess how accurately people answered the questions in Census 2021. We carried out a similar survey following the 2011 Census (PDF, 1390KB).
The CQS works by asking a sample of people the same questions as were asked on the census. Respondents were asked to think back to their circumstances on Census Day. These survey responses were matched to each respondent's census questionnaire. We then calculated how many people gave the same answer to the survey and to the census, and how many people gave different answers.
Where people gave different answers, this might be because of a mistake in completing the census or a mistake answering the survey. This means that the CQS does not give us an exact measure of "respondent error" in the census. However, it does indicate which questions might be more subject to that error and the possible scale of such error.
During census processing, inconsistent or missing responses to census questions were flagged using edit rules and updated by imputation. CQS agreement rates include these cases in order to give a complete picture of quality. For more information, see "Impact of proxy responses, editing, and imputation" in Section 4.
The coronavirus (COVID-19) pandemic meant that the circumstances of some respondents could have been changing more than usual. This implies that respondents may have had more difficulty recalling their circumstances at the time of the census, as well as uncertainty about how to complete the questions on the census. This could potentially contribute to greater disagreement between Census 2021 and CQS for some topics than others, or differences in quality compared with past censuses.
The CQS should not be confused with the Census Coverage Survey (CCS). The CCS was used to estimate how many people did not respond to the census. This enabled us to adjust the data to provide the most complete picture of the entire population. We used the CQS to assess how accurately people completed the census questionnaire, and we have not used the results to adjust the census data. For more information about the CCS, see our Design for Census 2021 article.
In 2011, the Census Quality Survey (CQS) was conducted using face-to-face interviews. In 2019, during the Census Rehearsal, we conducted a pilot CQS to test possible methods for the full CQS in 2021. In the pilot survey, we tested both face-to-face and telephone interviewing.
While we expected face-to-face interviewing to provide the most accurate possible data, our test did not identify any specific problems with carrying out the CQS as a telephone survey. The Office for National Statistics (ONS) suspended face-to-face interviewing during the coronavirus (COVID-19) pandemic, and so we carried out the CQS as a telephone survey. Impacts of possible mode effects on agreement rates are discussed in this report where relevant.
We formed the sample by randomly selecting households that had responded to the census. We made sure that we selected broadly representative numbers of households in each region of England and Wales, and people possessing a range of demographic characteristics. For example, this included households with older residents or members of minority ethnic groups.
As in 2011, the CQS sampled only people living in households and did not approach people living in communal establishments, such as care homes or halls of residence. This means that the CQS results do not measure the accuracy of responses from communal establishment residents.
We sent letters to sampled households inviting them to contact us to arrange an interview. We monitored the numbers responding to these invitations and sent out additional invitations where needed.
The achieved response differed demographically from the known overall census population, for example by containing a higher proportion of older and retired people. When calculating agreement rates, we used weighting to ensure that results are representative of the census population. For more information, see "Calculating agreement rates" in Section 3.
The selected sample size for the CQS was 110,000 households. The final achieved sample size was 8,724 households, with a total of 16,044 known residents. This gives a household response rate of 7.9%.
Because of factors such as within-household non-response, the final usable data covered 8,598 households, with interview data for 11,939 residents. This exceeded our target of 5,000 households and 10,000 residents. The final usable household sample size is greater than the 2011 sample size of 5,172, though the number of resident interviews in 2011 was similar at 12,103.
The ONS telephone interviewers carried out the survey. The interviewers took extra measures to help people interpret the questions correctly. This included explaining categories and definitions where needed and reminding respondents to think back to what their situation was on Census Day, rather than at the time of the survey.
Interviews were carried out between 28 June and 27 August 2021, around three to five months after Census Day. Proxy responses (that is, responses on behalf of someone else in the household) were only taken for children aged under 16 years, and for people who had a health issue that meant that they could not respond to the survey themselves.
Census 2021 used data cleaning, edit rules, and imputation to fill in missing data and resolve inconsistent responses. However, missing values among CQS responses were not imputed, because this would introduce a confounding error to the results.
When the survey was complete, we linked the survey responses to the census responses. To maintain data protection, a census team matched the survey data to the census data within a secure environment. For resident questions, this linkage was primarily done by matching household ID, name, and date of birth, with clerical linkage of the remaining records. For household-level questions, we used the household data belonging to successfully linked usual residents.
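The deterministic stage of this linkage can be illustrated with a short sketch. This is not ONS production code: the field names, the name normalisation, and the matching key are assumptions for illustration, and in practice records failing the exact match would go to clerical review.

```python
# Illustrative sketch of deterministic linkage on household ID, name,
# and date of birth. Field names are invented, not the real schema.
def link_residents(cqs_records, census_records):
    def key(r):
        # Normalise names lightly so trivial differences do not block a match
        return (r["household_id"], r["name"].strip().lower(), r["dob"])

    census_index = {key(r): r for r in census_records}
    linked, unlinked = [], []
    for r in cqs_records:
        match = census_index.get(key(r))
        (linked if match else unlinked).append((r, match))
    return linked, unlinked

census = [{"household_id": "H1", "name": "Ann Smith", "dob": "1960-05-02"}]
cqs = [{"household_id": "H1", "name": "ann smith ", "dob": "1960-05-02"},
       {"household_id": "H2", "name": "Bob Jones", "dob": "1985-01-15"}]
linked, unlinked = link_residents(cqs, census)
print(len(linked), len(unlinked))  # 1 1
```

Records left in `unlinked` here stand in for those passed to clerical linkage in the real process.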
We successfully linked 98.6% of the CQS households and 98.0% of known CQS residents to a census response. However, some residents were linked but did not have usable data for agreement rate calculations. This included people who refused to answer the CQS, were unable to provide interviews, or were ineligible for other reasons.
For more detail about refusals, linkage, and sample sizes, see "Data linkage and usable sample size" in Section 5.
Calculating agreement rates
Most census questions are answered by selecting the appropriate tick-box from a list of options. Some boxes come with a write-in field to capture additional information, or details for an "other" category. Telephone interviewers for the Census Quality Survey (CQS) would read the list of options aloud and record the respondent's stated selection, including additional "write-in" information as appropriate.
For the purposes of the CQS, we typically consider an answer to agree if the verbal selection matches the census tick-box selection and do not require an exact match of write-in text. The country of birth, main language, and religion questions are exceptions to this. For these questions, write-ins are converted to standard codes before CQS analysis.
A small number of census questions allowed for more granularity in response options than the corresponding CQS questions. For these questions, census categories were aggregated to allow for direct comparison. Some CQS analysis also aggregates categories when this provides a clearer or more meaningful measure.
We weighted the CQS results to ensure that the proportions of people with different characteristics in the CQS are representative of the proportions seen in the census. For example, because a higher proportion of older people is seen in the CQS than in the census, older respondents are given lower weights than younger respondents.
We chose the characteristics for weighting by looking at past approaches to the CQS and at the degree of difference between the census and CQS distributions. We chose a limited number of characteristics to keep the number of category combinations (strata) relatively small, and to ensure that no combination was unrepresented within the CQS.
The weighting of rates for individual questions was calculated based on combinations of the following five traits:
whether or not the individual is aged 55 years and over
whether the individual is male or female
whether or not the individual's ethnicity is "white British"
whether or not the individual's census response was given by proxy
whether the individual lives in England or Wales
The age threshold was set at 55 years because there is a higher proportion of people at or above this age in the responding CQS sample than in the census population. Below this age, the proportions in the CQS sample were lower than in the census.
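As a rough sketch of how such weights can be derived, the following post-stratification example computes one weight per respondent as the census share of their stratum divided by the CQS share, so over-represented groups are weighted down. It is illustrative only: the real CQS weighting used combinations of the five characteristics above, not the single simplified characteristic shown here.

```python
# Illustrative post-stratification sketch: weight = census share of the
# respondent's stratum divided by the CQS sample share of that stratum.
from collections import Counter

def poststratify(cqs_strata, census_shares):
    """cqs_strata: one stratum label per CQS respondent.
    census_shares: dict mapping stratum -> share of the census population."""
    n = len(cqs_strata)
    cqs_shares = {s: c / n for s, c in Counter(cqs_strata).items()}
    return [census_shares[s] / cqs_shares[s] for s in cqs_strata]

# Toy example: people aged 55 and over are 60% of the CQS sample
# but only 40% of the census population, so they are weighted down.
sample = ["55+"] * 6 + ["under 55"] * 4
weights = poststratify(sample, {"55+": 0.4, "under 55": 0.6})
print(round(weights[0], 3), round(weights[-1], 3))  # 0.667 1.5
```

Note that the weighted sample total still equals the number of respondents; the weights only rebalance the strata.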
We used a slightly different approach to weighting the gender identity question. For this question the weighting characteristics were age, ethnic group, country and gender identity. This is because the CQS sample design included a boost to ensure that both "yes" and "no" answers to this question were represented, and so the weighting needs to account for this. Using only four characteristics was necessary to ensure that all strata had adequate representation.
The weighting of rates for household questions was calculated based on the following five traits:
whether or not the household contains any residents aged 55 years and over
whether or not the household contains any residents of an ethnicity other than "white British"
whether the household is in England or Wales
whether the household's accommodation type is "detached" or "not detached"
whether the household responded to the census online or by a paper form
The approaches for household and person weights are similar. Sex and proxy answers only apply to people rather than households, which is why these are not used in household weighting.
Calculation of agreement rates involves summing the weights to give the weighted number of households or usual residents who gave the same answer on the CQS and the census, and the weighted number who answered differently.
We calculated the agreement rate for each question using a simple formula. The formula is:
Weighted number of linked records where the census answer was the same as the CQS answer, divided by the weighted number of linked records where both the census answer and the CQS answer exist.
As an equation, this can be written as:

p = Nmatch ÷ Nvalid
In this equation:
p is the agreement rate
Nmatch is the weighted number of linked records where the census response was the same as the CQS response
Nvalid is the weighted total of linked records where both the census answer and the CQS answer exist
Respondents who did not answer a particular question on either Census 2021 or the CQS, or stated they do not know, are not included in the calculation of the agreement rate for that question.
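The calculation above can be sketched in a few lines of code. This is an illustrative re-implementation rather than the ONS code; answers are represented as strings, and missing or "don't know" answers as None so that they are excluded before weighting.

```python
# Illustrative weighted agreement rate: records missing either answer
# (or answering "don't know") are excluded from both numerator and
# denominator, as described in the text.
def agreement_rate(records):
    """records: iterable of (census_answer, cqs_answer, weight) tuples.
    None represents item non-response or a "don't know" answer."""
    n_match = 0.0  # weighted records where census and CQS answers agree
    n_valid = 0.0  # weighted records where both answers exist
    for census, cqs, weight in records:
        if census is None or cqs is None:
            continue  # not included in the calculation
        n_valid += weight
        if census == cqs:
            n_match += weight
    return n_match / n_valid

sample = [
    ("good", "good", 1.2),
    ("good", "very good", 0.8),
    ("fair", None, 1.0),   # CQS non-response: excluded
    ("bad", "bad", 1.0),
]
print(round(agreement_rate(sample), 3))  # 0.733
```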
Calculating confidence intervals
We also calculated a 95% confidence interval for each agreement rate. These intervals are also provided in our accompanying dataset.
To calculate a confidence interval, we must first calculate the standard error of the agreement rate. This is defined as:

SE = √(p × (1 − p) ÷ Nvalid)

In this equation:

SE is the standard error

p is the agreement rate

Nvalid is the weighted total of linked records where both the census answer and the CQS answer exist
The 95% confidence interval for each agreement rate is plus or minus 1.96 standard errors around the estimate calculated from the sample:

p ± 1.96 × SE
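A minimal sketch of this calculation, assuming the standard formula for the standard error of a proportion with the weighted valid total as the sample size:

```python
import math

# Illustrative 95% confidence interval for an agreement rate p,
# where n_valid is the weighted number of records in the calculation.
def confidence_interval(p, n_valid):
    se = math.sqrt(p * (1 - p) / n_valid)  # standard error of a proportion
    return p - 1.96 * se, p + 1.96 * se

# Example: a 90% agreement rate from a weighted total of 10,000 records.
low, high = confidence_interval(0.90, 10_000)
print(round(low, 4), round(high, 4))  # 0.8941 0.9059
```

As usual for a proportion, the interval is widest when p is near 50% and narrows as p approaches 0% or 100%, or as the sample grows.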
Results show that 58% of the Census Quality Survey (CQS) questions had an agreement rate greater than 90%, and 68% had an agreement rate greater than 85%. In general, agreement rates are comparable with those seen in the 2011 CQS. In 2011, 60.5% of questions had rates greater than 90%, and 76.3% had rates greater than 85%.
The highest agreement rate overall in 2021 was for sex, at 99.3%. The lowest was for national identity, at 59.2%.
In terms of individual responses, the average CQS respondent provided comparable answers for 24 questions (depending on routing and item non-response) and agreed with their linked census responses 89.7% of the time.
Questions related to employment and economic activity had lower agreement rates, on average, than other questions. At the time of the census, it is likely that disruption of and changes to working arrangements because of the coronavirus (COVID-19) pandemic affected both census response accuracy and CQS recall. For more information about the impact of the coronavirus pandemic and furlough on economic variables, see our Labour market quality information for Census 2021 methodology.
For a full list of CQS agreement rates, see our Census Quality Survey agreement rates, England and Wales: Census 2021 dataset.
We asked three additional questions in Census 2021 that had not appeared on a UK census before. These were about UK armed forces veterans, sexual orientation, and gender identity. Findings from CQS analysis may be used to assist with review or further development of these questions. The harmonised design of the gender identity question is currently under review. See the Government Analysis Function's Review of gender identity data harmonised standard for more information.
UK armed forces veterans
We asked respondents if they had previously served in the UK armed forces. Initial agreement rates for this question were calculated using only two categories: yes or no. The agreement rate was 98.5%.
Though not explicitly stated in the instructions, the census allowed multiple selections on this question (both "yes, the regulars" and "yes, the reserves"). The CQS did not allow multiple selections. Excluding a small number of CQS respondents who selected both of these on the census allowed us to provide a more detailed agreement rate with three distinct categories: "yes, the regulars", "yes, the reserves", or "no" (neither). The overall agreement rate in this scenario was barely changed, at 98.3%.
Sexual orientation
We asked respondents if they identify as heterosexual ("straight"), gay or lesbian, bisexual, or some other sexual orientation. If respondents answered "other", they could also write in a description. The CQS agreement rate is for the listed options only and did not require exact matches of write-ins for the "other" category. This was a voluntary question on the census, and we only asked respondents aged 16 years and over. The overall agreement rate for sexual orientation was high, at 98.3%.
Gender identity
We asked respondents if the gender they identify with is the same as the sex they were registered as at birth. People answering "no" therefore indicate that they are trans. If respondents answered "no", they could write in their gender identity. The CQS only measures agreement on the two options, "yes" or "no", and did not require an exact match on stated gender identity when respondents selected "no". This was a voluntary question on the census, and we only asked respondents aged 16 years and over. Overall agreement rates for gender identity were high, at 99.5%.
Impact of proxy responses, editing, and imputation
The census form allows residents to answer on behalf of other people in their household, who might not be available or able to answer for themselves. These are called proxy responses.
When answers to individual census questions are missing, we use donor-based item imputation to provide a value. When census responses to multiple questions are flagged as inconsistent by edit rules, they are adjusted to resolve the inconsistency.
Unlike census data, the CQS data are not subject to edit and imputation. Missing values are not imputed, and any inconsistent responses are not flagged or changed. Proxy responses are not collected on the CQS, except in the case of children aged under 16 years.
This means that the comparison between CQS and census data sometimes involves comparing a value provided directly by the respondent with one provided on their behalf, or with a value that has been imputed.
We choose to include comparisons with responses affected by these factors as part of the CQS's overall measures of accuracy, because they do have some limited effect on the overall accuracy of data for each question.
Where investigation of a low agreement rate found that proxy responses or editing and imputation seemed unusually impactful, this is noted in the detailed discussion for that question.
A proxy response is when someone else in the household answers on their housemate or family member's behalf. Naturally, this only affects resident-level questions rather than household characteristic questions.
Overall, the CQS data suggest that proxy census responses were usually not much less accurate than non-proxy responses. In other words, the fraction of disagreements involving proxy responses was usually proportionate with the fraction of responses received by proxy.
Among the final linked CQS response data, around 30% of the census responses for each resident-level question were made by proxy. The average fraction of disagreements (pre-weighting) involving proxy responses was also close to 30%. This, in turn, is also close to the overall fraction of census responses that were delivered by proxy.
As might be expected, the questions where proxy responses disagree with CQS responses most often are subjective ones, such as general health.
Editing and imputation
Respondents occasionally submit logically impossible or highly improbable combinations of responses to a series of census questions. For example, someone might claim they travel by train to work from home. Edit rules define combinations that are not allowed. These values are then adjusted by imputation. Also, when respondents do not answer individual questions, we use donor-based item imputation to fill in missing values.
It is important to note that the editing and imputation strategy of the census is focused on ensuring the overall quality of data at aggregate population levels. This means that editing and imputation should not be expected to produce perfect results at a record level, because record-level accuracy is not a requirement for any of the data's planned uses. More information is available in our Item editing and imputation process for Census 2021, England and Wales article.
Among linked CQS data, imputation of census values caused by non-response had a much greater impact on agreement rates than imputation caused by inconsistent census responses. On average, less than 2% of values for each question are affected by editing and imputation. As might be expected, imputed census values disagree with CQS responses more often than non-imputed values. Disagreement between imputed census values and CQS responses is also higher on questions where there are many possible responses, such as marital status.
Around 5% of disagreements on resident-level questions involved edited and imputed census values. However, there are only six resident-level questions where editing and imputation seem to account for more than 10% of all disagreements, and four of these have overall agreement rates higher than 95%.
There are two household variables and 15 resident variables with agreement rates less than or equal to 90%. We performed further analysis on these variables to investigate why the agreement rates are relatively low.
Type of central heating
This question applied to all households. There were 10 response options and respondents could select more than one option. Where multiple options were chosen, these have been simplified into two categories: "two or more types of central heating (not including renewable energy)" or "two or more types of central heating (including renewable energy)". This applies to both the census outputs and the CQS comparison and gives 12 possible response categories in total.
The most common response on the census was "mains gas only" (74.0% of responding households with usual residents). Only 9.1% of responding households had two or more central heating types in the census.
The weighted agreement rate for this question was 79.9%, lower than the 2011 CQS agreement rate of 90.2%.
Of all CQS household responses that disagreed, 55.0% are from households that state "mains gas only" on the census, then "two or more types of central heating (not including renewable energy)" on the CQS or vice versa. In other words, the CQS results imply that "mains gas only" households are both understated and overstated, so at an aggregate level these effects largely cancel each other out.
Response categories in 2011 were slightly different, but the most common disagreements on the 2011 CQS were between the closest equivalents, "gas" and "two or more".
In total, 67% of all disagreements involve households that responded on the census as having "mains gas only" but any different answer on the CQS, or households that responded "two or more types of central heating (not including renewable energy)" on the census but not CQS.
Landlord
The household section of the Census 2021 questionnaire asked people who did not own their own home (outright or with a mortgage or loan) who their landlord was, choosing from one of six options. In the census, 37.4% of responding people in households do not own their own home. Of those who did not own their own home, the most common scenario is where the landlord is a "housing association, housing co-operative, charitable trust, registered social landlord".
The weighted agreement rate for this question was 88.0%. The agreement rate in 2011 was very similar, at 87.6%.
The most common disagreement was where respondents stated that they rent from a "council/local authority" on one survey, but on the other survey they say they rent from a "housing association, housing co-operative, charitable trust, registered social landlord". These are the first two response options on the questionnaire. This accounted for 45.1% of all disagreements. In the past, Census 2011 disagreements were also largely "council/local authority" versus "housing association (etc)". As the differences between CQS and census for these categories are in both directions, the net impact is low at national level.
The disagreements may simply reflect uncertainty of the respondents, but in 2011 it was also suggested that a then-recent trend of ownership of social housing shifting from councils to other social landlords was a contributory factor. The issue of respondent error on this question in the 2021 Census is noted in our Housing quality information for Census 2021 methodology.
National identity
We asked respondents whether they identified as English, Welsh, Scottish, Northern Irish, British, or any other (that is, non-UK) national identity. Respondents could select as many options as they wanted. This means that there are a vast number of possible combinations, though the majority of people identify with one UK national identity and/or the British identity.
For the purposes of agreement rates, all non-UK "other" national identities were treated as one group; we did not code write-ins for specific other countries. Only around 7% of CQS respondents had national identities that included a non-UK nation.
The agreement rate for this question was 59.2%, the lowest of any question on the survey. This seems to be driven almost entirely by respondents disagreeing on whether they identify as British, English, or both British and English. Three-quarters of disagreements in the CQS data are among and between these three categories.
The agreement rate for this question in 2011 was similar to 2021, at 60.4%. The 2011 CQS report also found that the vast majority of disagreements were among "British", "English", and both. The design of this question was slightly changed for 2021, with the "British" box moved to the top of the list. This affected the distribution of responses, as discussed in our National identity, England and Wales: Census 2021 topic release, but did not affect CQS agreement.
The weighted agreement rate for people who answered (just) "other" on the census was 83.7%. Only around 9% of all disagreements seen in the data involved people with a non-UK national identity as part of their identity.
It follows that the agreement rate improves substantially, to 93.1%, when generalising to just three groups – "any individual or combination of UK national identities", "any UK plus 'other'" and "(just) other". In this case, most disagreement occurs when people identified with just UK national identities on the census, but UK plus another nationality on the CQS.
Religion
Census respondents were asked if they were Christian (any denomination), Buddhist, Hindu, Jewish, Muslim, or Sikh. There was also a write-in field for any other religion, and a response option for "no religion". This question was voluntary, and missing values were not imputed.
The agreement rate for this question was 90.0%, which is close to the 2011 agreement rate of 90.4%. The vast majority of disagreements were where respondents said they were Christian on the census but "no religion" on the CQS, or vice versa. These two options were also the most common response categories on the census. There were approximately as many disagreements between Christian on the census and "no religion" on the CQS as the reverse, "no religion" on the census and Christian on the CQS.
These disagreements between Christianity and "no religion" may be the result of respondents varying in their perception of their own religion. People may only consider themselves Christian if they are actively practising at the time, or they may simply consider it to be a broader part of their identity that varies in perceived importance.
Respondents occasionally provided additional write-in information about their denomination on either the census or CQS, but not the other. Combining all Christian denominations into one category and combining all write-ins expressing atheism into the "no religion" category increases the agreement rate slightly to 91.2%.
We did not detect a large difference between the overall agreement rate and the agreement rate for either children or proxy responses – both cases where the subjectivity of the question might be expected to affect agreement rates.
Agreement rates for people selecting religions with dedicated response options, other than Christian, on the census were generally above or around 90%. However, there were only 200 or fewer CQS respondents in each of these groups, which increases the width of confidence intervals and limits more detailed analysis.
Welsh language skills
We asked respondents if they could understand, speak, read, or write Welsh. Respondents could report on each skill separately or select "none of the above" to indicate that they had no Welsh skills. This census question is only asked in Wales, and the CQS sample size was around 2,255 people. This is higher than the 2011 CQS sample size for this question, which was 1,230.
The weighted agreement rate in 2021 was 76.6%. The 2011 agreement rate was 75.3%, though the comparatively low sample size prevented detailed analysis. The most common disagreements in 2021 were where people said they had no skills on the census, but then said they could (only) understand Welsh on the CQS. More broadly, the vast majority of disagreements – over 80% – involve respondents claiming a greater number of skills on the CQS than census.
Around half of all disagreements are cases where the number of skills reported on the CQS is only one more or less than on the census. Large shifts in reported skills were comparatively uncommon. Of people who claimed they had all skills on the census, 90.3% agreed on the CQS, and almost none of those reported "none of the above" on the CQS. Of people who claimed no skills on the census, 83.1% agreed that they had no skills on the CQS. However, around 11% of all disagreements overall were between "none" on the census and "all" on the CQS.
Setting aside all other skills and only considering speaking ability, the weighted agreement rate for people who said on the census that they could speak Welsh was 94.9%. Few people said they could speak Welsh on the census, then disagreed on the CQS. However, there were over 10 times as many people in the overall sample who disagreed by reporting the opposite: that they could not speak Welsh on the census but then that they could speak the language on the CQS. This follows the general pattern of respondents reporting more skills on CQS than on census.
Overall agreement rates improve slightly when responses about all the separate skills are grouped. The agreement rate with three categories – "all skills", "some skills", or "no skills" – is 79.4%. The agreement rate with two categories – "any skills" or "no skills" – is 85.5%.
As with many census questions, this question asks respondents to subjectively assess their own situation. It is also possible that younger children, and their parents answering for them, may have difficulty with this question as their language skills are still developing. However, there are not enough Welsh-speaking children in the CQS sample to investigate this further.
General health
Respondents were asked to assess their general health on a five-point scale: "very good", "good", "fair", "bad", or "very bad". The overall agreement rate for this question was 66.6%, which is slightly lower than the 2011 Census agreement rate of 68.2%.
Nearly half of all disagreements in the linked CQS data are among people who answered "good" on the census and "very good" on the CQS, or vice versa. However, these two categories are also the most common responses, accounting for more than 80% of all census responses between them.
As this is a subjective question, and the majority of people generally consider themselves healthy, it seems plausible that these people might vary between "good" and "very good" on different days. They therefore might struggle to recall exactly how they felt on Census Day.
Grouping responses to a three-point scale of "good or very good", "fair", and "bad or very bad" increases the agreement rate to 86.1%. In 2011, the three-category agreement rate was comparable at 87.5%. Only 7% of disagreements in the grouped 2021 data are between the "good" and "bad" groups. This highlights that the majority of disagreements overall are between adjacent categories on the scale, which is not surprising for a subjective question.
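The effect of this kind of grouping can be sketched as follows. The mapping matches the three-point scale described above, but the data are a toy example, not the CQS microdata, and weights are omitted for simplicity.

```python
# Illustrative grouping of five-point general health answers to the
# three-point scale before comparing census and CQS responses, so that
# adjacent "good"/"very good" disagreements count as agreement.
GROUPS = {
    "very good": "good or very good",
    "good": "good or very good",
    "fair": "fair",
    "bad": "bad or very bad",
    "very bad": "bad or very bad",
}

def grouped_agreement(pairs):
    """pairs: (census_answer, cqs_answer) tuples with valid answers."""
    matches = sum(GROUPS[c] == GROUPS[q] for c, q in pairs)
    return matches / len(pairs)

pairs = [("good", "very good"), ("fair", "fair"), ("good", "bad")]
print(round(grouped_agreement(pairs), 3))  # 0.667
```

In the toy data, the "good"/"very good" pair that would disagree on the five-point scale agrees once grouped, mirroring how grouping raises the published rate from 66.6% to 86.1%.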
There is no strong trend in the direction of disagreements up or down the scale. In other words, approximately the same volume of people reported better health on the CQS than the census as reported worse health.
People who disagree on the general health question are more likely to disagree on the disability question, and vice versa.
Disability
We asked people whether they have any physical or mental health conditions or illnesses lasting or expected to last 12 months or more. If they answered yes, we then asked them whether their disability reduced their ability to carry out day-to-day activities "a lot", "a little", or "not at all".
The design of this question allows respondents to report a long-term condition that does not impact their ability to carry out daily activities. By the definition in the Equality Act (2010), people in this situation are not considered disabled. There are therefore four response categories for this question: "yes, disabled and limited a lot", "yes, disabled and limited a little", "not disabled, but with a non-limiting long-term condition", and "not disabled, with no condition".
The 2021 agreement rate for this question is 78.8%. The most common disagreements were where respondents said "yes" to the first part of the question followed by either "limited a little" or "not limited at all" on the census, but then simply "no" on the CQS.
Among respondents who disagreed, 57.2% reported lower impact from long-term conditions on the CQS than the census (for example, from "limited a lot" to "limited a little", or from "yes, but not limited at all" to simply "no"). The majority of disagreements in 2011 were also in this direction.
Notably, among 2021 CQS respondents who answered "yes, but not limited at all" on the census, a larger fraction disagreed on the CQS by simply answering "no" than agreed with "yes, but not limited at all". This, and the pattern of common disagreements previously discussed, might suggest a trend of respondents downplaying the impact of their conditions on the CQS. For example, there may be a mode effect when reporting this information to a live interviewer rather than on a written form.
As is often the case when response options are on a scale, the majority of disagreements – 62.4% – were cases where the census and CQS responses were adjacent categories on the scale.
Slightly over 1% of linked census responses for this question were affected by editing and imputation. The low frequency means that we could not compare agreement rates for this subgroup, but also means the overall impact will have been low. We did detect a statistically significant agreement rate difference between respondents who answered the census online (79.7% agreement) and on paper (71.1%).
Following the definition of the Equality Act (2010), the four response categories can be collapsed to three by grouping the two "not disabled" statuses. The 2021 agreement rate in this scenario is 86.2%. This enables comparison with the 2011 CQS, when the three options were "yes, limited a lot", "yes, limited a little", and "no", and the agreement rate was similar at 88.9%.
Further simplifying to two categories – "yes, disabled (and limited a little or a lot)" and "not disabled" – increases the agreement rate to 89.3%. In 2011, the two-category agreement rate was 91.9%.
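The collapses described above amount to recoding both responses through a mapping before comparing. A sketch, using shortened, hypothetical stand-ins for the category labels:

```python
# Sketch of the category collapses described above; labels are shortened
# stand-ins for the four census/CQS response categories.
FOUR = [
    "disabled, limited a lot",
    "disabled, limited a little",
    "not disabled, non-limiting condition",
    "not disabled, no condition",
]

# Three categories: group the two "not disabled" statuses, following the
# Equality Act (2010) definition.
TO_THREE = {
    "disabled, limited a lot": "limited a lot",
    "disabled, limited a little": "limited a little",
    "not disabled, non-limiting condition": "not disabled",
    "not disabled, no condition": "not disabled",
}

# Two categories: disabled versus not disabled.
TO_TWO = {c: ("disabled" if c.startswith("disabled") else "not disabled") for c in FOUR}

def collapsed_agreement_rate(pairs, mapping):
    """Recode both the census and CQS answer through the mapping, then compare."""
    return sum(mapping[c] == mapping[s] for c, s in pairs) / len(pairs)

# Hypothetical linked (census, CQS) responses for two people.
pairs = [
    ("not disabled, non-limiting condition", "not disabled, no condition"),
    ("disabled, limited a lot", "disabled, limited a little"),
]
print(collapsed_agreement_rate(pairs, TO_THREE))  # 0.5: the "not disabled" pair now agrees
print(collapsed_agreement_rate(pairs, TO_TWO))    # 1.0: both pairs now agree
```

Each coarser mapping merges more categories, which is why the agreement rate rises from four categories to three to two.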
Some disabilities are progressive, and some fluctuate in their severity and impact over time. These changes may make it harder for respondents to recall exactly how they would have rated their condition on Census Day. Also, people's subjective evaluation of their long-term health conditions, disability status, and what it means to be healthy may have been affected by the context of the coronavirus pandemic, as this may have prompted people to reassess their circumstances.
The census asks respondents whether they have achieved any of a range of recognised qualifications, including GCSEs, apprenticeships, A levels, degrees, and foreign equivalents. Many analytical uses of these data first summarise them in terms of the highest level of qualification the respondent has received. Therefore, CQS agreement rates for these questions are also measured in these terms, on a scale from "no qualifications" through standardised Levels 1 to 4, with apprenticeships placed between Levels 2 and 3.
The overall agreement rate for this question was 73.1%. This is an improvement on the 2011 rate of 67.6%. Unlike many other questions, the distribution of observed disagreements in 2021 is not dominated by frequent disagreements between a specific pair of categories. Among disagreeing respondents, 10% reported Level 1 (one to four GCSEs or equivalent) on the census but Level 2 (five or more GCSEs or equivalent) on the CQS. This was the most common disagreement in both 2021 and 2011.
Overall, around 60% of disagreements involved reporting a higher qualification level on the CQS than the census. Around half of all disagreements were between adjacent levels on the scale. For example, the weighted agreement rate for respondents with a Level 4 qualification (degree or higher) on the census was 92.3%, and the most common disagreement among this group was when they then reported only Level 3 (two or more A levels or equivalent) on the CQS.
We found two subgroups of respondents with notably lower agreement rates than the overall rate. The first was people who responded on paper questionnaires, who had an agreement rate of only 66.2%. Whereas the online form can separate each part of this complex question and provide plentiful help and guidance text, the paper form is constrained by limited space. This is likely to have made the question harder for paper respondents to answer.
The second subgroup with a low agreement rate was people whose highest qualification was an apprenticeship. Although only around 6% of CQS respondents reported an apprenticeship as their highest qualification on the census, 22% of all disagreements involved apprenticeships on either the census or the CQS. This may be because apprenticeships sit somewhat apart from the main academic sequence of qualifications in England and Wales, making them harder for respondents and interviewers alike to recognise and classify.
Employment last week
The census asks, "in the last seven days, were you doing any of the following" and gives six tick-box response options. This question is used in combination with others to produce "activity last week". Guidance for people on furlough because of the coronavirus pandemic said that they should identify themselves as temporarily away from work. However, we are unable to determine what fraction of people followed this guidance.
The most common census response among responding usually resident people aged 16 years and over in households was "working as an employee" (42.3%), followed by "none of the above" (41.2%), then "self-employed" (8.3%). Other less common responses were: "temporarily away from work ill, on holiday, or laid off", "on maternity or paternity leave" and "doing any other kind of paid work".
The weighted agreement for the question was 88.4%. The agreement rate on the 2011 CQS was 91.2%, though the set of response options has changed between censuses.
Where there was not exact agreement between the census and CQS responses, the most common disagreement was among people who said "none of the above" (not employed) on the census but then either "working as an employee" or "self-employed" on the CQS (22.3% of disagreements). Only 9.1% of disagreeing respondents said "none of the above" on the CQS but "working as an employee" or "self-employed" on the census.
Those who said "temporarily away from work ill, on holiday, or laid off" on the census but then "working as an employee" on the CQS account for a further 14.5% of disagreements.
A simplified comparison can be made by categorising responses as employed or not employed. The weighted agreement rate using this simpler categorisation is 95.4%.
Economic inactivity status
The census asked people who were not in paid work and not temporarily away from work to select one or more from five options to describe what they were doing in the last seven days. The most common census response to this question was "retired (whether receiving a pension or not)", ticked by over 9 million responding usual residents in households. The next most common response was "looking after home or family" with nearly 2.5 million people. Nearly 6% of responses involved selecting multiple response options for the question.
The weighted agreement rate for this question was 68.7%. The agreement rate in 2011 was 86.4%.
This question allows multiple selections and is therefore more prone to disagreement, as a match requires that exactly the same combination of answers is selected on the census and CQS. These differences are not necessarily indicative of a quality issue. For example, among those indicating on the census that they were retired, the largest source of disagreement was cases where both "retired" and "looking after home or family" were chosen on the CQS. These cases account for almost half of all disagreements.
Removing responses with multiple selections on CQS increases the weighted agreement to over 85%. Aggregating at a coarser level based on a prioritisation order of student, retired, long-term sick, looking after family, then "other", increases agreement rates to over 90%. In 2011, this prioritisation improved the agreement rate to 90.9%.
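The prioritisation described above can be sketched as follows; the priority order is taken from the text, but the function itself is illustrative, not the ONS implementation:

```python
# Sketch of the prioritisation described above. The order is taken from the
# text; the function itself is illustrative, not the ONS implementation.
PRIORITY = ["student", "retired", "long-term sick", "looking after family", "other"]

def prioritise(ticked):
    """Reduce a set of ticked options to the single highest-priority category."""
    for category in PRIORITY:
        if category in ticked:
            return category
    return "other"

# A respondent who ticked both "retired" and "looking after family" on the
# CQS is classified simply as "retired", so this common multi-tick response
# no longer disagrees with a census response of "retired" alone.
print(prioritise({"retired", "looking after family"}))  # retired
print(prioritise({"looking after family"}))             # looking after family
```

Because every combination of ticks collapses to a single category, multi-tick responses can no longer cause disagreements on their own, which is why this aggregation raises the agreement rate.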
Available for work
This question was aimed at people who were not in paid work, not temporarily away from work, and not on maternity or paternity leave. They were asked "if a job became available now, could you start it within two weeks?"
This question was answered by 4.5 million responding usual residents aged 16 years and over in households in the census. Note that guidance for furloughed people was that they are "temporarily away from work", so the question of availability for work does not apply to them, though the guidance may not have been followed.
The question requires the respondent to think back to Census Day and remember how they would have answered a hypothetical question at the time, so some level of disagreement could be expected. Only a tiny proportion answered the question on the census but did not respond on the CQS. The weighted agreement rate for the question was 83.5%, similar to the 2011 agreement rate of 86.2%.
A high proportion of linked census and CQS responses to this question were proxy answers (87.5%). Responses to this question were not subject to imputation, so imputation did not affect the agreement rate.
Has ever worked
This question applies to adults who were not in paid work, not temporarily away from work, and not on maternity or paternity leave. They were asked if they had ever done any paid work. Of responding usually resident people aged 16 years and over in households, 41.7% (about 19.2 million people) answered this question.
There were three response options for this question: "yes, in the last 12 months"; "yes, but not in the last 12 months"; and "no". Note that in the previous 2011 Census this question only had "yes" and "no" as options, with a write-in box for the year last worked. In 2021, just over 75% of responding usually resident people aged 16 years and over in households said they had ever worked.
The weighted agreement rate for this question was 80.4%, which is lower than the 2011 rate of 94.4% (when there were only two options). Of those residents who answered differently on the census and CQS, the most common disagreement was between "no, have never worked" and "yes (worked), but not in the last 12 months", in either direction. The next most common disagreement was between the two "yes" options.
Supervisor or manager status
We asked respondents whether they supervised or oversaw the work of other employees on a day-to-day basis. There were only two response options, "yes" and "no". If respondents had previously been employed but were not currently, the question applied to their most recent employment.
The agreement rate for this question was 85.4%. The 2011 agreement rate was very similar, at 86.2%. Around 60% of disagreements were where respondents said they were not managers on the census, but said that they were managers on the CQS, and the remaining 40% were the opposite.
Previous census quality surveys and question testing have found that respondents sometimes struggle with what "counts" as supervising. For example, whether they must be formally part of a management chain, whether the duty must be part of their job specification, and whether supervising volunteers counts.
As with many employment questions, there may be some recollection effect when respondents answer questions about past jobs, as they may have difficulty remembering details. The agreement rate for people whose most recent employment was more than 12 months ago was 83.9%, whereas the agreement rate for people actively employed at the time of the census was 86.9%. Confidence intervals indicate that this difference is statistically significant.
We did not find a significant mode difference between paper and online census responses.
Hours worked
We asked respondents how many hours per week they usually worked in their main job, including paid and unpaid overtime. The response options offered four bands: "0-15 hours", "16-30 hours", "31-48 hours", or "49 or more hours". This is the first of a section of employment-related questions that are only asked to people who are currently working (or temporarily away from work), in contrast to the preceding group of questions, which ask about the respondent's most recent employment.
The agreement rate for this question was 80.9%, which is slightly lower than the 2011 agreement rate of 83.9%.
Around 60% of people who disagreed reported more hours on the CQS than on the census. The vast majority of disagreements – around 86% – were between adjacent categories. As is often the case when response options lie on a scale, the least common disagreements were between the ends of the scale – very few respondents reported "0-15 hours" on one survey and "49 or more hours" on the other.
This question will have been harder to answer for people whose working hours differ from week to week, for example as a result of variable-hour contracts, variable amounts of overtime, or workflows based on a series of short-term, client-based tasks. For example, the agreement rate among self-employed respondents is much lower than the overall rate, at 65.3%.
Respondents' working hours may also have changed in the time between the census and CQS, making it harder to recall past circumstances. This is especially notable given the widespread disruption of the coronavirus pandemic.
Transport to work
We asked respondents how they usually travelled to work (or if they worked mainly from home). When respondents travelled by more than one method, they were instructed to select the method for the longest part of their journey, by distance, rather than selecting multiple responses.
This question was only asked to people who were currently working (or temporarily away from work). People who were on furlough because of the pandemic were advised to answer based on how they would have travelled if they were not on furlough, but we do not have sufficient data to identify how frequently this advice was followed.
The 2021 agreement rate for this question was 80.7%, compared with 85.5% in 2011. In both 2021 and 2011, the most common disagreements were where respondents' census records said they worked from home, but their CQS response said that they drove a car (or van), or vice versa. These were also the most common responses on the 2021 Census, with around 45% of workers driving and 31% working from home.
Along with workplace type, this question had the highest fraction of responses affected by editing and imputation among all questions studied. The position of these questions near the end of the census questionnaire may result in greater missingness because of respondent attrition. Unlike most questions, however, the majority of imputed values in this case replaced inconsistent responses rather than filling missing values.
In the case of method of travel and its companion question, workplace type, this editing mostly affects respondents who answered that they were working from home on one of these questions, but not the other.
This same inconsistent response pattern is also visible in the CQS data. Because CQS responses are not edited, we can see that this is the case for around 4% of CQS records.
Editing and imputation were especially impactful in this case because of the routing conditions and close interrelationship between economic activity, method of travel, workplace type, and workplace address. In some cases, more than one of these variables would require imputation in order to produce a consistent set of answers.
The complicated situation surrounding the coronavirus pandemic and the disruption it caused to usual working patterns will have made it harder for respondents to answer accurately and consistently across these questions.
For example, they may have started by answering how they would usually have travelled before the pandemic, then responded to the workplace type question by saying that they actually currently work from home. This would then trigger editing and imputation. Or, if respondents were temporarily working from home on Census Day but their circumstances had changed again by the time of the CQS, they might have trouble remembering the exact situation at the time.
Though only around 5% of CQS responses for this question had linked census data that had been affected by editing and imputation, these records accounted for one-fifth of all disagreements on this question. Responses in this group were more likely to agree with their original, pre-editing census response than the edited value, even if this meant repeating their inconsistent response on the CQS.
Workplace type
This question asked whether the respondent works from a workplace (or reports to a depot), works from home, works at an offshore installation, or works from no fixed place. It was only asked to people who were currently working (or temporarily away from work).
The agreement rate for this question was 81.5%. The most common disagreements were where census records said that the respondent worked from home, but their CQS response said that they worked from a workplace or depot. In 2011, this question was part of the workplace address question and the 2011 CQS did not produce separate agreement rates for it.
This question is closely related to the method of travel question, as edit rules required consistency when respondents claimed they worked from home on one question but not the other.
Around 7% of CQS responses to the workplace type question were linked to census responses that had been affected by editing and imputation. Disagreements among this group accounted for around 31% of all disagreements on this question. As with method of travel, CQS respondents were substantially more likely to agree with their original census response than the imputed value.
Workplace address
Residents who travelled to work at a workplace or depot were asked for the address of that location, including postcode. For the purposes of the CQS, we measured agreement in terms of postcode sector only, which excludes the last two letters of the postcode. For example, the sector of the imaginary postcode "AB1 2CD" is "AB1 2", and the sector of "X9 8YZ" is "X9 8". There are around 12,500 distinct postcode sectors in the UK as a whole. For more information, see our page about Postal geographies.
The agreement rate for this question was 77.3%. The agreement rate for this question in 2011, also at a postcode sector level, was 82.2%.
When comparing postcode areas rather than sectors, the 2021 agreement rate increases to 89.4%. Postcode area is a higher-level geography than postcode sector: it is encoded in the one or two letters at the start of the first block of a postcode (referred to as the "outward code"). For example, the postcode area of "AB1 2CD" is "AB".
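A sketch of how sector and area might be derived from a well-formed postcode (assumed helpers for illustration, not the method used in CQS processing):

```python
import re

# Assumed helpers (not ONS code) for deriving the two geography levels from
# a well-formed postcode such as "AB1 2CD".
def postcode_sector(postcode):
    """Outward code plus the first digit of the inward code: 'AB1 2CD' -> 'AB1 2'."""
    outward, inward = postcode.split()
    return f"{outward} {inward[0]}"

def postcode_area(postcode):
    """The leading letters of the outward code: 'AB1 2CD' -> 'AB'."""
    return re.match(r"[A-Z]+", postcode).group()

print(postcode_sector("AB1 2CD"))  # AB1 2
print(postcode_sector("X9 8YZ"))   # X9 8
print(postcode_area("AB1 2CD"))    # AB
print(postcode_area("X9 8YZ"))     # X
```

Because every sector belongs to exactly one area, recoding both responses to area level can only merge sectors together, which is why the area-level agreement rate is higher.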
This question had the lowest usable sample size among CQS questions, at around 885 respondents. This means that its confidence interval is also wider than that of other questions. This is primarily caused by the routing. The majority of CQS respondents did not answer this question because they were children, retired, or working from home at the time of the census.
Because of the low sample size, we were not able to compare agreement rates for proxy and non-proxy responses. We were also not able to compare online and paper census responses. However, the quality impact of infrequent scanning errors from handwritten responses is likely to be low, as the majority of responses in 2021 were online.
Inspecting observed disagreements suggests that several may in fact be the result of mishearing and transcription errors on the CQS, which in 2021 was conducted by telephone, and are therefore not necessarily indicative of an issue with census quality. For example, "LS" could be misheard as "LF", and "EN" could be misheard as "CN". We cannot estimate the size of this effect on the CQS results.
Between this mode-related risk and the low sample size, the CQS results for this question will be less accurate than for other questions. However, as this is the only CQS question where agreement rates depend upon the transcription of detailed responses that could be misheard, this limitation does not affect other results.
Counts of persons and households are rounded to the nearest five. Percentages are rounded to one decimal place.
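For illustration, the rounding conventions above could be applied as follows (a sketch, not ONS code):

```python
# Sketch of the rounding conventions stated above. Integer counts can never
# fall exactly halfway between two multiples of five, so tie-breaking does
# not arise for counts; for percentages, Python's round() breaks ties to
# even, and the tie convention used by the ONS is not stated in the source.
def round_count(n):
    """Round a count of persons or households to the nearest five."""
    return 5 * round(n / 5)

def round_percentage(p):
    """Round a percentage to one decimal place."""
    return round(p, 1)

print(round_count(16043))       # 16045
print(round_count(11938))       # 11940
print(round_percentage(73.14))  # 73.1
```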
Data linkage and usable sample size
The Census Quality Survey (CQS) received responses from 8,725 households containing 16,045 known residents.
Households in the CQS were linked to census households using a household ID that is common to both. A minority of cases required clerical matching where the household ID had changed in the census after the sample was selected. Individuals within households were then matched using a sequence of match keys.
The first match key matched on exact full name and month and year of birth, with progressively looser criteria being applied to deal with cases that are more challenging to match. By the end of this process, there were 430 unmatched residents. Clerical matching was then used to resolve a further 140 of these cases.
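The cascade of match keys can be sketched as follows. Only the first key (exact full name plus month and year of birth) is taken from the text; the looser keys and the field names are hypothetical:

```python
# Illustrative sketch of a match-key cascade. Only the first key (exact full
# name plus month and year of birth) is stated in the text; the looser keys
# and the field names are hypothetical.
MATCH_KEYS = [
    ("full_name", "birth_month", "birth_year"),  # strictest key, tried first
    ("surname", "birth_year"),                   # progressively looser keys
    ("surname",),
]

def link(cqs_people, census_people):
    """Match CQS residents to census residents, strictest key first."""
    matched, remaining = {}, list(cqs_people)
    for fields in MATCH_KEYS:
        def make_key(person):
            return tuple(person[f] for f in fields)
        index = {}
        for person in census_people:
            index.setdefault(make_key(person), person)
        still_unmatched = []
        for person in remaining:
            hit = index.get(make_key(person))
            if hit is not None:
                matched[person["id"]] = hit["id"]
            else:
                still_unmatched.append(person)
        remaining = still_unmatched
    # A real implementation would also remove already-matched census records;
    # residents still unmatched here would go to clerical matching.
    return matched, remaining

cqs = [
    {"id": 1, "full_name": "Ann Lee", "surname": "Lee", "birth_month": 4, "birth_year": 1980},
    {"id": 2, "full_name": "Jo Kim", "surname": "Kim", "birth_month": 5, "birth_year": 1990},
]
census = [
    {"id": "c1", "full_name": "Ann Lee", "surname": "Lee", "birth_month": 4, "birth_year": 1980},
    {"id": "c2", "full_name": "Joanne Kim", "surname": "Kim", "birth_month": 5, "birth_year": 1990},
]
matched, unmatched = link(cqs, census)
print(matched)    # person 1 matches on the exact key; person 2 on a looser key
print(unmatched)  # []
```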
Matches suspected of being incorrect were identified by looking at disagreement between sex, age, marital status and highest qualification, and four residents were excluded as potential mismatches. The outcome of the data linkage was that 15,730 people identified by the CQS were linked to census respondents in 8,645 households.
The final agreement rates were not calculated for all of the 15,730 identified residents because of non-response and the application of filtering rules. We did not receive full interviews for every resident of the responding households. Refusals, non-contact, or people moving away from the address decreased the number of eligible respondents by 2,930.
As the main population base for the census is the usually resident population, 30 respondents were removed for indicating on the CQS that they were short-term residents or were students at a non-term-time address on Census Day. At this stage, we also removed a further 10 respondents whose linked census data indicated that they were not usual residents.
The census data includes responses given by proxy. To prevent additional sources of error being introduced into the calculation of agreement rates, we did not use CQS data that was itself reported by proxy, except for children aged under 16 years. This decreased the number of eligible respondents by a further 825.
After linkage and filtering, the number of respondents used for calculating agreement rates was 11,940 respondents from 8,600 households.
The usable sample sizes for individual questions will vary from this total. This is because we can only compare valid responses on both the census and CQS, and therefore exclude records with missing data on either survey for a given question.
Also, not all questions are asked to all respondents. Many questions are only asked to adults aged 16 years and over, and some questions are only asked to respondents who gave a particular answer to previous questions.
Office for National Statistics (ONS), released 7 March 2023, ONS website, methodology, Census Quality Survey for Census 2021 in England and Wales
Contact details for this Methodology
Telephone: +44 1392 444972