Table of contents
- Overview of the Panel Design Test
- Background to the Crime Survey for England and Wales Panel Design Test
- Current and experimental survey design
- Response rates, agreement to recontact and attrition
- Effective sample profile
- Using weights
- Experimental results and main estimates of crime
- Potential sample bias
- Telescoping
- Panel-design and modal effects
- Panel Design Test outcomes
- Data sources and quality
- Cite this working paper
1. Overview of the Panel Design Test
Between October 2022 and April 2025, the Office for National Statistics (ONS) conducted a large-scale test of a prototype panel design for the Crime Survey for England and Wales (CSEW). This test was an important part of our ongoing research into the redesign of the CSEW. For a comprehensive summary of this programme of work, please see our Consultation response: Redesign of the CSEW report (PDF, 558KB).
This working paper summarises the research undertaken, and evaluates the usefulness of the design and its ability to provide estimates of crime. It also considers whether the panel design could be used to effectively double the sample size of our main crime estimates.
These research outputs are not official statistics relating to crime. This working paper describes experimental research into a longitudinal, panel-designed victimisation survey conducted by the Office for National Statistics (ONS). The information and research presented in this working paper are not an alternative to official cross-sectional Crime Survey for England and Wales (CSEW) estimates and must not be reproduced without this warning.
2. Background to the Crime Survey for England and Wales Panel Design Test
The Crime Survey for England and Wales (CSEW) has been the primary source of estimates of crime since its introduction in 1981. It is widely regarded as the most reliable source of information about trends in crime experienced by the population resident in households in England and Wales.
The core content, methodology and operation of the CSEW have remained largely consistent, enabling changes in crime to be reliably measured over the entire time series. The only exception was between April 2020 and September 2021, when substantial changes to the survey's operation were implemented because of the coronavirus (COVID-19) pandemic. These changes included using telephone interviewing, rather than face-to-face interviewing, and making better use of respondents who had already participated in the survey (a quasi-panel design).
The survey returned to the more traditional approach in October 2021. However, it was clear that the COVID-19 pandemic had highlighted a need to make the survey more resilient to future shocks and to account for long-term changes in the way people interact with surveys. This was coupled with increasing user demand for the survey to reflect the changing nature of crime and to produce more detailed and subnational crime estimates. These factors meant that there was a clear need to explore the feasibility of increasing the CSEW sample size in a cost-effective way, which would require changes to the survey's existing methodological approach.
As a result, we developed a programme of work to:
investigate the effect of implementing a longitudinal panel design, with respondents re-interviewed annually over several waves, instead of the existing cross-sectional survey design
experiment with other interview modes, by using telephone interviews and, following further development, a fully multi-modal survey
review the CSEW offence classification system
design improvements to the Children's Crime Survey for England and Wales
3. Current and experimental survey design
The Crime Survey for England and Wales currently operates as a cross-sectional survey, collecting data from the private household population at a single point in time. Interviews are conducted in respondents' own homes, with one adult selected at random from one household at the sampled address. These interviews follow a structured format, using Computer-Assisted Personal Interviewing (CAPI). The survey's sample size has fluctuated substantially over the years, ranging from a peak of 46,000 annual interviews in the period between 2004/2005 and 2011/2012 to the current level of approximately 32,000 annual achieved interviews.
User requirements for more granular estimates of crime at subnational levels required an increase in sample size. The existing sample (at the time) of 32,000 interviews was deemed insufficient, and the preferred solution was to roughly double it to approximately 60,000 interviews. However, simply expanding the survey by doubling fieldwork capacity was not feasible, because of the associated costs and logistical challenges of recruiting and managing a substantially larger team of interviewers. This prompted the need for an alternative survey design.
Lessons learned during the coronavirus (COVID-19) pandemic provided valuable insights. During this period, we suspended in-home interviewing and introduced a quasi-panel design, whereby respondents who had previously participated in the survey were invited to take part in additional rounds of interviews conducted by telephone. Based on these experiences, we developed a new design that retained the initial interview and sample size, but also introduced follow-up interviews. Under this approach, respondents would be invited to participate in second and third annual interviews that were conducted by telephone, rather than in person.
Since crime estimates are based on incidents occurring in the previous 12 months, this design would substantially increase the number of interviews available to produce our annual estimates, without requiring additional field interviewers or expanding the original (Wave 1) sample size.
The National Crime Victimization Survey (NCVS) in the US has operated successfully on a similar panel design since its introduction in 1973. This long-standing success suggested that adopting a comparable approach in England and Wales would be both feasible and likely to deliver reliable results.
From October 2021, respondents participating in the main face-to-face survey (Wave 1) were invited to take part in additional annual rounds conducted by telephone. Second interviews (Wave 2) began in October 2022, followed by third interviews a year later. These follow-up interviews were carried out using Computer-Assisted Telephone Interviewing (CATI).
We excluded the initial six months of data (following the resumption of face-to-face interviewing in October 2021) from the experiment, because response rates during this period were substantially lower than expected. This was largely owing to operational challenges associated with restarting fieldwork after the COVID-19 pandemic suspension, which affected sample representativeness and data quality. Excluding these data helped to ensure robust and reliable estimates: the first full year of usable data is April 2023 to March 2024, covering Waves 1 and 2. In the subsequent year (April 2024 to March 2025), we successfully collected three waves of data, providing a richer dataset for analysis.
4. Response rates, agreement to recontact and attrition
Household surveys in the UK and internationally have faced the persistent challenge of declining response rates for many years. These difficulties have been compounded by operational disruptions from the coronavirus (COVID-19) pandemic, which introduced additional complexities to data collection and processing. Consequently, concerns about data quality have intensified across the survey research community.
The Crime Survey for England and Wales (CSEW) has not been immune to these trends. CSEW response rates remain higher than those of many large-scale national surveys in the UK, at 46.0% in both the year ending (YE) March 2024 and YE March 2025. However, they have fallen substantially from pre-pandemic levels, which averaged around 70.0%.
Data gathered from the main survey were used as the first stage of an experimental panel design, which was trialled over two years to assess its feasibility. Following the initial interview from the main survey, respondents were invited to participate in subsequent waves conducted by telephone approximately one year later. The rate of respondents agreeing to be recontacted ("recontact agreement" rate) is a critical metric for determining the size of the eligible sample for these second (Wave 2) interviews. The recontact agreement rate was 74.1% in the YE March 2024 and 78.4% in the YE March 2025. A further invitation (Wave 3) was issued for respondents completing Wave 2 in 2024, and this achieved a recontact agreement rate of 94.9%.
However, there was a substantial decrease once the panel progressed into Waves 2 and 3. As a proportion of the issued sample, Wave 2 response rates were 39.9% in 2024 and 36.3% in 2025. Measured against the full Wave 1 cohort (accounting for recontact agreements and ineligible cases), these figures fall to 26.7% in 2024 and 24.0% in 2025. Wave 3 showed a similar pattern: while 63.4% of eligible contacts responded in 2025, this equates to 59.9% of those who originally took part in Wave 2.
These patterns have considerable methodological implications. The cumulative response rate, which is an important indicator of overall representativeness, fell to only 12.0% at Wave 2 in 2024 (based on initial sample addresses). Cumulative response declined further by Wave 3. This raised substantial concerns about the representativeness of the achieved sample and the potential for non-response bias.
The risks of non-representativeness and bias are amplified because attrition is unlikely to be random; some demographic and socioeconomic groups are disproportionately less likely to continue participating. As the responding sample becomes increasingly selective across waves, there is less scope to correct these imbalances through weighting. Weighting adjustments can compensate for known population characteristics, but they cannot fully correct for unobserved or behaviourally driven differences between responders and non-responders. This limits the ability to restore representativeness and increases the uncertainty around survey estimates. Analysis of these issues is covered in the following sections of this working paper.
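The compounding effect of attrition on the cumulative response rate can be illustrated with the rates quoted above. This is a rough sketch using the rounded published rates, so it will not reproduce the official cumulative figures exactly (those are based on initial sample addresses and ineligible-case adjustments).

```python
# Illustrative only: cumulative response compounds wave-on-wave, because each
# wave's response rate applies only to respondents retained from the previous
# wave. Rates below are the rounded figures quoted in this section.

def cumulative_rate(*stage_rates: float) -> float:
    """Product of per-stage rates, as a proportion of the initial sample."""
    result = 1.0
    for rate in stage_rates:
        result *= rate
    return result

# YE March 2024: Wave 1 response of 46.0%; Wave 2 achieved 26.7% of the
# Wave 1 cohort. The product is close to the 12.0% cumulative figure quoted.
w2_cumulative_2024 = cumulative_rate(0.460, 0.267)

# YE March 2025: Wave 2 achieved 24.0% of the Wave 1 cohort, and Wave 3
# retained 59.9% of Wave 2 respondents.
w3_cumulative_2025 = cumulative_rate(0.460, 0.240, 0.599)

print(f"Wave 2 cumulative, 2024: {w2_cumulative_2024:.1%}")  # 12.3%
print(f"Wave 3 cumulative, 2025: {w3_cumulative_2025:.1%}")  # 6.6%
```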
Figure 1: High agreement to recontact rates at Wave 1 did not result in high response rates at Wave 2 in year ending March 2024
Response and recontact rates by wave, year ending March 2024
Source: Crime Survey for England and Wales (CSEW) Panel Design Test from the Office for National Statistics
Those respondents who agreed were contacted and asked to participate in a second round of interviews (Figure 2).
Figure 2: High agreement to recontact rates in earlier waves did not result in higher response rates in later waves in year ending March 2025
Response and recontact rates by wave, year ending March 2025
Source: Crime Survey for England and Wales (CSEW) Panel Design Test from the Office for National Statistics
5. Effective sample profile
The effective sample profile provides an overview of the socio-demographic characteristics of respondents to the Panel Design Test. The sample profile for year ending (YE) March 2024 was very similar to that for YE March 2025. Because the YE March 2025 survey includes three waves, compared with only two in YE March 2024, this working paper focuses on YE March 2025.
This section evaluates the responding sample of the Panel Design Test, compared with the responding sample of the main Crime Survey for England and Wales (CSEW) over the same period, using the Census 2021 estimates as a benchmark. There are differences between Census 2021 and CSEW data because of differing collection methods and the considerable gap in time between Census 2021 and the survey interviews.
Comparing the unweighted sample profile to Census 2021 data allows us to assess participation and recruitment imbalances that arise as part of the sample design and during fieldwork (independently of any corrective adjustments applied through weighting). The survey design stratifies by geographical area, rather than demographic characteristics such as age or sex. The unweighted data provide a clear understanding of who actually responded and which groups are overrepresented or underrepresented in the achieved sample, before any statistical correction.
Successive waves reuse the same sample, so it was considered unnecessary to apply any of the survey's design weights. These comparisons highlight differences that may reflect the impact of survey design and data collection mode.
Sex
The sex profile of the sample shifted toward the Census 2021 distribution across waves, from 47.2% male and 52.8% female in Wave 1 to 48.6% male and 51.4% female in Wave 3, compared with 48.4% male and 51.6% female in Census 2021. However, this shift should not be taken as evidence of better representativeness, given the lower response rate and potential respondent bias.
Figure 3: Distribution of respondents by sex was broadly similar across waves when compared with Census 2021
Respondents' sex distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are unweighted.
- Percentages may not sum to 100 because of rounding.
Age
Analysis of age distribution across waves indicates a consistent pattern of underrepresentation of younger age groups and overrepresentation of older age groups, compared with Census 2021 benchmarks. These disparities become more pronounced over subsequent waves.
Younger age groups were underrepresented at Wave 1, and their proportions decreased across waves. This is broken down as follows:
Wave 1 – 5.2% of respondents were aged 16 to 24 years and 13.8% were aged 25 to 34 years
Wave 2 – 2.7% of respondents were aged 17 to 24 years and 8.5% were aged 25 to 34 years
Wave 3 – 1.5% of respondents were aged 18 to 24 years and 5.6% were aged 25 to 34 years
This is compared with 13.0% of people aged 16 to 24 years and 16.6% aged 25 to 34 years in the Census 2021 benchmarks.
Conversely, older age groups (those aged 65 to 74 years and 75 years and over) were consistently overrepresented in the sample, and their proportions increased over time. This is broken down as follows:
Wave 1 – 16.3% of respondents were aged 65 to 74 years and 15.5% were aged 75 years and over
Wave 2 – 23.5% of respondents were aged 65 to 74 years and 21.8% were aged 75 years and over
Wave 3 – 26.6% of respondents were aged 65 to 74 years and 24.6% were aged 75 years and over
This is compared with 12.2% of people aged 65 to 74 years and 10.6% aged 75 years and over in the Census 2021 benchmarks.
Figure 4: Differences in age distribution were observed across Waves 1 to 3 when compared with Census 2021
Respondents' age distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are unweighted.
- Percentages may not sum to 100 because of rounding.
Ethnicity
Respondents identifying as White became increasingly overrepresented over successive waves, compared with the Census 2021 benchmark of 81.7%. The proportion increased from 84.1% in Wave 1, to 89.8% in Wave 2, to 92.6% in Wave 3.
Respondents identifying as Black or Black British were slightly overrepresented in Wave 1 (4.8%), but their share then declined in Wave 2 (3.3%) and Wave 3 (2.3%), compared with Census 2021 (4.0%).
Figure 5: Waves 1 to 3 showed lower representation of minority ethnic groups when compared with Census 2021
Respondents' ethnicity distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are unweighted.
- Percentages may not sum to 100 because of rounding.
- The Census 2021 ethnicity figures cover the entire population, including all ages.
Tenure
Owner-occupied households became increasingly overrepresented over successive waves, compared with the Census 2021 benchmark of 62.5%. The proportion rose from 63.1% in Wave 1, to 74.2% in Wave 2, to 80.6% in Wave 3.
Those in the social rented sector in Wave 1 (17.2%) were closely aligned with the Census 2021 benchmark (17.1%). However, they became increasingly underrepresented in later waves, decreasing to 12.6% in Wave 2 and 9.6% in Wave 3.
Private renting in Wave 1 (19.7%) was also initially close to the Census 2021 benchmark (20.4%). However, this followed a similar pattern of underrepresentation, falling to 13.3% in Wave 2 and 9.8% in Wave 3.
Figure 6: Waves 1 to 3 showed higher proportions of owner-occupied households when compared with Census 2021
Respondents' tenure distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are unweighted.
- Percentages may not sum to 100 because of rounding.
6. Using weights
Traditional and experimental weighting
Traditional weighting procedure (Wave 1)
Under the long-established cross-sectional Crime Survey for England and Wales (CSEW) design, weighting is applied to correct for unequal selection probabilities and to ensure that households, individuals and victimisation incidents reflect the composition of the population by age, sex and region. The four base weight components are:
an address selection weight
an address non-response weight
a dwelling unit weight
an individual selection weight
Core household and individual base weights are derived from combinations of these components. They are scaled to a notional achieved sample size per quarter and then calibrated on a rolling 12-month basis to reproduce population totals by age, sex and region. The calibration stage also adjusts for the number of households, which ensures alignment with known population benchmarks in England and Wales.
This Wave 1 system is the standard CSEW weighting approach that is used in the traditional cross-sectional design.
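Calibration to population totals, as described above, can be sketched with iterative proportional fitting (raking). This is a minimal illustration with invented data and margins; the CSEW's production calibration uses more detailed benchmarks and specialised software.

```python
# A minimal raking (iterative proportional fitting) sketch: base weights are
# repeatedly scaled so that weighted sample totals match known population
# margins. Data and margins below are invented for illustration.
from collections import defaultdict

sample = [
    {"sex": "male",   "age": "16-34", "w": 1.0},
    {"sex": "male",   "age": "35+",   "w": 1.2},
    {"sex": "female", "age": "16-34", "w": 0.9},
    {"sex": "female", "age": "35+",   "w": 1.1},
    {"sex": "female", "age": "35+",   "w": 1.0},
    {"sex": "male",   "age": "35+",   "w": 1.3},
]

# Hypothetical population margins; each margin sums to the same grand total.
margins = {
    "sex": {"male": 490.0, "female": 510.0},
    "age": {"16-34": 350.0, "35+": 650.0},
}

def rake(sample, margins, iterations=50):
    for _ in range(iterations):
        for var, targets in margins.items():
            totals = defaultdict(float)
            for person in sample:
                totals[person[var]] += person["w"]
            # Scale each weight so this margin's weighted totals hit the targets.
            for person in sample:
                person["w"] *= targets[person[var]] / totals[person[var]]

rake(sample, margins)
```

After raking, the weighted totals for both margins match the population targets to within rounding, regardless of the starting base weights.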
Panel design weighting procedure (Waves 2 and 3)
Testing the move from a cross-sectional design to a panel-based multi-wave structure required a new weighting approach, based on the traditional Wave 1 weights. However, Wave 2 and Wave 3 weights were constructed differently, to account for attrition, differing population coverage, and the changing composition of panel respondents.
How Wave 2 base weights were derived
Each respondent's Wave 1 calibrated weight was averaged across all rolling 12‑month datasets in which they appeared.
Wave 2 respondents' Wave 1 characteristics were compared with a Wave 1 reference sample, which was drawn from the quarters in which they were originally interviewed.
A new propensity score adjustment was applied to ensure Wave 2 respondents resembled the appropriate Wave 1 population across a broad set of household and individual variables.
Wave 1 and Wave 2 weights were scaled together, so both contributed appropriately to combined datasets, despite differences in coverage (for example, Wave 2 excludes new respondents aged 16 years and very recent immigrants).
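A toy version of the attrition adjustment in the steps above can be written as a weighting-class adjustment: within each class, Wave 2 weights are scaled so the weighted class shares match the Wave 1 reference sample. The class (tenure) and all data are invented for illustration; the actual procedure uses a propensity score model over a broad set of household and individual variables.

```python
# Weighting-class sketch of the Wave 2 attrition adjustment. Within each class,
# Wave 2 respondents' weights are multiplied by the ratio of the reference
# sample's class share to the Wave 2 class share. All data are hypothetical.
from collections import defaultdict

def class_totals(rows):
    totals = defaultdict(float)
    for cls, weight in rows:
        totals[cls] += weight
    return totals

# (class, weight) pairs: a Wave 1 reference sample and the Wave 2 respondents.
wave1_reference = [("owner", 1.0)] * 60 + [("renter", 1.0)] * 40
wave2_respondents = [("owner", 1.0)] * 45 + [("renter", 1.0)] * 15

ref = class_totals(wave1_reference)
w2 = class_totals(wave2_respondents)
ref_total = sum(ref.values())
w2_total = sum(w2.values())

# Adjustment factor per class: reference share divided by Wave 2 share.
factors = {cls: (ref[cls] / ref_total) / (w2[cls] / w2_total) for cls in ref}

adjusted = [(cls, weight * factors[cls]) for cls, weight in wave2_respondents]
```

In this toy example, owners are down-weighted (factor 0.8) and renters up-weighted (factor 1.6), restoring the 60:40 reference split while preserving the Wave 2 weighted total.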
How Wave 3 base weights were derived
Wave 3 weights were based on the same principles as Wave 2 weights. They also:
used a fixed Wave 1 reference sample (October 2021 to September 2022) because this time period was the start of the redesigned CSEW
required a more complex scaling structure, because Wave 3 excludes an additional subset of the population (for example, respondents aged 16 to 17 years and those who immigrated within the last 24 months)
These multi-wave procedures integrated Waves 1 to 3 within a panel-style framework. This enabled analysis across waves, while maintaining representativeness and comparability with the Wave 1 population.
For a more comprehensive description of our weighting strategy, please see our CSEW Technical report 2023/2024 (PDF, 2.3MB). For more information on calibration weighting, please see Section 8: Statistical conventions and methods of our User guide to crime statistics for England and Wales.
Sex
After weighting was applied, the sex distribution across all waves of the CSEW closely aligned with Census 2021 benchmarks. The calibration weighting uses more current population projections for sex, age groups and regions, so while the weighted distributions were broadly similar to Census 2021 estimates, small differences (particularly for age and sex) were expected. Overall, this indicated that the weighting procedure effectively calibrated the sample to the general population and was not disrupted by the other weighting adjustments applied alongside it.
Figure 7: Weighted respondent sex distribution across Waves 1 to 3 closely matched Census 2021
Weighted respondent sex distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with the Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are weighted.
- Percentages may not sum to 100 because of rounding.
Age
Weighting moved the age distribution of the sample in all waves closer to the Census 2021 benchmark. Following weighting, the biggest difference in sample composition continued to be in the youngest age group. This is broken down as follows:
Wave 1 – 12.3% of respondents were aged 16 to 24 years
Wave 2 – 11.0% of respondents were aged 17 to 24 years
Wave 3 – 9.6% of respondents were aged 18 to 24 years
This is compared with 13.0% of people aged 16 to 24 years in the Census 2021 benchmarks.
This breakdown is logical, since the cohort of respondents ages with each successive wave: there were no respondents aged 16 years in Wave 2 and none aged 16 or 17 years in Wave 3. Compared with Census 2021, differences in sample composition across waves were much smaller than in the unweighted data. The calibration weighting effectively corrected for differences between the CSEW sample across waves and the general population.
Figure 8: Weighting closely aligned the age distribution across Waves 1 to 3 with Census 2021
Weighted respondent age distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with the Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- As the proportion of respondents in the 16 to 24 years age group decreases, the proportions in all other age groups increase.
- The survey data in this chart are weighted.
- Percentages may not sum to 100 because of rounding.
Ethnicity
Ethnicity was not included in the calibration weighting. This is because population benchmarks for ethnicity are not updated annually, unlike those for age, sex and region. However, the ethnic distribution of the sample across all waves aligned closely with the Census 2021 benchmark following weighting. The weighted sample more accurately reflected the composition of the population.
Figure 9: Weighting improved the alignment of respondent ethnicity in Waves 1 to 3 with Census 2021
Weighted respondent ethnicity distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with the Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are weighted.
- Percentages may not sum to 100 because of rounding.
- The Census 2021 ethnicity figures cover the entire population, including all ages.
Tenure
Like ethnicity, tenure was not included in the calibration weighting because its population benchmarks are not updated annually. However, the tenure distribution of the sample across all waves also aligned closely with the Census 2021 benchmark following weighting. This suggested the weighted sample more accurately reflected the composition of the population.
Figure 10: Weighting improved the alignment of tenure distribution in Waves 1 to 3 with Census 2021
Weighted respondent tenure distribution for CSEW (Wave 1) and CSEW Panel Design Test (Waves 2 and 3), compared with the Census 2021
Source: Crime Survey for England and Wales (CSEW) Panel Design Test and Census 2021 from the Office for National Statistics
Notes:
- The analysis is based on year ending (YE) March 2025 interviews, which include Wave 1 baseline participants, Wave 2 first‑year follow‑ups, and Wave 3 final‑year follow‑ups.
- Wave 1 includes respondents aged 16 years and over. Wave 2 includes respondents aged 17 years and over. Wave 3 includes respondents aged 18 years and over. This progression reflects the 12-month interval between successive interviews.
- The survey data in this chart are weighted.
- Percentages may not sum to 100 because of rounding.
Outcomes of panel design weighting procedure
Overall, there was close alignment of the weighted sample with the Census 2021 benchmarks for sex, age, ethnicity, and tenure. This provides reassurance that the weighting strategy is operating as intended and that it effectively restored the sample to known population totals.
This is particularly notable for ethnicity and tenure, which were not included in the calibration stage. These distributions moved closer to Census 2021 benchmarks after weighting, indicating that the combined set of adjustments, including the additional propensity score and scaling procedures introduced from Wave 2 onwards, improved population alignment across waves.
However, this alignment should not be interpreted as evidence that non-response bias has been fully addressed. The weighting approach necessarily assumes that respondents within each weighting class are representative of all individuals in that class, including those who do not take part.
Non-responders may differ systematically from responders in ways that are not fully captured by the weighting variables (particularly characteristics related to criminal victimisation). These differences cannot be corrected through calibration or the extended Wave 2+ adjustments alone. This means that while weighting enhances representativeness according to known population totals, residual non-response bias may remain. The effectiveness of the weighting approach should be assessed alongside the broader quality indicators presented in this working paper.
7. Experimental results and main estimates of crime
The shift from a cross-sectional survey design to the quasi-longitudinal panel approach implemented in the test is a material methodological change. As such, we anticipated a degree of discontinuity in our crime victimisation estimates. A core aim of the test was to assess the scale and nature of these differences and to understand their implications for the production of headline statistics.
The design aimed to integrate data collected across multiple survey waves to produce annual estimates based on interviews conducted within a common 12-month reference period. This combined-wave approach sought to improve the precision of year-on-year comparisons by reducing sampling variability, while also substantially increasing the effective sample size underpinning the estimates. Under this design, the survey was expected to achieve an eventual effective annual sample (over four or five waves) of 60,000 interviews, compared with approximately 31,000 interviews under the existing cross-sectional model.
Individual waves were not intended to support standalone estimates of crime. However, as part of the methodological evaluation, each wave was weighted separately and analysed independently to assess internal consistency and to quantify the impact of the revised design. Combined wave estimates were also produced across the two survey years to illustrate the expected characteristics of final annual estimates under the approach being tested.
Results for year ending March 2024
| Offence group | Wave 1 Number of incidents (1000's) | Wave 1 Lower CI | Wave 1 Upper CI | Wave 2 Number of incidents (1000's) | Wave 2 Lower CI | Wave 2 Upper CI | Significance Test [note 1] | Percentage Change [note 1] |
|---|---|---|---|---|---|---|---|---|
| Violence | 1,125 | 897 | 1,352 | 2,063 | 1,160 | 2,966 | [s] | 83.4 |
| Robbery | 113 | 60 | 167 | 69 | 9 | 129 | | -38.9 |
| Theft offences | 2,686 | 2,504 | 2,869 | 3,356 | 2,848 | 3,864 | [s] | 24.9 |
| Criminal damage | 664 | 587 | 742 | 858 | 674 | 1,041 | | 29.2 |
| All CSEW crime excluding fraud and computer misuse | 4,589 | 4,249 | 4,928 | 6,347 | 5,269 | 7,424 | [s] | 38.3 |
| Fraud and computer misuse | 4,199 | 3,939 | 4,459 | 6,111 | 5,502 | 6,721 | [s] | 45.5 |
| All CSEW crime including fraud and computer misuse | 8,787 | 8,318 | 9,257 | 12,458 | 11,117 | 13,799 | [s] | 41.8 |
| Unweighted base - number of people aged 16 and over | 30,847 | 8,207 |
Table 1: Incidents of crime, people aged 16 years and over/households, with percentage change and statistical significance of change
In the first full survey year (year ending (YE) March 2024), two waves were in operation:
Wave 1 was the existing Crime Survey for England and Wales (CSEW) cross-sectional design
Wave 2 included respondents from the previous survey year (YE March 2023) who had agreed to be reinterviewed approximately one year later
Wave 1 crime estimates in this working paper are our published CSEW crime estimates for YE March 2024. Wave 2 estimates were produced using a distinct weighting strategy reflecting its panel-based design (described in Section 6: Using weights).
The achieved Wave 2 sample size was 8,207 interviews, which limited the power to detect statistically significant differences for most of the detailed offence categories. Because of this, comparisons in this working paper focus on headline measures of crime.
Comparative analysis indicates that Wave 2 consistently produced higher crime estimates than Wave 1, though not all differences were statistically significant. Robbery was the only exception: Wave 2 produced a lower robbery estimate than Wave 1, though this difference was not statistically significant.
Incident estimates for Wave 2 were 41.8% higher than those for Wave 1 for total CSEW crime (including fraud and computer misuse). Wave 2 estimates were 38.3% higher than Wave 1 for total CSEW crime (excluding fraud and computer misuse).
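These percentage changes follow directly from the incident totals in Table 1; a quick arithmetic check, in Python:

```python
def pct_change(wave1, wave2):
    """Percentage change from the Wave 1 to the Wave 2 estimate."""
    return 100 * (wave2 - wave1) / wave1

# Incident totals (thousands) from Table 1
incl_fraud = pct_change(8787, 12458)  # ≈ 41.8, all crime incl. fraud
excl_fraud = pct_change(4589, 6347)   # ≈ 38.3, all crime excl. fraud
```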
Violent crime had the most pronounced divergence between estimates. Wave 2 incident estimates were 83.4% higher than in Wave 1.
Differences for criminal damage were also not statistically significant.
Results for year ending March 2025
| Offence group | Wave 1 Number of incidents (1000's) | Wave 1 Lower CI | Wave 1 Upper CI | Wave 2 Number of incidents (1000's) | Wave 2 Lower CI | Wave 2 Upper CI | Significance Test [note 1] | Percentage Change [note 1] |
|---|---|---|---|---|---|---|---|---|
| Violence | 1,103 | 889 | 1,316 | 1,250 | 772 | 1,727 | | 13.3 |
| Robbery | 83 | 50 | 116 | 82 | 20 | 143 | | -1.2 |
| Theft offences | 2,801 | 2,376 | 3,227 | 3,687 | 2,691 | 4,683 | [s] | 31.6 |
| Criminal damage | 608 | 547 | 669 | 955 | 737 | 1,173 | [s] | 57.1 |
| All CSEW crime excluding fraud and computer misuse | 4,595 | 3,862 | 5,329 | 5,973 | 4,221 | 7,726 | [s] | 30.0 |
| Fraud and computer misuse | 4,850 | 4,544 | 5,157 | 6,716 | 6,056 | 7,376 | [s] | 38.5 |
| All CSEW crime including fraud and computer misuse | 9,445 | 8,405 | 10,486 | 12,689 | 10,277 | 15,102 | [s] | 34.3 |
| Unweighted base - number of people aged 16 and over | 31,532 | 7,689 |
Table 2: Incidents of crime, people aged 16 years and over/households across waves, with percentage change and statistical significance of change
| Offence group | Wave 1 Number of incidents (1000's) | Wave 1 Lower CI | Wave 1 Upper CI | Wave 3 Number of incidents (1000's) | Wave 3 Lower CI | Wave 3 Upper CI | Significance Test [note 1] | Percentage Change [note 1] |
|---|---|---|---|---|---|---|---|---|
| Violence | 1,103 | 889 | 1,316 | 2,081 | 880 | 3,283 | | 88.7 |
| Robbery | 83 | 50 | 116 | 129 | 4 | 254 | | 55.4 |
| Theft offences | 2,801 | 2,376 | 3,227 | 3,229 | 1,877 | 4,581 | | 15.3 |
| Criminal damage | 608 | 547 | 669 | 949 | 699 | 1,200 | [s] | 56.1 |
| All CSEW crime excluding fraud and computer misuse | 4,595 | 3,862 | 5,329 | 6,388 | 3,459 | 9,318 | | 39.0 |
| Fraud and computer misuse | 4,850 | 4,544 | 5,157 | 6,219 | 5,304 | 7,134 | [s] | 28.2 |
| All CSEW crime including fraud and computer misuse | 9,445 | 8,405 | 10,486 | 12,607 | 8,763 | 16,452 | [s] | 33.5 |
| Unweighted base - number of people aged 16 and over | 31,532 | 5,305 |
Table 3: Incidents of crime, people aged 16 years and over/households across waves, with percentage change and statistical significance of change
In the second survey year (YE March 2025), an additional third wave was introduced. This comprised Wave 2 respondents who had agreed to a further interview in the previous year.
The achieved Wave 2 sample size in the second year was 7,689 interviews, slightly lower than in the previous year. The achieved Wave 3 sample size was 5,305 interviews.
Comparative analysis shows that Wave 2 again produced higher crime estimates than Wave 1 (with the exception of robbery), though differences were not always statistically significant. For total CSEW crime including fraud and computer misuse, Wave 2 incident estimates were 34.3% higher than Wave 1, compared with 41.8% in the previous year. For total CSEW crime excluding fraud and computer misuse, Wave 2 estimates were 30.0% higher, compared with 38.3% the previous year.
Wave 2 incident estimates for theft offences, criminal damage, and fraud and computer misuse were all statistically significantly higher than those for Wave 1 (by 31.6%, 57.1% and 38.5%, respectively). For violent crime, the pronounced difference observed between Waves 1 and 2 in the previous year did not persist. In YE March 2025, Wave 2 incident estimates for violent crime were not statistically different from Wave 1 (a 13.3% difference, compared with 83.4% in the previous year).
There may be a cohort effect, indicated by elevated violent crime estimates in Wave 2 in YE March 2024 and Wave 3 in YE March 2025 (88.7% higher than Wave 1). Because Wave 3 respondents are drawn from the previous year's Wave 2 cohort, the persistence of higher estimates within this retained group may reflect differences in the characteristics of those who continue to take part, rather than genuine population change.
Combined wave estimates
| Offence group | Wave 1 Number of incidents (1000's) | Wave 1 Lower CI | Wave 1 Upper CI | Combined Waves Number of incidents (1000's) [note 1] | Combined Waves Lower CI [note 1] | Combined Waves Upper CI [note 1] | Significance Test [note 2] | % Change [note 2] |
|---|---|---|---|---|---|---|---|---|
| Violence | 1,125 | 897 | 1,352 | 1,338 | 1,077 | 1,600 | | 18.9 |
| Robbery | 113 | 60 | 167 | 121 | 67 | 174 | | 7.1 |
| Theft offences | 2,686 | 2,296 | 3,076 | 2,879 | 2,472 | 3,285 | | 7.2 |
| Criminal damage | 664 | 587 | 742 | 706 | 633 | 778 | | 6.3 |
| All CSEW crime excluding fraud and computer misuse | 4,589 | 3,840 | 5,337 | 5,044 | 4,249 | 5,838 | | 9.9 |
| Fraud and computer misuse | 4,199 | 3,939 | 4,459 | 4,597 | 4,347 | 4,846 | [s] | 9.5 |
| All CSEW crime including fraud and computer misuse | 8,787 | 7,779 | 9,796 | 9,640 | 8,597 | 10,684 | | 9.7 |
| Unweighted base - number of people aged 16 and over | 30,847 | 39,054 |
Table 4: Incidents of crime, people aged 16 years and over/households across waves, with percentage change and statistical significance of change
| Offence group | Wave 1 Number of incidents (1000's) | Wave 1 Lower CI | Wave 1 Upper CI | Combined Waves Number of incidents (1000's) [note 1] | Combined Waves Lower CI [note 1] | Combined Waves Upper CI [note 1] | Significance Test [note 2] | % Change [note 2] |
|---|---|---|---|---|---|---|---|---|
| Violence | 1,103 | 889 | 1,316 | 1,234 | 1,020 | 1,448 | | 11.9 |
| Robbery | 83 | 50 | 116 | 90 | 58 | 121 | | 8.4 |
| Theft offences | 2,801 | 2,376 | 3,227 | 3,046 | 2,639 | 3,453 | | 8.7 |
| Criminal damage | 608 | 547 | 669 | 697 | 636 | 758 | [s] | 14.6 |
| All CSEW crime excluding fraud and computer misuse | 4,595 | 3,862 | 5,329 | 5,066 | 4,353 | 5,779 | | 10.3 |
| Fraud and computer misuse | 4,850 | 4,544 | 5,157 | 5,358 | 5,089 | 5,628 | [s] | 10.5 |
| All CSEW crime including fraud and computer misuse | 9,445 | 8,405 | 10,486 | 10,424 | 9,442 | 11,407 | | 10.4 |
| Unweighted base - number of people aged 16 and over | 31,532 | 44,526 |
Table 5: Incidents of crime, people aged 16 years and over/households across waves, with percentage change and statistical significance of change
YE March 2024 combined Wave 1 and 2 estimates for total crime incidents, both including and excluding fraud, are approximately 10% higher than those produced from Wave 1 alone (9.7% and 9.9%, respectively). YE March 2025 combined Waves 1, 2 and 3 estimates for total crime, both including and excluding fraud, are also approximately 10% higher than those produced from Wave 1 alone (10.4% and 10.3%, respectively).
None of the differences for total crime are statistically significant. However, significance testing with two overlapping samples is conservative, because the overlap introduces correlation between the samples and reduces the effective independence of observations. In other words, the statistical test errs on the side of caution by requiring stronger evidence to reject the null hypothesis, which minimises the risk of false positives. Only the differences for fraud and computer misuse were statistically significant in both YE March 2024 and YE March 2025 (9.5% and 10.5%, respectively).
Given the major methodological changes, involving switching between face-to-face and telephone operation and moving to a quasi-panel design, changes in scale of this magnitude may be expected.
8. Potential sample bias
To assess whether differences in estimates across waves could be attributed to sample bias, we:
carried out additional checks to ensure the survey weighting appropriately corrected for changes in sample composition
applied logistic regressions to understand the likelihood of respondent retention across waves and the patterns in victimisation reporting
Weighting checks
For year ending (YE) March 2024, we applied the relevant Wave 2 weight to Wave 1 data for personal and household characteristics, alongside Crime Survey for England and Wales (CSEW) headline prevalence estimates. There were only marginal differences between Wave 1 data weighted with Wave 1 weights and the same data weighted with Wave 2 weights. The weighting effectively adjusted for bias introduced through differences in sample composition across waves. This procedure did not cause unexpected changes in the CSEW headline prevalence estimates.
Logistic regression models
We used four regression models to examine whether being a victim of CSEW headline crime in earlier waves influenced whether respondents reported victimisation in subsequent waves and whether they continued participation.
Regression models 1 and 3 followed the same cohort of respondents across two waves of the survey (Wave 1 in YE March 2024 and Wave 2 in YE March 2025). Regression models 2 and 4 followed the same cohort of respondents across three waves of the survey (Wave 1 in YE March 2023, Wave 2 in YE March 2024, and Wave 3 in YE March 2025).
The models aimed to address the following questions:
model 1 – "For respondents who reported victimisation at Wave 1 and continued participation at Wave 2, how likely were they to report victimisation at Wave 2?"
model 2 – "For respondents who reported victimisation at Waves 1 and 2, and continued participation at Wave 3, how likely were they to report victimisation at Wave 3?"
model 3 – "For respondents who reported victimisation at Wave 1, how likely were they to continue participation at Wave 2?"
model 4 – "For respondents who reported victimisation at Waves 1 and 2, how likely were they to continue participation at Wave 3?"
We included additional CSEW variables in each model that are known for their association with either victimisation or response (depending on the model), if they were statistically significantly associated with the outcome variable. For further information on methodology and regression modelling, see Section 12: Data sources and quality. The models are unweighted and should be interpreted with caution. They reflect associations within the responding sample and do not adjust for selection probabilities or differential non-response.
For model 1, victimisation at Wave 1 demonstrated a statistically significant association with victimisation at Wave 2. Respondents who reported victimisation at Wave 1 and continued to participate at Wave 2 had an odds ratio of 2.5 for reporting victimisation at Wave 2 (while holding all other variables in the model constant). That is, respondents who were victims at Wave 1 had more than twice the odds of reporting victimisation again at Wave 2, compared with non-victims.
This remained broadly similar for model 2. Victimisation at Waves 1 and 2 demonstrated a statistically significant association with victimisation at Wave 3. Respondents who reported victimisation at Waves 1 and 2 and continued to participate at Wave 3 had an odds ratio of 2.6 for reporting victimisation at Wave 3 (while holding all other variables in the model constant).
For model 3, victimisation at Wave 1 demonstrated a statistically significant association with response at Wave 2. The odds of continued participation at Wave 2 were 11% higher for respondents who reported victimisation at Wave 1 (while holding all other variables in the model constant). While reporting victimisation at Wave 1 shows an association with the likelihood of response at Wave 2, the overall effect is likely to be minimal.
For model 4, victimisation at Waves 1 and 2 demonstrated a statistically significant association with Wave 3 response. This association was more pronounced, compared with model 3. Respondents who reported victimisation at Waves 1 and 2 had an odds ratio of 5.2 for response at Wave 3 (while holding all other variables in the model constant).
Models 1 and 2 together highlight patterns of repeat victimisation, where a subgroup of the panel sample continue to experience crime over time. Models 3 and 4 suggest an increase in the propensity to respond for this subgroup, which results in victims of crime becoming overrepresented in later waves. It is worth noting, however, that the individual-level Wave 2 and Wave 3 weights do adjust for Wave 1 victimisation status. These weighting procedures help limit the overrepresentation of Wave 1 victims in Wave 2 and Wave 3 weighted estimates. However, repeat victims may still appear disproportionately in later waves, relative to one-off victims.
If the panel design is revisited in future, further investigation, potentially using longitudinal weights, would be needed to assess the impact of any attrition or self-selection bias on estimates. This is particularly important if repeat victims are more motivated to report and are therefore more likely to respond.
For further details on each model, please refer to the corresponding table in our accompanying dataset.
9. Telescoping
The telescoping effect refers to a common memory bias where individuals misplace the timing of past events, often recalling them as occurring more recently than they actually did. This cognitive distortion can lead to inaccuracies in survey data, particularly when respondents are asked to report events within a specific reference period.
In the context of panel-designed victimisation surveys, telescoping occurs when respondents mistakenly report offences outside the intended reference period. This causes duplicate reporting across waves and inflates crime counts. To evaluate the impact of telescoping on our test, we implemented a structured review process across consecutive survey waves. This involved three steps.
1. Data processing and extraction, which involved identifying potential duplicates by flagging repeated offence codes across waves.
2. Qualitative review by crime analysts, which involved examining victim form descriptions to determine whether offence details were repeated or closely aligned.
3. Classification of likely telescoping cases, which involved confirming matches where offence descriptions aligned across waves and incidents fell within three months of the current wave's reference period.
Cases meeting these criteria were classified as likely telescoping and were used to inform data quality checks and exploratory longitudinal analysis.
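The first, automated step of this process can be sketched in code. This is an illustrative reconstruction only: the field names `offence_code` and `incident_date` are hypothetical, the three-month window is approximated as 92 days, and in practice flagged cases went to analysts for qualitative review rather than being classified automatically.

```python
from datetime import date, timedelta

def flag_candidates(prev_wave, curr_wave, ref_start, window_days=92):
    """Flag current-wave incidents whose offence code repeats a
    previous-wave incident and which fall within roughly three months
    of the start of the current reference period."""
    prev_codes = {inc["offence_code"] for inc in prev_wave}
    return [
        inc for inc in curr_wave
        if inc["offence_code"] in prev_codes
        and inc["incident_date"] - ref_start <= timedelta(days=window_days)
    ]

ref_start = date(2024, 4, 1)
prev_wave = [{"offence_code": 42, "incident_date": date(2023, 11, 5)}]
curr_wave = [
    {"offence_code": 42, "incident_date": date(2024, 4, 20)},  # repeat, near start
    {"offence_code": 99, "incident_date": date(2024, 5, 1)},   # new offence code
]
candidates = flag_candidates(prev_wave, curr_wave, ref_start)
# candidates contains only the repeated offence near the period start
```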
| Telescoping category | YE March 2024 Wave 2 | YE March 2025 Wave 2 | YE March 2025 Wave 3 |
|---|---|---|---|
| Same Offence recorded across Waves [note 3,4] | 249,636 | 210,158 | 371,590 |
| Same Offence recorded across Waves (%) [note 3, 4] | 2.0 | 1.7 | 2.9 |
| Likely Telescoping incident [note 4, 5] | 24,980 | 21,882 | 45,403 |
| Likely Telescoping incident (%) [note 4, 5] | 0.2 | 0.2 | 0.4 |
| Unweighted base - number of people aged 17 and over | 8,207 | 7,689 | 5,305 |
Table 6: Weighted telescoping incidents as a percentage of total crime by year and wave
We found that respondents generally recalled the correct reference period for reported offences (Table 6). A small proportion of overlapping cases were linked to potential recall issues. However, most offences that appeared in more than one wave were attributable to re-victimisation, where respondents experienced different incidents across waves but reported the same offence category, rather than the same event.
We observed a limited degree of uncertainty regarding the timing of incidents and their allocation to the correct reference period among some respondents. However, the overall prevalence of such cases remained very low. In year ending (YE) March 2024, only 0.2% of Wave 1 incidents were identified as potential duplicates at Wave 2. These cases were characterised by incidents reported as occurring within three months of the start of the subsequent wave's reference period and by offence descriptions showing a high degree of similarity. This proportion remained stable, at 0.2% for incidents reported between Wave 1 and Wave 2 in YE March 2025.
Conversely, between Waves 2 and 3 during the same year, the proportion increased to 0.4%. This suggests that memory decay, or uncertainty about reference periods, may become more pronounced as respondents participate in successive waves over time.
Although these percentages are minimal, they provide an important insight. If a similar panel design is adopted in future, continuous monitoring and clear guidance for respondents will be essential to minimise misreporting and to maintain the accuracy of longitudinal data.
10. Panel-design and modal effects
Panel-design effects arise from repeatedly surveying the same respondents over time. Modal effects reflect differences attributable to the method of data collection.
In the Crime Survey for England and Wales (CSEW) Panel Design Test, Wave 1 interviews were conducted face-to-face, while Wave 2+ interviews were carried out by telephone. As a result, potential differences in victimisation estimates reflect a combination of panel-design effects and modal effects that cannot be separated.
Panel-design effects may include panel conditioning or respondent fatigue, where respondents alter their behaviour or responses through repeated participation. They can also include learning effects, where respondents become more familiar with survey questions and concepts over time, or become more or less willing to disclose sensitive incidents.
These panel-related processes operate alongside telephone-specific measurement effects. Telephone interviewing may suppress disclosure for some crime types, owing to reduced rapport and a lack of visual aids. However, it can increase perceived anonymity for others, potentially increasing reporting. Because these mechanisms occur simultaneously, and sometimes in opposing directions, the higher victimisation levels observed at Wave 2+ cannot be attributed to panel-design or modal effects alone.
Consideration was given to interviewing half of Wave 2+ respondents face-to-face to isolate mode effects. Though this was a methodologically appealing approach, it presented major operational and design challenges. Delivering a mixed-mode follow-up would require substantial additional fieldwork, with dispersed travel arrangements. Additionally, it would be difficult to discern any true differences between groups, given the size of the Wave 2+ samples. For these reasons, this approach was considered analytically limited.
11. Panel Design Test outcomes
The Crime Survey for England and Wales (CSEW) Panel Design Test collected important evidence on the feasibility and implications of moving from a cross-sectional to a quasi-longitudinal approach. The tested panel design aimed to increase effective sample size and improve precision. However, these benefits were not fully realised and several methodological challenges emerged.
Response and attrition
Agreement to recontact was relatively high. However, cumulative response rates declined sharply across waves. This attrition substantially reduced the effective sample size and prevented the design from achieving its main objective of doubling the number of interviews. The resulting loss of representativeness raises concerns about non-response bias and the reliability of longitudinal estimates.
Sample composition and weighting
Later waves did not improve the achieved sample profile. Instead, the constraints observed at Wave 1 were amplified as the panel progressed, particularly underrepresentation of younger adults and private renters, and overrepresentation of older age groups and owner-occupiers. While weighting corrected much of this bias, adjustments were considerable, especially for younger cohorts. This increased reliance on a small number of cases and potentially inflated variance. This indicates that panel continuation, under the tested conditions, worsens initial composition imbalances rather than correcting them.
Crime estimates and design effects
Crime estimates from later waves were consistently higher than those from Wave 1, with pronounced differences for certain offence types. These variations likely reflect a combination of attrition, telescoping, panel conditioning, and modal differences between face-to-face and telephone interviewing. Combined wave estimates were around 10% higher than Wave 1 alone, but most differences were not statistically significant. Importantly, it was not possible to separate panel design effects from modal effects within this experiment.
Telescoping and data quality
Evidence of telescoping was minimal. However, it was slightly higher between later waves, which suggests that recall challenges can accumulate over time. This underlines the importance of clear guidance and ongoing monitoring, if longitudinal features are retained.
Recommendations for future panel design
Prioritise representativeness across waves
Later waves amplify Wave 1 imbalances, indicating that panel continuation without additional controls will not improve coverage of rarely heard groups. Targeted retention strategies could be used to stabilise composition over time.
Investigate mode and conditioning effects
Quantify mode-related differences and panel conditioning through controlled experiments.
Assess cost-benefit trade-offs
Evaluate whether an additional cross-sectional sample or a rotating panel offers better representativeness and efficiency than continued annual re-interviews.
Strengthen reference period accuracy
Maintain low telescoping through clearer anchoring, event-dating probes and automated checks.
Future survey design testing
Based on these findings, we have decided to investigate supplementing the existing face-to-face survey with a separate online-first, push-to-web sample. This combined approach would allow production of integrated estimates across both samples and is expected to deliver several advantages. These are:
improved sample profile - push-to-web is likely to increase participation among younger adults and other groups; this will address persistent underrepresentation in the current design
cost efficiency - online data collection reduces costs per interview, without a reliance on interviewers and associated costs; this will mean resources can be focused on non-digital follow-up and rarely heard cases
scalability - supplementing the face-to-face survey with an online-first sample offers a realistic way to double the effective sample size, without the operational constraints observed in the panel experiment
In addition, we plan to explore a hybrid panel design that uses the existing face-to-face sample and conducts second interviews online. This approach could improve both response rates and the sample profile of a panel design by reducing respondent burden and offering a more convenient mode for follow-up interviews. Testing this option will help determine whether a mixed-mode panel structure can deliver the benefits of longitudinal measurement while mitigating attrition and demographic imbalance.
12. Data sources and quality
Crime Survey for England and Wales
All results presented in this working paper are based on the Crime Survey for England and Wales (CSEW) Panel Design Test. Wave 1 interviews correspond to respondents' original participation in the main CSEW. Wave 2 participants are those Wave 1 respondents who consented to be re‑interviewed on an annual basis as part of the panel follow‑up.
For example, the Wave 2 sample for year ending March 2024 consists of respondents who were first interviewed in year ending March 2023 and who agreed at that time to take part in a further interview approximately one year later. In this way, each Wave 2 cohort is directly derived from the preceding year's cross‑sectional sample and forms a panel of continuing respondents who complete subsequent annual interviews. For further information on this panel design, please see our CSEW Technical Report 2023/2024 (PDF, 2.3 MB).
Statistical testing
Although logistic regression can determine the strength of the relationship between one variable and another, it cannot determine causality.
The results of our logistic regression analysis are expressed as odds ratios, which is the ratio between two sets of odds (the probability of an event occurring, divided by the probability of the event not occurring).
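As a worked illustration of that definition (with made-up probabilities, not CSEW figures):

```python
def odds(p):
    """Odds of an event with probability p."""
    return p / (1 - p)

def odds_ratio(p_group, p_reference):
    """Ratio of the odds in one group to the odds in a reference group."""
    return odds(p_group) / odds(p_reference)

# If 50% of one group report an event against 20% of a reference
# group, the odds are 1.0 versus 0.25, giving an odds ratio of 4.0.
# Note the odds ratio (4.0) exceeds the ratio of the probabilities
# themselves (0.5 / 0.2 = 2.5), so the two should not be conflated.
example = odds_ratio(0.5, 0.2)  # 4.0
```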
Only differences that are statistically significant at the 5% level are described within this working paper, unless it is specifically stated that they are not significant. These changes are identified by p-values in the tables.
Although our logistic regression analysis covers a range of characteristics, our models can only partially explain differences in continued participation and victimisation reporting patterns between people within the panel. Many factors that affect being a victim of crime or responding to the survey are not quantified in our data sources or included in our models.
We present adjusted models, which include statistically significant variables associated with either response or victimisation. The models measured the strength of associations after accounting for other factors:
models 1 and 2 included age, highest qualification, region, and tenure
models 3 and 4 included ethnic group, age, highest qualification, region, and tenure
13. Cite this working paper
Office for National Statistics (ONS), released 10 March 2026, ONS website, working paper, An evaluation of the Crime Survey for England and Wales Panel Design Test