Method


1. Selecting the donor pool

The Improvement Analytics Unit began by selecting the 20 CCGs in England that were most similar to Northumberland (see Figure 2). There are 209 CCGs in England, but 30 CCGs in London and 59 vanguard CCGs participating in the new care models programme were excluded. This left 120 CCGs, which the unit characterised using variables such as the number of GPs per capita and the prevalence of common diseases. The unit then assessed the similarity of each CCG to Northumberland CCG across the whole set of variables, placing greater weight on variables that were more predictive of admission rates. The most similar 20 CCGs formed the basis for the selection of the synthetic control areas. More detail on these areas and the selection method can be found in the technical appendix.
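The similarity assessment described above can be sketched as a weighted distance calculation. The sketch below is illustrative only: the number of characteristics, the weights and the data are invented, and the unit's actual variables and weighting scheme are described in the technical appendix.

```python
import numpy as np

# Illustrative sketch only: the characteristics, weights and data below are
# invented stand-ins for the unit's actual variables.
rng = np.random.default_rng(0)

n_ccgs, n_vars = 120, 5
candidates = rng.normal(size=(n_ccgs, n_vars))   # standardised CCG characteristics
northumberland = rng.normal(size=n_vars)         # target CCG's characteristics

# Hypothetical weights, larger for variables more predictive of admission rates
weights = np.array([0.40, 0.25, 0.15, 0.10, 0.10])

# Weighted Euclidean distance of each candidate CCG from Northumberland
distances = np.sqrt(((candidates - northumberland) ** 2 * weights).sum(axis=1))

donor_pool = np.argsort(distances)[:20]          # the 20 most similar CCGs
```

The key design choice is that more predictive variables contribute more to the distance, so the donor pool is similar to the target in the ways that matter for the outcome.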

Figure 2: Clinical commissioning groups selected for the donor pool

Note: The CCG areas (ranked from most to least similar in terms of the weighted distance metric) are: Shropshire; Wolverhampton; Oldham; Doncaster; Bolton; Tameside and Glossop; North East Essex; Basildon and Brentwood; Ipswich and East Suffolk; Mid Essex; Birmingham South and Central; East Riding of Yorkshire; Stoke-on-Trent; North West Surrey; Bristol; Heywood, Middleton and Rochdale; Coastal West Sussex; North Derbyshire; Rotherham; and West Essex.

2. Obtaining person-level data for residents

The remaining parts of the study used data from the Secondary Uses Service (SUS) – a national, person-level database that is closely related to the widely used Hospital Episode Statistics (HES). The Improvement Analytics Unit has access to SUS data for its work, and processes them in a secure environment based at the Health Foundation. All data are pseudonymised, meaning they have been stripped of all fields that could be used to identify patients directly, such as name, date of birth and address. Individuals’ NHS numbers are replaced with a pseudonym, which the unit used to link records for the same individual over time. The overall approach to information governance was scrutinised by the programme oversight group and by information governance experts at NHS Digital.

Some limitations in how the SUS data are collected affected the evaluation team's ability to reflect specific details of health care delivery in Northumberland.

  • Northumbria Healthcare NHS Foundation Trust records some ambulatory care unit activity in SUS as emergency admissions, even though a local payment arrangement exists for this activity. Figures for emergency admissions therefore include some ambulatory care unit activity. Other ambulatory care unit activity is recorded as outpatient attendances, but outpatient data were not considered in this evaluation.
  • When studying emergency admissions, it was not possible to confidently distinguish patients who originally presented at A&E from those who were admitted to hospital directly following an urgent request from a GP. Both types of emergency admission are therefore included in the analysis.
  • A&E attendances at North Tyneside, Wansbeck and Hexham were recorded in SUS as attendances to ‘type 1’ departments (major, consultant-led 24-hour services with full resuscitation facilities) for the entirety of the period covered by this evaluation, even though these departments were gradually refocused on providing care for minor injuries and illnesses. To deal with this, the Improvement Analytics Unit chose to define A&E attendances as including activity at minor injury units, walk-in centres and specialty departments, in addition to those at major departments. The unit applied the same definition before and after the changes, and for the different CCG areas, to make sure the analysis compared like with like.

3. Producing impact metrics

The SUS data were used to assess how the population of each CCG used hospital care. The impact metrics were as follows:

  • The rate of visits to A&E departments – calculated as the total number of A&E visits from the SUS data, per 10,000 people in the local population. As explained, the metric included major A&E departments, specialty departments, minor injury units and walk-in centres used by the population of Northumberland CCG, whether run by Northumbria Healthcare or other trusts.
  • The rate of inpatient (elective and non-elective) admissions per 10,000 people. This metric included all inpatient admissions recorded in the SUS. It included certain admissions to the ambulatory care unit at the new specialist emergency care hospital.
  • The rate of emergency admissions per 10,000 people. This metric was intended to capture admissions that were unpredictable and occurred at short notice because of clinical need. It may have included certain admissions to the ambulatory care unit.
  • The rate of elective admissions per 10,000 people.
  • The percentage of patients in the CCG who were admitted, transferred or discharged within 4 hours of arrival at A&E. It is worth noting that this figure relates to the CCG population as a whole. The metric therefore differs from the A&E performance measure reported monthly by NHS England, which is reported separately for each trust. In addition, the metrics used in this report are risk-adjusted (see the next step for more details).
  • The average length of a visit to A&E in minutes, from the point of arrival to admission, transfer or discharge.
  • The percentage of patients attending A&E who were subsequently admitted to hospital.
  • The average length of inpatient admissions in days – this was analysed separately for all admissions, emergency admissions and elective admissions.

These impact measures were calculated monthly by applying the same definitions to data for each CCG. The synthetic control areas were determined using monthly data from May 2011 to April 2015 only. The impact of the changes was examined using analogous data for the period following implementation (August 2015 to July 2016).
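As a concrete illustration of the rate metrics above, a monthly event count can be converted to a rate per 10,000 residents as follows. The counts and population are made-up numbers, not Northumberland data.

```python
def rate_per_10000(event_count: int, population: int) -> float:
    """Express a monthly count of events per 10,000 residents."""
    return event_count / population * 10_000

# e.g. a hypothetical 2,950 A&E attendances in one month for a
# population of 320,000 residents
monthly_rate = rate_per_10000(2950, 320_000)  # 92.1875 per 10,000
```

Expressing counts as rates per 10,000 makes areas of different population size directly comparable, which is what allows a weighted average of donor CCGs to stand in for Northumberland.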

4. Risk-adjusting the impact metrics

The Improvement Analytics Unit was aware that the redesign of urgent and emergency care might have led to changes in the characteristics of patients attending hospital, and these changes to case-mix might in turn have affected the impact measures. For example, it is plausible that following the redesign, more routine care was provided within primary care settings. In this case, A&E departments might have increasingly focused on care for patients with more severe needs, and we would expect a greater proportion of these patients to be admitted. Therefore, when comparing impact metrics, the unit adjusted for changes in the characteristics of patients attending hospital over time.

The risk-adjustment method took account of the following variables:

  • age (structured into 5-year bands up to 90 years, and then 90 years and over)
  • gender
  • ethnicity
  • the CCG responsible for the patient’s care
  • the numbers of elective and emergency admissions and A&E attendances in the preceding 24 months
  • the day of the week and month that the admission or A&E visit commenced
  • comorbidities (indicators for each of the 31 conditions used in the Elixhauser index, plus dementia)
  • for the impact measures that related to inpatient care, the primary diagnosis attached to the admission record. These were categorised using the summary hospital-level mortality indicator-grouped Clinical Classifications Software.

Risk-adjusting the impact measures ensured that the findings could not be affected by changes in these variables over time. However, the risk-adjustment method was unable to take account of variables that were not recorded in the SUS data, including direct measures of illness severity. The rates of A&E visits and overall admissions were not risk-adjusted, as the health profile of the overall population was unlikely to change significantly over the period involved.
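One common form of risk adjustment is indirect standardisation, in which observed outcomes are compared with the outcomes expected given the case-mix. The sketch below uses this approach with invented strata and numbers; the unit's actual model-based method is described in the technical appendix.

```python
import numpy as np

# Invented example: three case-mix strata (eg age/comorbidity groups) with
# hypothetical baseline admission probabilities and attendance counts.
baseline_rate = np.array([0.10, 0.20, 0.35])   # baseline P(admitted) per stratum
attendances = np.array([500, 300, 200])        # A&E attendances per stratum
admitted = np.array([60, 55, 80])              # observed admissions per stratum

expected = (baseline_rate * attendances).sum()           # expected admissions: 180
observed = admitted.sum()                                # observed admissions: 195

# Case-mix-adjusted conversion rate: observed-to-expected ratio rescaled by
# the overall baseline rate
overall_baseline = expected / attendances.sum()          # 0.18
adjusted_rate = observed / expected * overall_baseline   # 0.195
```

If the case-mix shifts towards higher-risk strata, `expected` rises with it, so the adjusted rate only moves when observed outcomes depart from what the case-mix alone would predict.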

5. Selecting synthetic control areas

The Improvement Analytics Unit chose a different synthetic control area for each of the impact metrics. In each case, the synthetic control area was formed by assigning weights to the CCGs in the donor pool, then taking the weighted average of the relevant risk-adjusted impact metric. The weights were chosen so that the synthetic control area was very similar to Northumberland CCG in terms of the relevant risk-adjusted metric over the 48 months from 1 May 2011 to 30 April 2015.

The rationale is that, if the two areas were similar on a particular metric over this long period, then it is reasonable to suppose they would have continued to be similar in the absence of the changes to urgent and emergency care. The synthetic control areas provide a ‘best estimate’ regarding what would have happened to hospital use in Northumberland had these changes not been made.

The unit used an established statistical procedure to select the weights, and then validated the procedure by graphically assessing the similarity of the synthetic control area and Northumberland CCG over the period from May 2011 to April 2015. Data for the 6 weeks immediately preceding the changes to urgent and emergency care were excluded (from 1 May 2015 to 15 June 2015), since patterns of hospital use might already have started to change in anticipation of the changes being made. For some of the impact metrics, it was not possible to select a synthetic control area that adequately tracked Northumberland CCG over the entire period. In these instances, findings have not been reported as they are unreliable.
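The weight-selection step can be sketched as a constrained least-squares problem: find non-negative weights summing to one that minimise the gap between the weighted donor average and the target series over the pre-period. The sketch below solves this with projected gradient descent on toy data; it is an assumption-laden illustration, not the unit's actual procedure.

```python
import numpy as np

# Toy data standing in for 48 pre-period months of one metric in 20 donor CCGs
rng = np.random.default_rng(1)
n_months, n_donors = 48, 20
donors = rng.normal(10.0, 2.0, size=(n_months, n_donors))
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * 17)
target = donors @ true_w + rng.normal(0.0, 0.05, n_months)  # "Northumberland"

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1), 0.0)

# Projected gradient descent on the pre-period mean squared error
w = np.full(n_donors, 1.0 / n_donors)
step = 1.0 / np.linalg.eigvalsh(donors.T @ donors / n_months).max()
initial_mse = np.mean((donors @ w - target) ** 2)
for _ in range(5000):
    grad = donors.T @ (donors @ w - target) / n_months
    w = project_simplex(w - step * grad)

synthetic = donors @ w   # the synthetic control series for the pre-period
final_mse = np.mean((synthetic - target) ** 2)
```

The simplex constraint (non-negative weights summing to one) keeps the synthetic control inside the range of the donor pool, which is what makes graphical validation against the pre-period meaningful.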

6. Estimating the impact of the changes to urgent and emergency care

After forming the synthetic control areas, the unit estimated the impact of the changes to urgent and emergency care based on the differences between Northumberland CCG and the relevant synthetic control area. Although the new specialist emergency care hospital opened on 16 June 2015, data for the first 6 weeks were excluded, to allow the changes to bed in. Impacts were examined for the period from 1 August 2015 to 31 July 2016.

As explained in Box 3, the effect of the changes to urgent and emergency care can be estimated as the difference between the impact metric for Northumberland and that of the relevant synthetic control area. However, the precision of this estimate must be assessed. This is important since outcomes vary over time even without changes to care delivery, and it would be misleading to attribute normal statistical variation to the effect of changes made. Traditionally, statisticians deal with this issue by reporting the ‘p-value’, which is the probability that an effect of at least the magnitude observed could have arisen by chance. If this probability is low (eg less than 5%), then the findings are usually considered to represent a systematic difference between the two groups.

However, the synthetic control approach does not lend itself to the calculation of p-values, or related quantities like confidence intervals, and so a different approach was needed – the significance score. The significance score performs a very similar role to the p-value: the lower the significance score, the more confidence that the findings reflect a systematic difference in the impact measures between the two areas, rather than chance. The unit used a threshold of 5% to determine whether its findings were statistically robust.
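One common way to construct such a score is a placebo (permutation) test: estimate a 'placebo effect' for each donor CCG as if it, rather than Northumberland, had undergone the changes, then rank the real estimate against the placebo distribution. The sketch below uses invented effect sizes; the unit's exact construction is given in the technical appendix.

```python
import numpy as np

# Invented placebo effects, one per donor CCG, as if each donor had been
# rerun through the full synthetic control analysis
placebo_effects = np.array([
    -1.2, 0.4, 2.1, -0.7, 0.9, -1.5, 1.8, -0.3, 0.6, -1.1,
    0.2, -2.0, 1.3, -0.8, 0.5, 1.0, -0.6, 0.7, -1.4,
])
observed_effect = -3.2   # hypothetical estimate for the target CCG

# Significance score: share of placebo effects at least as extreme as the
# observed effect (two-sided, by absolute value)
score = np.mean(np.abs(placebo_effects) >= abs(observed_effect))
robust = score < 0.05    # threshold used in the report
```

In this toy example no placebo effect is as large as the observed effect, so the score is low and the finding would be treated as statistically robust; had many donors shown similar swings, the score would be high and the finding discounted as normal variation.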

7. Conducting sensitivity analyses

Although the evaluation approach is considered robust, it relied on several assumptions. As previously stated, the key assumption was that if Northumberland CCG and the synthetic control area were similar on a particular metric over a long period, then they would have continued to be similar in the absence of changes to urgent and emergency care. The Improvement Analytics Unit tested the sensitivity of the findings to these assumptions by making the following changes to its method:

  • structuring the hospital data on a quarterly, rather than monthly, basis
  • increasing the duration of the anticipation period, and thus only using data from May 2011 to April 2014 when determining the weights used to form the synthetic control areas, rather than from May 2011 to April 2015
  • considering the importance of risk adjustment by estimating effects without risk-adjusting the impact measures
  • reducing the size of the donor pool to the 10 CCGs most similar to Northumberland
  • using an alternative donor pool, selected using the Commissioning for Value method. This donor pool differed substantially from the donor pool in Figure 2, with only five CCGs common to both rankings. As before, CCGs in London and other new care models vanguards were excluded.

* This included primary and acute care systems, multispecialty community providers, enhanced health in care homes, and urgent and emergency care, as well as CCGs linked to two acute care collaborations – the Salford and Wigan Foundation Chain and the Healthcare Group in Dartford and Gravesham.

† Since April 2014, admissions to the ambulatory care unit have been counted as emergency admissions when medical ambulatory care is provided or frailty assessments are conducted, provided that the activity is not scheduled in advance. They are recorded as outpatient activity when the activity is scheduled at least a day in advance (eg when a patient is asked to come back the next day for a scan) or when the unit is providing surgical care or emergency gynaecology. Before April 2012, all admissions to the ambulatory care unit were counted as emergency admissions. Between April 2012 and April 2014, they were all recorded as outpatient appointments.
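The recording rules above can be encoded as a small date-dependent classifier. This is a hypothetical sketch: the function name and boolean flags are illustrative, not fields that exist in SUS.

```python
from datetime import date

def acu_record_type(admitted_on: date,
                    scheduled_in_advance: bool,
                    surgical_or_gynae: bool) -> str:
    """Classify ambulatory care unit activity per the recording rules above."""
    if admitted_on < date(2012, 4, 1):
        return "emergency admission"   # all ACU activity before April 2012
    if admitted_on < date(2014, 4, 1):
        return "outpatient"            # all ACU activity April 2012 - April 2014
    # From April 2014: scheduled work, surgical care and emergency gynaecology
    # are outpatient; unscheduled medical ambulatory care and frailty
    # assessments count as emergency admissions
    if scheduled_in_advance or surgical_or_gynae:
        return "outpatient"
    return "emergency admission"
```

The date thresholds matter for the evaluation because the pre-period (May 2011 to April 2015) spans all three recording regimes, so emergency admission counts are not measured consistently across it.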

** The Elixhauser comorbidity index is a method of categorising comorbidities of patients based on the International Classification of Diseases (ICD) diagnosis codes found in administrative data, such as hospital abstracts data.

†† Indicators for summary hospital-level mortality indicator categories with less than three cases per 10,000 population for elective or emergency admissions were excluded to ensure model convergence and prevent overfitting.

‡‡ For more information see the technical appendix: www.health.org.uk/impact-redesign-care-Northumberland

§§ The rate of elective admissions, the proportion of accident and emergency visits leading to an admission, the average length of stay of elective admissions, and the average length of stay of emergency admissions.

¶¶ For more information see the technical appendix: www.health.org.uk/impact-redesign-care-Northumberland

*** These were Shropshire, North East Essex, Ipswich and East Suffolk, East Riding of Yorkshire, and North Derbyshire.
