What we did

This analysis explores how quality is measured at a national level in England, in three specific clinical areas: breast cancer care, children and young people’s mental health (CYPMH) care and renal care.

We selected these clinical areas because of their differing levels of national scrutiny. Breast cancer care was chosen because cancer is a national priority and breast cancer has long been a key focus within cancer services, with roots back to the NHS Cancer Plan in 2000. Renal care was chosen because of its relatively low profile at a national level, but also because it is associated with the UK Renal Registry – a longstanding registry that collects quality data. CYPMH care was chosen because, in contrast to cancer and renal care, national quality measurement in this area is a relatively new development, and because we wanted to speak to clinical teams working outside acute hospital settings. Within CYPMH care we focused on indicators relating to young people aged 14 to 18 years, excluding those with learning difficulties and looked after children.

The research comprised two interrelated parts: desk research in the form of quality indicator mapping and analysis, and fieldwork in the form of semi-structured interviews.

Mapping the indicators

We compiled a list of the national quality indicators specified for breast cancer care, CYPMH care and renal care. These were taken from a wide range of sources, such as regulatory frameworks, official datasets, registries, audits, dashboards and patient surveys.

The lists were designed to give a comprehensive snapshot of the quality indicators directly relevant to the services delivered by the clinical teams we spoke to, rather than an exhaustive catalogue for each clinical area. For example, we did not include indicators on screening in the breast cancer care list, indicators on mental health spending or transformation milestones in the CYPMH care list, or indicators on kidney transplants in the renal care list. We only included indicators that were part of a formal, national measurement framework or similar.

To explore these indicators in more detail, we analysed them in the context of the following considerations:

  • source – such as a national framework or patient survey
  • organisations requesting, using and/or monitoring the data
  • indicator type – for example, an outcome measure
  • publication level of indicator data – for example, trust level
  • timeliness and granularity of the data published.

The lists and mapping exercise were, to the best of our knowledge, up to date as at November 2017. For certain indicators, where we were unable to locate publications online, we contacted the collecting or sponsoring organisation to check whether and where the data was made available. For example, through speaking to members of Public Health England’s National Cancer Registration and Analysis Service (NCRAS) team we became aware of the CancerStats website (now CancerStats2), which is accessible to any registered NHS user. Although individual indicators may have changed since then, what the mapping exercise makes clear is that this is a complex landscape.

Case study interviews

We explored front-line clinicians’ and managers’ awareness and use of national quality indicators through semi-structured interviews across the following five case study sites:

  • Imperial College Healthcare NHS Trust (breast cancer care)
  • Sheffield Teaching Hospitals NHS Foundation Trust (breast cancer care)
  • Sheffield Teaching Hospitals NHS Foundation Trust (renal care)
  • Portsmouth Child and Adolescent Mental Health Services (CYPMH care)
  • Southampton Child and Adolescent Mental Health Services, and Southampton Children’s Hospital (CYPMH care).

We selected the five case study sites through convenience sampling: initial contact was made through the Health Foundation’s existing networks (alumni, award holders and others) in each of the three clinical areas, and the final list was determined by clinician and manager availability within the services. The sample was not designed to be representative of the NHS in England as a whole, but to generate themes for further research and debate.

It is likely that some of the teams featured in the case studies have above-average awareness of national benchmarking data and use it more than most. For example, the renal care team at Sheffield Teaching Hospitals NHS Foundation Trust has a strong improvement track record, and we know that organisations with greater maturity in quality improvement governance tend to be outward-looking and to use national benchmarks for quality, safety and experience.

The fieldwork involved interviews with 52 clinicians and managers between July 2017 and March 2018. These included clinical directors, consultants, senior doctors, nurses, managers, IT specialists and administrative staff, with recruitment aiming to generate insights from across the range of professions working at each case study site. We explored six broad themes in the interviews:

  • How familiar are clinical teams with the national quality indicators?
  • How meaningful and/or relevant are the indicators?
  • Do clinical teams receive feedback from national bodies or access national data on the indicators?
  • Do they use national data on the indicators to improve services locally?
  • What additional local quality indicator data do they collect and/or use?
  • What quality indicators are missing?

Notes were taken during the interviews and case study reports were written for each site. Our analysis of these case studies sought to draw out the differences and commonalities between approaches across the five sites and three specialities.

We also spoke to a number of people working in quality measurement and quality improvement in arm’s-length bodies (ALBs), both to inform the research and to share emerging findings. This included the National Quality Board’s Measuring Quality Working Group, hosted by NHS England, and the Indicator Governance Board, hosted by NHS Digital.
