Key points

  • This report paints a complex picture of quality measurement in three clinical areas of the NHS. It highlights how multiple sources and numerous national bodies preside over data collection, and explores whether there is a case for rationalisation and simplification to ensure that data are being used most effectively to bring benefits to patients and clinical teams.
  • During 2017, we mapped the quality indicators for three clinical areas: breast cancer care, children and young people's mental health (CYPMH) care and renal care. We identified a large number of national quality indicators, sitting across multiple sources (such as audits or patient surveys), for each of the three clinical areas. For breast cancer care there were 68 indicators and 23 sources. For CYPMH care there were 56 indicators and 15 sources. For renal care there were 47 indicators and 16 sources.
  • We found multiple organisations using or collecting indicator datasets: 11 for breast cancer care, six for CYPMH care and 10 for renal care. It was difficult to establish which national bodies were looking at which data, and for what purpose. Much of the data were difficult to locate online.
  • Indicators are published at a number of different levels, ranging from all-England to trust and ward level. Data were published at trust, ward or team level for considerably fewer indicators than the national total. For breast cancer care, 32 out of 68 indicators were published at trust level, and 10 at ward or team level. For CYPMH care, 13 out of 56 indicators were published at trust level, and one at ward or team level. For renal care, 24 out of 47 indicators were published at trust level, and 10 at ward or team level.
  • Between July 2017 and March 2018, we conducted interviews with 52 clinicians and managers delivering services across the three clinical areas, working in five separate teams or units based within hospital trusts. Awareness of national quality indicators was high among clinicians and managers working in breast cancer care and renal care, and most indicators were considered relevant, meaningful or both. However, awareness was lower among those working in CYPMH care, and there was less agreement on how relevant or meaningful the indicators were.
  • Waiting time targets were the most familiar quality indicators across all three clinical areas. These targets dominated conversations within clinical teams working in breast cancer care, and increasingly within CYPMH care teams. Interviewees across all five case study sites expressed a desire for more emphasis on, and greater measurement of, patients’ experience of health care services.
  • Our interviews suggest that little insight or use is being derived locally from national quality indicators, with the exception of the annual Multisite Dialysis Access Audit in renal care. Apart from this, there was a substantial gap between the national data that were available and the data that clinical teams actually used. Many clinical teams reported generating local data to improve their services, and were interested in seeing more national data on the quality of the service they provide to patients.
  • Given the high volume of national indicators, spread across multiple sources, and the number of national bodies sponsoring and using the data, there is a case for reviewing national quality measurement with a view to streamlining and simplifying it. It is currently unclear whether any mechanism or organisation maintains an overview of all the indicators in use across the system, or across the whole patient pathway for a clinical area.
  • If the perceptions of the clinical teams we spoke to for this report are in any way indicative of those of other teams and clinical areas, then there is untapped potential for using national quality measurement for local quality improvement. Our findings suggest that the appetite for improvement exists, but that policymakers’ aspiration for local clinical teams to use national indicators to full effect has yet to come to fruition.
  • A more coherent national framework might need to articulate the different audiences for, and purposes of, quality measurement, and differentiate more clearly between them, with a greater focus on clinical teams.