Introduction

The NHS is recognised internationally as having led the way in policies to improve the quality of care over recent decades. These policies have resulted in an extensive and complex infrastructure of organisations and initiatives involved in measuring care quality in England, at both national and local level.

Quality measurement has many different audiences, including parliament, government, NHS managers and staff, regulators, patients and the public. This is because it has many different functions, such as performance management, public accountability, providing patients with information to support choice, informing research, and supporting quality improvement.1 Which quality indicator is appropriate depends on what it will be used for: measures of accountability, for example, need to be more robust and comparable than measures that are used to improve quality at a local level, by a clinical team or within a hospital. Gauging the quality of care is complex: all datasets have their limitations, and most indicators are pointers rather than absolute markers of performance.

The volume of data requested by national bodies to monitor the quality of NHS care in England has grown rapidly in recent decades. At the same time, the number of organisations that define indicators of care quality has increased, and many of them also collect, process and publish data relating to these indicators nationally.

Measurement and data are essential to local quality improvement, and data can often be generated by local teams to suit their own purposes. Local quality improvement activities are therefore not solely dependent on national quality data. Nevertheless, policymakers have been attracted to the idea of a self-improving health service underpinned by nationally published quality indicators. This was demonstrated when the former secretary of state for health, Jeremy Hunt, championed the idea of ‘intelligent transparency’ driving change in 2015:

‘Self-directed improvement is the most powerful force unleashed by intelligent transparency: if you help people understand how they are doing against their peers and where they need to improve, in most cases that is exactly what they do. A combination of natural competitiveness and desire to do the best for patients mean rapid change – without a target in sight.’

There have been several attempts to systematically draw together some of these national measures of quality across the NHS, as a mechanism for accountability and to drive improvement. These include the NHS Performance Assessment Framework in 1999, and the national outcomes frameworks from 2012. Both of these initiatives combined quality indicators from a range of sources. However, they were largely designed to assist with performance management at national, regional and commissioner levels, rather than to be used by front-line teams.

In 2009, the searchable database Indicators for quality improvement was launched by NHS Digital. This was explicitly aimed at making national measures available to front-line teams for quality improvement, but has now been archived. There have since been several national reviews into how performance measurement of local health systems could be improved, including the Department of Health-commissioned Measuring the performance of local health systems.

In early 2019, the NHS Long Term Plan set out new objectives to improve the quality of care across a wide range of clinical areas, with a pledge to support clinicians to lead these improvements. At the same time, NHS England has begun a review of some high-profile national measures of quality (in particular waiting times), and has started to pilot new measures.

This research places these developments in context by offering a snapshot of the complexity of national quality measurement and its perceived usefulness to clinical teams as a tool for improvement. The aims of this study were to explore:

  • how many quality measures exist
  • who collects the relevant data, and how frequently these data are published
  • whether front-line staff are aware of these national indicators
  • whether front-line staff use the indicators to improve care for their patients.

This report explains our methodology for both the desk research and the interviews, before detailing what we found. First, the indicators across the three clinical areas are mapped in relation to timeliness, publication level and comparability. We then draw on the interview findings to show how familiar, meaningful and useful the clinical teams found the indicators at a local level. We also examine the types of additional local indicators that exist, and the indicators the teams felt were lacking. The discussion places these findings in the context of quality indicator development across the NHS in England.
