Discussion

This research was designed as a brief survey of quality indicators in three clinical areas, drawing on the perspectives of a small number of clinical teams to probe their awareness of, and attitudes towards, the indicators. We found a high volume of quality indicators for each clinical area, with multiple sources and numerous national bodies presiding over data collection. No single organisation or mechanism appeared to maintain an overview of all the indicators in use across the system, within any one clinical area, or along a patient pathway.

The proliferation of quality indicators is not unique to the NHS. In the United States, commentators have drawn attention to the complexity and cost of collecting data on care quality. Others have questioned whether what is measured reflects what is important to patients and to those paying for services.

The complexity of measurement in the NHS is partly a function of its longevity: formal collections of hospital activity data to help manage the service date back to 1982. In its 2016 review of quality in the NHS in England, the Organisation for Economic Co-operation and Development (OECD) noted that, despite the pioneering role the UK has played in quality measurement, the range, format and reporting level of quality indicators had become ‘extremely complex’. The OECD also noted that the governance of, and approach to, improving quality in England have become increasingly top-down in recent years, as the system has moved away from a more bottom-up approach that relied on professional motivation to improve.

In England, this complexity is also a result of differing policies towards managing and regulating the NHS as governments have changed over time. This process has been further complicated by successive administrative reforms to the national, regional and local bodies that oversee and manage health care providers. The most recent of these (the 2012 Health and Social Care Act) created a new arm’s length body to manage the NHS (NHS England) and a separate organisation for public health (Public Health England).

In 2016 the Health Foundation reviewed the national strategy for improving care quality in A clear road ahead: Creating a coherent quality strategy for the English NHS. The report argued that, because national leadership of the NHS had become more fragmented, there was no coherent overall approach to quality within the system, and that a shared approach to quality, underpinned by a core set of metrics, was needed.

The NHS has had a National Quality Board (NQB) since 2009. It brings together regulators and senior leaders responsible for quality and safety to improve oversight of quality and the coherence of quality-related policies. In 2016, the NQB identified the need to simplify the approach to measuring quality, and to ‘align our measurement and monitoring activities’ to reduce duplication and ‘measure what matters’. This included a programme of work to promote measurement for improvement at every level of the NHS, and an upgrade of the CancerStats website by Public Health England to make data more accessible to clinicians, managers and the wider provider community.

Since then, NHS England has published the NHS Long Term Plan. This adds a new set of objectives for improving the quality of care, both for major diseases such as cancer, cardiovascular disease and respiratory illness, and for population groups such as children and young people. The existing national framework for measuring quality may need to be simplified to avoid confusion and overload for clinical teams, regulators, commissioners, patients and the public alike. This is consistent with the findings of previous reviews, such as the report Measuring the performance of local health systems, which called for radical simplification.

If the perceptions of the clinical teams we spoke to for this report are in any way indicative of those in other teams and clinical areas, there is untapped potential for using national quality measurement for local quality improvement. Ten years ago, the Department of Health said the NHS needed to find ways of ‘harnessing the creativity, energy and appetite for improvement of our staff from the bottom up’, and set out a vision for enabling quality improvement at all levels of the NHS.

While quality improvement at a team or organisational level depends on a range of data, some of these data can be generated locally, with less rigorous requirements for consistency than apply to national datasets. Each type of dataset has its own advantages: locally generated data can be pulled together and fed back more quickly, whereas national datasets offer greater validity and opportunities for comparison.

Our findings suggest that the appetite for improvement is there, but that policymakers’ aspiration for local clinical teams to use national indicators to full effect has not been realised. However, the case studies offer evidence of some provider organisations using national datasets to benchmark themselves against their peers, and investing time and energy in extracting as much insight as they can so that front-line teams and services have high-quality data to support their improvement work.

Any approach to enabling greater use of data for local improvement will need to consider the lack of analytical capacity within NHS organisations. The Health Foundation has identified gaps in training and development within analytical teams and in the infrastructure (both human and technical) to use and disseminate the results of analysis to clinicians and managers.

A more coherent national framework might need to set out the different audiences for, and purposes of, quality measurement, and differentiate between them more clearly, with a greater focus on clinical teams. Measurement activities designed primarily for judgement or accountability will not automatically lead to improvement at a local level, and our interviewees were sensitive to what data were used for, and by whom. If clinical teams see data being collected purely for accountability or judgement, they may not see its potential value for other purposes, such as improvement.