What we found

Volume of indicators and organisations involved

Table 1 demonstrates the complexity of how quality is measured at a national level across the three clinical areas.

Table 1: Number of indicators for each clinical area, detailed alongside organisational involvement and the number of sources

| Clinical area | No. of indicators | No. of organisations involved | Highest no. of organisations using or collecting data on any one indicator | No. of sources |
|---|---|---|---|---|
| Breast cancer care | 68 | 11 | 8 | 23 |
| CYPMH care | 56 | 6 | 5 | 15 |
| Renal care | 47 | 10 | 5 | 16 |

In all three clinical areas, there was a high number of national indicators relevant to the services the teams delivered. Many of the indicators included in the clinical area lists were not specific to the clinical area itself; for example, many of the breast cancer care indicators related to cancer or inpatient care more generally. The volume of indicators surprised both the clinicians and managers we spoke to during the fieldwork and the people working in the arm's length bodies (ALBs).

Trusts were required to report data against 47 of the 68 breast cancer care indicators, 43 of the 56 CYPMH care indicators and 34 of the 47 renal care indicators. The remainder of the indicators were either not mandated for reporting (such as National Institute for Health and Care Excellence (NICE) quality standards and statements, which take the form of guidance rather than requirements) or they involved data that was collected outside of trusts (such as patient surveys run by the Care Quality Commission (CQC)).

The indicators were included in multiple sources, such as regulatory frameworks, official datasets, registries, audits, dashboards and patient surveys. During the course of the mapping exercise, a small number of the indicators moved, changed or were withdrawn from their original source.

To add further complexity, the indicators were sponsored, owned and used by multiple organisations, making it difficult to establish who was looking at which data and for what purpose. Table 1 shows that some indicators were under particular scrutiny: in breast cancer care the highest number of organisations collecting data on or using any one indicator was eight, whereas for both CYPMH care and renal care the highest number was five.
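For readers who want to reproduce this kind of count, the sketch below shows one way the Table 1 columns could be derived from a simple mapping of indicators to the organisations that collect data on or use them. The indicator names and organisation sets are invented placeholders, not entries from our mapping dataset.

```python
# Illustrative sketch only: the indicators and organisation sets below are
# invented placeholders, not the actual mapping data.
indicator_orgs = {
    "Indicator A": {"NHS England", "NHS Improvement", "Care Quality Commission"},
    "Indicator B": {"Public Health England", "NHS Digital"},
    "Indicator C": {"NHS England", "Quality Health"},
}

# Table 1 columns, derived from the mapping
n_indicators = len(indicator_orgs)                                  # 'No. of indicators'
n_organisations = len(set().union(*indicator_orgs.values()))        # 'No. of organisations involved'
max_scrutiny = max(len(orgs) for orgs in indicator_orgs.values())   # 'Highest no. of organisations...'

print(n_indicators, n_organisations, max_scrutiny)
```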

Type of indicator

Different types of indicators can be used to measure quality in health care. We used the following five types for this analysis:

  • Outcome: a measure of the health status of a patient or the impact of the health care service (for example, rates of readmission, mortality or survival).
  • Process: a measure of adherence to standards or procedures of a non-clinical nature (for example, waiting times).
  • Clinical process: a measure of adherence to standards or procedures of a clinical nature (for example, regular blood pressure monitoring for hypertension, statin prescribing or recording of hormone status).
  • Patient reported experience measures (PREMs): a measure of people’s experience of health care services, as reported by patients (for example, the NHS Friends and Family Test).
  • Patient reported outcome measures (PROMs): a measure of the health status of patients as reported by patients (for example, pain levels or quality of life before and after surgery).

Figure 1 shows that the majority of indicators found for breast cancer and renal care were outcome measures, whereas for CYPMH care the majority were process measures. All three clinical areas had similar proportions of clinical process measures and relatively few PREMs; only CYPMH care had any PROMs (NHS Improvement's Mental Health Safety Thermometer).

Figure 1: Proportion of each type of quality indicator for each of the three clinical areas

Source: Health Foundation analysis
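For illustration, the proportions plotted in Figure 1 amount to a simple tally of indicator types. The sketch below shows one possible calculation; the type list is a placeholder rather than our actual data.

```python
from collections import Counter

# Illustrative sketch only: placeholder type labels, not the published counts.
indicator_types = ["Outcome", "Outcome", "Process", "Clinical process", "PREM"]

counts = Counter(indicator_types)
total = sum(counts.values())
proportions = {t: round(n / total, 2) for t, n in counts.items()}
print(proportions)  # e.g. {'Outcome': 0.4, 'Process': 0.2, ...}
```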

Box 1: A deeper dive into the breast cancer care indicators

The table below illustrates further complexity behind the breast cancer care indicators, showing how the indicator types are split by the organisation collecting data on or using them. This complexity is not unique to breast cancer care: similar patterns were evident for CYPMH and renal care.

NHS England was involved in the data collection for, or use of, the majority of process measures and all PREMs, while Public Health England was involved in the majority of clinical process measures. Involvement in outcome measures was concentrated among Public Health England, NHS Improvement and the Department of Health.

Table 2: The number of each indicator type, detailed alongside the organisation collecting data on or using the breast cancer care indicator

| Organisation collecting data on or using the indicator | Process | Clinical process | Outcome | PREM | Total |
|---|---|---|---|---|---|
| Public Health England | 5 | 7 | 13 | 4 | 29 |
| NHS England | 11 | 1 | 7 | 6 | 25 |
| NHS Improvement | 4 | 1 | 11 | 2 | 18 |
| Department of Health | 3 | 0 | 10 | 2 | 15 |
| Health Quality Improvement Partnership | 4 | 2 | 7 | 0 | 13 |
| Association of Breast Surgery | 4 | 2 | 7 | 0 | 13 |
| Royal College of Surgeons | 4 | 2 | 7 | 0 | 13 |
| Care Quality Commission (CQC) | 5 | 1 | 5 | 1 | 12 |
| NHS Digital | 4 | 0 | 6 | 1 | 11 |
| Clinical commissioning groups (CCGs) | 2 | 0 | 5 | 2 | 9 |
| Sustainability and transformation partnerships | 2 | 0 | 1 | 4 | 7 |
| Quality Health | 1 | 0 | 0 | 4 | 5 |
| Not mandated or routinely reported | 1 | 2 | 0 | 0 | 3 |
| Total | 17 | 12 | 33 | 6 | 68 |

Note: Summing the numbers for each organisation and indicator type gives more than the totals shown, because in many cases more than one organisation collected data on or used the same indicator.

Publication level

We investigated whether data was published for each indicator. We defined ‘published’ to mean both data that is in the public domain and data that is available only to NHS users. We then broke this down by the level of the system at which the data is available: national, clinical commissioning group (CCG), trust, and ward or team.
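To make this breakdown concrete, the sketch below shows one way publication levels could be recorded and counted, assuming each indicator is tagged with the (possibly empty) set of levels at which its data is published. All entries are invented examples.

```python
# Illustrative sketch only: invented entries. An indicator may be published at
# several levels (or none), so the per-level counts can exceed the number of
# indicators -- the same effect noted under Figure 2.
LEVELS = ["National", "CCG", "Trust", "Ward or team"]

publication_levels = {
    "Indicator A": {"National", "Trust"},
    "Indicator B": {"National", "CCG", "Trust", "Ward or team"},
    "Indicator C": set(),  # no data published at any level
}

per_level = {level: sum(level in levels for levels in publication_levels.values())
             for level in LEVELS}
no_data_published = sum(1 for levels in publication_levels.values() if not levels)

print(per_level, no_data_published)
```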

As shown in Figure 2, while the number of national indicators we found for all three clinical areas was high, not all were published, and there were considerably fewer indicators for which data was published at trust and ward or team levels.

Figure 2: Publication level of the indicators across the three clinical areas

Note: The numbers for each level of publication sum to more than the total number of indicators because many of the indicators are published at more than one level of the system.

Source: Health Foundation analysis

The ‘No data published’ category in Figure 2 includes the NICE quality standards and statements, which are not mandated for collection. In addition to these, we could not find any data published at any level for 17 of the breast cancer care indicators, eight of the CYPMH care indicators, and eight of the renal care indicators.

Timeliness and comparability of the published data

We looked at the frequency of publication, the data time lag and the availability of benchmarked and time series data.

Trust level

Most trust-level indicator data was published quarterly or more frequently: 85% for CYPMH care and 75% for both breast cancer care and renal care. The majority of trust-level indicator data was published with a time lag of three months or less: 75% for breast cancer care, 69% for CYPMH care and 67% for renal care. Time series data was available for the majority (78%) of trust-level breast cancer care indicators, but for considerably fewer of the CYPMH care and renal care indicators (46% and 42% respectively). All of the CYPMH care indicators and all but one of the breast cancer care indicators at trust level provided benchmarked data; however, the proportion was much lower (75%) for the renal care indicators.
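For illustration only, the sketch below shows how percentages like these could be calculated from a simple record of each indicator's publication frequency and time lag. The field names and records are assumptions for the example, not our underlying dataset.

```python
# Illustrative sketch only: the records and field names are assumptions for
# the example. Frequencies are publications per year; lags are in months.
trust_level_indicators = [
    {"frequency_per_year": 12, "lag_months": 1},
    {"frequency_per_year": 4,  "lag_months": 3},
    {"frequency_per_year": 1,  "lag_months": 14},
]

n = len(trust_level_indicators)
quarterly_or_more = sum(r["frequency_per_year"] >= 4 for r in trust_level_indicators) / n
lag_three_months_or_less = sum(r["lag_months"] <= 3 for r in trust_level_indicators) / n

print(f"{quarterly_or_more:.0%} published quarterly or more frequently")
print(f"{lag_three_months_or_less:.0%} published with a lag of three months or less")
```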

Ward or team level

At a ward or team level, the proportion of indicator data for breast cancer care and renal care that was published at a frequency of quarterly or higher fell to 50% and 60% respectively; the remainder was published annually. Half of both the breast cancer care and renal care indicators were published with a time lag of three months or less, while the other half had a time lag of more than 12 months. The majority of the breast cancer care indicators had time series (80%) and benchmarked (90%) data available. No time series data could be located for the renal care indicators and benchmarked data could only be found for half of them. The only CYPMH indicator published at a ward or team level was published monthly with a time lag of one to three months. There was benchmarked data available for this indicator, but no time series.

Even where the indicator data was released in a timely and comparable form, the interviews suggested that its usefulness to trusts, wards or teams was limited because the data was hard to locate online, with multiple spreadsheets to choose from and large Excel workbooks to download and navigate.


* We acknowledge that there are valid and unavoidable reasons why data cannot be made available for all the indicators at trust, or ward or team, level.
