Perspectives from clinical teams

Familiarity with quality indicators

In the fieldwork, we explored the familiarity of clinicians and managers with the national quality indicators. We did this first unprompted, to see what spontaneous knowledge people had of any indicators, and then with the lists as a guide. There was, unsurprisingly, considerable variation between teams (and between individuals within teams) in relation to which indicators people recalled spontaneously, and their levels of awareness when presented with the lists.

Spontaneous recall

For team members in the renal care unit, the UK Renal Registry was clearly well known as a source of quality indicators. Some interviewees were aware of the registry itself as a potential source of indicators, while others pointed to the annual Multisite Dialysis Access Audit (run by the registry). About half of the interviewees could name specific indicators derived from the registry which are used by NHS England for the renal indicator dashboards, including rates of infection and access to transplant lists for patients. National waiting times indicators (the 18-week referral to consultant-led treatment and the six-week diagnostic test target, which are applicable to most specialties) were mentioned much less frequently.

By contrast, for breast cancer care and children and young people's mental health (CYPMH) care, waiting time targets were most likely to be mentioned spontaneously. Some interviewees knew about specific indicators based on NICE guidelines and quality statements, or those that were linked to professional associations (for example, the Association of Breast Surgery). These were generally mentioned by the members of staff most directly involved with specialised aspects of care, such as surgery. Members of the CYPMH care teams were broadly aware of waiting times, but also outcome measures (which are not routinely collected at a national level) and the Friends and Family Test.


Awareness when presented with the lists

Once lists were presented as a reference point, we found high levels of familiarity with and awareness of many of the indicators among the breast cancer care and renal care teams. There was, however, notably less awareness among the CYPMH care teams. National quality measurement in CYPMH care is relatively new: the first Mental Health Five Year Forward View Dashboard report was published in January 2017, and the children and young people’s eating disorders waiting time standard came into effect in April 2017.

For all three clinical areas, familiarity was stronger with indicators that were trust key performance indicators (KPIs), such as waiting times and serious safety incidents. Waiting time targets were particularly dominant within breast cancer care, and appeared increasingly to be so within CYPMH care.

Familiarity was less strong for indicators not immediately relating to a clinician’s own practice. For example, there was less familiarity among breast surgeons with chemotherapy or radiotherapy measures; mental health practitioners or clinicians working in the community had less knowledge of indicators relating to hospital care; and haemodialysis nurses were less aware of peritoneal dialysis measures.

Clinicians and managers associated some of the indicators with the national organisations using them; for example, those used in CQC inspections or NICE quality standards and those set by professional associations or used as part of clinical audits. For the remainder, there was a general lack of understanding around which organisations set the indicators and collected the data, or what they were used for.

How meaningful are the quality indicators to clinical teams?

Waiting time targets dominated conversations within breast cancer care teams. The targets were generally accepted as being meaningful and a relevant measure of quality, given the impact of prompt treatment on both patient outcomes and experience. In the words of one clinician, the measures ‘keep the focus on not allowing patients to drift’.

Waiting time indicators were also seen as having a performance management dimension, bringing scrutiny from managers within the trust (although this was not always welcome). According to an interviewee in one of the teams, the volume of patients in the breast cancer service meant that ‘we carry the rest of the trust’, in terms of overall trust performance.

Despite being a relatively new development, waiting time targets also seemed to be growing in importance within the CYPMH care teams we spoke with. Prompt treatment was seen as important, especially for higher risk patients – as one interviewee put it, ‘without waiting times, people will wait forever.’

There were some frustrations. People spoke about commissioners using waiting times as the main way of measuring performance. They also said that action to improve waiting times often shifted capacity to one point in the pathway (such as early assessment) at the expense of other parts (such as capacity to provide treatment for those who then need it). Waiting time targets were seen as less relevant in the context of renal care, which is consistent with the smaller proportion of waiting time indicators on the list of national indicators.

Indicators that could be linked with clinical outcomes, patient experience, trust KPIs and CQC inspections were widely accepted as meaningful. There was some variation between the three clinical areas. For example, the high rate of day cases for breast cancer surgery reduced the relevance of inpatient safety measures, and infection rates were a high priority for renal care given the risks associated with dialysis.

Indicators relating to protocols or guidelines were not identified by clinicians and managers as meaningful in themselves, but they did appear to be well established within normal clinical practice. For instance, NICE guidelines were seen as important within CYPMH care because ‘they set a benchmark’. There was, however, occasional criticism. For example, the NICE quality standard for children and young people with suspected depression requires the diagnosis to be confirmed and recorded in their medical records, but mental health practitioners were wary of labelling children and young people or pushing them down a particular pathway prematurely.

The Friends and Family Test was frequently cited. While many interviewees acknowledged that it is a fairly limited tool, it was still seen as a valuable resource, without which there would be a considerable gap in measuring patient experience. Positive patient feedback, from the Friends and Family Test and other patient experience indicators, was thought to have significant value in terms of improving staff morale.

Feedback from national bodies and use of data locally

The clinicians and managers we spoke to across all five case studies reported seeing very limited national data fed back from the full range of bodies that collect data and use indicators. There were some important exceptions, including the annual Multisite Dialysis Access Audit produced by the Renal Registry (as one interviewee said: ‘we pore over [the report] and benchmark ourselves against colleagues’), and the National Cancer Patient Experience Survey (which provides data broken down at trust and tumour level).

Generally, where interviewees had experience of data feedback, many felt it was not quick enough: ‘it is painfully slow, which makes it irrelevant.’ While many concluded that there was an absence of proactive feedback on the part of national bodies, one interviewee wondered whether local services could also be more active: ‘I don’t think it is a closed circle... perhaps we don’t ask?’ Related to this was a lack of awareness about where and how national data could be obtained by teams interested in quality improvement.

The renal unit case study was unusual compared with the other sites. Staff described how the annual audit was closely scrutinised, even though the data were a year out of date. Senior team members would then investigate and take action where the unit was seen to be an outlier, for example by identifying and rectifying poor performance on MSSA (meticillin-sensitive Staphylococcus aureus) infection rates. Results from the audit had also prompted visits to other renal units for the purposes of peer learning.

In the other sites, we did not find routine mechanisms in place for teams to use national data to improve services locally. The exceptions were some inpatient safety measures that are also trust-wide KPIs, and the waiting time targets in breast cancer care and CYPMH care. Evidence of feedback was particularly limited for staff working in CYPMH care (one described it as ‘zero’), and we heard several comments about the burden of data collection: ‘All the [data] we collect feels like a huge task and nothing happens to it… we’re told we have to collect it, but [we’re] not aware that it goes anywhere else.’

There was an appetite among many of the teams to have better access to national quality data. Ideally this would not be raw data, and teams would need access to analytical capacity to make full use of the data. A number of clinicians and managers, particularly those who had been involved in academic research, talked about the limitations of national datasets – for example, around the robustness and completeness of data collection and the importance of risk adjustment methods to get an accurate picture of local performance.

More data was seen as helpful, particularly when accompanied by recurrent audits, opportunities for peer review through benchmarking, and the chance to scrutinise comparative data on particular quality issues during team meetings.

The interviews also flagged up the important role that regional and intermediate bodies play in providing quality-related data to trusts. There was no mention of the National Cancer Registration and Analysis Service (NCRAS) provided by Public Health England, or the CancerStats website (which provides comparative data on a wide range of breast cancer care and services). However, interviewees did report receiving data via the local Cancer Alliance (or Cancer Vanguard) in London, and the regional Cancer Research Network.

Regional clinical networks play a similar role within renal care and CYPMH care. External accreditation schemes for chemotherapy and radiotherapy were also mentioned (for example, CHKS standards for oncology, which use over 1,400 quality standards to benchmark services). For mental health, one of the case study sites had taken part in the national child and adolescent mental health services (CAMHS) benchmarking report coordinated by the NHS Benchmarking Network, as well as the Quality Network for Community CAMHS, run by the Royal College of Psychiatrists. These bodies fill gaps in analytical capability within trusts and actively disseminate and share comparative data, making this data more accessible than the data that is published nationally.

Local quality indicators

We asked the interviewees at the case study sites to tell us about any additional quality indicators that they measured and used locally – whether initiated by the trust, the clinical commissioning group (CCG) or the individual team – that were not stipulated at a national level. We did not seek details on how well embedded these local measurement activities were within the service, or how frequently or routinely any data was collected.

Within breast cancer care, the local indicators we found included those around surgery (complication rates, wound infections), chemotherapy and radiotherapy standards, patient complaints (from the trusts’ patient advice and liaison service) and compliments (from the NHS Choices website). Many interviewees also described generating their own data through audits and small research projects. These could be useful, but were not generally sustained, as one interviewee explained: ‘the audits we do are very small... if we had a system where we could look at, for example, post-operative wound infections in real time, that would be helpful to quality.’

For CYPMH care, local indicators included various patient outcome measurement tools (such as the Child Outcome Rating Scale), out-of-area placements for CYPMH care, patient complaints, and the number of children and young people with mental health needs presenting within services for physical health.

For renal care, local indicators included the number of patients regularly not attending for haemodialysis treatment, peritoneal dialysis efficacy (catheter and membrane testing) and data from local audits on antibiotics prescribing, as well as uptake of and access to NICE-approved medicines and technologies.

Missing quality indicators

We asked clinicians and managers to tell us about aspects of their service, patient care and quality that were meaningful and of interest to them, but were not currently being measured.

In breast cancer care, teams said that indicators would be helpful in a number of areas. In relation to breast reconstruction surgery, for example, measures could include rates of infection, failure and success, and patient satisfaction with cosmetic outcomes. Clinicians were interested to know more about the patient experience, including that of younger patients, and to get a better understanding of quality of life after cancer treatment, as well as local recurrence rates. Other missing indicators included re-excision rates for breast cancer surgery and access to other services such as clinical psychology, lymphoedema services and fertility preservation. Some interviewees mentioned how important it was for weekly multidisciplinary team meetings to operate effectively, although they acknowledged it might be difficult to create an indicator for this.

CYPMH care teams identified fewer missing indicators. Again, clinicians and managers were interested to receive more in-depth feedback on patient experience. There was also an interest in measuring staff workload, wellbeing, recruitment and retention, given the increasing pressures being put on the service. Other missing measures included the prevalence of particular CYPMH care problems, such as eating disorders.

We also found fewer missing indicators in renal care. Staff thought it would be helpful to have indicators that allowed them to track patient access, activation and choice in relation to dialysis, so that they could identify patients who might need more help managing their disease. They also wanted to understand how effective the service was at delaying the progression of kidney disease, and there was an interest in using patient-reported outcome measures (PROMs) in renal care.
