Part I: The case for change

Learning from major safety failings

Health care is a hazardous business. It brings together sick patients, complex systems, fallible professionals and advanced technology. It is classed as a ‘safety-critical industry’, where errors or design failures can lead to the loss of life. Terrible recent care failings illustrate the reality of these hazards, and the significant challenges involved in trying to address them. As with other safety-critical industries, it is imperative that when failures do occur, lessons are learned and action is taken to prevent the same issues recurring. Whether this is happening, however, is another matter: history suggests that the same lessons have had to be learned again and again.

Take the following list of factors, identified in the 2001 Bristol Royal Infirmary Inquiry into the deaths of babies undergoing heart surgery:

  • Isolation – in organisational or geographic terms, leaving professionals behind developments elsewhere, unaware or suspicious of new ideas, with no exposure to constructive critical exchange and peer review.
  • Inadequate leadership – by managers or clinicians, characterised by a lack of vision, an inability to develop shared or common objectives, a weak or bullying management style, and a reluctance to tackle problems even in the face of extensive evidence.
  • System and process failure – where a series of organisational systems and processes were either not present or not working properly, and the checks and balances needed to prevent problems were absent.
  • Poor communication – affecting both communication in the health care organisation and between health care professionals and service users, where stakeholders knew something of the problems subsequently identified by an inquiry, but emergence of the full picture in a way that would prompt action was inhibited.
  • Disempowerment of staff and service users – where those who might have raised problems or concerns were discouraged from doing so, either because of a sense of helplessness in the face of organisational dysfunction or because the prevailing organisational culture precluded such actions.

The same factors had been cited some 30 years earlier, in the 1969 Ely Hospital Inquiry into long-stay care for older people and people with mental health problems.

The factors are systemic, cultural, contextual and human in nature, and elements of all of them were also identified in the inquiries into failings of care at Mid Staffordshire NHS Foundation Trust and, most recently, Morecambe Bay, some 46 years after the Ely Hospital Inquiry. While such factors are complex, multifaceted and difficult to eradicate, their persistence across the decades is cause for serious concern.

Viewpoint: What is the future of patient safety?

Learning from failure, by Martin Bromiley

Martin is Chair of the Clinical Human Factors Group and an airline pilot.

When you work in or observe a safety-critical industry, safety isn’t an extra. ‘Safety’ or ‘quality’ are rare words because these things are already part of the day-to-day conscious and subconscious thoughts and behaviours of everyone. The system is designed, refined, observed, questioned and challenged to make it easy to do things right.

Ironically, the successful industries accept failure as inevitable. When little problems occur with no adverse outcome, they’re seen as big problems. Failure is seen as the path to robustness and resilience. Just as an elite sports person constantly makes small changes, the ‘aggregation of small gains’ applies to industry. Sports people don’t ‘beat themselves up’ about failure; they learn from it. In the same way, clever industries don’t beat up individuals; learning and blaming are two different things.

Learning from failure should be a thoughtful and coordinated process led by people who make sure best practice becomes the norm across the whole business.

And to what end? To minimise variability of outcome – which is what any industry (or individual) in a high-risk pursuit fears. Variability, whether measured by loss of life or money, is becoming increasingly unacceptable in society. Human beings are remarkably variable in their output. Their non-linear thought processes create adaptability and remarkable ‘saves’, but heroic saves are often the result of a system found wanting. Only a system that makes it easy for humans to do things right is consistent, efficient and safe.

Why is it so difficult to improve safety?

The complex range of factors identified in major care failings also plays a central role in the success (or failure) of efforts to improve safety. In 2004, the Health Foundation launched the Safer Patients Initiative – the first major improvement programme to address patient safety in the UK. The programme ultimately helped to raise awareness of the problem of avoidable harm and provide a basis for a wider safety movement. It also began a journey that has deepened our understanding of why improvement programmes often fail to achieve the desired impact. So just why is it so difficult to improve safety? Four key reasons are: complexity, connectedness, context and counting.

Complexity: local improvement interventions cannot solve organisation-wide safety problems

The Safer Patients Initiative focused on improving the reliability of care within four clinical areas across 24 hospitals. A range of improvement interventions were used to tackle problems ranging from central line bloodstream infections to ventilator-associated pneumonia. The aims were a 30% reduction in adverse events and a 15% reduction in mortality, as well as specific goals relating to a range of process and outcome measures.

The independent evaluation of the initiative showed that all sites had improved on at least half of the 43 measures chosen. But many comparison sites that were not participating in the initiative were also improving at the same time, owing to a ‘rising tide’ effect driven in part by the many concurrent national policy initiatives. The evaluation also showed that the programme failed to become embedded into wider structures and processes, highlighting the scale of resources needed to bring about organisation-wide change.

Connectedness: deceptively simple safety problems often need system-wide solutions

The Safer Clinical Systems programme ran in two phases from 2008 to 2014. In the first phase, the Safer Clinical Systems approach was developed and tested. In the second phase, organisations were supported to try to create systems that were free from unacceptable risk in two areas – clinical handover and the management of medicines. Teams conducted systematic diagnostics on their clinical systems, collecting an array of evidence in order to make a ‘safety case’ for their service.* On the basis of this, the teams chose interventions to address the risks.

Among other lessons, the independent evaluation found that many safety problems lay beyond the ability of individual front-line teams to tackle. These problems often needed to be addressed at the organisation or system-wide level. They included inconsistent staffing, problems with information technology, and established cultures that were difficult to challenge. Although such problems aren’t new to those working in the health service, the evaluation highlighted the degree to which they are entrenched and can stand in the way of local improvement.

Context: the success of safety interventions depends as much on the context in which they are applied as on how well they are carried out

Many safety improvement efforts aim to replicate initiatives that have been successful in other contexts. The Lining Up research project evaluated efforts in England to reproduce the success of the Keystone programme in the US state of Michigan. Keystone had achieved dramatic reductions in bloodstream infections linked to central venous catheters (CVC-BSIs) in intensive care units. The English initiative, known as Matching Michigan, adopted the same interventions, which included technical components (such as the use of chlorhexidine to prepare the patient’s skin) and non-technical components (such as education on the science of safety).

The Lining Up team found that, even where the technical components of an initiative were applied well, a range of contextual factors, including the legacy of previous initiatives, influenced its success. This demonstrated that a programme transplanted from elsewhere does not always work in the same way in the new setting (see also Box 1 overleaf). The team concluded that a deep understanding of why a programme was successful, and how it must be adapted to the local needs and priorities on the ground, was essential.

Box 1: The importance of context: PROMPT (Practical Obstetric Multi-Professional Training)

Developed by Tim Draycott and colleagues at Southmead Hospital, PROMPT is a one-day multi-professional training course that uses sophisticated simulation models to address the clinical and behavioural skills required in obstetric emergencies. Since its introduction at Southmead in 2002, injuries to babies caused by a lack of oxygen have fallen by 50%, and injuries caused by babies’ shoulders becoming stuck during delivery have fallen by 70%. Work with the NHS Litigation Authority demonstrated that litigation claims at the trust have fallen by 91% since PROMPT was introduced.

The course has since been adopted by 85% of UK maternity services and by units in many other countries. However, how to reliably reproduce the success of Southmead in new contexts remains a key question. A study led by Mary Dixon-Woods is now underway to fully characterise and describe the mechanisms underlying the improvements.

Counting: assessing safety by what has happened in the past does not give a complete picture of safety now, or in future

The Lining Up research demonstrated that even the seemingly straightforward task of measuring improvements in safety was carried out so differently across sites as to make comparisons between them ‘almost meaningless’. The challenges of safety measurement were further explored in a 2013 research report by Charles Vincent, Jane Carthey and Susan Burnett, The measurement and monitoring of safety. Their report brought together the existing literature with findings from expert interviews and case studies of health care organisations already exploring innovative ways to measure safety. The researchers concluded that, despite the high volume of data collected on medical error and harm to patients, it was still not possible to know how safe care really is, and that assessing safety by what has happened in the past – such as by the number of reported incidents – does not tell us how safe care is now or will be in the near future.

Given the tendency to focus on measuring individual aspects of harm in the NHS, rather than system measures of safety, it is inevitable that an answer to the question of whether the NHS is getting safer remains ‘curiously elusive’.

The state of patient safety in the NHS

Harm caused by health care affects every health system in the world, and the NHS is no exception. Research from the UK and abroad has shown that people admitted to hospital have around a one in 10 chance of being harmed as a result of their care. About half of these episodes of harm are avoidable. We know far less about safety in settings outside of hospital, but research has suggested that around 1–2% of consultations in primary care are associated with an adverse event. The cost of harm – to patients, to those working in health care, and to productivity – is significant. However, creating a health care system that achieves zero avoidable harm is not a realistic ambition.

This is, in part, because understanding of harm – and the types of harm that are avoidable – continues to develop. Problems that were once seen as inevitable consequences of health care (such as some infections) are now seen as both preventable and unacceptable. Other problems, such as medication errors, which appeared in principle to be solvable, have turned out to be far more intractable than expected. Future developments will give greater attention to the psychological aspects of harm, and to the harm caused by failing to provide care in a timely way. However, these dimensions of safety are rarely captured through current reporting systems. Therefore, rather than zero avoidable harm, the appropriate ambition should be to continually reduce harm.

So how well is the NHS doing in continually reducing harm? Analysis of some of the measures available nationally shows that, despite some significant achievements, many gaps in knowledge remain. Even where there are measures available, the indicators paint a mixed picture (see Figures 1–13 overleaf). For example:

  • People working in the NHS are increasingly willing to report safety incidents (Figure 1). However, just 0.3% of all reported incidents are in primary care, despite 90% of all patient contact taking place there. This suggests significant underreporting of harm in primary care, even taking into account the generally less risky interventions involved.
  • There has been great progress in reducing rates of health care associated infections (HCAIs) such as methicillin-resistant Staphylococcus aureus (MRSA) and Clostridium difficile (Figures 4 and 5). However, rates of methicillin-sensitive Staphylococcus aureus (MSSA) and E. coli – which have drawn less political attention – have actually risen (Figures 6 and 7).
  • People working in hospitals are more confident that action will be taken following an incident (Figure 8). However, more of them say their organisation has a blame culture (Figure 9).
  • More people feel they would be safe if they were treated in hospital (Figure 11). But around 40% of patients feel there aren’t always enough members of staff on duty to care for them (Figure 12).
  • In 2014, the Commonwealth Fund ranked the UK as first for safe care out of 11 developed nations. However, in 2014/15, the Care Quality Commission rated 61% of hospital trusts as ‘requires improvement’ and 13% as ‘inadequate’ for safety.

It is clear that things must continue to change. There needs to be recognition of the successes of the past, but also of the limitations of current approaches to improving safety – and the role that policymakers and national bodies can play in fostering improvement. These issues, together with examples of the practical experiences of front-line teams, are explored in parts II and III.

Figures 1–13: NHS performance on a range of patient safety indicators over time, across England, England and Scotland, or England and Wales


* Safety cases are widely used in other safety-critical industries. They involve compiling a structured argument, supported by a body of evidence, to make the case that a system is acceptably safe for a given application in a given context. For more information about safety cases, and their use in health care and other industries, see www.health.org.uk/safetycasesreport

In our recent report, Indicators of quality of care in general practices in England, we recommend that improving data and indicators in primary care should be a priority for the NHS in England.

For more information see our briefing, Is the NHS getting safer?
