Summary of the research

Past harm – Has patient care been safe in the past?

There are relatively few ways in which care can go right and many more in which it can go wrong. Organisations therefore need to understand the different types and causes of patient harm, which include:

  • delayed or inadequate diagnosis (eg misdiagnosis of cancer or a patient not seeking an appointment after noticing rectal bleeding)
  • failure to provide appropriate treatment (eg not giving rapid thrombolytic treatment for stroke or omitting prophylactic antibiotics before surgery)
  • harm arising from treatment (eg surgical complications or the adverse effects of chemotherapy)
  • over-treatment (eg drug overdose or painful treatments of no benefit to the dying)
  • general harm (eg delirium or dehydration)
  • psychological harm (eg depression following mastectomy).

The multiple types of harm mean that no single measure is sufficient. A range of measures might include mortality statistics, systematic record review, selective case note review, reporting systems and existing data sources. Taken together, these give units and organisations the best chance of understanding harm, but the strengths and limitations of each must be understood.

The measurement of past harm will always be a cornerstone of understanding safety. Measures need to be specific and tracked over time to help assess whether care in a particular area, and overall, is becoming safer. They also need to be valid and reliable, selected from a broad range of approaches according to their suitability to the care setting.

Reliability – Are our clinical systems and processes reliable?

Reliability in other industries can be thought of as the probability of a system functioning correctly over time. But in health care it can be difficult to define exactly what ‘functioning correctly’ means. It is possible to do this in those areas where protocols have been developed to standardise treatments – for instance, in the management of acute asthma in emergency departments or the management of diabetes in primary care. However, there will always be occasions where guidance either cannot or should not be followed.

The measurement of reliability in health care should focus on areas where there is a higher degree of agreement and standardisation. This is typically achieved through clinical audit measures, set either locally (eg percentage of patients with two complete sets of vital signs in a 24-hour period) or nationally (eg percentage of all inpatient admissions screened for MRSA).

Although clinical audits have value, they tend to focus on specific points in care processes. Many national initiatives have introduced ‘care bundles’, which bring together previously separate care processes to reduce the chance of aspects of care being missed. A more holistic, though more challenging, approach requires understanding reliability across an entire clinical system and exploring the factors that contribute to poor reliability, such as staff accepting poor reliability as normal or a lack of feedback mechanisms.

Sensitivity to operations – Is care safe today?

Safety needs to be managed on a day-to-day or even minute-by-minute basis, whether it be the clinician monitoring a patient, or the manager monitoring the impact of staffing and resource levels. It involves a state of heightened awareness that enables information to be triangulated in real time, and action to be taken to tackle identified problems before they threaten patient safety.

Formal and informal mechanisms that organisations can use to support this ‘sensitivity to operations’ in health care might include:

  • safety walk-rounds, which enable operational staff to discuss safety issues with senior managers directly
  • forums, such as operational meetings, handovers and patient/carer meetings, to act as sources of intelligence on the safety of services
  • day-to-day conversations between teams and managers
  • patient safety officers actively seeking out, identifying and resolving patient safety issues in their clinical units
  • briefings and debriefings, such as at the end of a theatre list, to reflect on learning
  • patient interviews, letting patients tell their story to identify any threats to safety.

Some of these mechanisms are externally mandated, such as the staff and patient surveys and the regional development of Quality Surveillance Groups.

Anticipation and preparedness – Will care be safe in the future?

The ability to identify future hazards and potential problems in clinical services is an essential part of delivering safe care. This is best achieved by encouraging questioning, and creating opportunities for individuals and teams to discuss scenarios, so that teams become resilient in the face of unexpected events. The research tells us that this is an area where other safety-critical industries are more developed than the NHS.

Documents such as risk registers are commonplace in the NHS, where local risks are identified and graded. However, their ability to help to anticipate whether care will be safe in the future is open to question. This may be more effectively achieved in other ways, such as the following:

  • Toolkits for identifying and monitoring risks, for example those developed in the Health Foundation’s Safer Clinical Systems programme.‡‡
  • Safety cases, which offer a means by which organisations can use a range of evidence to demonstrate that a system is acceptably safe.§§
  • Measures of safety culture and climate, which have been shown to correlate with patient outcomes and staff injuries.
  • Staff indicators of safety, such as sickness absence rates and staffing levels, can help to forecast an organisation’s ability to safely provide care in the future.

Integration and learning – Are we responding and improving?

There are many different sources of safety information available to organisations, but these must be integrated and weighted if risks and hazards are to be understood, prioritised and acted on effectively. This must also be done in different ways at different levels. For example, the level of detail and specificity required by a unit differs from the summarised, high-level information that a board needs for an overview of safety across all of an organisation’s services.

A disproportionate amount of effort tends to be spent on data collection, whereas an effective incident reporting system encompasses information, analysis, learning, feedback and action. Incident analysis should go further than explaining the nature of the event, helping to identify wider problems in the system.

Feedback, action and improvement are vital to making systems safer in the future. There are many different types of local feedback mechanisms in use, ranging from individual discussions to safety newsletters and web-based feedback. The challenge at the higher levels of organisations is to integrate the information available to draw wider lessons and to spread learning right across the organisation where appropriate, without losing the granularity that makes information real for individuals.

Examples of how to do this include producing regular organisation-wide learning reports, hosting learning seminars, or tracking performance across a number of safety themes on a regular (eg quarterly) basis. Developments in the visual representation of data and other technological solutions can support this in the future.


‡‡ www.health.org.uk/areas-of-work/programmes/safer-clinical-systems

§§ www.health.org.uk/publications/using-safety-cases-in-industry-and-healthcare
