3. Challenges for applying automation and AI in health care

 

There is clearly great potential for the use of automation and AI in health care. But there are also some important constraints on where and how these technologies can be applied, along with a range of design and implementation challenges if they are to be successfully deployed. It is perhaps understandable that the promise of automation and AI typically excites the most attention. But given that the introduction of technologies into health care settings necessarily creates new risks and potential points of failure, we believe the challenges require just as much focus if we are to get automation right.

In this chapter, we explore some of these challenges and constraints by examining different characteristics of tasks in health care. One set of challenges emerges from tasks requiring uniquely human traits or human presence. Another stems from the complexity of tasks and work environments in health care. We also explore the challenges of implementing and using automation technologies effectively in practice. While some of these issues might pose absolute constraints on the use of automation and AI in health care, others can be addressed through effective design and implementation.

We do not consider here challenges that relate specifically to the technologies themselves (such as interoperability with other systems) or the data they rely on (such as data protection), which are beyond the scope of this report – though Box 9 briefly highlights some important challenges related to machine learning. Rather, our focus is on the application and use of automation and AI in health care.

Box 9: Some data and algorithmic challenges for automation and AI in health care

There are a range of risks and challenges relating to the data and algorithms used in automation and AI in health care, which have attracted significant attention in recent years.

Bias

Automation and AI systems require design, programming and training in order to function and are therefore susceptible to the biases of the datasets used for training, as well as biases within the environment in which the system is built. Many of the data sources typically used by these technologies present possible issues of bias – including patient self-selection, inconsistent availability of outcome data, incomplete data and poor representation of certain populations – which can all result in inadvertent bias in machine predictions. In addition, these technologies will be shaped by the biases of the teams that research, design and develop them. These issues in turn create the risk of biased outcomes that disadvantage particular populations, raising important questions about the fairness and accuracy of decisions made by automation and AI systems.
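To make the representation problem concrete, the sketch below trains a single model on a synthetic population in which one group is heavily under-represented, then audits performance by group. Everything here is illustrative – invented data, hypothetical groups and a deliberately simple model – but it shows how a headline metric can mask worse performance for a minority group, and why subgroup-level auditing is a sensible check.

```python
# Illustrative only: synthetic data, hypothetical groups, simple model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
# 90% of records come from group A, 10% from group B (under-representation).
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
x = rng.normal(size=(n, 5))
# Suppose the outcome depends on the features differently for group B, so a
# single pooled model ends up fitting the majority group far better.
coef = np.where(group[:, None] == "A", 1.0, -1.0) * np.array([1.5, 1.0, 0.5, 0.0, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-(x * coef).sum(axis=1)))).astype(int)

x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, group, random_state=0)
pred = LogisticRegression().fit(x_tr, y_tr).predict(x_te)

print("overall accuracy:", round(accuracy_score(y_te, pred), 3))
for g in ["A", "B"]:
    mask = g_te == g  # audit each group separately
    print(f"group {g} accuracy:", round(accuracy_score(y_te[mask], pred[mask]), 3))
```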

Transparency

Automation and AI systems challenge conventional conceptions of moral responsibility because of their ability to influence and make decisions and potentially act with a degree of autonomy. While it may not be possible to attribute moral responsibility to a technological system, many agree that the need for clear accountability requires the ability to explain and justify the actions of a system. Such systems therefore need to be transparent and understandable. However, this can be a challenge because the inner workings of automated decision-making systems, particularly those incorporating machine learning, can be so complex that they cannot be explained or audited, leaving only the outputs visible (sometimes called ‘black box’ decision making). This can make it difficult for those who have been adversely affected by algorithmic decisions to understand the reasons on which they were based.
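One partial response is to probe a ‘black box’ model from the outside. The sketch below is a minimal illustration on synthetic data with hypothetical feature names: it uses permutation importance – shuffling each input in turn and measuring how much predictive performance drops – to indicate which inputs a model’s predictions actually depend on, even when its internal logic cannot be inspected.

```python
# Illustrative only: synthetic data and hypothetical feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = ["age", "heart_rate", "blood_pressure", "unrelated_noise"]
x = rng.normal(size=(2_000, len(features)))
# Synthetic outcome driven by the first two features only.
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_tr, y_tr)

# Shuffle each input in turn on held-out data and measure the accuracy drop:
# a large drop means the model's predictions depend heavily on that input.
result = permutation_importance(model, x_te, y_te, n_repeats=10, random_state=0)
for name, drop in zip(features, result.importances_mean):
    print(f"{name}: {drop:.3f}")
```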

Sensitivity

Challen and colleagues highlight some further important challenges arising from a potential lack of sensitivity of an automated system to its context, with important safety ramifications. These include situations where an automated system attempts to make decisions despite possessing insufficient information; situations where an automated system does not take into account the impact of its decisions (for example, the impact of a false positive prediction); and situations where an automated system trained on historical data cannot adapt quickly enough to new populations or sudden policy changes.
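Some of these failure modes can be partially guarded against in how a system turns predictions into actions. The sketch below is a minimal illustration under assumed costs, not any particular system’s logic: the decision threshold is derived from the relative costs of false positives and false negatives, and the system abstains – deferring to a human – when a prediction falls too close to the threshold to act on with confidence.

```python
# Illustrative only: assumed costs, not any particular system's logic.
def decide(prob_positive: float,
           cost_fp: float = 1.0,     # cost of acting when nothing is wrong
           cost_fn: float = 5.0,     # cost of missing a genuine case
           abstain_band: float = 0.1) -> str:
    """Turn a predicted probability into an action, or defer to a human."""
    # Expected cost of flagging is cost_fp * (1 - p); of not flagging,
    # cost_fn * p. Flagging becomes cheaper once p exceeds this threshold:
    threshold = cost_fp / (cost_fp + cost_fn)
    if abs(prob_positive - threshold) < abstain_band:
        return "refer to clinician"  # too uncertain to act automatically
    return "flag" if prob_positive > threshold else "no action"

for p in (0.05, 0.15, 0.60):
    print(p, "->", decide(p))
```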

These risks and challenges highlight the importance of regulation and standards to ensure that health care technologies are safe, ethical and deliver high quality outcomes for all – discussed further in Chapter 4.

3.1 The human dimension

Many tasks in health care require human traits that cannot (yet) be replicated by machines. For other tasks, the intrinsic value of human agency or relationships means the task cannot be delegated to a machine. Both create limitations on the scope for automation.

3.1.1. Human traits that technology cannot (yet) replicate

Frey and Osborne’s influential work on the future of employment highlights three types of human ability that are difficult for technology to replicate: perception and manipulation; creativity; and emotional and social intelligence.

  • Perception and manipulation: Replicating humans’ ability to understand and respond to external stimuli remains a challenge. Consider the use of a robot to assist a hospital patient to move from a bed to a toilet, or to clean the kitchen for someone needing help at home. While on the face of it these tasks might appear simple, they involve a vast number of distinct movements and the ability to respond to external stimuli in order to navigate through changing environments. Every possible movement and response requires codification; as Brynjolfsson and McAfee note, ‘low-level sensorimotor skills require enormous computational resources’. While machines can increasingly copy many aspects of human perception and movement, the ability to interpret the world and act appropriately remains much harder to replicate.
  • Creativity: The psychological processes and values underlying human creativity are difficult to specify and therefore hard to automate. Creativity involves generating new ideas, or making new connections between familiar ideas. Generating novelty is not hard in itself; the issue is that creativity requires not just novelty but also value – which novel ideas or connections ‘make sense’? The challenge here lies in describing our creative values in a clear enough way for them to be programmed into a system. Later we discuss one example of where the need for creativity may constrain the scope of automation: in the strategies health care workers use to improvise and adapt when faced with unpredictable events.
  • Emotional and social intelligence: Emotional and social intelligence, which are required in activities such as caregiving and managing people, pose challenges for automation. They include the ability to ‘read’ and deal effectively with the feelings of other people and to manage relationships. Automating this type of intelligence is a challenge not only because of the difficulty of recognising human emotion in real time, but also because of the difficulty of knowing how to respond in an appropriate way, which requires a complex combination of skills and knowledge. Box 10 explores the role of emotional intelligence in health care.

Box 10: Emotional intelligence in health care

Emotional intelligence – ‘the ability to perceive and express emotion, assimilate emotion in thought, understand and reason with emotion, and regulate emotion in yourself and others’ – enables individuals to build relationships, moderate conflict and foster harmony. It is widely recognised as crucial for quality and safety in health care, including for creating an environment of trust and openness.

In particular, empathy – the clinician’s willingness to appreciate the patient’s perspective, and one component of emotional intelligence – underpins good patient–clinician communication and is critical in delivering person-centred, compassionate care.

As the Institute for Healthcare Improvement argues, ‘staff and providers’ skills in understanding and meeting the patient’s emotional needs are essential to creating an excellent experience of care’. It is hard to envisage, at least in the short term, how a computer could address these needs given the level of emotional and social intelligence required.

Even where the role of emotional and social intelligence is less obvious, such as with administrative tasks, it might still be important for avoiding undesired results. An example is rostering. This might appear to be a good candidate for automation because it is rules-based, routine, frequent and time consuming – and there has been much recent interest in ‘e-rostering’ in the NHS, most recently in the Carter Review. However, exploration of how rostering works in practice reveals it is not just a technical task, but a social one too, requiring careful planning and consideration of a range of factors, such as a team’s sense of fairness and when discretion about rostering rules should be applied. While this doesn’t mean that rostering can’t be automated, it highlights the importance of considering tasks carefully to understand where human intelligence adds value, so an informed judgement can be made about whether they should be automated or not.
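To illustrate why rostering is more than a technical task, the sketch below builds a deliberately naive, rules-only rota (all names and rules are hypothetical). The output satisfies the hard availability constraint, yet the distribution of shifts it produces is exactly the kind of thing a team’s sense of fairness – and a human planner’s discretion – would catch.

```python
# Illustrative only: hypothetical staff, availability rules and strategy.
from collections import Counter
from itertools import cycle

staff = ["Asha", "Ben", "Chloe", "Dev"]
unavailable = {"Ben": {"Sat", "Sun"}}  # a hard, rules-based constraint
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

rota, pool = {}, cycle(staff)
for day in days:
    # Greedy rule: take the next available person. There is no notion of
    # fairness, preference or when discretion about the rules should apply.
    person = next(pool)
    while day in unavailable.get(person, set()):
        person = next(pool)
    rota[day] = person

print(rota)                     # every hard constraint is satisfied...
print(Counter(rota.values()))   # ...but shifts (and weekends) fall unevenly
```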

3.1.2. Tasks requiring human presence

The use of automation in health care will not be determined simply by whether it is technically possible. We also need to understand where it is desirable, and where human input should be retained.

One key issue is the importance of human agency in care giving, particularly for treating patients with dignity and respect. While some patient-facing applications of automation, such as automated online appointment booking, might be considered perfectly compatible with treating patients with dignity and respect, others, such as robotic carers, are more contentious. A recent survey of 4,000 US adults on attitudes towards in-home robotic carers found that more were worried about the idea (47%) than enthusiastic (44%). The loss of human interaction was the predominant theme mentioned by respondents who said they wouldn’t want a robotic carer. In cases like these, people may feel that interacting with another human is necessary for, and indeed constitutive of, being treated with dignity and respect. There are a range of tasks in health care – such as informing a patient of a diagnosis of a serious illness – which for similar reasons may simply not be ‘delegable’ to a machine. As Liu and colleagues note, this type of communication requires ‘considerate assessment of a patient’s hopes, fears and expectations’, much of which is non-verbal and happens at an ‘innate level’, which an algorithm cannot replicate.

Another fundamental issue, as Batalden has observed, is that health care is not a product but a service that is co-produced with patients and families. Human relationships are fundamental for this co-production and shared decision making, which lie at the heart of person-centred care. In areas like care planning, for example, the need for genuine partnership between health care professionals and patients may pose ‘hard’ constraints on the use of automation.

On other occasions it may not be that the use of automation is incompatible with dignity and respect, but simply that there is a preference for human contact. For example, in the Oxford study’s fieldwork, the researchers observed patients bypassing the touch-screen check-in at the GP surgery and choosing to check in with (and say hello to) the receptionist instead.

There are also a range of concerns that arise where automation and AI are used in decision making.

One set of concerns relates to the decision process: there are important issues of procedural fairness and acceptability concerning how decisions are made and resources allocated, and the increased use of automation and AI in decision making could have important ramifications here. A significant finding of social psychology in recent decades is that people can care about the processes of decision making independently of the outcomes. For example, people who lose court cases may nevertheless feel more satisfied if they believe they’ve had a fair chance to have their argument heard compared to those who feel they haven’t. If automation and AI are used in diagnosis, triage and treatment decisions, interesting questions arise as to whether patients will feel these processes are fair and acceptable. For example, some patients might feel that algorithmic triage is acceptable as a process due to its potential to be consistent and free from certain kinds of human bias (though note the potential for data bias described in Box 9); whereas others might feel that having a human listen to and consider their case is a key component of being treated fairly and respectfully. More research is needed to understand perceptions of procedural fairness and acceptability surrounding automated decision making in health care; it is an area where the distinction between full and partial automation, and perceptions of the extent of machine involvement in decision making, may assume particular significance.

Where tasks have a direct impact on patient health, human participation in decision making may also be necessary for reasons of safety and accountability. For while automation can reduce the scope for human error, human oversight is often necessary to guard against machine error; in this sense, automated systems always require a ‘wrapper of human control’. Furthermore, the fact that machines can learn, make decisions and potentially act with a degree of autonomy raises important questions of accountability when mistakes occur. While attributing responsibility for problems can be complex (for example, when are they the responsibility of the machine operator and when of the manufacturer?), there is nevertheless general consensus that accountability must lie with humans rather than machines.

3.2 The complexity of work in health care

Another potential challenge for automation is the sheer complexity of tasks and work environments in health care – a prominent theme of sociological, ethnographic and human factors studies. Very often, there is more going on in a task than meets the eye. Attempts to introduce automation and reorganise work based on a simple interpretation of the task will therefore lead to problems or ‘unintended consequences’ – with potential risks to quality and safety.

Here we briefly discuss three potential challenges for automation arising from the complexity of work in health care: task multidimensionality (the fact that single tasks can fulfil multiple functions), task variation (the fact that tasks can look very different on different occasions and in different settings) and task unpredictability (the fact that tasks can evolve in ways that require flexibility and adaptation).

3.2.1. Task multidimensionality

One characteristic of many tasks in health care is their ‘multidimensionality’ – the fact that even in a single task there can be multiple things going on. Sometimes it might make sense to think about a task as comprising a range of ‘sub-tasks’ (in which case, the question of automation can move down a level to consider which sub-tasks can be automated). In other cases, it might make sense to think of the same task as serving multiple functions – and in these cases, automating the task with the aim of fulfilling its ‘primary’ function will mean the other functions get lost.

The Oxford study identified a powerful example of this. While clinical documentation can be automated, which might well save time, the GPs in the study’s focus groups pointed out that this could remove an important opportunity for them to reflect on their cases. Such reflection is valuable not only for ensuring they make the best decisions about diagnosis and treatment, but also for considering their practice more generally, which matters for professional development. Echoing Bansler’s observation that documentation can serve as a ‘tool of thinking’, the Oxford team observed how clinical notes were often kept open on screens for periods of time in order to enable GPs to consider, revise and make sense of the material, particularly notes for new or complex cases. Willis and Jarrahi comment that ‘If the practice of writing and thinking through writing is wholly removed from the clinicians’ workflow… it removes an opportunity for the clinician to think and reflect critically in the way they practice medicine.’ So the task of note-taking is potentially about more than recording information, and simply automating it on the assumption that this fulfils all the functions of the task would be a mistake. Alternatively, clinical documentation could be automated but with GPs still taking time out to analyse and reflect on their cases in other ways. Some GPs may prefer this, though it wouldn’t necessarily result in the productivity gains that a simple interpretation of note-taking might lead one to expect.

In the same way that documentation can be about more than recording information, communication can be about more than transferring information. Ash and colleagues observe that communication can also be about generating an effect on the person you’re communicating with, testing their assumptions, receiving feedback, and establishing and maintaining relationships. Important functions like these can get lost if automated systems displace human communication and social interaction in the workplace.

Another kind of multidimensionality is where a decision-making task tacitly involves two different elements: the appraisal and decision making itself (decision selection); and the checking and ‘sign off’ of the decision (decision authorisation). When one person is performing the task, these two elements are usually elided. But while an alternative worker or computer might be able to perform the appraisal and decide on a course of action, this may not remove the need for a stage of checking and authorisation, and it may be undesirable to delegate this authority to the alternative worker or computer – particularly where the decision carries significant risk. So even when a decision-making task has been delegated or automated, there may still be a need for a suitably qualified worker to check the decision and sign it off. In some cases, this might still be more efficient than the previous arrangement; in other cases, the level of engagement required for the suitably qualified worker to familiarise themselves with the case and authorise the decision may be almost as much as if they were performing the whole task themselves, in which case the delegated arrangement may reduce productivity.
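This split can be expressed as a simple human-in-the-loop pattern. The sketch below is purely illustrative – hypothetical names and trivial logic, not a description of any particular system: an automated step selects a decision, while a qualified human still authorises it, and for high-risk cases that review may cost almost as much as doing the whole task.

```python
# Illustrative only: hypothetical names and trivial logic.
from dataclasses import dataclass

@dataclass
class Proposal:
    case_id: str
    action: str
    rationale: str
    risk: str  # "low" or "high"

def select_decision(case_id: str) -> Proposal:
    """Decision selection: an automated (or delegated) appraisal of the case."""
    return Proposal(case_id, "approve repeat prescription",
                    "no interactions found", risk="low")

def authorise(proposal: Proposal, reviewer: str) -> bool:
    """Decision authorisation: a qualified human checks and signs off.
    For high-risk cases the reviewer may need to re-examine the whole case,
    eroding much of the productivity gain from delegating the selection."""
    if proposal.risk == "high":
        print(f"{reviewer}: full case review needed for {proposal.case_id}")
    return True  # sign-off is recorded against the human reviewer

proposal = select_decision("case-042")
if authorise(proposal, reviewer="Dr Patel"):
    print(f"Signed off: {proposal.action} ({proposal.case_id})")
```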

Studies of health care roles, including literature on ‘hidden work’, identify other examples of tasks fulfilling multiple functions. Health care assistants, for example, typically perform a range of tasks, including taking blood samples, monitoring vital signs, serving meals and helping patients move around. However, through the contact they have with patients in carrying out these tasks they also provide emotional support and identify patient needs (for example, needs around pain management), something that enables them to act as advocates for patients and bridge the relationship between patients and other clinicians. While it is conceivable that in future robots and automated systems may be used to assist patients with meals and mobilisation, and take measurements and samples, this could weaken an important dimension of patient support if it significantly reduced contact between health care workers and patients.

While none of the issues highlighted here are absolute barriers to automation, it is clearly important that proposals for automation are grounded in a detailed understanding of the work in question. What the literature on ‘unintended consequences’ reveals is that changes are sometimes made without this understanding.

3.2.2. Task variation

Many tasks and work activities in health care resist standardisation. This is not simply because patients and cases may differ, but also because roles, processes and workflows may be organised differently on different occasions and in different contexts, depending on the staff, skills and resources available. The Oxford study documents many types of variation across tasks in primary care (both within practices and between practices), including variation in the content of tasks (for example, phone calls), variation in the process for completing tasks (for example, handling correspondence) and variation in who performs tasks (for example, letter writing). According to the authors, ‘variance in tasks can occur in the order parts of the task are performed, duration, the occupational role of the person performing the task, the importance of the task or how time-critical it is, and how many individuals become involved in completing it’.

That tasks can be organised in different ways on different occasions may pose challenges for automation. For example, if the performance of a task is distributed between workers in different ways in different settings, then some task and work reorganisation may be necessary to make automation possible. For administrative tasks in primary care, the Oxford study found that the extent of task sharing – and therefore the way tasks are organised – varies depending on practice size, with more sharing of tasks at single-site practices than at large, multi-site practices.

In other cases, variation in who performs a task may reflect not simply variation in how the roles, processes and workflows are organised, but variation in the underlying nature or content of the task. The Oxford researchers noted that what looks like an administrative task can suddenly get transformed into a clinical task requiring specialist medical knowledge – something they argue ‘is what makes work in health care different and exceptional when compared to other fields with similar task descriptions’. Reviewing prescriptions, for example, an administrative necessity, can reveal information requiring a decision by a pharmacist with specialist knowledge. Medical coding is another type of task that can flip between administrative and clinical. Such variation in the nature of a task may render it less tractable to automation, or suggest only automating some aspects of it but not others.

Ultimately, variation itself need not be a challenge for automation provided that the nature of the variation is understood. But there are cases in health care where the parameters within which a task might vary can’t themselves be specified or constrained – the ‘left-field’ piece of information from a patient, for example, that could require a totally different approach to handling their case. Much automation discourse is grounded in the paradigm of product manufacturing, where tasks might be more prone to standardisation and routinisation than tasks in health care. So caution will be needed in assessing the applicability of the wider automation literature to health care.

3.2.3. Task unpredictability

One important source of task variation in health care is unpredictability in the way tasks unfold over time. This is partly because much health care work is responsive in nature – to the needs of patients, staff and organisations – and takes place in a dynamic work environment that can create disruptions to workflow. It is also because tasks are often being carried out in non-ideal circumstances where workers have to navigate uncertainty and trade off conflicting goals (such as whether to spend more time with one patient or move on to the next). While some disruptions result from operational failures (such as the need to fetch missing equipment) that can in principle be prevented, others result from intrinsic aspects of health care (such as the need to respond to a change in a patient’s condition).

Work interruptions are one extensively studied example of this phenomenon. They pose challenges for task performance, but while in some cases they are irrelevant to the task underway, in others they are vital for the delivery of safe care, requiring an immediate response.

Managing fluid and unpredictable workflows safely requires flexibility, adaptation and improvisation from health care workers. But while automation technologies can potentially handle unpredictability, they can only do so if the appropriate strategies for handling it are understood. Ebright and colleagues highlight (in the context of nursing work) that too little is known about the strategies and reasoning that front-line health care workers use to cope and adapt in complex work situations for this knowledge to be codified. As Autor puts it, ‘The tasks that have proved most vexing to automate are those demanding flexibility, judgement, and common sense – skills that we understand only tacitly.’

While no one is proposing to automate complex clinical work such as nursing work, there is a wider challenge for automation here: the use of technologies in clinical pathways will need to support health care workers’ ability to improvise and adapt. One risk is if an automated system introduces unnecessary rigidities into the workflow, undermining flexibility and the capability to react. An example would be where an automated system won’t allow progression to the next stage of a task until certain information has been entered, despite the fact that real-life situations sometimes demand that task steps are initiated early, in the absence of the information or in an unconventional order. Another risk is where reliance on automation technologies restricts communication or takes workers ‘out of the loop’, reducing the situational awareness needed to respond to unexpected events.

So automation technologies need to be designed in ways that match the realities of complex workflows and hectic work environments (discussed further in Box 11). Coiera observes that technology is often designed on the incorrect assumption of a fully concentrating user in a single-task scenario; the reality is that health care workers may be carrying out several tasks simultaneously or interacting with colleagues to help them complete other tasks. Ash highlights a range of problematic design features of technologies in this respect, such as interfaces that are hard to use in an interruptive context or that unnecessarily increase the cognitive load on staff by requiring information to be entered in overly structured formats.

In summary, while automation is best suited to routine work, work patterns on the front line are often adaptive and emergent as workers juggle competing demands under resource constraints. The successful design and deployment of automation technologies will need to support the ability of health care workers to flex and adapt in the face of task unpredictability.

Box 11: The ‘human infrastructure’ on which technology depends

Many of the challenges described here relate to the sociotechnical nature of health care. As Coiera observes, no health care technology sits in isolation: each is part of a larger ‘sociotechnical system’ which involves the people using the technology and the people they are interacting with, the other processes and tasks going on, and features of the surrounding work environment, which may well be complex and unpredictable. Results emerge from the ‘sociotechnical coupling’ of technology with people and processes – ‘not as the injection of technology into a location, but as a process in which we mould together a unique bundle that includes technology, work processes, people, training, resources, culture, and more’.

This means the effectiveness of a technology will be determined not just by how well it accomplishes the specific tasks for which it was designed in isolation, but by how well it can fit and is fitted into an organisation’s processes, workflows and wider institutional norms. And understanding and modelling these processes and workflows is not trivial: many processes in health care have never been consciously designed, and often there can be significant gaps between ‘work-as-done’ and ‘work-as-imagined’.

Problems can arise when there is a mismatch between the design of a technology and the reality of the work environment in which it is to be used. Indeed, this can be an important source of safety risks: as well as the risk of purely technological failures, the introduction of technology into a live health care setting will also create the possibility of failure occurring in the interface between the technology and the surrounding work processes. Work on patient safety is increasingly recognising the vulnerabilities that can be created through the introduction of technologies, which can give rise to adverse consequences that are hard to foresee.

A variety of sociotechnical frameworks and models have been developed to describe the interplay of these different factors in the development and use of health technologies. For example, Sittig and Singh’s eight-dimensional model, in addition to factors such as hardware, software, data and the human–computer interface, describes the importance of social and contextual factors such as people, workflow and communication, as well as internal organisational features such as procedures, processes and culture.

More recently, Healthcare Improvement Scotland has developed a model to support service redesign, based on an approach from Tan Tock Seng Hospital in Singapore. The model, illustrated in Figure 6, highlights how technology needs to be seen as just one part of a broader process of role redesign and process redesign that must take place for the successful adoption of technologies like automation, AI and robotics.

Figure 6: Interrelationship between technology, process redesign and workforce redesign

 

Note: This model is based on an approach developed by Tan Tock Seng Hospital, Singapore

Source: Healthcare Improvement Scotland. Discussion Paper – an evolving approach to supporting the redesign and continuous improvement of health and care in Scotland. Healthcare Improvement Scotland; 2019

3.3 Challenges for implementing automation and AI in health care

Even when automation systems are designed in appropriate ways for live health care settings, there still remain significant challenges in implementing them effectively. If the success of a technological intervention depends not just on the technology itself but also on a whole set of accompanying role, process and workflow elements, and if these role, process and workflow elements can vary from one setting to another, then implementation can be viewed as the practice of fitting the technology into the specific organisational context. And this can be a complex process. As the 2016 Wachter Review of health IT in England argued, implementing digital technologies is not a simple case of ‘technical change’ (like following a recipe) but of highly complex, adaptive change, which requires substantive and long-lasting engagement between those leading change and those on the front line responsible for making technologies work.

The challenges of implementing automation technologies go beyond the usual set of challenges faced in implementing new health care interventions. First, there are specific challenges associated with the implementation and use of new technologies. Second, as highlighted in the literature on skill-mix change, there are specific challenges associated with ‘task shifting’ – work reorganisation that relocates the performance of a task. Automation potentially combines both technology and task shifting, making successful implementation far from straightforward.

As with many of the issues examined in the previous section, these kinds of challenge are not necessarily barriers to automation, but rather factors that will have to be addressed if automation is to work effectively. Failure to do so can lead to unintended consequences, such as operational failures, workarounds, decreased productivity and increased workloads.

3.3.1. Challenges related to technology implementation

A first set of challenges relates to the implementation of technologies. Here we highlight challenges related to the ‘human’ (sociotechnical) dimension of technological interventions; challenges relating specifically to the nature of technologies themselves or the data they rely on are beyond the scope of this report.

  • Embedding new technology successfully in health care settings requires developing new organisational routines, ways of working and behaviours. The challenges of doing this are a recurring theme of evaluations of health technology interventions. They include the need to establish the implications of the new technology for organisational processes and workflows, the need to agree and coordinate the new ways of working required, and then – even when all of this is known – the need to actually change behaviours and ways of working, which may be deeply entrenched. Problems will arise if the workflow models used in developing the technology don’t match real-life workflows. As noted earlier, the complexity of work in health care, where there can be significant gaps between ‘work-as-done’ and ‘work-as-imagined’, means there may need to be a stage of process and workflow mapping to understand existing work patterns and think through how automation could be successfully introduced. Problems will also arise if there are not appropriate strategies and protocols in place to ensure safety and continuity if things go wrong. Relevant here is the ‘Safety II’ approach, which focuses on purposefully enabling things to go right, rather than merely seeking to prevent failures, and on being proactive rather than reactive. Safety II also focuses on the importance of adaptation and flexibility to match the conditions of work, and the role of humans to ‘absorb variability’ and create resilience, which is essential if technology is to work safely and effectively.

Box 12: A learning health system approach to avoiding acute occupancy crises

NIHR CLAHRC Northwest London and Chelsea and Westminster NHS Foundation Trust (Innovating for Improvement, 2017–18)

Many acute NHS providers currently face high bed occupancy rates, with peaks that frequently exceed capacity. This leads to emergency department overcrowding, delayed care and patients being in inappropriate wards, all of which impact on patient experience and outcomes, including length of stay. When demand for beds reaches a critical level, hospitals may attempt to absorb additional patients by expediting discharge of existing inpatients. However, without a means of early warning, the problem often only becomes apparent once it has started to have an impact, leaving little time to begin an effective mitigating response.

Supported by the Health Foundation, this project aimed to develop a model which would use data about patient characteristics to predict which inpatients would be likely to remain in hospital in two days’ time (residual occupancy) and therefore the risk of a bed occupancy crisis. The idea is that if a hospital could have prior warning of an impending bed crisis, this could trigger a response with sufficient time to deploy accelerated discharge strategies, for example, prioritisation of discharge-related diagnostics, ambulatory care or social care.

The project team interviewed acute care staff to identify data that could help to predict residual occupancy. This aided the design of a statistical model that was based on data from 86,000 patient spells in hospital. The model was able to predict residual occupancy two days in advance with an average error of 6%. However, the initial model was static and could quickly become outdated over time, so the team is now developing a machine-learning version that will be able to learn from and adapt to changes in the causes of bed occupancy peaks.
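As a rough illustration of the underlying idea (not the project’s actual model), the sketch below fits a simple per-patient model of the probability of still being an inpatient in two days’ time, then sums those probabilities across a ward to give an expected residual occupancy – the quantity an early-warning trigger could compare against available beds. All features and data are synthetic.

```python
# Illustrative only: synthetic features and data, not the project's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([
    rng.poisson(4, n),        # days already in hospital
    rng.normal(65, 15, n),    # age
    rng.integers(0, 2, n),    # emergency admission flag
])
# Synthetic ground truth: longer stays and emergency admissions raise the
# chance of still being an inpatient in two days' time.
logit = 0.25 * X[:, 0] + 0.02 * (X[:, 1] - 65) + 0.8 * X[:, 2] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# For today's ward of 30 patients, expected residual occupancy is the sum
# of the individual probabilities of still being in hospital in two days.
ward_today = X[:30]
expected = model.predict_proba(ward_today)[:, 1].sum()
print(f"Expected residual occupancy in two days: {expected:.1f} beds")
```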

For the model to be useful in routine practice, the team developed a software module to import, standardise and clean hospital admission and A&E attendance data, and link them to hospital spells. This means the model could be deployed in other hospitals and be trained on each hospital’s data on an ongoing basis, so that changes in patterns of occupancy and patient care can be learned by the model.

The next step will involve testing the model with prospective data before it is run in live settings. Just as importantly, the team recognise that the tool is a means to an end, not an end in itself: using it to avoid occupancy crises in practice will rely on staff taking action before rates get too high. To support this, they are developing a protocol to standardise responses to rises in occupancy, including when responses should happen and who should lead them.

‘The predictive model is an important component, but ultimately it is how you act on the data that counts. This means taking early action when a bed occupancy rise is flagged, which requires a big cultural shift.’

Paul Sullivan, project lead

  • Training and re-training are a critical part of successful implementation, particularly in light of the increasing complexity of automation, AI and robotic technologies, and functionalities that are new and unfamiliar. On this issue, the 2019 Topol Review argued there will need to be an increase in digital literacy among health care professionals. Health care workers will not only need to understand how to use new technologies safely; they may also need to understand how the technology works, at least at a basic level, in order to interpret the outputs of the system correctly or to explain the system’s operation to others.
  • There is a risk that automation and AI could create or widen health inequalities. In addition to the risks to health inequalities associated with data that were highlighted in Box 9, digital inclusion poses particular challenges given inequalities in access to technology and digital literacy, especially in cases requiring patient interaction with new technologies or with a digital interface. It has been estimated that 11.7 million people in the UK do not have the essential digital skills needed for day-to-day life; they are more likely to be older, poorer, living with disabilities and in need of health care services. Approaches that rely on expensive technologies purchased by patients, such as smartphone symptom checkers, can also present barriers to access. So steps to promote inclusion – for example, user training, or creating user interfaces to suit differing levels of digital literacy – will be important to ensure that the use of digital platforms and tools doesn’t exclude particular groups. This will likely require meaningful co-design with people from a range of demographic groups to ensure these technologies meet diverse needs. Furthermore, certain digital technologies may not work well for everyone and some people may prefer a non-digital option, so it is important that accessible and high-quality non-digital alternatives are available where appropriate. The Health Foundation is exploring some of these issues with the Ada Lovelace Institute, as part of a research project examining how the adoption of data-driven technologies and systems during COVID-19 may have affected health inequalities.

3.3.2. Challenges related to task shifting

A second set of implementation challenges relates to the fact that automation often involves shifting the performance of a task from a human to a computer. The literature on skill-mix change shows that removing a task from a health care worker who has traditionally performed it and relocating it elsewhere can be tricky, and automation can be looked at as a radical case of skill-mix change. Here we highlight some challenges involved in task shifting, which are relevant for the implementation of automation.

  • New ways of working will only be effective if there is consensus around, and ownership of, the new working arrangements. Involving staff in identifying the problems to be tackled and co-designing the solutions can be important steps to achieving this, as well as increasing the likelihood that the changes made are the optimum ones. Conversely, particular challenges might arise if there is suspicion about the underlying motivation for work reorganisation – for example, if staff think it is driven by cost-cutting and therefore potentially a threat to care quality. There is also a risk that work reorganisation results in workers feeling their role is being devalued, or creates anxiety about job security, potentially having a negative impact on staff morale. So it is important that proposals for reorganisation address a recognised challenge and are accompanied by a broader vision of role development, and that the stated reasons for change resonate with the intrinsic motivations of health care staff to deliver high-quality care. The example given in Box 13 highlights the importance of gaining consensus for the successful development and implementation of AI.

Box 13: Predictive analytics for triaging patients in emergency departments

Barking, Havering and Redbridge University Hospitals NHS Trust (Advancing Applied Analytics 2018–19)

The A&E department at Queen’s Hospital in Romford is one of the busiest in the UK, with 240,000 attendances and more than 50,000 admissions a year. Some people present with very serious symptoms and clearly need to be seen quickly and admitted, while others are clearly non-urgent; but for a substantial number of patients there is genuine uncertainty. Nurses usually have only around 5–10 minutes to triage each patient, which can be challenging when a case is not straightforward.

Funded by the Health Foundation, this collaboration between Barking, Havering and Redbridge University Hospitals NHS Trust and the Alan Turing Institute aimed to support the triage process by using advanced health analytics to help identify high-risk patients. Drawing on data from over a million health records at the Trust, the project created a risk prediction tool that uses machine learning algorithms to identify the severity of presenting cases, predict the probability of admission and flag three major pathologies – stroke, myocardial infarction and sepsis – when they might be present. As part of this, the team developed a dashboard to capture clinical metrics, demographic information, recent attendance history and free text comments in the electronic record and to present the predictive analysis in visual format.
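The sketch below illustrates the general shape of such a tool, not the project’s actual algorithms: one model scores the probability of admission and a separate model flags a major pathology, with the results combined into the kind of summary a triage dashboard might surface. All data, features and thresholds are synthetic and purely illustrative.

```python
# Illustrative only: synthetic data, features and thresholds.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 8_000
X = np.column_stack([
    rng.normal(55, 20, n),     # age
    rng.normal(85, 15, n),     # heart rate
    rng.normal(125, 20, n),    # systolic blood pressure
    rng.normal(37.0, 0.7, n),  # temperature
])
# Synthetic labels: admission risk rises with age and heart rate; a crude
# sepsis label combines fever with tachycardia.
admit_logit = 0.03 * (X[:, 0] - 55) + 0.04 * (X[:, 1] - 85)
admitted = (rng.random(n) < 1 / (1 + np.exp(-admit_logit))).astype(int)
sepsis = ((X[:, 3] > 38.0) & (X[:, 1] > 95)).astype(int)

admission_model = GradientBoostingClassifier().fit(X, admitted)
sepsis_model = GradientBoostingClassifier().fit(X, sepsis)

def triage_summary(patient):
    """The kind of summary a triage dashboard might surface to a nurse."""
    return {
        "p_admission": round(float(admission_model.predict_proba([patient])[0, 1]), 2),
        "sepsis_flag": bool(sepsis_model.predict_proba([patient])[0, 1] > 0.5),
    }

print(triage_summary([70.0, 110.0, 100.0, 38.6]))
```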

The tool was then tested in a pilot, at first retrospectively (on completed cases) and then in real time, supervised by clinicians, to assess its usability in everyday practice. The early results have been promising, with the tool increasing triage accuracy, compared to the standard process, by 7% with regard to over-triaging (over-estimating the urgency of a patient’s condition) and 2% with regard to under-triaging – providing initial evidence that it could be a useful support for A&E staff during triage.

According to the project team, gaining buy-in at the outset was essential – from nurses and patients, as users of the triage system, and from executive-level sponsors. Gaining support from nurses, in particular, involved providing reassurances that the technology would be deployed to support decision making, rather than replace staff, and that nurses would retain decision-making authority.

While the tool is designed primarily to be used in live clinical settings, the project team is now exploring opportunities to develop a version that can be used as a simulation environment for training staff in triage. In response to COVID-19, the team is also exploring whether the model can be used to help identify which patients with COVID-19 have a high risk of deterioration.

‘If you are doing work that involves nurses and patients you need their buy-in right from the start. Executive level sponsorship is also essential.’

Nik Haliasos, project lead

  • Clarity around roles and responsibilities is essential, especially if the task requires team coordination. Without this, task shifting can create confusion about the division of labour and professional responsibilities. This in turn can risk tensions over the ownership of tasks, lead to staff being deployed inappropriately and create inefficiencies – for example, duplication of effort if work is not fully handed over as intended. So carefully articulated role and task definition is vital to create shared understanding regarding new working arrangements.
  • Another challenge is to consider the implications of task shifting for workloads and job quality. While removing tasks from someone’s role can potentially free up time, if they are expected to use the time freed up for more demanding or stressful work (for example, overseeing more complex patient caseloads), this could result in increased burnout – unless support is provided to help them meet the demands of their new responsibilities. Another risk is if task shifting means health care workers have less patient interaction, which could lead to reduced job satisfaction. For example, Verghese observes that monitoring patients using technology can draw clinicians away from the bedside, reducing the opportunity to experience the professional satisfaction that can come from bedside examination.

3.3.3. Specific implementation considerations for automation

If embedding technology in care pathways is a challenge, and shifting tasks around is a challenge, and automation combines both of these, then it is reasonable to expect implementing automation to involve many of the considerations outlined above. Beyond this, some specific challenges arise in relation to automation, which we highlight here.

  • One challenge is avoiding the loss of skills and confidence among staff who are no longer required to perform certain tasks frequently, and therefore lose opportunities to practise these skills, even though they may still be required in the future. This de-skilling is often inadvertent; workers can actively contribute to it as they naturally take advantage of and accommodate to innovations in their everyday work. Health care organisations can help to mitigate this risk through continuing professional development and providing relevant opportunities for workers to practise important skills.
  • The handover problem is where reliance on an automated system reduces the ability of staff to know how and when to take back control in the event of system failure or other risks. This is partly related to the issue of de-skilling highlighted above, but is also a product of the loss of situational awareness by staff, and sometimes also a lack of agreed strategies for taking back control. For example, Wears, Cook and Perry describe the unexpected failure of an automated drug-dispensing unit in an emergency department, which, with no agreed protocol for dealing with such a system failure, gave rise to a serious patient safety risk. This type of problem can be mitigated with the right planning – anticipating vulnerabilities, scenario modelling to help teams imagine different potential eventualities and devising appropriate protocols for what to do when there is a system failure, rather than having to invent such protocols in the moment.
  • A third challenge is avoiding automation bias, where humans place excessive faith in the decisions made by automated systems, uncritically following their recommendations. This can distort professional judgement – for example, studies show medics’ diagnostic accuracy falls when they are simultaneously presented with inaccurate computer-aided diagnoses. A related phenomenon is ‘automation complacency’, where those charged with monitoring automated systems fail to pick up on errors as a result of placing too much trust in those systems and failing to challenge their outputs sufficiently. Research suggests that humans monitoring automated systems can be particularly susceptible to this when they have other concurrent tasks. The requirements of properly overseeing automated systems should therefore be factored into assumptions about staff time ‘freed up’ through automation.

Box 14: The NASSS framework  

The NASSS framework is a tool developed by Greenhalgh and colleagues to help inform technology design, implementation and spread, as well as to identify innovations that have a limited chance of large-scale adoption and to retrospectively explain programme failures.

Recognising that failures and partial successes are common with technological innovations, the framework sets out the different factors that influence the adoption, non-adoption, abandonment, spread, scale-up and sustainability of health and care technologies. Based on a systematic review of individual, team, organisational and system influences on the success of technology-supported programmes, the framework (illustrated in Figure 7) consists of 13 questions in six different domains: the condition, the technology, the value proposition, the adopter system, the organisation, and the wider (institutional and societal) context. The framework also includes a seventh domain that considers interactions and adaptations over time.

Figure 7: The NASSS framework

 

Source: Greenhalgh et al.

Box 15: Views of NHS staff on the biggest implementation challenges for automation and AI in health care

We used our NHS staff survey to ask about implementation challenges for making automation technologies work on the front line.

Respondents were presented with a list of common implementation challenges and asked to pick up to two they thought would be the biggest challenges for using automation and AI effectively. The highest ranked challenge was ‘Patients might not accept these technologies or be suspicious of them’, picked by 45% of respondents, followed by ‘Staff shortages or inadequate equipment might make it difficult to use these technologies properly’, picked by 39% of respondents.

Figure 8: NHS staff views on the biggest challenges for making automation technologies work on the front line

Which one or two of the following do you think will be the biggest challenges for using automation and AI effectively in delivering health care?

The ranking of challenges was broadly similar across different occupational groups, with patient suspicion being the most highly ranked challenge for all staff groups. Health care assistants were slightly more likely than other staff groups to pick patient suspicion as a challenge (picked by 49%). Furthermore, nurses, midwives and health care assistants were slightly more likely to pick staff shortages/inadequate equipment as a challenge (picked by 43% of these groups).

These results highlight the importance of engaging with patients to ensure support for the use of new technologies – for example, through consultation and co-designing changes – as an integral part of the process of adoption and implementation. They also highlight the importance of ensuring sufficient staff capacity to support change and adequate equipment and infrastructure – issues that are discussed further in the final chapter.

Having reviewed some opportunities and challenges for automation and AI in health care, in the final chapter we consider the implications for the future of work and look at what will be required in order to get the automation agenda right in practice.


**** According to a 2018 report by Tech Nation, diversity and inclusion is still a challenge in tech companies, with gender and ethnic make-up not representative of UK society. If the teams that research, design or develop AI and automation technologies lack diversity this could increase the risk of bias, whether conscious or not. See Tech Nation. Chapter 5: Jobs and Skills. In Tech Nation Report 2018. Tech Nation; 2018 (https://technation.io/insights/report-2018/jobs-and-skills/).

†††† According to Stanton and Noble, for example, ‘clinical leaders need to be able to react in an emotionally intelligent manner to the intensely emotional events that inevitably occur in health care’, and being able to regulate their feelings ‘creates an environment of trust and openness that is palpable to others and vital to improving patient safety’.

‡‡‡‡ Psychologist Daniel Goleman argues that emotional intelligence has five component characteristics: self-awareness, self-regulation, motivation, empathy and social skill.

§§§§ As the Health Foundation’s Person-centred care made simple puts it, ‘for care to be enabling, the relationship between health care professionals and patients needs to be a partnership… It is a relationship in which health care professionals and patients work together to understand what is important to the person, make decisions about their care and treatment, and identify and achieve their goals.’

¶¶¶¶ For example, Yeung highlights several kinds of issues, which apply to both fully automated decision-making systems and algorithmic ‘recommender’ systems – including issues relating to the decision process and to decision outputs.

***** For example, a 2018 statement on these technologies by the European Group on Ethics in Science and New Technologies concluded that moral responsibility cannot be attributed to autonomous technology, arguing that the ‘ability and willingness to take and attribute moral responsibility is an integral part of the conception of the person on which all our moral, social and legal institutions are based’.

††††† With some types of automation, this may create a new role, whereby a worker becomes the operator of the automated system and responsible for checking its outputs. The Oxford study hypothesises that, due to their privileged knowledge, it could well be that the worker who is displaced by the automation technology subsequently becomes the operator, for example, a prescription clerk could take responsibility for operating and maintaining an automated system for processing prescriptions. This is one reason why automation may not simply be about replacing human labour but rather complementing it.

‡‡‡‡‡ Related to this, another unintended consequence of task delegation is when it results in a far more precautionary approach because of the separation of decision making from a more senior source of authority or expertise.

§§§§§ According to HEE, digital literacy involves developing the skills to be able to use the technology, and also the right attitudes, values and behaviours needed to thrive in a ‘digitally-enabled workplace’.

¶¶¶¶¶ Interestingly, a 2019 evaluation of Babylon’s ‘GP at hand’ service found users to be predominantly younger, wealthier and healthier than the population as a whole (see Ipsos MORI and York Health Economics Consortium, Evaluation of Babylon GP at hand. Hammersmith and Fulham Clinical Commissioning Group and NHS England; 2019).

****** Because the question presupposed a knowledge of the challenges of delivering health care, it was deemed unsuitable for the public survey.
