4. Implications for automation and AI in health care


4.1 How automation will affect work in health care

What will automation and AI mean for the future of work in health care? As the analysis in Chapter 3 suggests, and as the Oxford study concluded, it will in many cases be about supporting workers in their roles, rather than replacing them. This is partly because many tasks in health care are not wholly automatable and few occupations consist of wholly automatable tasks. It is also because human agency is such an important factor in health care – so even where tasks could be automated, it does not necessarily follow that they should be. And as we have seen, in many cases where automation is applicable, it tends to change, rather than eliminate, the type of human involvement required. 

Some labour market predictions have raised the prospect of widespread redundancies from automation. But in health care it seems that, rather than threatening jobs, automation and AI have, in many cases, the potential to improve the quality of work (and with it job satisfaction) as well as the quality of care – provided, of course, that these technologies are deployed in a way that works for patients and staff. For example, automation could be used to remove some of the burden of repetitive, everyday tasks, allowing staff to focus on those activities where they add most value.

This bears emphasis given that the policy narrative around automation is often focused on improving productivity or compensating for workforce shortages. Health Foundation analysis shows that demand for health care professionals will continue to grow, driven by an ageing population and a growing burden of chronic disease (see Figure 9). There is hope that new technologies, including automation and AI, could help the NHS increase the volume of care provided and alleviate some of this rising demand by freeing up staff time (though of course technology cannot be the only answer to meeting rising demand, and without adequate staffing it will not be possible to take advantage of new technologies). But productivity gains will depend on how any staff time released is used, and could emerge in a number of ways. For example, productivity gains could emerge through using time released to improve the quality of existing care, such as allowing longer consultations. On other occasions, time released might instead be used to enable more sustainable management of the current volume of care (for example, by reducing unpaid overtime), which wouldn’t necessarily increase productivity, but would still have long-term benefits. And there is also substantial scope for automation and AI to improve the quality of care in ways that do not release staff time, for example through the use of AI-driven clinical decision support systems.

Figure 9: Future supply of and demand for NHS staff

Note: These projections are based on data released up to February 2020 and do not account for any COVID-19 impacts.

Source: Health Foundation projections, based on workforce data from NHS Digital and HEE

Even where automation and AI are primarily deployed in a supportive capacity, existing roles may nevertheless evolve in response, allowing staff to focus on what humans do best. In the words of David Autor, ‘as our tools improve, technology magnifies our leverage and increases the importance of our expertise, our judgement and our creativity’. For example, the Oxford study found that many of the tasks a GP receptionist typically undertakes are potentially automatable, such as email management and processing prescriptions. Automating these types of task would allow the receptionist role to shift towards more face-to-face interaction with patients, focusing on more complex but also potentially more rewarding aspects of patient management such as coordinating care and helping patients navigate the system. Given the important impact GP receptionists have on patient experience, using automation to enable them to spend more time with patients could help improve service quality.

Automation and AI will also create new roles to monitor system outputs and ensure technologies are used appropriately. For example, the Oxford study speculates that roles consisting mostly of automatable tasks, such as the prescription clerk (a member of the general practice team who processes prescriptions and deals with repeat prescriptions), could evolve so that staff deploy their skills and knowledge to oversee automated systems.

As many have argued, it is through the partnership between humans and machines that the greatest benefits will accrue. Brynjolfsson and McAfee, for example, suggest that while AI systems could ultimately become better than humans at performing diagnostic tasks, they will not be able to cover all medical cases, so the partnership between doctor and machine will be ‘far more creative and robust than either of them working alone’. Radiology is one area where automated systems could help improve care, by using AI to analyse medical images quickly, identify potential malignancies (including those that the human eye cannot see) and triage images prior to review by a radiologist. As the Royal College of Radiologists argues, such systems could enable clinical radiologists to ‘increase the scope of their diagnostic capacity, releasing time for direct patient care and research.’ If the greatest benefits will come from the partnership between humans and machines, then how health care professions respond to the rise of automation and AI – how they shape new models of working and the extent to which they encourage or discourage change – will be a critical determinant of its impact.

Box 16: Views of the public and NHS staff on the future role of automation and AI in health care

Given expectations that automation and AI will play a greater role in health care in the future, we wanted to ask the public and NHS staff for their views on the future role of these technologies.

First, we asked people if they would like to see more or less use of automation and AI in health care in future, or about the same. In both the public and NHS staff surveys, more respondents said they would like to see more use of these technologies in future (36% in the public survey and 44% in the NHS staff survey) than said they would like to see less use of them (21% in the public survey and 14% in the NHS staff survey) or about the same (24% in the public survey and 26% in the NHS staff survey).

This is also an area where familiarity with the topic had a clear impact. Among members of the public who said they had heard, read or seen a lot or a fair amount about automation and AI in health care, 61% said they would like to see more use of these technologies in future, with just 15% wanting to see less; among the equivalent group of NHS staff surveyed, 64% said they would like to see more use of these technologies in future, with just 12% wanting to see less.

Figure 10: Views of the public and NHS staff on the future use of automation and AI in health care

How much more or less would you like to see automation and AI used in health care in the future, or would you like to see about the same amount?

Second, we asked people how they thought automation technologies might impact on the nature of work in health care and the future roles of health care professionals.

The public survey asked people if they thought machines would ever replace doctors and nurses. The answer was a clear ‘no’, with 87% saying no and just 5% saying yes.

The NHS staff survey asked respondents to choose between two contrasting statements, one suggesting the primary impact of automation and AI would be positive for health care workers (improving the quality of work by supporting them and enhancing their capabilities) and one suggesting the primary impact would be negative (threatening jobs and status as technologies replace humans in an increasing number of areas of health care). More chose the positive statement than the negative, by 45% to 36%, and this margin increased to 57% to 32% among those who said they had heard, seen or read a lot or a fair amount about this topic.

Figure 11: Views of NHS staff on the main impact of automation and AI in health care

If you had to choose, which one of the following statements comes closer to your view?

Nevertheless, there were some contrasts in responses between occupational groups, highlighting the fact that automation and AI may have different impacts for different occupations. For example, the medical and dental staff surveyed opted for the statement that automation and AI would improve the quality of work by a margin of 23 percentage points (51% to 28%), while for nurses and midwives this margin was just 11 percentage points (48% to 37%); by contrast, health care assistants opted for the statement that automation and AI would threaten jobs and status, by a margin of 4 percentage points (41% to 37%). In managing the impact of automation and AI on the health care workforce, and helping workers to adapt to these technologies, it will be important to be aware that the impact may differ between occupational groups, and to be mindful of the social inequalities that could be created or exacerbated as a result – in terms of how this impact may differ by gender, ethnicity and socioeconomic status.

Despite these differing views about the potential impact of automation and AI, it is worth noting that in all occupational groups more respondents surveyed said they would like to see more use of automation and AI in future than less – for example, medical and dental staff by 53% to 9%, nurses by 42% to 17% and health care assistants by 32% to 19%. So trepidation about the potential impact of these technologies on the future of work by no means translates into blanket opposition to them.

4.2 Perceptions of the benefits and risks of automation and AI in health care

While there are many potential benefits from different applications of automation and AI in health care, a range of risks and challenges also need to be overcome. Some of these risks relate directly to the technology itself or the data being used, such as risks concerning data protection, data bias and ‘black box decision making’, while others relate to the effective deployment of technologies, such as redesigning workflows, training staff and ensuring safety.

We used our survey to explore UK public and NHS staff views of the benefits and risks of automation and AI, including some of those highlighted in earlier chapters.

Respondents were presented with a list of commonly cited benefits of automation and AI in health care and asked to pick up to three they thought were the biggest benefits. In both the public and NHS staff surveys the top three benefits chosen were the same: greater efficiency/freeing up staff time (picked by 40% in the public survey and 37% in the NHS staff survey), followed by quicker results/service (32% in the public survey and 35% in the NHS staff survey) and enabling more accurate tests/treatment (23% in the public survey and 24% in the NHS staff survey). The pattern of responses was broadly similar across different NHS occupational groups.

Figure 12: Views of the public and NHS staff on the benefits of automation and AI in health care

Which, if any, of the following do you think are the main benefits of using automation and AI in health care?

Similarly, respondents were presented with a list of commonly cited risks of automation and AI in health care and asked to pick up to three they thought were the biggest risks. For the public, the biggest risk was that ‘Health care will become more impersonal, with less human contact’ (picked by 45%), followed by ‘Health care professionals won’t question the decisions computers make, creating risks to patient safety’ (picked by 44%) and ‘It will be hard to know who’s accountable when things go wrong’ (picked by 32%). The top two risks chosen by NHS staff surveyed were the same, with loss of human contact picked by 51% and failure to question computer decisions picked by 42%. The third ranked risk, picked by 39% of staff surveyed, was that ‘These technologies might not work properly and might end up creating more work for staff’.

Again, the pattern of responses was broadly similar across different NHS occupational groups, with a few specific differences. For example, more medical and dental staff were concerned about technology not working or creating more work (which they ranked as the second biggest risk after loss of human contact) than about the failure to question computer decisions (which they ranked third). Also, nurses, midwives and health care assistants were slightly more concerned about the potential loss of human contact than other occupational groups were, with 55% from each group picking this as one of the biggest risks.

Figure 13: Public and NHS staff views of the risks of automation and AI in health care

Which, if any, of the following do you think are the main risks of using automation and AI in health care?

These findings echo the results of other surveys, including those cited in Chapter 1, which found that one of the biggest concerns about the use of these technologies in health care is the potential loss of human interaction. So it will be important to ensure automation and AI in health care are developed and used in ways that protect important clinician–patient interactions and are mindful of patient preferences. The survey responses also show that the phenomenon of automation bias (the failure to question computer decisions), a common theme in the academic literature, is a real concern for both patients and health care workers, and highlight the need to ensure strategies are in place to avoid it. Interestingly, despite some well-publicised controversies in recent years about the protection of personal data in health care, the risk that ‘Personal data might be shared inappropriately’ ranked lowest as a concern in both surveys.

It’s also worth noting that a sizeable minority of both the public (17%) and NHS staff surveyed (14%) said they did not think there were any benefits to using automation and AI in health care, while very few respondents said there weren’t any risks (3% in the public survey and 1% in the NHS staff survey). This suggests there may be a small segment of the population that is particularly sceptical about the use of these technologies in health care, and that it will be important to engage this group when attempting to build public confidence.

Finally, in addition to exploring views on the potential benefits and risks of automation and AI in health care, we also wanted to investigate how people weigh the benefits against the risks. All respondents who identified at least one benefit and at least one risk in response to the questions above were asked whether the benefits outweighed the risks or vice versa.

Opinion is balanced on this issue, with majorities of both the public (51%) and NHS staff (59%) saying the benefits and risks are ‘finely balanced’. And among the remainder of respondents, in both the public and NHS staff surveys, the numbers saying the benefits outweigh the risks (23% in the public survey and 16% in the NHS staff survey) were only marginally bigger than the numbers saying the risks outweigh the benefits (17% in the public survey and 15% in the NHS staff survey). Interestingly, this was an issue where greater knowledge or familiarity with the topic made less difference: majorities of those who said they had heard, seen or read a lot or a fair amount about automation and AI also said the benefits and risks are finely balanced (50% in the public survey and 58% in the NHS staff survey).

Figure 14: Views of the public and NHS staff on the balance of benefits and risks of automation and AI in health care

Which one, if any, of the following statements comes closest to your view?

This not only highlights the importance of attending to the risks of automation and AI and developing ways of mitigating and managing them, but also of ensuring that the public and NHS staff have confidence in the oversight and regulation of these technologies and the systems in place to manage these risks.

4.3 Considerations for policymakers, organisation and system leaders, and practitioners

There is clearly significant potential for automation and AI to improve health care. But success will depend on how this agenda is taken forward in practice. Here we highlight some key considerations for various groups, including policymakers, practitioners, and organisation and system leaders (including leaders in providers, health boards, integrated care systems, and regional and national bodies).

4.3.1. Key considerations for the design and implementation of automation technologies

In Chapter 3, we saw that the effective use of automation and AI in health care poses some important design and implementation challenges. Given that the introduction of a new technology into a live health care environment creates new risks (including risks to patient safety), a sophisticated approach to design, implementation and use with safety at its core will be essential. And because the full benefits of a new technology will only come from successfully embedding it into health care settings and pathways, it is crucial that support is in place to ensure the effective implementation and use of technologies in practice. This is particularly important to remember in light of the speed with which some technologies have been rolled out and implemented during COVID-19; while this speed has been impressive, it also makes it more likely that teams and organisations will need to revisit, evaluate and improve the use of these technologies in order to ensure they are being used safely and effectively and to maximise their benefits over the long term.

Considerations for designers and evaluators

  • Given the importance of human agency in health care it is critical that automation technologies are designed and used in ways that support and do not undermine person-centred care, and treat patients with dignity and respect. Our survey showed that the potential loss of this human dimension was a strong area of public concern. Further research is needed on public and staff attitudes here, as well as on the range of ethical and quality issues that automation and AI present, in order to better understand where some of the boundaries lie in how these technologies should and should not be used in health care. And there should be proper emphasis in the evaluation of these technologies on assessing factors such as acceptability, feasibility and impact on patient experience.
  • Where technologies are patient-facing, it is essential that designers work with a wide range of patients to identify and define their needs, and co-design solutions to meet those needs. Various resources exist to help to ensure service design is rooted in a deep understanding of the needs of citizens, such as the Scottish Approach to Service Design. Given the need to address health inequalities and make sure health technologies work well for all, designers will need to engage with and use data from a broad and diverse sample of potential users and consider the impact of new technologies on those who face particular challenges or barriers.
  • If benefits only emerge from successfully embedding new technologies into live work environments, then it is important to recognise that those designing health care technologies are also potentially designing and shaping roles, pathways and workflows. It is therefore important that technology designers work closely with health care staff to understand what they want, what will work and what will make their lives easier; otherwise, there will be a risk of poor fit between the technology, the needs of staff and the realities of the work environment. This should be a key focus for programmes supporting the development of new technologies, such as relevant Accelerated Access Collaborative programmes. Academic Health Science Networks (AHSNs) as well as bodies representing staff groups can play an important role here in facilitating dialogue and collaboration between industry and NHS staff. Technologies need to be capable of being customised and adapted to changing work environments and designed in ways that enable staff to take back control from them when situations require it.
  • The success of a technology depends on how well it performs in live health care settings, not the laboratory. Problems will arise if design assumptions don’t reflect the complexity of the work environment in which the technology will be used. So testing and evaluation in high-quality simulation environments and real-world settings is essential before concluding that technologies are safe and effective. The necessity of real-world testing was highlighted in the Long Term Plan in England, and it is important that the NHS continues to expand the infrastructure and funding available for real-world testing and evaluation, including through initiatives such as the Test Beds programme, NIHR’s Applied Research Collaborations and the Improvement Analytics Unit (a partnership that evaluates complex local initiatives in health care in order to support learning and improvement). A range of helpful guidance is also available to help innovators understand the evidence required for new technologies by the NHS, including the DHSC’s Guide to good practice for digital and data-driven health technologies and NICE’s Evidence Standards Framework for Digital Health Technologies.

Considerations for organisational leaders and those responsible for implementing change

  • Successful adoption of automation and AI technologies may well require pathway redesign and the creation of new roles, processes and ways of working. So it is necessary to consider the ‘human infrastructure’ and processes required for the safe and successful operation of these technologies as well as the technical infrastructure. Staffing requirements are likely to be higher during initial implementation as staff will need time and space for changes to be tested and iterated, and for new ways of working to evolve and bed in. Quality improvement skills and knowledge of methods such as Plan-Do-Study-Act cycles and simulation models can be particularly helpful for the effective implementation of new technologies.
  • Successful adoption requires training to equip staff to use new technologies safely and to their full potential, which in turn entails dedicated resources and staff time. Where automation technologies supplement or replace humans in performing specific tasks, it may also be necessary to ensure that staff still have opportunities to practise and develop important skills, in order to avoid de-skilling. This is important not only to ensure that staff have the capability to take over if an automated system fails, avoiding the ‘handover problem’ discussed in Chapter 3, but also to handle situations that may not be suitable for automation.
  • Successful adoption will rely on the support of those expected to use new technologies. This will require making the case for changes and co-designing them with patients and staff as an initial stage of any implementation plan. Without support for the use of automation, and ownership of new working arrangements, there is a risk that patients and staff end up being sceptical of or even rejecting these technologies, potentially because legitimate concerns have not been addressed. Indeed, in our survey of NHS staff, the risk that patients might not accept automation technologies or be suspicious of them was viewed as the biggest of several possible implementation challenges. Various resources are available that can support this process, such as the Health Foundation’s Using communications approaches to spread improvement.
  • Organisational leaders have an important role to play in creating a culture and environment conducive to implementing new technologies. This includes engaging with their workforce to build a shared vision around technology-enabled care, and setting out how new uses of technology align with wider organisational strategies and values. It might also include facilitating collaboration across teams and specialisms, such as finance, clinical governance, technology and innovation, public and patient engagement, HR and procurement.

Considerations for policymakers and system leaders to support implementation

  • Policymakers and system leaders need to fund ‘the change’, not just ‘the tech’. The implementation issues highlighted earlier (training, engagement, testing, evaluation, etc.) have resource implications and highlight the potential gap between the costs of the technology itself and the total costs required to adopt and use it effectively. It is therefore important that centrally led programmes driving the uptake of automation and AI factor in these costs. More generally, the forthcoming multi-year spending review and the next stage of national workforce strategies should explicitly address the workforce, skills and infrastructure needs of the NHS in order to exploit new and established technologies successfully over the long term.
  • While support has historically often been provided to more advanced, ‘digitally mature’ organisations leading innovation, such as the Global Digital Exemplars programme, it is also critical to support organisations lower down the curve to build the infrastructure and capability that they need to be able to deploy automation and AI effectively. In England, NHSX’s Digital Aspirant programme is a step in the right direction, though an enhanced and more regular central funding stream will be needed to bring digital maturity across the NHS up to an adequate level.
  • Policymakers should ensure the right incentives and payment models are in place to support the development, testing and adoption of technologies at a local level. Without this, health care providers may not see automation and AI projects as financially viable and therefore be reluctant to support them, particularly in relation to technologies for which there is not yet widespread evidence to demonstrate financial benefit. In exploring new payment approaches, NHS England and NHS Improvement should consider models that support the upfront costs associated with adopting and implementing innovations – such as costs that arise from backfilling staff time, redesigning pathways and providing training, which are generally not covered by current payment models.
  • Policymakers and professional bodies need to ensure that education and training for both clinical and non-clinical professions provides the knowledge and skills health care workers will need in future to use automation and AI technologies safely and effectively and supervise their operation. There is an important role for HEE, including through its Digital Readiness Programme and Health and Care Digital Capabilities Framework, as well as for the royal colleges in raising awareness and leading the development of these capabilities within the NHS. This may also help address the scepticism that our poll showed some NHS staff hold towards automation and AI. In addition, given that many of these technologies are data driven, there should also be recognition of the importance of analytical capability within the NHS, with support for the recruitment and training of skilled analysts.
  • Given the importance of building support around new proposals for automation, policymakers and system leaders need to promote the automation agenda in a way that helps generate support for it. In particular, while some automation technologies hold out the prospect of cost reductions, it would be a mistake to describe automation primarily in these terms. This is not only because staff and patients may resist such changes if they see them as driven by funding pressures or workforce shortages, but also because the gains these technologies offer for care quality, patient experience and staff experience are likely to resonate more deeply with the intrinsic motivations of those who work in and use the NHS.

4.3.2. Key considerations for automation strategy

The European Commission’s 2019 report The future of work noted that ‘automation outcomes are not pre-determined and will be shaped by the policies and choices we make’. The Health Foundation’s recent work on Shaping Health Futures highlights how major developments such as automation can be shaped with the right long-term planning and preparation. Policymakers and system leaders will need to scan the horizon for developments and opportunities, engage with the public, the NHS workforce and industry to shape the development of automation and AI, and plan for the future deployment of these technologies, ensuring it is supported with appropriate investment.

Considerations for policymakers and system leaders in developing automation strategy

  • It is important that policymakers and system leaders have realistic expectations for the automation agenda. Health care is a complex, adaptive system. Problems cannot necessarily be fixed by a ‘nice and clean’ technological intervention; rather, technology will be one part of a wider sociotechnical solution. Furthermore, given the range of implementation challenges that exist, it is not surprising that, as the 2016 Wachter Review observed, deriving the full benefits of new technology can take time – in some cases several years. So while there will understandably be hope that new technologies can help the NHS meet the unprecedented demand pressures it will face over the coming years, there will also need to be realism about the timescales needed for quality and productivity gains to materialise (and expectations may need to be tempered further given the constraints posed by financial and workforce pressures, as well as the need to support staff recovery from the impact of COVID-19). On the other hand, there could be opportunities for more immediate gains from established technology programmes that may now be maturing. So it will be important for the NHS to have strategies in place for getting the most out of existing and recently adopted technologies such as RPA, as well as supporting the development of new technologies like AI.
  • Considering the ethical, safety and quality issues that arise from the use of automation and AI in health care, as new technologies are developed and tested it is important that government engages with patients, staff and society as a whole to inform decisions about where these technologies should and shouldn’t be used. This will be particularly important given that public opinion is currently divided on whether automation and AI in health care are a good thing or a bad thing, and that a majority think the benefits and risks are ‘finely balanced’.
  • Given the nascent state of AI within health care, policymakers and research funders should provide long-term support to build the evidence base required for these technologies to be deployed in the NHS on a wider scale. A variety of important work is also underway to explore the specific requirements of evaluating and reporting evidence around applications of AI in health care, including the SPIRIT-AI and CONSORT-AI initiatives, designed to improve the transparency and completeness of reporting of clinical trials of interventions involving AI. As NICE will have an important role to play in assessing evidence and producing guidance on these technologies, it should also review health technology evaluation methods to ensure these are appropriate for AI and automation.
  • The regime of regulation, standards and assurance will need to be developed and strengthened to address challenges such as bias, transparency and accountability – particularly in relation to autonomous decision making – and to ensure that automation and AI technologies are safe, effective and ethical. As the AI Council’s AI Roadmap notes, better regulation and standards, along with public engagement and transparency, will also help to build public trust and confidence in AI. As guides, frameworks and initiatives are developed, such as the DHSC’s Guide to good practice for digital and data-driven health technologies and the AI Ethics Initiative, the critical challenge will be to ensure that principles and standards are translated into action, something that should be an important focus for government, regulators, royal colleges and the NHS.
  • Most automation technologies are developed commercially, putting a premium on ensuring the health technology ‘ecosystem’ works in the interests of the NHS. Policymakers and system leaders should proactively work to influence the development of new technologies so they meet the priorities of the NHS, rather than simply allowing development to be shaped by the market. As part of this, the NHS can do more to identify its priorities for technology development and signal them to industry. Building on the ambitions outlined in the NHS Long Term Plan in England, this includes strengthening the role of NHS bodies in horizon scanning, aggregating intelligence and creating appropriate links across the NHS–industry interface to support technology development and testing. There also needs to be more emphasis on developing and adopting applications of technology that can support the NHS with administrative tasks that pose a significant burden, such as resource management and letter work. Technologies for these kinds of everyday tasks don’t steal the headlines in the same way as cutting-edge medical technologies but could make a huge difference in freeing up staff and helping to address current pressures.
  • Automation and AI will have important consequences for many occupations in health care. Some occupations will no longer need to do particular tasks that are wholly automatable, resulting in role changes – and in some cases the redeployment of staff to other work. More commonly, these technologies will expand the abilities of staff and provide an opportunity to redesign roles and improve job quality, as well as creating new roles. Workforce strategy and planning, and curriculum development, will therefore need to take into account the growth and impacts of automation and AI, including the impact on health care workers. Given the differing impact these technologies are likely to have on different occupational groups, it will be important that this includes a vision of role development for those groups most likely to be affected by the adoption of new technologies, along with actions to avoid automation entrenching and widening social inequalities across the NHS workforce.
  • The extent to which the potential of automation is realised will depend on whether health care professions and all parts of the health care workforce see it as an opportunity or a threat, and how they respond. While it is easy to look at professions as ‘indivisible lumps of endeavour’ that will either be affected wholesale or not at all, the reality is that automation will affect all professions and occupational groups on a task-by-task basis, providing an opportunity to redesign roles and in some cases develop new ones. In this respect, we believe health care professions should see automation as an opportunity, not a threat. The benefits of automation will be maximised if professions and health care workers are supported to adapt to technological change and to focus on where human abilities add most value, while ensuring care quality is protected. Examples of professional bodies that are engaging with the automation agenda include the Royal College of General Practitioners through its Tech Manifesto and the Royal College of Radiologists, which has issued a position statement on AI and is producing guidance for members to help them understand the potential role for AI in future clinical practice.
  • While national health policies are keen to take a ‘digital first’ approach, there is a risk, highlighted in Chapter 3, that automation and AI technologies will create or widen health inequalities. So digital health policy must be as inclusive as possible, catering to the needs of diverse patient groups, supporting patients with the skills to access and use digital services, and ensuring that those who need to, or would prefer to, can access non-digital options in a way that doesn’t exclude them from receiving high-quality care. More generally, policymakers and NHS leaders must ensure new technologies promote ‘levelling up’ on health rather than creating or widening health inequalities. This means evaluating the effects of new applications of automation and AI and taking actions to avoid or mitigate negative impacts for particular groups. It also means being much more proactive in identifying applications of technology that can reduce health inequalities and supporting their development and deployment in practice.

4-4 Conclusion

In this report we have looked at a variety of opportunities and challenges for automation and AI in health care, drawing on learning from the Health Foundation’s programmes and research – particularly a recent study by the University of Oxford of the automation of administrative tasks in primary care – as well as a wide range of other academic studies. In doing so, it has been our intention not only to describe some of the major areas of application of automation and AI in health care but also to explore the challenges, constraints and practical reality of making these technologies work on the ground: to move beyond simply considering their potential and engage with what it will take to realise long-term benefits for all parts of the health and care system.

Underlying much of the analysis is the fact that health care, and work in health care, is different from other sectors of the economy – particularly manufacturing, from which much automation thinking has developed. Health care is a service, co-produced with patients and families, not a product. Among other things, this means that health care always has a human dimension, that work in health care can be responsive and dynamic, and that tasks can be complex and unpredictable. This in turn means that the logic of automation applied to health care will be different to that applied in manufacturing and other sectors, often supporting rather than replacing workers.

There will be many exciting opportunities to apply automation and AI in health care in order to improve quality of care for patients and quality of work for staff. But, as this report has highlighted, we should also be mindful of the possible constraints on the application of these technologies in health care and the work required to realise the benefits in practice.


†††††† ‘Net more’ is a combination of the two response categories, ‘much more’ and ‘a little more’. ‘Net less’ is a combination of the two response categories ‘a little less’ and ‘much less’.

‡‡‡‡‡‡ Validation of individual device design is not enough to understand performance in context.

§§§§§§ Landman AB et al. describe the different ways in which simulation environments can be used to develop and test health care technologies.
