Implications for MDTs

MDTs are a core part of efforts to provide more integrated care for people in the UK and other countries. MDTs are not new and are widely thought to be needed to deliver high-quality care for people with chronic conditions. They also appear to be here to stay: the ambition for ICSs and PCNs in England involves developing MDTs to join up health and care services in the community. Expanded team-based working in primary care – with GPs working alongside physiotherapists, pharmacists, social workers and others – is also seen as one way to help mitigate growing shortages of GPs in England.

What we found

Despite widespread policy support, evidence on the impact of community-based MDTs is mixed. Our evaluations of three MDTs involved in new care model vanguard programmes found that MDTs did not reduce emergency hospital use – and may even have led to increases – at least in the short term. Our longer term evaluations of the broader programmes in which these MDTs were implemented found some evidence of reductions in emergency hospital use. However, this took between 3 and 6 years – and we could not assess the contribution of MDTs to these reductions. The broader evidence is often of poor quality and paints a mixed picture – though some studies suggest that broader integrated care interventions involving MDTs can improve patient satisfaction, perceived quality of care and access. Other more recent studies of MDTs in England also show mixed effects on health care usage.

The evidence summarised in this briefing pre-dates the pandemic. The IAU evaluations of MDTs covered periods up to 2018, while the wider evidence refers to studies predominantly pre-dating 2018 and the IAU evaluations of the wider integrated care programmes covered periods to March 2019 or February 2020. Since then, pressures on services have grown and staff shortages have widened. Nonetheless, the MDTs we evaluated are likely to share similarities with community-based MDTs currently being implemented in England.

What the findings mean

A lack of clear evidence on impact does not necessarily mean that community-based MDTs ‘don’t work’. There may be several explanations for limited or mixed evidence – from the assumptions about how MDTs function to the way MDTs are set up, their wider context and how they are evaluated. These explanations have implications for policymakers and local leaders seeking to support new care models.

Unrealistic assumptions about MDTs

One explanation may be that the rationale for some MDTs is underdeveloped or flawed, contributing to unrealistic assumptions about what MDTs can deliver. For example, there is often an assumption that by providing more coordinated care in the community, MDTs will lead to patients needing less emergency hospital care. But MDTs often target patients at highest risk of hospital admission and with the greatest health and care needs. Although these patients will likely benefit from additional support, it may not be possible to prevent a hospital admission. MDTs may also affect the behaviour of patients or staff in unanticipated ways – for example, if MDT staff are more risk averse because they do not know the patient’s medical history.

There may also be unrealistic assumptions about how quickly MDTs can achieve their aims. New models of integrated care are complex to develop and take time to implement – often over several years. Even then, it can take time for a patient’s health to improve as a result of an intervention. And although MDTs could lead to better health in the long run, they may identify unmet need that could increase the need for emergency care at least in the short term.

MDTs might not be implemented as intended

Another potential explanation is that MDTs are not implemented as intended, or in line with the underlying theory of change. For example, MDTs may receive fewer referrals than expected, or referred patients may be more severely ill than planned (eg due to a shift in referral pathways), limiting the MDT’s ability to proactively address care needs as intended.

Implementation of MDTs could also fall short. A combination of factors shapes how well MDTs work – from how teams are organised and managed, to the wider policy context in which they are developed. At a team level, a mix of studies identifies factors that can support effective MDT working, such as strong relationships, staff resources, clarity on staff roles and responsibilities and lines of accountability, strength of management, and access to shared data. Broader evaluations of integrated care initiatives in England also point to wider factors shaping the success of local integration efforts, such as the history of joint working between organisations, relationships between local leaders, staff engagement in the changes (especially of GPs) and conflicting changes in national policy. Patient voice in MDTs and broader initiatives is also often lacking, yet can provide meaningful input into their development and improvement. Table 1 in the Annex lists some of the factors that shape team working.

Wider contextual factors shape the impact of MDTs

MDTs are just one component in a complex system of interventions that interact to shape how care is delivered. In the new care models vanguard programmes, community-based MDTs were implemented alongside a mix of other interventions to coordinate services for high-risk patients, as well as changes in health and care governance and decision making and additional funding.

The broader context can shape the impact of MDTs in both directions. For example, a recent review of evidence on the impact of integrated care models found that while evidence on the impact of MDTs was mixed, UK studies of MDTs implemented alongside other interventions generally reported more positive results. This included perceived improvements in quality and access and some reductions in hospital use.

On the flip side, analysis of MDTs introduced in the integrated care pioneer programme identified lack of services in the community as a common barrier to success. Local authority budgets have been cut substantially over recent years – public health budgets, for instance, fell by a quarter per person between 2015 and 2020 – with funding falling furthest in more deprived areas. Diminishing community resources are likely to affect the potential impact of MDTs, as well as adding to the pressure on unpaid carers, who play a crucial role in bridging gaps in community support.

Evaluations may not be able to detect an effect

A final explanation is that evaluations of MDTs may be of insufficient quality to reliably assess their impact. MDTs are complex interventions – and evaluations need to be carefully designed to understand whether the intervention is having an effect and for whom. Yet evaluations of new models of care are often short term, small scale and lack robust methods – frequently due to lack of easily available data. Evaluations may also fail to capture important activities or outcomes, such as those that matter to patients, carers or staff. This could be due to an underdeveloped theory of change or lack of data.

Broad evaluations of MDTs providing care to patients with a range of conditions and needs may mask differing effects within subgroups of patients. For example, studies by the IAU on enhanced support in care homes found that while the overall effect of the programme was broadly positive, an analysis of the effect separately in nursing and residential care homes revealed large reductions in emergency hospital admissions in residential care homes but none in nursing homes.

The role of monitoring and evaluation

While there is guidance on MDTs and evidence on enablers and barriers to effective team working, there is no single blueprint for MDTs that would guarantee better health outcomes, reduced emergency hospital use or improved patient experience. In some contexts, MDTs and integrated care initiatives have been shown to have a positive impact on patients and the wider system, but this is not always the case. Given the diversity of MDTs and the contexts in which they operate, this is perhaps not surprising. The effect of MDTs depends on many factors, including team resources and skills, staff engagement, IT resources, access to data, population characteristics, and broader context such as local community services and overall levels of investment.

Therefore, to realise the benefit these initiatives can have, implementation needs to be carefully planned and supported by ongoing monitoring and evaluation. Applying learning health system approaches and providing rapid feedback on whether MDTs are being implemented as planned and achieving the results expected will allow for ongoing learning and improvement.

Recommendations

In designing an approach to monitoring and evaluation we make four recommendations:

1. Develop a clear, evidence-based ‘logic model’

A clear, evidence-informed logic model (or theory of change) for how service changes are expected to lead to improvements in care – along with the resources needed and factors that will shape progress – can help identify the support needed to make MDTs work and set meaningful goals.

This requires thinking through the planned service changes in detail and identifying underlying assumptions, such as referral routes, target population and access to services in the community. While existing evidence should be used to develop the logic model, the model should also be rooted in the local context and used as an opportunity to challenge thinking on how the model is expected to work. It is useful to put expected or target numbers against aspects of the logic model (eg the required number of staff hours, the number of patients enrolled each month and the duration of support), as well as the timeframe and magnitude of change expected in outcomes. These can form a basis for planning as well as for assessing actual implementation.
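To illustrate the last point, a logic model's targets can be written down explicitly so that actual implementation can later be checked against them. The sketch below is purely hypothetical – the metric names, target values and 90% threshold are assumptions for illustration, not figures from this briefing.

```python
from dataclasses import dataclass

@dataclass
class LogicModelTarget:
    """One quantified assumption from a (hypothetical) MDT logic model."""
    name: str
    target: float       # expected value from the logic model
    observed: float     # value actually recorded during implementation

    def attainment(self) -> float:
        """Fraction of the target achieved (1.0 = fully on plan)."""
        return self.observed / self.target if self.target else 0.0

# Invented example targets for a hypothetical MDT
targets = [
    LogicModelTarget("patients enrolled per month", target=40, observed=28),
    LogicModelTarget("staff hours per week", target=150, observed=150),
]

for t in targets:
    # 90% threshold for "on track" is an arbitrary illustrative choice
    status = "on track" if t.attainment() >= 0.9 else "review"
    print(f"{t.name}: {t.observed}/{t.target} ({status})")
```

Pairing each expected number with its observed value in this way turns the logic model from a planning document into something that can be checked month by month.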

A useful step-by-step guide to developing a logic model is available from the Strategy Unit. A logic model should be designed with input from staff and patients, and the public.

2. Ensure data collection is part of implementation

Collecting data on key aspects of the logic model, including resource inputs, activities and outcomes, should be part of the implementation of MDTs. Access to timely data is crucial to allow local teams to monitor the implementation and impact of interventions, test assumptions and identify opportunities for improvement.

Where possible, data should be recorded routinely in a way that facilitates easy extraction and analysis for monitoring and evaluation purposes. Some data will be available from existing systems such as patient records. Where relevant information is collected in several datasets, data may need to be linked. Or new data collections may be needed, for example to capture data on broader outcomes beyond hospital use to provide a fuller picture of the outcomes that matter to patients and their families, such as experience of services or quality of life. However, manual data collections can be resource intensive.
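Where information on the same patients sits in several systems, linkage on a shared identifier is the basic operation. A minimal sketch with entirely invented records and a hypothetical pseudonymised patient ID:

```python
# Hypothetical example: linking primary care and hospital records on a
# shared pseudonymised patient identifier, so activity and outcomes for
# the same person can be analysed together. All data here is invented.
primary_care = {
    "p001": {"age": 82, "long_term_conditions": 3},
    "p002": {"age": 76, "long_term_conditions": 1},
}
hospital = {
    "p001": {"emergency_admissions": 2},
    # p002 had no hospital contact in this period
}

linked = []
for patient_id, gp_record in primary_care.items():
    record = {"patient_id": patient_id, **gp_record}
    # Left join: keep every primary care patient, recording zero
    # admissions where no hospital record exists.
    record.update(hospital.get(patient_id, {"emergency_admissions": 0}))
    linked.append(record)

for row in linked:
    print(row)
```

The detail that matters for evaluation is the left join: patients with no hospital contact must appear in the linked data with zero admissions, not be dropped, or the analysis will overstate hospital use.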

Local linkages of data sources (eg hospital, primary care and social care or ambulance data) have provided new insights and there are instances where new data collections are being developed. To be most effective, there needs to be a systematic, national approach to collecting a wide range of output and outcome metrics for all patients, not just those receiving a new service. Good-quality data on both patients receiving an intervention and those who are not will allow for more robust evaluation.

Collecting data on costs associated with setting up and running MDTs will allow for assessments of value for money. These should ideally include not only staff and infrastructure costs but also data on opportunity costs, such as diverting staff from other services.

3. Monitor inputs, activities and short- and medium-term outputs

Regularly monitoring activities, for example the number of referrals, referral routes and characteristics of patients referred, can provide early indications of whether the MDT is working as intended and opportunities to learn and course-correct. As such, done well, monitoring is a vital part of evaluation, providing rapid, actionable insights. Combining monitoring with qualitative insights can help understand patient and staff perspectives on the service changes being introduced and the underlying mechanisms. For example, if lower than expected numbers of referrals are due to a lack of clarity on referral criteria, communication to potential referrers on referral criteria can be adapted.
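In practice, the kind of monitoring check described above can be very simple. A hypothetical sketch – the referral target, the monthly counts and the 75% review threshold are all assumptions for illustration:

```python
# Hypothetical monitoring check: compare monthly MDT referrals against
# the number expected in the logic model and flag months that fall
# short, prompting a review of referral routes and criteria.
expected_per_month = 40                      # assumed logic model target
referrals = {"Jan": 38, "Feb": 22, "Mar": 41}  # invented monthly counts

flagged = [
    month for month, count in referrals.items()
    if count < 0.75 * expected_per_month     # review threshold is an assumption
]
print("Months needing review:", flagged)     # Feb only: 22 < 30
```

A flagged month is a prompt for the qualitative follow-up described above – for example, checking with potential referrers whether the referral criteria are understood – rather than an answer in itself.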

4. Undertake robust evaluation of outcomes

As MDTs ultimately aim to improve outcomes – whether these are patient outcomes or health service efficiencies – it is important to assess if this has been achieved, though it can be difficult to do this well. Outcomes can be slow to emerge – requiring longer term studies – or are not always well defined or routinely recorded (particularly for those patients who might form a comparison group). It is also not always straightforward to establish whether an observed change in outcomes is due to a particular intervention.

Robust evaluation requires careful design and reliable data on all patients in the analysis (including the comparison group) and their interactions with services. A good logic model will help inform a robust evaluation, for example on data collection, study population and potential subgroups, outcomes and length of patient follow-up, and help identify potential biases.

Evaluations often compare patient outcomes before and after the intervention has been introduced (‘pre-post’ studies). But this approach risks identifying changes that would have happened anyway without the intervention – a phenomenon known as regression to the mean. A more robust method than pre-post studies, used by the IAU in its studies of MDTs, is to compare outcomes of MDT patients with the outcomes of a similar group of patients who were not cared for by an MDT. Hospital and primary care data, if available, can be used to identify a group with similar characteristics, such as age and long-term conditions. But this method can also have limitations: routinely collected data may not adequately record severity of disease, social isolation or other social factors that affect emergency hospital use, and details on comorbidities are often incomplete in hospital data.
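Regression to the mean can be demonstrated with a small simulation: if patients are selected because of high emergency admissions in one year, their admissions fall the next year on average even with no intervention at all – exactly the drop a pre-post study would misread as an effect. All numbers below are invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def yearly_admissions(risk: float) -> int:
    # Admissions = stable underlying risk plus year-to-year noise.
    return max(0, round(risk + random.gauss(0, 1.5)))

# Invented population: each patient has a fixed underlying risk level.
risks = [random.uniform(0, 3) for _ in range(5000)]
year1 = [yearly_admissions(r) for r in risks]
year2 = [yearly_admissions(r) for r in risks]  # no intervention at all

# Select the 'high-risk' group as an MDT might: 3+ admissions in year 1.
selected = [i for i, n in enumerate(year1) if n >= 3]
pre = sum(year1[i] for i in selected) / len(selected)
post = sum(year2[i] for i in selected) / len(selected)
print(f"pre: {pre:.2f}, post: {post:.2f}")  # post < pre despite no change
```

The selected patients' year 1 counts are high partly because of chance, so their year 2 counts drift back towards their underlying risk. A comparison group selected the same way absorbs this artefact, which is why the matched design described above is more robust than pre-post comparison.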

Robust evaluation requires resources and skills. There are a growing number of evaluation teams that specialise in service evaluation. However, for a true learning health system, these skills and methods also need to be embedded within local teams. There are established local collaborations, including universities and health or social care organisations, such as Applied Research Collaborations, and numerous local teams doing insightful analyses – as well as a growing acknowledgement in the NHS of the importance of data and analysis to inform decision making. There are also opportunities for better networking and peer support, knowledge sharing, available written resources and open access code, as well as training. But more needs to be done. As well as having the resources to develop analytical roles and embed monitoring and evaluation, local and national decision makers throughout the system need to understand and recognise the value of analysis to derive insights, if analysis is to inform decision making.


A learning health system is a team, provider or group of providers that has developed the ability to learn from the routine care it delivers and improve as a result – crucially, doing so as part of business as usual.

Both logic models and theories of change are tools to help design and evaluate interventions. Logic models articulate the underlying theory of change that shapes the intervention and help build an understanding of goals, activities and expectations, by documenting the expected inputs, outputs and outcomes in a simple and logical sequence of steps. A theory of change typically considers the intervention within the larger system, including external factors and how they interact with the intervention.
