THE IRONIES OF AUTOMATION: LESSONS FOR REMOTE PATIENT MONITORING

Citation: Lock D, “The Ironies of Automation: Lessons for Remote Patient Monitoring.” ONdrugDelivery, Issue 149 (Jun 2023), pp 12–14.

Dan Lock considers how to overcome potential pitfalls and make the most of remote patient monitoring for the benefit of both patient care and clinical trials.

The basic idea of remote patient monitoring (RPM) is that a subject or patient uses a digital interface at home, such as a mobile app, to collect regular data relating to their health, which is then accessible to their healthcare provider. Depending on the indication, the system may be augmented with a wearable device, such as a blood pressure monitor or connected glucose monitor. The healthcare provider can then be notified of any concerns arising from the data and can access an analysis of data trends through a digital interface in their clinic, such as a web portal.
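To make that data flow concrete, the sketch below shows, in illustrative Python, how a single home reading might be screened against alert thresholds before being surfaced in a clinician’s portal. It is a minimal sketch only – the data model, metric names and threshold values are hypothetical assumptions, not taken from any particular RPM product.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    """A single home measurement uploaded from the patient's app."""
    patient_id: str
    metric: str        # e.g. "systolic_bp" (mmHg) or "blood_glucose" (mmol/L)
    value: float
    taken_at: datetime

# Hypothetical per-metric acceptable ranges; a real system would use
# clinically validated, often patient-specific, thresholds.
THRESHOLDS = {
    "systolic_bp": (90.0, 140.0),
    "blood_glucose": (4.0, 10.0),
}

def screen_reading(reading: Reading) -> str | None:
    """Return an alert message for the clinician portal, or None if the
    reading is in range and only needs logging for trend analysis."""
    low, high = THRESHOLDS[reading.metric]
    if not low <= reading.value <= high:
        return (f"Patient {reading.patient_id}: {reading.metric} = "
                f"{reading.value} outside expected range [{low}-{high}]")
    return None
```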

The hope is that more and better data, along with algorithm-generated analysis of that data, will produce significantly better health outcomes, enabling healthcare professionals to make better decisions about ongoing care, for example, in the treatment of conditions such as cancer or end-stage renal disease. RPM could also help patients feel more engaged with their care, as the app may give them insights into their progress and offer personalised advice and content tailored to their immediate needs.

In healthcare settings, clinicians could be made aware of potential problems as soon as they arise, allowing them to intervene accordingly, rather than waiting weeks between appointments to discover a concern and then relying on patients to remember the frequency and intensity of symptoms.

“Knowing that complications can be reliably detected and dealt with before they escalate would give clinicians confidence to use new therapies with a wider range of patients.”

Intervening early could make it easier and cheaper to handle any problems. For example, a home haemodialysis patient with rising potassium levels could be advised on dietary changes or given medication, rather than requiring an urgent – and expensive – hospital admission because the problem went undetected until too late.
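A minimal sketch of how such a rising trend might be flagged is shown below. The least-squares slope test and the slope limit are illustrative assumptions for this article, not clinical guidance.

```python
def rising_trend(daily_values: list[float], slope_limit: float) -> bool:
    """Flag a sustained rise using a least-squares slope over daily readings.

    Assumes one reading per day, oldest first; the slope limit is an
    illustrative placeholder, not a clinical threshold.
    """
    n = len(daily_values)
    if n < 3:
        return False  # too little data to call a trend
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_values))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope > slope_limit

# A week of serum potassium readings (mmol/L) creeping upwards:
potassium = [4.4, 4.5, 4.7, 4.8, 5.0, 5.1, 5.3]
print(rising_trend(potassium, slope_limit=0.05))  # True -> prompt an early review
```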

Early detection may also mitigate risks with cutting-edge treatments where there are serious side effects in a small percentage of patients. Knowing that complications can be reliably detected and dealt with before they escalate would give clinicians confidence to use new therapies with a wider range of patients.

This widening of the net is also valuable for clinical trials of new drugs and treatments, as the remote aspect enables trials to recruit a more diverse set of patients, which in turn improves the quality of the collected data.

Unsurprisingly, then, interest in RPM during the drug development process is increasing. It gives researchers valuable data and analysis in real time, often speeding up the process, as well as enabling a wider spread in terms of both geography and risk profile.

If the drug delivery industry is to make the most of this technology and ensure its successful adoption, it needs to ensure that clinicians and patients alike understand both its potential and its limitations. The industry also needs to avoid the twin dangers of overreliance on automation and excessive scepticism about it. The former could lead to a de-skilling of clinicians, while the latter could result in the technology merely alienating patients, as their doctors may be unable to explain the benefits.

THE PSYCHOLOGY OF AUTOMATION

Human factors researchers have long been interested in how automation affects the thinking and behaviour of the human beings tasked with overseeing it. From autopilot in planes to safety systems in nuclear power, the danger is that operators take automated systems for granted. It is not just that they start failing to pay due attention as they get used to the automation handling everything, but that over time they experience “skill fade” as their expertise wanes from lack of use.

In 1983, University College London (UK) psychologist Lisanne Bainbridge coined the term “ironies of automation” to describe the fact that “the more advanced a control system is, so the more crucial may be the contribution of the human operator”.1 When standard operations are automated, the human only becomes involved in the trickier “edge cases” that fall outside normal conditions. If they lack the skill to diagnose the situation and act accordingly, it can lead to disaster.

Figure 1: Can we trust “drivers” who are used to being passengers to act decisively when needed?

Since the advent of the digital age in the 1980s – and especially with machine learning and AI – the relevance of these ironies of automation has only grown. Self-driving cars are the most obvious example – can we trust “drivers” who are used to being passengers to act decisively when needed (Figure 1)? A potential de-skilling of clinicians accustomed to automated systems is just as concerning. And, of course, the other human factor when it comes to RPM is the patients. The industry needs to determine not only how the technology will be used but also how its implementation will impact patients’ perceptions of their treatment.

“There are two scenarios in which RPM could fail to meet its potential, both in healthcare and clinical trial contexts – namely, if there is under-trust in the system, or if there is over-trust.”

UNDER-TRUST AND OVER-TRUST

There are two scenarios in which RPM could fail to meet its potential, both in healthcare and clinical trial contexts – namely, if there is under-trust in the system, or if there is over-trust.

Clinicians may under-trust the system precisely because they are highly skilled and accredited professionals. They may be sceptical of an automated system that can supposedly “do their job” for them, especially if the system comes to conclusions that differ from their own.

Under-trust could be a particular problem if the algorithm used by the system to interpret data is too complex for the clinician to understand, at least without specialist training. They are unlikely to trust the system’s conclusions without at least a rough understanding of how it came to them. This is known as “explainability”. Clinicians could put any “differences of opinion” down to the system’s lack of nuance or completeness. For example, the clinician might know about some aspect of the patient’s situation and assume the algorithm has failed to address it (which might or might not be correct).
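One practical response is for the system to attach its reasoning, and its known blind spots, to every conclusion it reports. The sketch below shows one hypothetical shape such an output could take; all field names and example values are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedAlert:
    """An alert that carries the evidence behind it, so a clinician can
    weigh the algorithm's conclusion against their own knowledge."""
    conclusion: str
    confidence: float                            # the system's own certainty, 0-1
    reasons: list[str] = field(default_factory=list)
    not_considered: list[str] = field(default_factory=list)

alert = ExplainedAlert(
    conclusion="Possible fluid overload",
    confidence=0.72,
    reasons=[
        "Weight up 1.8 kg over 3 days",
        "Systolic BP above 30-day baseline on 4 of the last 5 readings",
    ],
    not_considered=["Recent medication changes", "Dietary intake"],
)
```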

Clinicians might also fear that RPM introduces unnecessary complexity to patient care, adding yet more factors that could go wrong. Relatedly, they may be reluctant to acquire the new skills needed to make the most of the system, or even to recognise a false alarm. This is particularly the case if clinicians feel that using the system involves a loss of autonomy. If, instead of entering into a dialogue with the system, everything is one-way traffic, they can understandably feel they are being de-skilled rather than empowered.

Whatever the causes of under-trust, the consequence is that the intended benefits are lost. Instead of greater effectiveness and efficiency – and a reduced workload for clinicians – there will be unnecessary duplication of work and over-complication of patient care.

The flipside of under-trust is over-trust. Potentially, clinicians could come to trust the system too quickly. Precisely because the algorithm “works” most of the time, it may be tempting to take its reliability for granted rather than checking its results. At worst, users could become as “automated” as the system itself, developing an unquestioning, habitual response to any given prompt. In a healthcare system increasingly under pressure to meet targets, there could be a temptation to let the algorithm “check” data, so clinicians do not have to.

Figure 2: How can we ensure that RPM will save clinicians’ time and is trusted?

If the system is effectively making decisions on the clinician’s behalf, their situational awareness will decline, creating a vicious circle of deteriorating performance. Then there is the simple fact that skills fade from lack of practice. Moreover, a system that can be used by a less skilful operator is more likely to be used inappropriately. For example, not all data should be shared with patients or less qualified healthcare staff (Figure 2).

REMOTE PATIENT MONITORING 2.0

At TTP, we have conducted research to understand how the ironies of automation apply to RPM and how to mitigate their effects. Some high-level findings are shared here.

One finding was that patients want frequent reassurance that the system is producing valuable results, even if there is no cause for concern. If they get the feeling their data is just disappearing “into the ether”, some will disengage. However, medico-legal concerns may prevent an RPM system from reassuring patients that all is well with their own treatment, which is likely to frustrate them. This is especially important when it comes to clinical trials, given the costs and inconvenience should an unhappy subject withdraw from the study.

It was also found that patients do not always trust clinicians to monitor their data and act accordingly, for example, because they appreciate how overworked some healthcare professionals are. Consequently, most patients have no qualms about calling if they see an unusual reading. Even if the RPM algorithm is working perfectly, this could cause RPM to actually increase clinicians’ workload – an additional “irony of automation”.

The study also showed that, for their part, clinicians tend to focus on the data points that they have been trained to interpret and understand, neglecting the system’s more sophisticated, bespoke analytical insights. This is a classic example of under-trust and means clinicians may not always make the most of the system’s potential. This is another facet of “explainability” – even if the system is right, if you do not understand why it is right, it is hard to trust it.

“There is a difficult balance between providing patients with enough information to keep them engaged and giving them a false sense of being qualified to make significant decisions about their treatment.”

Clinicians are also wary of patients getting or inferring information directly through an app and making uninformed decisions. There is a difficult balance between providing patients with enough information to keep them engaged and giving them a false sense of being qualified to make significant decisions about their treatment. New research is needed that goes beyond “ease of use” concerns to shed light on how patients understand and relate to this technology. This would pave the way for a better informed and more nuanced application of RPM, ensuring that both clinicians and patients understand its benefits and limitations.

These findings indicate that over-trust is currently much less of a problem. However, it may become more prevalent if RPM starts being administered by less qualified professionals. This has never been the intention, but it should be guarded against, as some in managerial positions may view RPM as an opportunity to make savings on clinical budgets by delegating responsibilities to cheaper, less-qualified personnel.

If anything, even highly qualified clinicians would benefit from a certain amount of further training in the technology and algorithms behind RPM. Precedents do exist – for example, anaesthetists today typically have advanced training in how ventilators work so they can spot functional issues quickly and act accordingly.

The goal should be a level of trust that is finely calibrated to the actual reliability of the system. One way of achieving this could be to ensure that users have accurate, up-to-date data on matters such as false alarm and false negative rates, so that they know what to look out for. Those developing RPM need to ensure that clinicians understand which considerations are not included in the algorithm’s workings, so that they can integrate their own clinical judgement into the analysis and derive a complete and holistic picture. Ideally, clinicians should learn to interrogate the system regarding its level of certainty, just as they would with a fellow professional. Manufacturers should facilitate this by ensuring their systems can explain their decision-making process.
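For example, a system could routinely summarise its own track record from logged outcomes, giving users concrete reliability figures against which to calibrate their trust. The sketch below assumes a hypothetical log of (alert raised, problem confirmed) pairs; it is one possible shape for such a summary, not a description of any existing product.

```python
def alert_statistics(outcomes: list[tuple[bool, bool]]) -> dict[str, float]:
    """Summarise (alert_raised, problem_confirmed) pairs so that users can
    see how often alerts were false alarms and how often real problems
    were missed."""
    tp = sum(1 for raised, real in outcomes if raised and real)
    fp = sum(1 for raised, real in outcomes if raised and not real)
    fn = sum(1 for raised, real in outcomes if not raised and real)
    return {
        # share of raised alerts that turned out to be false alarms
        "false_alarm_share": fp / (tp + fp) if tp + fp else 0.0,
        # share of confirmed problems that the system actually caught
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
    }

log = [(True, True), (True, False), (False, True), (True, True), (False, False)]
print(alert_statistics(log))  # {'false_alarm_share': 0.333..., 'sensitivity': 0.666...}
```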

LOOKING TO THE FUTURE

All this is to say that the true benefits of RPM do not come from the technology itself but from its careful and intelligent application – with due consideration to the psychological tendencies of its users.

Those benefits are numerous, however, including reducing unnecessary appointments, reducing the likelihood of unplanned hospital admissions and facilitating better care of less mobile patients. Indeed, by reducing the importance of geographical location, RPM can also make treatments requiring closer monitoring available to a wider pool of potential patients. Moreover, because it can reduce the need for travel, as well as hospital visits and all the disposable accessories those involve, RPM could also be far more environmentally sustainable than the traditional way of working.

RPM is a very promising technology that is likely to become more widely used. However, it could easily become a victim of its own success, becoming popular with administrators but less so with clinicians, who will tend to distrust it (sometimes rightly), and with patients, who will be reluctant to agree to use RPM if they cannot see any benefits. With better implementation driven by user-centred design, testing based on human factors principles and risk planning, the ironies of automation may be mitigated, increasing the benefits of RPM for clinicians, patients and pharmaceutical companies.

REFERENCE

  1. Bainbridge L, “Ironies of automation.” Automatica, 1983, Vol 19(6), pp 775–779.