
12th March 2020

AI is driving patient-centric medtech innovation: How do we get it right?

Following the publication of Tessella’s white paper, Patient Centric Healthcare – The Role of AI and Data Science, Dr James Hinchliffe, senior consultant, comments on the effect AI and patient-centric healthcare are having on medtech innovation.

Companies like Netflix disrupted their industries by using data and digital technology to make mass-market services convenient and personalised.

This same customer-centric approach is now coming to healthcare. It comes with more complex challenges, but the driving factors are similar: patient expectations of more personalised and convenient services, and the ability to apply AI to ever-growing volumes of research and patient data to understand people at an individual level, and deliver personalised products and services to them.

AI has huge power to create patient-centric medical technologies, but it also presents serious risks. We must learn to use it correctly if the benefits are to be realised.

How is AI disrupting healthcare?

Patient centricity is driving change across healthcare, with a number of projects already showing promise. We will look at some of them in three categories, then discuss what makes an AI project successful.

Diagnostics: Various app-based AI technologies are developed by training AI on large medical databases of ‘digital biomarkers’ (e.g. images or sounds, and increasingly genetic data), so the model can learn to spot unique combinations of characteristics that indicate different variations of a disease.

SkinVision checks for signs of skin cancer using your phone camera. ResApp uses coughs to identify different respiratory diseases. SniffPhone hopes to detect subtle indicators of gastrointestinal diseases from digitised breath samples. All such analysis relies on very subtle physiological correlations, which would not be possible without AI.
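To make the idea of a ‘digital biomarker’ concrete, here is a minimal sketch (not the pipeline of any product named above) of how a cough-sound classifier might be assembled: recordings are summarised into fixed-length acoustic feature vectors, which a standard classifier then learns to map to diagnoses. The file names, labels and feature choices are illustrative assumptions.

```python
# Hypothetical sketch: classifying cough recordings into respiratory conditions.
# File paths, labels and features are illustrative, not from any real product.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def cough_features(path: str) -> np.ndarray:
    """Summarise one cough recording as a fixed-length 'digital biomarker' vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape over time
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# recordings: (wav_path, diagnosis_label) pairs from a labelled clinical study
recordings = [("cough_0001.wav", "asthma"), ("cough_0002.wav", "pneumonia")]  # etc.

X = np.array([cough_features(path) for path, _ in recordings])
y = np.array([label for _, label in recordings])

# A real data set of thousands of recordings would be needed for a meaningful split.
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```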

Clinical innovation: AI-powered sensors are improving clinical trials by allowing better understanding of how environmental factors and population variations affect efficacy. This will help tailor drug development and regime recommendations for different patients and lifestyles.

For example, Cambridge Consultants’ Verum predicts stress levels. It is a wearable which measures voice and electromyography, and uses machine learning to detect weak signal changes correlated with stress. This helps researchers understand the effect of stress on drug efficacy and improve trial design.

Smarter healthcare and wellbeing: Apps and wearables are allowing more personalised healthcare and wellbeing plans. Riva Digital deduces blood pressure from slight variations in the colour of blood flowing through your fingertip. Lumen digitises your breath to understand individual metabolism, then provides personalised weight loss advice.

How to approach a medtech AI project

AI-driven patient centricity will define the industry for years to come. But getting it wrong presents huge risks, which could damage reputations and undermine trust in AI across the industry.

We have seen many AI projects fail. A tool claiming to predict premature births with over 90% accuracy turned out to deliver only 50% accuracy in practice. IBM Watson famously fell short of its marketing claims once taken out of its controlled environment and deployed in a clinical setting. Both problems arose from a lack of rigour, not because the teams were attempting something impossible; with a different approach, both could have succeeded.

So why do some AIs work well, and some go wrong? And how can you ensure your AI project is one of the former?

1. Start with the end in mind: Many projects fail because they are rushed. Successful AI programmes start by identifying what decision the AI should take, and what insight it needs in order to take that decision. Only then should they start gathering data, aligned to that need.

So a tool which diagnoses a rash would start by clearly defining which conditions it will look for. It would then establish the defining features of those conditions and build a model which can classify them. Only then would it start building the data set of relevant rash images to train the model.
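As an illustration of that ordering, here is a minimal, hypothetical sketch in Python: the target conditions are fixed first, the model’s output space is built around them, and only then is the image data set assembled and used for training. The directory layout, class labels and choice of a pre-trained ResNet are assumptions for the example, not a prescribed design.

```python
# Hypothetical sketch: a rash classifier scoped to explicitly defined conditions.
# Directory layout, labels and model choice are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import resnet18, ResNet18_Weights

# Step 1: decide up front which decisions the tool must support.
TARGET_CONDITIONS = ["eczema", "psoriasis", "contact_dermatitis", "none_of_these"]

# Step 2: build a model whose output space is exactly those conditions.
model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(TARGET_CONDITIONS))

# Step 3: only now assemble training data, organised around those classes
# (ImageFolder expects one sub-directory per condition, named to match).
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("rash_images/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # a single pass shown for brevity
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```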

2. Eliminate bias from data: Data itself is another major cause of problems, since much of it contains hidden biases. Say a certain rash was more common in men than women (or the training data implied it was). Our skin rash image classifier may learn that male skin is a key factor in identifying the rash, and so underdiagnose women. This can be addressed by critically examining training data and by building algorithms to compensate for bias. But we can only do that if we know what we’re looking for, and that comes from a rigorous approach to project design.
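One concrete way to examine and compensate for such a bias is sketched below, under assumed column names (diagnosis, sex, image_id): cross-tabulate the training labels to expose the imbalance, then reweight each example so that no demographic group dominates training.

```python
# Hypothetical sketch: auditing a rash-image training set for a hidden sex bias
# and reweighting samples to compensate. Column names are illustrative.
import pandas as pd

labels = pd.read_csv("rash_labels.csv")        # one row per training image

# 1. Examine the data critically: how are diagnoses distributed across sexes?
counts = pd.crosstab(labels["diagnosis"], labels["sex"])
print(counts)                                  # reveals, e.g., far fewer female examples

# 2. Compensate: weight each image so every (diagnosis, sex) group contributes
#    equally to training, rather than letting the majority group dominate.
group_size = labels.groupby(["diagnosis", "sex"])["image_id"].transform("count")
labels["sample_weight"] = 1.0 / group_size

# These weights can then be passed to most training APIs, e.g.
# model.fit(X, y, sample_weight=labels["sample_weight"]) in scikit-learn.
```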

3. Draw in relevant expertise: The process described above can easily fail through too narrow a pool of expertise. A data expert may be able to build a model which finds correlations in the images. But the project also needs medical and biological experts to ensure the team understands what the data is really telling it, and so can identify sources of bias.

4. Build trust: Trust is critical in healthcare. If users don’t trust the AI, it will fail, even if it works well. This means designing and validating it rigorously, and proving it on real-world data sets, ideally in a way that shows users how it is working.

It also means building in explainability, with tools which explain in clear language how the model reached its decision. If the user does not understand what is driving the AI decision, they will struggle to trust the results.
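There are many ways to do this; the sketch below shows one simple, hypothetical approach, using a linear model whose per-feature contributions can be read off directly and turned into a plain-language explanation of a single decision. The feature names and training data are invented for illustration.

```python
# Hypothetical sketch: explaining a single AI decision in plain language.
# A linear model is used because its per-feature contributions are directly
# readable; feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

FEATURES = ["lesion_area_mm2", "redness_score", "itch_reported", "age"]

# Stand-in training data; a real tool would use clinician-labelled cases.
X_train = np.random.default_rng(0).normal(size=(500, len(FEATURES)))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 2] > 0.5).astype(int)

pipeline = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

def explain(case: np.ndarray) -> str:
    """Translate one prediction into a sentence a clinician can interrogate."""
    scaler = pipeline.named_steps["standardscaler"]
    clf = pipeline.named_steps["logisticregression"]
    contributions = clf.coef_[0] * scaler.transform(case.reshape(1, -1))[0]
    top = sorted(zip(FEATURES, contributions), key=lambda fc: abs(fc[1]), reverse=True)[:2]
    verdict = "likely" if pipeline.predict(case.reshape(1, -1))[0] == 1 else "unlikely"
    drivers = " and ".join(f"{name} ({value:+.2f})" for name, value in top)
    return f"Condition judged {verdict}; strongest factors: {drivers}."

print(explain(np.array([1.2, 2.0, 1.0, 0.3])))
```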

AI needs to be rolled out gradually, with checks. For example, a doctor may start by making her own diagnosis, then run a diagnostic AI to validate it. Over time, she may run her own assessment and the AI in parallel. Eventually, the AI may become the first port of call, with the doctor only brought in for serious or edge cases. Expectations need to be managed.

A huge opportunity, to be embraced carefully

Getting AI right will help us make faster and more convenient diagnoses, develop more personalised drugs and treatment regimes, and provide individuals with tailored health and wellbeing advice.

The benefits to medical technology innovators, overburdened healthcare systems and society are enormous. But AI’s promise could easily be derailed if we rush in without proper planning and rigorous processes in place. AI holds more exciting promise in healthcare than anywhere else. We have a duty to the world to get it right.