Tackling the biggest questions in healthcare AI.

Healthcare is one area in which AI has already been making significant contributions, helping physicians with tasks as varied as interpreting scans and managing their workloads.

This rapid growth in AI's applications makes it urgent to fully explore the new range of ethical considerations associated with its integration into healthcare delivery.

Angeliki Kerasidou is an associate professor in bioethics at the Ethox Centre in Oxford’s Department of Population Health. Her work examines the ethical issues that arise from new technologies, with a particular focus on data-driven ones.

‘At the moment I'm looking at the issue of trust in AI. When new technologies arise, it is vital that we can trust them.’

She explains, ‘One of the things that we know about trust is that it entails a reasonable belief that the person or actor we trust has good will towards us; that they are committed to acting in a way that promotes our values and interests.’

‘But what does this mean in the context of AI? Whose values and interests should be promoted? What should trustworthy AI look like?’

‘The other requirement for trust is reasonable belief in the actor’s competency’, continues Professor Kerasidou.

Despite AI’s potential to improve the way that we diagnose and treat diseases, as well as the way we promote health, including mental health, one issue that concerns Professor Kerasidou is how to make sure that we can properly and appropriately rely on it in medicine.

‘We need to make sure that these tools actually work on the ground, and that they work for all populations on which they are going to be used. This is both an ethical issue and a practical issue.’

The issue of trust relates to concerns regarding validation and bias.

The majority of the datasets we have available to train algorithms are representative of very specific socio-demographic and ethnic groups, and often lack data on under-served populations.

So, even if AI models perform well on represented populations, that does not mean they will perform equally well on under-represented and under-served ones.

We need to look for ethically informed practical, technological and algorithmic solutions to these problems. Otherwise, we risk perpetuating and amplifying existing biases in our healthcare systems.

As AI tools become more widespread, Professor Kerasidou believes that we also need to be aware of the ways in which the relationship between healthcare professionals and patients might be affected by the introduction of AI.

One is what she describes as ‘algorithmic deference’: doctors starting to rely on an AI tool to make decisions for them, rather than using it as a tool to inform their own.

‘I’m less concerned about this,’ Professor Kerasidou says, ‘because from my own research I see that doctors tend to hold on to their own knowledge-based authority, and tend to use AI tools in the healthcare space as tools to aid decision-making rather than as something that is going to replace them.’

‘Having said that, seniority and competence play a significant role in how much people will rely on AI.’

‘Whether healthcare professionals will start deferring to AI tools for clinical decision-making also depends on the context in which they operate.’

‘If they are pressured to see more patients in less time, this might make it more likely that they will use anything that can help them make these decisions faster. It is not clear, however, whether faster is always better for the patient.’

As such, Professor Kerasidou suggests that our focus should be on how these technologies fit into existing systems and what systemic problems might arise from their introduction.

Despite working in one of the most technologically advanced fields today, Professor Kerasidou’s background lies in the much older and more traditional fields of theology and philosophy.

‘I studied theology as my first degree, and originally wanted to become a biblical archaeologist. I did a lot of philosophy, including ethics and metaphysics, but I was still charmed by the past.’

‘It fascinated me how we could reconstruct the lives of past societies, understand their value systems, what mattered to them, by looking at objects and artefacts left behind,’ she explains.

‘I'm Greek. Archaeology and history were everywhere I looked. I had a fully-funded place to start my doctorate in Switzerland, but a master’s course in Science and Religion in Oxford caught my eye.’

‘It looked so interesting and so innovative – an interdisciplinary course that brought together theology, philosophy, history, science! All the things that I loved. I applied for the course and I was awarded the first Andreas Idreos scholarship to come to Oxford and study science and religion.’

Reuben College, where Professor Kerasidou is an Official Fellow

‘Whilst here, I discovered that I was far more interested in philosophy and ethics than in history and archaeology. It was a fascinating time too. Dolly the sheep had been cloned a few years earlier, the Human Genome Project had just been completed, and stem cell research was making headlines as the new Holy Grail for medicine.’

‘Instead of looking at the past, I realised that I was much more interested in researching the present and considering how we could harness scientific and technological advancements to build a better future for all.’

‘So, I decided to leave biblical archaeology and pursue a doctorate in bioethics instead’, she says.

It was during her Master’s and then DPhil in Oxford that Professor Kerasidou began researching practical ethics, looking at the ethics of reproductive cloning, and then the moral permissibility of using human embryos in stem cell research.

‘Oxford is a really brilliant place to be doing research. There are so many people and so many resources that one can tap into. But the most exciting thing is the people that are around here, and the opportunities to converse and collaborate.’

In Professor Kerasidou’s own research area in Oxford there are people developing AI tools for healthcare, as well as people who are looking at the ethical, social and legal impact of these technologies.

‘It is a testament to the diverse and interdisciplinary character of Oxford University, which is the one thing that I love most about this place,’ she adds, ‘that these people from different departments and different faculties want to come together, discuss these things and find much more holistic answers to these problems.’

Having reflected upon the risks, Professor Kerasidou is hopeful that bioethics and practical ethics will continue to be at the centre of new technologies, medical research and medical practice.

‘Because research is effectively ethics in action. As we develop new technologies and introduce them into our lives and healthcare systems, it is important that we reflect upon the ethical issues to ensure that these technologies serve the core values we want to promote,’ she adds.

However, she points out that a central question that we need to address is how we prepare our national healthcare systems, both in the UK and globally, to incorporate new technologies in a way that aligns with the values we want to see served.

This is seldom straightforward because of the enthusiasm for, and hype around, these technologies.

‘It is crucial that when we incorporate these new technologies, we do so in the most appropriate way.’

A way, she explains, ‘that aligns with the values of the publics, the citizens, and the communities involved.

‘I think that will be the most difficult problem to solve because of its complexity, but it is a pressing one that we will have to make progress on very quickly.’

Part of the solution, as she sees it, will be to engage with the public.

‘I think the public is much more informed than we often give it credit for,’ Professor Kerasidou observes.

‘I've done some empirical research where I've spoken with patients about the introduction of AI in particular healthcare settings, and I was quite astonished by their level of knowledge about these kinds of technologies, as well as how articulate they were about the ethical issues they see as relevant and the answers that could be brought forward for these types of ethical issues.’

A solution, she suggests, is to open conversations with the public to help researchers understand what people believe is important.

One of the main issues that her team is looking at is what trustworthy AI is or should look like, and the role of public trust in the introduction of new technologies on the ground.

Professor Kerasidou remains optimistic that AI can bring a lot of good to healthcare, as long as we continue to actively work to integrate our core ethical values into these technologies.

At the same time, we should not close our eyes to potential problems; rather, we should actively seek them out, identify them and address them.

‘These answers are not quite straightforward,’ she says, ‘and in order to make sure that we benefit patients, healthcare and society as a whole, we need to look at the problems that arise at the systemic level as well as at the interpersonal level.

‘We need an open and honest discussion about what kind of healthcare we want - and even more broadly what kind of world we want to live in - and explore how technologies, including AI, can help us get there.’
