Applying philosophy to AI.

Professor Brent Mittelstadt is a 21st-century philosopher.
For many people, philosophy may conjure images of classical Greeks and Romans debating the meaning of life and morality. But its relevance in the modern age is becoming ever more apparent with the advent of widely available AI programmes such as ChatGPT, which appear to offer clear and objective answers to anything they are asked.
Brent Mittelstadt is Associate Professor and Director of Research at the Oxford Internet Institute, where he leads the Governance of Emerging Technologies Research Programme, which examines the ethical, legal and technical implications of new technologies.
His journey into how philosophy shapes technology began with questions about the nature of things and how philosophical ideas can make a difference in the real world.
‘Early on I decided to focus on philosophy because I was really interested in the big questions that philosophy addresses,’ Professor Mittelstadt says, ‘things like “what is truth?”, “what is knowledge?”, “what is a good life?”
But as I studied philosophy for longer, I became increasingly interested in how our answers to those questions can be applied to our everyday lives.
That is how I started to get interested in ethics, and in particular areas like medical ethics, which in turn naturally lent itself to technology.’
The Oxford Internet Institute is a multidisciplinary research department, where a large group of people from different backgrounds and disciplines come together to examine the social impacts of technology.
‘We want to answer questions about how technology can change society, as well as how we can change technology,’ Professor Mittelstadt explains.
‘In my work this is fantastic because when I'm thinking about these things like ‘what is a good AI system’, or ‘can we trust whether the AI system is telling us the truth’, I will come to questions where I don't have all the tools needed to answer them as a philosopher, and I'm able to go to my colleagues who have backgrounds in law, psychology or computer science, who are able to help me answer those questions.’
In his daily work, he focusses predominantly on AI and algorithmic technologies, which are beginning to do things that it was previously assumed only humans could do.
This also means that they are beginning to assist in very important, life-changing decisions, because they enable us to process data at a scale and level of complexity that could never make sense to us as individuals.
‘AI is slowly encroaching into so many applications in daily life, such as helping us answer questions like who is a good candidate for a job or scholarship, or what is a likely medical diagnosis.’
‘These are key decisions that change people's lives,’ Professor Mittelstadt explains.
‘So we want to make sure that AI systems make these recommendations in an understandable and transparent way, but also to make sure that they're working in a way that aligns with our values.
It is important that we ensure that they're not making decisions unfairly, arbitrarily, or being biased against certain groups of people.’
In recent years Professor Mittelstadt has helped create a method to compute easily understandable explanations of how ‘black box’ systems make decisions so that people affected by AI can hold the technology and its developers accountable.
He has also shown how the pursuit of making AI systems fair can inadvertently harm people by ‘levelling down’ and making everyone worse off, rather than ‘levelling up’ to make things better.
Making medical AI fair, for example, could mean missing more cases of cancer than strictly necessary while also making a system less accurate overall.
Professor Mittelstadt’s work shows why it is so important to set rules around how AI systems operate.
‘We now have systems where you can go and ask them pretty much any question you want to, and they will give you an answer to that question,’ Professor Mittelstadt explains.
‘The answer may look correct, but actually only be partially correct or biased. It's very hard to tell unless you already knew the answer to the question before you asked it. This is where I think philosophy can help us, by helping us to set requirements and guardrails around how those systems operate.’
‘I look at ChatGPT and language models as a very unreliable research assistant. So anything that it gives me, I will always fact-check it and I will always make sure that it is true.’
The importance of governance for AI and other new technologies cannot be overstated, given that we are already using them to do so many different things in all aspects of work and life.
But, as Professor Mittelstadt points out, not all of those things are going to be good or for the benefit of a broad range of people.
‘I think we need to make sure that we have rules in place,’ Professor Mittelstadt explains.
‘We should be considering how to put guardrails in place so that the technology can only be used in certain, specified ways, or at least in ways that align with our values or morals and our laws that we've had for centuries.
They should not be designed solely to line up with the values and aims of private companies.’
Designing these rules and safeguards is what Professor Mittelstadt and his colleagues are currently doing at the Governance of Emerging Technologies research programme.
Professor Mittelstadt appeared in an episode of the Futuremakers podcast to talk about the built-in biases of algorithms.
While software engineers focus on how to make technology work, Professor Mittelstadt’s team tackles questions such as: what does the law currently require of new technologies? Is it enough for them simply to be legally compliant? What is ethically desirable, and should we go further than the law requires? And once we know what is legally and ethically desirable, how can that be turned into something technically feasible?
‘Once you’ve worked out the ethical and legal frameworks, you then have the challenge of changing the technology in a way so that it lines up with those legal and ethical expectations,’ Professor Mittelstadt adds.
‘I think when you combine those three things, the legal, the ethical and the technical considerations, that's when you really get to the heart of technology governance.’
Philosophy is one of the oldest tools developed by humanity, yet, like flint used for sharpening, it continues to prove its importance in honing and refining the very latest technologies, and looks set to play a central role in shaping regulations and protections as the world around us changes at an ever-increasing pace.
‘These really fundamental questions that have always been at the centre of philosophy,’ Professor Mittelstadt explains. ‘What is truth, what is a good life? We're thinking about them now more than ever because we have technologies that are challenging our prior answers to them.’
He also sees a risk in AI interfaces that are designed to appear human-like, lulling us into attributing human characteristics to them like agency, intent or good will.
‘The generative AI and large language models that we are seeing emerge now are very impressive, and they’re catching a lot of people off guard.’
‘But it is dangerous to attribute human characteristics to them,’ Professor Mittelstadt says, ‘because they do not have morals or intent other than what we programme into them, which could include our own unconscious prejudices and biases.’
But it’s not all a cautionary tale. There are exciting applications of AI that Professor Mittelstadt anticipates could have a profoundly positive impact on our world.
‘I would love to have the time and resources to answer even bigger questions about technology,’ Professor Mittelstadt says.
‘I would try to figure out how AI contributes to climate change and how it can combat it, or to identify and begin to address the deeply rooted biases and inequalities in society that these systems inevitably learn and reflect back at us.’
‘I would want to investigate the really fundamental impacts that new technologies are having on science and society, and how they change how we interact with each other in subtle but important ways. Is AI changing what it means for something to be true, or for knowledge to be objective and reliable? These are sort of the underlying questions behind a lot of my work now, and are the ones I want to increasingly answer directly.’
The earlier application of ethical thinking to new technologies is something Professor Mittelstadt looks forward to in the future, ensuring we think about what we should and want to do with technology before asking what we can do with it.
‘I hope that ethics becomes a compass for technology research and development rather than something that's consulted much further down the line,’ Professor Mittelstadt says.
‘I really hope that ethics takes that leading role in the future because I think it's there to really help us set the path for technology rather than to just act as a barrier or a hurdle to overcome once development is already underway.’