Is AI safe?
AI has overwhelming potential to be a force for good, but how could it impact our lives if it's allowed to advance unchecked?
Experts at Oxford discuss how artificial intelligence might affect our democratic rights, the environment, and the very fabric of society if proper governance and ethical scrutiny are not built into the development of AI tools.
What does AI mean for democracy?
With the ever-increasing use of artificial intelligence and a constant cycle of fake news and personalised propaganda, what impact will AI have on democracy?
Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI, where she researches privacy, the ethics of AI, and public policy.
Professor Véliz explains that our democratic rights and freedoms are bound to be impacted by AI because it is such a powerful tool.
At the moment, AI is being used in quite opaque ways, where not everyone knows the rules of the game.
One risk, Professor Véliz says, is that we end up in a very bureaucratic society with unfair systems. Another is the use of AI to create personalised propaganda in a bid to sway democratic elections.
‘Now, with generative AI, it’s very easy to create fake news, whether images or text, and it’s easy to do at scale.’
Helen Margetts is Professor of Society and the Internet at the Oxford Internet Institute and Director of the Public Policy Programme at The Alan Turing Institute.
She says we’ve known about the bias in the institutions of our political system for a long time – but highlights technology’s role in exposing some of these biases for the first time.
As this data brings those biases into the open, some institutions of democracy will actually become fairer and more transparent than before.
‘I think we really need to be optimistic here if we want to get the full potential out of these technologies.’
Find out more about how experts at Oxford are investigating the use of algorithms, automation, and computational propaganda in public life and understanding the impact of AI on the political landscape.
What does AI mean for governance?
AI can be used to do so many different things in so many different fields – but not always for good, or for the benefit of society.
That’s why it’s so important to ensure that good governance is in place for AI and other emerging technologies.
Brent Mittelstadt is the Oxford Internet Institute’s Director of Research, an Associate Professor and Senior Research Fellow.
He coordinates the Governance of Emerging Technologies (GET) research programme and leads the Trustworthiness Auditing for AI project.
Professor Mittelstadt says that we want to ensure we have rules in place for AI, but that we need to question whether those rules are ethically desirable, or whether we need to go further than what the law requires.
‘I think when you combine those three things, the legal, the ethical, and the technical considerations, that’s when you get to the heart of governance.’
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute.
As well as holding affiliations with institutions including Harvard University, the World Economic Forum and UNESCO, she serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions about emerging technologies.
Good governance, Professor Wachter says, is a mixture of independent research and policymaking.
Academia has a crucial role to play in good governance: researchers from law, ethics, politics and economics, for example, can give policymakers the knowledge to make informed decisions about the risks involved with technologies like AI, so that systems are governed in a way that lets people enjoy their benefits.
‘I think it’s really, really important that we have evidence-based policymaking and I think academia has a really important role to play there.’
Discover how Oxford researchers are looking to understand AI's impact on government and policy.
What does AI mean for global health and human wellbeing?
The use of AI in healthcare could bring vast benefits for patients, and it is only going to grow as the technology is applied in new and innovative ways.
But what are the pitfalls of such applications, and how can we keep individuals, and global populations, safe when using artificial intelligence?
Angeliki Kerasidou is an Associate Professor in Bioethics at the Ethox Centre and a research fellow at the Wellcome Centre for Ethics and Humanities at the University’s Big Data Institute.
One thing to think about, Professor Kerasidou says, is algorithmic bias.
AI tools are trained on data that we have collected, and there is always some risk of bias within those datasets – bias that AI could then amplify.
‘We shouldn’t just be looking at healthcare professionals as individuals, but we have to be looking at how these things fit into existing systems and what are the systemic problems that might arise from using these technologies.’
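As a purely illustrative sketch of the bias amplification Professor Kerasidou describes – with invented data and a deliberately naive 'model', not drawn from any real system – consider how a tool trained on skewed historical records can turn that skew into an automated rule:

```python
# Illustrative only: invented data and a deliberately naive "model".
from collections import Counter

# Hypothetical historical hiring records: (group, outcome).
# Group B was historically rejected far more often than group A.
records = ([("A", "hire")] * 80 + [("A", "reject")] * 20
           + [("B", "hire")] * 20 + [("B", "reject")] * 80)

def train(records):
    """Learn the most common historical outcome for each group."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(records)
print(model)  # {'A': 'hire', 'B': 'reject'}
# The historical skew has become a rule, applied automatically and at
# scale to every future applicant from group B.
```

Real systems are far more sophisticated, but the underlying dynamic – patterns in past data becoming automated future decisions – is the same.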
David Clifton is Royal Academy of Engineering Chair of Clinical Machine Learning, lead for the Computational Health Informatics (CHI) Lab and an NIHR Research Professor.
Professor Clifton believes it’s very exciting to think about how you can build wellbeing into technologies and safeguard the use of AI in medicine.
He gives the example of a large language model that recommends medicines and identifies adverse drug reactions: a tool that is immediately useful to clinicians and whose safety can be demonstrated.
‘How do you keep AI safe in medicine? And how do you build in wellbeing into the technologies? I think that’s very exciting, particularly for medicine.’
Explore the ways in which Oxford experts are transforming global health and medicine by using AI.
What does AI mean for the environment?
The development of new technologies doesn't come without an environmental impact. But how is AI affecting the environment, and could AI help us to reach our climate goals?
John Tasioulas is Professor of Ethics and Legal Philosophy in the Faculty of Philosophy and Director of the Institute for Ethics in AI.
As with most things to do with AI, Professor Tasioulas highlights that there are two sides to the story when it comes to AI’s impact on the environment.
Training a large language model (LLM), for example, emits as much carbon as multiple cars do over their lifetime – and that’s just training it once.
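Claims like this rest on back-of-the-envelope energy arithmetic. The sketch below shows the shape of such an estimate; every figure in it is an assumption chosen for illustration, not a measurement of any real model:

```python
# Rough, illustrative estimate only: all figures below are assumptions.
gpu_count = 1000            # assumed number of accelerators
gpu_power_kw = 0.4          # assumed power draw per accelerator (kW)
training_days = 30          # assumed wall-clock training time
pue = 1.5                   # assumed data-centre power usage effectiveness
grid_kg_co2_per_kwh = 0.4   # assumed grid carbon intensity (kg CO2e/kWh)

energy_kwh = gpu_count * gpu_power_kw * 24 * training_days * pue
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000
print(f"~{emissions_tonnes:.0f} tonnes CO2e")  # ~173 tonnes with these inputs
```

Even with these modest assumed inputs, a single training run reaches the scale of several cars' lifetime emissions, which is the comparison made above.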
AI could, however, be used to monitor compliance with climate-related treaties, Professor Tasioulas suggests.
‘When we develop AI, and that will necessarily have an environmental impact, is it mitigated by the benefits that it can bring?’
See how experts at Oxford are exploring the relationship between AI and the environment and are using artificial intelligence to investigate the world, and universe, around us.
What does AI mean for society?
AI is everywhere, touching all spheres of life.
While AI can be a force for good, good governance is essential.
Carissa Véliz, Associate Professor in Philosophy at the Institute for Ethics in AI, says she would be more optimistic about AI’s role in society if more funding were allocated to considering the ethical issues around artificial intelligence.
You might have very impressive technology, she notes, but if it's not governed properly you might be worse off than if you had never developed it.
‘Without good governance, technology can be quite destructive.’
Michael Osborne, Professor of Machine Learning and Director of the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems, offers a stark warning.
AI could, he says, help increase the role of corporations in society, potentially leading to a loss of rights for human beings as these systems take on more decision-making powers.
‘I’m very worried about AI’s role in potentially a loss of control for humanity more broadly.’
Helen Margetts, Professor of Society and the Internet, is a political scientist focusing on AI and government, public policy and democracy.
With societal problems at risk of being turbo-charged by large language models (LLMs), she highlights the need for a social-science understanding of the harms everyday people face in order to tackle such issues.
‘I really think we need a good understanding [of the potential harms people face], and a social science understanding of that, to be able to tackle those problems.’
Find out more about how researchers at Oxford are at the forefront of the investigation into the ethical issues surrounding artificial intelligence.