Addressing the legal and ethical impacts of AI.

Jurassic Park, one of Professor Wachter's favourite films, includes a famous line which aptly summarises the key question that drives her research.

‘It’s when Dr Ian Malcolm accuses the park’s creator: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”.’

She says, ‘That really stood out to me – nearly always, all the attention is on what can be done with technology, and no one stops to ask whether it actually should be done.’

Professor Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute, and leads the Governance of Emerging Technologies research group.

For Professor Wachter, however, the key question isn’t whether dinosaurs should be resurrected, but how to address the legal and ethical impacts of artificial intelligence (AI).

‘There is often this attitude surrounding AI that “The genie is out of the bottle; we can’t stop the train of progress so just get on board.” But it is critically important that we stop and examine all the potential consequences of these technologies.’

‘For instance, consider the power of face-recognition technology. Ultimately, this could be developed into a tool capable of identifying anyone in the world at any time.’

Such threats aren’t confined to the future.

Professor Wachter’s work has revealed how members of the public are already increasingly the unwitting subjects of worrying new forms of discrimination, owing to the growing use of AI in decision making.

‘AI systems are now widely used to profile people and make key decisions that impact their lives.’

‘At its worst,’ Professor Wachter explains, ‘this can prevent equal and fair access to basic goods and services such as education, healthcare, housing, or employment.’

‘For example, an applicant for a financial loan may be more likely to be rejected if they use only lower-case letters when completing their digital application or if they scroll too quickly through the application pages.’

To counter these risks, Professor Wachter has helped to develop ethical auditing methods for AI to combat bias and discrimination.

After finding that most bias tests and tools did not achieve the standards of UK and EU non-discrimination law, she created the Conditional Demographic Disparity test, which has since been implemented by companies such as Amazon for their cloud services.
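To give a sense of what such a test measures, here is a minimal sketch of how conditional demographic disparity might be computed, following the published formulation: the difference between a group’s share of rejections and its share of acceptances, averaged across strata of a legitimate conditioning attribute (such as income band), weighted by stratum size. The helper functions, column names, and data below are illustrative assumptions, not Professor Wachter’s or Amazon’s actual implementation.

```python
# Illustrative sketch of a Conditional Demographic Disparity (CDD) check.
# Demographic disparity (DD) for a group =
#   (share of rejected outcomes from that group)
#   - (share of accepted outcomes from that group).
# CDD averages DD over strata of a conditioning attribute,
# weighted by stratum size.
import pandas as pd

def demographic_disparity(df, group_col, outcome_col, group):
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    if len(rejected) == 0 or len(accepted) == 0:
        return 0.0
    return ((rejected[group_col] == group).mean()
            - (accepted[group_col] == group).mean())

def conditional_demographic_disparity(df, group_col, outcome_col,
                                      stratum_col, group):
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(stratum_col):
        dd = demographic_disparity(stratum, group_col, outcome_col, group)
        cdd += len(stratum) / total * dd  # size-weighted average
    return cdd

# Hypothetical loan data: a positive CDD suggests the group is
# over-represented among rejections even within each income band.
df = pd.DataFrame({
    "gender":      ["f", "f", "m", "m", "f", "m", "f", "m"],
    "approved":    [0,   1,   1,   1,   0,   0,   1,   1],
    "income_band": ["low", "low", "low", "high",
                    "high", "low", "high", "high"],
})
print(conditional_demographic_disparity(df, "gender", "approved",
                                        "income_band", "f"))
```

Conditioning on a stratum attribute is what distinguishes this from a simple disparity count: it asks whether the imbalance persists even among applicants who are otherwise similarly situated.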

Tackling discrimination is personally important to Professor Wachter, who struggled against stereotypes from an early age.

‘At my high school, the girls took classes in knitting and crochet, while the boys did handicrafts and woodwork. I really wanted to join the boys and learn how to build things, but I was told “girls don’t do that” and that it was non-negotiable.’

Fortunately, Professor Wachter had a strong role model who demonstrated otherwise.

‘My grandmother was one of the first women to attend Vienna's Technical University, and she worked as a hospital technician. She really inspired me, showing that technology was gender-neutral and could be positively applied to do good.’


Having initially studied law, Professor Wachter completed a PhD in technology, intellectual property, and democracy at the University of Vienna, and simultaneously earned a Master's degree in social sciences at the University of Oxford.

After various roles in public policy and research on the ethical aspects of innovation, she made AI the dominant focus of her work when it became clear it could satisfy her ‘insatiable love of tackling difficult problems.’

One of her most well-known contributions is tackling the ‘black box’ problem.

This is the fact that the process by which machine learning algorithms make decisions – whether on university applications, prison sentences, or cancer diagnoses – is often completely opaque and unaccountable.

‘Everyone said it was an impossible problem to solve because if you increase transparency this could risk revealing intellectual property, or enable individuals to find loopholes and game the system. But I say that everything is impossible until you do it.’

Working with Oxford colleagues Professor Brent Mittelstadt and Professor Chris Russell, she developed a practical and affordable solution that enables individuals to understand how AI makes decisions without revealing proprietary information.

The approach is based on counterfactual explanations: statements of how the world would need to be different in order for an alternative outcome to occur.

‘For instance, if you earned £10,000 a year more, you would have got the mortgage; if you had a slightly better degree, you would have got the job.’
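As a rough illustration of the idea, the sketch below brute-forces the smallest income increase that would flip a toy model’s loan decision. The decision rule, thresholds, and search step are invented for this example; the published method frames counterfactual search as an optimisation over all features, rather than a one-feature sweep.

```python
# Toy counterfactual search: find the smallest change to one feature
# (annual income) that flips a fixed model's decision from "reject"
# to "approve", without exposing anything about the model's internals.

def loan_model(income, debt):
    # Invented decision rule standing in for a black-box classifier.
    return income - 0.5 * debt >= 30_000  # True = approve

def counterfactual_income(income, debt, step=500, max_extra=50_000):
    if loan_model(income, debt):
        return None  # already approved; no counterfactual needed
    extra = step
    while extra <= max_extra:
        if loan_model(income + extra, debt):
            return extra  # smallest tested increase that flips the outcome
        extra += step
    return None  # no counterfactual found within the search range

extra = counterfactual_income(income=24_000, debt=10_000)
if extra is not None:
    print(f"If you earned £{extra:,} more per year, "
          f"you would have been approved.")
```

Note that the explanation only needs query access to the model’s outputs, which is why this style of explanation can coexist with protecting intellectual property.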

Since then, major tech companies, including Google, IBM, Accenture, and Microsoft, have adopted counterfactual explanations into their services.

‘To me, this example demonstrates that ethics and law inspire innovation, rather than hindering it. Also, that explainable AI is not just something that is good and ethical; it is also profitable.’

Professor Wachter’s work also encompasses robotics, autonomous cars, deepfakes, fake news, governmental surveillance, predictive policing, human rights online, and platform regulation.  

A particularly key issue for each of these is ‘the privacy problem’. AI systems are trained on vast amounts of data, which could potentially be used to reveal personal information about people.

‘Here too, we need to have an honest discussion about whether we want these technologies, how we can safeguard them, and what is won and lost if we invite them into our world.’

Professor Wachter joined the OII as a data ethics researcher in 2017, and in 2019 founded the Governance of Emerging Technologies research group.

The group’s aim is to find ways of governing emerging technologies that are ethically and legally sound.

‘A major strength of the OII is its interdisciplinary nature. In academia, disciplines tend to be very siloed: lawyers only talk to lawyers, computer scientists only talk to computer scientists, and ethicists only talk among themselves.’

‘But emerging technologies are so versatile, and can be used for so many different purposes, that a technology problem is never just a technology problem: it is also a legal problem, an ethical problem, a psychological problem, a political problem, and so on.’

Nevertheless, Professor Wachter believes that everyone should engage with the ethical issues of AI, not just academics.

‘I want to debunk this popular belief that “Technology is too complicated for me to understand so I can’t be part of the conversation.” It is crucial that the implications of AI technologies are discussed by all of society – including young people, the general public, policy makers and NGOs – because it is not yet clear which AI future we are heading towards.’

‘At its best, AI will serve as a tool that augments and improves human skills, making jobs more enjoyable and their results more accurate.’

‘As an example, I am slow at typing but fast at speaking, so I wouldn’t want to live without the AI-powered dictation software I use on a daily basis. I dictate all of my papers now – it has doubled my productivity.’

At its worst, AI could replace people with algorithms whose decisions are neither transparent nor accountable.

‘It will need a lot of political will and action to ensure we go down the better path and not the other.’

‘This makes it really important that we have evidence-based policy making, and that researchers from law, ethics, psychology, political science, economics, and so on are able to communicate the actual risks of these technologies.’

Ultimately, no matter how efficient AI becomes, Professor Wachter believes there will always be a role for the human if the political will is there and we legally safeguard our place in the world.

‘For instance, even if AI is used in medical diagnosis, we will still need human doctors to interpret the results and decide the best care plan for the patient.’

Meanwhile, creative arts such as photography, a huge passion of hers, will always inspire us most when a human is behind their conception.

‘Art is a form of communication, and it moves us when we realise what the artist is trying to say through their work. For me, a solely AI-generated image without any human input can never come close.’

AI presents deeply complex ethical and societal issues that affect us all.

But if these are tackled openly and proactively, Professor Wachter believes (to paraphrase another Dr Ian Malcolm quote), we can ‘find a way’ that these technologies can be used as a force for good, without infringing on human rights.
