What is AI?
AI is everywhere, but what does it actually mean?
Experts from Oxford are driving fundamental AI developments, applying artificial intelligence to tackle societal challenges, and are at the forefront of questioning the ethics of AI.
But, what is AI and how does it learn? How do we know if AI is outsmarting us, and what does the future hold for the technology everyone's talking about?
What does AI mean?
We've all likely heard the phrase 'artificial intelligence' or come across a mention of 'AI'. But what does it actually mean?
Michael Osborne, Professor of Machine Learning at the Department of Engineering Science, is an expert in developing intelligent algorithms that can make sense of big data.
Professor Osborne explains that while it’s difficult to capture what such a broad, fast-moving range of techniques has in common, all artificial intelligence technologies have adaptivity and autonomy at their core.
‘We build algorithms that feed on data to understand the world around them and adapt to those characteristics, to take decisions for us.’
AI is adaptive: it learns and changes as it goes, as seen most clearly in the data-driven field of machine learning.
And AI is autonomous: we expect these algorithms to make decisions on our behalf, a characteristic not found in other machines or software.
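As a toy illustration of those two traits (our sketch, not any specific Oxford system), the short program below adapts its picture of 'normal' as messages arrive, and takes a flagging decision on our behalf at every step:

```python
# A toy illustration of adaptivity and autonomy, not any specific system:
# the program adapts its picture of "normal" as data arrives, and takes a
# decision on our behalf at every step.

def adaptive_message_flagger(messages):
    """Flag messages far longer than anything seen so far."""
    mean_length = 0.0
    decisions = []
    for i, message in enumerate(messages, start=1):
        # Autonomy: decide for us, using only what has been learned so far.
        is_unusual = i > 1 and len(message) > 2 * mean_length
        decisions.append(is_unusual)
        # Adaptivity: update the running picture of "normal" with each message.
        mean_length += (len(message) - mean_length) / i
    return decisions

# The third message is flagged because it breaks the pattern learned so far.
print(adaptive_message_flagger(["hi there", "how are you", "BUY NOW " * 50]))
```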
What are the different types of AI?
Deep Learning. Machine Learning. Generative AI.
Are they the same thing? What's the difference?
Michael Bronstein, the DeepMind Professor of AI at the Department of Computer Science, works across a wide range of AI applications, from computer vision through to biochemistry.
Professor Bronstein describes AI as a vague umbrella term that encompasses tools such as machine learning, a mathematical field that extracts patterns from data.
Deep learning is a family of machine learning techniques that use layered neural networks to teach algorithms to 'learn by example' and build a model, underpinning fields such as natural language processing.
These models can then be used for applications such as generative AI, where content like text and images is generated from a prompt, as seen in tools such as ChatGPT and DALL-E.
‘Machine learning is a way of extracting patterns from inputs, data and generalising them so you can formulate it as a mathematical problem.’
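To make that concrete, here is a minimal sketch (our illustration, not Professor Bronstein's own example) in which the 'pattern' is a straight line through made-up data, and learning is the mathematical problem of minimising squared prediction error:

```python
# A toy version of "extracting a pattern and generalising it": the pattern is a
# straight line, and learning is the mathematical problem of minimising the
# squared prediction error. The numbers are made up for illustration.
import numpy as np

hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hours studied
score = np.array([52.0, 60.0, 71.0, 79.0, 90.0])  # exam scores

# Least squares: choose slope and intercept minimising sum((score - fit)^2).
X = np.column_stack([hours, np.ones_like(hours)])
(slope, intercept), *_ = np.linalg.lstsq(X, score, rcond=None)

print(f"learned pattern: score = {slope:.1f} * hours + {intercept:.1f}")
# Generalisation: apply the extracted pattern to an input never seen in training.
print(f"predicted score for 6 hours: {slope * 6 + intercept:.1f}")
```

The same recipe, with far richer patterns than a straight line, sits underneath modern deep learning models.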
Professor Michael Osborne’s own research takes a Bayesian approach, which places emphasis on the need for AI models to be transparent about how they make decisions.
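As a rough sketch of that Bayesian idea (a textbook coin-flipping example, not his group's actual code), the model's belief is an explicit probability distribution updated by Bayes' rule, so every step of its reasoning can be inspected:

```python
# A textbook Bayesian update (beta-binomial), illustrating why this approach is
# prized for transparency: the model's belief is an explicit distribution that
# can be inspected after every single observation.

alpha, beta = 1.0, 1.0          # uniform prior: no initial preference
observations = [1, 1, 0, 1, 1]  # coin flips, 1 = heads, 0 = tails

for flip in observations:
    # For this model, Bayes' rule reduces to counting successes and failures.
    alpha += flip
    beta += 1 - flip
    print(f"after observing {flip}: estimated P(heads) = {alpha / (alpha + beta):.2f}")
```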
How does AI learn?
What does it mean when someone says an AI has learnt something?
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute where she researches the legal and ethical implications of AI and regulation of online platforms.
Professor Wachter notes it can be helpful to use an example when imagining how AI learns; in this case, hiring for a job.
An AI algorithm can be fed data about people who have held the same position in the past, and it will begin to pull out patterns and commonalities from this information. This is often called training data.
The AI will build a profile based on this training data, and will identify job applicants who match the artificially created profile to invite for an interview.
‘This shows how it works, but it also shows where the problem is.’
This also reveals how AI can be biased. If algorithms are trained solely on historical data, they can unintentionally reinforce existing biases in society.
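A deliberately oversimplified sketch of that hiring example (with fictional data) shows how a profile extracted from past hires carries any historical skew straight into new decisions:

```python
# A deliberately simplified sketch of the hiring example. The "profile" is just
# the most common trait values among past hires, so any historical skew (here,
# a fictional dataset dominated by one university) is reproduced when ranking
# new applicants.
from collections import Counter

past_hires = [
    {"degree": "maths", "university": "A"},
    {"degree": "maths", "university": "A"},
    {"degree": "physics", "university": "A"},
    {"degree": "maths", "university": "B"},
]

# "Training": extract the commonest value of each attribute as the profile.
profile = {
    key: Counter(person[key] for person in past_hires).most_common(1)[0][0]
    for key in past_hires[0]
}

applicants = [
    {"degree": "maths", "university": "A"},  # matches the historical pattern
    {"degree": "maths", "university": "C"},  # equally able, different background
]

# "Decision": invite whoever best matches the artificially created profile.
def match_score(person):
    return sum(person[key] == value for key, value in profile.items())

for person in applicants:
    print(person, "-> score", match_score(person))
```

The applicant from university C scores lower purely because past hires happened to come from university A: the pattern is real, but so is the bias.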
What is the Turing test?
You may have heard the Turing test mentioned when it comes to assessing whether AI has a mind of its own - but what is it, and has an AI passed the test before?
Carissa Véliz, Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, specialises in digital ethics in privacy and AI.
Professor Véliz explains that the Turing test was a thought experiment devised by the pioneering computer scientist Alan Turing to see whether a machine is capable of thinking like a human.
In the test, originally called the imitation game, a human interacts with another human and a computer through text without knowing which is which, to determine if the machine displays intelligence.
‘The question is, can a human being know who is a human and who is a computer?’
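A bare-bones sketch of the game's structure (with a trivially simple stand-in for the machine, and one person at the keyboard playing both judge and human respondent) shows its essential ingredients: text-only exchanges, shuffled identities, and a judge's guess:

```python
# A bare-bones sketch of the imitation game's structure. The one-line "machine"
# is a placeholder, not a serious contestant; in this sketch one person at the
# keyboard plays both the judge and the human respondent.
import random

def human_reply(question):
    return input(f"(you are the human respondent) {question} > ")

def machine_reply(question):
    return "That is an interesting question."  # a trivial stand-in

def imitation_game(question):
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # the judge must not know which is which
    for label, (_, reply) in zip("AB", respondents):
        print(f"Respondent {label}: {reply(question)}")
    guess = input("Judge: which respondent is the machine, A or B? > ")
    actual = "A" if respondents[0][0] == "machine" else "B"
    print("Correct!" if guess.strip().upper() == actual else "The machine fooled you!")

imitation_game("What did you have for breakfast?")
```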
As for whether AI tools are getting closer to passing the test, Professor Véliz believes that although AI is becoming harder to distinguish from a human in conversation, these systems are mirroring human intelligence rather than displaying the cognitive abilities that make us human.
What does the future hold for AI?
Will robots take over the world or will an AI take my job?
The answer is, it’s complicated.
John Tasioulas, Professor of Ethics and Legal Philosophy in the Faculty of Philosophy and Director of the Institute for Ethics in AI, stresses the need for global safeguards agreed on between countries — including non-democratic ones — to avoid the worst possible impacts of AI.
Professor Tasioulas also highlights the need for greater democratic control of AI tools, rather than leaving the future of these technologies to be determined by large corporations.
‘The future of AI does depend on choices that we make, individually and collectively.’
Professor Michael Bronstein, however, says it is ‘highly unlikely’ we will see a science fiction scenario of AI gone rogue.
Professor Bronstein is optimistic about the capabilities of AI technologies to transform society and change our lives for the good, and imagines AI will augment human capabilities rather than replace them.
As the technology underpinning AI advances, explains Professor Michael Osborne, experts continue to be surprised by what has materialised so far, and by the opportunities it presents.
‘AI should be our friend, rather than our foe.’