Meeting the needs of society with AI.

Michael Osborne, or Mike as everyone knows him, is Professor of Machine Learning in the Department of Engineering Science at the University of Oxford.

His research looks at AI grounded in the principles of probability.

His particular favourite flavour of AI is the Bayesian approach, which derives from the work of Pierre-Simon Laplace, a French scholar and polymath whose work was important to the development of engineering, mathematics, statistics, physics, astronomy, and philosophy.

It’s an approach that has come in and out of favour over the years, but has always returned, owing to its ability to provide more robust algorithms.

It is more transparent and, as a result, safer for real applications where the stakes matter.

Professor Osborne got into AI as a result of reading too much science fiction as a child.

The fascinating thing now, 17 years into his career, is that we are building algorithms that resemble some of those machines he read about in sci-fi, and, for him, it is exciting to see these machines out in society making a real-world impact.

‘It's a really exciting time for the field as a whole to see these machines out in society, having real impact.’

His research career has been divided into three parts, with the Bayesian approach at its core. The approach is named after Thomas Bayes, an eighteenth-century English Nonconformist theologian and mathematician who was the first to use probability inductively and who established a mathematical basis for probabilistic inference.
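
In concrete terms (this standard statement of the rule is an editorial addition, not drawn from the article itself), Bayes' rule prescribes how a prior belief should be updated in the light of data:

$$
P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}
$$

Here \(P(\theta)\) is the prior belief in a hypothesis \(\theta\), \(P(D \mid \theta)\) is the likelihood of the observed data \(D\) under that hypothesis, and \(P(\theta \mid D)\) is the updated, posterior belief. This explicit bookkeeping of uncertainty is what underpins the robustness and transparency of Bayesian algorithms described above.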

Beyond the theory, Professor Osborne has been interested in actually developing algorithms.

Equally important has been his entrepreneurial interest in seeing those algorithms put to real-world use.

In 2016, along with his colleague Professor Stephen Roberts, he founded the company Mind Foundry, spun out from the University of Oxford.

Mind Foundry now employs 76 people and applies the kinds of algorithms developed in the lab to real-world use cases.

Professor Osborne’s particular interest is in applications of AI where the decisions matter for people.

Another area of interest is infrastructure, where many approaches to AI are simply not fit for purpose because they have too many failure modes.

They are neither transparent nor reliable enough. It is the kind of Bayesian approach to AI that his group has been developing that can meet these real-world user needs.

In addition to his academic work and entrepreneurial activity, Professor Osborne is also interested in how AI might be changing the world around us.

In 2013 he wrote a paper with a friend and colleague, Professor Carl Benedikt Frey, on exactly how much work might be automatable with advances in AI and robotics.

At the time, their headline finding was that 47% of work might be automatable with AI and robotics.

Now, a decade on, many of these themes have returned to people's awareness because of advances in generative AI, such as ChatGPT, which became the fastest-growing consumer application of all time.

ChatGPT reached 100 million users in fewer than three months, and many people today are wondering whether this new wave of technology might lead to the replacement of human labour.

He continues to work on understanding how these impacts might diffuse across labour markets, and has launched a new initiative at the Oxford Martin School with his colleague Professor Robert Trager to understand what should be done about the sweeping changes AI is bringing.

The Oxford Martin AI Governance Initiative focuses on the governance of AI and what the regulatory and political solutions might be: heading off some of the potential harms of these technologies while securing the many real benefits they might deliver.

‘The Bayesian approaches to AI that we've been developing can actually meet real-world user needs.’

Professor Osborne finds Oxford an amazing place for research, particularly when it comes to AI, with its dense concentration of researchers and proximity to similar hotspots of innovation, including Cambridge and London.

For him, the unique thing about Oxford is how it combines strengths, not just in developing AI algorithms but in extending them to the real world.

This involves real use cases underpinned by deep theoretical research into what the impact of AI might be.

His work stretches beyond the Department of Engineering Science: he works with colleagues across Computer Science, Philosophy, Economics, and Governance.

All of them are reflecting on the broader ramifications of these rapidly developing technologies. For him, having experts across so many disciplines thinking about the core issues makes for a stimulating academic mix.

AI technology has never developed as quickly and as broadly as it has in the last few months, and Professor Osborne sees this as an exciting pivot point in the history of the field: perhaps for the first time, hundreds of millions of people across the world are using AI in their day-to-day lives and seeing how it might impact their work and lifestyles.

Regulating these technologies, and making sure they are implemented in ways that deliver human flourishing rather than harm, is critical for Professor Osborne.

Looking ahead at the next five to ten years, Professor Osborne is concerned about some of the harms that AI could bring.

He’s concerned we might see disenfranchisement of already marginalised groups in our society, because AI is prone to biased decision-making; for instance, Dutch tax authorities used algorithms that wrongly accused benefits claimants of fraud.

He is also concerned about advances in large language models like GPT-4, the engine that powers ChatGPT, which is capable of many tasks historically performed by human workers.

Professor Osborne appeared on Jimmy's Jobs of the Future to discuss how the workforce could be impacted by AI.

He thinks there is a realistic concern that the impacts of these technologies might not be altogether positive.

Professor Osborne also points to historical parallels such as the Industrial Revolution, where the technologies that unleashed an immense amount of wealth and wellbeing also exacerbated inequality.

During the Industrial Revolution, it took 60 years for the average worker to see any wage increase, against a political landscape that included the Luddites.

The Luddites were violently opposed to technological change, and their riots against the introduction of new machinery in the wool industry were forcibly put down. They were protesting against changes they thought would make their lives much worse, changes that were part of a new market system.

‘Never before has it been more important to think through how we go about regulating and governing these technologies.’

Closer to the present, Professor Osborne points to developments such as the impact Uber had on taxi drivers.

Taxi drivers protested against the changes: the introduction of Uber led to a loss of income and created unstable working conditions.

His final worry, and a big one indeed, is that humanity might lose control to AI: that, with the increasing role of corporations in society, these systems may take over decision-making power in societies, with a loss of rights and dignity for human beings.

Funding is directly linked to the direction AI development takes, and his biggest drive is to head off some of the harms that we may inadvertently be heading towards.

In 2023, he and many others signed open letters protesting against the rapid and unregulated pace of change driven by the largest tech firms such as Microsoft and Google.

The first thing he wants funding to do is to make sure that AI is not solely being governed by these large tech firms, but is instead steered by democratic societies.

If Professor Osborne had this kind of funding, he says, he would direct it toward the forms of AI that best serve the needs of citizens as individuals, and not just the commercial interests of a small number of powerful, opaque tech firms.

That kind of funding could be put towards developing alternative forms of AI that fit the needs of society at large rather than those of the tech firms, and towards finding the right regulatory and governance solutions.

Professor Osborne is concerned about the role these technologies might play in destabilising the geopolitical balance.

It is no secret that China is also pushing ahead with the development of AI, and one of the flashpoints of geopolitical conflict today is Taiwan.

Taiwan is responsible for building the chips that power most AI development today. For Professor Osborne, funding could be well put to thinking through what the right solutions might be for easing the present geopolitical situation, and to developing alternative forms of AI, making sure the technology is driven not just by the commercial interests of a small number of tech players but is aligned with the needs of society at large.

Another class of harms particularly close to Professor Osborne’s research interests is the economic consequences of automating work, where AI systems might substitute for work that historically has been performed by human beings.

This can be seen in ChatGPT writing poems and essays, and even entire grant proposals.

‘I am concerned that we might see disenfranchisement of already marginalised groups in our society.’

The concern is not necessarily that AI will lead to wide-scale unemployment, but that it might lead to the immiseration of workers in the form of lost income.

This is already happening: in cities where Uber launched, incumbent taxi drivers saw a loss of income of about 10%, and their job conditions certainly became less stable. These impacts might be felt simultaneously in quite diverse occupations.

Another class of harm that Mike is worried about is AI in the hands of malicious actors, including rogue states and even criminals.

Parents might receive a phone call from one of their children asking for an emergency transfer of funds; despite being in their child's voice, the call might have been placed by an AI trained on that child's social media output to perfectly mimic their patterns of speech. Capabilities like this could almost immediately enable a wide variety of new criminal attacks.

The final harms Professor Osborne is worried about centre on the potential for AI to pose existential risks that might threaten the continued survival of our entire species.

We already have technologies that pose such threats, most notably the introduction of nuclear weapons in the 20th century.

We should not take too much confidence from the fact that we have survived 60 years with nuclear weapons, because we are going to have to continue to survive with massive arsenals of these weapons floating around.

The risk that AI might destabilise geopolitical relationships, leading to nuclear threats, is already a reality; Taiwan, for instance, is a flashpoint between the US and China.

Some people have even proposed the integration of AI into nuclear command and control. 

The role of AI in exacerbating biological threats, such as the virus that caused so much loss of life in recent years, is also a concern for Professor Osborne.

AI could design a virus that would be even more potent and even more lethal; we should actively be trying to prevent these harms in the years to come.

‘The first thing I'd want funding to do is to make sure that AI isn't solely being governed by these large, opaque tech firms, but is instead being steered by democratic societies.’

The future of AI has never been more uncertain, he says: the technology is developing rapidly and is in the hands of hundreds of millions of people worldwide, all of whom are thinking through how they could use AI to further their interests.

AI will certainly play a role in changing the way we work and could lead to a loss of income.

Professor Osborne argues that, while the obstacles to replacing human intelligence and creativity remain the highest, it's difficult to say for sure what machines won't be able to do over the next five to ten years.

Despite advances in generative AI, we still need quite a lot of human creativity to get anything out of AI, at least for the foreseeable future.
