Expert Comment:

Artificial intelligence in education.

By Dominik Lukeš, Assistive Technology Officer, Centre for Teaching and Learning.

Just over a year ago, ChatGPT was launched by the San Francisco-based firm OpenAI.

ChatGPT is an example of a generative AI tool based on a Large Language Model (LLM), which enables users to create sophisticated, human-like text in any format, style, and even language, through a chatbot interface.

It quickly went viral, and since then generative AI (artificial intelligence that can generate and understand text) has become synonymous with ChatGPT.

Generative AI capabilities overlap with the tasks performed as part of academic work to a degree unmatched by any previous technology.

Unsurprisingly, the potential implications for students, academics and the wider education sector have been the subject of intense interest as the sector and institutions such as Oxford grapple with the opportunities and challenges that this technology presents.

There are also now credible alternatives to ChatGPT.

These include Anthropic’s Claude, Google’s Bard and Microsoft’s Copilot, as well as many other products built on the same Large Language Models that give generative AI tools their power.

Some are free, but many of the most powerful ones require payment.

Considerable advances are also being made in image and audio generation. What is clear is that generative AI is here to stay, and is already being used to make real and meaningful contributions to people’s work.

The new era of generative AI introduced itself in the form of a chatbot.

But this simple interface belies a wealth of potential whose full extent has yet to be explored.

Among the unexpected uses that have emerged, over and above simply generating text, are: extracting information from text (for instance, finding the people mentioned in it); presenting information in tables or structured lists; generating and correcting computer code in a variety of languages; generating multiple-choice questions about a text; explaining and analysing abstract concepts, including metaphors; and answering questions about well-known facts (including the contents of some books).
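
To make the first of these uses concrete, here is a minimal sketch of how one might ask a model to extract the people mentioned in a passage, using the OpenAI Python client. The model name, prompt wording and example text are illustrative assumptions, not recommendations.

    from openai import OpenAI

    client = OpenAI()  # expects an OPENAI_API_KEY environment variable

    def extract_people(text: str) -> str:
        """Ask a chat model to list the people mentioned in a passage."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative choice; any chat model works
            temperature=0,  # lower temperature reduces run-to-run variation
            messages=[
                {"role": "system",
                 "content": "List the people mentioned in the user's text, one name per line."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(extract_people(
        "Ada Lovelace corresponded with Charles Babbage about the Analytical Engine."
    ))

Even a sketch like this shows how a chat interface doubles as a programmable text-processing tool, though, as discussed below, its output is never guaranteed.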

Despite advances on many fronts, these tools remain unpredictable, and in an academic setting their output must come with a healthy caveat. 

Large Language Models (LLMs) are trained to generate text that is likely to occur, and so will often generate plausible but entirely fictional facts – because of their probabilistic nature, these will be different every time.

In fact, even the original announcement of ChatGPT included the warning that it sometimes writes ‘plausible-sounding but incorrect or nonsensical answers.’

Examples include non-existent links, invented titles of books or papers, and made-up biographical details about real people, often seamlessly embedded within perfectly factual statements.
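
To see why these fabrications differ every time, consider a toy sketch of the sampling step at the heart of a Large Language Model. The candidate continuations and probabilities below are invented for illustration; a real model chooses among tens of thousands of tokens at every step.

    import random

    # Invented probabilities for three plausible continuations of a prompt.
    # A real LLM assigns a probability to every token in its vocabulary.
    next_token_probs = {
        "1953": 0.40,  # plausible and correct
        "1951": 0.35,  # plausible but wrong
        "1962": 0.25,  # plausible but wrong
    }

    def sample(probs: dict) -> str:
        """Pick one continuation at random, weighted by its probability."""
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    prompt = "The structure of DNA was first described in "
    print(prompt + sample(next_token_probs))  # output varies between runs

Because every continuation is plausible, a wrong answer reads just as fluently as a right one, which is precisely what makes such fabrications hard to spot.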

Identifying AI-generated content has emerged as an obvious concern for educators.

So far, the ability to detect AI-generated content has not kept pace with developments in generative AI models, and while AI detectors may have some success, they do not offer nearly the level of reliability required in academic settings.

In particular, the rate of false positives is alarmingly high. It is not even clear whether 100% detection is possible in principle.
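
A back-of-the-envelope calculation shows why false positives matter so much; every rate below is invented purely for illustration.

    # Hypothetical detector: flags 95% of AI-written essays, but also
    # wrongly flags 5% of human-written ones.
    true_positive_rate = 0.95
    false_positive_rate = 0.05
    share_ai_written = 0.10  # assume 10% of submitted essays are AI-written

    flagged_ai = true_positive_rate * share_ai_written            # 0.095
    flagged_human = false_positive_rate * (1 - share_ai_written)  # 0.045

    # Of all flagged essays, what fraction are genuinely AI-written?
    precision = flagged_ai / (flagged_ai + flagged_human)
    print(f"{precision:.0%} of flagged essays are AI-written")  # ~68%
    print(f"{1 - precision:.0%} would be false accusations")    # ~32%

Under these assumed numbers, roughly a third of flagged essays would be false accusations, and the proportion grows as the share of genuinely AI-written work shrinks.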

Over the last year there has been an explosion of AI-based tools, many of which are aimed at students and educators.

AI features are now also being introduced into established products, such as Google Workspace, Microsoft Office with Copilot, and Meta’s WhatsApp and Instagram, with more being announced on a regular basis.

This is likely to further blur the boundaries between content created by AI and content created manually using traditional approaches.

There is no doubt that generative AI and Large Language Models will continue to play an increasingly significant role in all academic contexts, and their quality and reliability are likely to increase.

But it is not enough to rely on advances in technology; it is also incumbent on all of us to engage with these tools and become aware of their potential and limitations.

At Oxford, we are working to ensure that we understand and, where possible, embrace the opportunities that this evolving technology can offer in support of innovation in the teaching and learning sphere.

In July last year, we supported the development of the Russell Group’s principles on the use of AI in education.

These set out our commitment to share best practice across the sector, adapt our teaching and assessment methods, and support staff and students in becoming AI-literate and able to use generative AI as an effective part of the learning experience.

Closer to home, the Centre for Teaching and Learning and the Disability Advisory Service at Oxford have established a Reading and Writing Innovation Lab.

The Lab will lead our efforts to track the impact of this digital transformation on reading and writing, test new developments, support the academic community in decisions about the best technologies, and make assistive technologies widely available to students and staff at Oxford.

We are also aware of the ethical dimensions and are continually engaging in discussions on the responsible use of AI.

This includes having to think more deeply about how to ensure academic integrity in the face of ubiquitous content-generation tools, and continuing to search for a balance between developing students’ skills to engage critically with academic content and preparing them for a world in which AI will play an increasing role in generating that content.

Read more about AI and education:

Beyond ChatGPT: A report on the state of generative AI in academic practice

Read the Centre for Teaching and Learning's October 2023 report on the state of generative AI in academic practice.

AI: Taking advantage of the changing benefits and addressing the glaring concerns

Rebecca Snell from Oxford’s Department of Education has co-authored a blog for The Heads’ Conference (HMC) on the benefits of and concerns with AI.

More on #OxfordAI