What impact will artificial intelligence have on education?

The growing popularity of artificial intelligence (AI) software, capable of generating images, sound and even text in a matter of seconds, is opening up a debate on technological transformation. In this February 2023 image, a teacher in A Coruña, Spain, gives a lesson in a classroom where technology, for the time being without artificial intelligence, is ubiquitous.

(José Álvarez Díaz)

The growing popularity of artificial intelligence (AI) programmes – which in recent months have proven increasingly capable of generating images, videos, music, computer code and even texts of all kinds in a matter of seconds, with results that often seem appropriate and coherent, and often do not – is arousing fascination and concern all over the world, especially among artists and creators.

What the AI tools of today can do is, at times, so spectacular and convincing that it is hard not to think it must be the work of a conscious being that comprehends what is being asked of it and understands what it produces in response. This is clearly not the case, but to the public at large it seems we are witnessing the sudden emergence of a revolutionary technology, full of potential and promise but also of perils that could transform our world.

This day may come, but it is further away than the flurry of expectation may lead us to think. What has happened in recent months, above all, is that technology already widespread and familiar to researchers, who had hitherto been experimenting with it behind closed doors, has suddenly started to see the light of day – not only to introduce it to the public, arouse interest and attract investors, but also so that the programmes could benefit from interacting with people and be ‘trained’ by millions of requests and users at once, a scale of activity and information that no company could otherwise secure for its AIs.

Last year, text-to-image generators such as Midjourney, Stable Diffusion or Dall·E were already beginning to catch the attention of thousands of new and curious users around the world, but the debate reached much greater heights with the public release in November 2022 of a similar programme that generates text: ChatGPT, by the company OpenAI. The results, delivered within seconds, can be so coherent that, in many instances, they appear to have been written by a human being.

It has taken society by storm and may be here to stay, but it is not, in itself, as transformative as it may seem.

At 94, Noam Chomsky, the father of contemporary linguistics, who has closely followed the entire history of AI to this day, cautioned in January 2023 that ChatGPT is irreparably flawed in that it gives the same value to information that makes sense in the actual world as to information that does not, because it cannot distinguish between the two or understand either. He argued that it does not, therefore, seem to be making any engineering or scientific contributions “except maybe helping some student fake an exam or something”.

From one day to the next, old student antics such as copying entire paragraphs from Wikipedia have indeed been rendered obsolete by technology that, with little effort, does the same job more creatively and more convincingly, in a matter of seconds. Are our educators prepared for the task of incorporating the existence of AI into our classrooms and our lives?

The perils AI poses for education

Equal Times spoke to Spanish researcher José F. Morales, who has first-hand knowledge of the subject, both as a professor of computational logic and as a member of the Artificial Intelligence Department at the Polytechnic University of Madrid (UPM) and the IMDEA Software Institute. In his view, the emergence of this technology as it stands today “won’t be very disruptive” and “if ChatGPT were introduced in schools tomorrow, it would probably lead to a lot of time wasting without offering any comparative advantage over studying with a good book”.

Present-day AIs “are not intelligent beings, they are not programmed to use abstract thinking, to reason or to understand what they produce; what they do is to learn from the structure of texts and the information we put within their reach, from patterns that repeat themselves, thanks to the unimaginable amount of data they have access to, an amount beyond the reach of human beings,” he tells us. “It would be dangerous to suggest that the way they work is somehow intelligent. It would be dangerous to lead people to believe that certain important decisions can be based on the results of this type of AI; and in education, it would be dangerous to suggest that they might be a reliable instrument for imparting knowledge, because while the information they give you is correct half the time, they can also indistinguishably present us with false information, using equally convincing arguments, the other half of the time.”

That is why, he stresses, they are “creative tools” that we should see as a kind of uninformed assistant, capable of both helping us and deceiving us without knowing it: “Because their use of language and argumentation is so good, you might think they are understanding what they are saying, when in fact they are like parrots, repeating things because they know how they sound, without understanding what they are saying.”

Their answers may even seem ingenious, but they are based on a mechanism similar to the predictive text of any search engine, only on a more complex scale and with an unimaginable amount of information behind it, which the AI imitates without question.

Therein lies their other great danger, Morales points out: these AIs, with computational power incomparably inferior to that of a person, respond based on something very similar to our intuition, engaging in a kind of cloaked plagiarism as they relay, copy or reproduce huge amounts of data available online – and in a totally opaque way, without us seeing where or from whom the data has been sourced, or how these sources are used to produce what they deliver.

Aside from the potential legal ramifications, and the fact that the way they operate is not based on logic (unlike other non-neural-network based AIs, which have been studied for many years but have not yet given rise to applications of use to the general public), they also carry another danger: “Their way of responding is, ultimately, very monotonous, and we have to be careful that they don’t end up standardising and killing creativity,” warns the expert. And the same applies in the classroom: “Copying is not bad, it is a necessary part of learning,” he says. “What is bad is copying but not saying from where.”

For Morales: “If students manage to learn with ChatGPT instead of with a book, then all well and good but, for now, books are a much more reliable and edifying source of knowledge.” In addition, “any country can afford to publish its own textbooks, but only two or three companies in the world have the capacity to train ChatGPT, which is done using thousands and thousands of texts, which they themselves select and tag,” and it is this information, inevitably limited and mediated, that would give rise “to ‘valid and accurate knowledge’, which would end up being set in stone, because you cannot change what the AI has learned without redoing the training”.

“Even if technology were beneficial in the classroom, even if ChatGPT were to reason, in whose hands are we leaving the students? Are we willing, as a society, to delegate to such an extent?” he asks.

“We should not forget that until technology allows us to train our own AIs we should be very cautious about whom knowledge management is delegated to and avoid the mistakes made with social media,” adds Morales.

That said, AI is already here, and many educators are going to find themselves wrong-footed by its more immediate implications.

No need to panic

“No need to panic, but there is a need for change,” Rose Luckin, researcher at the Knowledge Lab at University College London (UCL) and professor of learner-centred design at the UCL Institute of Education, tells Equal Times. “Education systems across the globe vary and some are better prepared than others to enable their students to coexist with and indeed benefit from living and working with AI,” says Luckin, a leading expert on this technology’s impact on education.

“Education systems that focus on learning facts and testing students about the extent to which they can process information and remember and reproduce it are not preparing students well for a future workplace where these types of skills and abilities will be done by AI systems.” But, she argues, those “that help students develop a sophisticated understanding of themselves as learners and of what knowledge is, where it comes from, how to make judgements about what good evidence is and make good judgements about what to believe, and most important of all, how to be good at learning, will build their students’ capabilities to thrive in a world where they coexist with many different types of artificial intelligence.”

“I don’t know of any particular institutions that are handling the situation well,” she acknowledges. “I think that’s something that will come to light over the coming weeks and months. But one thing is for sure: education institutions and systems must seize this opportunity and see it as a positive incentive for much-needed change.”

Luckin recommends harnessing the potential of this technology to make learning more fun and more effective, “using AI wisely” to perform more routine tasks and to encourage curiosity and the exploration of tools such as ChatGPT or Dall·E, to enable students to have “personalised learning journeys”.

But, above all, we should avoid pitfalls such as “failing to recognise that this is a game-changing technology” and there is no turning back.

“The genie cannot be put back in the bottle and it’s great that ChatGPT is waking up the education world to the implications of artificial intelligence for education and training,” insists Luckin, pointing out that it will soon be an integral part of our everyday communication technologies, from word processing to social media and search tools, which is why “we have to prepare people to be well equipped to use it”, rather than ignoring this reality in education or “seeing technology as a tool for cheating”.

For Martin Henry, research coordinator at Education International (EI), the global union federation representing education workers’ organisations: “What we need to teach students is digital citizenship, we need to get them involved in ideas of digital safety. We are just adjusting to technology that we haven’t seen before and that could have a far-reaching impact, or not, depending on how we deal with it and whether we humans are making the decisions.”

Henry, like Morales, sees the tremendous danger in delegating our decision-making processes to artificial intelligence, because, depending on how it is programmed or what data it feeds on, we can end up with an “algorithm that may be racist in nature, or may be based on the wrong data, or might be doing what the English algorithm was doing and upgrading the private students and downgrading the working-class students”, as was the case a few years ago in England during Covid. “An algorithm will do what you ask it to do, and if what we are asking it to do is wrong, then we have a problem. That’s what I think we should be focusing on,” he concludes.

The debate, for educators, seems to be about the need to move with the times. This is also the view of Pasi Sahlberg of Finland, a former director general at the Finnish Ministry of Education, current Professor of Education at Southern Cross University in Lismore, Australia, and one of the world’s leading experts on education policy. “As far as I am aware, everybody is trying to figure out what to do with technology, the AI and VR that are slowly finding their places in mainstream schooling,” he recently told Equal Times.

“We still debate whether smartphones and other internet-based gadgets that young people have should be banned in school. Soon, I think sooner than we think, these gadgets will be embedded in what we wear or even within us, which will make these blanket bans practically impossible.” For him, the challenge lies, rather, in updating schools “to fit into these new futures”. Accordingly, he concludes: “Learning to live a safe, responsible and healthy life, with all that technology with us, is perhaps one of the most important 21st century skills there is.”

This article has been translated from Spanish by Louise Durkin