Rose Luckin from the Institute of Education at University College London and Director of EDUCATE, a London hub for startups in educational technology (EdTech), compares artificial intelligence (AI) to opening up the ‘black box’ of learning.
“90% of teachers in the UK have used some educational technology that they had never used before and would recommend it to a colleague,” she says.
But if this technology depends on making sense of data, how can we protect privacy?
This question, and the further questions it raises, were discussed by EdTech experts in an interview at the International Conference on Educational Innovation (CIIE).
Artificial intelligence will never surpass human intelligence
AI gets a lot of negative publicity, especially from science fiction movies. “But this technology will never surpass human intelligence, because ours is much more complex,” says Luckin.
The systems we have now are smart, but in much more specific ways. For example, an autonomous vehicle can’t play chess, and a chess engine can’t drive a car. AI is very specific in its application, she explains.
“When we program machines, the most difficult part is getting them to do the things we take for granted: society’s unwritten rules,” she explains. For example, they can’t handle interpersonal relationships the way we do.
AI isn’t good at developing deep and meaningful relationships with other humans, and relationships are essential for conveying quality messages in education.
What can AI improve?
So, while it will never replace us, it will improve certain aspects. “It’s good for reliable processing of large amounts of information, something that humans have a hard time with.”
Avi Warshavsky, founder and CEO of MindCET EdTech Innovation Center and member of the board of directors at the Israel Center for Educational Technology, gives a further example.
“AI works very well for language learning because the machine isn’t judging, and that’s a great way to learn. It’s a system of dialog without judgment.”
What will happen to our data?
AI is data-driven. It uses data to analyze, solve problems, identify patterns, and more.
The first step in explaining to people what will happen to their data is “empowering them. We need to educate them, help them understand what AI is, why it needs data, and what it can do with it,” explains Warshavsky.
That way, it will be easier for them to decide if they want their information to be used. This is known as the ethical application of artificial intelligence.
However, many questions remain. He warns that EdTech specialists need to ask themselves very carefully why they are accumulating data.
For example, every time Luckin, as a researcher, decides to work with data, she runs her decision by an ethics committee.
In fact, in 2018, Luckin and Anthony Seldon founded the Institute for Ethical Artificial Intelligence in Education in the UK, to ensure that laws are respected and that a data protection agreement exists between Europe and the UK.
One of the solutions they’ve applied is to program AI to learn behaviors anonymously.
Tec de Monterrey’s eighth annual CIIE
Tec de Monterrey’s eighth annual International Conference on Educational Innovation was held in a hybrid format from December 13 to 16, 2021.
“(The CIIE) has established itself as an outstanding meeting space for professionals from different parts of the world... We involve everyone who shares the responsibility of creating the future of education,” said José Escamilla, director of the Tec’s Institute for the Future of Education.
This year, more than 5,000 attendees from 34 countries participated.