The risks of AI in schools outweigh the benefits : NPR

A stock photo shows elementary school students working on laptops.

The risks of using generative artificial intelligence to educate children and teens currently outweigh the benefits, according to a new study from the Brookings Institution’s Center for Universal Education.

The comprehensive study included focus groups and interviews with K-12 students, parents, teachers, and technology experts in 50 countries, as well as a literature review of hundreds of research articles. It found that the use of AI in education “could undermine children’s fundamental development” and that “the damage it has already caused is hard to overestimate,” though “fixable.”

Because generative AI is still young – ChatGPT was released just three years ago – the report’s authors dubbed their review a “premortem,” intended to assess AI’s potential in the classroom without the benefit of long-term data or hindsight.

The report outlines some of the advantages and disadvantages, along with a sample of the study’s recommendations for teachers, parents, school leaders, and government officials:

Pro: AI can help students learn to read and write

Educators surveyed for the report said AI could be useful when it comes to language acquisition, especially for students learning a second language. For example, AI can adjust the complexity of a passage based on the reader’s skill, and it provides privacy for students who struggle in large group settings.

Teachers report that AI can also help improve students’ writing, as long as it is used to support students’ efforts and not do the work for them: “Teachers report that AI can ‘spark creativity’ and help students overcome writer’s block. … In the drafting stage, it can help with organization, coherence, syntax, semantics, and grammar. In the revision stage, AI can help with editing and rewriting of ideas. It can also help with punctuation and capitalization.”

But if the report has one overarching caveat, it’s this: AI is most useful when it complements, rather than replaces, the efforts of flesh-and-blood teachers.

Con: AI poses a serious threat to students’ cognitive development

At the top of Brookings’ list of risks is the negative impact AI could have on children’s cognitive development – how they learn new skills and understand and solve problems.

The report describes a kind of destructive cycle of AI dependence, in which students increasingly offload their thinking onto the technology, leading to the sort of cognitive decline or atrophy that is commonly associated with the aging brain.

Rebecca Winthrop, one of the report’s authors and a senior fellow at Brookings, warns, “When kids use generative AI that tells them what the answer is… they’re not thinking for themselves. They’re not learning to separate truth from fiction. They’re not learning to understand what a good argument is. They’re not learning about different perspectives on the world because they’re not really engaging in the content.”

Cognitive off-loading is nothing new. The report states that keyboards and computers have reduced the need for handwriting, and calculators have automated basic mathematics. But AI has “turbocharged” this kind of off-loading, especially in schools where learning can seem transactional.

As one student told the researchers, “It’s easy. You don’t have to use your brain.”

The report offers plenty of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking, and even creativity. And this could have huge consequences if these young people reach adulthood without learning to think critically.

Pro: AI could make teachers’ jobs a little easier

The report says another benefit of AI is that it allows teachers to automate certain tasks: “generating parent emails… translating materials, creating worksheets, rubrics, quizzes, and lesson plans” – and much more.

The report cites several research studies that found significant time-saving benefits for teachers, including a U.S. study that found teachers who use AI save an average of about six hours per week – roughly six weeks over the course of an entire school year.

Pro/Con: AI can be an engine of equality or inequality

One of the strongest arguments in favor of educational use of AI, according to the Brookings report, is its ability to reach children excluded from the classroom. The researchers cite Afghanistan, where girls and women have been denied access to formal, post-primary education by the Taliban.

According to the report, a program for Afghan girls has “employed AI to digitize the Afghan curriculum, create lessons based on this curriculum, and disseminate content in Dari, Pashto, and English through WhatsApp lessons.”

AI can also help make classrooms more accessible to students with a variety of learning disabilities, including dyslexia.

But Winthrop warns, “AI could also massively exacerbate existing divisions”. This is because the free AI tools that are most accessible to students and schools may also be the least reliable and least factually accurate.

“We know that wealthier communities and schools will be able to afford more advanced AI models,” Winthrop says.

Con: AI poses a serious threat to social and emotional development

The report said survey responses revealed deep concerns that AI, particularly the use of chatbots, “is undermining students’ emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health.”

One of the many problems with overuse of AI by children is that the technology is inherently flattering – it is designed to reinforce users’ beliefs.

Winthrop says that if kids are largely building social-emotional skills through interactions with chatbots that are designed to agree with them, “it becomes very uncomfortable to be in an environment where someone doesn’t agree with you.”

Winthrop offers the example of a child interacting with a chatbot, “complaining about their parents and saying, ‘They want me to wash the dishes – it’s so annoying. I hate my parents.’ The chatbot will likely say, ‘You’re right. You have been misunderstood. I am very sorry. I understand you.’ Versus a friend who might say, ‘Dude, I wash dishes all the time in my house. I don’t know what you’re complaining about. This is normal.’ That’s where the problem lies.”

A recent survey by the Center for Democracy and Technology, a nonprofit that advocates for civil rights and civil liberties in the digital age, found that nearly 1 in 5 high schoolers said they or someone they know has had a romantic relationship with an artificial intelligence. And 42% of students in that survey said they or someone they knew had used AI for companionship.

The report warns that the echo chamber of AI could stunt a child’s emotional development: “We learn empathy not when we are completely understood, but when we are misunderstood and healed,” said one of the experts surveyed.

What to do about it

The Brookings report offers a long list of recommendations to help parents, teachers, and policymakers – not to mention tech companies – harness the goodness of AI without subjecting children to the risks the technology currently poses. Among those recommendations:

  • Schooling could be less focused on what the report calls “transactional task completion” or a grade-based endgame and more focused on fostering curiosity and a desire to learn. Students will be less inclined to ask AI to do work for them if they feel engaged in the task.
  • AI designed for use by children and teens should be less sycophantic and more “adversarial,” pushing back on preconceived notions and challenging users to reflect and evaluate.
  • Tech companies can collaborate with teachers in “co-design hubs.” In the Netherlands, a government-backed center already brings together tech companies and teachers to develop, test, and evaluate new AI applications in the classroom.
  • Overall AI literacy is important – for both teachers and students. Some countries, including China and Estonia, have comprehensive, national AI literacy guidelines.
  • As schools continue to adopt AI, it is important that underfunded districts in marginalized communities are not left behind, allowing AI to further exacerbate inequities.
  • It is the responsibility of governments to regulate the use of AI in schools, ensuring that the technology being used protects the cognitive and emotional health of students as well as their privacy. In the US, the Trump administration has tried to prevent states from regulating AI on their own, even as Congress has so far failed to create a federal regulatory framework.

With this “premortem,” the authors argue, now is the time to act. The risks of AI to children and teens are already abundant and clear. The good news is: There are solutions, too.
