AI in Schools: Risks Outweigh Benefits, Report Warns

Brookings study reveals AI's potential harm to student cognitive development and critical thinking skills, urging caution in classroom adoption.

A comprehensive new study from the Brookings Institution's Center for Universal Education raises serious concerns about the rapid integration of generative artificial intelligence in K-12 classrooms worldwide. After conducting extensive research across 50 countries—including focus groups and interviews with students, parents, educators, and technology experts, plus a review of hundreds of academic articles—researchers conclude that the current risks of AI in education significantly outweigh its potential benefits.

The report's authors characterize their work as a "premortem," a proactive analysis aimed at identifying potential failures before they become catastrophic. Unlike a traditional postmortem, which benefits from hindsight and long-term data, this assessment comes early in the AI revolution—ChatGPT launched just over three years ago—when preventive action is still possible. The findings are sobering: AI tools can "undermine children's foundational development," and the damage already observed, while concerning, remains fixable if addressed promptly.

The Promise: Where AI Shows Potential

The study acknowledges that AI isn't without merit in educational settings. Teachers surveyed reported several promising applications, particularly in language acquisition. For students learning a second language, AI systems can dynamically adjust text complexity based on individual skill levels, providing personalized reading experiences that would be difficult to replicate in traditional classrooms. Additionally, these tools offer a private, judgment-free environment for students who feel anxious participating in large-group settings, allowing them to practice and make mistakes without fear of peer scrutiny.

Writing instruction represents another bright spot. According to educator feedback, AI can serve as a valuable brainstorming partner, helping students overcome writer's block and spark creativity. During the drafting phase, these tools assist with organization, coherence, syntax, and grammar. At the revision stage, they support editing and rewriting while reinforcing lessons on punctuation and capitalization. However, this benefit comes with a crucial caveat: AI must support student effort rather than substitute for it.

The Peril: Cognitive Atrophy and Critical Thinking Decline

Despite these advantages, the report identifies a cascade of risks that demand immediate attention. Foremost among them is AI's potential to disrupt children's cognitive development—the fundamental processes through which young minds learn skills, perceive problems, and develop solutions. Researchers describe a troubling cycle of AI dependence, where students progressively delegate their thinking to algorithms, resulting in a form of cognitive decline typically associated with aging brains.

Rebecca Winthrop, a senior fellow at Brookings and co-author of the report, articulates the core concern: "When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction. They're not learning to understand what makes a good argument. They're not learning about different perspectives in the world because they're actually not engaging in the material."

This phenomenon of cognitive off-loading isn't unprecedented. The report notes that previous technologies, such as keyboards diminishing handwriting skills and calculators automating basic arithmetic, have similarly reduced certain cognitive demands. AI, however, represents a dramatic escalation of this trend. The technology hasn't merely streamlined tasks; it has "turbocharged" the off-loading process, potentially atrophying mental faculties essential for critical thinking, logical reasoning, and intellectual independence.

The implications extend beyond individual students. If entire generations grow accustomed to outsourcing their analytical processes, society could face a collective erosion of problem-solving capacity, creativity, and the ability to navigate complex ethical dilemmas. The report warns that this isn't merely about academic performance—it's about preserving the fundamental capabilities that enable democratic participation and innovation.

The Human Element: Irreplaceable Teachers

A consistent theme throughout the research is that AI's effectiveness depends entirely on its role in the classroom. The technology proves most valuable when it supplements rather than supplants human instruction. Teachers provide context, emotional support, moral guidance, and the ability to facilitate nuanced discussions—capabilities that remain far beyond AI's reach. The report emphasizes that effective implementation requires educators who can mediate between students and technology, ensuring AI serves as a tool for empowerment rather than intellectual laziness.

The teacher-student relationship builds foundational trust and motivation that no algorithm can replicate. Educators can detect subtle signs of confusion, adapt their approach mid-lesson based on classroom dynamics, and serve as role models for intellectual curiosity and perseverance. These human elements prove essential for developing not just knowledgeable students, but wise citizens.

Global Perspective and Urgent Recommendations

The international scope of the study reveals that these concerns transcend cultural and economic boundaries. From developed nations to emerging economies, the pattern remains consistent: without proper guardrails, AI integration risks creating generations of students who can execute tasks but cannot think deeply about them. The uniformity of these findings across 50 countries suggests this is a universal challenge of human cognition, not a localized implementation problem.

The report calls for immediate action from multiple stakeholders. Teachers need comprehensive training to understand AI's limitations and to design assignments that leverage technology while preserving cognitive challenge. Parents must monitor their children's AI use at home and advocate for thoughtful policies at school. School leaders should establish clear guidelines about when and how AI tools are appropriate, creating boundaries that protect developmental needs. Policymakers, meanwhile, bear responsibility for creating regulatory frameworks that prioritize student development without stifling beneficial innovation.

Looking Ahead: A Crossroads for Education

As schools stand at this technological crossroads, the Brookings report serves as both warning and guide. The potential for AI to democratize access to personalized learning remains real, but so does the risk of creating dependency and eroding the very skills education aims to build. The authors stress that the observed damages are "daunting, though fixable," but only if the educational community acts deliberately and soon.

The message is clear: AI can be a powerful ally in education, but it must never become a crutch. Preserving students' ability to think independently, question authority, and engage deeply with complex ideas requires maintaining the human teacher at the center of the learning experience. Technology should open doors, not close minds. The future of education depends on getting this balance right.