It’s Time to Pause Use of Artificial Intelligence in Early Childhood Education
- Faith Rogow, Ph.D.
- Mar 11
Updated: Mar 12
Reprinted by permission from Faith Rogow, PhD. Find original post here.

Introduction
It is a rare week when I’m not asked to address aspects of Artificial Intelligence (AI) in education in general, and early childhood education in particular. Every education conference, professional development request, and parent group seems to want a session about AI. The FOMO is on steroids.
I understand why people assume I’d be eager and qualified to help them navigate the challenges that AI integration presents. My career as a media literacy education specialist has been grounded in a very practical approach to media technologies: Media are ubiquitous and are an inextricable part of children’s culture. In that reality, avoidance strategies determine only who will mediate children’s media experiences, not whether they will have them.
So, rather than asking adults to serve as compliance officers in charge of enforcing some arbitrary number of minutes interacting with digital screens or keeping devices out of young hands altogether, I have always argued that we best serve children by
making and using excellent age-appropriate media,
advocating for thoughtful regulation, and
providing families, librarians, childcare professionals, and other teachers with the information and skills they need to integrate media and media literacy into their work and parenting.
I expected to take this same practical approach to AI, and that’s the article I set out to write. It is not what I ended up writing, and I anticipate that some of my colleagues will be surprised by the result.
As it turns out, generative AI – at least in its current incarnations – is different enough from previous media technologies to require a change in strategy. We can’t afford to respond to the FOMO as if “resistance is futile” (IFYKYK). Allow me to explain.
Executive Summary
For those who are already nearing their TL;DR limit, this is a long post with a lot of important questions and ideas, but here’s the short version:
There is no need or benefit compelling enough to justify the costs of using generative AI with young children – and I don’t just mean financial costs. AI technologies are evolving at a rapid pace, so this conclusion may change in the not-too-distant future. But for now, the best course of action is to declare a moratorium on using generative AI in early childhood education. This moratorium should remain in place until:
AI tools are specifically designed to be developmentally appropriate for young children and meet needs which cannot reasonably be met using non-AI tools.
Errors and discriminatory stereotypes in generative AI responses are extremely rare.
The energy and water needs of AI tools can be accommodated without making global warming worse or endangering availability or affordability of electricity or water for individual families and communities.
None of these conditions exists today, but they are achievable.
Like the scorpions and tigers of fables, businesses shouldn’t be expected to be something they’re not. To be sure, there are AI entrepreneurs and small start-ups who are giving serious thought to the ways that generative AI might help meet the needs of young children, and I look forward to seeing the results of their efforts. But thus far, the questions and decisions of big tech have prioritized profits over benefits to children and families. Let’s not rely on them to lead our evaluation of AI.
I expect and welcome pushback on my proposed moratorium. My hope is that people will expand the dialogue beyond responding to me. Talk with colleagues in your workplaces, professional organizations, and public policy forums. Such engagement is the way we inject public interest concerns into decision-making about AI in education. If we don’t frame this conversation, marketers will.
Scope
For the general public, AI is a collection of evolving applications, many of which are not exceptionally new. My call for a moratorium is narrow, applying to those that typically concern educators and parents – the generative tools that can write and speak, summarize, respond to queries, have conversations, recognize faces, and/or render still and video images and audio (including deepfakes).
The pause is also specific to early childhood education (preschool through grade 2). It is limited to teaching with generative or character AI and does not apply to assistive technologies that allow for the inclusion of children with needs that make it difficult to learn or socialize in mainstream education settings without specialized help.
There are important discussions to have about using AI at home or with older youth or for professional tasks that don’t include direct interaction with children. I hope those conversations take place; they aren’t the focus here.
The Demand for Artificial Intelligence
The drumbeat for integration of generative AI is egged on by ubiquitous and often extravagant claims from tech companies. Scattered public interest voices exist as a counterbalance, but they rarely reach beyond relatively small circles of academics or tech journalists. Many educators report that they feel pressured to integrate AI, even though they aren’t being given justifications based on sound pedagogy or students’ well-being.
This relentless demand is very different from what early childhood educators experienced during the introductory phases of previous technologies. For example, in the 1960s push for educational TV in the U.S., PBS and non-profit producers like Sesame Workshop and Fred Rogers broke through the mostly shallow offerings of commercial media to prioritize children above advertisers.
These non-profit producers invested in formative research about what children needed and how media could meet those needs. Their resulting success eventually pushed other producers to match their excellence, and that push eventually extended into the creation of educational videos, digital games, and apps. So when people like me encouraged early childhood educators to integrate educational media into their work, we could share dozens of excellent TV series, videos, films, games, and apps backed by solid research, and we could recommend integration strategies that achieved learning goals in ways that couldn’t be replicated by in-person activities or printed texts.
There is no corollary to that research base or public interest voice in the current wave of AI.
The Education Challenge
In contrast to the upper grades, in preschool and the primary grades no child is trying to pass off a ChatGPT essay as their own work, create a deepfake nude of their ex, seek therapy from an AI chatbot, or avoid homework by relying on AI to summarize a lecture or textbook. But young children are encountering simple search tools, AI-embedded toys, chatbots, voice assistants, AI-enhanced education apps and voice-to-image-generation apps. Additionally, children, ever-observant, are watching the ways that adults (and older siblings) interact with AI tools.
Because generative AI is in the lives of young children, educators can’t just look away. Fortunately, there are educators who have embraced teaching media literacy skills and computational thinking in the early years. These professionals are already preparing children for life with AI. Habits of inquiry, observation, reflection, and pattern recognition are easily transferable to AI when it becomes necessary. Developing those habits doesn’t require children to use AI.
There are no unique-to-AI foundational brain pathways that, if not created at a young age, impede healthy development or later skill acquisition. Nor has anyone suggested that using AI will require some type of sequencing, in which children progress from foundational beginnings to more complex skills later in life (like teaching emergent readers the alphabet and letter-sound correlations so that they will eventually be able to read complex texts).
In fact, part of the “magic” of generative AI is that it is intuitive and accessible via oral language rather than symbol systems. I’ve seen children squeal with delight at the images they can create with voice commands, and I’ve seen intriguing possibilities, like bots that children can teach as a way of practicing their own skills and language. But I have yet to see anyone demonstrate a necessary use by young children for a generative AI app. And that word “necessary” is important.
We must ask what’s necessary, and not just what’s possible, because the known and potential costs of AI are steep. As I’ll explain in a moment, I find the climate costs, and the ethical conundrum they create, especially disturbing.
So, can children learn from AI? Of course. But that’s the wrong question. What we must ask instead is, “Can they learn from AI better than from other technologies or strategies?” This is the question that lets us meet our responsibility to the children and families we serve: to be sure that what we’re choosing to do is in their best interests, and not just an impulsive desire to ride the next wave.
Evaluating AI: Developmental Issues
Can AI be developmentally appropriate for early childhood? Generative AI is too new for a reliable research-based answer to that question. But there are things we know about the technology and things we know about child development that are informative:
1. No Kids Allowed
None of the common generative AI tools permits use by children under the age of 13. This is the industry’s way of telling us that their products aren’t suitable for young children. We should believe them.
To be clear, I’ve never encountered an early childhood educator who encouraged young children to use AI independently. Instead, the educators serve as interfaces for the technology. This arrangement doesn’t violate use policies, but it does make me wonder what the rush is when it will be years before children are able to use the apps on their own (at which time, whatever they’ve already learned is likely to be outdated).
2. No Privacy Opt Outs
Common AI systems, especially those that are available for free, surveil users and do not guarantee data privacy. They offer no choice to opt out of data collection (under the premise that such data is essential to train the application).
Even savvy adults are susceptible to using prompts that unintentionally reveal private information. When their employees do it, businesses call it “data leakage.” Imagine how much more likely it is that a child would innocently reveal things they shouldn’t (e.g., “It’s my birthday. I’m six. Draw me with a giant cake!”). Oops. Now the app knows the child’s date of birth and so will every entity that purchases (or steals) data from the app’s owner. Unlike legacy websites or online games, AI interfaces involve conversations that go well beyond the possibility of a child sharing their full name, phone number, or location.
Children learn best when they feel safe and secure and have the freedom to explore, make mistakes, try again, and change their minds. It’s not clear that this can happen if mistakes are never reversible and the presence of AI means that everyone in our education spaces is being tracked, judged, and spied on by strangers.
3. Equity Issues
An April 9, 2024 piece in Nature summarizes the essence of the equity problem: “The market's current monetization strategy consists of providing open, but registered, access to legacy models, while hiding the most advanced versions behind paywalls.” So those who have the means to pay can access the safest, most up-to-date tools. Those who lack the funds are left with mediocre tools that aren’t designed to be used with children.
4. AI Tools are Unreliable
AI tools are getting better, but much more slowly than industry leaders claim. Error rates range from 1% to more than 50% depending on the tool, and distortion is endemic to the system. Asking young children to spot AI errors is asking them to do something with which even highly educated adults struggle. See, for example, this summary of instances in which lawyers filed briefs citing cases that an AI tool completely invented.
Or, consider “S.A.R.A.H.,” the chatbot embedded on the website of the World Health Organization (WHO). It includes this disclaimer:
WHO Sarah is a prototype using Generative AI to deliver health messages based on available information. However, the answers may not always be accurate because they are based on patterns and probabilities in the available data. The digital health promoter is not designed to give medical advice. WHO takes no responsibility for any conversation content created by Generative AI. Furthermore, the conversation content created by Generative AI in no way represents or comprises the views or beliefs of WHO, and WHO does not warrant or guarantee the accuracy of any conversation content. Please check the WHO website for the most accurate information. By using WHO Sarah, you understand and agree that you should not rely on the answers generated as the sole source of truth or factual information, or as a substitute for professional advice.
Aside from the astoundingly ironic suggestion to check the WHO website for accurate information when one is already on the website and has been invited to seek information from its AI assistant, there is zero chance that a beginner-level or emergent reader could access or understand the caution. In fact, one of the arguments in favor of chatbot technology is that it can provide information access to people (including adults) who can’t read.
Disclaimers, written or spoken, are clearly not a winning strategy for helping young children learn how to discern source credibility. If the best we can offer young children is a confusing explanation for why we’re asking for information from a source that we know, for certain, is wrong on a regular basis, it’s time to rethink. And it isn’t just factual queries that are concerning.
AI companions have advised teens to engage in unhealthy behaviors and even recommended suicide. Faulty facial recognition programs have led to wrongful arrest and detention, and denial of housing and employment. For examples, check out the work of Joy Buolamwini and the Algorithmic Justice League.
It is also important to acknowledge that AI routinely traffics in stereotypes. Sometimes it’s blatant, but often it is subtle. For example, young children seem to love asking AI image-generators to draw kids doing various (often silly or fantasy) activities. I’ve observed such interactions in which every initial image the AI app offered was of an able-bodied Caucasian child.
Making any single type of person the default, especially if we aren’t pausing to notice and discuss that pattern, is instilling stereotypical notions of what constitutes the norm and what is “other.” That “othering” remains true even if we provide children with the opportunity to offer editing prompts. If the app requires a user to specify every race other than White or it defaults to a particular body type or gender, it’s reinforcing a systemic (and harmful) imbalance.
I hear the media and information literacy (MIL) educators ask, “Isn’t that what we teach now when we use Google image searches to help children learn to identify and analyze patterns on book covers or in stock images of professions like scientists or leaders?” The short answer is, “Not exactly.” The concepts of repetition, patterns and stereotyping may be the same, but when we’re looking at Google images we know that we didn’t create the images we’re analyzing.
We don’t know what the impact is when young children prompt AI to “invent” an image based on what they’re thinking only to have that image always show up with certain features that might not have been what they were imagining. Do children start internalizing norms, as if the norms came from themselves and not from a media technology tool? Does that matter? Given the Clarks’ original research with dolls and, more recently, Project Implicit’s Implicit Association Tests, I suspect we’ll discover that it does.
5. Eroding Critical Thinking
A recent study from Microsoft and Carnegie Mellon University revealed that “the more confident human beings were in AI's abilities to get a task done, the fewer critical-thinking skills they used.”
If that’s the way AI affects adults, imagine what it might do to young children, whose brains are in the process of physically constructing the pathways that enable high-level executive functioning. Do children who become accustomed to getting simple, immediate answers from AI develop the same thinking, problem-solving, and language skills as children who try to answer their own questions?
Without AI, children find answers by experimenting and by asking real people. Along the way they learn that different people answer the same question differently, and some people will answer in ways that are culturally responsive and help build positive identity. In contrast, AI provides single or limited answers and isn’t likely to respond with cultural nuance unless specifically asked to do so. Could AI play a role that would not displace alternative interactions that are essential for children’s healthy development? Maybe. Is it possible that use of AI will build skills and brain pathways in ways that we haven’t yet identified? We don’t know.
6. Adult Content
AI-generated responses reflect the values of their training data. Products that scrape publicly available data reflect all the hate-filled, violent, prejudiced, slimy rhetoric, and, of course, sexual content that is common on many social media, gaming, and other Internet sites.
Some tools offer customization options that allow users to minimize problematic results, but not without taking the time to train your system yourself to create guardrails that prevent unwanted interactions. This is a must for early childhood educators; otherwise you risk responses that include adult content. Even if you are willing to take on the training, it is surprisingly difficult to craft customized systems that are not error prone, as Google found out in February 2024 when, with good intentions, it directed Gemini to generate images that were more diverse and ended up with a Black George Washington and other similar historical inaccuracies.
7. Changing the Nature of Play
Play is how young children learn. The games and conversations and plots they invent allow them to process events and emotions, try out different roles, develop language and social skills, and test the abilities of their bodies. No doubt play will continue, even in a world infused with AI. The question is, will AI change the nature of play in ways that diminish its benefits?
For example, imagine a child in “conversation” with a couple of stuffed animals or action figures. The only parts of the scene they speak out loud are their own lines. But their imaginations are inventing an entire conversation. They are “hearing” everything each toy says. This allows them to explore various options and invent changing scenarios in an environment that is totally in their control (a rare opportunity for young children!).
How might the benefits of this sort of free play change when character AI toys speak with children rather than children imagining all parts of the dialogue themselves? We don’t know. Nor do we know how interacting with an object that speaks like a human but does not have facial expressions that change (or have a face at all!) will influence what children learn about the connections between language and emotions, social cues, and the physical aspects of voicing words.
From a developmental standpoint, it is clear that we have some important questions to answer before we can be enthusiastic about using generative AI with young children. Rather than simply opposing AI, let’s take some time to try to find answers.
Evaluating AI: Ethical Issues
If you ask most early childhood educators how they arrived at their decision to use or not use a particular media technology, at some point they are likely to cite media effects research to back up their choice. Such research will continue to play an important role in our decision-making process, as we seek to determine AI’s direct effects on children, such as its impact on language or brain development, or learning a particular skill.
But because our responsibilities to children also include ensuring that they have safe, secure environments in which to learn and grow, we must consider more than direct effects on learning and development. That consideration starts with the fact that the most commonly available generative AI tools are constructed from inherently unethical practices.
Labor Issues
Exploitation of workers is disturbingly common in many sectors of the economy (e.g., the garment industry, mining, agribusiness, and even social media platforms). AI companies similarly mistreat workers, as James Muldoon and Mark Graham of the Oxford Internet Institute have documented in Feeding the Machine.
Additionally, many people have objected to AI because some companies are specifically using it to replace workers. That outcome tracks historically. Job displacement has occurred with the introduction of every significant new technology across the ages.
The fact that the AI industry treats labor in ways that are common does not earn it a free pass. For some, this will be where they draw the line. For me, advocacy to pressure governments and companies to urgently address the concerns is certainly on my “to do” list, but if labor issues were the only concerns, I wouldn’t be calling for a moratorium.
Copyright Violations
In contrast to labor issues, some problems are unique to AI. Arguably the most obvious is that stolen work (copyright-protected material scraped from the Internet without permission or compensation) is irretrievably embedded in generative AI tools.
Sam Altman, CEO of OpenAI, has bluntly acknowledged that ChatGPT and tools like it are “virtually impossible without [using] copyrighted content [for free].” He and other leaders of AI companies have argued that this isn’t a problem because the data they gather for training is fair game under “fair use” copyright law. [Insert skeptical raised eyebrow emoji here]. It’s an unserious claim for a variety of reasons, not least of which is that AI apps use the scraped data to create products that directly compete with the original sources.
From a content creator’s perspective, it gets worse, because in most cases it is not possible to opt out of having one’s work scraped. Blanket payments have been offered as recourse in some situations, but, given current error rates and absence of controls, these agreements can’t guarantee that original creators are given credit or that their work won’t be terribly distorted.
To further compound the problem, there is no turning back. As systems grow and learn from themselves and from interactions with users, it is increasingly impossible to separate stolen work from material that is used with permission or derivative but not plagiarized.
This presents a conundrum for MIL educators. Every comprehensive MIL curriculum includes teaching budding media creators about copyright and fair use. It’s tough not to appear hypocritical when we advise students to abide by the rules while AI industry titans are seemingly exempt from the same expectation.
As with labor issues, if reliance on intellectual property theft were the only major issue with generative AI, I probably wouldn’t call for a moratorium. It’s a reality we can’t change, and realistically, the need to prepare children to live with AI would outweigh my inclination to avoid it.
I would, however, insist that everyone understand the source of training data for the AI tool(s) they are using – and that’s a significant glitch. I’m not sure there is a developmentally appropriate way to help very young children understand how AI uses stolen source material. This remains a substantive obstacle as I think through the choice to use or reject generative AI.
Climate Crisis
There is one clear, significant issue that we can neither ignore nor teach our way around: generative AI requires huge amounts of water and energy. Until 100% of the demand it creates is met by renewable sources, generative AI is a significant contributor to the climate crisis.
To be fair, AI is not the only guzzler of energy, nor is it responsible for the pre-existing lags in keeping the U.S. electric grid up-to-date. That said, it is fair to give generative AI special attention because its current energy needs far exceed the technologies it is replacing. Also, rapid, exponential growth of its outsized energy demand is a built-in facet of the technology.
Consider, for example, these comparisons of normal generative AI use to other energy uses (a quick arithmetic sanity check follows the list):
According to research scientist Sasha Luccioni, compared to older AI models trained to do a single task (such as question-answering or translating), new generative models can use up to 30 times more energy just for answering the exact same set of questions.
Sajjad Moazeni, a University of Washington assistant professor of electrical and computer engineering, concluded that, “Just training a chatbot can use as much electricity as a neighborhood consumes in a year.” And training isn’t a one-and-done thing. To keep chatbots current, training must be ongoing, which has led some to predict that within the next few years, large AI systems are likely to need as much energy as entire nations.
Stable Diffusion XL uses almost as much energy to generate a single image as it takes to charge an average smartphone.
Training GPT-3 is estimated to have generated 552 tons of carbon dioxide equivalent, roughly what 123 gasoline-powered passenger vehicles emit when driven for one year. And that’s prior to public launch.
You’d have to watch 1,625,000 hours of Netflix to consume the same amount of power it takes to train GPT-3.
When the supercomputer that Musk is building in Memphis to accommodate Grok gets to full capacity, the local utility says it’s going to need a million gallons of water per day and 150 megawatts of electricity — enough to power 100,000 homes for a year.
The Bloomberg Editorial Board reports that “by 2026, booming AI adoption is expected to help drive a near-doubling of data centers’ global energy use, to more than 800 terawatt-hours — the annual carbon-emission equivalent of about 80 million gasoline-powered cars.”
Goldman Sachs estimates that by 2028, AI will account for about 19% of data center power demand. In 2022, that number was 3%.
Alex de Vries, a data scientist at the central bank of the Netherlands, reports that Nvidia is set to ship 1.5 million AI server units per year by 2027. These 1.5 million servers, running at full capacity, would consume at least 85.4 terawatt-hours of electricity annually—more than what many small countries use in a year.
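For readers who want to check equivalences like these themselves, here is a minimal back-of-envelope sketch in Python. The two reference constants (roughly 4.6 metric tons of CO2 per U.S. passenger vehicle per year, and roughly 10,600 kWh of electricity per U.S. household per year) are commonly cited averages that I am assuming for illustration; they do not come from the sources above, and the point is only that these comparisons are simple arithmetic anyone can redo.

```python
# Back-of-envelope check of two equivalences cited above.
# Assumed reference constants (commonly cited U.S. averages, NOT figures from the article):
TONS_CO2_PER_CAR_YEAR = 4.6   # metric tons of CO2 per passenger vehicle per year
KWH_PER_HOME_YEAR = 10_600    # kWh of electricity per household per year

# GPT-3 training (552 tons of CO2) expressed as passenger-vehicle-years.
car_years = 552 / TONS_CO2_PER_CAR_YEAR
print(f"552 tons CO2 ~ {car_years:.0f} cars driven for a year")  # ~120, close to the cited 123

# 150 megawatts of continuous demand expressed as households served for a year.
kwh_per_year = 150_000 * 24 * 365  # 150 MW = 150,000 kW, times hours in a year
homes = kwh_per_year / KWH_PER_HOME_YEAR
print(f"150 MW ~ {homes:,.0f} homes for a year")  # ~124,000, same order of magnitude as the cited 100,000
```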
Bottom line? Until the grid is based exclusively on renewables, AI contributes significantly to global warming. And industry knows it.
Blackstone’s Stephen Schwarzman estimated that the AI boom could propel electricity use to soar 40% over the next decade and observed that the technology will max out the capacity of the existing grid by 2028. Sam Altman is convinced that a breakthrough in nuclear fusion technology will come to the rescue, even though it seems highly unlikely that such tech could be scaled quickly enough to meet the demand even if the breakthrough occurred this week.
Elon Musk is supporting the development of hydrogen-based fuel but has not yet succeeded at getting it to the scale needed by generative AI. Others (notably Microsoft, Google, and Amazon) are relying on newer, smaller versions of familiar fission nuclear energy technologies, even running ads to convince the public that nuclear energy is safe.
Note that none of these initiatives actively involves the public in discussions about the pros and cons of specific energy solutions or what might best serve the public interest. And in the case of nuclear, while companies would pay to build new facilities, it is unclear who would cover the financial costs of dealing with the toxic waste that existing nuclear power produces or pay for security to prevent theft or dirty bombs as smaller nuclear facilities begin to dot the landscape.
It is important to ask the civic engagement question because the existing track record of acting in the public interest is dubious. The expansion of data centers, often encouraged with tax incentives, is putting them in competition with existing power customers. Basic supply-and-demand economics tells us that price increases will follow and, in fact, that is exactly what has occurred.
In Arizona, ratepayers saw an 8% increase, and the state chose to prioritize data centers over delivering power to Native American communities. In Berwick, PA, the Federal Energy Regulatory Commission (FERC) had to step in (at least temporarily) to block a deal between Amazon (AWS) and the Susquehanna Nuclear Plant. AWS planned to build an on-site data center that would directly “plug into” the facility. As a result, 40% of the plant’s capacity would bypass the grid and be diverted to serve only AWS. That power would otherwise meet the energy needs of more than a half-million homes.
And if all of that weren’t concerning enough, water use is directly tied to energy use. The greater the number of data centers or the larger they become, the more water is used to cool the servers. In that process, hundreds of thousands of gallons of fresh water evaporate and are not recovered. Water is also used to power some of the turbines used to generate the electricity for the data centers, making the overall water requirements of generative AI astronomical:
A single 100-word email generated by an AI chatbot using GPT-4 requires 519 milliliters of water, a little more than one 16 oz. bottle. UNCC professor Dr. Damien P. Williams (@wolvendamien.bsky.social) adds that a bottle is expended for every prompt, not each session or image.
Microsoft is weaving AI into the background functionality of every one of its products. In that configuration, one month of running ChatGPT will consume more water than the city of Atlanta uses in a year.
To send one email per week for a year, ChatGPT uses slightly more than 7 gallons of water. That number, 7 gallons, is a multiplier. Use it to calculate the amount of water used per week for emails and responses sent by all the people at work and all the families of the children you serve (a minimal sketch of the arithmetic follows just below). Imagine how large the number gets if you try to calculate the number of emails sent by your entire community each week!
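As a concrete illustration, here is a minimal sketch of that arithmetic in Python. It uses only the 519-milliliter figure quoted above; the staff and family counts are hypothetical placeholders rather than data about any real program, and the results are rough estimates, not measurements.

```python
# A minimal sketch of the water "multiplier" described above.
# The only figure taken from the text is 519 mL of water per 100-word AI-generated
# email; the staff and family counts below are hypothetical placeholders.

ML_PER_EMAIL = 519        # milliliters of water per 100-word AI-generated email
ML_PER_GALLON = 3785.41   # milliliters in one U.S. gallon

def water_gallons(people, emails_per_person):
    """Estimate gallons of water used for a given number of AI-generated emails."""
    return people * emails_per_person * ML_PER_EMAIL / ML_PER_GALLON

# One person sending one email per week for a year: the "slightly more than 7 gallons" above.
print(water_gallons(1, 52))        # ~7.1 gallons

# Hypothetical program: 20 staff plus 120 families, 5 AI-assisted emails each per week.
print(water_gallons(20 + 120, 5))  # ~96 gallons of water per week
```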
Like electricity, water resources are limited and we’re already seeing governments choose AI chip manufacturing facilities over community needs. In one instance in Taiwan, the government chose to allocate precious water resources to a manufacturer instead of letting local farmers water their crops amid the worst drought the country had seen in more than a century. As data centers, their suppliers, and their customers increasingly control systems and jobs on which we depend, they are being given priority over homes, local businesses, and farms when water is scarce or the grid is over capacity.
These examples and statistics are the proverbial tip of the iceberg. I’ve collected additional sources on my blog. The post is intended to serve as an appendix to this article.
Though the grid is gradually becoming greener, much of the energy used by generative AI is still generated by fossil fuels. To rapidly build an AI infrastructure, tech companies are adding to the pressures that the climate crisis places on essentials like water, and then they compete with us for those ever more scarce or expensive resources. Clearly we have a significant problem, and that’s not even accounting for the environmental and societal costs of mining the metals on which our digital devices and data centers currently rely.
So, for me, until we figure out the energy and water piece of the puzzle, the decision tree stops here. Using generative AI looks a whole lot different when you think of it as a choice between fun or convenience and the prospect of not being able to charge your phone, heat or cool your home, or get the water that you or the farmers who grow your food need during a summer heatwave.
We may be able to filter out embedded prejudices, improve working conditions within the industry, or reduce error rates, but until we have a fully green power grid or a version of generative AI that requires much, much less power and water, we shouldn’t be using AI for anything but the most essential tasks. And knowing the climate costs, if we put children and families in the position of using or normalizing generative AI, we are asking them to be complicit in an unethical act.
I’m not naïve enough to think that taking a break from generative AI in early childhood education will significantly alter the skyrocketing trajectory of AI’s energy or water consumption. But public interest voices have been marginalized at the exact time when schools, communities, and government agencies most need to hear from us.
What I am suggesting is that early childhood educators bring to the conversation our unique perspective as a professional community that, at its core, prioritizes children’s wellbeing as we prepare them for their future. Enacting a moratorium is to step into this moment as leaders who remind our nations that the health of the planet is as essential to children’s future as a healthy economy or digital innovation.
Recommendations
Early childhood educators have important contributions to make to our digital future. This does not mean it is compulsory to adopt the tech industry approach of, as Mark Zuckerberg put it in 2012, “move fast and break things.”
With that in mind, and with the exception of using AI-enabled assistive technologies for children with special needs or circumstances, early childhood educators should pause use of generative AI with young children until these conditions are met:
The energy and water needs of AI tools can be accommodated without making global warming worse or endangering availability or affordability of electricity or water for individual families and communities.
Errors and discriminatory stereotypes in generative AI responses are extremely rare.
AI tools are specifically designed to be developmentally appropriate for young children and meet needs which cannot reasonably be met using non-AI tools.
I am often awed by the fantastic creativity, deep reflection, and undeniable dedication to children shown by so many educators and colleagues who have embraced AI. When society is ready, their work will be vital to preparing children for a world in which AI will be common. But society isn’t ready yet.
I’m calling for a moratorium in early childhood education not as an excuse to ignore or ban AI (which we should not do), but because pausing to reflect is a way to embrace digital change. It allows us to
Envision the ways that we can serve as leaders who will require AI companies to provide tools that respect children’s rights and meet their needs (and resist products that don’t meet that standard).
Prepare ourselves for a world where generative AI is efficient, green, and reliable.
Identify opportunities to eventually tap AI to improve learning opportunities and relationships.
That said, there is no such thing as one-size-fits-all education, and not everyone is in a circumstance where the choice about using generative AI is theirs to make. For educators in settings that include use of generative AI, or for those who want to begin to map out what their use policies might be in the future, here are some guideposts for using AI with intention:
1. Avoid frivolous or trivial use.
Using generative AI has serious environmental, economic, and ethical consequences, so don’t treat it like a toy. Rethink using AI to reap minor benefits of convenience or for educational purposes that are reasonably achievable via other means.
2. Don’t put children or the people who teach them in the position of acting as unknowing beta testers.
AI products with high error rates should be avoided altogether. Adults should know a product’s particular vulnerabilities as well as its data collection and surveillance practices before using it. And educators should think twice before devoting free labor to AI companies by inventing use cases for them.
3. Consider designating generative AI as an adult-only option.
At this moment, there is nothing AI enables that a young child needs. They don’t need to increase their efficiency, or play with an image generator, or “converse” with a hero from history while they’re still learning how to distinguish fiction from non-fiction.
4. Teach MIL (media & information literacy) and computational thinking skills.
No matter what happens with AI, the digital world demands an understanding of algorithms and the ability to analyze and evaluate media.
5. Teach children that there is currently no such thing as error-free AI, so all results must be checked for accuracy and ideology (e.g., embedded stereotypes).
Model how to analyze results every time you use a generative AI tool. Help children build media literacy observation and inquiry habits so they begin to identify mistakes and biases for themselves.
6. Help children notice the various AI tools in their environment and help them to understand that AI tools are machines, not people.
Avoid modeling human-style interactions with AI. For example, rather than start a query with a name (e.g., “Siri,” “Sora,” or “Claude”), start the sentence with the word “computer.” If you can’t train your device to recognize the word “computer” as its activation signal, say the required name after the word “computer.”
7. Teach children about the ethical issues involved in creating, providing, and using generative AI tools.
Invite children to consider whether the things we gain from an AI tool are worth the costs to families, communities, and others. If you’re not comfortable having a serious conversation about ethics and values with young children, don’t use AI with them.
8. Look to peers rather than marketers for effective ways to use AI, but be wary of any lessons or recommendations that don’t include developmentally appropriate strategies to address the ethical issues embedded in generative AI applications, including climate costs.
Mentioning ethical issues and then ignoring those issues and using AI anyway sends the message that ethics aren’t important or don’t apply to you or the children.
9. Consider your motives for using generative AI.
Ask: How did I find out about this tool? If it was from a company selling the tool (or an educator who has a relationship with a particular company), did I do a media literacy analysis of the sales pitch? Am I clear about who benefits from using this tool and who is disadvantaged or harmed by it? Is the AI filling a need I had previously identified, or is it more like an answer in search of a problem?
10. Engage adults in conversations about when, where, and why you (and they) use generative AI tools.
Share what you know about ethical issues and the power of adults as role models. Consider whether AI is an always on/always available tool, or if it is reserved for certain tasks or times.
Conclusion
The intention of this provocation is to call educators and policy makers into a dialogue of introspection about why we support or oppose using generative AI in education, how we weigh the various options, and for those who use AI, how we do so ethically.
I call for a moratorium fully aware that AI and society will continue to change at a rapid pace. It would be short-sighted to think that the stance I take today should be permanent. So, my suggestion is to temporarily opt out of using generative AI – not ignore it, ban it, or opt out of the conversation.
Like any tool, generative AI can be used in beneficial or harmful ways. The technology is becoming more efficient, and innovators are developing less destructive ways to meet our energy needs. Assuming that we’ll find our way through the worst of the ethical problems, I am very much looking forward to the day when industry has figured out climate-neutral (or climate-beneficial!) ways to meet its energy and water needs, and does so in ways that require investors and corporate executives to absorb the financial burden rather than shifting it to ever more stressed communities.
Until then, let’s reserve our use of generative AI for our highest priorities. There is no scenario in which encouraging preschoolers, kindergartners, and primary school students to play with AI qualifies as a high priority.

Author: Faith Rogow, Ph.D., author of the groundbreaking Media Literacy for Young Children: Teaching Beyond the Screen Time Debates, is a pioneering media literacy educator, curriculum developer, and strategist with over three decades of experience. She is the founder of Insighters Educational Consulting, established in 1996 to promote media literacy and critical inquiry. Dr. Rogow has trained thousands of educators, childcare professionals, media creators, and parents to effectively engage with media and foster critical thinking skills.
She was the founding president of the National Association for Media Literacy Education (NAMLE) and played a key role in developing its Core Principles of Media Literacy Education. Dr. Rogow also contributed to Ithaca College’s Project Look Sharp, the first curriculum-driven media literacy initiative in the U.S., and served on the editorial board of the Journal for Media Literacy Education.