Red Light, Green Light: Our Thoughts on the DOE’s New AI Guidelines
The Spectator's editorial piece on the DOE's new AI guidelines.
Reading Time: 4 minutes
The use of artificial intelligence (AI) in the education system has skyrocketed over the past four years among both students and teachers as chatbots have become more widespread. With this rapid rise comes a growing need for clear, strict boundaries on how these tools can be used effectively while preserving their educational benefit, a need the New York City Public Schools (NYCPS) system has recently attempted to fill.
New York City’s Department of Education (DOE) released its guidance on artificial intelligence on March 24, 2026. The guidance groups different uses of AI into distinct categories of acceptability, labeled “red light,” “yellow light,” and “green light,” and outlines how teachers can use AI tools. Under the new policy, red light uses, such as decisions about student grades, placement in classes, or student counseling, are not allowed, as they pose the “highest risk to students, families, and the fairness of our school system,” according to the DOE. Yellow light uses are allowed only with caution and oversight, meaning that “trained professionals” are required to approve each use of AI. It is unclear whether teachers are considered “trained professionals,” or whether the new policy will require additional training in responsible use of AI. The NYCPS’s framing of the “yellow light” category leaves open the question of who is responsible for overseeing new applications of AI, and how those responsible will be trained. Finally, green light uses are encouraged in order to support staff in optimizing their work; for example, brainstorming lesson plans or translating materials for the classroom are considered beneficial ways to use AI. Across all three categories, AI is also forbidden from accessing any student data.
A key guideline in the “green light” category allows teachers to use AI for “brainstorming and organizing,” which includes “[using] AI to explore lesson ideas, approaches, and unit planning, aligned with intellectual property guidance.” Though using AI to generate classroom materials may save teachers time, allowing this use of artificial intelligence risks producing lesson plans that do not fit the students, whom only their teachers know best. The true value of teachers lies not in simply presenting or regurgitating content, but in identifying students’ strengths and weaknesses and approaching concepts from multiple angles to accommodate different learning styles.
However, there are uses of artificial intelligence that can complement a teacher’s work in the classroom. For example, educators may use AI to generate practice problems or testing material, so long as they check for errors, align the content with their curriculum, and are fully able to explain the concepts or details behind those questions. The DOE’s own framing suggests this same reasoning: AI should supplement learning in the classroom rather than replace it.
Though there are beneficial uses of AI, it is inevitable that some teachers will misuse it. For example, teachers can use artificial intelligence to grade students’ work, an action prohibited by the NYCPS guidelines. Yet these are merely guidelines, meaning that there is no plan to enforce these suggestions.
This lack of clarity and decisiveness leaves doubt about how seriously the DOE takes accountability. If there are no defined consequences for teachers who use AI inappropriately, and the DOE has no clear regulatory plan, the policy lacks the basic mechanisms needed for it to function. Rules operating on a scale as large as the New York City public school system will only successfully influence conduct when they create clear-cut incentives and disincentives. Without that structure, there is no meaningful deterrent to misuse of AI, which means these policies will make no tangible mark on the issues we’re seeing in our education system.
The policy also does not address transparency from educators. If a teacher is using artificial intelligence tools in the classroom, students and parents have a right to know. Using AI does not automatically disqualify educators, but students should be aware of how their teachers are using AI in their classrooms.
It seems the NYC Department of Education’s priority is teacher AI use, as these guidelines include no regulations for students, whose use of AI is typically more controversial. Students at schools like Stuyvesant already struggle with the DOE’s lack of oversight on AI use. There have been cases of students accused of using AI who must defend themselves against AI detection tools with a proven track record of being neither accurate nor reliable, and who are ultimately subject to the judgment of individual teachers whose decisions can carry real weight on a student’s academic transcript and future. Without institutional standards to fall back on, teachers are left to make decisions that do not offer students a fair process. This is not to say that students never use AI on their schoolwork; they do. However, the best way to fix this issue is to create one set of standards that applies precisely to both students and teachers. Both parties must work together under a single fair system outlined by the city, with less room for misinterpretation, allowing for a school environment where educators and students can be more at ease with AI usage and with each other.
