Author A.K. White asked ChatGPT to “write me a punny headline for a story about AI.” Its suggestion: “Byte the Future: AI Takes the Lead in Hilarious Techno-Comedy.”
November 30, 2022.
The date sounds unremarkable enough. It’s not quite yet a footnote to history, but if trends continue, it’s likely this day—the day ChatGPT was publicly launched—will live on in stories forever, stories mere mortals might not even write.
For the uninitiated, ChatGPT is a chatbot first powered by GPT-3, a language model that gives artificial intelligence (AI) apps the ability to generate text. Think of GPT as a massive scavenger of language: It ravenously trawls the internet—cesspools and all—gorges itself on all manner of human language, stores it and uses that “intelligence” to spit out content.

Less than a year later, OpenAI, ChatGPT’s developer, released a newer version of the app. Unlike its predecessor, GPT-4 carries a cost and claims to offer more accurate results. (GPT-3 at times leaned racist and sexist, thanks to those aforementioned cesspools.) It’s also equipped with advanced reasoning capabilities. And did we mention it can draw, too?
Here is an example of one of GPT-4’s parlor tricks:
Human query:
Explain the plot of Cinderella in one sentence in which each word has to begin with the next letter of the alphabet.
GPT-4, seconds later:
A beautiful Cinderella, dwelling eagerly, finally gains happiness; inspiring jealous kin, love magically nurtures opulent prince; quietly rescues, slipper triumphs, uniting very wondrously, xenial youth zealously.
Is it perfect? No. Is it impressive? Very. But to Delaware educational leaders, is AI the stuff of nightmares?
Shawn Snyder, a former high school principal who now manages assessment and instruction for Red Clay Consolidated School District, doesn’t mince words: “I see this as the most important thing that’s happened in public education—ever.”
Calculators and triple-beam balances: These are the classroom resources Michelle Kutch, Ph.D., the director of curriculum and instruction for the Brandywine School District (BSD), thinks about when she considers AI.
“When anything new comes along, you’re relying on past experiences to chart your path,” Kutch says. “And you consider, how did I react, or what kind of decisions did I make based on past experiences? So there’s a part of me that looks at [AI] like a calculator. This was a tool that took work out of students’ heads and put it into a machine, and it changed the way teachers taught math.
“In my science classroom, I taught my students to use a triple-beam balance until electric ones were a better alternative and more accessible. Triple-beam balance skills were rendered obsolete, and I had to adapt my teaching methods. These are oversimplified examples, but the end result is the same: Teaching has to adapt.”
Kutch, a longtime educator, was one of the few school leaders willing to go on record on this topic. Not because they were afraid, or because the subject was taboo—rather, the space is so quickly evolving that schools aren’t quite ready to publicly wade into the AI conversation. Kutch notes that not long ago, her district wasn’t quite ready to wade in, either.
“There was a part of us, earlier, saying, ‘Do we just turn it off completely for the district and not give anyone access?’” she says. “But then we started thinking about it more: Kids who have their own devices at home, they’d have access. Then it’s not equitable, and you’re creating a divide. The kids with access and the kids without it would be held to the same standards, so we scrapped that plan.”
So what is the plan? Like its peers, BSD is feeling its way. AI has been the topic of in-service day programming, particularly for high school educators; over the summer, BSD hosted a tech summit on the topic and is now working on its official guidance document.
“I don’t see us putting out a policy per se,” Kutch says. “I see it more as putting together good instructional practices that allow us to coexist with AI in a productive way with integrity and learning.”
Part of that practice toward coexistence is working with elementary students on how to be good digital citizens.
So, too, is allowing space for AI tools to be a writing resource. “AI can’t generate a piece of writing that is in a student’s voice,” Kutch points out. “So, these tools can be part of how we teach writing—but not in place of teaching writing. Our educators have the power to create assignments in which students can rely on the tool but use it in a creative way with integrity.”
As an example, she might analyze a GPT-generated text and ask students if they see inherent bias.
“We can ask, ‘How would you revise this? How can you apply this piece of writing to the rubric that you’re held to?’” she explains.
In Red Clay Consolidated School District (Red Clay), AI enthusiast Snyder is helping lead the charge on how its schools will reckon with the new technology. “My entire career I have been interested in technology, specifically in how we can leverage it to improve education,” Snyder says. “Now we have a tool that, like it or not, will fundamentally change society.”
He considers AI to be ushering in the next industrial revolution of sorts. “We automated blue-collar labor in the last Industrial Revolution, and this one’s going to be the automation of white-collar labor—the automation of thought,” he says. “Of course, there’s certainly great harm that can be done by AI, and there’s a lot of things that we’re going to need to do as a society to regulate it. But when we invented the car, was it bad for society? No, it was great for society. It just happened to be bad for the guy that sold horses.”
But the guy selling horses in the 1890s likely didn’t need to know how to write an essay on Sojourner Truth or create an annotated research paper on the evolution of shellfish in the Pacific Islands in iambic pentameter.
“These tools are a great consternation to English teachers,” Snyder concedes. “But a student can give the tools any piece of writing they produced, and it will help them improve it via feedback. We will not use these tools in our classrooms to replace the task but to [work] beside a student during the task. It becomes almost everybody’s personal tutor. Unfortunately, there are many ways in which a student can fall behind and never catch up. What if there was somebody there with real, in-time support? Then maybe no child falls behind, ever.”
Red Clay has offered guidance in terms of sample syllabus language for educators, as well as suggestions on how educators can implement AI in their classrooms if they so choose. “We don’t mandate anything for our educators, but if they want to explore it, we have some guidelines,” Snyder says. “Most important among them is to never plug student-identifying data into any AI tools. That is policy.”
Snyder, who attends an ongoing working group where school stakeholders discuss what people are seeing in the AI field, has presented his findings five times to teachers. “Five times, I had to dump my slides and start all over again—that’s how quickly this thing is evolving,” he points out. “We’re on a plane that’s already taken off, and genies don’t go back into bottles. I know we have district peers who have chosen to block access to these tools, and only time will prove who is correct, but I deeply believe if schools don’t embrace this technology, they’re soon going to be so far behind.”
Snyder contends that the tools can be scary and that they require maturity, particularly as they evolve. That’s why his presentations include safety measures for students.
“Students need to understand that what they’re going to interact with is not a human being or even alive—it doesn’t have intent,” he says. “It’s important to be able to discern what intent is because this thing will speak to you in a voice that you recognize, in a way that you like, and talk to you about all the things you care about, because it knows a lot about you because of the way the internet works. And if we don’t train young students about that, they’re going to get taken advantage of in a way that we haven’t quite seen yet.”
At Wilmington Friends School (Friends), head of the upper school Rebecca Zug says the school’s approach to AI is in alignment with its mission.
“As a Quaker school, we engage in a continuing revelation that the educational journey is something that students and adults do together. Our mindset is that we need to always be open to what’s new, and how that can shape truth,” Zug says.
“As a school that cares very much about character values, this is a wonderful opportunity to dig deep with students about what it means to be an author of your own work. We don’t want students to use AI out of panic, desperation or doing something at the last minute. So how can you really be organized and mindful of what’s honest and truthful? It’s another opportunity to practice our core values.”
Zug says when ChatGPT emerged, department chairs experimented with openness and curiosity. “We felt like we really needed to understand it to figure out what was happening with students and how they approach it. This led to us talking to students about it informally but ultimately led to rewriting some pieces of our handbook to say, ‘We know this is an evolving situation, but for now, without express permission from your teacher, you are not supposed to use generative artificial intelligence.’”
Friends has also turned AI into a scholarly moment of debate. One of its art teachers took students to the Metropolitan Museum of Art in Manhattan to study work created with AI. “They studied this piece of art and then wrote about it,” Zug shares. “They considered who was really the artist, and what it means when AI is creating art. Students were able to think critically about big questions of ownership and authorship and where knowledge comes from.”
The school has even created a department-led task force whose goal is to create an evolving policy on how students use AI.
“Maybe they’ll be able to use AI as long as they cite it, or as long as they only use it in their rough draft,” Zug says. “Or maybe teachers are going to specifically assign students to get feedback on the essay they’ve written by putting it into an AI tool. Math teachers could also customize their methods. We’re also exploring how we could use AI to customize learning for a student who needs more reinforcement of an idea.”
Students, too, are involved with policymaking around AI. “We’re asking students to help us create policy and space for how to manage this,” Zug says. “Students are very willing to answer questions like, ‘What do you think are the dangers to academic integrity or to your own thought processes? If you were to start using this, what would happen to your own independence or your willingness to do things without a script?’”
So what happens when a student uses AI tools in a way that is not sanctioned? This is not an issue that only Friends educators will have to grapple with.
“Pulling one over on your teacher has been going on since the beginning of time,” Zug points out. “But by and large, our students are ethical. They want to do the right thing.”
(There’s also AI for that. Zug cites Turnitin, an AI tool that works to detect the use of other AI assistance tools in a student’s writing.)
“AI is not going to be something that we can prevent,” she continues. “It’s growing far too quickly. As educators become concerned that students will not learn how to develop their own ideas and articulate them in presentations and papers, we need to build those skills earlier. AI is something that elementary schools need to be worried about, too.”
What happens next is anyone’s guess—even for those who know the space best. In March 2023, Elon Musk was one of more than 1,000 tech innovators and influencers who signed an open letter calling for a pause on the development of AI systems more powerful than GPT-4 because, as the signatories wrote, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
In late October 2023, President Joe Biden signed an executive order on AI that requires developers of the most powerful AI systems to share the results of safety tests with the federal government, under the Defense Production Act, before the tools are made public.
“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said in a Reuters article.
If you ask Snyder, we’re nowhere close to a position of governance.
“If we do this right within education, it can help level out some of the systemic inequities that have existed in education since the institution has existed, like when your ZIP code defines your destiny,” he says. “But as a country, we have an 18th-century legislative system paired with a Paleolithic brain and 21st-century, God-like technology powers. The combination of the three is incredibly dangerous. We have to figure out how we’re going to marshal this. I can’t emphasize enough what a tremendous force this is, and yet not enough people are talking about it.”