Imagine if James Madison spoke to a social studies class about drafting the U.S. Constitution. Or students studying Shakespeare asked MacBeth if he’d thought through the consequences of murder. What if a science class could learn about migratory birds by interviewing a flock of Canadian geese?

Artificial intelligence persona chatbots—like the ones emerging on platforms like Character.ai—can make those extraordinary conversations possible, at least technically.

But there’s a big catch: Many of the tools spit out inaccuracies right alongside verifiable facts, feature significant biases, and appear hostile or downright creepy in some cases, educators and experts who have examined the tools point out.

Pam Amendola, a tech enthusiast and English teacher at Dawson County High School in Dawsonville, Ga., sees big potential for these tools. But for now, she’s being cautious about how she uses them in her classroom.

“In theory, it’s kind of cool, but I don’t have any confidence in thinking that it’s going to provide students with real time, factual information,” Amendola said.

Similarly, Micah Miner, the director of instructional technology for the Maywood-Melrose Park-Broadview School District 89 near Chicago, worries the bots could reflect the biases of their creators.

A James Madison chatbot programmed by a left-leaning Democrat could give radically different answers to students’ questions about the Constitution than one created by a conservative Republican, for instance.

“In social studies, that’s very much a scary place,” he said. “Things evolve quickly, but in its current form, no, this would not be something that I would encourage” teachers to use.

Miner added one big exception: He sees great potential in persona bots if the lesson is exploring how AI itself works.

Image of speech bubbles interacting.

Laura Baker/EdWeek via Canva

‘Remember: Everything characters say is made up!’

Persona bots have gotten more attention, thanks to the growing popularity of character.ai, a platform that debuted as a beta website last fall. An app that anyone can use was released late last month.

Its bots are powered by so-called large language models, the same technology behind ChatGPT, an AI writing tool that can spit out a term paper, haiku, or legal brief that sounds remarkably like something a human would compose. Like ChatGPT, the bots are trained using data available on the internet. That allows them to take on the voice, expressions, and knowledge of the character they represent.

But just as ChatGPT makes plenty of errors, character.ai’s bots should not be considered a reliable representation of what a particular person—living, deceased, or fictional—would say or do. The platform itself makes that crystal clear, peppering its site with warnings like “Remember: Everything characters say is made up!”

There’s good reason for that disclaimer. I interviewed one of Character.ai’s Barack Obama chatbots about the former president’s K-12 education record, an area I closely covered for Education Week. Bot Obama got the basics right: Was Arne Duncan a good choice for education secretary? Yes. Do you support vouchers? No.

But the AI tool stumbled over questions about the Common Core state standards initiative, calling its implementation “botched. … Common Core math was overly abstract and complex,” the Obama Bot said. “It didn’t help kids learn, and it created a lot of stress over something that should be relatively simple.” That’s a view expressed all over the internet, but it doesn’t reflect anything the real Obama said.

The platform also allows users— including K-12 students—to create their own chatbots, also powered by large language models. And it offers AI bot assistants that can help users prepare for job interviews, think through a decision, write a story, practice a new language, and more.

‘These AI models are like improv actors’

Learning by interviewing someone in character isn’t a new idea, as anyone who has ever visited a site like Colonial Williamsburg in Virginia knows, said Michael Littman, a professor of computer science at Brown University. Actors there adopt characters— blacksmith, farmer—to field questions about daily life in the 18th century, just as an AI bot might do with someone like Obama.

Actors might get their facts wrong too, but they understand that they are supposed to be part of an educational experience. That’s obviously not something an AI bot can comprehend, Littman explained.

If a tourist tries to deliberately trip up an actor, they’ll typically try to deflect the question in character because “human beings know the limits of their knowledge,” Littman said. “These AI models are like improv actors. They just say ‘Yes and’ to almost everything. And so, if you’re like, ‘Hey, do you remember that time in Colonial Williamsburg when the aliens landed?’ The bot is, like, ‘yeah, that was really scary! We had to put down our butter churns!’”

In fact, it’s possible for hackers to knock a persona chatbot off its game in a way that overrides safeguards put in by its developer, said Narmeen Makhani, the executive director of AI and product engineering at the Educational Testing Service.

Bot creators often put in special conditions into a persona bot that keep it from using swear words or acting hostile. But users with “malicious intent and enough tech knowledge” can erase those special conditions, just by asking the right questions, turning a friendly and helpful AI representation of a historical figure or fictional character into a tool that’s no longer suitable for students, Makhani said.

Educators considering using AI-powered persona bots in their classrooms should “make sure they know who has built the tools and what sort of principles and ethics they have in place,” Makhani added. They may be best off choosing “developers that are specifically focused on educational content for a young age group,” she said.

One prominent, early example: Khanmigo, an AI guide created by Khan Academy, a nonprofit education technology organization. Students can ask Khanmigo for help in understanding assignments. But they can also ask it to take on a particular persona, even a fictional one, Kristen DiCerbo, the chief learning officer at Khan Academy said during the Education Week Leadership Symposium last month.

For instance, a student reading The Great Gatsby by F. Scott Fitzgerald, a mainstay of high school English classes, might be curious about the symbolism behind the green light at the end of Daisy Buchanan’s dock and could ask Khanmigo to pretend it is the central character in the story, Jay Gatsby. The bot will address their questions about the green light, 1920’s slang and all.

Here’s Khanmigo as Gatsby talking about the meaning of the green light: “It’s a symbol of my dreams and aspirations,” the tool said, according to DiCerbo. “The green light represents my longing for Daisy, the love of my life, my desire to be reunited with her, and it symbolizes the American dream in the pursuit of wealth, status, and happiness. Now, tell me, sport: Have you ever had a dream or a goal that seemed just out of reach?”

Any English teacher would likely recognize that as a common analysis of the novel, though Amendola said she wouldn’t give her students the, uh, green light, to use the tool that way.

“I don’t want a kid to tell me what Khanmigo said,” Amendola said. “I want the kids to say, ‘you know, that green light could have some symbolism. It could mean ‘go.’ It could mean ‘it’s OK.’ It could mean ‘I feel envy.’”

Having students come up with their own analysis is part of the “journey towards becoming a critical thinker,” she said.

‘Game changer as far as engagement goes’

But Amendola sees plenty of other potential uses for persona bots. She would love to find one that could help students better understand life in the Puritan colony of Massachusetts, the setting of Arthur Miller’s play The Crucible. A historian, one of the characters, or AI Bot Miller could walk students through elements like the restrictions that society placed on women.

That kind of tech could be a “game changer as far as engagement goes,” she said. It could “prepare them properly to jump back into that 1600s mindset, set the groundwork for them to understand why people did what they did in that particular story.”

Littman isn’t sure how long it could take before Amendola and other teachers could bring persona bots into their classrooms that would be able to handle questions more like a human impersonator well versed in the subject. An Arthur Miller bot, for example, would have to be vetted by experts on the playwright’s work, developers, and educators. It could be a long and expensive process, at least with AI as it exists today, Littman said.

In the meantime, Amendola has already found ways to link teaching about AI bots to more traditional language arts content like grammar and parts of speech.

Chatbots, she tells her students, are everywhere, including acting as customer service agents on many company websites. Persona AI “is just a chatbot on steroids,” she said. “It’s going to provide you with preprogrammed information. It’s going to pick the likeliest answer to whatever question that you might have.”

Once students have that background understanding, she can go “one level deeper,” exploring how a large language model is built and how bots construct responses one word at a time. That “ties in directly with sentence structure, right?” Amendola said. “What are nouns, adjectives, pronouns, and why do we have to put them together syntactically to make proper grammar?”

Image of a digital handshake.

Laura Baker/EdWeek via Canva

‘That’s not a real person’

Kaywin Cottle, who teaches an AI course at Burley Junior High in Burley, Idaho, was introduced to Character.ai earlier this school year by her students. She even set out to create an AI-powered version of herself that could help students with assignments. Cottle, who is nearing retirement, believes she found an instance of the site’s bias when she struggled to find an avatar that looked close to her age.

Her students have created their own chatbots, in a variety of personas, using them for homework help, or questioning them about the latest middle school gossip or teen drama. One even asked how to tell a good friend who is moving out-of-town that she would be missed.

Cottle plans to introduce the tool in class next school year, primarily to help her students grasp just how fast AI is evolving and how fallible it can be. Understanding that the chatbot often spits out wrong information will just be part of the lesson.

“I know there’s mistakes,” she said. “There’s a big disclaimer across the top [of the platform] that says this is all fictional. And I think my students need to better understand that part of it. I will say, ‘you guys, I want to make clear right here: This is fictional. That’s not a real person.’”

End

Leave a Reply

Your email address will not be published. Required fields are marked *