
Advice to Help Schools and Teachers Survive AI

In November of last year, when ChatGPT was launched, many schools felt as if they had been hit by an asteroid. In the middle of an academic year, with no prior warning, teachers were forced to confront this new, alien-looking technology that allowed students to write university-level essays, solve complex problems, and ace standardized exams. Some schools responded — foolishly, I argued at the time — by banning ChatGPT and similar tools. But those restrictions didn’t work, partly because students could use the tools on their cell phones and computers at home. And as the school year progressed, many of the schools that restricted the use of generative artificial intelligence — such as ChatGPT, Bing, Bard, and other tools — quietly lifted their bans.

As we approach this school year, I have spoken with many elementary and secondary school teachers, school administrators, and university faculty members about their thoughts on AI today. There is a lot of confusion and panic, but also a lot of curiosity and excitement. Above all, educators want to know: how can we use this technology to help students learn, instead of trying to catch them cheating?

I am a technology columnist, not a teacher, and I don’t have all the answers, especially when it comes to the long-term effects of AI in education. But I can offer some basic short-term advice for schools trying to figure out how to handle generative AI this semester.

Firstly, I encourage teachers — particularly in high schools and universities — to assume that one hundred percent of their students are using ChatGPT and other generative AI tools for every assignment, in every subject, unless they are being supervised within a school building.

While this won’t be completely true in most schools — some students may have ethical reservations about using AI, find it unhelpful for their specific tasks, lack access to the tools, or fear being caught — the assumption that everyone is using AI outside of class may be closer to reality than many teachers believe. (“You have no idea how much we use ChatGPT,” read the headline of a recent essay by a Columbia University student in the Chronicle of Higher Education.) And it is a useful shortcut for teachers trying to figure out how to adapt their teaching methods. Why assign a take-home exam or a paper on Jane Eyre if everyone in the class — except perhaps the strictest rule followers — will use AI to complete it? If ChatGPT is as ubiquitous among your students as Instagram and Snapchat, why not switch to supervised exams, long-form in-class essays, and group work?

Secondly, schools should stop relying on AI-detection programs to catch cheaters. There are dozens of these tools on the market now, all claiming to detect AI-generated writing, and none of them works reliably. They produce many false positives and are easily fooled by techniques like paraphrasing. Don’t believe me? Ask OpenAI, the maker of ChatGPT, which this year shut down its AI writing detector due to its “low rate of accuracy.”

In the future, AI companies may be able to label their model outputs to make them easier to detect — a practice known as “watermarking” — or better AI detection tools may emerge. But for now, most AI text should be considered undetectable, and schools should invest their time (and their tech budgets) elsewhere.

My third piece of advice — and the one that may get me angry emails — is for teachers to focus less on warning students about the flaws of generative AI and more on finding out what this technology does well. Last year, many schools tried to scare students by telling them that tools like ChatGPT were unreliable and often gave nonsensical answers and produced generic prose. These criticisms, while true for early AI chatbots, aren’t as true for the upgraded models, and smart students are figuring out how to get better results by giving the models more sophisticated instructions.

As a result, students at many schools are ahead of their instructors when it comes to understanding what generative AI can do if used correctly. And the warnings about flawed AI systems issued last year may ring hollow this year, now that GPT-4 is capable of getting passing grades at Harvard.

Alex Kotran, CEO of the AI Education Project, a nonprofit organization that helps schools adopt AI, told me that teachers need to spend time using generative AI to appreciate how useful it can be and how quickly it is improving.

“For most people, ChatGPT is still a trick,” he said. “If you don’t really appreciate how deep this tool is, you’re not going to take all the other steps that are going to be necessary.”

There are resources for educators who want to quickly catch up on AI. Kotran’s organization, as well as the International Society for Technology in Education, offers teachers a series of AI-focused lesson plans. Some teachers have also started compiling recommendations for their colleagues, such as a website created by educators at Gettysburg College that offers practical advice on generative AI.

However, in my experience, nothing replaces hands-on experience. That’s why I advise teachers to start experimenting with ChatGPT and other generative AI tools themselves, with the goal of learning the technology as well as many of their students already have.

My final advice for schools feeling bewildered by generative AI is this: treat this year — the first full academic year of the post-ChatGPT era — as a learning experience and don’t expect to get everything right.

There are many ways AI could contribute to reshaping classrooms. Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, believes that the technology will lead more teachers to adopt a “flipped classroom” model (where students learn the material outside of class and practice it in class), which has the advantage of being more resistant to AI cheating. Other teachers I spoke to said they were experimenting with turning generative AI into a collaborator in the classroom or a way for students to practice their skills at home with the help of a personalized AI tutor.

Some of these experiments will fail. Some will succeed. That’s okay. We are still adapting to this new and strange technology in our hands, and occasional missteps should be expected.

However, students need guidance when it comes to generative AI, and schools that treat it as a passing fad — or an enemy to be defeated — will miss the opportunity to help them.

“Many things will be upended,” Mollick said. “And that’s why we need to decide what we do, rather than retreating from AI.”

Kevin Roose is a technology columnist and author of “Futureproof: 9 Rules for Humans in the Age of Automation.”