
My Biggest Mistake

by Zoe Tzanis

22 October 2025

I have had several successes employing AI. I have also had a number of failures. Probably more has gone wrong than has gone right. But through each of these interactions, I find something instructional. I learn what I can use again. And what I cannot.

What works for me? Generating lists of random names to help anonymize my research papers, summarizing long PDFs I cannot read in their entirety, and pulling the most important to-dos and need-to-knows out of long-winded emails. What doesn’t work? A lot. If a task requires accuracy, if I want to do it right, I don’t trust ChatGPT.

Here is the story of my biggest mistake. 

Initially, I thought AI would be a fantastic tool to transform my class notes and written thoughts into something I could revisit digitally. When I’m in class, sometimes I like to put my devices away. I like to lean into the conversation and away from the distraction of my laptop. In these cases, I write out my notes and ideas in a notebook. Old school, right? After class, I’d ask ChatGPT to turn my notes into text, then transfer that into a Google Doc for easy access when writing seminar papers. This process generally felt useful and ethical, helping me engage more deeply in class and stay organized. At least at first. 

Because I found this usage so helpful for both my in-class immersion and later ability to draw from class materials to produce papers, I thought I could extend it to other purposes. Next, I tried doing some of my actual writing in a notebook, using the same picture-to-text ask, and then moving that text into my paper. I would do this in short bursts. 30 minutes of writing. Turn it into text with AI. Come back to it later.

Here’s where the problems really started. 

Reading over my written bursts days later, I began to notice subtleties that didn’t make sense. I wouldn’t use that word, would I? I wouldn’t use that many em dashes, would I? Suddenly, I began to realize ChatGPT was not simply translating my written word to text. ChatGPT was editorializing my own words, re-packaging them, and spitting out similar but slightly askew entries that reflected my intention but were not wholly mine. 

This realization was baffling. I was upset. Angry at myself. Angry at chat. Confused about what had gone wrong. More than anything, I felt incredibly uncomfortable with the fact that what I had written, and what I thought to be mine, was no longer mine. It was difficult to work with the text I had; it just felt wrong. I nixed it all and started over.

Big mistake. Big lesson. Do not put your direct writing—or at least large sections of it—straight into chat. You are likely to lose yourself in what it spits back out at you. And, even the simple functions AI should do well, turning written words into digital text, for instance, must be checked and rechecked for accuracy. Always.