Does Knowledge Have a Future?

by Michael Itkin, PhD Student in Film Studies and Slavic Languages & Literatures

20 November 2025

As a researcher in the humanities, I sometimes compare myself to intellectual celebrities whom we continue to cite in our papers and presentations. Will I ever be as smart as Roland Barthes or Michel Foucault, who were able to produce such influential writings and engage with so many sources without even having a Ctrl+F function? What if I’m just a mediocre young intellectual with questionable academic achievements who got into graduate school by mistake? And, most importantly, what could be my role as a producer of knowledge in a world dominated by ChatGPT? 

AI is rapidly becoming our trusted life companion, a hybrid of Donna Haraway’s posthuman perspectives on cyborgs and dogs. It can assist humans not only with cooking and answering basic encyclopedic questions, but also with understanding emotions, developing ideas, and even mapping out futures based on prompts widely shared on social media. As a free, accessible, optimistic, and tireless educator, AI is beginning to undermine our own capacities for teaching and research. By prioritizing the speed with which information is generated over its depth, it puts at risk more traditional sources of knowledge, including academia and higher education. After all, why would anyone spend money and years of study, burning neurons to earn a degree, when there are now stories of people landing jobs with the help of AI-generated prompts?

The breaking down of human knowledge into closed-ended summaries, as illustrated by Google’s recent update to our searches, comes as a natural result of our oversaturation with the information the Internet offers. What was once seen as a vast database of infinite knowledge is now becoming harder to process and navigate. In a time of proliferating content, on TikTok as much as in academia, where novelty is becoming increasingly difficult to justify, it is tempting to turn to a single source that can explain everything. ChatGPT is becoming a “companion” to academics as well, helping them retain fleeting ideas, offering unusual perspectives, and sometimes even producing ready-to-go digital products.

Meanwhile, what AI is still not fully capable of is critical thinking, precisely what universities are best at nurturing in students. In retrospect, the most enlightening moments of my higher education always involved the ability to question and converse with academic authorities, apply contemporary concepts to works of the past, and propose unconventional interpretations of life’s phenomena. Such a dialectical approach, one that perpetuates the search for truth rather than simply settling on a statement, is often what academia is criticized for, yet it is also what makes it truly liberal and democratic. In the end, universities produce politically empowered subjects who, to use Foucault’s vocabulary, engage in their own discursive practices, discover truths about the world, and offer alternatives to dominant discourse.

Perhaps our calling as researchers and teachers is precisely this: to defend the value of the question over that of the answer. While AI tools can certainly alleviate our workload and optimize our ways of searching and learning, they cannot account for the margin of doubt, that necessary critical distance between us and our object of inquiry which makes research possible. As we embrace the new discursive regime centered on generative AI, let us not stop at the answers it provides, but question them, just as we question everything produced within the media environments of the post-truth era. Even though writing and reading are likely to undergo radical transformations in the age of AI, we must not fully succumb to its totalizing influence; we must keep within ourselves the power to question and to shape solutions. The universe is vast and still hides too many dark corners for us to stop exploring it.

References

Lee, Hao-Ping (Hank), et al. “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers.” Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI ’25), Association for Computing Machinery, 2025, Article 1121, pp. 1–22, https://doi.org/10.1145/3706598.3713778.

Rockmore, Dan. “What It’s Like to Brainstorm with a Bot.” The New Yorker, 9 August 2025, https://www.newyorker.com/culture/the-weekend-essay/what-its-like-to-brainstorm-with-a-bot. 

Rothman, Joshua. “What’s Happening to Reading?” The New Yorker, 17 June 2025, https://www.newyorker.com/culture/open-questions/whats-happening-to-reading.