AI in Research: A Helpful Partner, not a Magic Wand

by Athiana Tettaravou

20 November 2025

Have you ever wished for an assistant who could instantly debug your code, help you quickly grasp the core idea of a paper, or catch mistakes in your writing? That’s exactly how I felt when I first tried AI tools in my research. As a graduate student in economics, I spend much of my time coding, writing, and peer reviewing. Suddenly, there was a tool that could speed up routine coding tasks, suggest new ways of presenting results, and even highlight errors I might have overlooked. It felt like a productivity boost right at my fingertips. 

You’ve probably had the same reaction: a mix of surprise and excitement at how quickly these tools can make academic work just a little easier. The sense of relief is real, not because AI is doing research for us, but because it helps with the small tasks that often get in the way of deeper thinking. 

But then another question often creeps in: is this too much? Will AI replace the work we do as scholars? Here, the answer is reassuringly clear: no. AI can accelerate the mechanical parts of research, but it cannot replace the intellectual judgment we cultivate. It can give polished, confident answers, yet sometimes those answers are wrong. That is exactly why AI is not a substitute, but rather a complement. Gains always come with boundaries. 

Research supports this view. In The Simple Macroeconomics of AI, economist Daron Acemoglu estimates that advances in AI may contribute no more than a 0.66% increase in total factor productivity over the next decade. For economists, that is meaningful, but it is far from a revolution. This duality of modest macro effects and tangible micro benefits shapes how I view AI in academia. For individual researchers, small boosts in efficiency can be transformative. At the societal level, however, these improvements accumulate more slowly, especially since AI can struggle with complex, context-dependent work.

Furthermore, productivity alone does not guarantee shared prosperity. Automation has the potential to widen the gap between capital and labor income. I see a softer parallel in graduate education: AI should be treated as a common resource, widely accessible and shared, not a new source of inequality. 

Perhaps the healthiest way to think about AI is as a companion: powerful enough to ease our workload, but not meant to replace our scholarship. In academia, one challenge is to ensure that AI reduces, rather than deepens, inequalities in education and research access. If we treat it as a common good, accessible to students, teachers, and researchers alike, it can strengthen the work of human creativity. That is the true promise: a supportive partner, not an overshadowing rival, helping us push the boundaries of knowledge while keeping the discipline open and fair.

References: 

Acemoglu, Daron. The Simple Macroeconomics of AI. NBER Working Paper No. 32487, National Bureau of Economic Research, May 2024. https://doi.org/10.3386/w32487