by Elise Silva, PhD, Director of Policy Research at Pitt Cyber
25 August 2025
I'm part of a research group across Pitt's main and regional campuses that's trying to understand Pitt students' attitudes toward AI. What makes our research unique compared to previous studies is its focus and methods. Instead of a large-scale survey, our research was up-close and personal. We sat down and talked with 95 undergraduate students to get to know them and learn how generative AI is affecting their educational experiences at Pitt. We focused on their feelings, relationships, and perceptions of the value of a university education in the AI age.
In a recent piece for The Conversation, I summarized some of the findings from this study, which show how AI is changing university experiences. Students described increased anxiety, distrust, and avoidance in their educational relationships in the AI era, both vertically (teacher to student) and horizontally (student to student). These feelings were expressed by all kinds of students, from AI enthusiasts to those who were more reluctant to use AI.
Such sentiments stem from many shifting realities. Students feel confused about unclear AI policies and experience discomfort about how they perceive their peers to be using AI. They also exhibit stress regarding being seen as potential cheaters rather than learners who are navigating a world with increasing pressure to adopt AI at every turn.
In light of these quickly shifting dynamics, I offer a few suggestions for how university instructors might address the more straining effects of AI on classroom relationships and begin rebuilding trust in learning environments:
- Encourage students to meet with you. We found that students who are feeling stuck are more likely to turn to ChatGPT and other chatbots than to consult a human expert. Making connections, networking, and learning in community are some of the unique things a university offers. Lean into what you can offer as a human—your time and attention. If students are avoiding interactions with their professors because those interactions feel difficult, uncomfortable, or time-consuming, it might be time to encourage them to experience that discomfort and learn from it rather than avoid it.
- Articulate the value of the work you assign. Students often turned to AI when they felt that the tasks they were asked to do were repetitive or that the assignment or class was irrelevant to them. For anyone who has ever taught a general education class, a lack of student buy-in isn't a new phenomenon, but it is one that is "supercharged" by AI. If students understand the "why" in addition to the "what," they might be more willing to lean on themselves rather than a machine to complete what they would otherwise perceive to be "busy work."
- Talk about AI early and often. Some students described seeing a standardized AI use policy (most often one that forbids usage entirely) on the syllabus on the first day of class and then little substantive mention of it ever again. Many also thought that their knowledge and understanding of AI surpassed that of their professors. Talking with students about AI throughout the span of a course might help them understand the limits of their understanding of the technology and foster opportunities to co-create learning experiences that meet students where they are. That might still mean consciously avoiding AI in your class, or it might mean embracing it to varying extents. Ultimately, the conversation surrounding your use policy is as important as the policy itself because it clarifies your learning goals. It might also help you articulate your own philosophy on AI in a way that is translatable to students.
- Be aware of the uniquely difficult position students are in and try to see things from their perspective. Teaching right now is hard, but so is being a student. When you're assigning group work, for instance, realize that the presence of AI can strain peer learning experiences. Students may be worried about a group member using AI and may not know how to respond—whether to turn them in or try to work it out one-on-one. Either way, it's awkward for them when their usage differs from that of their classmates and puts them in a socio-ethical bind that pits social capital against academic integrity. Or consider how AI is increasingly being added to their already established workflows. If you have a blanket ban on your syllabus for all AI usage, what are students to think when they open a Google search and the first thing they see is an AI Overview? Does that count as using AI or not? Students are asked to opt out of using AI by professors but aren't finding easy opt-out buttons in their daily lives.
I don't wish to undermine the very real concerns about academic integrity and student learning that institutions like Pitt are facing. AI has exacerbated many problems that already existed in a product-based, grades-based educational system that rewards perfection—something AI alluringly offers. If there's something that connects all of these suggestions, though, it's messy, imperfect, human communication. Students are facing a great deal of uncertainty, and many are concerned about their professional futures. They're wondering what role AI might play in those futures. Helping students understand the value of your course—and your value as an expert resource—goes hand in hand with your seeking to understand their feelings about AI and their morphing interpersonal learning experiences at Pitt.
Acknowledgements: Thank you to Pitt Digital for funding to run the focus groups and to the entire GenAI Conversations research team led by Annette Vee, including Patrick Manning, Jessica FitzPatrick, Jessica Ghilani, Catherine Kula, Patty Wharton-Michael, Jialei Jiang, Sean DiLeonardi, Birney Young, Mark DiMauro, Jeff Aziz, and Gayle Rogers.