by Ryan Cecil
17 February 2026
The use of large language models (LLMs) like ChatGPT, Gemini, and Claude has grown to hundreds of millions of users worldwide (OpenAI, 2025). In the classroom, my workplace, and even at home, I often hear people refer to these models with phrases such as "Gemini generated this...", "ChatGPT coded this...", or "Claude says that...". This kind of language not only treats LLMs like people but also reflects a cultural shift in how we think about retrieving information.
There are potential issues with using anthropomorphic language to discuss LLMs. When we describe these models in human terms, we risk implying a level of reasoning and trustworthiness they do not possess. LLMs frequently fabricate information (Kalai et al. 2025) and struggle with complex reasoning tasks (Shojaee et al. 2025), and human-like language can conceal these limitations.
In addition, while conversations with LLMs feel natural, it is important to remember that their responses are synthesized from text patterns across millions of human-authored works, a fact that the human-like flow of conversation easily obscures. Some researchers argue that LLMs can generate new information like a human and should be treated as such. However, recent research has shown that LLMs often copy human-authored sources verbatim in their responses without proper citation (Huang et al. 2025).
To keep these perspectives in mind, I like to treat each LLM as a fuzzy digital library. Imagine opening Pitt's library system webpage and typing "How do LLMs work?" into the search bar. A traditional library system would return a list of books, articles, and other materials that closely match your search terms. In contrast, when you pose the same question to an LLM trained on that library, the returned response can be viewed as a fuzzy version of this list of sources. To write the response one word at a time, the LLM algorithm predicts the most likely next word based on similar texts in the library. When the library is as large as the internet, there is enough material from similar enough sources to produce coherent responses.
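The next-word prediction idea above can be illustrated with a toy sketch. Real LLMs use neural networks over enormous corpora, but a simple bigram counter over a tiny made-up "library" (all names and sentences below are invented for illustration) shows the same principle: each word is chosen because it frequently followed the previous word somewhere in the source texts.

```python
from collections import Counter, defaultdict

# Toy "library": a few human-authored sentences (invented for this example).
corpus = [
    "large language models predict the next word",
    "language models are trained on text",
    "models predict the next likely word",
]

# For each word, count which words follow it anywhere in the corpus.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

# Starting from "models", repeatedly predict the next word.
text = ["models"]
for _ in range(4):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)

print(" ".join(text))  # prints "models predict the next word"
```

The generated sentence is coherent even though it appears nowhere verbatim in the corpus: it is stitched together from overlapping fragments of the source texts, which is the sense in which the response is a "fuzzy" recombination of the library.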
In brief, like a library, LLMs in their current form should not be viewed as intelligent beings. Instead, as described by Farrell et al. (2025), they should be viewed as a "kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated". If interested, I highly recommend reading Farrell et al. (2025) for a much more thorough account than I can give in a single AI bite.
