by Seth Davis
3 April 2026
You’re watching a presentation, or scrolling through a webpage, or reading an article, or consuming a YouTube video essay, and the illustrations make no sense. There’s a stylized world map with “AFRICA” labeling Ireland; a series of distorted web icons; a computer monitor with Post-it notes stuck to its center; flowcharts that don’t connect; script in letters that don’t exist. You’re not surprised that the author used AI, but you wonder: why did they choose such bad images?
There are a number of explanations for the proliferation of sub-par AI imagery since the AI bubble began, including laziness and pressure from clueless superiors. These explanations assume that people use bad gen-AI in the hope that no one will be able to tell they cheated. But as I research my dissertation on the technological singularity, time and again I come across art, visual or otherwise, that proudly calls attention to being AI-generated. I come across popular writers and online discussion groups who insist they derive artistic appreciation from this slop. I’m not talking about the high-quality collaborative human–machine artwork that takes time and expertise to make. I speak of the slop in which we’re up to our eyes.
I call this phenomenon AI kitsch, and it must be stopped. Kitsch is crappy art that people nonetheless enjoy either ironically or because it confirms a comforting worldview. Jon McNaughton’s paintings of 47 on a motorcycle are kitsch. According to the novelist Milan Kundera, kitsch is “the aesthetic ideal of all politicians and all political parties and movements” because it discourages independent thought and demands that its subjects accept whatever judgment the ruling party spits out. Kitsch, says philosopher Denis Dutton, “is designed to appeal to an image we have of ourselves, and our response to kitsch is very often essentially self-congratulatory.”1 Kitsch acclimates us to eat what we’re given and ignore its taste of excrement.
Similarly, presenting bad AI art as if it were good reinforces the idea that gen-AI is good. The appreciation is on an aesthetic level: not that it’s useful, or that it’s promising, or that it can be appropriately incorporated into higher education, but that it’s good; that Silicon Valley is good; that America’s morally bankrupt techno-overlords are good.
Case in point: In 2023, two years before he disgraced his university by inviting 47 to its inaugural “AI and Energy Summit,” CMU president Farnam Jahanian spoke at CMU’s annual conference on “AI and Ethics.” The hall, in CMU’s Tepper School of Business, was filled with sloppy realist oil paintings in expensive gilt frames. These paintings, the placards reveal on inspection, are in fact the products of a robot called FRIDA, an engineering marvel. After admiring these paintings, Jahanian conducted an hour-long conversation with a professor-turned–JPMorgan executive, who said she wouldn’t consider ethics unless her boss ordered her to. Jahanian ended by luxuriating over a ChatGPT-generated passage and concluded: “So we didn’t even need to hold this conference—we could have just asked AI!” This alone should have gotten him booed offstage and blacklisted from future events, but the audience and organizers tolerated it as they tolerated the kitsch that has become AI’s dominant aesthetic.
AI kitsch is everywhere. It’s on bookshelves, on Spotify, in festival logos. It’s in the banners of university initiatives devoted to nuanced conversations about AI and higher ed. It’s not innocent. We cannot discuss gen-AI intelligently until we scrutinize its products. To think with our own brains, we must first judge with our own eyes.
