The emergence of AI shaming targets artists using AI tools, often damaging their reputations. This new enforcement of social boundaries stifles creativity and originality and narrows the definition of art.
Humans have shamed each other for millennia to enforce the “rules”. Parents shame children who don’t measure up to some ill-defined standard. Bosses purge workers who don’t meet an impossible goal. A community banishes individuals who break an arbitrary or hypocritical taboo. Religions are particularly adept at shaming. The victim may simply take their insufferable behavior or unpopular beliefs underground. In the worst case, the blackballed person vanishes, perhaps permanently.
What is the new way of shaming?
Since at least the introduction of ChatGPT in November 2022, a new form of shaming has emerged, targeting writers, visual artists, and musicians willing to try out new tools that expand their capabilities and save significant time and effort. Not only do a certain number of self-appointed AI police attempt to humiliate fellow artists and experimenters with accusations such as “That looks like it was made with AI,” but many also point the finger of purity merely as an insult, without knowing whether a work was created with AI or not.
An accusation of AI use is the insidious new way to embarrass and mortify a rival or enemy, even if the accuser has never met the victim.
Shaming is typically a way of defining social boundaries. If you don’t conform to certain behaviors or a shared belief system, you don’t belong to the group. If you adopt the correct behaviors and beliefs, the group accepts you. When it comes to AI shaming, the in-group tends to be writers, artists, and musicians who have decided for their own reasons that the creation of works using artificial intelligence crosses a legal, ethical, or artistic boundary. Some of this is rooted in a definition of “authentic” or “real” art. The cognoscenti, usually academics, rich collectors, or influential critics, see themselves as the ultimate gatekeepers of taste and decorum. Some of the concern is founded on genuine questions of copyright, though the accusers rarely bother to educate themselves on how AI works. To oversimplify, AI builds things, such as poems or paintings, by predicting what the next word or color or note is likely to be, based on internal statistical rules, a large mass of data, and the suggestions of a prompting human.
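For the curious, the “predict the next word” idea can be sketched in a few lines of code. This toy example is my own illustration, not how any real model is built; it simply counts which word follows which in a tiny sample text and picks the most frequent successor, the crudest possible version of the statistical prediction described above:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" -- real models learn from billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often follows the given word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" most often here
```

Real systems work with vast neural networks rather than a lookup table, but the core move is the same: given what came before, emit a statistically likely continuation.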
Why does AI shaming cause so much damage?
The potential for damage to an artist’s reputation is enormous, especially for freelance visual artists. In 2024, DC Comics cover artist Francesco Mattina was singled out for a “telltale” sign of AI use on a variant Superman comic cover, and Mattina was attacked on social media (the usual vector of AI shaming). Mattina was already under suspicion for alleged copycatting, but the cover art community piled on with an accusation of AI use. At least two other DC artists have faced the same opprobrium. In Mattina’s case, DC dropped his covers. Other artists have denied the allegations, but the damage was done. If this kind of shaming continues, artists risk losing their livelihoods, even if they use AI for things as benign as idea generation.
Most of the shaming mob relies on its own “observations”, i.e., biases, to spot AI use. For the literary AI shamers, the clues include the lowly em dash (—), stilted or awkward sentence structure, and the verb “delve”, among others. As soon as readers hit a sentence they consider poorly written or incorrectly punctuated, their first thought is “AI must’ve written it,” with no evidence beyond their preconceived notions. A denial by the writer usually fails to quash the accusation, because of course they’d deny it.
Racism may even tinge some of the literary shaming, although it’s doubtful the accusers are aware of it. An appearance of the verb “delve” is often seen as a red flag for AI. People who assist with AI training are called “annotators,” and many of them are based in countries such as Nigeria, where they speak their own flavor of English. “Delve” is a popular verb in Nigerian English, so the word appears frequently in annotations. One could argue that marking writing as AI “slop” because “delve” appears too often is a swipe at an entire population of Africans.
Boundaries to using AI in writing
AI in literature and the arts is here to stay. ChatGPT has more than 10 million subscribers. Resistance is futile, as the Borg would say. That’s not going to stop the slop-spotters, because they see themselves as upholding “proper” methods for creating art. Frankly, I welcome these AI tools. I’m a ChatGPT subscriber, and I’ve used Claude, Copilot, and a host of image-making tools. I’ve posted a few images on my Instagram account. The main problem now for people who want to use AI to do better, more productive work is protecting themselves from anti-AI crews coming at them with rhetorical torches and pitchforks.
AI tools such as ChatGPT are excellent writing assistants. Here are some of my applications:
- Feedback on story drafts
- Suggestions when I’m stuck in a plot cul-de-sac
- Character names
- Building draft outlines
- Any kind of marketing or social media copy
There are some things I never do with AI-generated text:
- Accept an AI answer at face value
- Copy/paste directly into a draft without editing
- Fail to check a source cited by AI output
- Accept the AI style as my own
The solution: Cultivate your own voice
That last one is critical to avoiding accusations of AI slop. In my view, the key to successfully using AI in your writing is maintaining and cultivating your unique voice. The concept of “voice” is one of the hardest in the writing craft to nail down. It’s the words you choose, how you arrange them, and so on. But it’s more than the sum of the parts. Most of us can recognize Shakespeare or Hemingway or Dickinson by their voice. You develop your voice over time by writing millions of words. Your writing style emerges from repetition. An AI may imitate you, but it cannot create like you.
In practice, I never write drafts of my narrative work by editing AI-generated text. This preserves my own writing voice and avoids the taint (if you believe AI taints creative work) of artificial intelligence.
In short, don’t let AI supplant your own unique way of describing the world. AI can help you along, but it cannot be you.
Frequently asked questions
How should I deal with false accusations of AI use? First, decide whether conforming to some unwritten rule is in your best interest. I treat these charges (I’ve been scolded at least once) as just someone’s opinion, similar to a book review. Unfortunately, some reviews carry more weight than others. You have to decide your risk tolerance.
Are tech companies stealing original work to use in generative AI? Companies such as OpenAI (ChatGPT) and Anthropic (Claude) have admitted to scraping the web for images and text to train their AI models. In the case of Anthropic, the company agreed to pay $1.5 billion to settle piracy claims. However, a federal judge has ruled that legitimately acquired materials used for AI training and output do not violate copyrights, because they are “transformed” into something else. In other words, they aren’t copied word for word or pixel for pixel and sold as original work.
Can I copyright AI-generated work? I’m no lawyer, but the consensus at this writing appears to be probably not. Artificial intelligence is such a new technology, and so many lawsuits remain unsettled, that no one knows for sure whether a copyright on an AI work would hold up. That said, it’s safest to assume that an image made with MidJourney, for example, is not protected as yours.
Image: octavio lopez galindo from Pixabay
Have you been a target of AI shaming? Tell your story in the comments.

