The “dead internet theory” suggests that a significant portion of online content and activity is generated by artificial intelligence (AI) agents rather than humans. These AI agents rapidly create posts and images designed to farm engagement on social media platforms. While some of this AI-generated content may seem harmless, like the viral “shrimp Jesus” images, there are concerns about more sophisticated and potentially deceptive uses.
Studies have found that bot accounts on social media can spread misinformation and disinformation, amplifying unreliable sources and swaying public opinion. Social media companies are taking steps to address the misuse of their platforms. They are exploring ways to identify and remove bot activity, as well as considering measures like requiring users to pay for membership to deter bot farms.
The concept of “slop” has emerged to describe carelessly automated AI webpages and images that clutter the internet. Unlike interactive chatbots, slop is not intended to serve users’ needs but rather to generate ad revenue and manipulate search engine results. Slop can be harmful when it contains incorrect or misleading information.
Examples include an AI-generated article listing a food bank as a tourist attraction and AI-written books with dangerous advice.
The spread of AI-generated slop
AI-generated image slop, such as bizarre reworkings of religious iconography, has also proliferated on social media.
Advertising agencies, the main revenue source for social media, are becoming concerned about the rise of slop. They worry that consumers may come to feel they are being served low-quality content, and may even mistakenly dismiss legitimate ads as AI-generated. Tackling the problem of slop will be challenging, as major tech companies themselves now use AI to generate content such as search result overviews.
While these companies claim to have strong safety guardrails, slop continues to spread across the web. The story of “Shrimp Jesus” illustrates how an innocent joke can be co-opted by AI content farms and used by scammers to lure unsuspecting users. As AI-generated content becomes more sophisticated, it will be increasingly difficult to discern the intentions behind it.
Experts call for greater transparency from social media companies, including labeling AI-generated content. While AI can create impressive images, many people still value the authenticity and “soul” of human-made art. The rise of AI-generated content on the internet is a cautionary tale, reminding us to be skeptical and navigate social media with a critical mind.
As one researcher noted, “Sometimes people use AI for creation, but there’s always a dark side.”
Cameron is a highly regarded contributor in the rapidly evolving fields of artificial intelligence (AI) and machine learning. His articles delve into the theoretical underpinnings of AI, the practical applications of machine learning across industries, ethical considerations of autonomous systems, and the societal impacts of these disruptive technologies.