AI Slop: The Digital Pollution of Our Time

Jacob DJ Wilson
5 min read · Oct 14, 2024


In the wake of recent natural disasters, a new threat has emerged alongside the physical destruction: AI Slop. This term, rapidly gaining traction in tech circles, refers to the deluge of low-quality, often misleading content generated by artificial intelligence.

Where is AI Slop?

The recent hurricanes ravaging the southeastern United States have become a breeding ground for this digital pollution. Social media platforms are awash with AI-generated images purporting to show the storms' aftermath. One widely circulated image depicts a shivering little girl in a boat, clutching a puppy amidst floodwaters. Another image, apparently created in response, shows a dog carrying a little girl to safety through the floodwaters. These images, while compelling, are entirely fabricated.

Does AI Slop matter?

Unlike traditional misinformation, AI Slop can be produced at an unprecedented scale and speed. It’s the digital equivalent of a polluted river, contaminating our information ecosystem. Just as email Spam evolved from a nuisance into a serious cybersecurity threat, AI Slop has the potential to erode trust in online information and manipulate public opinion. What’s particularly concerning is the attitude of those sharing this content. When confronted about posting a fake image, one prominent figure responded,

“I don’t know where this photo came from. And honestly, it doesn’t even matter. It is seared into my mind forever.” (Source: the Hard Fork podcast)

This mindset — prioritizing emotional resonance over factual accuracy — is at the heart of why AI Slop is so dangerous.

The impact is already being felt. Official sources of information, such as government agencies and public services, are being inundated with requests to clarify what is real and what is fake. This raises the question of whether we need to establish new roles for content curators with evolving skill sets to discern what is factual.

The creation and spread of AI Slop isn't always malicious. Some creators, often in developing countries, are simply trying to capitalize on social media platforms' monetization schemes. Others may have political motivations, seeking to influence public opinion ahead of elections. Regardless of intent, the result is the same: a polluted information landscape that makes it increasingly difficult for citizens to make informed decisions.

Is AI Slop the same as hallucinations?

Hallucinations are a direct result of limitations or flaws within the AI model itself, and therefore, the responsibility lies with the LLM provider. They are responsible for ensuring the model is trained on high-quality data, is well-architected, and is continuously monitored for potential issues.

On the other hand, AI Slop, like email Spam, is often a result of malicious actors exploiting vulnerabilities in the AI ecosystem. While AI providers can take steps to mitigate these threats, they are not solely responsible for preventing all instances of AI Slop. It’s a shared responsibility that involves the AI community, platform providers, and users working together to combat this issue.

As we navigate this new digital terrain, it's crucial to develop better tools for detecting AI-generated content and to foster the digital literacy skills that help people critically evaluate the information they encounter online, especially during critical times like natural disasters. The rise of AI Slop is a stark reminder that as our technological capabilities advance, so too must our ability to discern truth from fiction. In an era where reality can be manipulated with a few clicks, maintaining the integrity of our information ecosystem is more important than ever.
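Detection is genuinely hard, and no single signal is reliable. As one narrow illustration, the sketch below scans an image file's raw bytes for provenance markers, such as C2PA Content Credentials labels, that some generators embed. The filename and marker list are illustrative assumptions, and the absence of markers proves nothing; treat this as a starting point, not a detector.

```python
# Naive heuristic: scan an image file's raw bytes for provenance markers
# that some AI generators and the C2PA standard embed. This is a sketch,
# not a reliable detector: markers are easily stripped, and their absence
# proves nothing. The marker list below is illustrative, not exhaustive.

MARKERS = [
    b"c2pa",                       # C2PA / Content Credentials manifest label
    b"jumb",                       # JUMBF box type that can carry C2PA data
    b"trainedAlgorithmicMedia",    # IPTC digital-source-type used for AI media
]

def scan_for_provenance_markers(path: str) -> list[str]:
    """Return any known provenance markers found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in MARKERS if m in data]

if __name__ == "__main__":
    # "flood_photo.jpg" is a hypothetical filename, used for illustration.
    hits = scan_for_provenance_markers("flood_photo.jpg")
    if hits:
        print(f"Provenance markers found: {hits} (may indicate AI generation)")
    else:
        print("No markers found; this proves nothing either way.")
```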

AI Slop vs. Email Spam

To understand the full implications of AI Slop, it’s helpful to draw comparisons to the evolution of email Spam. The term “Spam” in the context of electronic communications has a rich history dating back to the early days of the internet. One of the earliest documented uses occurred in 1993 on Usenet, when Richard Depew accidentally posted the same message 200 times due to a software bug. This incident helped solidify the term’s association with unwanted, repetitive messages in online spaces.

Over the years, email Spam has evolved into a broader category encompassing a variety of threats, including business email compromise (BEC), phishing, and social engineering. The fight against it has been a constant battle. Early efforts focused on simple filtering techniques, such as blocking emails from certain domains or IP addresses. As Spammers became more sophisticated, defenders developed more advanced techniques, including Bayesian filtering, neural networks, and natural language processing. More recently, the rise of cloud-based email services has enabled Spam-fighting tools that leverage massive datasets and machine learning algorithms to identify and block Spam with greater accuracy. AI Slop, however, can permeate social networks, news sources, and platforms where such detection and prevention mechanisms don't exist. To make the Bayesian approach concrete, a toy spam classifier is sketched below.
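The following is a minimal sketch of Bayesian filtering using scikit-learn's CountVectorizer and MultinomialNB. The training messages are invented for illustration; a real filter would need a large, continuously retrained corpus.

```python
# A toy Bayesian spam filter, in the spirit of the early email-Spam
# defenses described above. Requires scikit-learn; the training
# messages below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "Claim your free prize now, click here",       # spam
    "Limited offer: cheap meds, no prescription",  # spam
    "Meeting moved to 3pm, agenda attached",       # ham
    "Can you review the incident report today?",   # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

# CountVectorizer turns each message into word counts; MultinomialNB
# applies Bayes' rule over those counts to estimate P(spam | words).
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["Free prize! Click now"]))       # likely ['spam']
print(model.predict(["Agenda for today's meeting"]))  # likely ['ham']
```

Real systems layer many such signals (reputation, headers, URLs) on top of the text model, but the core Bayesian idea is this simple.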

There are also important differences. Unlike email Spam, which is primarily driven by economic motives, AI Slop can be motivated by a variety of factors, including political gain, social engineering, or simply a desire to create attention-grabbing content. Additionally, AI Slop can be generated at a much larger scale and with a higher degree of sophistication than traditional email Spam. This makes it even more difficult to detect and mitigate, as it can easily spread through social media, news outlets, and other online platforms.

As AI technology continues to advance, the problem of AI Slop will likely become more pervasive and challenging to address. New tools and techniques will be needed to detect and mitigate this threat. To stay ahead of the curve, consider exploring resources like the Trail of Bits Awesome-ML-Security GitHub repository.

Additionally, the AIBOM TT (Artificial Intelligence Bill of Materials Tiger Team) is a collaborative effort dedicated to enhancing AI transparency, security, and risk management. Inspired by the CISA SBOM initiative, the AIBOM TT aims to promote industry-wide adoption of AIBOM best practices. By tracking and documenting the components and data used to train and develop AI models, AIBOMs can help identify and mitigate potential risks associated with AI Slop.
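To make the idea concrete, here is a minimal sketch of what an AIBOM record might capture. The field names, model name, and dataset are illustrative assumptions, loosely inspired by CycloneDX-style BOMs rather than any finalized AIBOM TT schema.

```python
# A minimal, hypothetical AIBOM record. Field names are illustrative
# assumptions loosely modeled on CycloneDX-style BOMs; this is a sketch
# of the concept, not a published AIBOM TT schema.
import json

aibom = {
    "bomFormat": "AIBOM-sketch",  # hypothetical format identifier
    "model": {
        "name": "flood-image-classifier",  # hypothetical model
        "version": "1.2.0",
        "architecture": "ResNet-50",
        "license": "Apache-2.0",
    },
    "trainingData": [
        {
            "name": "disaster-imagery-corpus",  # hypothetical dataset
            "source": "internal",
            "collectedThrough": "2024-06",
            "knownRisks": ["possible AI-generated images in corpus"],
        }
    ],
    "dependencies": [
        {"name": "pytorch", "version": "2.3.0"},
    ],
}

print(json.dumps(aibom, indent=2))
```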

The Role of Content Curators

As the problem of AI Slop becomes more prevalent, there will be a growing need for skilled content curators: individuals responsible for evaluating the quality and authenticity of online content, including content generated by AI. Curators will need a deep understanding of AI technology and its limitations, along with strong critical-thinking and fact-checking skills.

The role of content curators will be particularly important in areas where misinformation can have serious consequences, such as public health, natural disasters, national security, and elections. By working together, content creators and curators can help to ensure that the online information ecosystem remains reliable and trustworthy.

By understanding the nature of AI Slop and its potential consequences, we can take steps to protect ourselves and our communities from this emerging threat. Just as we’ve learned to be wary of email Spam, we must now learn to approach online content with a healthy dose of skepticism and critical thinking.
