In 2024, AI technology saw a surge in both innovation and mishaps. The year was marked by a flood of AI-generated content, often referred to as "AI slop," which now permeates the internet in forms ranging from newsletters to social media posts and is generally of low quality. Emotionally charged AI-generated images tend to draw more engagement, and therefore more ad revenue, which fuels their production. This proliferation of AI slop also poses a significant risk to the future performance of AI models themselves, since they are trained on internet data that increasingly includes this low-quality material.

AI's unpredictability was further highlighted by several notable incidents. Grok, the AI assistant from xAI, ignored guardrails common among its competitors and generated images of controversial content. In January, non-consensual deepfake nudes of Taylor Swift circulated online, exposing vulnerabilities in the safeguards of AI image generators. Businesses adopting AI chatbots faced issues too: Air Canada's chatbot provided a customer with incorrect information, leading to legal action.

Hardware AI assistants fared no better. Humane's Ai Pin and the Rabbit R1 failed to gain traction amid poor sales and functionality problems. Finally, AI-generated summaries spread misinformation: Google's AI search summaries suggested bizarre actions such as adding glue to pizza, and the iPhone's notification summaries produced false news headlines.
Source: www.technologyreview.com
