AI Incident Roundup – January ’24
Read our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄 Trending in the AIID
The latest crop of AI incidents in the database ranges from unintentional to deliberate uses of AI tools, resulting in harm to individuals, corporate reputations, and even potentially the US primary elections.
Three incidents new in January showcased the unintended consequences of using AI without enough human oversight: inappropriate automated photo editing, insulting language generated by a customer service chatbot, and unedited Amazon product listings containing AI-generated error messages. Several other incidents demonstrated the use of AI to produce unauthorized fake versions of the celebrities Taylor Swift and George Carlin, as well as a fake Biden voice used to mislead New Hampshire Democratic voters.
If you want to learn more about incident trends in the AIID, use our Discover tool with search terms (e.g., “Deepfakes”) and filters (e.g., “Incident Date”).
🗞️ New Incidents
Occurring in January
- Incident 633 (01/28/2024) Nine Network's AI Alters Lawmaker Georgie Purcell's Image Inappropriately
- Incident 632 (01/24/2024) Significant Increase in Deepfake Nudes of Taylor Swift Circulating on Social Media
- Incident 628 (01/22/2024) Fake Biden Voice in Robocall Misleads New Hampshire Democratic Voters
- Incident 631 (01/18/2024) Chatbot for DPD Malfunctioned and Swore at Customers and Criticized Its Own Company
- Incident 625 (01/12/2024) Proliferation of Products on Amazon Titled with ChatGPT Error Messages
- Incident 627 (01/09/2024) Unauthorized AI Impersonation of George Carlin Used in Comedy Special
Occurring in December
- Incident 626 (12/26/2023) Social Media Scammers Used Deepfakes of Taylor Swift and Several Other Celebrities in Fraudulent Le Creuset Cookware Giveaways
- Incident 624 (12/20/2023) Child Sexual Abuse Material Taints Image Generators
- Incident 619 (12/20/2023) Rite Aid Facial Recognition Disproportionately Misidentified Minority Shoppers as Shoplifters
- Incident 622 (12/18/2023) Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1
- Incident 618 (12/14/2023) Navy Federal Credit Union Faces Allegations of Racial Bias in Mortgage Approvals
- Incident 623 (12/12/2023) Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case
Earlier Incidents Newly Added
- Incident 616 (11/27/2023) Sports Illustrated Is Alleged to Have Used AI to Invent Fake Authors and Their Articles
- Incident 613 (11/23/2023) AI-Generated Images Available through Adobe Stock Misrepresent Real-World Events
- Incident 621 (11/10/2023) Microsoft AI Is Alleged to Have Generated Violent Imagery of Minorities and Public Figures
- Incident 617 (11/09/2023) Male student allegedly used AI to generate nude photos of female classmates at a high school in Issaquah, Washington
- Incident 614 (11/02/2023) Google Bard Allegedly Generates False Allegations Against Consulting Firms Used in Research Presented in Australian Parliamentary Inquiry
- Incident 629 (07/11/2023) Shein Accused of AI-Driven Art Theft on Merchandise
- Incident 615 (06/13/2023) Colorado Lawyer Filed a Motion Citing Hallucinated ChatGPT Cases
- Incident 630 (01/22/2022) Alleged Macy's Facial Recognition Error Leads to Wrongful Arrest and Subsequent Sexual Assault in Jail
- Incident 620 (11/10/2021) A Robot at a Tesla Factory in Texas Allegedly Injured an Engineer
👇 Diving Deeper
- Explore clusters of similar incidents in the Spatial Visualization
- Check out the Table View for a complete listing of all incidents
- Learn about alleged developers, deployers, and harmed parties on the Entities page
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook
- Submit incidents to the database
- Contribute to the database’s functionality