AI Incident Roundup for September 2022
Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap on the latest incidents and reports of the AI Incident Database.
Estimated reading time: 5 minutes
🗞️ New Incidents
Emerging incidents that occurred last month:
Incident #339: Open-Source Generative Models Abused by Students to Cheat on Assignments
- What happened? Students were reportedly using open-source text generative models such as OpenAI's GPT-3 to cheat on assignments such as writing reports and essays.
- How does the AI work? Similar to how Autocomplete works on phones, these “generative models” were trained on rich sources of text such as the Internet to generate new content based on an initial prompt, but they are designed to complete full sentences or even entire essays instead of words.
- How did this AI cause harm? Most educators see this misuse of AI as a violation of academic integrity and an unfair advantage against other students, although some also see the AI’s potential as a valuable study aid tool for students if used responsibly.
- **Who was involved? **Sudowrite and OpenAI developed an AI system deployed by students, which harmed teachers, cheating students, and non-cheating students.
Incident #350: Delivery Robot Rolled Through Crime Scene
- What happened? A Serve Robotics delivery robot was shown on video rolling through a crime scene blocked off by police tape.
- How was the robot involved? When the autonomous robot approached the intersection, per company’s internal policy, its control was remotely taken over by a human operator, who made the decision to proceed and cross the caution tape.
- How did this robot cause harm? The human-operated robot confused the bystanders at the crime scene about its intention and autonomy, and trespassed the local police investigation area.
- Who was involved? Serve Robotics developed and deployed an AI system, which harmed police investigators.
Incident #351: "The Little Mermaid" Clip Doctored Using Generative AI to Replace Black Actress with White Character
- What happened? A Twitter user reportedly modified a short clip of Disney's 2022 version of "The Little Mermaid” using generative AI, replacing a Black actress with a white digital character. The Twitter user was since banned from the platform.
- How does the AI work? This AI is an example of deep-fake technology, in which the video frames were given as inputs into a model that was previously trained to create new frames, often changing some characteristic in the original frames – in this case, the face and skin color of the character in the video.
- How did this AI cause harm? This use of AI to change the skin color of a Black actress in a movie was seen as a form of “whitewashing” and “blackface” which reinforces the suppression of a historically disadvantaged group.
- Who was involved? An unknown entity developed an AI system deployed by @TenGazillioinIQ, which harmed Halle Bailey and Black actresses.
Incident #352: GPT-3-Based Twitter Bot Hijacked Using Prompt Injection Attacks
- What happened? Remoteli.io's GPT-3-based Twitter bot was shown being hijacked by Twitter users who redirected it to repeat or generate any phrases.
- How does prompt injection work? Because GPT-3 interprets the user prompt collected by the bot as is, users can craft specific prompts to command the model to ignore a previous instruction and take another action instead.
- How did this bot cause harm? The Remoteli.io bot when hijacked was used for non-intended purposes by the developer such as making a threat, although the cases so far have fared more on the side of humor.
- Who was involved? OpenAI and Stephan de Vries developed an AI system deployed by Stephan de Vries, which also harmed the developer.
📎 New Developments
Older incidents that have new reports or updates.
Original incident | New report(s) |
Incident #293: Cruise’s Self-Driving Car Involved in a Multiple-Injury Collision at an San Francisco Intersection | |
Incident #183: Airbnb's Trustworthiness Algorithm Allegedly Banned Users without Explanation, and Discriminated against Sex Workers |
|
Incident #353: Tesla on Autopilot Crashed into Trailer Truck in Florida, Killing Driver | |
Incident #254: Google’s Face Grouping Allegedly Collected and Analyzed Users’ Facial Structure without Consent, Violated BIPA |
|
👇 Diving Deeper
- Explore clusters of similar incidents in Spatial Visualization
- Check out Table View for a complete view of all incidents
- Learn about alleged developers, deployers, and harmed parties in Entities Page
- All new incidents added to the database in the last month, grouped by topic:
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook
- Submit incidents to the database
- Contribute to the database’s functionality