Researching AI Incidents to Build a Safer Future: The Digital Safety Research Institute partners with the Responsible AI Collaborative
The Digital Safety Research Institute (DSRI) of UL Research Institutes is partnering with the Responsible AI Collaborative (TheCollab) to advance the AI Incident Database (AIID) and its use in preventing artificial intelligence (AI) incidents and public harms.
The AIID is a public resource dedicated to defining, identifying, and cataloging AI incidents to mitigate risk in future AI system deployments. Since 2018, the AIID has collected thousands of incident reports alleging real-world harm involving AI systems, making these harms discoverable to developers, researchers, and policymakers.
TheCollab, a partnership among organizations such as the Center for Security and Emerging Technology and the Center for Advancing Safety of Machine Intelligence at Northwestern University, acts as the steward of the data contained in the AIID and uses that data to support independent research projects.
DSRI is now continuing development of the code that underlies the AIID and will use the AIID as a foundation for its own open-science research projects in AI assessment and consumer safety.
As AI and other intelligent systems become more commonplace parts of our everyday lives – in our work, in our homes, in our society – we risk becoming too familiar with their impacts. Yet the scale and complexity of the harm posed by these systems will only grow with time. TheCollab, DSRI, and other partners welcome your help in the mission to identify these harms, learn from them without repeating them, and build a safer digital ecosystem.
Learn more at the AIID, or click here to submit an incident report.