Description: Tumblr’s automated tools for identifying adult content were reported to have incorrectly flagged inoffensive images as explicit, following the company’s announcement that it would ban all adult content on the platform.
Entities
Alleged: Tumblr developed and deployed an AI system, which harmed Tumblr content creators and Tumblr users.
Incident Stats
Incident ID
233
Report Count
1
Incident Date
2018-12-03
Editors
Khoa Lam
Incident Reports
theverge.com · 2018
Tumblr announced earlier today that it will ban all adult content on the platform, starting on December 17th. Now, longtime users are criticizing the company’s auto-detecting algorithms, which appear to be incorrectly flagging some inoffens…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.