Humane Intelligence has announced a new competition aimed at tackling the growing issue of hateful image-based propaganda online. The challenge, organized in partnership with the Nordic counterterrorism group Revontulet, opens on September 26, 2024, and invites participants aged 18 and older to compete for $10,000 in prizes.
This initiative is part of a broader series of “algorithmic bias bounty” programs launched by Humane Intelligence, a non-profit founded by AI researcher Rumman Chowdhury in 2022. Supported by Google.org, the series aims to address societal impacts of AI while equipping developers with skills to conduct algorithmic assessments.
“The goal of our bounty programs is dual,” says Chowdhury. “First, to teach people how to do algorithmic assessments, and second, to solve pressing problems in the field.”
The first challenge in this series focused on identifying gaps in data sets that could lead to biased or inaccurate AI outputs. The current challenge, however, targets the complex issue of tracking hateful imagery online. With the rise of generative AI, extremist groups have found ways to manipulate images to avoid detection, creating a cat-and-mouse game with online platforms.
Participants in the competition are tasked with developing two models. The first is an intermediate-level model designed to identify hateful images. The second, an advanced challenge, involves creating a model that can evade detection by the first. “This mimics the real-world scenario where do-gooders create one approach, and bad actors counter with another,” explains Chowdhury. The aim is to engage machine-learning researchers in developing effective tools to screen for hateful images.
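The cat-and-mouse dynamic between the two models can be illustrated with a deliberately simple sketch. The example below is purely hypothetical and not drawn from the competition itself: it stands in a naive hash-matching "detector" for the intermediate model and a one-byte perturbation for the advanced evasion model, showing why exact-match approaches fail and learned classifiers are needed.

```python
import hashlib

# Hypothetical stand-in for a known-content database; in practice this
# would be a curated set of hashes (or, better, a learned classifier).
KNOWN_HATEFUL_HASHES = {
    hashlib.sha256(b"example-hateful-image-bytes").hexdigest(),
}

def detect(image_bytes: bytes) -> bool:
    """Toy 'intermediate' model: flag exact matches to known content."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HATEFUL_HASHES

def evade(image_bytes: bytes) -> bytes:
    """Toy 'advanced' model: a trivial perturbation (one appended byte)
    changes the hash while leaving the image visually near-identical."""
    return image_bytes + b"\x00"

original = b"example-hateful-image-bytes"
print(detect(original))         # exact copy is caught
print(detect(evade(original)))  # minimally altered copy slips through
```

The second call returning `False` is the whole problem in miniature: any detector built on surface features invites a counter-move, which is why the competition pits an evasion model directly against a detection model.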
A significant challenge in this project is the context-dependency of hate-based propaganda. Without a deep understanding of certain cultural symbols or signifiers, it can be difficult for a model to accurately identify what qualifies as extremist content. “If the model never sees an example of a hateful image from a specific part of the world, it won’t be effective at detecting it,” says Jimmy Lin, a computer science professor at the University of Waterloo.
This context-dependency is why Humane Intelligence partnered with a non-U.S. organization for this challenge. “Most models are fine-tuned to U.S. examples, so working with a Nordic counterterrorism group broadens the scope,” says Chowdhury.
However, Lin warns that algorithmic solutions alone may not be sufficient. “We can develop models to detect fake content, but long-term solutions will likely involve training, literacy, and education efforts to avoid this perpetual cat-and-mouse game,” he adds.
The competition runs until November 7, 2024. Winners will receive $4,000 for the intermediate challenge and $6,000 for the advanced challenge. Their models will also be evaluated by Revontulet, which may incorporate them into its suite of tools to combat extremism.