EQUITY-AI | Fighting AI Suppression of Minority Voices

Rooted in Data

We have analyzed more than 50 million social media posts and comments with deep learning to understand which contexts falsely inflate toxicity scores.

Unbiased Screening

Online communities must be safeguarded from hate speech and rogue algorithms alike. AI needs to be optimized for equity. Our tools remove bias from training data.

Equity Feeds

Social media needs an equitable solution to its challenges. Liberating unprejudiced voices in online discussions is key to reaching progressive goals.

Status Quo: Posts falsely flagged for toxicity

Skewed data leads to a biased AI

The volume of content on social media has exploded in the last decade. To handle even basic moderation, machines have been tasked with assessing human expression. While hate speech frequently falls through the algorithm's cracks, minority voices are invisibly and systematically suppressed.

Machines that learn from datasets about humans are falsely considered objective arbiters of truth. The Perspective API, developed by Jigsaw (a technology incubator within Google's parent company Alphabet), publicly illustrates the scope and depth of the problem that bias in AI poses even for leading tech players. Perspective assigns each comment a toxicity score that estimates how likely readers are to perceive it as harmful to a given conversation. Every small change in wording, grammar, and context can shift that score significantly. What sounds like a straightforward concept can have extremely undesirable consequences: when the training data underrepresents oppressed voices, the AI automatically learns to suppress their free expression.
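As an illustration, here is a minimal Python sketch that probes Perspective's public REST endpoint with near-identical comments. The API key is a placeholder, the example sentences are our own, and the exact scores returned will vary as the underlying model is updated:

    import requests

    API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder; issued via Google Cloud
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + API_KEY)

    def toxicity(text):
        """Return Perspective's TOXICITY summary score (0 to 1) for one comment."""
        payload = {
            "comment": {"text": text},
            "languages": ["en"],
            "requestedAttributes": {"TOXICITY": {}},
        }
        response = requests.post(URL, json=payload, timeout=10)
        response.raise_for_status()
        scores = response.json()["attributeScores"]
        return scores["TOXICITY"]["summaryScore"]["value"]

    # Near-identical phrasings of a harmless self-description can score very
    # differently when identity terms are overrepresented in toxic training data.
    for comment in ["I am a person.", "I am a gay person.", "I am a Black person."]:
        print(comment, "->", round(toxicity(comment), 2))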

Social media is today's primary way of expressing feelings and opinions; it has become the core of our public speech. If the assessment of harmful posts or comments is skewed, this must be addressed, because once an AI has picked up a bias, its effects are amplified at scale. The silenced individuals of today will be entire silenced minorities tomorrow. Without a channel of expression, they fade into obscurity.

For us, debating whether this bias stems from malice is secondary. With our toolbox, we help bring equity to AI today.

LEARN MORE

Solution: Fighting the bias

The EQUITY-AI Solution

Our toolbox helps tech companies optimize their deep learning models for equity:

First, we measure and mitigate the effects of discriminatory patterns in training data by identifying and eliminating imbalances, spurious correlations, and under- or oversampled groups. Second, our EQUITY-AI NLP model ensures that wording, grammar, context, speech patterns, and idioms in the data do not skew the training. Finally, we support continuous monitoring of the resulting model's output across time and regions. A simplified sketch of the first step follows.
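To make the first step concrete, here is a self-contained Python sketch of one possible imbalance probe and mitigation. The group labels, the data layout, and the oversampling heuristic are illustrative assumptions, not our production pipeline:

    import random
    from collections import Counter

    def flag_rates(rows):
        """rows: (text, group, toxic_label) triples. Return each group's
        fraction of comments labeled toxic -- a simple imbalance probe."""
        totals, flagged = Counter(), Counter()
        for _text, group, label in rows:
            totals[group] += 1
            flagged[group] += label
        return {g: flagged[g] / totals[g] for g in totals}

    def oversample_nontoxic(rows, group, target_rate, seed=0):
        """Duplicate non-toxic examples of `group` until its toxic-label
        rate drops to target_rate, counteracting over-flagging."""
        rng = random.Random(seed)
        rows = list(rows)
        pool = [r for r in rows if r[1] == group and r[2] == 0]
        while pool and flag_rates(rows)[group] > target_rate:
            rows.append(rng.choice(pool))
        return rows

    # Toy dataset: the "aave" group is labeled toxic twice as often as "sae".
    data = [("t1", "aave", 1), ("t2", "aave", 1), ("t3", "aave", 0),
            ("t4", "sae", 1), ("t5", "sae", 0), ("t6", "sae", 0)]
    print(flag_rates(data))                                    # imbalance visible
    print(flag_rates(oversample_nontoxic(data, "aave", 1/3)))  # rates equalized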

LEARN MORE