Meta unveils new tool to detect bias in computer vision AI

1 Sep 2023


With 32,000 images of more than 50,000 people, Meta said its FACET tool can help developers test whether AI computer vision systems are biased.

Meta has released a new tool that is designed to detect biases – such as those relating to race and gender – within AI-powered computer vision systems.

Many AI models are known to exhibit systematic bias against women and people of colour, and Meta hopes that its latest tool – Fairness in Computer Vision Evaluation, or FACET – will help developers better detect and address some of these shortcomings.

“We want to continue advancing AI systems while acknowledging and addressing potentially harmful effects of that technological progress on historically marginalised communities,” reads a blogpost from Meta.

“That’s why we’re introducing FACET, a new comprehensive benchmark for evaluating the fairness of computer vision models across classification, detection, instance segmentation and visual grounding tasks.”

Meta said that the dataset consists of 32,000 images with more than 50,000 people labelled by “expert” human annotators for attributes such as perceived gender representation, age, skin tone, hairstyle and even occupational classes.

FACET can help researchers answer questions such as whether a model is better at identifying skateboarders when their perceived gender is male, or whether a system is better at identifying people with light skin than those with dark skin.

The tool can also help determine whether such problems are magnified when a person has curly, rather than straight, hair.
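The kind of check described above – comparing how well a model performs across perceived demographic groups – can be sketched in a few lines. The example below uses invented illustrative data and a hypothetical record layout, not the real FACET dataset or any Meta API; it simply shows how disaggregated recall exposes a performance gap between groups.

```python
# Hypothetical sketch of a disaggregated fairness check: compare a
# detector's recall across perceived-gender groups. The records below
# are invented for illustration only.
from collections import defaultdict

# Each record: (person_class, perceived_gender, detected_correctly)
annotations = [
    ("skateboarder", "male", True),
    ("skateboarder", "male", True),
    ("skateboarder", "male", False),
    ("skateboarder", "female", True),
    ("skateboarder", "female", False),
    ("skateboarder", "female", False),
]

def recall_by_group(records):
    """Return the fraction of people correctly detected, per group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for _, group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

print(recall_by_group(annotations))
```

A large gap between the per-group scores would flag a potential fairness problem worth investigating, which is the sort of signal a benchmark like FACET is designed to surface.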

“Benchmarking for fairness in computer vision is notoriously hard to do,” Meta said. “The risk of mislabeling is real, and the people who use these AI systems may have a better or worse experience based not on the complexity of the task itself, but rather on their demographics.”

Last week, Meta claimed its latest AI model – Code Llama – can generate and discuss code. It does so from simple text prompts, functioning as a coding assistant that can make workflows faster for developers and lower the entry barrier for people learning to code.

Meanwhile, Google DeepMind co-founder Mustafa Suleyman called on the US today (1 September) to enforce minimum global standards in AI in an interview with the Financial Times.

“The US should mandate that any consumer of Nvidia chips signs up to at least the voluntary commitments – and more likely, more than that,” Suleyman said. “That would be an incredibly practical chokepoint that would allow the US to impose itself on all other actors [in AI].”


Vish Gain is a journalist with Silicon Republic
