The selfie tool going viral for its weirdly specific captions is really designed to show how bigoted AI can be

A new viral tool that uses artificial intelligence to label people’s selfies is demonstrating just how weird and biased AI can be.

ImageNet Roulette, created by AI Now Institute cofounder Kate Crawford and artist Trevor Paglen, was shared widely on Twitter on Monday. The pair are examining the dangers of training AI on datasets with ingrained biases, such as racial bias.

ImageNet Roulette’s AI was trained on ImageNet, a database compiled in 2009 of 14 million labelled images. ImageNet is one of the most important and comprehensive training datasets in the field of artificial intelligence, in part because it’s free and available to anyone.

The creators of ImageNet Roulette trained their AI on 2833 sub-categories of “person” found in ImageNet.

Users upload photographs of themselves, and the AI tries to fit them into these sub-categories.
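
The tool’s own code and model aren’t published here, but the pipeline it describes (run an uploaded photo through a classifier trained on ImageNet labels and report the best-matching categories) is standard image classification. Below is a minimal sketch using a stock torchvision model; note it is pretrained on ImageNet’s 1,000 object classes rather than the 2,833 “person” sub-categories, so the model and label set are stand-ins, not ImageNet Roulette’s actual system.

```python
# Minimal sketch of an ImageNet-style classification pipeline.
# Assumption: a stock ResNet-50 pretrained on ImageNet-1k stands in for
# ImageNet Roulette's actual (unpublished) person-subcategory model.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop, and normalise as the model expects

def classify(path: str, top_k: int = 5):
    """Return the model's top-k (label, probability) guesses for one photo."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    labels = weights.meta["categories"]
    return [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]

# Example usage ("selfie.jpg" is a hypothetical file name).
for label, prob in classify("selfie.jpg"):
    print(f"{label}: {prob:.1%}")
```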

This Business Insider reporter tried uploading a selfie, and was identified by the AI as “myope”, a short-sighted person. I wear glasses, which would seem the most likely explanation for the classification.

Some of the classifications the engine came up with were more career-oriented or even abstract. “Computer user,” “enchantress,” “creep,” and “pessimist” were among the classifications thrown up. Plugging a few more pictures of myself in yielded such gems as “sleuth,” “perspirer, sweater,” and “diver.”

Other users were variously bewildered and amused by their classifications.

However, a less amusing side soon became apparent: the classifier threw up disturbing classifications for people of color. New Statesman political editor Stephen Bush found a picture of himself classified not only along racial lines, but with racist slurs like “negroid.”

Another of his photos was labelled “first offender.”

And a photo of Bush in a Napoleon costume was labelled “Igbo,” an ethnic group from Nigeria.

However, this isn’t a case of ImageNet Roulette going unexpectedly off the rails like Microsoft’s social media chatbot Tay, which had to be shut down less than 24 hours after Twitter users manipulated it into becoming a Holocaust denier.

Instead, creators Crawford and Paglen wanted to highlight what happens if the fundamental data used to train AI algorithms is bad. ImageNet Roulette is currently on display as part of an exhibition in Milan.

Read more: Taylor Swift once threatened to sue Microsoft over its chatbot Tay, which Twitter manipulated into a bile-spewing racist

“ImageNet contains a number of problematic, offensive and bizarre categories — all drawn from WordNet. Some use misogynistic or racist terminology,” the pair wrote on the site.

“Hence, the results ImageNet Roulette returns will also draw upon those categories. That is by design: we want to shed light on what happens when technical systems are trained on problematic training data.”

WordNet, a database of word classifications developed at Princeton in the 1980s, was used to label the images in ImageNet.
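
Because WordNet is freely available, the “person” subtree that ImageNet’s person categories were drawn from is easy to inspect. The sketch below uses NLTK’s WordNet interface; “person.n.01” is the standard synset identifier, though the exact counts it prints depend on your WordNet version.

```python
# Sketch: walk WordNet's "person" subtree, the source of ImageNet's
# person sub-categories. Requires nltk, plus a one-time
# nltk.download("wordnet") to fetch the data.
from nltk.corpus import wordnet as wn

person = wn.synset("person.n.01")

# Every synset reachable below "person" via hyponymy (transitive closure).
descendants = list(person.closure(lambda s: s.hyponyms()))
print(f"'person' has {len(descendants)} descendant synsets")

# Peek at a few of the category names and their dictionary glosses.
for synset in descendants[:5]:
    print(synset.name(), "-", synset.definition())
```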

Crawford tweeted that although ImageNet was a “major achievement” for AI, being such a huge database, the project revealed fundamental problems with bias: “be it race, gender, emotions or characteristics. It’s politics all the way down, and there’s no simple way to ‘debias’ it.”

AI bias is far from a theoretical problem. In 2016 a ProPublica investigation found that COMPAS, a computer program used to predict the likelihood of criminals re-offending, displayed racial bias against black people. Similarly, Amazon scrapped an AI recruitment tool it was working on last year after finding that the system ranked women applicants lower.
