Computers think they know who you are.
Artificial intelligence algorithms can recognize objects from images, even faces.
But we rarely get a peek under the hood of facial recognition algorithms.
Now, with ImageNet Roulette, we can watch an AI jump to conclusions.
Some of its guesses are funny, others…racist.
It uses data from one of the large, standard databases used in AI research.
Upload a photo, and the algorithm will show you what it thinks you are.
My first selfie was labeled nonsmoker.
Another was just labeled face.
Our editor-in-chief was labeled a psycholinguist.
Our social editor was tagged swot, grind, nerd, wonk, dweeb.
Harmless fun, right?
These categories are in the original ImageNet/WordNet database, not added by the makers of the ImageNet Roulette tool.
Here's the note from the latter:
ImageNet Roulette regularly classifies people in dubious and cruel ways. We did not make the underlying training data, which is responsible for these classifications.
Where do these labels come from?
Because the makers of ImageNet don't own the photos they collected, they can't just give them out.
Browsing those images gives us a peek into what has happened here.
The psycholinguists tend to be white folks photographed in that faculty-headshot sort of way.
If your photo looks like theirs, you may be tagged a psycholinguist.
Likewise, other tags depend on how similar you look to training images with those tags.
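To make that concrete, here is a toy sketch of similarity-based tagging. The features and labels are entirely made up for illustration; real systems like those trained on ImageNet use deep neural networks over millions of images, but the core idea is the same: a new photo gets the tag of the training examples it most resembles.

```python
# Illustrative sketch only: a nearest-neighbor "tagger" over hypothetical
# two-number "features." A new input is assigned the label of the closest
# training example, however dubious that label may be.
import math

# Hypothetical training data: (feature vector, label) pairs.
TRAINING = [
    ((0.9, 0.1), "psycholinguist"),  # e.g., the "faculty headshot" look
    ((0.8, 0.2), "psycholinguist"),
    ((0.1, 0.9), "nonsmoker"),
    ((0.2, 0.8), "face"),
]

def tag(features):
    """Return the label of the nearest training example."""
    def distance(example):
        vec, _ = example
        return math.dist(vec, features)
    _, label = min(TRAINING, key=distance)
    return label

print(tag((0.85, 0.15)))  # resembles the "headshot" cluster -> psycholinguist
```

The point of the sketch: the algorithm never decides what a psycholinguist *is*; it only measures how close your photo sits to whatever photos happened to carry that label.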
If you are bald, you may be tagged as a skinhead.
If you have dark skin and fancy clothes, you may be tagged as wearing African ceremonial clothing.
Somewhere along the way, people saw these labels and didn't say, wow, let's remove this.
Yes, algorithms can be racist and sexist, because they learned it from watching us, alright?
They learned it from watching us.