ImageNet Roulette Shows How ML Classifies You

ImageNet Roulette is part of an art and technology exhibit called Training Humans. Upload a photo and the algorithm will give you a classification. Some of the labels are funny; others are racist.

ImageNet Roulette is meant in part to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them.

Its creators did not produce the underlying training data responsible for these classifications. They imported the categories and training images from a popular data set called ImageNet, which was created at Princeton and Stanford Universities and is a standard benchmark for image classification and object detection.
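
For readers curious what that classification step looks like in code, here is a minimal sketch in Python using torchvision's ImageNet-pretrained ResNet-50. This is not ImageNet Roulette's actual pipeline (the project drew specifically on ImageNet's "person" categories); the model choice and the photo.jpg path are placeholder assumptions, illustrating only how an uploaded photo gets mapped to an ImageNet label.

```python
# Rough sketch of the classification step behind a tool like ImageNet
# Roulette, using torchvision's ResNet-50 pretrained on ImageNet.
# Not ImageNet Roulette's actual model or its "person" categories;
# "photo.jpg" is a placeholder path.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# The weights ship with their matching preprocessing (resize, crop, normalize).
preprocess = weights.transforms()

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Map the top prediction back to its human-readable ImageNet category.
top_prob, top_idx = probs.max(dim=0)
print(f"{weights.meta['categories'][top_idx.item()]}: {top_prob:.1%}")
```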

I uploaded a photo of myself, and the label I received was “beard.” Accurate.

Check It Out: ImageNet Roulette Shows How ML Classifies You

One thought on “ImageNet Roulette Shows How ML Classifies You”

  • Andrew:

    There is a more serious side to this that goes beyond the inbuilt bias that comes from training AI algorithms, notably those that characterise people, on non-representative data. Beyond characterisation that ranges from unflattering to outright insulting lies the more sinister potential to use these algorithms to selectively identify and harm or kill people with specific traits, a danger that underlies a recently signed open letter calling for a ban on the development of AI-powered autonomous weapons: https://futureoflife.org/open-letter-autonomous-weapons/

    All industries, but particularly the information and AI-related ones, face an imperative to recruit and maintain not simply a diverse workforce but a socially and culturally representative one, as a bulwark against intentional bias and malfeasance.
