Researchers propose a new two-dimensional approach to measuring skin color.
Sony researchers William Thong, Alice Xiang, and programmer Przemyslaw Joniak recently published a study highlighting how one-dimensionally computer vision currently treats skin color. Today, skin color is typically graded on a single scale from light to dark; the researchers suggest adding a second range, from red to yellow, for greater accuracy.
In the past, algorithms were tested for skin-color bias using the six-shade Fitzpatrick scale, which runs from lightest to darkest. The scale was originally developed by dermatologists to assess how skin responds to ultraviolet radiation. More recently, Google adopted the 10-shade scale developed by Ellis Monk.
As Wired notes, after facial recognition algorithms were found to be less accurate for people with darker skin, large companies such as Google and Meta began optimizing their software. Sony's research, however, shows that many AI developers still miss the nuances of skin color, especially for people with yellower skin hues.
At a conference in Paris, the Sony researchers presented their work, which relies on CIELAB, an international color standard used in photo editing and production. Analysis in CIELAB showed that skin in photographs differs not only in tone (how light or dark it is) but also in hue (where it falls between red and yellow).
The Sony team tested open-source artificial intelligence systems, including Twitter's image-cropping algorithm and two image-generation algorithms. In every case, the systems favored people with redder skin hues, which may be unfair to people from Asia, Latin America, and the Middle East.
The authors proposed a new method for describing human skin color with two coordinates: one on a scale from light to dark, and one on a scale from yellow to red.
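As an illustration of how such a two-coordinate description could be computed, the sketch below (not the authors' code) converts an image to CIELAB and summarizes the skin pixels by their mean perceptual lightness L* (light-dark) and mean hue angle in the a*-b* plane (red-yellow). It assumes a skin-segmentation mask is already available; the function name and the mask argument are illustrative placeholders.

```python
# Minimal sketch of a two-coordinate skin-color measure: lightness L* plus
# hue angle in CIELAB. Not the authors' implementation; names are illustrative.
import numpy as np
from skimage import color

def skin_color_coordinates(rgb_image: np.ndarray, skin_mask: np.ndarray):
    """Return (mean L*, mean hue angle in degrees) over masked skin pixels."""
    # Convert 8-bit RGB to CIELAB (rgb2lab expects floats in [0, 1]).
    lab = color.rgb2lab(rgb_image.astype(np.float64) / 255.0)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    skin_L = L[skin_mask]
    skin_a = a[skin_mask]
    skin_b = b[skin_mask]
    # Hue angle in the a*-b* plane: smaller angles lean red, larger lean yellow.
    hue_deg = np.degrees(np.arctan2(skin_b, skin_a))
    return float(skin_L.mean()), float(hue_deg.mean())
```

In this framing, two photographs can share the same lightness yet sit at different points on the red-yellow axis, which is exactly the dimension the study argues current one-dimensional audits ignore.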
When the Sony team applied their method, they found bias in both generative models and training data. In CelebAMask-HQ, a popular dataset used to train computer-vision programs, 82% of the images skewed toward red skin hues; in Nvidia's FFHQ dataset the figure was 66%. In addition, two AI models trained on FFHQ reproduced the bias, generating images dominated by red skin hues.
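Purely for illustration, a skew figure like the 82% or 66% above could in principle be estimated by running the two-coordinate measure over every image and counting how many lean red. The sketch below builds on skin_color_coordinates() from the previous example; the 55-degree cutoff is an arbitrary example value, not the threshold used in the Sony study.

```python
def red_hue_share(samples, hue_cutoff_deg=55.0):
    """Estimate the share of red-leaning images in a dataset.

    samples: iterable of (rgb_image, skin_mask) pairs.
    hue_cutoff_deg: illustrative boundary between red- and yellow-leaning hues.
    """
    red_leaning = 0
    total = 0
    for rgb, mask in samples:
        _, hue = skin_color_coordinates(rgb, mask)
        if hue < hue_cutoff_deg:  # smaller hue angle = redder skin
            red_leaning += 1
        total += 1
    return red_leaning / total
```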
Face-recognition models such as ArcFace, FaceNet, and Dlib performed better on images of people with redder skin when asked to determine whether two portraits showed the same person. Smile-detection cloud services from Microsoft Azure and Amazon Web Services also worked better on redder hues.
Not all experts accept the study's conclusions, however. Harvard sociologist Ellis Monk told Wired that his Monk Skin Tone scale is not one-dimensional. Monk also criticized the Sony researchers' approach as fully automated and lacking human input, and he worries that objective measures like theirs could end up oversimplifying or ignoring other complex aspects of human diversity.
