The Poet of Code Fighting Bias in Algorithms

Every interactive tech solution or product is built on an algorithm, and more often than not these algorithms are not built with ‘darker shades’ in mind. In response, this undeniably brave woman has been championing the cause of more inclusive artificial intelligence and more representative data sets.

Joy Buolamwini, who founded the Algorithmic Justice League to fight bias in machine learning, also researches social impact technology at the MIT Media Lab. Joy is a Rhodes Scholar, a Fulbright Fellow, an Astronaut Scholar, a Google Anita Borg Scholar, and a speaker who has given talks at TEDx, the White House, and the Vatican.

As an entrepreneur, she co-founded Techturized Inc., a hair-care technology company, and advises Bloomer Tech, a smart clothing startup transforming women’s health.

Joy has been in the press a lot lately, advocating for full spectrum inclusion in AI.

The rise of artificial intelligence necessitates careful attention to inadvertent bias that can perpetuate discriminatory practices and exclusionary experiences for people of all shades.

“Since sharing my TED Talk demonstrating facial detection failure, I am asked variations of this hushed question:”

Isn’t the reason your face was not detected due to a lack of contrast given your dark complexion?

“This is an important question. In the field of computer vision, poor illumination is a major challenge. Ideally you want to create systems that are illumination invariant and can work well in many lighting conditions. This is where training data can come in. One way to deal with the challenges of illumination is by training a facial detection system on a set of diverse images with a variety of lighting conditions.”
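To make the idea concrete, here is a minimal, illustrative sketch of lighting-diversity augmentation: it re-exposes each training image through several gamma curves so a face-detection system sees the same faces under a range of simulated lighting conditions. The folder layout, gamma values, and OpenCV-based approach are assumptions for illustration, not a description of any system mentioned in the article.

```python
# Illustrative only: synthesise exposure variants of face images so a
# detector can be trained on a wider range of lighting conditions.
import cv2
import numpy as np
from pathlib import Path

def gamma_variants(image, gammas=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Return copies of `image` re-exposed through different gamma curves."""
    variants = []
    for g in gammas:
        # Lookup table mapping each 8-bit pixel value through the gamma curve.
        table = np.array([((i / 255.0) ** (1.0 / g)) * 255 for i in range(256)],
                         dtype=np.uint8)
        variants.append(cv2.LUT(image, table))
    return variants

def augment_folder(src_dir="faces/", dst_dir="faces_augmented/"):
    """Write darker and brighter variants of every JPEG in src_dir."""
    Path(dst_dir).mkdir(exist_ok=True)
    for path in Path(src_dir).glob("*.jpg"):
        image = cv2.imread(str(path))
        if image is None:  # skip unreadable files
            continue
        for idx, variant in enumerate(gamma_variants(image)):
            cv2.imwrite(f"{dst_dir}/{path.stem}_g{idx}.jpg", variant)

if __name__ == "__main__":
    augment_folder()
```

Synthetic re-exposure like this is only a complement to, not a substitute for, collecting genuinely diverse images of people across the full range of skin tones.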

“There are of course certain instances where we reach the limits of the visible light spectrum. (Infrared detection systems also exist). My focus here is not on the extreme case as much as the everyday case. The demo in the TED talk shows a real-world office environment. My face is visible to a human eye as is the face of my demonstration partner, but the human eye and the visual cortex that processes its input are far more advanced than a humble web camera. Still, even using the web camera, you can see in the demo that my partner’s face is not so overexposed as to be inscrutable nor is my face so underexposed that there is significant information loss.”
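For a rough sense of what the webcam demo involves, the sketch below runs OpenCV’s bundled Haar-cascade face detector on a single webcam frame; an off-the-shelf detector like this can report zero faces even when a face is clearly visible to a human viewer. The camera index and detector parameters are illustrative assumptions, and this is not the specific system shown in the TED talk.

```python
import cv2

# OpenCV ships a stock frontal-face Haar cascade; load it as the detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

camera = cv2.VideoCapture(0)   # 0 = default webcam (assumed)
ok, frame = camera.read()
camera.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Zero here does not mean no face is present -- only that this detector,
    # with its training data and default settings, failed to find one.
    print(f"Faces detected: {len(faces)}")
```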

“Though not as pertinent to my demo example, there are extreme failure cases where there is overexposure or underexposure and of course the case of no light at all in the image. Cameras, however, are not as neutral as they may seem.”

Are Cameras Objective?

Defaults are not neutral

“I was amused to learn that, in the earlier days of photography, complaints from chocolate and furniture companies who wanted their products better represented led to improved rendering of darker tones. In the digital era, Philips developed the LDK camera series, which explicitly handled skin tone variation with two chips: one for processing darker tones and another for processing lighter tones. The Oprah Winfrey Show used the LDK series for filming because there was an awareness of the need to better expose darker skin.”

With inclusion in mind, we can make better sensor technology as well as better training data and algorithms.

“We have to keep in mind that default settings are not neutral. They reflect the Coded Gaze, the preferences of those who have the opportunity to develop technology. Sometimes these preferences can be exclusionary.”

Exclusion Overhead

“More than a few observers have recommended that instead of pointing out failures, I should simply make sure I use additional lighting. Silence is not the answer. The suggestion to get more lights to increase illumination in an already lit room is a stopgap solution. Suggesting people with dark skin keep extra lights around to better illuminate themselves misses the point.”

Should we change ourselves to fit technology or make technology that fits us?

Joy Buolamwini on the TED stage.

Who has to take extra steps to make technology work? Who are the default settings optimized for?

“One of the goals of the Algorithmic Justice League is to highlight problems with artificial intelligence so we can start working on solutions. We provide actionable critique while working on research to make more inclusive artificial intelligence. By speaking up about my experiences, I have encouraged others to share their stories. The silence is broken. More people are aware that we can embed bias in machines. This is only the beginning as we start to collect more reports.”

“One bias-in-the-wild report we received at AJL shares the following story.”

“A friend of mine works for a large tech company and was having issues being recognized by the teleconference system that uses facial recognition.

While the company has some units that work on her dark skin, she has to make sure to reserve those rooms specifically if she will be presenting. This limits her ability to present and share information to times when these rooms are available.”

This employee is dealing with the exclusion overhead that can result when we do not think through or test technology to account for our differences.

Full Spectrum Inclusion Mindset

“Questioning how different skin tones are rendered by cameras can help us think through ways to improve computer vision. Let’s also keep in mind that there is more at stake than facial analysis technology. The work of AJL isn’t just about facial detection or demographic bias in computer vision. I use the mask example as a visible demonstration of how automated systems can unintentionally lead to exclusionary experiences.”

“I know we can do better as technologists once we see where problems arise and commit to addressing the issues. We can also be better about gaining feedback by asking for and listening to people’s real-world experiences with the technology we create. We can build better systems that account for the variety of humanity. Let’s not be afraid to illuminate our failures. We can acknowledge physical and structural challenges, interrogate our assumptions about defaults, and move forward to create more inclusive technology.”

Source: Hackernoon
