This post was first published on LinkedIn.
As artificial intelligence (AI) continues to play an undercover role in gatekeeping opportunities for basic liberties like employment, healthcare, housing and education through facial recognition technology, algorithms and machine learning, our personal autonomy and independence are called into question. I recently watched Coded Bias, a Sundance documentary film by Shalini Kantayya, as part of Avenue’s internal anti-racism curriculum, and I was startled to learn how pervasive digital and AI bias is today.
As a digital and social media marketing agency, the success of our work for partners is built entirely around the data available to us and the algorithms that acquire it. So, what happens to the performance of our services if that data goes away? And how can we reframe and restructure our services in an ethical way? These are questions we need to ask and answer in order to build awareness and create change. Let’s start the conversation!
When the iOS 14 update was one of the hottest topics in the digital and social media industry earlier this year, our team grappled with what the loss of data would mean for our clients and how we might retain and claw back as much historical insight as possible before the change happened. Now, trading that data loss for greater anonymity for society seems like an ethically intentioned step in the right direction. We can always find new and creative ways to measure, benchmark and optimize our marketing activities.
Coded Bias: The Film
Coded Bias is a study of how the technology world is full of embedded racism and privilege that goes almost undetected unless you know what to look for. In the film, MIT Media Lab researcher Joy Buolamwini discovers that many facial recognition technologies fail more often on darker-skinned faces and on the faces of women. She then begins an investigation into the extensive bias in AI. Her findings show that algorithms, data and computers are not impartial. And given the sheer size of the industry, predatory technology and bias operate at a widespread scale.
The film also discusses the ‘social credit score’ that China began using a few years ago, which, among other things, can lower bills if the score is high or ban travel if the score is low. It is alarming how much harm can be caused by AI that is deployed at massive scale without being vetted, and which is often virtually impossible to vet because of the black box that machine learning creates. While China’s ‘social credit score’ practices may seem far out to those of us living in the U.S., the film underscores that the same things are already happening here; we are just not transparent about it.
According to Kantayya, important decisions about who gets hired, who gets health care, who gets into college or how long a prison sentence someone serves are already being made by AI, algorithms and machine learning. To top it all off, marginalized and underserved communities are typically at the highest risk.
7 Questions to Ask in the Era of AI Bias
As the film points out, “AI doesn't free us from prejudices, it automates those prejudices.” And this is the moment to push for inclusive, transparent and democratic uses of technology and AI before things go too far. Here are 7 questions (adapted from the Coded Bias Take Action Toolkit) to ask as you start to think about AI bias in your own company or workplace.
1. When was the last time you were aware of an interaction with an algorithm?
2. What does the AI you interact with nudge you to do? What is it nudging your employees and customers to do?
3. What choices did the AI take away? What choices did it take away from your employees and customers?
4. How can AI be used in an equal and ethical manner?
5. What are you willing to do to protect your privacy and autonomy?
6. What is the role and responsibility of your company in protecting your employees’ and customers’ privacy and autonomy?
7. How can you change or restructure your business (or advocate to your employer) to adapt for AI bias and adopt an ethical framework for AI development or use?
What You Can Do
To seed change, start a conversation within your circles of influence (friends, family, colleagues) using the seven questions above. You can also visit the Coded Bias website to learn more and take action. Here’s to a more equitable and socially responsible digital future.