This post was first published on LinkedIn.
The digital world is full of embedded racism and privilege that go almost undetected unless you know what to look for. Last month, I shared 7 questions to ask in the era of AI bias, drawing on lessons from the film Coded Bias by Shalini Kantayya, which the Avenue team watched as part of our anti-racism work. According to Kantayya, important decisions about who gets hired, who gets health care, who gets into college or how long a prison sentence someone serves are already being made by AI, algorithms and machine learning. On top of that, marginalized and underserved communities are typically at the highest risk.
As a Certified B Corporation®, embedding our values into how we operate as a team and company is paramount. This often means making sacrifices or taking losses for the greater good of our people, clients, community and planet, and technology and AI are often one of those areas. The beauty of AI and algorithms is that they can automate routine and mundane tasks, predict future actions and serve up more customized experiences, which can make parts of our lives faster, easier and more enjoyable. The shadow side of AI is the hidden bias built into the way many algorithms sort and cull their data to make automated decisions for us. When I stop to think about how many decisions are made on my behalf by AI and algorithms, the number is surprisingly large.
Noticing and acknowledging that bias exists in technology is the first step to changing daily habits and current business practices.
3 Digitally Responsible Practices
So, what can we do? A good starting place might be to ask questions about how you and your company interact with technology, AI and algorithms on a daily basis, and then consider any identified areas as opportunities to make a change. In keeping with the times, here are three practices that can help your company think about how to make a shift to become more digitally responsible in the era of AI bias.
Notice where interactions with AI show up, and ask yourself and your company questions to build awareness and start a conversation. These could include:
When was the last time you were aware of an interaction with an algorithm?
What does the AI you interact with nudge you to do? What is it nudging your employees and customers to do?
What choices did the AI take away? What choices did it take away from your employees and customers?
How can AI be used in an equal and ethical manner?
What are you willing to do to protect your privacy and autonomy?
What is the role and responsibility of your company in protecting your employees' and customers' privacy and autonomy?
How can you change or restructure your business (or advocate to your employer) to adapt for AI bias and adopt an ethical framework for AI development or use?
Be transparent with your employees and customers about what data AI is collecting and what it is nudging them to do. This could mean talking openly about your company's CRM, email and data collection processes, social media usage or ad platforms that collect data and use AI and algorithms to serve content to specific audiences. Educate first, and then let employees and customers make their own informed decisions about how they want to alter or continue using platforms that rely on AI.
Audit your business tools and processes for inherent AI bias and restructure them with an ethical framework for AI use. These could include the digital tools your company uses to produce work, hiring and screening platforms and processes, or chatbots deployed to handle customer service. If a tool, platform or process raises concerns, have a discussion and consider making a change.
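One lightweight, hypothetical way to start such an audit is to compare a tool's outcomes across groups. The sketch below (in Python, with fabricated data) checks a screening tool's pass rates against the common "four-fifths" rule of thumb used in disparate-impact analysis; the function names and sample numbers are illustrative, not a real audit standard.

```python
# Hypothetical sketch: flag possible disparate impact in a screening
# tool's pass/fail decisions using the "four-fifths" rule of thumb.
# All data here is fabricated for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, passed) pairs -> pass rate per group."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in decisions:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def flags_disparate_impact(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {g: rate / highest < threshold for g, rate in rates.items()}

# Fabricated screening outcomes: group A passes 60%, group B only 30%.
sample = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 30 + [("B", False)] * 70)
print(flags_disparate_impact(sample))  # {'A': False, 'B': True}
```

A check like this is only a starting point for the conversation, not a verdict; a real audit would also look at the data the tool was trained on and the choices it takes away from people.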
What to Do Next
The first step to seeding change is to acknowledge and build awareness of the issue. Help us start the conversation! You can visit the Coded Bias website to learn more and take action. Here’s to a more equitable and socially responsible digital future.