President Biden is meeting with AI experts to examine the dangers of AI. Sam Altman and Elon Musk are publicly voicing their concerns. Consulting giant Accenture became the latest to bet on AI, announcing plans to invest $3 billion in the technology and double its AI-focused staff to 80,000. That’s on top of moves by other consulting firms, with tech giants Microsoft, Alphabet and Nvidia also joining the fray.
AI regulation will take time, and major companies aren’t waiting for the bias problem to disappear before they adopt the technology. That makes it all the more urgent to address bias, one of the biggest challenges facing every major generative AI model.
Because every AI model is constructed by humans and trained on data collected by humans, it’s impossible to eliminate bias entirely. Developers should strive, however, to minimize the amount of “real-world” bias they replicate in their models.
Real-world bias in AI
To understand real-world bias, imagine an AI model trained to determine who is eligible to receive a mortgage. Training that model based on the decisions of individual human loan officers — some of whom might implicitly and irrationally avoid granting loans to people of certain races, religions or genders — poses a massive risk of replicating their real-world biases in the output.
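To make the mechanism concrete, here is a minimal, hypothetical Python sketch, not drawn from the article or any real lender: it generates synthetic applicants, injects a group-based bias into simulated “historical” approval decisions, trains a simple classifier on those decisions, and then measures how the bias shows up in the model’s own approvals. All names, thresholds and data are illustrative assumptions.

```python
# Hypothetical sketch: how label bias in historical loan decisions
# can be replicated by a model trained on them. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 20_000

# Synthetic applicants: a credit score (the legitimate signal) and a
# protected attribute (0 or 1) that should NOT influence the decision.
credit_score = rng.normal(650, 50, size=n)
group = rng.integers(0, 2, size=n)

# Simulated "historical" loan-officer decisions: approval depends on the
# credit score, but group 1 applicants are unfairly approved less often.
base_prob = 1 / (1 + np.exp(-(credit_score - 650) / 25))
biased_prob = np.clip(base_prob - 0.15 * group, 0, 1)  # injected bias
approved = rng.random(n) < biased_prob

# A naive pipeline that leaves the protected attribute in the features
# and trains directly on the biased historical labels.
X = np.column_stack([credit_score, group])
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, approved, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

# Demographic-parity gap: difference in predicted approval rates by group.
rate_0 = preds[g_test == 0].mean()
rate_1 = preds[g_test == 1].mean()
print(f"Approval rate, group 0: {rate_0:.2%}")
print(f"Approval rate, group 1: {rate_1:.2%}")
print(f"Demographic-parity gap:  {rate_0 - rate_1:.2%}")
```

Run as written, the model reproduces the approval-rate gap baked into the training labels, which is the replication risk the mortgage example describes; auditing metrics like this gap is one way developers can detect it before deployment.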