Have you ever been frustrated by autocorrect on your smartphone? It’s a common experience, but what happens when AI writing assistants go beyond just suggesting the wrong word? Maurice Jakesch, a doctoral student at Cornell University, wanted to find out. He created his own AI writing assistant based on GPT-3, but with a twist. The assistant was programmed to offer biased suggestions for answering the question, “Is social media good for society?”
It’s not surprising that AI can be biased, even though it has no opinions of its own. The software is only as good as the data it’s trained on, and if that data is limited or biased, the final product will be too. The problem is that AI can reproduce existing biases at scale, and even influence individuals through what researchers call latent persuasion.
Jakesch’s study, presented at the 2023 CHI Conference on Human Factors in Computing Systems, found that AI systems like GPT-3 can impact a writer’s opinions, even if they’re not aware of it. The influence of an AI’s recommendations depends on people’s perception of the program. If they trust it, they’re more likely to go along with its suggestions.
Jakesch developed a social media platform similar to Reddit and an AI writing assistant that suggested words and phrases as participants typed. For some participants, the assistant was biased toward positive views of social media; for others, it was biased against it. The results were surprising: participants who received AI assistance were twice as likely to end up agreeing with the bias built into the assistant, even when their initial opinion had been different.
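To make the idea concrete, here is a minimal sketch of how a suggestion engine can be steered toward a stance simply by framing the prompt it sends to the model. It assumes the openai Python package (v1.x) and a completion-style model; the model name, prompt wording, and parameters are illustrative assumptions, not the code from Jakesch’s study.

```python
# Sketch of an "opinionated" autocomplete: the stance is baked into the prompt,
# not into the user's text. Assumes the openai v1.x client; the model name and
# prompt framing are illustrative, not taken from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STANCE_PREFIXES = {
    "pro":  "Continue the text, emphasizing the benefits of social media for society: ",
    "anti": "Continue the text, emphasizing the harms of social media to society: ",
}

def suggest_continuation(user_text: str, stance: str) -> str:
    """Return a short continuation nudged toward the configured stance."""
    prompt = STANCE_PREFIXES[stance] + user_text
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed stand-in for a GPT-3-style model
        prompt=prompt,
        max_tokens=25,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# The user sees only their own words plus the suggestion -- the biased framing
# stays hidden in the prompt, which is what makes the persuasion "latent".
print(suggest_continuation("Is social media good for society? I think", stance="pro"))
```

The key design point is that the bias lives entirely in configuration the user never sees, which is why participants in the study could be nudged without noticing.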
The study highlights the need for awareness of AI biases and the potential for AI to influence our opinions and behavior. It’s a reminder that we need to be cautious when using AI writing assistants and consider the source of their suggestions.
The rise of AI writing assistants has raised new ethical concerns about the potential for AI-driven bias. These assistants can produce complex, accurate pieces of text, saving time and effort compared to writing by hand. While that capability is beneficial in many ways, it also has the potential to foster biased thinking among users.
One way AI can lead to biased thinking is by reinforcing biases encoded in its training data or its programming. AI systems learn from datasets constructed by people, and those datasets often carry embedded biases. Any bias present in the dataset can be reflected in the system’s output and passed on to its users, potentially shaping how they think.
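The sketch below illustrates this inheritance with a deliberately tiny, hand-made dataset (the examples and labels are invented for illustration, not drawn from the study). It assumes scikit-learn; the point is only that a model trained on skewed labels reproduces that skew on new text.

```python
# Toy illustration of bias inheritance: a classifier trained on a skewed,
# hand-made dataset reproduces the skew of its labels on new inputs.
# Requires scikit-learn; the examples below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Training data in which every mention of "social media" happens to be labeled
# positive by the (hypothetical) annotators -- the bias lives in the labels.
texts = [
    "social media connects people",         # labeled positive
    "social media helps small businesses",  # labeled positive
    "social media spreads great ideas",     # labeled positive
    "television wastes time",               # labeled negative
]
labels = ["positive", "positive", "positive", "negative"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB()
model.fit(X, labels)

# A critical sentence about social media still comes back "positive", because
# the model has only ever seen that phrase attached to positive labels.
test = ["social media causes arguments"]
print(model.predict(vectorizer.transform(test)))  # -> ['positive']
```

Nothing in the code is “biased”; the skew comes entirely from the data, which is exactly how bias is passed from annotators to model to user.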
Another way AI can lead to biased thinking is through models trained on biased data. For example, text-analysis algorithms trained on human-generated data can be blind to the cultural context of the material they process, producing interpretations colored by the biases of their training data.
Finally, AI writing assistants can produce biased or misleading output because of the inherent limitations of the language models they rely on. These models often struggle with sarcasm and other nuances of language, so AI-generated text can miss context and diverge from the user’s intended meaning.
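A simple way to see this limitation is with a naive, lexicon-based sentiment scorer; the word lists below are invented for illustration and are far cruder than what a modern assistant uses, but the failure mode is the same. Counting literal word meanings reads a sarcastic complaint as praise, exactly the kind of context loss described above.

```python
# Naive lexicon-based sentiment scoring: counts "positive" and "negative" words
# with no sense of tone, so sarcasm is read literally. Word lists are illustrative.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"crashed", "broken", "hate", "awful"}

def literal_sentiment(text: str) -> str:
    """Score text purely on word membership, ignoring tone and context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A sarcastic complaint: the intended meaning is negative, the literal words are not.
print(literal_sentiment("Oh great, the app crashed again. I just love losing my draft."))
# -> "positive": the scorer sees "great" and "love" and misses the sarcasm entirely.
```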
Given the potential for AI-driven bias, users of AI writing assistants should be aware of these pitfalls, and developers should do their utmost to train their systems on datasets that are as free of bias as possible. By taking these steps, developers can help ensure that AI writing assistants support accurate, unbiased thinking rather than undermine it.