Generative AI is no laughing matter, as Sarah Silverman proved when she filed suit against OpenAI, creator of ChatGPT, and Meta for copyright infringement. She and novelists Christopher Golden and Richard Kadrey allege that the companies trained their large language models (LLMs) on the authors’ published works without consent, wading into new legal territory.
One week earlier, a class action lawsuit was filed against OpenAI. That case largely centers on the premise that generative AI models use unsuspecting people’s information in a manner that violates their guaranteed right to privacy. These filings come as nations all over the world question AI’s reach, its implications for consumers, and what kinds of regulations — and remedies — are necessary to keep its power in check.
Without a doubt, we are in a race against time to prevent future harm, yet we must also figure out how to address our current precarious state without destroying existing models or depleting their value. If we are serious about protecting consumers’ right to privacy, companies must take it upon themselves to develop and execute a new breed of ethical use policies specific to gen AI.
What’s the problem?
The issue of data — who has access to it, for what purpose, and whether consent was given for that use — is at the crux of the gen AI conundrum. Vast quantities of data are already part of existing models, informing them in ways that were previously inconceivable. And mountains of new information are added every day.