The simultaneous excitement and fear surrounding artificial intelligence (AI) are truly remarkable.

On the one hand, companies and investors are pouring billions into the technology, with interest accelerating since Microsoft-backed OpenAI publicly released the conversational chatbot ChatGPT in November, a launch many are calling a tipping point for AI. “Generative AI will change business models and how work gets done and, in the process, reinvent entire industries,” a recent PwC report declared.

On the other hand, controversy is billowing. In May, AI pioneer Geoffrey Hinton warned that AI could pose a “more urgent” threat than climate change. A month earlier, billionaire Elon Musk and hundreds of others issued an open letter calling for a six-month pause on advanced AI work, citing “profound risks to society and humanity.” And on May 16, OpenAI CEO Sam Altman told a Senate committee he favors the creation of a new government licensing body for large-scale AI models.

Whew! Let’s catch our breath for a moment. To be sure, AI systems are growing smarter at a staggering pace — able to understand not only text but also images, starting to rival humans at general tasks, and even, as some suggest, beginning to approach true human-level intelligence. As a society, we should care deeply about where AI is heading and, of course, make sure the technology is safe before it is deployed.