The Center for the Future of {Humanity/Life/…}, the Center for AI Safety, the Center for Existential Risks, and the Center for Catastrophic Risks are all funded by white male billionaires who seem to be bombarding us with their worries about saving humanity. It’s like a DDoS attack! But is all this attention on x-risk really necessary? Some experts argue that it takes the air out of more pressing issues and puts undue pressure on researchers focused on other, current risks. It also plays into concerns about regulatory capture, with some companies pushing for an AI licensing regime that could produce exactly that outcome.

The authors of the Statement on AI Risk acknowledge that one can be concerned about long-term, low-probability events while also being concerned about near-term, high-probability harms. But the “doomer” narrative drowns out the voices of those seeking to draw attention to real harms occurring to real people right now, particularly people in marginalized and underrepresented communities.

While it’s good for some people in the field to work on long-term risks, the number of people doing so is currently out of proportion to our ability to estimate those risks accurately. That imbalance minimizes conversations around present-day risk and diverts visibility and resources from the many researchers who work on safety today.

So why are industry leaders and prominent researchers raising the specter of AI as an existential risk? Some argue that the organizations warning of existential risk obtain their funding precisely by convincing donors that the danger is real and imminent. Yet while those warnings remain extremely vague, the research community has meanwhile delivered concrete advances across science, industry, and government.

Many other prominent AI researchers are speaking out against the “doomer” narrative, insisting that AI will be a key part of the solution to existential risks. So let’s focus on present-day risks and allocate resources accordingly. There is room for concern about existential AI risk, but not at the cost of drowning out the voices of those drawing attention to real harms occurring right now.

The topic of AI safety is gaining more attention and resources, but some experts worry that focusing solely on the existential-risk facet could undermine our ability to address current and ongoing harms. While some may be tempted to draft a counter-letter to the Statement on AI Risk, Hugging Face’s Jernite believes it’s more productive to keep working on the things that matter. “You can’t push back every five minutes,” he wisely notes.