The biggest threat to humanity is not AI; it's humans. From the start of recorded history, we have been the biggest danger to ourselves. Admittedly, AI is a very powerful tool that, in the hands of the wrong person or group, can wreak widespread havoc with minimal effort (propaganda, misinformation, hacking, etc.). But the danger comes from the people programming the AI, not the AI itself.

The current AI doomsday narrative is rooted deeply in Hollywood films and popular science fiction. It makes for good watching and reading, but not for good public policy. I completely agree with your analysis and conclusion that we need to work on building institutions that reduce the risks and harms of AI. We also need to find mechanisms to include the underrepresented in the datasets, to combat the skewed weights and biases in these systems. However, that is boring. It's far more interesting to fight Skynet!

The hour is very late. LLMs are already being embedded into agent architectures, becoming controllers rather than just chatbots. The time when we need to know how to make the AI itself an ethical being is already here.

While I think this letter is thoughtful and honest, I think there are quite clear gaps in its arguments, which understate the seriousness of AI existential risk. I've written my detailed thoughts here:

https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction-from-ai-is-an-urgent-priority-a-response-to-seth-lazar-jeremy-howard-and-arvind-narayanan/

Open to feedback and further discussion. I definitely think this is a complex topic that's worth a deep societal discussion, and this post is just my attempt to contribute to exactly that.

Concentrated power is dangerous. Dystopia and science fiction may be leading imaginations because their stories are compelling; I know I am compelled by them. However, I look at them as cautionary tales and as a space to think problems through. How can we motivate and inspire those who do not see dollar signs and automated workflows? Having something to fight and fear makes it easier to keep people engaged, for better or worse. I agree with Sunil Daluvoy that we need the underrepresented both in the datasets and engaging with those datasets.

"But it also means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to societal-scale risks of AI without receiving so much attention." Should scholars urge news outlets to implement a moratorium on the word godfather in all AI news stories going forward? Many AI experts are women; that's erased every time "Godfather" is used in a news title. It encourages this lofty outdated "great man" theory of humanity.

I suspect that this line, "What about risks posed by people who negligently, recklessly, or maliciously use AI systems," is intended to be part of the concern, based on what I've heard at least some of the signers say.

I'm a bit confused by your statement that "in calling for regulations to address the risks of future rogue AI systems, [AI industry leaders] have proposed interventions that would further cement their power." It links to an article titled "Sam Altman says a government agency should license AI companies — and punish them if they do wrong". Wouldn't regulation help to ensure that AI companies are subject to government oversight, and thus limit their power?

Well said. The rush to put restrictions on these systems seems very premature. A great deal of research would still be needed before any system could actually become uncontrollable by humans. Restrictions would stifle the progress we are making.

If including more marginalized voices will reduce the risk of a rogue superintelligence relative to trying to move ahead on regulation without them, then that sounds good. If not, though, we must consider the paramount importance of preserving our species as a whole, and of ensuring this world remains ours to shape and live on.

I’m more or less nodding along with this whole post. One proviso: realpolitik is going to apply its grubby hand to events. Those busy developing the near future of AI are the ones funding and coercing government and working the other levers of power. “It’s our ball and we’ll say how we want the playing area to look.” Governments too have the motive and opportunity to call down major crimes on the peoples of the world.

It’s true that regulation stifles innovation, but what we do with AI is a genuine battle between good and evil. The world’s voiceless, i.e. those whose opinions are suppressed, have no power in the arena, by definition. All we can do is make democratic overtures and call out those who want to abuse this technology.

We strongly suspect that ‘the singularity’ is coming, but good luck avoiding it. I agree 100% that we need concrete steps against these very present abuses. These people have previous.

Warning: the titans of AI will seriously get to play God here; they will create AI in their own image. They will play dirty games to ensure the playing arena works to enable their godlike status.

I'm all for a Butlerian Jihad.

As with any technology, the promise of good use needs to be weighed against the certainty of misuse. To your point, we [currently] have far more to fear from rogue people using AI than from rogue AI itself.

Great analysis; I agree with every word of it. Even if the fear of some of the signatories is genuine, the document still has completely the wrong focus and is devoid of any solution-oriented language. Failing to mention climate change as one of the major risks facing humanity (while daring to talk about extinction) is baffling.

This line from your piece sums it up for me: "After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?"

An excellent perspective 😀
