Experts compared the future of AI with pandemics and nuclear wars


Industry leaders have warned that artificial intelligence could pose an “existential threat” to humanity and should be treated as a global risk on a par with pandemics and nuclear war, experts say.

The 22-word open statement from the non-profit research organization Center for AI Safety has been signed by more than 350 AI leaders, scientists and engineers.

According to the organization, industry experts, journalists, politicians and the public are increasingly discussing a wide range of important and urgent risks posed by AI.

“It can sometimes be difficult to voice concerns about some of advanced AI’s most serious risks. The statement aims to overcome this obstacle and open up the discussion,” the organization said.

Signatories include the CEOs of three leading AI companies: Sam Altman (OpenAI), Demis Hassabis (Google DeepMind) and Dario Amodei (Anthropic).

They were joined by the “godfathers of AI” Geoffrey Hinton and Yoshua Bengio. The statement was also signed by Yann LeCun, Vice President and Chief AI Scientist at Meta.

Recall that in March, more than 1,000 artificial intelligence experts called for a six-month pause on training language models more powerful than GPT-4.

Other AI specialists later criticized the idea.
