36% of Scientists Surveyed Fear AI Could Lead to ‘Nuclear-Level Catastrophe’


Opinion pieces don’t necessarily reflect the position of our news site, but rather that of our Opinion writers.


By Common Dreams

While nearly three-quarters of researchers believe artificial intelligence (AI) “could soon lead to revolutionary social change,” 36% worry that AI decisions “could cause nuclear-level catastrophe.”

Those survey findings are included in the 2023 AI Index Report, an annual assessment of the fast-growing industry assembled by the Stanford Institute for Human-Centered Artificial Intelligence and published earlier this month.

“These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new,” says the report.

The report continues:

“However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.”

As Al Jazeera reported on April 14, the analysis “comes amid growing calls for regulation of AI following controversies ranging from a chatbot-linked suicide to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to invading Russian forces.”

Notably, the survey measured the opinions of 327 experts in natural language processing — a branch of computer science essential to the development of chatbots — in May and June 2022, months before the November 2022 release of OpenAI’s ChatGPT “took the tech world by storm,” the news outlet reported.

In March, Geoffrey Hinton, considered the “godfather of artificial intelligence,” told CBS News’ Brook Silva-Braga that the rapidly advancing technology’s potential impacts are comparable to “the Industrial Revolution, or electricity, or maybe the wheel.”

Asked about the chances of the technology “wiping out humanity,” Hinton warned that “it’s not inconceivable.”

That alarming potential doesn’t necessarily lie with currently existing AI tools such as ChatGPT, but rather with what is called “artificial general intelligence” (AGI), which would encompass computers developing and acting on their own ideas.

“Until quite recently, I thought it was going to be like 20 to 50 years before we have general-purpose AI,” Hinton told CBS News. “Now I think it may be 20 years or less.”

Pressed by Silva-Braga if it could happen sooner, Hinton conceded that he wouldn’t rule out the possibility of AGI arriving within five years, a significant change from a few years ago when he “would have said, ‘No way.’”

“We have to think hard about how to control that,” said Hinton. Asked if that’s possible, Hinton said, “We don’t know, we haven’t been there yet, but we can try.”

The AI pioneer is far from alone. According to the survey of computer scientists conducted last year, 57% said that “recent progress is moving us toward AGI,” and 58% agreed that “AGI is an important concern.”

In February, OpenAI CEO Sam Altman wrote in a company blog post:

“The risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world.”

More than 25,000 people have signed an open letter published on March 22 that calls for a six-month moratorium on training AI systems beyond the level of OpenAI’s latest chatbot, GPT-4, although Altman is not among them.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” says the letter.

The Financial Times reported on April 14 that Tesla and Twitter CEO Elon Musk, who signed the letter calling for a pause, is “developing plans to launch a new artificial intelligence start-up to compete with” OpenAI.

Regarding AGI, Hinton said:

“It’s very reasonable for people to be worrying about those issues now, even though it’s not going to happen in the next year or two. People should be thinking about those issues.”

While AGI may still be a few years away, fears are already mounting that existing AI tools — including chatbots spouting lies, face-swapping apps generating fake videos, and cloned voices committing fraud — are poised to turbocharge the spread of misinformation.

According to a 2022 IPSOS poll of the general public included in the new Stanford report, people in the U.S. are particularly wary of AI, with just 35% agreeing that “products and services using AI had more benefits than drawbacks,” compared with 78% of people in China, 76% in Saudi Arabia, and 71% in India.

Amid “growing regulatory interest” in an AI “accountability mechanism,” the Biden administration announced on April 12 that it is seeking public input on measures that could be implemented to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy.”

Axios reported on April 13 that Senate Majority Leader Chuck Schumer (D-N.Y.) is “taking early steps toward legislation to regulate artificial intelligence technology.”
