AI Eye: Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns

We don’t allow just anyone to build a plane and fly passengers around, or to design and release medicines — so why should AI models be released into the wild without proper testing and licensing? 

That’s been the argument from an increasing number of experts and politicians in recent weeks. 

With the United Kingdom holding a global summit on AI safety in autumn, and surveys suggesting around 60% of the public is in favor of regulations, it seems new guardrails are becoming more likely than not. 

One particular meme taking hold is the comparison of AI tech to an existential threat like nuclear weaponry, as in a recent 22-word warning from the Center for AI Safety, which was signed by hundreds of scientists:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Extending the metaphor, OpenAI CEO Sam Altman is pushing for the creation of a global body like the International Atomic Energy Agency to oversee the tech.

“We talk about the IAEA as a model where the world has said, ‘OK, very dangerous technology, let’s all put (in) some guard rails,’” he said in India this week. 

Libertarians argue that overstating the threat and calling for regulations is just a ploy by the leading AI companies to a) impose authoritarian control and b) strangle competition via regulation. 

Princeton computer science professor Arvind Narayanan warned, “We should be wary of Prometheans who want to both profit from bringing the people fire and be trusted as the firefighters.”

Netscape and a16z co-founder Marc Andreessen released a series of essays this week on his technological utopian vision for AI. He likened AI doomers to “an apocalyptic cult” and claimed AI is no more likely to wipe out humanity than a toaster because: “AI doesn’t want, it doesn’t have goals — it doesn’t want to kill you because it’s not alive.”

This may or may not be true — but then again, we only have a vague understanding of what goes on inside the black box of the AI’s “thought processes.” But as Andreessen himself admits, the planet is full of unhinged humans who can now ask an AI to engineer a bioweapon, launch a cyberattack or manipulate an election. So, it can be dangerous in the wrong hands even if we avoid the Skynet/Terminator scenario. 

The nuclear comparison is probably instructive, in that people in the 1940s did get carried away about the very real world-ending possibilities of nuclear technology. Some Manhattan Project team members were so worried the bomb might set off a chain reaction, ignite the atmosphere and incinerate all life on Earth that they pushed for the project to be abandoned. 

After the bomb was dropped, Albert Einstein became so convinced of the scale of the threat that he pushed for the immediate formation of a world government with sole control of the arsenal.

The world government didn’t happen but the international community took the threat seriously enough that humans have managed not to blow themselves up in the 80-odd years since. Countries signed agreements to only test nukes underground to limit radioactive fallout and set up inspection regimes, and now only nine countries have nuclear weapons. 

In The AI Dilemma, their podcast about the ramifications of AI for society, Tristan Harris and Aza Raskin argue for the safe deployment of thoroughly tested AI models.

“I think of this public deployment of AI as above-ground testing of AI. We don’t need to do that,” argued Harris.

“We can presume that systems that have capacities that the engineers don’t even know what those capacities will be, that they’re not necessarily safe until proven otherwise. We don’t just shove them into products like Snapchat, and we can put the onus on the makers of AI, rather than on the citizens, to prove why they think that it’s (not) dangerous.”

Also read: All rise for the robot judge — AI and blockchain could transform the courtroom

The genie is out of the bottle

Of course, regulating AI might be like banning Bitcoin: nice in theory, impossible in practice. Nuclear weapons are highly specialized technology understood by just a handful of scientists worldwide and require enriched uranium, which is incredibly difficult to acquire. Meanwhile, open-source AI is freely available, and you can even download a personal AI model and run it on your laptop.

AI expert Brian Roemmele says that he’s aware of 450 public open-source AI models and “more are made almost hourly. Private models are in the 100s of 1000s.”

Roemmele is even building a system to enable any old computer with a dial-up modem to connect to a locally hosted AI.

Working on making ChatGPT available via dialup modem.

It is very early days and I have some work to do.

Ultimately this will connect to a local version of GPT4All.

This means any old computer with dialup modems can connect to an LLM AI.

Up next a COBOL to LLM AI connection! pic.twitter.com/ownX525qmJ

— Brian Roemmele (@BrianRoemmele) June 8, 2023

The United Arab Emirates also just released its open-source large language model, Falcon 40B, royalty-free for commercial and research use. It claims the model “outperforms competitors like Meta’s LLaMA and Stability AI’s StableLM.”

There’s even a just-released open-source text-to-video generator called Potat 1, based on research from Runway. 

I am happy that people are using Potat 1️⃣ to create stunning videos