4/4/2023

AI and alarmism

More than a thousand personalities, from Elon Musk to Steve Wozniak to Yuval Harari and Tristan Harris, have published an open letter calling for a pause in the development of generative AI systems.

The prominent presence on the list of figures such as Harari, who has been spreading baffling alarmism in mainstream newspapers, and Musk, who needs no comment, is already a warning sign.

The fact that it was published by the Future of Life Institute, a think tank whose mission is to “reduce global catastrophe and existential risk from powerful technologies,” an assiduous promoter of “longtermism” and funded by the likes of Musk (who sits on its board), is another.

The letter takes at face value the promises of the industry, of the very companies that produce and sell this technology. The result is a paradoxical situation in which the call for restraint ends up validating the problem it raises.

Or, as University of Washington professor Emily Bender puts it, the letter accepts and amplifies #AIhype, the frenzy that accompanies technologies sold as capable of more than they actually are (see web3 and the metaverse for similar cases).

The signers’ main request is directed at AI labs: a voluntary pause of at least six months in “training AI systems more powerful than GPT-4.”

The request itself is odd. What would happen in those six months? It's unlikely that Microsoft, after injecting USD 10 billion into OpenAI, laying off its AI ethics experts (a move followed by Google and others), and cramming AI into all its major products, would have a crisis of conscience culminating in a change of course.

On the other hand, no regulatory consensus can be reached in such a short time. It took the European Union years to pass two laws that put reins on US big tech, both of which are only now about to go into effect.

The bigger issue, however, is the narrative that the letter and its signatories promote: that AI is "very powerful" and that the biggest risk we run is that it will acquire consciousness and turn against humanity.

The letter quotes an excerpt from a piece of nonsense written by Sam Altman on OpenAI's website in which he lays out fears about artificial general intelligence (AGI) that, obviously, only people like him and companies like OpenAI are in a position to deal with. "We agree," say the signatories.

There is no indication that we are close to creating AGIs, or even that it is possible.

Emily's article is great at rebutting the bogus arguments of these folks. She co-authored a landmark paper, written alongside Timnit Gebru and published in 2021, which warned of the dangers of large language models, the fundamental technology behind AIs like ChatGPT.

At the time, Timnit, who worked for Google, was fired for publishing the paper. Disappointing, but not surprising.

Two Princeton University researchers, Sayash Kapoor and Arvind Narayanan, share Emily’s skepticism. They summarize the problems of the letter in one paragraph:

We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people. […] It plays right into the hands of the companies it seeks to regulate.

Instead of this “super-powerful AI” nonsense, we should be concerned about the super-powerful companies and VCs who use any available technology — including AI — to concentrate and exert unbridled power. If there is any existential threat to humanity, this seems a far greater one than that of AIs.

Discuss @ Hacker News.
