Monday, May 8, 2023

The Risks of LLMs Are More Immediate Than AGI

Recently, a friend shared a CNN interview concerning the dangers that out-of-control Artificial Intelligence could pose:

https://www.facebook.com/camanpour/videos/573499221277846

The interview is easily accessible without a technical background and well worth watching, but you may find it a bit scary.

After watching it, I had too many responses for a Facebook comment so I thought I'd jot a few of them down here.

Foremost, I think it's crucial to distinguish between Artificial Intelligence (AI) and Artificial General Intelligence (AGI). AGI researchers aspire to develop a subset of AI techniques that would allow computers to learn and perform any cognitive task humans can do, as well as or better than a human. (This is the "god-like intelligence" referred to in the interview.)

The quest for AGI is considered a subfield of AI research, although the two terms have been thoroughly mixed and muddled in the media frenzy of the last six months. Sadly, some of this confusion also comes directly from the executives of big-tech companies, who seem to use the terms interchangeably while making grandiose claims about the level of intelligence their systems actually possess.

In the first major wave of AI, in the 1970s, it was a common joke that "AI" referred to all the smart stuff humans could do that we didn't yet know how to make computers do. Once you could program something smart, it became just another algorithmic technique. While this process-oriented approach to intelligence was the prevalent viewpoint among early AI researchers, many of them took their ideas and inspiration from studies of human cognition (e.g., linguistics, psychology, philosophy, and even economics and political science).

In the current wave of AI, most researchers seem focused on mathematical and statistical techniques, and few have any background in human cognition. Large Language Models (LLMs), such as ChatGPT, are built via a data-driven approach: massive statistical computation over massive amounts of human-created text (which includes human errors, biases, mistakes, lies, propaganda, and prejudices). As Dr. Leahy points out in the CNN interview, how LLMs derive their results is rather opaque to us humans. The current AI "joke" might be "look at all the smart stuff we can get computers to mimic, but we don't know what they're doing or why they can do it".
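To make "massive statistical computation over text" a bit more concrete, here is a toy sketch of the underlying data-driven idea: tally which words tend to follow which in some training text, then generate new text by sampling from those tallies. This is only an illustration of the flavor of the approach; real LLMs learn billions of neural-network parameters rather than literal word counts, and the tiny training text below is my own made-up example.

    import random
    from collections import defaultdict, Counter

    # Toy "language model": count which word follows which in the training
    # text, then generate new text by sampling from those counts.
    training_text = "the cat sat on the mat and the dog sat on the rug"

    counts = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            followers = counts.get(word)
            if not followers:
                break
            word = random.choices(list(followers), weights=list(followers.values()))[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g., "the dog sat on the mat" -- fluent-looking, but no understanding

The generated text inherits whatever is in the training data, errors and all, and nothing in the procedure involves knowing what a cat or a mat actually is.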

The CNN interview is scary because it expounds on the dangers of AGI, as posed by the development of an intelligent agent whose capabilities exceed those of humans but which is not limited by human morals, ethics, or physical existence. From my background in computer and cognitive science, I believe these dangers are still years away and, in any case, are not likely to be realized via the current massive statistical processing approach alone.

Nevertheless, I believe and fear that current LLMs do pose very real dangers to society, politics, and human well-being. I agree strongly with Gary Marcus, who has penned a great article contrasting AGI risk with current AI risk:

"...although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems."

I think AI apologists do a great disservice to us all when they conflate fears of AGI with concerns about the consequences of malicious use of LLMs and then dismiss all of those concerns as silly. Many of these concerns are real and immediate, and something must be done, or our society is in for a lot of pain.

I apologize to my friend who sent me the video, as I probably haven't relieved her fears, but I hope I was able to clarify what I feel are the more imminent and important issues.

Is Writing Computationally Easy?

Recently, a colleague asked whether we agreed or disagreed with this quote from an article by Stephen Wolfram entitled What Is ChatGPT Doing ... and Why Does It Work?:

"And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought."

I would have to disagree, because this pronouncement is vague and its terms and premises are ill-defined.

For one example, what counts as "computationally easier"?

"Lambda labs estimated a hypothetical cost of around $4.6 million US dollars and 355 years to train GPT-3 on a single GPU in 2020, with lower actual training time by using more GPUs in parallel."

This extensive training was done by reading almost 500 BILLION words. Doesn't sound easy to me.
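To put that number in perspective, here is a back-of-envelope calculation; the reading-speed figures are my own rough assumptions, not part of the Lambda Labs estimate:

    # Rough comparison: how long would a human need to read ~500 billion words?
    corpus_words = 500e9        # approximate size of GPT-3's training corpus
    words_per_minute = 250      # a typical adult reading speed (assumed)
    minutes_per_day = 8 * 60    # reading eight hours a day, every day (assumed)

    days = corpus_words / (words_per_minute * minutes_per_day)
    print(f"{days / 365:,.0f} years of eight-hour reading days")  # roughly 11,000 years

Whatever "computationally easy" is supposed to mean, the resources involved are staggering.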

Also, the statement seems to rest on the premise that if humans can do it, then it must be "easy". I doubt anyone who studies human cognition (e.g., psychologists, linguists, cognitive scientists) would agree with this. It took me at least 14 years of varied cognitive training to learn to write an essay, and my mother was an English teacher.

Also, are LLMs really "writing essays"? Perhaps...if you define "writing an essay" narrowly, as regurgitating words and phrases that humans have written and stringing them together using correct syntax and grammar. But when humans write an essay, they are engaging in an act of linguistic communication with other humans. A good writer first thinks critically about a topic and then writes with a communicative goal, such as providing a unique perspective or convincing a reader of some particular point. LLMs are decidedly NOT doing that, and therein lies the danger.

Because what LLMs generate looks like what humans produce, it is extremely easy for people to be misled into believing that these systems must be engaged in intelligent activities such as thinking, imagining, judging, arguing, and believing. Cognitive psychologists call this "over-attribution", and it has been a problem for psychology for a very long time.
See this blog post by Gary Marcus about the (growing) over-attribution problem in AI.