Monday, May 8, 2023

The Risks of LLMs Are More Immediate Than AGI

Recently, a friend shared a CNN interview concerning the dangers that out-of-control Artificial Intelligence could pose:

https://www.facebook.com/camanpour/videos/573499221277846

The interview is accessible even without a technical background and well worth watching, but you may find it a bit scary.

After watching it, I had too many responses for a Facebook comment, so I thought I'd jot a few of them down here.

Foremost, I think it's crucial to distinguish between Artificial Intelligence (AI) and Artificial General Intelligence (AGI). AGI researchers aspire to develop a subset of AI techniques that would allow computers to learn and perform any cognitive task as well as, or better than, a human. (This is the "god-like intelligence" referred to in the interview.)

The quest for AGI is considered a subfield of AI research, although the two terms have been thoroughly mixed and muddled in the media frenzy of the last six months. Sadly, some of this confusion also comes directly from the executives of big-tech companies, who seem to use the terms interchangeably while making grandiose claims about how much intelligence their systems really possess.

In the first major wave of AI, in the 1970s, it was a common joke that "AI" referred to all the smart stuff that humans could do but that we didn't yet know how to make computers do. Once you could program something smart, it became just another algorithmic technique. While this process-oriented approach to intelligence was the prevalent viewpoint among early AI researchers, many of them took their ideas and inspiration from studies of human cognition (e.g., linguistics, psychology, philosophy, and even economics and political science).

In the current wave of AI, most researchers seem focused on mathematical and statistical techniques, and few have any background in human cognition. Large Language Models (LLMs), such as ChatGPT, are built via a data-driven approach: massive statistical computation over massive amounts of human-created text (which includes human errors, biases, mistakes, lies, propaganda, and prejudices). As Dr. Leahy points out in the CNN interview, how LLMs arrive at their results is rather opaque to us humans. The current AI "joke" might be: "Look at all the smart stuff we can get computers to mimic, but we don't know what they're doing or why they can do it."
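To make "data-driven" a bit more concrete, here is a deliberately tiny sketch of my own in Python. It is a toy illustration, not how any actual LLM is implemented: it simply counts which word tends to follow which word in a scrap of text and then "predicts" the next word from those counts. Real LLMs replace the counting with enormous neural networks trained over vastly more text, but the essential idea is the same, and so is the consequence that whatever is in the text, flaws included, is what gets learned.

    from collections import Counter, defaultdict

    # Toy illustration only: learn, from text alone, the statistics of
    # which word tends to follow which word (a simple bigram model).
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each preceding word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("sat"))  # 'on', the only word that ever follows 'sat' here
    print(predict_next("the"))  # whichever word most often followed 'the'

Scale that recipe up by many orders of magnitude, swap the counts for a neural network, and you have the gist of how these systems are trained, along with the biases and errors they inherit from their training text.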

The CNN interview is scary because it expounds on the dangers of AGI, as posed by the development of an intelligent agent whose capabilities exceed those of humans but which is not limited by human morals, ethics, or physical existence. From my background in computer and cognitive science, I believe these dangers are still years away and, in any case, are not likely to be realized via the current massive statistical processing approach alone.

Nevertheless, I believe and fear that current LLMs do pose very real dangers to society, politics, and human well-being. I agree strongly with Gary Marcus, who has penned a great article contrasting AGI risk with current AI risk:

"...although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems."

I think AI apologists do a great disservice to us all when they conflate fears of AGI with concerns about the consequences of malicious use of LLMs, and then dismiss all of those concerns as silly. Many of the concerns are real and immediate, and something must be done, or our society is in for a lot of pain.

I apologize to my friend who sent me the video, as I probably haven't relieved her fears, but I hope I was able to clarify what I feel are the more imminent and important issues.
