AI: Is It Really THAT Bad?

The rapid recent advancement of artificial intelligence has led people to develop overly optimistic assumptions about what it can do. And the media… isn't helping. Let's take a deeper look at where the tech actually is.

Introduction

So. What is AI? I love starting episodes like this: Webster's Dictionary defines artificial intelligence as:

  1. a branch of computer science dealing with the simulation of intelligent behavior in computers
  2. the capability of a machine to imitate intelligent human behavior

NOTE: Just for fun, here's the example sentence that came along with that definition: “A robot with artificial intelligence.” That's it. That's the example. Who wants to bet that that example was, in fact, generated by AI? That alone should tell you all you need to know about where AI actually is right now.

The rapid recent advancement of artificial intelligence has led people to develop overly optimistic assumptions about what it can do. And why not? According to researchers, AI's computational power is doubling every six to ten months. That seems fast! Let's all get terrified real quick. And the media… isn't helping. I just did a short 200-hour, sleep-free, espresso-filled deep dive to see what the kids are saying about AI, and what I'm seeing are just... SO many headlines that sound like this:

Now, that last one might actually be interesting, as we'll see later on. I'm not linking to the rest of them because they're all head-scratchingly pointless.

The promise of these AI tools is that they will make it easier than ever before to search for answers or get help with tasks without having to wait on someone else. The reality, however, is that most, if not all, AI tools are still limited in their capabilities when compared to humans. They can only provide basic responses based on the data they have been given, or do super-fast pattern matching. That's why I didn't include other AI-based headlines like

  • AI helps decipher 2,000-year-old scroll on life after Alexander the Great

That's machine learning and pattern matching. It ain't the same thing. What people think AI is, is basically Jarvis from Iron Man. And it ain't the same thing either. I just want to highlight two problems with AI that will keep it from being what the media desperately wants everyone to think it is.

Problem 1: Issues with Fact-Checking

And by saying that there are "issues" with fact-checking, what I mean is "fact-checking is nonexistent with AI."

Getting things right. You'd think that a computer could do that. In fact it's a bedrock idea that a lot of us have about computers. We just assume that computers can and will be perfect all the time forever. And, well, as the recent spate of reports about chatbots from Microsoft, Google, and OpenAI have shown, they're just not.

One of the major issues with artificial intelligence is its inability to fact-check itself. It relies on the data that has been fed into it; if the source of that data is incorrect, then any answer the AI gives will be wrong too. Thousands of people online have demonstrated this by now. In many cases, the source of the answers an AI gives to people asking it questions seems to have been ... reddit. Not ideal. And even when an AI has the right source, it will confidently cite that source while simultaneously completely misusing it.

Here's an example: I asked ChatGPT (my current favorite AI punching bag) about Azure VM instance sizes. I wanted one that was “4vCPU and 32GB vRAM.” It came back with D4s v3. This sizing is, of course, wrong: a D4s v3 is 4vCPU and 16GB vRAM. When I asked ChatGPT directly, “What is the size of an Azure D4s v3 instance?”, it responded correctly. In both answers I asked for a citation- and both citations were the same Azure pricing page! This does not inspire confidence.
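If you don't trust the chatbot (and you shouldn't), the fix is to ask the source of truth directly. Here's a minimal sketch, assuming the Azure CLI is installed and you're logged in- the region is just an example, and the JSON key casing is handled defensively because CLI versions differ on it:

    import json
    import subprocess

    # Ask Azure itself (not a chatbot) which VM sizes in a region
    # actually have 4 vCPUs and 32 GB of RAM.
    raw = subprocess.check_output(
        ["az", "vm", "list-sizes", "--location", "eastus", "--output", "json"]
    )
    for size in json.loads(raw):
        # Normalize key casing so this works across CLI versions.
        f = {k.lower(): v for k, v in size.items()}
        if f["numberofcores"] == 4 and f["memoryinmb"] == 32 * 1024:
            print(f["name"])  # prints the actual matching sizes- no D4s v3 here

One CLI call and ten seconds beats any number of confidently cited pricing pages.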

Another, more serious example: Men's Journal started using AI to create articles. Even though the publication claimed the articles were being edited and fact-checked by humans, outside researchers found that some of the articles contained “serious errors and plagiarism,” forcing an embarrassing retraction. This wasn't some BuzzFeed listicle-type nonsense article either; this was about a real medical issue that real people might be inclined to take advice from- and act upon. Great work, everyone.

Problem 2: Inability to Understand Context or Nuance

Miscommunications happen, even between humans. But syntax, context, localization, etc., are simply not always accounted for by AI. The fact is, even though AI simulates a human speaking, it doesn't really understand language. A particular problem with AI systems is their inability to understand context or nuance. As highlighted above, context and nuance matter EVERYWHERE- in parsing the question asked, in parsing the way the source answered that question, and in how the AI ends up returning a result. Missing them leads to misunderstandings or misinterpretations of what is being asked, and that results in inaccurate responses.

And something about this needs to be highlighted- the AI is slowly forming its OWN context and nuances that WE are gonna have to get used to. And like people, they're gonna differ from system to system. Just because someone from Philadelphia and someone from Glasgow both allegedly speak English doesn't mean they understand each other- not without at least a little bit of code-switching and working doubly hard to make sense of completely foreign idioms and ways of speaking. But we can do it. Humans (usually) have no problem recognizing cultural differences when we're confronted by them- we need to bring that same understanding to talking to AI.

AI Is NOT AGI

All of this is to say: it's not just that current AI doesn't do what all of these hyperbolic fluff pieces say it can do- it's that it can't do those things. All it can do is mimic them, based on old information fed to it via a training process.
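To make "mimicry" concrete, here's a toy sketch- a word-level Markov chain that parrots whatever text it was trained on. Real language models are enormously more sophisticated, but the underlying move is the same shape: continue familiar patterns from the training data, with zero understanding of what any of it means. The tiny corpus here is made up for illustration:

    import random
    from collections import defaultdict

    # Toy mimicry: learn which word tends to follow which, then generate
    # text by continuing those patterns. No meaning, just statistics.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)        # record every observed continuation

    word = "the"
    output = [word]
    for _ in range(8):
        word = random.choice(follows.get(word, corpus))  # pick a familiar next word
        output.append(word)
    print(" ".join(output))              # e.g. "the dog sat on the mat and the cat"

It sounds vaguely like its training data. It has no idea what a cat, a mat, or a rug is.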

Now, the end goal of a lot of researchers is to get AI to where people think it already is today. Getting to that state means achieving something different, though- Artificial General Intelligence (AGI). The main difference between now and then is that AGI will be capable of understanding its environment, learning new skills, and making decisions independently. Neural networks that master games like checkers and Othello purely by playing them are very primitive examples of this. But even they aren't capable of, say, going on to invent a new game. Yet.
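For a feel of what "mastering a game purely by playing it" looks like at its most primitive, here's a minimal self-play sketch- tabular value learning on tic-tac-toe. This is a toy I picked for illustration, not how AlphaZero or any production system works, but the loop is the same shape: play a game, see who won, nudge the value estimates, repeat:

    import random
    from collections import defaultdict

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    Q = defaultdict(float)       # (board state, move) -> learned value
    EPSILON, ALPHA = 0.1, 0.5    # exploration rate, learning rate

    def play_one_game():
        board, player, history = " " * 9, "X", []
        while True:
            moves = [i for i, c in enumerate(board) if c == " "]
            if random.random() < EPSILON:                      # explore
                move = random.choice(moves)
            else:                                              # exploit what we know
                move = max(moves, key=lambda m: Q[(board, m)])
            history.append((board, move, player))
            board = board[:move] + player + board[move + 1:]
            win = winner(board)
            if win or " " not in board:
                # Crude Monte Carlo update: credit every move with the outcome.
                for state, m, p in history:
                    reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                    Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
                return
            player = "O" if player == "X" else "X"

    for _ in range(50_000):
        play_one_game()
    print(f"learned values for {len(Q)} state-move pairs")

Nobody told it a single strategy. It just played itself 50,000 times. That's the seed of the idea- and also a reminder of how far a seed is from a tree.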

Creating true AGI will require systems that can learn over time, that can handle complex tasks requiring reasoning and decision making, and that are safeguarded so they don't become racist, or Skynet. Or, most nefariously, a racist Skynet.

NOTE: Interestingly, OpenAI posted an article about exactly these issues right as I was posting. It's worth a read if you want to see what the vendor of ChatGPT has to say about the future state of AGI.

And these tasks are HARD. One of the reasons they're hard is that we literally still don't understand human self-awareness, either philosophically or neurologically. So a lot of AGI research is just educated guessing at what might work.

There have been some real advances in recent years toward truly, independently intelligent machines, though. Researchers at DeepMind (owned by advertising company Google) created an AI system called AlphaZero, which taught itself to play chess at superhuman levels of skill without being given anything beyond the rules of the game. However, it still cannot understand context or make open-ended decisions on its own the way a human can.

Yet.