Rehumanize yourself - the illusion of AI


The more I work with AI, the more sceptical I become. It is very helpful, but also very dangerous if you don't constantly remind yourself what you are dealing with.

I work all day at the factory
I'm building a machine that's not for me
There must be a reason that I can't see
You've got to humanize yourself
Rehumanize yourself

"Rehumanize yourself", The Police

Introduction

There is a rush, a rush to embrace artificial intelligence.

Whenever we rush to jump on a bandwagon like this, we risk being tricked a little.

As a magician, I know quite a lot about what makes an illusion compelling. LLMs like ChatGPT appear to understand us, reason with us, and partner with us intellectually. But beneath this convincing behaviour lies a fundamental truth that's very difficult to accept:

AI is not intelligent in the way we are. It's not reasoning or thinking, even though those are exactly the words that appear on screen after we send off our prompts.

Instead, we are experiencing a sophisticated simulation that tricks us into attributing human-like cognition where none exists.

The rubber duck dilemma

Programmers have long used a technique called "rubber ducking" (see also my blog post about this here) - explaining a problem to an inanimate object (traditionally, a rubber duck) in order to solve it. This works not because the duck is smart, but because articulating the problem forces us to structure our thoughts - which often leads us to find the solution ourselves.

AI serves a similar function, but in a somewhat more interactive way.

When we "talk" to AI, we clarify our thoughts and often reach insights through the process of explanation itself.

The difference is that unlike the silent duck, AI responds - creating the illusion of a thinking partner.

The helpful illusion

This isn't to say AI lacks purpose and utility.

When I describe a programming challenge to an AI, it might suggest elegant code solutions or approaches I hadn't considered. When brainstorming article ideas, it might offer perspectives that spark creativity.

These contributions can be genuinely valuable.

AI has "seen" millions of similar problems and solutions in its training data (wherever this may come from).

It can pattern-match and remix this information in ways that seem insightful and new to us.

This is incredibly useful - but it's not intelligence or reasoning.
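
To make "pattern matching without reasoning" a little more concrete, here is a minimal sketch - a toy bigram model, not how a real LLM works. It generates plausible-looking text purely by replaying which word followed which in a tiny training corpus; the corpus and names here are my own made-up illustration:

```python
import random
from collections import defaultdict

# A tiny "training corpus" - purely illustrative.
corpus = (
    "the duck is not smart but explaining the problem to the duck "
    "forces us to structure the problem and structure our thoughts"
).split()

# "Training": for each word, record every word observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Produce text by repeatedly sampling an observed follower word."""
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # the model has never seen this word lead anywhere
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # e.g. "the problem to the duck forces us to structure our"
```

The output can read surprisingly sensibly, yet the program understands nothing - it only knows which word tends to follow which. A real LLM is vastly more sophisticated, but it works by statistical continuation of the same general kind.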

The danger of deception

Unlike my daughter, who reveals her lies with a telltale facial expression, AI has no "tell" when it's fabricating information. It keeps a perfect poker face, delivering fabrications with the same confidence as facts.

It doesn't know what it doesn't know.

When we engage with AI, we invest time and mental energy explaining our needs. If what we get in return is plausible-sounding misinformation, we've not only wasted that investment but may build on a false foundation and follow it down a rabbit hole of fabrications. Or, sometimes even worse, we get caught in an endless back-and-forth, trying to correct the AI and steer it in the direction we want it to go.

Conclusion

Understanding AI's true nature does not diminish its value. A more sophisticated rubber duck that can respond with relevant information can still be really useful for our thinking and problem-solving.

The key is maintaining awareness of what we're actually interacting with: not a thinking being, but a statistical pattern matcher trained on human-generated or even self-generated content.

When we approach AI with appropriate scepticism rather than unconditional trust, we can benefit from it.
