Why I don't use LLMs (Large Language Models)

2025-03-02 13:48 by Ian

Tools that engender dependence are very dangerous tools.

While I was tutoring chemistry in college, I once had an argument about Imperial vs SI units with a German exchange student. His assertion was that most Americans weren't capable of converting between the two unit systems and should fully convert to SI. My position was that nearly all of us (Americans) learned the arithmetic necessary to do any and all unit conversions sometime before the 6th grade. But I couldn't convince him.
So the next time I went to the DMV to renew my driver's license, I tried an experiment. When asked for my height and weight, I would refuse to answer in Imperial units (feet/pounds). Just to see what would happen.

DMV: "Your height and weight?"
Me: "193cm / 90kg."
"How many feet/pounds is that?"
"I don't know... It's 2.54cm to the inch."
"You know that, but don't know how many feet?"
"It's 12-inches per foot, I think? I can't do the division in my head." I fib, while trying to feign a European's unfamiliarity with our goofball unit system.
"No one has ever asked you how tall you are?"
(Shrugging) "Sure. I'm 193cm."

Exasperated at trying to make me give him the answer, he quickly becomes more exasperated with the problem of using the two numbers I gave him to arrive at a third, new number. In less time than he spent trying to get me to answer in feet, he gives up pushing buttons on his desktop calculator and goes to get help from someone else.
Ok... This is not what I expected...

The second employee arrives and instead of doing a division problem, asks me:
"You really don't know your height in feet?"
"Not off-hand, no... If it's going on my license, I want it to be right."

Frowns were exchanged, and the second employee goes to find a manager.
No. way.
When the manager appears, he asks me for my height.
"193cm."
"Ok, give me a minute..."

I then watch him ignore the desktop calculator, and instead perform a web search for "193cm to feet".
In the five minutes it took three people to use a trillion-dollar, globe-spanning information technology apparatus to tell them the answer to a division problem they should have solved with the $5 calculator at arm's reach, I had checked the result twice in my head. And then re-checked my weight.
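The arithmetic everyone was avoiding amounts to a couple of divisions. A minimal sketch in Python (the function names are mine; the conversion factors are the standard exact definitions):

```python
CM_PER_INCH = 2.54         # exact, by definition
KG_PER_POUND = 0.45359237  # exact, by definition

def cm_to_feet_inches(cm):
    """Convert centimeters to (feet, inches), rounding to the nearest inch."""
    total_inches = cm / CM_PER_INCH
    feet, inches = divmod(round(total_inches), 12)
    return feet, inches

def kg_to_pounds(kg):
    """Convert kilograms to pounds."""
    return kg / KG_PER_POUND

print(cm_to_feet_inches(193))   # -> (6, 4): 193 cm is about 6'4"
print(round(kg_to_pounds(90)))  # -> 198: 90 kg is about 198 lb
```

The same two divisions fit comfortably on a desktop calculator, which was the point.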

"Ok, how much do you weigh?"
Oh, he's ready for that now.
"90kg. I think it's 454 grams for each pound."

He ignores the conversion factor, and pecks out the search query on his keyboard: "90kg to pounds".

I left the DMV minutes later feeling as if I had just learned more than I wanted to. Not only did I lose the argument I sought to settle (you were right, Sven), but I felt as if I had intruded on someone in the middle of dressing. Three out of three people I had just tested were so dependent on their tools that they could no longer do a simple unit conversion.

AI isn't going to enslave us, or poison us, or any other such act implying intention. It will destroy (some of) us the same way painkillers do: By an addiction/dependency feedback loop.
As best as I can tell, the act of thinking is actually experienced as pain for many people. It isn't simply that they dislike thinking. They do, after all, spend a tremendous amount of mental energy trying to avoid it.

They will take every possible escape from a situation that requires them to think through something as simple as an arithmetic problem. They'll defer, or use a thought-terminating cliché, or simply tolerate cognitive dissonance in their habitually compartmentalized knowledge. Once AI can "think" on their behalf, I fully expect a critical mass of people will always prefer using it to thinking for themselves, even if its conclusions are wrong.
Perhaps even because its conclusions are wrong.
If Google had told the DMV manager that 193cm was the same as 5'10", I doubt I would have been able to convince them otherwise. They weren't thinking to begin with, and my objection to an incorrect result wouldn't have spurred them to start. Especially if I couldn't give them an answer to supplant it.

It's one thing to have a dependency on a tool that does some narrow mechanical task (sewing, say). But it's a whole different problem to depend on a tool for something as basic as reading and writing. As I found out at the DMV, critical thinking has been absent from much of the population for most of my adult life. But that was 15 years ago, and there are about to be large numbers of people who have no idea how to think at even a basic level. Just how basic will depend on the person, and on which tools some company or government has decided to get them hooked on.

I write for a living (code, not English so much). I enjoy writing. I'm even good enough at it to take pride in my work. Writing is how I know if I make any sense at all. Writing forces me to think. And if I don't do it, I have abdicated a necessary step in my reasoning process.
What if I didn't write what you just read?
Would it really have been worth reading?

Others will disagree. They will argue in terms of degrees and split hairs. But for what it's worth, I don't want a computer doing my thinking for me. And I won't ever claim a robot's "thoughts" are my own. I won't even let a robot clean up my scattered or sloppy thoughts. If I can't communicate it clearly, then it isn't clear to me, and I don't want to delude myself.
