I recently read an article titled “The Timmy Trap”, about how humans anthropomorphize LLMs and why doing so is often counterproductive.
I found the article compelling overall, so I was surprised to see that the Hacker News comments were mostly negative.
The comments mostly focused on one aspect of the article: the author’s claim that LLMs are not intelligent. The main critique was that the author never defined intelligence.
I had thought the definition of intelligence as “the ability to acquire, understand, and use knowledge” was well accepted and didn’t need to be stated explicitly.
LLMs clearly don’t meet this standard: they cannot truly understand logic, because that’s simply not how they work. They just probabilistically spit out a result.
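To make the “probabilistically spit out a result” point concrete, here’s a toy sketch of next-token sampling, the basic loop a decoder runs. The vocabulary and probabilities are made up for illustration; a real model computes a distribution over tens of thousands of tokens from learned weights at every step. The point is that the output is sampled from a distribution, not looked up or reasoned out:

```python
import random

def next_token(context: str) -> str:
    # Pretend the model, given this context, assigns these probabilities.
    # (Hand-picked numbers for illustration only.)
    distribution = {
        "Paris": 0.80,   # most likely continuation
        "Lyon": 0.15,    # plausible but wrong
        "banana": 0.05,  # unlikely, yet never impossible
    }
    tokens = list(distribution)
    weights = list(distribution.values())
    # The answer is drawn at random according to the weights, so the
    # same prompt can yield different outputs on different runs.
    return random.choices(tokens, weights=weights, k=1)[0]

print(next_token("The capital of France is"))
```

Run it a few times: it usually says “Paris”, occasionally “Lyon”, and once in a while “banana”. Nothing in the loop knows or understands anything about France.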
While humans sometimes respond based on probability rather than true understanding, they only do so when they don’t truly understand something. We call it an educated guess, and it’s quite different from truly knowing or understanding something.
After thinking more about the negative Hacker News comments, I realized something: most, if not all, of them likely came from people who were not thinking intelligently at the time (which is true of most online comments)! They were regurgitating arguments they’d heard other people make.
So in that way, they were acting like LLMs themselves! No wonder they see LLMs as intelligent.