To claim that "LLMs are not really intelligent" just because you know how they work internally is a fallacy.

ai, philosophy

Understanding how large language models work internally, even to the deepest degree, doesn't take away from their intelligence.

Being able to explain how they choose the next word doesn't make the process any less remarkable, or any less powerful, than what goes on in the human brain. (Although they obviously operate differently from the human brain, with different strengths and weaknesses.)

Thought experiment: if we someday fully understood how the human brain works, would that make our intelligence any less real?

Sometimes, the more we understand, the more awe we feel.