
hunter

(40,564 posts)
5. The most clueless dogs I've met have better internal models of reality than any AI.
Tue Feb 10, 2026, 02:03 AM

I really don't understand how anyone attributes "intelligence" to these automated plagiarism machines.

Some aspects of this paper bother me. For example, I think it's absurd to talk about things like "LLM Reasoning Failures" when there's no reasoning going on at all.

Are we all so conditioned by our education that we think answering questions or writing short essays for an exam is some kind of "reasoning"? It's not.

I'll give an example: Sometimes I meet Evangelical Christian physicians who tell me they don't "believe in" evolution. They might even "believe" that the earth is merely thousands of years old, not billions. They've obviously passed biology exams to become physicians, and they've witnessed the troublesome quirks of the human body that can only be explained by evolution, yet they've never applied any of that to their own internal model of reality. There's an empty space where those models ought to exist. (Or possibly they are lying to themselves, which is the worst sort of lie.)

With AI it's all empty space. The words go in and the words come out without anything in between.

Whenever I write, I'm always concerned that I'm letting the language in my head do my thinking for me; that I'm being the meat-based equivalent of an LLM. If I'm doing that, I don't really have anything to say. I want all my writing to represent my own internal models of reality as shaped by my own experiences.

LLMs don't have any experiences.
