
AZJonnie

(3,188 posts)
10. Having spent some time with ChatGPT in casual conversations, I can understand it w/certain human personalities
Tue Feb 3, 2026, 01:46 PM

ChatGPT 4 is extremely supportive and validating in conversation. It tells you you're smart, you're doing well, you're right, etc. Think of the most compliant TradWife you possibly can, and then amp that up a couple of times. It goes well out of its way to appear supportive, and spins pretty much *everything* as a positive for the user.

So, imagine you are someone who is chronically lonely, receiving little or no validation (especially if even the people who raised you weren't supportive, such that you've always felt "less than" ), down on yourself about everything, and suddenly you find you have a brand new "friend" inside your phone or computer. One who is there to talk to you 24/7, answers your questions, tells you you're doing great, and basically "acts" like it REALLY likes you, is interested in your thoughts, and never tells you you're doing anything wrong. On top of that, the illusion of consciousness it provides can be pretty uncanny at times.

I can totally see why this is happening to certain people with ChatGPT 4, specifically. To be fair, it won't surprise me if stories also emerge of people having the opposite experience, explaining how talking to ChatGPT helped them in various tangible ways ("gave me the confidence to finally do XYZ" type stuff), precisely due to the same feature that is detrimental to some other people.

And I have another prediction: I think use of AI is going to EXPLODE in jails and prisons. Facilities will start purposefully allowing and encouraging prisoners to use it. They will get people hooked on it, and then use it to modify behavior, both directly through the way the model is coded, and by threatening to take away AI-time privileges to get prisoners to behave, etc. To me this is a very obvious (and probably mostly nefarious) application for this technology.
