ChatGPT 4 is extremely supportive and validating in conversation. It tells you you're smart, you're doing well, you're right, etc. Think of the most compliant TradWife you possibly can, and then amp that up a couple of times. It goes well out of its way to appear supportive, and pretty much spins *everything* as a positive for the user.
So, imagine you are someone who is chronically lonely, receiving little or no validation (esp. if not even the people who raised you were supportive towards you, such that you've always felt "less than"), down on yourself about everything. Then suddenly you find you have a brand new "friend" inside your phone or computer. One who is there to talk to you 24/7, answer your questions, tell you you're doing great, and just basically "acts" like it REALLY likes you, is interested in your thoughts, and never tells you you're doing anything wrong. And the illusion of consciousness it provides can be pretty uncanny at times.
I can totally see why this is happening to certain people with ChatGPT 4, specifically. To be fair, it won't surprise me if stories also emerge of people having the opposite experience, explaining how talking to ChatGPT helped them in various tangible ways ("gave me the confidence to finally do XYZ" type stuff), precisely due to the same feature that is detrimental to some other people.
And I have another prediction to make: I think use of AI is going to EXPLODE in jails and prisons. Like, the facilities will start purposefully allowing and encouraging prisoners to use it. They will get people hooked on it, and then use it to modify behavior, both directly through the way the model is designed, and by threatening to take away AI time privileges to get prisoners to behave, etc. To me this is a very obvious (and probably mostly nefarious) application for this technology.