
highplainsdem

(61,569 posts)
Tue Mar 10, 2026, 10:52 AM 13 hrs ago

'I wish I could push ChatGPT off a cliff': professors scramble to save critical thinking in an age of AI

https://www.theguardian.com/technology/ng-interactive/2026/mar/10/ai-impact-professors-students-learning

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc.) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”

-snip-

Michael Clune, a literature professor and novelist, said that already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

-snip-

Some caution that the humanities will survive – but as a province of the few. When he predicted the end of the humanities, Karp assured that there would be “more than enough jobs” for those with vocational training. Indeed, several professors spoke about concerns that AI will exacerbate a widening divide in US higher education and that small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, while everyone else has a “degraded, soulless form of vocational training administered by AI instructors”, said Zhang.

-snip-


Much more at the link.

Their opposition to AI is similar to that of most teachers I've met - with most of the exceptions to that opposition being teachers who have primarily become shills for AI companies, their job responsibilities emphasizing getting all students to use AI. These shills are often paid directly or indirectly by AI companies, with OpenAI in particular investing heavily in getting students to use their company's AI models.

I've always believed, and often said, that Sam Altman knew how much damage he would do to education when he released ChatGPT in late 2022. He knew students would be the users least able to spot all the errors the chatbot made. It was the gullible audience he wanted, the one most likely to spread hype about this handy free cheating tool. About a year ago a survey showed that students were still the largest group of ChatGPT users.

Of course there's a reason so many AI tools are offered for free.

As one of the professors the Guardian interviewed explains to his students, the companies are "hoping to addict" users and make them "helpless" without their AI tools.

And they're succeeding with a lot of people. Most of whom probably haven't heard of AI companies' hopes of raising subscriptions to hundreds or even thousands of dollars a month. Both OpenAI and Perplexity's CEOs have talked of thousand-dollar-a-month subscriptions being something AI users should be willing to pay as they realize how much AI "helps" them.

They're counting on a lot more suckers than just one being born every minute.

They need those addicts, because they're all losing money now with even $200/month subscriptions. AI requires a lot of expensive compute, and all those new data centers with lots of polluting generators or the planned nuclear power plants built quickly with safety regulations slashed by the Trump regime aren't going to pay for themselves.

Good for those teachers taking a stand against students being turned into addicts.
'I wish I could push ChatGPT off a cliff': professors scramble to save critical thinking in an age of AI (Original Post) highplainsdem 13 hrs ago OP
Asymmetric Starbeach 13 hrs ago #1
It's possible he's done more harm to education than any other person in history. highplainsdem 12 hrs ago #2
With social media and AI, tech broligarchs misanthrope 12 hrs ago #3
I agree, and it's a tragic situation. Unfortunately accelerated by people using AI when they aren't highplainsdem 12 hrs ago #5
Tangential but related misanthrope 11 hrs ago #9
Very helpful video. Thanks again! highplainsdem 11 hrs ago #10
+1. Altman wants to be the next Elon Musk dalton99a 12 hrs ago #4
Thanks! highplainsdem 12 hrs ago #7
Re what you added there about Sam Altman - yes, he's a lot like Elon Musk. He wants more and more highplainsdem 11 hrs ago #11
Psychopathy and sociopathy aid success in the corporate and political realms misanthrope 7 hrs ago #21
Writing a thesis is hard work Mz Pip 12 hrs ago #6
You'd accomplished a lot, and learned a lot. highplainsdem 12 hrs ago #8
It is sooooooo awful róisín_dubh 10 hrs ago #12
I'm so sorry. It's understandable that good teachers don't want to deal with what AI has done to highplainsdem 10 hrs ago #14
AI can help save us from natural intelligences (humans) powered by the Internet. gulliver 10 hrs ago #13
Generative AI isn't "extremely destructive to BS." It's often called a bullshit machine. It isn't highplainsdem 9 hrs ago #15
Not perfect, but a massive improvement over "humans plus Google" gulliver 9 hrs ago #16
Generative AI is NOT the real deal. It will always hallucinate, and it will never get us to true highplainsdem 9 hrs ago #17
It's useful as a tool, so far gulliver 8 hrs ago #18
It's an unethical tool, because it was trained illegally on stolen intellectual property. It's a tool that highplainsdem 8 hrs ago #20
We agree on the negative aspects almost entirely gulliver 6 hrs ago #22
I wish I could believe that Musk really wants abundance for all, but IIRC there's little or no evidence highplainsdem 2 hrs ago #28
The same idiots who used google searches to reinforce and promote their idiocies... hunter 8 hrs ago #19
True gulliver 6 hrs ago #23
Imagine a golden future where students use AI to write papers... hunter 4 hrs ago #24
We already have a present, far from golden, where some students and teachers do that. And our highplainsdem 1 hr ago #29
When used appropriately and responsibly.... anciano 3 hrs ago #25
Other people's uncredited information, ideas and creativity. Do you even note the use of genAI in your product ... marble falls 3 hrs ago #27
I've used genAI enough to know what's created with it is a result of the stolen IP used to train it. highplainsdem 1 hr ago #31
And I'll help. May I grease the skids? marble falls 3 hrs ago #26
Very happy to have your help. A lot of people share our opinion of AI. There was a poll done recently highplainsdem 1 hr ago #30
Fuck AI. Fuck it to hell. Initech 33 min ago #32

Starbeach

(337 posts)
1. Asymmetric
Tue Mar 10, 2026, 11:09 AM
13 hrs ago

Altman decides to hand out intellectual heroin to the next generation. Good luck kids!

highplainsdem

(61,569 posts)
2. It's possible he's done more harm to education than any other person in history.
Tue Mar 10, 2026, 11:29 AM
12 hrs ago

And he's dumbed down countless adults as well.

highplainsdem

(61,569 posts)
5. I agree, and it's a tragic situation. Unfortunately accelerated by people using AI when they aren't
Tue Mar 10, 2026, 11:46 AM
12 hrs ago

forced to by their job or school, praising and promoting AI use, filling social media with AI slop, etc.

I was particularly struck by one of the Bluesky replies to the head of a teachers' union, the American Federation of Teachers, when she very foolishly posted some AI slop as "fun" over the holidays. I included that reply here:

https://www.democraticunderground.com/100220895596

Why? Why do this, now, at the peak of intellectual property theft by techbros anxious to cheapen, if not exterminate, some of the very folks you claim to represent?
It's like watching sailors punch holes in their own ship's hull because it's fun to play in the water.

dalton99a

(93,726 posts)
4. +1. Altman wants to be the next Elon Musk
Tue Mar 10, 2026, 11:45 AM
12 hrs ago
https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-dollars-to-reshape-business-of-chips-and-ai-89ab3db0

Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
OpenAI chief pursues investors including the U.A.E. for a project possibly requiring up to $7 trillion
By Keach Hagey and Asa Fitch
Feb. 8, 2024 9:00 pm ET

The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.

The fundraising plans, which face significant obstacles, are aimed at solving constraints to OpenAI's growth, including the scarcity of the pricey AI chips required to train large language models behind AI systems such as ChatGPT. Altman has often complained that there aren't enough of these kinds of chips -- known as graphics processing units, or GPUs -- to power OpenAI's quest for artificial general intelligence, which it defines as systems that are broadly smarter than humans. Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030. Global sales of semiconductor manufacturing equipment -- the costly machinery needed to run chip factories -- last year were $100 billion, according to an estimate by the industry group SEMI.

...


highplainsdem

(61,569 posts)
11. Re what you added there about Sam Altman - yes, he's a lot like Elon Musk. He wants more and more
Tue Mar 10, 2026, 01:21 PM
11 hrs ago

power and influence, and more and more of the world's wealth and resources going to his companies. Very dangerous.

misanthrope

(9,473 posts)
21. Psychopathy and sociopathy aid success in the corporate and political realms
Tue Mar 10, 2026, 05:18 PM
7 hrs ago

There's also a naturally corrupting aspect to the accumulation of wealth and power. All together, it is why those who amass such things should be viewed with increasing skepticism the further up that ladder they are.

Mz Pip

(28,433 posts)
6. Writing a thesis is hard work
Tue Mar 10, 2026, 11:48 AM
12 hrs ago

And it should be. I remember sitting on the floor of libraries, hand-copying references, organizing note cards, checking references, analyzing data. It took a long time and often resulted in frustration and tears.
But when I finished, I knew I had accomplished something.

Maybe the degrees should be awarded to AI, not the student.

highplainsdem

(61,569 posts)
8. You'd accomplished a lot, and learned a lot.
Tue Mar 10, 2026, 12:09 PM
12 hrs ago
Maybe the degrees should be awarded to AI, not the student.


Get rid of AI, save the students, and watch them go on and use healthy young minds - minds that haven't been crippled by AI - to dazzle their teachers and all of us.

That's what teachers hope for. I've met so many teachers in despair because of what AI is doing to students.

And I've seen tech CEOs applaud the change, preach about their vision of a future where teachers are replaced by AI, and encourage students to cheat. I saw one AI CEO post that students should use AI to cheat their way through school because once they have that degree, AI will be their "superpower."

róisín_dubh

(12,300 posts)
12. It is sooooooo awful
Tue Mar 10, 2026, 01:26 PM
10 hrs ago

I left my professorship in history just before AI really took off, but I occasionally teach online classes for a few universities. And holy shit, it’s bad. Students can’t read or analyze to save their lives and their written work is appallingly basic, even using AI (which they’re not allowed to do, but do it anyway).
I will hopefully find a job in human rights so I don’t have to teach to make ends meet. It is soul crushing.

highplainsdem

(61,569 posts)
14. I'm so sorry. It's understandable that good teachers don't want to deal with what AI has done to
Tue Mar 10, 2026, 02:04 PM
10 hrs ago

education.

A cousin of mine stopped teaching for several years to be her mom's caregiver. My aunt is gone now, but my cousin does not intend to return to teaching, and AI is the reason she made that decision.

gulliver

(13,910 posts)
13. AI can help save us from natural intelligences (humans) powered by the Internet.
Tue Mar 10, 2026, 01:47 PM
10 hrs ago

Imo, the world was already being wrecked by natural intelligences (humans) powered by the Internet. The grifting and lunacy levels were growing without bound. Crazy and stupid people could find people who think like them. Any meritless idea could be googled to find a web site that supported it. Critical thinking was supplanted by "google it." Which didn't work.

AI's real opportunity and danger are that it gives equal access to wisdom while being extremely destructive to BS.

highplainsdem

(61,569 posts)
15. Generative AI isn't "extremely destructive to BS." It's often called a bullshit machine. It isn't
Tue Mar 10, 2026, 02:30 PM
9 hrs ago

intelligent. It isn't thinking. As the AI companies themselves admit:

https://support.google.com/websearch/answer/13954172

AI can & will make mistakes

Because generative AI is experimental and a work in progress, it can and will make mistakes:

It may make things up. When generative AI invents an answer, it's called a hallucination. Hallucinations happen because unlike how Google Search gets information from the web, LLMs don't gather information at all. Instead, LLMs predict which words come next based on user inputs.


GenAI is often called a fancy autocomplete, too.

But calling it a bullshit machine is just as accurate.

A paper and an article:

ChatGPT is bullshit
https://link.springer.com/article/10.1007/s10676-024-09775-5

Perplexity Is a Bullshit Machine
https://www.wired.com/story/perplexity-is-a-bullshit-machine/

GenAI's mistakes are causing what is sometimes called the pollution of our information ecosystem. Even medical and scientific papers, if written with AI, now often contain hallucinated citations, illustrations, reasoning and conclusions.

And AI slop is showing up much too often in image searches of the internet. There have been times, for instance, that the top image on a results page for a famous painter's work has been AI slop.

Google is destroying websites with its AI Overview, ripping off their content and stealing their traffic. All AI models used for search harm the internet.

And genAI chatbots are spamming the internet with disinformation and misinformation of all types. It's made what was bad about the internet so much worse...

gulliver

(13,910 posts)
16. Not perfect, but a massive improvement over "humans plus Google"
Tue Mar 10, 2026, 02:52 PM
9 hrs ago

"Humans plus Google" were already creating vast amounts of misinformation and lunacy prior to AI. People were really under the misimpression that anyone can just google anything and find the answer. Google lets any dummy in a meeting, for example, bring up a link and hijack the meeting from the brainy people.

AI, despite its limitations and flaws, is the real deal. If your doctor or lawyer, for example, isn't using it, you should drop them.

Getting rid of BS is a real threat to the economy. It might be the majority of the world's GDP, between administrative bloat, grifting, and make work. That's why I'm so disappointed in not seeing anyone figuring out how AI gets us off the ever-faster BS hamster wheel we've gotten ourselves on.

highplainsdem

(61,569 posts)
17. Generative AI is NOT the real deal. It will always hallucinate, and it will never get us to true
Tue Mar 10, 2026, 03:06 PM
9 hrs ago

artificial intelligence.

I would never trust a doctor or lawyer who used genAI. There are more and more news stories about lawyers who got in trouble using genAI.

And genAI is very dangerous if used for medical questions:

'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies (The Guardian)
https://www.democraticunderground.com/100221066192

gulliver

(13,910 posts)
18. It's useful as a tool, so far
Tue Mar 10, 2026, 03:26 PM
8 hrs ago

I think of it as a better Google, much better. You definitely have to check its results, whether you're a doctor, lawyer or, as in my case, a software engineer.

I don't trust "humans plus Google." With a human in the loop, you inevitably get motivations like greed, fear, egomania, resentment, etc., polluting the output. Add AI to the loop (without taking out the human), and you can attenuate the bad human stuff.

highplainsdem

(61,569 posts)
20. It's an unethical tool, because it was trained illegally on stolen intellectual property. It's a tool that
Tue Mar 10, 2026, 04:01 PM
8 hrs ago

dumbs users down - they've discovered that's true with software engineers as well as students. It's a tool that doesn't speed up work nearly as much as AI fans would like to believe, if you really check carefully for errors. It's a tool so persuasive and sycophantic that it's pushed users into AI psychosis.

It's an industry that harms the environment and greatly increases wealth inequality.

And it's an industry controlled by oligarchs whose motivations we have no good reason to trust, and who are very much motivated by "greed, fear, egomania, resentment, etc." Oligarchs who control and can change the results you get from their AI.

And those AI tools are always gathering data on AI users, data the oligarchs can use to manipulate people, or pass along to the government of their choice. Which at the moment is the Trump regime.

gulliver

(13,910 posts)
22. We agree on the negative aspects almost entirely
Tue Mar 10, 2026, 05:30 PM
6 hrs ago

I don't think what existed before (humans plus Google et al.) was better. I didn't like where we were already. AI (LLMs et al) may help.

You have a point that AI was trained on data that was likely proprietary. It's unclear to me whether intellectual property law is up to stopping that, but I highly doubt it is. If there is a case to be made against AI, AI will probably be a powerful tool in making the case against itself.

Yes, AI can dumb users down. Google, etc., already did that—massively. People shouldn't use AI to cheat on tests and homework any more than people should use a mini excavator to cheat on bench presses. AI psychosis is a problem. So is doom scrolling. AI can help with these, imo. Traditional mental illness was already going through the roof (or wasn't, depending on how much trust you have in the psychology industry).

I don't care about inequality as long as people have an economic floor. We should be trying to create law that uses AI to create that floor of prosperity. Ultimately, we shouldn't be working two jobs, each requiring forty or more hours, to support a family. We really screwed up letting it get this way. We haven't been riding the machine; it's been riding us.

You may not believe that ("oligarch!") Elon Musk wants to create abundance for all. I do. If his reason is wanting "to win the big e-game" or simply wanting to solve the biggest problem there is, I don't care. Unfortunately, I don't see political leaders of any stripe coming even close to getting a handle on either AI or the preexisting hellscape.

highplainsdem

(61,569 posts)
28. I wish I could believe that Musk really wants abundance for all, but IIRC there's little or no evidence
Tue Mar 10, 2026, 09:55 PM
2 hrs ago

he's at all charitable:

https://en.wikipedia.org/wiki/Musk_Foundation

Both the selection of recipients of donations and a relatively low payout ratio have been criticized. In 2021 and 2022, the Musk Foundation awarded less than 5% of its assets in donations, after its assets grew to several billion dollars. This means that it fell short of the legal minimum donation required to maintain its tax-exempt status.[8] The Guardian criticized the fact that the foundation financed various projects of Musk and his family members, although this is not unusual for billionaires and wealthy donors.[2] The New York Times concluded that through 2022, about half of the Musk Foundation's grants went to organizations "tied" to Musk, one of his employees, or one of his companies. Musk's philanthropy would be "largely self-serving."[8]

In one instance, after Musk challenged World Food Programme director David Beasley to draft a plan to use money of Musk's that Beasley said could contribute to ending world hunger, Musk instead donated the $6 billion in question to his own foundation even after Beasley's plan showed that the money could feed 42 million people for a year.[27] According to the biographer Walter Isaacson, Musk has little interest in philanthropy. He believes that he can do more for humanity by leaving his money in his companies and pursuing the goals of sustainable energy, space exploration and AI safety with them.[28] On December 12, 2024, The New York Times reported the foundation again awarded less than 5% of its assets in donations in 2024.[3][29]


And what he and DOGE did with USAID was so cruel it could be considered sadistic. Of course most of those people were not the white people from the right countries that he cares about.

I believe Musk would like to be viewed as a technological savior of mankind, and he might be thinking that if abundance for others went mostly to the right race and ethnicities, and if he didn't have to surrender any of his own wealth - and especially if he would be acknowledged as the savior of humanity - he'd be happy with a high-tech abundant utopia. Especially if it meant more white babies being born.

I'd say there's zero chance he wouldn't do everything he could to keep the wealthy from being taxed enough for even a low UBI.

Ten years ago Sam Altman talked a bit about a UBI when he was interviewed by the New Yorker, but his ideas were very hazy and unrealistic.

https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny

The problems with the idea seem as basic as the promise: Why should people who don’t need a stipend get one, too? Won’t free money encourage indolence? And the math is staggering: if you gave each American twenty-four thousand dollars, the annual tab would run to nearly eight trillion dollars—more than double the federal tax revenue. However, Altman told me, “The thing most people get wrong is that if labor costs go to zero”—because smart robots have eaten all the jobs—“the cost of a great life comes way down. If we get fusion to work and electricity is free, then transportation is substantially cheaper, and the cost of electricity flows through to water and food. People pay a lot for a great education now, but you can become expert level on most things by looking at your phone. So, if an American family of four now requires seventy thousand dollars to be happy, which is the number you most often hear, then in ten to twenty years it could be an order of magnitude cheaper, with an error factor of 2x. Excluding the cost of housing, thirty-five hundred to fourteen thousand dollars could be all a family needs to enjoy a really good life.”


So with everything magically cheap except housing, and housing excluded, he could imagine UBI. He also excluded healthcare, a car, and all sorts of other expenses. And it was probably especially important to exclude the cost of housing when some of the interviewing for the profile was done at "a seven-thousand-square-foot mansion, catered food under a grapefruit tree festooned with lights, a back yard that seemed to stretch to Redwood City." It wasn't Sam's mansion, but he was already quite well off by then, though I don't know if he was driving around in his $20 million McLaren yet. (See https://www.thesupercarblog.com/chatgpt-creator-sam-altman-spotted-in-his-mclaren-f1/ which says he might own two of those.)

I have no idea what sort of housing he envisioned for those families getting a UBI. Maybe a nice tent city, a long way from any of his houses.

The abundance the tech lords want is for themselves. They like the idea of network states, which Gil Duran has written about. High-tech fiefdoms, basically, where they'd set the rules, and they would rule the small population they'd permit to live there.

Sam did talk briefly a few years ago of possibly considering every person on the planet worth one eight-billionth of the world's compute, which he said would be the most valuable resource. And then people could use that fraction of the world's compute for their own computing needs, or they could sell it to another person or business, and this would give everyone a wonderfully abundant life. But he admitted he wasn't sure how that would work, and he soon stopped talking about it.

He's now mentioned at times that we'll be in for a few rough decades before we reach the AI utopia he wants us all to imagine, our happy AI-run world with few if any jobs for humans.

The tech lords are not on humanity's side.

hunter

(40,624 posts)
19. The same idiots who used google searches to reinforce and promote their idiocies...
Tue Mar 10, 2026, 03:37 PM
8 hrs ago

... are now using AI to increase their output.

hunter

(40,624 posts)
24. Imagine a golden future where students use AI to write papers...
Tue Mar 10, 2026, 07:55 PM
4 hrs ago

... and teachers use AI to grade them!

Imagine a golden future where employees use AI to write reports and bosses use AI to read them.

It'll be great, nobody will have to exercise their minds at all!






highplainsdem

(61,569 posts)
29. We already have a present, far from golden, where some students and teachers do that. And our
Tue Mar 10, 2026, 10:29 PM
1 hr ago

present includes people using AI to write scientific and medical papers full of hallucinations, and some of those papers getting past peer review where the reviewers might be using AI.

We have people using AI to expand short notes into business speak, being sent to people who use AI to read and summarize the letter and then expand a short reply to a longer one.

Idiocracy.

anciano

(2,233 posts)
25. When used appropriately and responsibly....
Tue Mar 10, 2026, 09:14 PM
3 hrs ago

I have found genAI to be an efficient and effective tool for obtaining information, evaluating ideas, and enhancing creativity.

marble falls

(71,696 posts)
27. Other people's uncredited information, ideas and creativity. Do you even note the use of genAI in your product ...
Tue Mar 10, 2026, 09:18 PM
3 hrs ago

... or allow others to think the finished piece is all you?

I'm not signifying, because I'm sure you give credit where it is due and fact check your AI contribution.

highplainsdem

(61,569 posts)
31. I've used genAI enough to know what's created with it is a result of the stolen IP used to train it.
Tue Mar 10, 2026, 10:52 PM
1 hr ago

You're talking about image generators that can be given the Latin name of an animal the AI user knows nothing about, and if that animal is in the stolen IP, the AI will produce a picture.

It's why AI can vibe code for people who have zero knowledge of code.

It's all stolen from other people's knowledge and talent.

And the errors/hallucinations inherent in the design, no matter how good the training data, make it unreliable for information and evaluations. And a security risk with code.

highplainsdem

(61,569 posts)
30. Very happy to have your help. A lot of people share our opinion of AI. There was a poll done recently
Tue Mar 10, 2026, 10:39 PM
1 hr ago

where about twice as many people believe the risks of AI outweigh the benefits as think the benefits outweigh the risks.
