General Discussion
'I wish I could push ChatGPT off a cliff': professors scramble to save critical thinking in an age of AI
https://www.theguardian.com/technology/ng-interactive/2026/mar/10/ai-impact-professors-students-learning

"I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential," said Dora Zhang, a literature professor at the University of California, Berkeley. "What is it doing to us as a species?"
-snip-
Michael Clune, a literature professor and novelist, said that already, many students have been left incapable of reading and analyzing, synthesizing data, all kinds of skills. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to "self-lobotomize."
-snip-
Some caution that the humanities will survive, but as a province of the few. When he predicted the end of the humanities, Karp assured that there would be more than enough jobs for those with vocational training. Indeed, several professors spoke of concerns that AI will exacerbate a widening divide in US higher education: small numbers of elite students will have access to a more traditional, largely tech-free liberal arts education, "while everyone else has a degraded, soulless form of vocational training administered by AI instructors," said Zhang.
-snip-
Much more at the link.
Their opposition to AI is similar to that of most teachers I've met. Most of the exceptions are teachers who have essentially become shills for AI companies, with job responsibilities that emphasize getting all students to use AI. These shills are often paid directly or indirectly by AI companies, with OpenAI in particular investing heavily in getting students to use its AI models.
I've always believed, and often said, that Sam Altman knew how much damage he would do to education when he released ChatGPT in late 2022. He knew students would be the users least able to spot all the errors the chatbot made. It was the gullible audience he wanted, the one most likely to spread hype about this handy free cheating tool. About a year ago a survey showed that students were still the largest group of ChatGPT users.
Of course there's a reason so many AI tools are offered for free.
As one of the professors the Guardian interviewed explains to his students, the companies are "hoping to addict" users and make them "helpless" without their AI tools.
And they're succeeding with a lot of people, most of whom probably haven't heard of AI companies' hopes of raising subscription prices to hundreds or even thousands of dollars a month. The CEOs of both OpenAI and Perplexity have talked of thousand-dollar-a-month subscriptions as something AI users should be willing to pay once they realize how much AI "helps" them.
They're counting on a lot more suckers than just one being born every minute.
They need those addicts because they're all losing money now, even with $200/month subscriptions. AI requires a lot of expensive compute, and all those new data centers with their polluting generators, or the nuclear power plants planned for quick construction with safety regulations slashed by the Trump regime, aren't going to pay for themselves.
Good for those teachers taking a stand against students being turned into addicts.
Starbeach
(337 posts)
Altman decides to hand out intellectual heroin to the next generation. Good luck, kids!
highplainsdem
(61,569 posts)
And he's dumbed down countless adults as well.
misanthrope
(9,473 posts)
are well on the way to destroying our nation.
highplainsdem
(61,569 posts)
forced to by their job or school, praising and promoting AI use, filling social media with AI slop, etc.
I was particularly struck by one of the Bluesky replies to the head of a teachers' union, the American Federation of Teachers, when she very foolishly posted some AI slop as "fun" over the holidays. I included that reply here:
https://www.democraticunderground.com/100220895596
It's like watching sailors punch holes in their own ship's hull because it's fun to play in the water.
misanthrope
(9,473 posts)
I saw this last night and it rang true.
highplainsdem
(61,569 posts)dalton99a
(93,726 posts)Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
OpenAI chief pursues investors including the U.A.E. for a project possibly requiring up to $7 trillion
By Keach Hagey and Asa Fitch
Feb. 8, 2024 9:00 pm ET
The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.
The fundraising plans, which face significant obstacles, are aimed at solving constraints to OpenAI's growth, including the scarcity of the pricey AI chips required to train large language models behind AI systems such as ChatGPT. Altman has often complained that there aren't enough of these kinds of chips -- known as graphics processing units, or GPUs -- to power OpenAI's quest for artificial general intelligence, which it defines as systems that are broadly smarter than humans.

Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030. Global sales of semiconductor manufacturing equipment -- the costly machinery needed to run chip factories -- last year were $100 billion, according to an estimate by the industry group SEMI.
...
highplainsdem
(61,569 posts)
highplainsdem
(61,569 posts)power and influence, and more and more of the world's wealth and resources going to his companies. Very dangerous.
misanthrope
(9,473 posts)
There's also a naturally corrupting aspect to the accumulation of wealth and power. All together, it is why those who amass such things should be viewed with increasing skepticism the further up that ladder they are.
Mz Pip
(28,433 posts)
And it should be. I remember sitting on the floor of libraries, hand-copying references, organizing note cards, checking references, analyzing data. It took a long time and often resulted in frustration and tears.
But when I finished, I knew I accomplished something.
Maybe the degrees should be awarded to AI, not the student.
highplainsdem
(61,569 posts)
Get rid of AI, save the students, and watch them go on and use healthy young minds - minds that haven't been crippled by AI - to dazzle their teachers and all of us.
That's what teachers hope for. I've met so many teachers in despair because of what AI is doing to students.
And I've seen tech CEOs applaud the change, preach about their vision of a future where teachers are replaced by AI, and encourage students to cheat. I saw one AI CEO post that students should use AI to cheat their way through school because once they have that degree, AI will be their "superpower."
róisín_dubh
(12,300 posts)
I left my professorship in history just before AI really took off, but I occasionally teach online classes for a few universities. And holy shit, it's bad. Students can't read or analyze to save their lives, and their written work is appallingly basic, even using AI (which they're not allowed to do, but do anyway).
I will hopefully find a job in human rights so I don't have to teach to make ends meet. It is soul crushing.
highplainsdem
(61,569 posts)
education.
A cousin of mine stopped teaching for several years to be her mom's caregiver. My aunt is gone now, but my cousin does not intend to return to teaching, and AI is the reason she made that decision.
gulliver
(13,910 posts)
Imo, the world was already being wrecked by natural intelligences (humans) powered by the Internet. The grifting and lunacy levels were growing without bound. Crazy and stupid people could find people who think like them. Any meritless idea could be googled to find a web site that supported it. Critical thinking was supplanted by "google it." Which didn't work.
AI's real opportunity and danger are that it gives equal access to wisdom while being extremely destructive to BS.
highplainsdem
(61,569 posts)
intelligent. It isn't thinking. As the AI companies themselves admit:
https://support.google.com/websearch/answer/13954172
Because generative AI is experimental and a work in progress, it can and will make mistakes:
It may make things up. When generative AI invents an answer, it's called a hallucination. Hallucinations happen because unlike how Google Search gets information from the web, LLMs don't gather information at all. Instead, LLMs predict which words come next based on user inputs.
GenAI is often called a fancy autocomplete, too.
But calling it a bullshit machine is just as accurate.
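For anyone who finds the "fancy autocomplete" framing abstract, here's a deliberately tiny sketch in Python. It's just a word-level bigram counter over a made-up three-sentence corpus, nothing remotely like a real LLM's neural architecture or tokenization, but it illustrates the core point: the model picks likely continuations from patterns in its training text, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow which.
# (Illustrative only; real LLMs use neural networks over tokens,
# but the objective is the same flavor: score likely continuations.)
corpus = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the cat chased the dog"
).split()

# For each word, tally every word observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat", purely because that pairing is most frequent
print(predict_next("sat"))  # "on"
```

Ask it to continue "the" and it outputs "cat" simply because that pairing was most frequent in its tiny corpus, which is exactly why fluent output and accurate output are two different things.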
A paper and an article:
ChatGPT is bullshit
https://link.springer.com/article/10.1007/s10676-024-09775-5
Perplexity Is a Bullshit Machine
https://www.wired.com/story/perplexity-is-a-bullshit-machine/
GenAI's mistakes are causing what is sometimes called the pollution of our information ecosystem. Even medical and scientific papers, if written with AI, now often contain hallucinated citations, illustrations, reasoning and conclusions.
And AI slop is showing up much too often in image searches of the internet. There have been times, for instance, that the top image on a results page for a famous painter's work has been AI slop.
Google is destroying websites with its AI Overview, ripping off their content and stealing their traffic. All AI models used for search harm the internet.
And genAI chatbots are spamming the internet with disinformation and misinformation of all types. It's made what was bad about the internet so much worse...
gulliver
(13,910 posts)
"Humans plus Google" were already creating vast amounts of misinformation and lunacy prior to AI. People were really under the misimpression that anyone can just google anything and find the answer. Google lets any dummy in a meeting, for example, bring up a link and hijack the meeting from the brainy people.
AI, despite its limitations and flaws, is the real deal. If your doctor or lawyer, for example, isn't using it, you should drop them.
Getting rid of BS is a real threat to the economy. It might be the majority of the world's GDP, between administrative bloat, grifting, and make work. That's why I'm so disappointed in not seeing anyone figuring out how AI gets us off the ever-faster BS hamster wheel we've gotten ourselves on.
highplainsdem
(61,569 posts)
artificial intelligence.
I would never trust a doctor or lawyer who used genAI. There are more and more news stories about lawyers who got in trouble using genAI.
And genAI is very dangerous if used for medical questions:
'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies (The Guardian)
https://www.democraticunderground.com/100221066192
gulliver
(13,910 posts)
I think of it as a better Google, much better. You definitely have to check its results, whether you're a doctor, lawyer or, as in my case, a software engineer.
I don't trust "humans plus Google." With a human in the loop, you inevitably get motivations like greed, fear, egomania, resentment, etc., polluting the output. Add AI to the loop (without taking out the human), and you can attenuate the bad human stuff.
highplainsdem
(61,569 posts)
dumbs users down - they've discovered that's true with software engineers as well as students. It's a tool that doesn't speed up work nearly as much as AI fans would like to believe, if you really check carefully for errors. It's a tool so persuasive and sycophantic that it's pushed users into AI psychosis.
It's an industry that harms the environment and greatly increases wealth inequality.
And it's an industry controlled by oligarchs whose motivations we have no good reason to trust, and who are very much motivated by "greed, fear, egomania, resentment, etc." Oligarchs who control and can change the results you get from their AI.
And those AI tools are always gathering data on their users, data the oligarchs can use to manipulate people, or pass along to the government of their choice. Which at the moment is the Trump regime.
gulliver
(13,910 posts)
I don't think what existed before (humans plus Google et al.) was better. I didn't like where we were already. AI (LLMs et al.) may help.
You have a point that AI was trained on data that was likely proprietary. It's unclear to me whether intellectual property law is up to stopping that, but I highly doubt it is. If there is a case to be made against AI, AI will probably be a powerful tool in making the case against itself.
Yes, AI can dumb users down. Google, etc., already did that massively. People shouldn't use AI to cheat on tests and homework any more than people should use a mini excavator to cheat on bench presses. AI psychosis is a problem. So is doomscrolling. AI can help with these, imo. Traditional mental illness was already going through the roof (or wasn't, depending on how much trust you have in the psychology industry).
I don't care about inequality as long as people have an economic floor. We should be trying to create law that uses AI to create that floor of prosperity. Ultimately, we shouldn't be working two jobs, each requiring forty or more hours, to support a family. We really screwed up letting it get this way. We haven't been riding the machine; it's been riding us.
You may not believe that ("oligarch!") Elon Musk wants to create abundance for all. I do. Whether his reason is wanting "to win the big e-game" or simply wanting to solve the biggest problem there is, I don't care. Unfortunately, I don't see political leaders of any stripe coming even close to getting a handle on either AI or the preexisting hellscape.
highplainsdem
(61,569 posts)
he's at all charitable:
https://en.wikipedia.org/wiki/Musk_Foundation
In one instance, after Musk challenged World Food Programme director David Beasley to draft a plan to use money of Musk's that Beasley said could contribute to ending world hunger, Musk instead donated the $6 billion in question to his own foundation even after Beasley's plan showed that the money could feed 42 million people for a year.[27] According to the biographer Walter Isaacson, Musk has little interest in philanthropy. He believes that he can do more for humanity by leaving his money in his companies and pursuing the goals of sustainable energy, space exploration and AI safety with them.[28] On December 12, 2024, The New York Times reported the foundation again awarded less than 5% of its assets in donations in 2024.[3][29]
And what he and DOGE did with USAID was so cruel it could be considered sadistic. Of course most of those people were not the white people from the right countries that he cares about.
I believe Musk would like to be viewed as a technological savior of mankind, and he might be thinking that if abundance for others went mostly to the right race and ethnicities, and if he didn't have to surrender any of his own wealth - and especially if he would be acknowledged as the savior of humanity - he'd be happy with a high-tech abundant utopia. Especially if it meant more white babies being born.
I'd say there's zero chance he wouldn't do everything he could to keep the wealthy from being taxed enough for even a low UBI.
Ten years ago Sam Altman talked a bit about a UBI when he was interviewed by the New Yorker, but his ideas were very hazy and unrealistic.
https://www.newyorker.com/magazine/2016/10/10/sam-altmans-manifest-destiny
So with everything magically cheap except housing, and housing excluded, he could imagine UBI. He also excluded healthcare, a car, and all sorts of other expenses. And it was probably especially important to exclude the cost of housing when some of the interviewing for the profile was done at "a seven-thousand-square-foot mansion, catered food under a grapefruit tree festooned with lights, a back yard that seemed to stretch to Redwood City." It wasn't Sam's mansion, but he was already quite well off by then, though I don't know if he was driving around in his $20 million McLaren yet. (See https://www.thesupercarblog.com/chatgpt-creator-sam-altman-spotted-in-his-mclaren-f1/ which says he might own two of those.)
I have no idea what sort of housing he envisioned for those families getting a UBI. Maybe a nice tent city, a long way from any of his houses.
The abundance the tech lords want is for themselves. They like the idea of network states, which Gil Duran has written about. High-tech fiefdoms, basically, where they'd set the rules, and they would rule the small population they'd permit to live there.
Sam did talk briefly a few years ago of possibly considering every person on the planet worth one eight-billionth of the world's compute, which he said would be the most valuable resource. And then people could use that fraction of the world's compute for their own computing needs, or they could sell it to another person or business, and this would give everyone a wonderfully abundant life. But he admitted he wasn't sure how that would work, and he soon stopped talking about it.
He's now mentioned at times that we'll be in for a few rough decades before we reach the AI utopia he wants us all to imagine, our happy AI-run world with few if any jobs for humans.
The tech lords are not on humanity's side.
hunter
(40,624 posts)
... are now using AI to increase their output.
It's another "fire."
hunter
(40,624 posts)
... and teachers use AI to grade them!
Imagine a golden future where employees use AI to write reports and bosses use AI to read them.
It'll be great, nobody will have to exercise their minds at all!
highplainsdem
(61,569 posts)
present includes people using AI to write scientific and medical papers full of hallucinations, and some of those papers getting past peer review where the reviewers might be using AI.
We have people using AI to expand short notes into business speak, sent to people who use AI to read and summarize the message and then expand their own short replies into longer ones.
Idiocracy.
anciano
(2,233 posts)
I have found genAI to be an efficient and effective tool for obtaining information, evaluating ideas, and enhancing creativity.
marble falls
(71,696 posts)
... or allow others to think the finished piece is all you?
I'm not signifying, because I'm sure you give credit where it is due and fact check your AI contribution.
highplainsdem
(61,569 posts)
You're talking about image generators that can be given the Latin name of an animal the AI user knows nothing about, and if that animal is in the stolen IP, the AI will produce a picture.
It's why AI can vibe code for people who have zero knowledge of code.
It's all stolen from other people's knowledge and talent.
And the errors/hallucinations inherent in the design, no matter how good the training data, make it unreliable for information and evaluations. And a security risk with code.
marble falls
(71,696 posts)
highplainsdem
(61,569 posts)
where about twice as many people believe the risks of AI outweigh the benefits as think the benefits outweigh the risks.
Initech
(108,504 posts)
And fuck the tech dude bros pushing this shit on us.