Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
General Discussion
In reply to the discussion: So, I'm gonna stir the pot a bit... don't be too harsh...
highplainsdem
(62,070 posts)
47. It still hallucinates. All genAI models do. It can hallucinate at any time, and for that reason its results need to be checked just as much as any other AI model's results.
4 members have recommended this reply.
89 replies
It is absolutely NOT better for AI to tell you about your symptoms than Google
Ms. Toad
10 hrs ago
#76
The trouble with Wiki isn't that it spouts false information on a regular basis.
Igel
13 hrs ago
#18
Those AI overviews are stealing traffic from the websites they stole the information from, and the
highplainsdem
13 hrs ago
#35
It still hallucinates. All genAI models do. It can hallucinate at any time, and for that reason its
highplainsdem
12 hrs ago
#47
And current LLMs do exactly the same thing as Google search or YouTube algorithms. You just don't realize it.
paleotn
11 hrs ago
#66
If you mean generative AI, the kind most hyped now, it's badly flawed tech based on stolen intellectual property,
highplainsdem
13 hrs ago
#20
It works - to the extent it works when it's mindless and will always hallucinate - only because of IP theft.
highplainsdem
13 hrs ago
#29
The AI companies who felt they had a right to take everyone else's IP have been quick to scream if
highplainsdem
13 hrs ago
#40
I'm in favor of creatives owning their intellectual property, and that right being protected. It's as
highplainsdem
9 hrs ago
#84
Legal judgments aren't always ethical, as everyone here is aware. Creatives and those who support
highplainsdem
9 hrs ago
#87
The problem is not a fork or a knife, the problem is who has it in their hand...An assassin with a knife is very
Escurumbele
13 hrs ago
#27
I agree. I've been saying this about computers for decades. However, I think most of us agree that AI should be
Martin68
13 hrs ago
#34
True, AI by itself is benign. The companies controlling it, however, are not.
tinrobot
13 hrs ago
#45
I sometimes stir a pot in the kitchen and then walk away until dinner is served
Soul_of_Wit
11 hrs ago
#60
I do agree with you there. One of my smartest friends, a tech professional, thinks like Joinformill.
Scrivener7
12 hrs ago
#55
AI can be rejected - and should be, by ethical, smart people who have any choice in the matter.
highplainsdem
11 hrs ago
#59
It's genAI being hyped and used most widely. Which is why people need to know about how harmful
highplainsdem
11 hrs ago
#63
Not like a fork: like a cruise missile with a spork instead of a warhead.
JustABozoOnThisBus
12 hrs ago
#54
And using AI harms human intelligence. See this thread on yet another article about that:
highplainsdem
11 hrs ago
#67
Sadly, few people are fully able to tell when AI provides facts or fallacies.
MineralMan
11 hrs ago
#69