
AZJonnie

(2,017 posts)
Fri Oct 31, 2025, 02:06 AM 20 hrs ago

If you're confused about how the hell AI is going to make the kind of money needed to turn a profit, I'll illustrate

I'll say first I use AI for coding at work, and I use it in places like the nursery, when I'm buying plants. "Hey Google, how will this plant species do in the Phoenix heat, 4 feet from a south-facing wall", and "what is the growth rate", and "is now a good time to plant it", those kinds of things. I think it'll probably always be free for that kind of thing, or perhaps your mobile provider pays for the one integrated in your phone and passes that cost on to you, for example. This is a lot of people's experience with AI, and so it's logical to ask, how is it going to ever make a lot of money?

And it's because the goal is much more lofty than that. Let's say you'd like to hand it blueprints for a skyscraper and say "build this for us", well then THAT will cost you a lot of money. Obviously AI cannot do the physical work itself, but think of what it might be able TO do, if not now, then someday probably not that far off:

It could figure out the skills needed from the workers, what specialists you'd need. Place ads for workers. Review the resumes. Research the applicants who apply (and probably get a shit ton of info about them). Make the job offers. Shop needed contractors. Negotiate with them. Send them contracts, review how they reply, and provide detailed options to a small group of human decision makers.

Calculate all the materials required for the project. Shop for the best deals for the products. Hell, it could send out emails requesting better deals for you and read the emails that come back and negotiate some more. Leverage one supplier against another in negotiations. Figure out when the products need to arrive based on which stage of the process the project should be at at what time, and place the orders at the proper times.

Work up the schedules for the workers and contractors during the project. Communicate those schedules to the workers, while monitoring upcoming weather forecasts so it can tell people not to show up 3 days from now cause there's a blizzard coming.

Shop around for the payroll company then coordinate the paychecks. Review the accounts payable and receivable. Bug people to pay you, review the requests for you to pay them.

For permits, figure out which are needed, then apply for any that are. Pay the permit fees. Retrieve and store digital copies of the permits.

Create a website for your project if you wanted one. Book the cloud space to host the website, after shopping the options (AWS, Azure, etc.). If it's an office or commercial space, put out ads about renting the space. Run your marketing campaign. Calculate what rates will need to be charged to make a profit, keeping an eye on published competitors' rates in the area.

Hell, it could design the freaking BUILDING ITSELF if you give it a detailed-enough prompt, and give YOU the blueprints.

Now, think about how many jobs that is, normally. How many people you have to pay to get all that done. And AI never calls in sick, never takes a day off, it never makes a mathematical mistake (though of course it would make other mistakes, but people do as well).

Then after your building is built, it keeps working for the original customer, doing any other electronic shit needed to keep the place running: managing tenants, maintenance, the landscapers, damn near everything. The customer ends up locked in, dependent on it.

THAT is the kind of AI they want to build, and that is where the real money will be. So, obvs, that is a shit-ton of processing power, and these companies aren't giving THAT service away. You'll pay MILLIONS because you'll be replacing DOZENS of people.

And once the AI is trained and built (and continues to self-train), then what?

OpenAI or whoever just has to pay for the juice, and to keep the electronics up to date. That will not cost them the same amount of millions. At a certain point, it becomes VERY profitable. Esp. if they build their own power infrastructure, solar and wind farms and battery backups and such.

And this is just one example, you can apply this same model to all kinds of business enterprises and endeavors. This whole thing is not about Chatbots, that is just the friendly, forward-facing bit. The proof of concept, if you will.

Will it ever be able to do all that, and do it reliably enough? I'm not sure, that is the trillion dollar question. But that's the kind of thing they're hoping to have as a product.

29 replies
If you're confused about how the hell AI is going to make the kind of money needed to turn a profit, I'll illustrate (Original Post) AZJonnie 20 hrs ago OP
I hope it replaces all of the investment bankers VMA131Marine 20 hrs ago #1
I have worked with various forms of AI since the 1980s. Metaphorical 19 hrs ago #2
I'm really not asserting that this is what will happen, but I believe machines capable of tasks on that scale AZJonnie 19 hrs ago #3
Yes, I agree that is the selling point being used. Hugin 19 hrs ago #5
Oh, I absolutely agree with that Metaphorical 5 hrs ago #21
My own cursory queries bear out the approximate 30% error rate. Hugin 19 hrs ago #4
Good points and good analysis. . . . nt Bernardo de La Paz 18 hrs ago #6
You could have chosen any number of similar pie in the sky examples and they'd all be dreams but not selling points. Bernardo de La Paz 18 hrs ago #7
Sure, I use Gemini Agentic via CLI everyday. But all I'm talking about is a collection of agents that effectively talk AZJonnie 16 hrs ago #9
Remember that most agentic calls Metaphorical 5 hrs ago #22
Well yeah I don't have Gemini running locally like I do Ollama and a couple other models AZJonnie 4 hrs ago #23
Helpful Starbeach 17 hrs ago #8
Too long to read. I'll bookmark it. QueerDuck 16 hrs ago #10
And you'll end up with a 90k sqft ballroom LetsGetSmartAboutIt 16 hrs ago #11
Stupid Question About AI Bibbers 12 hrs ago #12
Some amateur answers... Hugin 12 hrs ago #14
+1 leftstreet 11 hrs ago #15
What a pile of bullshit. hunter 12 hrs ago #13
The coding is useful...but the rest is just bullshit and will crash...the AI companies are Demsrule86 11 hrs ago #16
Yeah. I probably didn't make clear enough that my point was more political than technical AZJonnie 9 hrs ago #17
I've seen too many articles on problems with AI coding including security risks that aren't caught to be highplainsdem 9 hrs ago #18
I probably should have made it more clear that my point was more political than technical AZJonnie 8 hrs ago #19
That's very interesting, but I think there is a flaw in that MineralMan 8 hrs ago #20
I really didn't mean it's definitely going to work and building a building was just a convenient illustration AZJonnie 4 hrs ago #25
All well and good, but the energy demand will kill us (financially and literally) . . . . hatrack 4 hrs ago #24
Yeah I certainly did not intend to have it come off sounding like it's all a 'good thing' ESPECIALLY not for the climate AZJonnie 3 hrs ago #27
Priceline refund works on AI Turbineguy 3 hrs ago #26
Fascinating discussion. Thank you all. cachukis 3 hrs ago #28
AI, as both a technology and a commodity, is in its infancy, but this thread makes some nice points Ilikepurple 2 hrs ago #29

VMA131Marine

(5,117 posts)
1. I hope it replaces all of the investment bankers
Fri Oct 31, 2025, 02:26 AM
20 hrs ago

And hedge fund managers before it gets into building design and project management.

Metaphorical

(2,543 posts)
2. I have worked with various forms of AI since the 1980s.
Fri Oct 31, 2025, 02:42 AM
19 hrs ago

There are several major flaws in your post. The first is that LLMs in general have an accuracy rate of approximately 70% (across the board; OpenAI is less accurate than others). This means 30% of the time, the information that you receive from an AI will be wrong in some critical way. There are many sound mathematical reasons why this is the case, and it's pretty fundamental to the underlying transformer model. This has been known for a decade or so. AI can be useful - I use it myself for intellisense, when I know I can reasonably count on the underlying patterns - but even here, the benefits that I get back from such LLMs need to be weighed against how much additional time I am now spending analysing the results to make sure that what I'm getting back is valid, and correcting it when it's not.
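A back-of-the-envelope sketch of why a per-step error rate like that is so damaging to the grand project-management vision in the OP: if each step in a chained (agentic) workflow is independently correct with some probability, the end-to-end accuracy decays geometrically with the number of steps. (The 70% figure is the one claimed above; the independence assumption is a simplification for illustration.)

```python
# If each step in a chained AI workflow is independently correct with
# probability p, an n-step pipeline is right end-to-end with probability p**n.
def pipeline_accuracy(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

for n in (1, 3, 5, 10):
    print(f"{n:2d} steps at 70% each -> {pipeline_accuracy(0.7, n):.1%} end-to-end")
# 1 step stays at 70.0%, but 5 chained steps drop to ~16.8%,
# and 10 chained steps to ~2.8%.
```

Real pipelines add human review and retries, which helps, but that is exactly the HITL overhead discussed further down the thread.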

OpenAI does not "self-train". It (and others like it) typically employs many people (at very low wages) to filter and "pretrain" its data, often at considerable psychological cost to those workers; that work means that much of the hard task of classification has already been done, but it's something that is in fact not sustainable. There have been many attempts to generate synthetic content that can be used for pre-training; however, because of the way that latent spaces generate narrative threads (something I won't get into here), the mock training data usually loses a lot of intrinsic context, becoming blander and more smoothed down over time, much like repeatedly photocopying the output of a photocopier.

Finally, we have effectively used the bulk of the Internet to train the models behind things like ChatGPT 4 (yes, we're up to 5, but the document corpora have not changed significantly), and this means that we're seeing a plateauing of improvements, in design in particular.

There are some interesting areas of research (especially into world models) that I suspect may provide a better approach to GenAI, but right now the consortium of the Mighty Seven that effectively supports AI is reluctant to go down that road, because it is not beneficial to their longer-term goal of getting the American public to build data centres for them.

Yes, AI might (almost certainly will) improve, but probably not with this architecture, nor with the incredible number of very questionable financial plays going on behind the architecture.

AZJonnie

(2,017 posts)
3. I'm really not asserting that this is what will happen, but I believe machines capable of tasks on that scale
Fri Oct 31, 2025, 02:52 AM
19 hrs ago

are the end goal. There's not really another way for this industry to be profitable unless it can replace the work of a lot of people in critical business tasks. Like building a building, though that was a random example to illustrate the point. I think achieving this level of power is their 'selling point' to investors, their explanation for how they're not just throwing their money into an open pit. Separate from the question of whether it's POSSIBLE, do you disagree with this premise?

Hugin

(37,042 posts)
5. Yes, I agree that is the selling point being used.
Fri Oct 31, 2025, 03:07 AM
19 hrs ago

It works because labor is so often the bugaboo in building every castle in the sky.

Metaphorical

(2,543 posts)
21. Oh, I absolutely agree with that
Fri Oct 31, 2025, 05:30 PM
5 hrs ago

Working on an article on precisely that point now, in fact.

I think we're within six months of a fairly massive meltdown in the sector, one that may end up precipitating a much broader collapse of the market. OpenAI's floating of an IPO to me may very well be the trigger - it's a way to materialize all of those options for the ones who were in on the Ponzi scheme, leaving everyone else holding an increasingly smelly bag.

Hugin

(37,042 posts)
4. My own cursory queries bear out the approximate 30% error rate.
Fri Oct 31, 2025, 03:01 AM
19 hrs ago

However, the error rate is non-linear. Especially when a query (prompt, to use the lingo) involves context switching. The results are often little more than a word salad.

Project management is littered with context switching.

Generative AI is incapable of chewing gum and walking at the same time, and probably never will be capable of it. Sure, there are methods of mitigation, but they all involve HITL (Human In The Loop). The captains of industry are going to balk at having numerous humans prompting the AI systems when they could be out building real-world buildings.

Bernardo de La Paz

(59,935 posts)
7. You could have chosen any number of similar pie in the sky examples and they'd all be dreams but not selling points.
Fri Oct 31, 2025, 04:12 AM
18 hrs ago

AI is not being sold on that basis.

AI is being sold on the here and now agentic basis. AI is providing agents to answer questions, agents to write a piece of code, agents to analyze some incoming data and route it to the appropriate human, agents to answer the phone, agents to summarize meetings, agents to talk to other agents.

Doing all that agentic stuff can enhance a company's profitability now. Probably. Maybe. The MIT study says few companies are actually getting a solid return from deployment yet. I think that after the coming AI winter the next AI summer will deliver those solid returns. There are plenty of ways for AI companies to make good profits with the capabilities that already exist, with the inevitable fine tuning and optimization. They don't need to be capable of project management for there to be lots of profits.

AI is real, AI is powerful, it will be tuned, it will be made more efficient, but it won't do project management this decade and probably not the next. But maybe after 2040 which is not so far away, if there are some breakthroughs on the revolutionary scale of the LLM breakthrough.

If someone tries to sell you an investment or a product based on the idea that it can, or "just in a couple of years" will, manage a large project on its own, ... find out what the company is and sell their stock short.

I know you are not saying it can do it now, but rather you are saying it is being held out as a promise of future capabilities to get investment now. But really, nobody is selling stock or products on that future premise. There are an awful lot of people who make that extrapolation on their own, which is why we have a stock market AI bubble now. But that is rather like investing in Netscape or Altavista in 1998 hoping it will become some vaguely imagined Facebook or Amazon of the future. Those two companies withered away.

Let the bubble burst and see who emerges from the wreckage. If you make a half dozen to a dozen small investments in 2027 or 8 or 9, you will likely have a big winner among them by 2040. Like how a dozen years after the 2000 bubble burst by 2012 it was pretty clear Apple, Facebook, and Amazon were going to win with the internet.

AZJonnie

(2,017 posts)
9. Sure, I use Gemini Agentic via CLI everyday. But all I'm talking about is a collection of agents that effectively talk
Fri Oct 31, 2025, 05:45 AM
16 hrs ago

Your materials procurement agent. Your hiring agent. Your planning and scheduling agent. Your marketing agent. Your permit procurement agent. But the one agent to bind them all is the missing piece that keeps customers from having AI manage the bulk of the administrative type work of building a building, for example.
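The "one agent to bind them all" idea above can be sketched as a simple orchestrator that routes tasks to specialized agents. This is a hypothetical illustration only: the agent names and canned replies are invented, and a real system would put LLM and tool-API calls behind each handler rather than returning stub strings.

```python
from typing import Callable

# Stub specialist agents -- invented for illustration; real ones would
# call out to models, supplier APIs, job boards, etc.
def procurement_agent(task: str) -> str:
    return f"[procurement] sourcing quotes for: {task}"

def hiring_agent(task: str) -> str:
    return f"[hiring] drafting job posting for: {task}"

def scheduling_agent(task: str) -> str:
    return f"[scheduling] slotting into project plan: {task}"

ROUTES: dict[str, Callable[[str], str]] = {
    "materials": procurement_agent,
    "staffing": hiring_agent,
    "timeline": scheduling_agent,
}

def orchestrator(category: str, task: str) -> str:
    """Dispatch a task to the right specialist agent, or flag it for a human."""
    handler = ROUTES.get(category)
    if handler is None:
        return f"[escalate-to-human] no agent for {category!r}: {task}"
    return handler(task)

print(orchestrator("materials", "rebar for floors 1-10"))
print(orchestrator("permits", "city building permit"))  # no permit agent yet -> escalates
```

The hard part, of course, is not the dispatch table; it's making each handler reliable enough that the escalation branch isn't the one that fires most of the time.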

I think that when it comes to large scale investors, maybe it's as you say they aren't being explicitly told this capability is on the horizon at say, a large symposium, but I'd bet that's what they're promising the whales over $35 martinis at the swanky club: BIG stuff done largely autonomously by AI.

And I don't think there can be overall profitability for the industry without reaching that threshold. There will be 'profits' via certain types of accounting, sure. They were handed free freaking billions. So yeah, if you don't count that, maybe sometimes companies will take in more money than they spend in some given quarter. And market valuations will go up for awhile, so people are getting 'returns' on their investment in that sense, on paper. But you think anyone is getting back the, say, $50M they laid out, via dividends, from AI companies any time soon?

My main point though was more political than technical. The only way there will be net profitability is by putting *many* millions out of their good paying jobs via massive scale AI deployment. And that's what they're shooting for. Even if I'm misjudging exactly what the public sales pitches are at the moment.

Metaphorical

(2,543 posts)
22. Remember that most agentic calls
Fri Oct 31, 2025, 05:37 PM
5 hrs ago

are thinly wrapped web service RPC calls. The one (moderately) good thing that agentic services have done is to provide a standard mechanism for discovery of existing web services. There's no real AI in that, it's just an RPC, but MCP is mostly a reskinned swagger/openAPI protocol.
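Metaphorical's point can be made concrete. Under the MCP spec, a tool invocation is carried as a plain JSON-RPC 2.0 request, much like any other web-service RPC; the sketch below builds such a message (the tool name and arguments are invented for illustration).

```python
import json

# An MCP-style tool call on the wire: just a JSON-RPC 2.0 request.
# "tools/call" is the MCP method for invoking a tool; the tool itself
# ("get_material_quote") is a made-up example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_material_quote",
        "arguments": {"item": "rebar", "quantity": 500},
    },
}
print(json.dumps(request, indent=2))
```

Nothing in that envelope is "intelligent"; the discovery counterpart (`tools/list`) is likewise close in spirit to fetching a Swagger/OpenAPI description of an existing service.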

AZJonnie

(2,017 posts)
23. Well yeah I don't have Gemini running locally like I do Ollama and a couple other models
Fri Oct 31, 2025, 05:56 PM
4 hrs ago

But for me the benefit of agentic Gemini is that I've given it permission to read individual folders in my Source directory (code projects) and can tell it "look here" and it will do it, and then I can say "change that file, in place" and it will do it. No back and forth uploading files to the Perplexity chatbot, copy/pasting code it responded with, etc. That is what I understand "agentic" to mean in my somewhat limited world/understanding.

I know of course in real life it's talking to Gemini running in the cloud, but it's almost entirely just looking at my own code/the project, and it's usually just form-related stuff; sometimes they're Drupal, sometimes Symfony, sometimes Laravel projects, almost always just PHP and JavaScript pieces. Oh, and I have it make a lot of database migrations for me. I've only a few times asked it to make API calls, and it was to a very well-known and well-documented API, so I really don't know much about its ability to discover and communicate with existing web services other than that one.

Starbeach

(259 posts)
8. Helpful
Fri Oct 31, 2025, 04:49 AM
17 hrs ago

Thanks for imagining these complex offerings down the line, and the displacements that could occur.

LetsGetSmartAboutIt

11. And you'll end up with a 90k sqft ballroom
Fri Oct 31, 2025, 06:10 AM
16 hrs ago

At the cost of demolishing a landmark... because it could negotiate a good price for said demolition.

What is this history you speak of ?

That was all pre AI so it doesn't exist to me.

(sorry, it does sound like the embodiment of an AI hallucination)

Bibbers

(12 posts)
12. Stupid Question About AI
Fri Oct 31, 2025, 10:02 AM
12 hrs ago

When I do a search on Ecosia or Duck Duck Go (I'm sure it's the same for Google) it generates an AI answer. I don't want it to generate an AI answer.

1. Is there a way to turn that off?

2. Do companies make money off that automatic AI-generated answer?

3. Does it use a lot of energy to generate that answer, in which case I REALLY want to turn it off.

I know I'm an idiot when it comes to this stuff, but I would really like to know. Thanks to anyone who can answer!

Hugin

(37,042 posts)
14. Some amateur answers...
Fri Oct 31, 2025, 10:36 AM
12 hrs ago

I am not familiar with Ecosia.

1. DDG allows users to opt out of AI searches. Check your settings to see if it’s set.

2. The best answer to this I can come up with is that they anticipate making money. Your eyeballs are their commodity. The problem up until now is that the volume of data is overwhelming. Truly targeted ads remain elusive. They see AI as a means to cut through this. The current problem for them is that consumers are fickle with their tastes and needs, which AI doesn't handle well. An example: once you have found the refrigerator of your dreams, you are not likely to search for one again soon. So now that they have an expert system trained up on your precise refrigerator metrics, you have moved on to duvets. They believe that the solution to this paradox is to know everything about you in granular detail. Unfortunately for them, human behavior is a fractal or labyrinth to a system (AI) which relies on a steady context. The AI system predicts that you will be needing a new refrigerator soon, but you have opted for a day at the spa.

3. There have been some articles on studies indicating that an AI search (due to the training of the LLMs) generates several thousand times as much carbon as a regular search. I think the AI search was around four GRAMS of carbon at the time of the study. Given the construction of data centers mentioned higher up in this thread, it's probably much higher now.

leftstreet

(37,846 posts)
15. +1
Fri Oct 31, 2025, 11:07 AM
11 hrs ago

Yes, I think the goal is making those targeted ads more profitable. For some time now when you make a purchase there are auto suggestions for product enhancements, or "customers also bought," etc. Predictive generating would be more tech-y, but better profits!

hunter

(40,116 posts)
13. What a pile of bullshit.
Fri Oct 31, 2025, 10:31 AM
12 hrs ago

Sadly, we probably won't learn until we are pulling bodies out of the rubble, and maybe not even then.

We are very stupid creatures.


Demsrule86

(71,269 posts)
16. The coding is useful...but the rest is just bullshit and will crash...the AI companies are
Fri Oct 31, 2025, 11:24 AM
11 hrs ago

will crash.

AZJonnie

(2,017 posts)
17. Yeah. I probably didn't make clear enough that my point was more political than technical
Fri Oct 31, 2025, 01:18 PM
9 hrs ago

Which is to say, the goal of AI is to steal a LOT of good-paying jobs, i.e. that's the only way the massive investments (and ongoing costs) will ever come close to being recouped.

Others above have pointed out what I'm saying is a long way off and may never happen, and that AI isn't being 'sold' this way. I'm just going to say that only 5 years ago I had absolutely no idea AI was going to become as capable as it is today in that span of time.

highplainsdem

(58,823 posts)
18. I've seen too many articles on problems with AI coding including security risks that aren't caught to be
Fri Oct 31, 2025, 01:39 PM
9 hrs ago

impressed by that.

It's very flawed technology that hasn't been widely adopted by business because of those flaws.

The con artists behind the AI hype are finally admitting it will always be fallible.

What it's best at is fraud - allowing people to pretend to have knowledge and skills they don't have.

It's been most impressive to students too ignorant to catch the mistakes, and students using AI to cheat remain a large fraction of users.

It's dumbing those students down, though. Dumbing down all users, but the damage is most obvious with students.

It's also doing psychological damage to a lot of users.

It's harming the natural environment.

It's harming the information ecosystem, flooding it with AI slop and disinformation and deepfakes.

And it's controlled by some of the greediest, most unethical people on the planet, some of whom have insane, cultlike ideas about their tech god.

The AI companies should be sued out of existence, and those responsible for the IP theft to train the LLMs and the con job that followed the theft, from the initial lies to the current circular financing, should be prosecuted. Long sentences would be appropriate considering the harm they've already done.

AZJonnie

(2,017 posts)
19. I probably should have made it more clear that my point was more political than technical
Fri Oct 31, 2025, 01:51 PM
8 hrs ago

Which is to say, in line with your oft-shared views, the only way AI generally is ever going to come close to recouping its costs is by replacing millions of good-paying jobs. Like, it would have to be able to coordinate pretty large-scale projects.

It's funny that you mention fraud, though, because the only actual AI implementation I've been part of at my job is a project where we were going to use AI to detect fraud. Specifically, it was for a 'business referrals' system, which is inherently ripe for abuse (and indeed, AI is used to defraud such systems as well), and we got pretty far into developing the implementation before the money dried up and the project was cancelled. It would have been a custom, purpose-driven installation, not trained on the world at large, just on the dataset gathered from the referral submission process.

Of course it would've needed months of training in the form of humans flagging the fraudulent submissions so that the AI could observe the patterns and eventually do the flagging on its own, but I will say my boss was pretty confident it could be brought up to about 95% accurate when trained for this purpose. Obviously this is not life or death stuff so 95% accurate would have been pretty damn useful to the client.
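The human-in-the-loop training described above can be sketched in miniature: humans label submissions, the system tallies which features co-occur with fraud, and then it flags new submissions on its own. Everything here (feature names, labels, the crude scoring rule) is invented for illustration; a real deployment would use a proper ML model and vastly more data.

```python
from collections import Counter

# Hypothetical human-labeled referral submissions: (features, label).
labeled = [
    ({"disposable_email", "duplicate_ip"}, "fraud"),
    ({"disposable_email", "new_account"}, "fraud"),
    ({"verified_phone"}, "legit"),
    ({"verified_phone", "new_account"}, "legit"),
]

# Tally how often each feature appears under each human-assigned label.
fraud_counts: Counter = Counter()
legit_counts: Counter = Counter()
for features, label in labeled:
    (fraud_counts if label == "fraud" else legit_counts).update(features)

def flag(features: set) -> str:
    """Flag a submission whose features lean toward the fraud-labeled history."""
    score = sum(fraud_counts[f] - legit_counts[f] for f in features)
    return "fraud" if score > 0 else "legit"

print(flag({"disposable_email", "new_account"}))  # -> fraud
print(flag({"verified_phone"}))                   # -> legit
```

The point of the sketch is the workflow, not the math: months of human flagging build up the counts, and only then can the system take over the flagging, with accuracy bounded by how good and how consistent the human labels were.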

BTW Hi HPD, hope you are well these days

MineralMan

(150,068 posts)
20. That's very interesting, but I think there is a flaw in that
Fri Oct 31, 2025, 02:22 PM
8 hrs ago

concept that can't be ignored. If human beings are required to do the actual work of turning a big project into reality, one thing that absolutely has to be taken into consideration is the general laziness of human beings. Go visit a project as simple as building a nice home on a lot for people to buy and live in. On a small scale, that is the same as building the largest project you can imagine. Individual humans work together in small groups to do the actual physical work of building that project. Standing nearby will be other human beings whose job it is to make sure the paid workers are doing the job efficiently and properly. Take those people away and that house is going to leak, lean, or fall down long before it should. Why? Because people are lazy in their work and take whatever shortcuts they can invent to do it the fastest, easiest way possible. And that's almost never the proper way.

So the AI does everything about the supply chain and finding people to do the work. It brings all of those things together, and what happens? Basically nothing, without the supervisors who tell all those workers what to do, when, and how to do it properly. Then, after the work is done, it gets inspected and redone if there are errors. Where do those supervisors come from, and how does the AI system know which ones are good and will get the work done properly?

Where do they come from? Well, they're self-generating. They all start out doing the actual work, which they learn to do properly and efficiently. They get noticed by the supervisors they work under and begin to get supervisory assignments because they know WTF they're doing. Years later, they are picking out other workers who are capable of becoming supervisors.

That is one thing no AI system will ever be able to do: supervise human workers. Because AI cannot do anything the workers can do, it cannot learn how to supervise workers. AI has no eyes, hands, feet or ears. It has no way of knowing the quality of what is under construction. That requires human experience and intelligence. Did that framer use the correct fasteners, and enough of them? Were they installed correctly? AI has no idea. It can only assume so.

The only people AI can replace are on the supply side of projects and top level organization and planning. After that, real humans have to take over, because the AI lives in dataspace and has no knowledge of physical reality. It just has descriptive models of that, but has never experienced the reality of any of those models.

And so it goes, on and on...

AZJonnie

(2,017 posts)
25. I really didn't mean it's definitely going to work and building a building was just a convenient illustration
Fri Oct 31, 2025, 06:40 PM
4 hrs ago

Wherein the main thrust of my point was more political than technical. The only way AI could really make money at the scale of its investment is if it can do really big things, and thus replace really large numbers of well-paying jobs. I should've made that more clear in the OP. But yeah, almost nothing about the physical part of the job can be replaced by AI. There will still need to be workers, and there's only so much AI would be able to do WRT picking out the correct workers. There still need to be contracting companies, who do most of the things you're talking about in your post. But there's still a LOT of stuff related to a task like this that an AI could do, if sufficiently advanced.

Lastly while it's right to say AI has no hands and feet, it most definitely can have 'eyes' at least in some sense.

hatrack

(63,866 posts)
24. All well and good, but the energy demand will kill us (financially and literally) . . . .
Fri Oct 31, 2025, 06:27 PM
4 hrs ago

TX is currently facing interconnection requests for new locations (AI, crypto and other data centers) for four times the maximum demand ever placed on their grid. That's 205 new gigawatts of requests. The maximum daily demand ever on their grid was 85.5 GW in August 2023.

https://insideclimatenews.org/news/28102025/texas-data-center-grid-planning-struggles/

This is one among a multitude of examples of how much of a fuck-up this whole thing is likely to end up being.

Q&A

1. How quickly can new plants be built?
2. Will they be built?
3. If they are, who will pay for them?
4. What happens to the sunk capital costs if the whole AI bubble goes tits-up?
5. What does this do to electric rates for everybody else?
6. Will promoters of data center development concern themselves with water availability?

1. Answer - Not quickly enough to satisfy the tech bros and bubble-pumpers.
2. Answer - Potentially, but utilities are famously conservative businesses, unlikely to "take a chance" on something as hyped as this without ironclad guarantees from the state.
3. Answer - Ratepayers, with maybe a smidge covered by the AI and Crypto-bro companies.
4. Answer - Ratepayers, and probably taxpayers nationwide.
5. Answer - Rates go up, probably by a lot.
6. Answer - No. Problematic when you consider how eager investors are for AI facilities in places like Texas, Arizona, southern California, Nevada etc.

That's the best-case short-term scenario.

The worst-case long-term scenario is that the hype and bullshit attendant upon Shiny New Tech Big Money Thing succeeds and much of it gets built. With new NG coming on line en masse, with coal retirements postponed and old coal plants taken out of mothballs, this means that whatever vanishingly small chance we had of dodging the double climate canister at 50 yards that Gaia is loading for us drops to zero.

But hey, we got to dream about Shiny New Tech Big Money Thing for a while, and writing term papers got marginally easier, and a few people you wouldn't lend your phone to got really, really rich.

AZJonnie

(2,017 posts)
27. Yeah, I certainly did not intend to have it come off sounding like it's all a 'good thing', ESPECIALLY not for the climate
Fri Oct 31, 2025, 06:50 PM
3 hrs ago

Thanks for the great and thoughtful contribution to the thread

Crypto pisses me off even more though I have to say. It's literally nothing more than WASTING valuable resources (i.e. power), and then calling it "wealth". It's such fucking bullshit. Money is supposed to represent actual resources/things of value, not the DESTRUCTION of it. It's completely fucking backwards, fundamentally

Turbineguy

(39,571 posts)
26. Priceline refund works on AI
Fri Oct 31, 2025, 06:45 PM
3 hrs ago

You put in for a refund. You chat with AI Penny. You go through the steps. Each answer from her is longer and comes much faster than a human can type.

Priceline keeps your money.

Ilikepurple

(347 posts)
29. AI, as both a technology and a commodity, is in its infancy, but this thread makes some nice points
Fri Oct 31, 2025, 08:01 PM
2 hrs ago

AZJonnie, thanks for putting thought into both your OP and your responses. It’s nice to see this kind of engagement in a discussion. I thank the rest of you for your input also.
