How to feed all the power-hungry AI data centers (caseyhandmer.wordpress.com)
37 points by madpen 6 months ago | 61 comments



This article is keyword stuffed nonsense.

Take the first sentence.

> As recently discussed on The Carbon Copy with Brian Janous, utilities are seeing major forecasted demand growth for the first time in decades, and almost entirely from data centers.

Utilities aren't seeing major forecasted demand growth for the first time in decades. It isn't entirely from DCs.

The author appears to have a startup, pitching a Power to NatGas plus direct air capture system because "Why not?™" [0]. I'm offended on behalf of chemical engineers everywhere.

[0] https://terraformindustries.wordpress.com/2023/06/26/the-ter...


i have a crazy idea... how about... not focusing on ever increasing the size of ai models, so that we don't need to build all those power-hungry ai training datacenters? just a thought


It’s always interesting to see these ideas on a tech forum.

Should we as humanity stop right here? Draw a line in the sand and not do anything new? Maybe we should move backwards, undo the Industrial Revolution, and go back to each individual growing their own food?

This is a serious question. What do you propose? Because you are not able to pick and choose what humanity tries out, so shall we stop entirely?


I think it is not very generous to interpret somebody suggesting that we try different things, as

> Because you are not able to pick and choose what humanity tries out, so shall we stop entirely?

I don’t see anything that dictatorial in their suggestion.

There’s clearly something wrong in the current LLM-based ecosystem. They take gigawatt-hours to train and digest the entire corpus of the internet to produce a model that writes at, what, the level of an erudite college freshman?

> not focusing on ever increasing the size of ai models, so that we don't need to build all those power-hungry ai training datacenters?

I read the italic bit not as a command to stop, but as a suggestion to come up with better algorithms. Which researchers are presumably working on. Hopefully ChatGPT & friends don’t suck up all the oxygen.


> There’s clearly something wrong in the current LLM-based ecosystem

LLMs are in their infancy, and we have individuals calling for reduced electricity usage along with others like yourself saying something is wrong with them. I simply believe those are wild opinions to have about a space that is still so new. Either businesses find these tools useful in the long run or they don't, and that will drive prices and efficiency.


We should invent solutions for problems that we have, instead of wildly trying to apply this tech to everything we can think of. We are skipping the benefit/cost considerations.


People are doing cost/benefit analyses. It’s just that defining costs and benefits is subjective, and you disagree with their decisions.


Many companies and individuals are figuring out the benefits. There are a number of companies that are seeing real value. We are not skipping anything.


Is AI not a solution for our problems? I can imagine a declining population: a younger generation that does not want to reproduce, nor to work the available jobs. We need a solution for that.

So we need systems that learn human knowledge, refine it, make it better, and take over all of the work.

And for programmers: is sitting not the new cigarette smoking and alcohol drinking when it comes to early death?


How does AI help in solving the issues you brought up in your post?

> a declining population: a younger generation that does not want to reproduce, nor to work the available jobs. We need a solution for that.

And what do you reckon is the reason for that? "Kids these days are lazy and only want to have fun", or something to that extent? Or is that because the cost of living has skyrocketed, the future is uncertain and people are afraid of the planet becoming more and more inhospitable for humans in the following decades or even years?

Maybe people do not want to work despite the job availability because those jobs offer piss poor conditions and no security? If not for all these reasons, why would people act so differently than in the past?

Once you put your precious A"I" in the equation, what do you get out of it? Even less job security, more leverage for bad employers to threaten workers with the magic wand of "you'll get substituted", and arguably way shittier quality of products all around (cases in point: A"I" customer service that fails to resolve customer issues or becomes a huge nuisance to deal with, or A"I" "art" in articles and blogs that's so easy to spot you could arguably be better off with no illustrations at all).

> So we need systems that learn human knowledge, refine it, make it better, and take over all of the work.

First off, who is "we"? And secondly, once it "takes over all of the work" (whatever that means), what are people going to do? Do you really think we'll get some utopian fantasy like UBI?

> And for programmers: is sitting not the new cigarette smoking and alcohol drinking when it comes to early death?

How tf does that have anything to do with A"I" helping? If I use Copilot to "help" me out in programming, I'll still have to sit, will I not?


> And what do you reckon is the reason for that?

Declining birth-rates seem to happen to every society that passes a certain level of industrialization, with the most well-off and secure in these societies having the fewest kids, and birth rates as a whole declining as more people enter this "totally well-off and secure" demographic. In fact, people in these same late-industrialization cultures who aren't well-off, whose futures remain uncertain, do not experience any decline in aggregate birth-rate at all. Nor do people in societies where nobody is well-off — these in fact tending to be the societies with the highest birth-rates!

The trend in my own observations, from personal experience with both family-wanting "trad" people and "child-free" people, is that as the self-perceived value and security of your own life goes up, two things seem to shift in your perspective:

1. your subjective resource-commitment bar for bringing a new life into the world grows ever higher proportionally to your available resources (i.e. a millionaire has the intuition that they'll have to allocate a good portion of their millions to any kids they have; a billionaire thinks the same, but now it's a good portion of their billions.) Having kids is always intuitively expensive — but as your own life becomes better and more secure, having kids begins to feel exceedingly, inhibitingly expensive. (Even though it's likely eminently practical for people with 1/1000th your resources, let alone for you!)

2. specifically for women, the act of gestating and raising of a baby grows ever-larger in its subjective potential for negative impact on their (increasingly-highly-valued and de-risked) lives — in terms of both time-cost and risk to their health, career, etc.

Given these trends, here's my hypothesis for what's happening:

We have seemingly evolved to feel driven to reproduce when under a sort of optimum level of scarcity and uncertainty. We feel this drive the most when:

• things are bad enough that you feel your own opportunities for a good life are done and spent — so having kids is your genes' Hail Mary for trying again in 15 years when opportunities could be better, with this lack of further opportunity making the risk to your health and resources of having a kid feel "worth it";

• but things still aren't so bad at the moment that any kids you conceive would literally starve and not make it to reproductive age themselves.

This heuristic worked just fine for the entirety of human evolution (and perhaps long before that); but it seems to "go wrong" in post-industrial society, for people experiencing no present scarcity or uncertainty. The heuristic wasn't designed to cope with this! It outputs nonsense — things like:

• "you never need to have kids, you'll be immortal, always healthy, and will always have infinite opportunities"

• "the right time to have kids is after you retire, when you'll have time, and also money accumulated to raise them. But of course, only if you can get someone else to carry the baby for you, since you won't have working gonads by then. And only if you can afford a team of nannies and private tutors, since you won't be able to handle the rambunctiousness of a toddler by then."

...both of which, as intuitions, tend to result in people just never having kids — despite often eventually regretting not having had kids. Because, by the time these intuitions shift or resolve positively, it's too late.

This broken intuition about when (or if) you should ever have kids, seems to lead to people also developing very different perspectives on sexual relationships. These "child-free" people — especially women — often seem to experience much lower sexual drive, or even fear of sex for its potential to force an (unwanted) child upon them. And this negative attitude toward sex, secondary to fear of reproduction, often then leads to either strained romantic relationships, or just not bothering with romantic relationships at all.

If you wanted to medicalize all this, you might call it a specific kind of neurosis that humans develop, when they no longer need to do anything much to ensure all their needs are satisfied. It's a compulsion toward catastrophizing all the negative aspects of reproduction, child-rearing, sex, and romantic relationships; and a disconnection from the emotional valence-weight that the positives of these same subjects would normally have.

(And I suspect that this is exactly the framing various societies will eventually take toward this developing "problem" — as I can totally imagine drug companies developing and marketing treatments for "reproductive neurosis", that trick just the part of your brain that cares about that sort of stuff, into thinking you're not well-off and not secure, so that it'll spit out the signals to tell you to feel more positively toward these subjects.)

> Do you really think we'll get some utopian fantasy like UBI?

Yes. Why wouldn't we? As soon as nobody has to labor, UBI is just one global (probably bloody) proletarian revolution away.

There has always been a "so who does the work nobody wants to do, then" blank spot at the end of the Marxist plan, that left all previous Communist revolutions floundering after the "revolution" part.

But "intelligent robots grow all the food autonomously, cook it, and give it out for free, while also maintaining themselves and the whole pipeline that creates their parts, to the profit of no one but the benefit of all — the mechanistic equivalent of Sikh Langar, accomplished at megaproject scale in every country" (and analogous utopias re: clean water, housing, etc) form a set of neat snap-in answers to that blank spot. I have always presumed that these are what AI advocates are vaguely attempting to gesture toward when they imagine AI "taking over all of the work."


> I can imagine a declining population: a younger generation that does not want to reproduce, nor to work the available jobs.

Luckily the future does not depend on what you can or can't imagine. And no, even if that imagination were accurate, AI would not be the solution for any of those problems. We know the root causes, we know the effects, having a bunch of companies boil the oceans to have AI generate mediocre copy and uninspired illustrations does not help with anything other than making those companies richer and displacing even more workers and shutting down even more career paths.

Young people don't want to have kids or work because all the aspirational goals previous generations had have become unattainable for them. We're counting down the years until the climate catastrophe becomes impossible to ignore even in wealthier countries, and there is literally nothing the masses can do, because politicians across the world have shown a complete disregard for human life in the face of a global pandemic that had a death toll in the millions before we simply stopped counting.

> So we need systems that learn human knowledge, refine it, make it better, and take over all of the work.

And how would that benefit anyone but those who own those systems and charge for their use? How would that benefit the "younger generation"? On the contrary, this would seem to do what automation always does: drive down wages and reduce workforces while harming workers' ability to bargain for better working conditions.

As tech workers we should take the time to understand that the "Luddites" were not actually opposed to technological advancement but to the consequences of it in a system that always rewards the business but not the worker for an increase in productivity. You don't need to withdraw into a cabin in the woods to realize that, the way we have set up the systems that govern us, technological progress will always only accelerate the wealth drain to the rich, never reverse it.

And that doesn't even get into how most AI startups are completely unsustainable (as growth-oriented startups tend to be) or how the proliferation of underpriced AI has contributed to the destruction of knowledge via search engine spam, "content generation" and social media bots.

If you want to create fully automated gay luxury space communism, be my guest, but you'll also need to work on the "communism" part if you don't want the "fully automated" part to push things in the opposite direction.


What about education for all, better healthcare, and a solution to hunger? Or discrimination? I’m not seeing anything trying to solve those in the AI space right now. It’s all about replacing workers and artists.


Well yeah, because it's bad at all the things you want it to solve.


I'd argue that the real innovation with these models is making them smaller. Just throwing compute resources at a model with more parameters is easy and doesn't really expand our knowledge. IMO, larger and larger LLMs aren't that impressive. Being able to shrink a model down, retain its accuracy (to a degree), and run it on smaller hardware is impressive, and will more likely lead to AI/ML being intertwined with people's day-to-day.


I don’t disagree, and I think it is a natural evolution, similar to other aspects of tech. We innovate and then make it more efficient. I don’t want to stop the opportunity to innovate because of electricity usage, though.


Yes, you can't save yourself rich.

I think these kinds of things will be solved by market forces.


What people want is limited by their ability level.

And there's no reason to ever be bullish on liberal (classical) society's capacity for intentional change.

Human agency doesn't mean shit.


Engineers like to create solutions, not problems.


And prone to creating solutions to problems that do not exist.


> Should we as humanity stop right here? Draw a line in the sand and not do anything new?

Maybe someone with the knowledge of hindsight in a few generations will say that, yes, we should stop and draw a line in the sand.

We are still used to the idea that we can treat the externalities of the systems we build as infinite and boundless, while it is getting increasingly clear that they are not. I am not saying we should stop working on LLMs. I'm just saying we should probably factor in (that means: assign economic value to) the downstream consequences of said high energy consumption if we don't want to destroy our own habitats. And that would probably lead to a world where LLMs are used for actually important problems instead of furthering the goals of the few who truly profit from surveillance capitalism.

But hey, let's continue wasting a year's worth of energy on training LLMs on cat pictures — they are cute after all.


There are unaccounted externalities in many areas, but the energy use of LLMs isn't one of them: people are paying full market price for the energy used to train or run LLMs, that energy cost is large enough that people care a LOT about it, and they make intentional tradeoffs about what is worth it.

And if people vote with their wallets that they are willing to pay for a year's worth of energy to train LLMs on cat pictures, we should respect their choice that apparently this is what they consider a valuable use of the limited resources they have.


Yup, and the market price for energy doesn't reflect what it is being used for, which was my point. Whether you train the LLM to predict extreme weather events or to put funny hats on pictures of cats, you pay the same — in fact, the latter will drive the price up for the former.

Sure, there are still people out there who believe the invisible hand of the market will regulate everything to give us a good outcome, despite all real-world evidence to the contrary. But in a market, more demand for bullshit reasons drives prices up for everybody — also for things that are essential for the survival of the poorest, the long-term sustainability of the environment, health services, etc.

The question I laid out was whether it is really a good idea to treat some random bullshit the same as essential services and externalities — especially considering we are walking into a climate catastrophe that is the result of too much energy use.

Granted, we live in a society that is kinda built on ignoring these connections — otherwise we would go mad or cynical, but lying to ourselves has a hidden cost that will come due at some point.


But as we don't have mass long-term energy storage, future energy is not exchangeable with current energy, so future energy costs shouldn't matter for the current training of LLMs. Whether tomorrow it will be a good use of energy to put funny hats on pictures of cats is something people will be able to choose based on the availability of energy. It's up to them to decide what's essential or not, and whether they are willing to sacrifice some entertainment to save energy — and that's what they should be doing, with money as the mechanism of measure. If a person wants to do something silly, it would be inappropriate for someone else to decide "no, that's random bullshit" and prohibit them from spending their resources (energy or otherwise) on it. The fact that this decision will affect the price of e.g. energy for others is NOT an externality; it's the core economic mechanism for allocating scarce resources. If some people want to spend resources on cat pictures and they are fully paying the costs of that, then that's directly reflected in the price they're paying, not an unpriced externality.


Are you sure you aren't deep in cognitive-dissonance land now?

More energy use is more energy use — momentary or over time (= work), it doesn't make a difference here, especially with LLMs, which aren't exactly producing bursty loads and could in theory be scheduled at any time of day. And energy corps will just have to produce more energy, which has its own negative externalities on the world.

That means my stupid project still has an impact on the rest of the world. As of now, everybody luckily still has the luxury to figure this out for themselves. But we have to realize that, depending on how bad things get, this won't be feasible forever. And how fast things get bad depends on our collective energy use. If you're in a desert with limited water, you wouldn't let your kid pour it on the ground because it makes a fun sound, would you?

You would only let your kid do that if you had no mental model of water being a limited resource and no sense of the consequences a lack of water would have for you and your kid.

Energy is a limited resource, and producing more of it has its own secondary costs (e.g. for climate change).

Operating with limited resources doesn't mean we are not allowed to have fun. It just means you don't pour your limited resource on the ground for the fun of it.


One man's "pour your limited resource on the ground" is another man's "valuable use of that resource" — if they're choosing to do something with the energy they purchase, apparently they consider it a worthwhile use of that resource. If I'm in the desert and want to pour some water on the ground to grow orchids, that's my choice of how to use it. If it's really important to conserve water, then it will be costly enough that I won't waste much of it on luxuries; but if it's cheap enough, then people are voting with their wallets that it's NOT (yet?) so important to conserve it that they would refuse to spend it on luxuries.


“Thou shalt not question”


> Should we as humanity stop right here? Draw a line in the sand and not do anything new?

There will be a point when the answer is "yes".

For example, when we can produce hobby 3d printers for biological viruses. Or DIY kits for nuclear weapons. Or cameras that are the size of a grain of sand and cost only 1 cent. Stuff that some individuals might want, but society will never be ready for.


It's not just "progress" vs "no progress". Some technical innovations, like leaded gasoline, have had more of a negative impact than a positive. It's not a binary, and not all technology is without cost.

The argument against current AI progress isn't anti-technology, it's pointing out that these things can have a hugely negative impact that's mostly being shouted down or swept under the rug in the name of "progress". The climate crisis is pretty serious business, and AI power consumption is making it worse, not better. The risk of huge amounts of unemployment is mostly being handwaved away as "yeah, that's not gonna happen, we always make jobs out of somewhere".


Even if the models aren't gargantuan, millions of people using AI applications is going to burn a lot of power.


I still haven’t seen a use case that will make AI adopted by the masses; I think it’s still mostly hot air. It’s good for what it is for office jobs and higher education, but only for specific tasks, and it has to be triple-checked.

Just yesterday at work I asked it for a PowerShell script to upload a local file to S3 and it hallucinated a method. The whole script is like 10 lines. How could it mess that up when, meanwhile, it’s about to change the world and be mass adopted?

This is Claude 3, the best model for coding…
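
For reference, the working version really is about ten lines with the AWS Tools for PowerShell module — a minimal sketch, with bucket, key, and path as placeholders and credentials assumed to be configured already:

    # Needs the AWS.Tools.S3 module: Install-Module AWS.Tools.S3
    Import-Module AWS.Tools.S3

    $bucket = "my-bucket"          # placeholder
    $key    = "uploads/report.csv" # placeholder object key
    $file   = "C:\temp\report.csv" # placeholder local path

    # Write-S3Object uploads a local file to the given bucket/key in one call
    Write-S3Object -BucketName $bucket -Key $key -File $file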

After using these models for a while, I think they’re really helpful for getting off the ground in terms of coding and writing, and for helping review coding and writing, but that’s about all I’ve seen so far. It feels like we’re also getting diminishing returns with the current paradigm, so we’ll see if I’ll be eating my hat next year, but I really don’t see mass adoption like a search engine.


I use ChatGPT4 daily for PowerShell scripts. The only time it seems to hallucinate is when I'm forcing it to target PowerShell 4.0 rather than 5.1 or 7.


I've had GPT4 produce quite a bit of useful code, and it's indispensable when working with new libraries or in new domains.


Even if every dev in the world adopted an LLM copilot, that’s a teeny tiny sliver of “the masses”.


Except that LLMs are going to eat into everything white-collar: law, medicine, writing, common office tasks, etc. Beyond that, a function-calling chat interface enables a class of application that couldn't exist without it.
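
For the unfamiliar: "function calling" means the model returns structured arguments for tools you declare, instead of prose. Here's a minimal sketch of such a declaration — the shape follows OpenAI's function-calling convention, but the tool itself is made up:

    # Hypothetical tool declaration; an LLM with function calling fills in
    # the arguments, and your own code executes the real call.
    $tool = @{
        name        = "create_invoice"
        description = "Create an invoice for a customer"
        parameters  = @{
            type       = "object"
            properties = @{
                customer_id = @{ type = "string" }
                amount_usd  = @{ type = "number" }
            }
            required   = @("customer_id", "amount_usd")
        }
    }
    $tool | ConvertTo-Json -Depth 5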


Has AI not already been adopted by the masses?


Absolutely not. I’d wager that only a slim majority of Americans have even heard of ChatGPT, fewer have used it, and fewer still have done something useful/productive with it.

Here are stats from 6 months ago, I doubt it’s changed significantly since then:

https://www.pewresearch.org/short-reads/2023/08/28/most-amer...


> about four-in-ten adults under 30 have used it

That seems pretty massive honestly, that's way beyond what I would consider adopted by the masses.


That’s four-in-ten among those that have heard of it. So cut that roughly in half to get the percentage of the whole population.

Or another way of thinking of it: among people that have heard of your free-to-use product, a majority haven’t bothered to try it.


This only makes their argument stronger.


But then who would pay OpenAI to keep building bigg- I mean, smarter models?


If your business model depends on a continuous growth curve where your profitability point is more than two years away... you may be planning for an exit in 18 months.

The only reliable long term growth curves so far are world population and global average temperature.


Checks notes... World population? We're in for a rude awakening.


Solar is nowhere close to being the reliable, all-weather, and cheap source of energy that nuclear and fossil fuels can be, which is critical for the tech industry in general.


The actual workload in ML model training is totally deferrable. It ought to be very compatible with intermittent power, and actually great for the renewable grid. The two solutions for intermittency are: buy batteries, or buy so many solar panels that you still get enough power even when they are operating at like 30% of their rated output. A workload that will pay big bucks for any excess power you’ve got will provide the needed ROI for the latter solution.
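
As a sketch of what "deferrable" could look like in practice — pause on scarce power, resume on surplus — here's a hypothetical wrapper, where Get-GridPrice, Suspend-TrainingJob, and Resume-TrainingJob are made-up stand-ins for a market feed and checkpoint-aware job control:

    # Hypothetical sketch: run a checkpointable training job only when power
    # is cheap (e.g. midday solar surplus). All three helpers are stand-ins.
    $threshold = 30  # assumed price cutoff in $/MWh

    while ($true) {
        $price = Get-GridPrice       # hypothetical: current spot price
        if ($price -lt $threshold) {
            Resume-TrainingJob       # hypothetical: resume from last checkpoint
        } else {
            Suspend-TrainingJob      # hypothetical: checkpoint and pause
        }
        Start-Sleep -Seconds 300     # re-check every five minutes
    }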

Unfortunately the chips are all too expensive, so everybody wants to run them full-out all the time.


TBH I was expecting something about plugging humans into a huge bioenergy farm and letting them live out their lives in a simulation.


Easy: the utility of AI is huge, and the load is understandable and controllable.

DCs are run by companies with a shitload of money.

Not a hard problem.

And we can use the heat to heat houses too!


The problem is that massive amounts of PV are needed just to replace existing fossil fuels, so these data centers would effectively be extending the carbon output of coal/natural gas.

If AI training is that power-hungry, maybe we shouldn't be doing it ... right now.

It's all fueled by speculative VC funding. This isn't a slam dunk economics calculation, certainly no more than regulation to shut this nonsense down would be.


That makes sense if PV deployment is limited by supply. But the solar industry is currently limited by demand. You can see it in market news for solar components:

"Wafer prices stable-to-soft on market oversupply"

https://www.pv-magazine.com/2024/03/15/wafer-prices-stable-t...

"Wacker Chemie’s sales, earnings fall in tough market"

https://www.pv-magazine.com/2024/01/30/wacker-chemies-sales-...

"China polysilicon prices fall 51.8% year-on-year amid supply glut"

https://www.pv-magazine.com/2024/01/19/china-polysilicon-pri...

"China solar cell prices decline on sluggish downstream demand"

https://www.pv-magazine.com/2024/01/05/china-solar-cell-pric...


I thought building new solar plants was cheaper than operating existing gas and coal plants. What’s the holdup on building solar capacity as fast as we can and saturating demand? Permitting? Capital deployment in utility markets?


In the United States, I think that permitting, interconnection queue backlogs [1], and political efforts to protect legacy coal against cheaper competition [2] are the biggest obstacles.

That said, the US had its best-ever year for solar deployment in 2023, and this year is projected to be even better: https://www.eia.gov/todayinenergy/detail.php?id=61424

[1] https://www.utilitydive.com/news/grid-interconnection-queue-...

[2] https://montanafreepress.org/2023/06/03/the-battle-for-clean...


It's not a replacement for the cost of operating existing gas and coal plants; building up new solar capacity enables you to burn less gas or coal at the times when the sun is shining and thus cut its fuel costs, but it does not enable you to shut down the plant and cut any of its other operating costs, because you still need the same capacity for e.g. winter evenings when at peak daily consumption there is zero sunlight.
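
To put toy numbers on that (every figure assumed for illustration, not taken from the article):

    # Toy illustration, all numbers assumed:
    $plantMW      = 500   # gas capacity kept around for winter evenings
    $fixedPerKWyr = 100   # fixed cost to keep it available, $/kW-year
    $fuelPerMWh   = 40    # avoidable fuel + variable cost, $/MWh
    $mwhDisplaced = 1e6   # annual MWh displaced by new solar

    $fuelSaved = $fuelPerMWh * $mwhDisplaced      # $40M/yr avoided
    $fixedKept = $fixedPerKWyr * $plantMW * 1000  # $50M/yr still owed
    "Fuel saved: {0:N0}; fixed costs kept: {1:N0}" -f $fuelSaved, $fixedKept

Solar wipes out a chunk of the fuel bill, but the availability bill survives for as long as the plant has to stay on call.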


The problem with renewables is that most people talking about replacing fossil fuels with renewables don't understand how the grid actually works.

What we need most is better storage technology. PV, wind, and hydro are highly variable, which means you need something to cover the gaps between production and demand. Gas and coal plants are actually super useful for this because you can literally scale production up or down as needed by increasing or reducing fuel. If you want to eliminate gas and coal, you need a way to store excess energy and release it on demand - and to be able to do so at the same scale as with fossil fuels.

And nuclear power is actually worse in this regard because you effectively can't turn a nuclear power plant on or off. If it's on, it must remain on because taking it offline can take days and the same goes for turning it back on. But the problem right now is not lack of production but stability. And stability can only be achieved with better storage technologies - or fossil fuels.
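
A toy hourly balance makes that gap concrete (all numbers invented for illustration):

    # Toy example: flat demand vs an assumed solar curve across a day, with
    # storage charging on surplus and discharging into the gaps.
    $demand = 100                                      # MW, flat for simplicity
    $solar  = 0, 0, 40, 90, 140, 160, 130, 70, 20, 0   # MW per hour, assumed
    $stored = 0                                        # MWh currently in storage

    foreach ($s in $solar) {
        $net = $s - $demand
        if ($net -ge 0) {
            $stored += $net    # midday surplus charges storage
        } elseif ($stored -ge -$net) {
            $stored += $net    # storage covers the shortfall
        } else {
            "Unmet gap of $(-$net - $stored) MW - the hour gas/coal covers today"
            $stored = 0
        }
    }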


This might be why Bill Gates is invested so heavily into TerraPower.


Bill Gates has a $1B short position on Tesla.

I wouldn't use what Bill Gates is investing in as any indicator of what we should be doing.


Tesla has been on a steady [and IMHO rightful] decline since Jan 3, 2024.


I meant in terms of doing the right thing for the planet


They're not doing that, either... the only reason you would drive a Tesla is performance. The environmental impact of making a full-EV is much more damaging than using a gas engine.


That old oil industry FUD line again? Seriously, it seems like a switch was flipped and now people who claim to care about the environment spout oil industry myths.


The real "oil industry myth" is that of your carbon footprint [1].

[1]: https://www.youtube.com/watch?v=1J9LOqiXdpE


I thought it was widely accepted that the Tesla stock price is heavily inflated relative to the company's actual performance, and that Elon Musk is increasingly considered a liability, as his vaporware marketing (e.g. the various blatantly misleading claims about the trajectory of FSD capabilities) was a major factor in the stock's growth in the past.



