It’s always interesting to see these ideas on a tech forum.

Should we as humanity stop right here? Draw a line in the sand and not do anything new? Maybe we should move backwards, undoing the Industrial Revolution and going back to each individual growing their own food?

This is a serious question. What do you propose? You are not able to pick and choose what humanity tries out, so shall we stop entirely?




I think it is not very generous to interpret somebody's suggestion that we try different things as

> You are not able to pick and choose what humanity tries out, so shall we stop entirely?

I don’t see anything that dictatorial in their suggestion.

There’s clearly something wrong in the current LLM-based ecosystem. They take gigawatt-hours to train and digest the entire corpus of the internet to produce a model that writes at, what, the level of an erudite college freshman?
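(A rough back-of-envelope, where every number is an illustrative assumption rather than a figure for any particular model, suggests gigawatt-hours is at least the right order of magnitude:)

    # Back-of-envelope for the "gigawatt-hours" claim; every number
    # below is an illustrative assumption, not a measured figure.
    gpus = 10_000        # accelerators in the training cluster (assumed)
    watts_per_gpu = 400  # average draw per accelerator, incl. overhead (assumed)
    days = 14            # wall-clock training time (assumed)

    gwh = gpus * watts_per_gpu * days * 24 / 1e9
    print(f"{gwh:.2f} GWh")  # ~1.34 GWh -> gigawatt-hours is plausible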

> not focusing on ever increasing the size of ai models, so that we don't need to build all those power-hungry ai training datacenters?

I read the italic bit not as a command to stop, but as a suggestion to come up with better algorithms, which researchers are presumably working on. Hopefully ChatGPT & friends don’t suck up all the oxygen.


> There’s clearly something wrong in the current LLM-based ecosystem

LLMs are in their infancy, and we have individuals calling for reduced electricity usage along with others like yourself saying something is wrong with them. I simply believe those are wild opinions to have about a space that is still so new. Either businesses find these tools useful in the long run or they don't, and that will drive prices and efficiency.


We should invent solutions for problems that we have, instead of wildly trying to apply this tech to everything we can think of. We are skipping the benefit/cost considerations.


People are doing cost/benefit analyses. It’s just that defining costs and benefits is subjective, and you disagree with their decisions.


Many companies and individuals are figuring out the benefits. There are a number of companies that are seeing real value. We are not skipping anything.


Is AI not a solution for our problems? I can imagine a declining population: a younger generation that does not want to reproduce, nor to work the available jobs. We need a solution for that.

So we need systems that learn human knowledge, refine it, make it better, and take over all of the work.

And for programmers: is sitting not the new cigarette smoking and alcohol drinking as a cause of early death?


How does AI help in solving the issues you brought up in your post?

> a declining population: a younger generation that does not want to reproduce, nor to work the available jobs. We need a solution for that.

And what do you reckon is the reason for that? "Kids these days are lazy and only want to have fun", or something to that extent? Or is it because the cost of living has skyrocketed, the future is uncertain, and people are afraid of the planet becoming more and more inhospitable for humans in the following decades or even years?

Maybe people do not want to work, despite the job availability, because those jobs offer piss-poor conditions and no security? If not for all these reasons, why would people act so differently than in the past?

Once you put your precious A"I" in the equation, what do you get out of it? Even less job security, more leverage for bad employers to threaten workers with the magic wand of "you'll get substituted", and arguably way shittier quality of products all around (cases in point: A"I" customer service that fails to resolve customer issues satisfactorily or becomes a huge nuisance to deal with, or A"I" "art" in articles and blogs that's so easy to spot that you could arguably be better off with no illustrations at all).

> So we need systems that learn human knowledge, refine it, make it better, and take over all of the work.

First off, who is "we"? And secondly, once it "takes over all of the work" (whatever that means), what are people going to do? Do you really think we'll get some utopian fantasy like UBI?

> And for programmers: is sitting not the new cigarette smoking and alcohol drinking as a cause of early death?

How tf does that have anything to do with A"I" helping? If I use Copilot to "help" me out in programming, I'll still have to sit, will I not?


> And what do you reckon is the reason for that?

Declining birth-rates seem to happen to every society that passes a certain level of industrialization, with the most well-off and secure in these societies having the fewest kids, and birth rates as a whole declining as more people enter this "totally well-off and secure" demographic. In fact, people in these same late-industrialization cultures who aren't well-off, whose futures remain uncertain, do not experience any decline in aggregate birth-rate at all. Nor do people in societies where nobody is well-off — these in fact tending to be the societies with the highest birth-rates!

The trend in my own observations, from personal experience with both family-wanting "trad" people and "child-free" people, is that as the self-perceived value and security of your own life goes up, two things seem to shift in your perspective:

1. your subjective resource-commitment bar for bringing a new life into the world grows ever higher proportionally to your available resources (i.e. a millionaire has the intuition that they'll have to allocate a good portion of their millions to any kids they have; a billionaire thinks the same, but now it's a good portion of their billions.) Having kids is always intuitively expensive — but as your own life becomes better and more secure, having kids begins to feel exceedingly, inhibitingly expensive. (Even though it's likely eminently practical for people with 1/1000th your resources, let alone for you!)

2. specifically for women, the act of gestating and raising of a baby grows ever-larger in its subjective potential for negative impact on their (increasingly-highly-valued and de-risked) lives — in terms of both time-cost and risk to their health, career, etc.

Given these trends, here's my hypothesis for what's happening:

We have seemingly evolved to feel driven to reproduce when under a sort of optimum level of scarcity and uncertainty. We feel this drive the most when:

• things are bad enough that you feel your own opportunities for a good life are done and spent, so having kids is your genes' Hail Mary: trying again in 15 years when opportunities could be better, with this lack of further opportunity making the risk to your health and resources of having a kid feel "worth it";

• but things still aren't so bad at the moment that any kids you conceive would literally starve and not make it to reproductive age themselves.

This heuristic worked just fine for the entirety of human evolution (and perhaps long before that); but it seems to "go wrong" in post-industrial society, for people experiencing no present scarcity or uncertainty. The heuristic wasn't designed to cope with this! It outputs nonsense — things like:

• "you never need to have kids, you'll be immortal, always healthy, and will always have infinite opportunities"

• "the right time to have kids is after you retire, when you'll have time, and also money accumulated to raise them. But of course, only if you can get someone else to carry the baby for you, since you won't have working gonads by then. And only if you can afford a team of nannies and private tutors, since you won't be able to handle the rambunctiousness of a toddler by then."

...both of which, as intuitions, tend to result in people just never having kids, despite often eventually regretting it, because by the time these intuitions shift or resolve positively, it's too late.

This broken intuition about when (or if) you should ever have kids seems to lead to people also developing very different perspectives on sexual relationships. These "child-free" people, especially women, often seem to experience much lower sexual drive, or even fear of sex for its potential to force an (unwanted) child upon them. And this negative attitude toward sex, secondary to fear of reproduction, often then leads to either strained romantic relationships, or just not bothering with romantic relationships at all.

If you wanted to medicalize all this, you might call it a specific kind of neurosis that humans develop, when they no longer need to do anything much to ensure all their needs are satisfied. It's a compulsion toward catastrophizing all the negative aspects of reproduction, child-rearing, sex, and romantic relationships; and a disconnection from the emotional valence-weight that the positives of these same subjects would normally have.

(And I suspect that this is exactly the framing various societies will eventually take toward this developing "problem" — as I can totally imagine drug companies developing and marketing treatments for "reproductive neurosis", that trick just the part of your brain that cares about that sort of stuff, into thinking you're not well-off and not secure, so that it'll spit out the signals to tell you to feel more positively toward these subjects.)

> Do you really think we'll get some utopian fantasy like UBI?

Yes. Why wouldn't we? As soon as nobody has to labor, UBI is just one global (probably bloody) proletarian revolution away.

There has always been a "so who does the work nobody wants to do, then" blank spot at the end of the Marxist plan, that left all previous Communist revolutions floundering after the "revolution" part.

But "intelligent robots grow all the food autonomously, cook it, and give it out for free, while also maintaining themselves and the whole pipeline that creates their parts, to the profit of no one but the benefit of all — the mechanistic equivalent of Sikh Langar, accomplished at megaproject scale in every country" (and analogous utopias re: clean water, housing, etc) form a set of neat snap-in answers to that blank spot. I have always presumed that these are what AI advocates are vaguely attempting to gesture toward when they imagine AI "taking over all of the work."


> I can imagine a declining population: a younger generation that does not want to reproduce, nor to work the available jobs.

Luckily the future does not depend on what you can or can't imagine. And no, even if that imagination were accurate, AI would not be the solution for any of those problems. We know the root causes and we know the effects; having a bunch of companies boil the oceans so AI can generate mediocre copy and uninspired illustrations does not help with anything other than making those companies richer, displacing even more workers, and shutting down even more career paths.

Young people don't want to have kids or work because all the aspirational goals previous generations had have become unattainable for them. We're counting down the years until the climate catastrophe becomes impossible to ignore even in wealthier countries, and there is literally nothing the masses can do, because politicians across the world have shown a complete disregard for human life in the face of a global pandemic whose death toll was in the millions before we simply stopped counting.

> So we need systems that learn human knowledge, refine it, make it better, and take over all of the work.

And how would that benefit anyone but those who own those systems and charge for their use? How would that benefit the "younger generation"? On the contrary, this would seem to do what automation always does: drive down wages and reduce workforces while harming workers' ability to bargain for better working conditions.

As tech workers we should take the time to understand that the "Luddites" were not actually opposed to technological advancement but to its consequences in a system that always rewards the business, not the worker, for an increase in productivity. You don't need to withdraw into a cabin in the woods to realize that, the way we have set up the systems that govern us, technological progress will only ever accelerate the wealth drain to the rich, never reverse it.

And that doesn't even get into how most AI startups are completely unsustainable (as growth-oriented startups tend to be) or how the proliferation of underpriced AI has contributed to the destruction of knowledge via search engine spam, "content generation" and social media bots.

If you want to create fully automated gay luxury space communism, be my guest, but you'll also need to work on the "communism" part if you want to make the "fully automated" part not result in the opposite direction.


What about education for all, better healthcare, and a solution to hunger? Or discrimination? I’m not seeing anything trying to solve those in the AI space right now. It’s all about replacing workers and artists.


Well yeah, 'cause it's bad at all the things you want it to solve.


I'd argue that the real innovation with these models is making them smaller. Just throwing compute resources at a model with more parameters is easy and doesn't really expand our knowledge. IMO, larger and larger LLMs aren't that impressive. Being able to shrink a model down, retain its accuracy (to a degree), and run it on smaller hardware is impressive, and will more likely lead to AI/ML being intertwined with people's day-to-day.
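(To make "shrinking" concrete, here's a minimal sketch of one such technique, post-training dynamic quantization in PyTorch; the toy model is a stand-in for a real network, and quantization is just one option alongside distillation and pruning:)

    # Minimal sketch: post-training dynamic quantization in PyTorch.
    # The tiny model below is only a stand-in for a real network.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Store the Linear layers' weights as int8 instead of fp32;
    # activations get quantized on the fly at inference time.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    # Result: roughly 4x smaller weights and CPU-friendly inference,
    # usually at a small accuracy cost.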


Don’t disagree, and I think it is a natural evolution, similar to other areas of tech: we innovate and then make it more efficient. I don’t want to stop the opportunity to innovate because of electricity usage, though.


Yes, you can't save your way to riches.

I think these kinds of things will be solved by market forces.


What people want is limited by their ability level.

And there's no reason to ever be bullish on liberal (classical) society's capacity for intentional change.

Human agency doesn't mean shit.


Engineers like to create solutions, not problems.


Prone, too, to creating solutions to problems that do not exist.


> Should we as humanity stop right here? Draw a line in the sand and not do anything new?

Maybe someone with the benefit of hindsight, a few generations from now, will say that yes, we should have stopped and drawn a line in the sand.

We are still used to the idea that we can treat the externalities of the systems we build as infinite and boundless, while it is getting increasingly clear that they are not. I am not saying we should stop working on LLMs. I am just saying we should probably factor in (that means: assign economic value to) the downstream consequences of said high energy consumption if we don't wanna destroy our own habitats. And that would probably lead to a world where LLMs are used for actually important problems instead of furthering the goals of the few who truly profit from surveillance capitalism.
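(As a toy sketch of what "assign economic value to downstream consequences" could mean for energy pricing; the market price and grid intensity below are made-up assumptions, and the social cost of carbon is just one published estimate:)

    # Toy sketch: folding a carbon externality into the energy price.
    # All inputs are illustrative assumptions, not real market data.
    market_price_per_kwh = 0.12   # USD/kWh (assumed)
    kg_co2_per_kwh = 0.4          # grid carbon intensity, kg/kWh (assumed)
    social_cost_per_kg = 0.19     # USD/kg, ~$190/tCO2 (one published estimate)

    effective_price = market_price_per_kwh + kg_co2_per_kwh * social_cost_per_kg
    print(f"${effective_price:.3f}/kWh")  # ~$0.196/kWh vs $0.120 unpriced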

But hey, let's continue wasting a year's worth of energy on training LLMs on cat pictures; they are cute after all.


There are unaccounted externalities in many areas, but the energy use of LLMs isn't one of them: people pay full market price for the energy used to train and run LLMs, that energy cost is large enough that people care a LOT about it, and they make intentional tradeoffs about what is worth it.

And if people vote with their wallets that they are willing to pay for a year's worth of energy to train LLMs on cat pictures, we should respect their choice that this is apparently what they consider a valuable use of the limited resources they have.


Yup, and the market price for energy doesn't reflect what the energy goes towards, which was my point. Whether you train the LLM to predict extreme weather events or to put funny hats on pictures of cats, you pay the same; in fact, the latter will drive the price up for the former.

Sure, there are still people out there who believe the invisible hand of the market will regulate everything to give us a good outcome, despite all real-world evidence to the contrary. But in a market, more demand for bullshit reasons drives prices up for everybody, including for things that are essential for the survival of the poorest, the long-term sustainability of the environment, health services, etc.

The question I laid out was whether it is really a good idea to treat some random bullshit the same as essential services and externalities, especially considering we are walking into a climate catastrophe that is the result of too much energy use.

Granted, we live in a society that is kinda built on ignoring these connections; otherwise we would go mad or cynical. But lying to ourselves has a hidden cost that will come due at some point.


But as we don't have mass long-term energy storage, future energy is not exchangeable with current energy, so future energy costs shouldn't matter for training LLMs today. Whether tomorrow it will be a good use of energy to put funny hats on pictures of cats is something people will be able to choose based on the availability of energy then. It's up to them to decide what's essential or not, and whether they are willing to sacrifice some entertainment to save energy, with money as the mechanism of measure. If a person wants to do something silly, it would be inappropriate for someone else to decide "no, that's random bullshit" and prohibit them from spending their resources (energy or otherwise) on it. The fact that this decision will affect the price of, e.g., energy for others is NOT an externality; it's the core economic mechanism for allocating scarce resources. If some people want to spend those resources on cat pictures and they fully pay the costs of doing so, then that's directly reflected in the price they're paying, not an unpriced externality.


Are you sure you aren't deep in cognitive-dissonance land now?

More energy use is more energy use overall; momentary or spread over time (i.e., work), it makes no difference here, especially with LLMs, which aren't exactly producing bursty loads and could in theory be scheduled to any time of day. And energy corporations will just have to produce more energy, which has its own negative externalities on the world.

That means my stupid project still has an impact on the rest of the world. As of now, everybody luckily still has the luxury of figuring this out themselves. But we have to realize that, depending on how bad things get, this won't be feasible forever. And how fast things get bad depends on our collective energy use. If you're in a desert with limited water, you won't let your kid pour it on the ground because it makes a fun sound, would you?

You would only let your kid do that if you had no mental model of water being a limited resource and no sense of the consequences a lack of water would have for you and your kid.

Energy is a limited resource, and producing more of it has its own secondary costs (e.g., for climate change).

Operating with limited resources doesn't mean we are not allowed to have fun. It just means you don't pour your limited resource on the ground for fun.


One man's "pouring your limited resource on the ground" is another man's "valuable use of that resource". If someone chooses to do something with the energy they purchase, they apparently consider it a worthwhile use of that resource. If I'm in the desert and want to pour some water on the ground to grow orchids, that's my choice of how to use it. If it's really important to conserve water, then it will be costly enough that I won't waste much of it on luxuries; but if it's cheap enough to do so, then people are voting with their wallets that it's NOT (yet?) so important to conserve it as to refuse spending it on luxuries.


“Thou shalt not question”


> Should we as humanity stop right here? Draw a line in the sand and not do anything new?

There will be a point when the answer is "yes".

For example, when we can produce hobby 3d printers for biological viruses. Or DIY kits for nuclear weapons. Or cameras that are the size of a grain of sand and cost only 1 cent. Stuff that some individuals might want, but society will never be ready for.


It's not just "progress" vs "no progress". Some technical innovations, like leaded gasoline, have had more of a negative impact than a positive. It's not a binary, and not all technology is without cost.

The argument against current AI progress isn't anti-technology; it's pointing out that these things can have a hugely negative impact that's mostly being shouted down or swept under the rug in the name of "progress". The climate crisis is pretty serious business, and AI power consumption is making it worse, not better. The risk of huge amounts of unemployment is being mostly handwaved away with "yeah, that's not gonna happen, new jobs always come from somewhere".



