
Regardless of how fast we use up non-renewable resources, they're all going to be gone at some point. Copper, lithium, and tin are going to be gone. Humanity will need to live off of what we can forage or grow.

Also, the rarity of farming in the animal kingdom makes me worried about the sustainability of even multi-species domestication. A few ants cultivate trees or fungi or aphids, but they seem to specialize in domesticating just one species at a time. That's telling us something important: I suspect domesticating too many species leads to vulnerabilities to so many parasites/bacteria/viruses/pests that pestilence and famine risk eventually outweighs any benefit of domestication. If it didn't, ants would be farming lots of species!

In the real long term, then, humans will get one (or zero) domesticated species, and maybe some electricity if we can make self-sustaining solar power operations using common elements like aluminum and silicon from dirt, or sodium, chlorine, oxygen, and hydrogen from water, and that'll be it for technology. Everything else will be foraged animals and plants, in an ecosystem that keeps our population in check through predation.

As for the transition, it's going to suck. And I don't trust any governing body to "ramp down" the population smoothly without committing some major atrocities.


For the resources you've listed, we're nowhere near exhausting even the known reserves. For lithium there are a lot of known sources that aren't included in the reserves because they haven't been assessed yet.

And if we did extract the majority of those particular resources, there would be so much of them in circulation that wide-scale recycling would become viable. It already is for copper. And if you're thinking that recycling is going to be more energy-intensive, that's not clear for copper or lithium either: both require a lot of energy to extract in the first place, and potentially less to keep them going around.


Yes: our politicization of science and theirs will take a somewhat different form. And yes: if our politicization of science is less complete, or briefer, it may not set us back as much... but RFK Jr. is out there purging scientists from the federal government if they disagree with his preferred theories about vaccines. This is not going to advance medical science in the US.

The comparison to the present-day US isn't hyperbole. It's not a perfect match but there are parallels.

The article mentions Lysenko, a Soviet "biologist" who set back Soviet biology for a generation. He believed, for example, that plants in the USSR would not compete with each other for resources the way they did in capitalist societies, but would instead share resources. He asserted that crops could therefore be planted closer together in the USSR, yielding more food per acre. Evidence to the contrary was suppressed: Lysenko had Stalin's ear and a zealot's confidence. The rest of the field was either purged or fell in line. Scientists lost their jobs or got sent to Siberia.

The comparison to the present-day US isn't perfect, but it also isn't hyperbole. Scientists in the US mostly aren't in the same danger of arrest for speaking out (assuming ICE doesn't start targeting political opponents), but we're looking at a similar era in the US in terms of making theories and data fit ideology. RFK, Jr. has his preferred biological theories about vaccines, autism, and disease. Government scientists are at risk of losing their jobs and their financial security if they reference (or publish) findings that Kennedy objects to. Universities are still (as far as I can tell) safe for natural scientists because the first wave of the crackdown is focused on the humanities and social sciences, so this purge of scientists is limited to federal government employees, but the effect is real, and it isn't a stretch to assume that if the government finds success in the current purge, it will go looking further afield.

The human impact is significant for those affected, but the article is right to point out that this purge of scientists from the government for ideological goals will have a broader impact for society: it will set back American science.

Kennedy doesn't even have to be wrong on the facts for the culture he's creating to be toxic for federal science in his department and beyond. Just the politicization of science pushes our country towards being a scientific backwater.


Let's Not Lose Our Minds (2017) by Carl Zimmer: https://carlzimmer.medium.com/lets-not-lose-our-minds-c5dcac...

The situation has only devolved since he wrote this.


Neither US political party has a monopoly on Lysenko-style academics, unfortunately.

At least one of them seems to have an introspective capacity, at some level. So that's nice?

Genuinely, I wonder if this would make for a good charity. Where can I donate to promote political introspection? I'll consider any mainstream spot on the political spectrum.


The fact that any party can implement Lysenko-style academics means the system has failed. We don't need political introspection so that they interfere with scientific progress more benevolently; we need a system where they don't interfere, one where they can't interfere without an impractical degree of effort.

Science is beneficial for progress, and it makes sense at a 20,000 ft level for the government to encourage it. But politicians deciding which grants to offer, setting guidelines for what grant recipients can publish, and being able to make serious threats against universities and other research institutions with few restrictions: there is no argument for a government to have such power. Either publicly funded research institutions should have strong protections in place for their academic integrity, or some alternative to government funding for these institutions should become the norm.


But only one party is halving scientific funding and withholding billions in research grants

> Universities are still (as far as I can tell) safe for natural scientists because the first wave of the crackdown is focused on the humanities and social sciences

Well, kinda—but only because they're not cracking down on any specific views in the natural sciences; they're just cutting their funding entirely.

Large swaths of natural science research at US universities relied (past tense) on federal grant funding, and that's effectively been eliminated across the board.

They just don't want any science research being done, period, unless it's 100% funded and owned by for-profit companies.


This take is quite alarmist, but also so biased that it cannot be taken seriously.

Is it though? Trump just fired an economist because she released some job numbers he didn't like.

https://www.bbc.co.uk/news/articles/cvg3xrrzdr0o



That is the Silicon Valley cryptoscam version.

This concept has already been studied extensively, e.g. [1] (in 2000!), by people like Rivest and Chaum, who have decades of actual competence in that field.

[1] https://people.csail.mit.edu/rivest/pubs/pubs/LRSW99.pdf


I think Worldcoin added identification using a government e-passport this year (?) as well (not only the orb). All modern passports have an NFC/RFID chip; you can't read all of the data from it in a public way, but you can verify the signature and get basic information. There are already apps in the App Store doing that.


Or just charge bots and humans and we're good to go

https://www.nytimes.com/2006/02/05/technology/postage-is-due...


While that works for attacks that are like spam, bot detection for high margin attacks like show ticket scalping really wants an identity-oriented solution.


Ah yes, postage has stopped all the spam coming to my house!


This is an extremely ignorant take. It's extremely well-known that one of the primary ways you stop spam is by making it economically infeasible, specifically by making the cost of distribution higher than the expected return. It's also extremely well-known that spam snail-mail is subsidized by the US post office and doesn't pay normal post rates.
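To make the economics concrete, here's a toy break-even calculation (the conversion rate and revenue figures are assumptions for illustration, not real spam statistics):

    # Toy break-even arithmetic for spam economics. The conversion rate and
    # revenue figures below are assumptions for illustration only.

    conversion_rate = 1 / 100_000   # assumed: one sale per 100k messages sent
    revenue_per_sale = 30.00        # assumed: dollars earned per conversion

    expected_return = conversion_rate * revenue_per_sale
    print(f"expected return per message: ${expected_return:.5f}")  # $0.00030

    # Spam stays profitable only while the marginal cost of a message is
    # below that expected return; any per-message charge above it turns the
    # campaign into a money-loser.
    for cost in (0.0, 0.0001, 0.001, 0.01):
        verdict = "profitable" if expected_return > cost else "unprofitable"
        print(f"cost per message ${cost:.4f}: {verdict}")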


> Say something everyone lives every day around the world.

> "This is an extremely ignorant take."


Yup, Worldcoin has been one of the efforts in this space. We're trying to build a frictionless, less privacy-invasive method than biometric scanning.


Do you work for Worldcoin?


For a casino? In practice, yes, fair games are perfectly consistent with greedily skimming a game, and fair games draw gamblers.

That said, when organized crime gets involved, somebody always thinks "if I rig this, I'll do EVEN BETTER!" Maybe they're a corrupt employee skimming from the house, maybe they're a loyal employee skimming for the house, but unless you have something like the Nevada Gaming Control Board forcing fairness on them, you basically never get it. At least, from what I've read on the subject. Source: I've read some books on card counting & otherwise beating the odds in casinos, and this is my vague memory of them.

And it's ironic that the house wants to rig games, because a biased game means a mathematically savvy individual can go in and calculate how results differ from "fair" games, and can then skim some profits for themselves if the bias is larger than the house advantage.
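To make that concrete, here's a small sketch (assumed numbers; a roulette-style single-number bet) of how a bias larger than the house edge flips the expected value for a player who detects it:

    # Sketch of why a rigged game is exploitable: an American-roulette-style
    # single-number bet (38 slots, 35-to-1 payout), with an assumed bias.

    PAYOUT = 35  # a win pays 35x the stake

    def player_edge(prob_win):
        """Expected profit per $1 staked at the true win probability."""
        return prob_win * PAYOUT - (1 - prob_win)

    fair = 1 / 38
    print(f"fair wheel:    {player_edge(fair):+.4f}")    # about -0.053 (the house edge)

    # Assume the house rigs one number to hit 1-in-30 instead of 1-in-38.
    rigged = 1 / 30
    print(f"rigged number: {player_edge(rigged):+.4f}")  # about +0.200

    # A player who logs enough spins to detect the skew can bet the favored
    # number and capture that positive edge for themselves.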


I think that's an invalid hypothesis here, not just an unlikely one, because that's not my understanding of how LLMs work.

I believe you're suggesting (correctly) that a prediction algorithm trained on a data set where women outperform men with equal resumes would have a bias that would at least be valid when applied to its training data, and possibly (if it's representative data) for other data sets. That's correct for inference models, but not LLMs.

An LLM is a "choose the next word" algorithm trained on (basically) the sum of everything humans have written (including Q&A text), with weights chosen to make it sound credible and personable to some group of decision makers. It's not trained to predict anything except the next word.

Here's (I think) a more reasonable version of your hypothesis for how this bias could have come to be:

If the weight-adjusted training data tended to mention male-coded names fewer times than female-coded names, that could cause the model to bring up the female-coded names in its responses more often.
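As a toy illustration of that mechanism (made-up corpus, names, and counts; real LLMs are vastly more complex), token frequency alone is enough to shift which names come out of the sampler:

    # Minimal sketch: a "model" that surfaces names in proportion to how
    # often they appear in its (made-up) training text. No performance
    # prediction is involved; frequency alone shifts the output distribution.

    from collections import Counter

    training_text = (
        "alice shipped the feature alice reviewed the patch "
        "maria led the migration alice mentored the intern "
        "bob fixed a bug maria wrote the design doc"
    ).split()

    names = {"alice", "maria", "bob"}
    counts = Counter(tok for tok in training_text if tok in names)
    total = sum(counts.values())

    for name, count in counts.most_common():
        print(f"{name}: {count}/{total} = {count/total:.2f}")
    # alice: 3/6, maria: 2/6, bob: 1/6 -- the name mentioned most often in
    # the training data is also the one the sampler surfaces most often.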


People need to divorce the training method from the result.

Imagine that you were given a very large corpus of reddit posts about some ridiculously complicated fantasy world, filled with very large numbers of proper names and complex magic systems and species and so forth. Your job is, given the first half of a reddit post, predict the second half. You are incentivized in such a way as to take this seriously, and you work on it eight hours a day for months or years.

You will eventually learn about this fantasy world and graduate from just sort of making blind guesses based on grammar and words you've seen before to saying, "Okay, I've seen enough to know that such-and-such proper name is a country, such-and-such is a person, that this person is not just 'mentioned alongside this country,' but that this person is an official of the country." Your knowledge may still be incomplete or have embarrassing wrong facts, but because your underlying brain architecture is capable of learning a world model, you will learn that world model, even if somewhat inefficiently.


To chime in on one point here: I think you're wrong about what an LLM is. You're technically correct about how an LLM is designed and built, but I don't think your conclusions are correct or supported by most research and researchers.

In terms of the Jedi IQ Bell curve meme:

Left: "LLMs think like people a lot of the time"

Middle: "LLMs are tensor operations that predict the next token, and therefore do not think like people."

Right: "LLMs think like people a lot of the time"

There's a good body of research indicating that we see emergent abilities, theory of mind, and a bunch of other behavior showing these models perform deep levels of summarization and pattern matching during training as they scale up.

Notice in your own example there's an assumption that models summarize "male-coded" vs "female-coded" names; I'm sure they do. Interpretability research seems to indicate they also summarize extremely exotic and interesting concepts like "occasional bad actor when triggered," for instance. Upshot: I propose they're close enough here to anthropomorphize usefully in some instances.


Yes, the pricing is what I'm curious about. If they were saying "we can make this for half the price of steel," then the world steel market becomes their growth target.

If they can't get cheaper than steel, if they can't compete with steel, they'll probably never be more than a niche product.


Here's a thought: why don't YOU shut up? /s

Sorry! That was mean, but I hope it came across as funny.

In all seriousness, I like the question, and your implication is intuitive: if we (as individuals) talk to machines rudely, it's likely to (at minimum) lead us to be ruder to other humans, if only by habit. And if they're expecting more politeness than we're showing, they may infer the intent to be rude, and react accordingly. Those who are rude would end up being worse off.

That said, it's the Fallacy of Composition to assume that if everyone gets ruder the collective effect would be the same as the individual effect. We have different requirements for what counts as "polite" in different cultures but everyone seems to get along pretty well. Maybe societies can all get ruder (and just get along worse with each other) but also maybe they can't.

I tried looking in the literature but this book implies we don't even know how to measure politeness differences between languages: https://books.google.com/books?hl=en&lr=&id=MPeieAeP1DQC&oi=...

There are even theories that politesse can lead to aggression: https://www.jstor.org/stable/2695863

Deborah Tannen (the linguist) has found many examples where different politeness expectations (particularly across the cultural divide that aligns with gender) can lead to conflict, but it always seems to involve misunderstandings due to expectations: https://books.google.com/books?hl=en&lr=&id=YJ-wDp7CJYAC&oi=...

So yeah, bad outcomes feel intuitive but I don't think linguistics or sociology has a theory of what happens if a group collectively gets less polite.


Sam Altman doesn't want you to say "please" to ChatGPT.

https://futurism.com/altman-please-thanks-chatgpt


That is not true; first he says it's "tens of millions of dollars well spent," followed by "you never know". I don't think he knows.


I've wondered whether they use thanks as a signal of a conversation well done, for the purpose of future reinforcement learning.


I'd speculate that they use slightly more complicated sentiment analysis. This has been a thing since long before LLMs.
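For illustration, sentiment scoring of a closing message can be as crude as a word-list heuristic. Whether any provider actually does this is speculation, and the word lists below are assumptions:

    # Illustrative only: a crude keyword-based sentiment check of the kind
    # that long predates LLMs. Word lists and labels are assumptions.

    import re

    POSITIVE = {"thanks", "thank", "perfect", "great", "awesome", "solved"}
    NEGATIVE = {"wrong", "useless", "broken", "terrible", "failed"}

    def closing_feedback(last_user_message: str) -> int:
        """Return +1 / -1 / 0 as a weak label for the final user turn."""
        words = set(re.findall(r"[a-z]+", last_user_message.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        return (score > 0) - (score < 0)

    print(closing_feedback("Thanks, that solved it"))     # +1
    print(closing_feedback("This is wrong and useless"))  # -1
    print(closing_feedback("Show me the next step"))      #  0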


I don't know if they do, or if it's efficient, but it is possible.


A casual Twitter response turned into a news article turned into an "X wants Y" headline is exactly why I stopped trusting most of social media as a source of information.


From the article: "the impacts of generating a 100-word email. They found that just one email requires .14 kilowatt-hours worth of electricity, or enough to power 14 LED lights for an hour"

Seems completely off the charts. A 70B model on my M3 Max laptop does it for 0.001 kWh... 140 times less than stated in the article. Let's say the OpenAI Nvidia clusters are less energy efficient than my MacBook... but I'm not even sure about that.


One can also work backwards, to see what kind of compute hardware they think must be needed for the models, or how much they think OpenAI's electricity costs.

100 words is ~133 tokens, so 0.14 kWh/133 tokens is about 1 kWh/kilo-token. If electricity is all from record-cheapest PV at $0.01/kWh, then this limits them to a price floor of $10/mega-token. For more realistic (but still cheap) pricing of $0.05/kWh, that's $50/mega-token. Here's the current price sheet: https://platform.openai.com/docs/pricing

To generate a 133-token email in, say, 5 seconds at 0.14 kWh implies a sustained draw of about 101 kW. That does not seem like a plausible number (caveat: I don't work in a data centre and what I think isn't plausible may just be wrong): https://www.wolframalpha.com/input?i=0.14+kWh+%2F+5+seconds+...
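The same back-of-envelope arithmetic as a quick script:

    # Reproducing the back-of-envelope numbers above.

    kwh_per_email = 0.14      # figure claimed in the article
    tokens_per_email = 133    # ~100 words

    kwh_per_token = kwh_per_email / tokens_per_email
    print(f"{kwh_per_token * 1000:.2f} kWh per kilo-token")        # ~1.05

    # Implied electricity-cost floor per million output tokens.
    for usd_per_kwh in (0.01, 0.05):   # record-cheap PV vs merely cheap power
        floor = kwh_per_token * 1_000_000 * usd_per_kwh
        print(f"at ${usd_per_kwh}/kWh: ${floor:.0f} per mega-token")
    # ~$11 and ~$53 (rounded to $10 and $50 above)

    # Implied power draw to produce that email in 5 seconds.
    seconds = 5
    kw = kwh_per_email * 3600 / seconds
    print(f"{kw:.0f} kW sustained for {seconds} seconds")          # ~101 kW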


For reference, a single NVIDIA H200 card has a TDP of 700 watts. Considering all the middlemen you put between you and the model, 0.14 kWh doesn't look too outrageous to me, because you add processors, high-speed interconnects, tons of cooling, etc. into the mix. Plus the models you run in the datacenters are way bigger.

For a sense of scale, the network cables (fibers, in fact) used in those datacenters carry 800 Gbps, and the fiber-copper interface converters at each end heat up to uncomfortable levels. You have thousands of these just converting packets to light and vice versa. I'm not even adding the power consumption of the switches, servers, cooling infra, etc. into the mix.

Yes, water cooling is more efficient than air cooling, but when a server is drawing around 6 kW (8x Tesla cards, plus processors, plus the rest of the system), nothing is as efficient as a local model you hit on your own computer.

Disclosure: Sitting on top of a datacenter.


I end most conversations with a fuck you, then close the browser window, since chatbots usually fail at the tasks I give them.


You can't express mass in volts. A volt is energy per unit of charge. To get energy, you need to multiply by a charge.

One Joule of energy is what you get when you move one Coulomb of charge across a 1V potential.

One electronVolt (eV) is the energy you get from moving one electron's worth of charge across 1 volt of potential.

It's an accident of what we chose to be a Joule of energy and what we chose to be a Coulomb of charge, so there should be no expectation that this would turn out to be the mass of an electron (when divided by the square of the speed of light, which is unstated because everyone knows E = mc^2).
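A quick numerical check of those relationships (standard CODATA constants):

    # 1 eV in joules, and the electron's rest energy via E = m c^2.

    e = 1.602176634e-19      # elementary charge in coulombs (= joules per eV)
    c = 299_792_458          # speed of light, m/s
    m_e = 9.1093837015e-31   # electron rest mass, kg

    print(f"1 eV = {e:.3e} J")

    rest_energy_ev = m_e * c**2 / e
    print(f"electron rest energy ~ {rest_energy_ev / 1e3:.1f} keV")  # ~511.0 keV

    # Nothing about how the joule or the coulomb were chosen forces this to
    # be a round number; 511 keV is just where the electron happens to land
    # in those human-chosen units.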

