Hacker News | true_religion's comments

I had once thought the circle shape came from scantron style examination papers, where you can only fill one circle at a time. It’s similar even if the origins are probably different.

A lot of Scantron-style systems (including a lot of Scantrons) support marking multiple.

Yes, I had tests in the 1980s which were 'select all that apply'.

How much energy does an AI model use during inference versus a human being?

This is a rhetorical question.

Sure, we aren’t capturing every last externality, but optimization of large systems should be pushed toward the creators and operators of those systems. Customers shouldn’t have to validate environmental impact every time they spend $0.05 to use a machine.


I actually did the math on this some time last year, for GPT-4 or thereabouts, attempting to derive a per-user energy value. Based on known data, LLM training used many hundreds of times the energy of the agriculture and transport needed to feed a human doing the equivalent mental work. Inference was much lower. But the climate critique of AI doesn’t distinguish between the two.


100x less efficient than a human, counting only food, is still pretty efficient. Consider that humans in the developed world spend far more energy on heating/AC, transportation, housing, lawn care, refrigeration, washers and dryers, etc., so an LLM can probably come out several times more efficient overall.

I don't really understand the critique of GPT-4 in particular. GPT-4 cost over $100 million to train, but likely less than $1 billion. Even if they pissed out $100 million in pure greenhouse gases, that'd be a drop in the bucket compared to, say, 1/1000 of the US military's contributions.


That sounds on the low side?

Does that "hundreds" include the cost of training one human to do the work, or enough humans to do the full range of tasks that an LLM can do? It's not like-for-like unless it's the full range of capabilities.

Given the training gets amortised over all uses until the model becomes obsolete (IDK, let's say 9 months?), I'd say details like this do matter — while I want the creation to be climate friendly just in its own right anyway, once it's made, greater or lesser use does very little:

As a rough guess, let's say that any given extra use of a model is roughly equivalent to turning its API cost into kWh of electricity. So, at an energy cost of $0.10/kWh, GPT-4.1-mini currently works out to about 62,500 tokens per kWh.

IDK the typical speed of human thought (and it probably doesn't map well to tokens), but as a rough guide, I think most people would take around 3 hours to read a book of that length. That means a model burning electricity at about 333 W matches the performance (speed) of a human, whose biological requirements average 100 W. Except that 100 W is what you get from dividing 2065 kcal by 24 h, and humans not only sleep but also object to working all waking hours 7 days a week. A 40-hour work week is about a quarter of the week (40/(7 × 24) ≈ 1/4), so those 3 hours of wall-clock work come with about 9 hours of down-time, making the requirement for 3 hours of work roughly 12 hours of calories, or the equivalent of 400 W.

But that's just for reading a book. A human could easily spend months writing a book that size, so an AI model good enough to write 62,500 useful tokens could consume the equivalent of, say, 2 months of human calories (2 months × 2065 kcal/day ≈ 144 kWh, about $14.40 at $0.10/kWh, i.e. roughly the $230/megatoken price range) and still be more energy efficient than a human doing the same task.
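A quick sanity check of the back-of-envelope numbers above (the 2065 kcal/day figure, the $0.10/kWh electricity price, and the 62,500 tokens/kWh rate are all taken from the text; everything else is derived from them):

```python
# Back-of-envelope check of the human-vs-model energy comparison.
KCAL_TO_KWH = 1.163 / 1000  # 1 kcal is approximately 1.163 Wh

human_daily_kcal = 2065
human_avg_watts = human_daily_kcal * KCAL_TO_KWH * 1000 / 24  # ~100 W

# A 40-hour work week out of 168 hours means each hour of work
# "costs" roughly 4 hours of round-the-clock metabolism (~400 W).
work_fraction = 40 / (7 * 24)
effective_work_watts = human_avg_watts / work_fraction

# Model side: API spend mapped to electricity at $0.10/kWh.
tokens_per_kwh = 62_500

# Two months of human calories to write a 62,500-token book.
book_kwh = 60 * human_daily_kcal * KCAL_TO_KWH      # ~144 kWh
book_cost_usd = book_kwh * 0.10                      # ~$14.40
usd_per_megatoken = book_cost_usd / (tokens_per_kwh / 1_000_000)  # ~$230

print(round(human_avg_watts), round(effective_work_watts),
      round(book_kwh, 1), round(usd_per_megatoken))
```

The exact work-week figure comes out nearer 420 W than 400 W; the comment's 1/4 approximation rounds it down.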

I've not tried o3*, but I have tried o1, and I don't think o1 can write a book-sized artefact that's worth reading. But well-architected code isn't a single monolithic function with global state the way a book can be: you can break everything down usefully, and if one piece doesn't fit the style of the rest, it isn't the end of the world. So it may be fine for code.

* I need to "verify my organisation", but also I'm a solo nerd right now, not an organisation… if they'd say I'm good, then that verification seems not very important?


I totally agree that the environmental cost SHOULD be pushed towards the creators, but as long as that doesn't happen, is the moral thing as a consumer to just carry on using it? This is not a rhetorical question.

Transporting something by car using fossil fuel usually takes less energy than a human doing the same thing by hand; that doesn't mean fossil fuel is environmentally friendly. LLMs don't decrease the population even if they can do human tasks. If an LLM is used for the good of humanity it is probably a win, but obviously a lot of AI use is not.

I use LLMs as well. I'm just saying I don't think it's a totally strange question to ponder the energy use of different LLM use cases.


Where there are AI illustrations today, in the past there would have been clip art with little relevance to the work.


I have two sets of grandparents. One was relatively well off, and the other was not.

I can say, the cutlery inherited from the poorer pair is not great. Some is bent. Some was broken and then repaired with different materials. Some is just rusted. And the designs are very basic.

It’s one of the few surviving things from them, so I haven’t thrown it away, but I doubt my kids will want to inherit it since they never even knew them.

I think survivorship bias plays a strong role here.


I suggest that they use the Disney style. It’s similarly popular and well known, but no one will stand up to protect it, since it belongs to a multi-billion-dollar company that we already know has no respect for artists or artistic legacy.


The objection is that China is not a genuine trading partner that sees open and efficient markets as a good thing.

They are instead the sort of trading partner that reminds people of the British Empire: they will produce all manufactured goods and you will handle all the more easily replaced aspects of the business from raw materials to soft services.


I somehow feel Safari drags its feet on basic platform improvements because Apple wants to focus on iOS apps instead.


The narrative that Safari is behind and Apple doesn’t care about the web is so tired…

This 8,000+ word article on Safari 18.4 (released today, BTW) doesn’t read like the work of an organization that doesn’t care about the web [1].

[1]: https://webkit.org/blog/16574/webkit-features-in-safari-18-4...


The pace of improvement in the web is slow.

I don’t want to debate whether a megacorp cares or not, but compared to UI frameworks in the ’90s, the developer experience is anemic.

I don’t know if HTML, CSS and JavaScript are the best path forward. Maybe they are doing exactly what their spec set out to do.

But we need something better that doesn’t leash one to an ecosystem that takes 30% of your revenue.


Eh; the standard is two weeks old. Written by someone working for Apple by the way.


I’m wondering where in my comment it sounded like I wanted this standard specifically to be implemented yesterday?


If you are afraid to travel to the US, I’d ask if you’re also afraid to travel to Asia, South America and Africa where risks are comparable regardless of gun ownership.

It seems that you would be cutting yourself off from a large portion of the world.


You can be impeached for lying to Congress or maybe even substantially misleading it.

Isn’t that how they got Clinton?


Which reminds me of how some places teach science.

We accept that students can’t really learn chemistry without practice in the lab, but physics practicals often aren’t done, so students graduate without being able to contextualize things.

