Hacker News: ericd's comments

Not sure what you're talking about; I scaled to millions of users on a pair of boxes with PHP, and its page generation time absolutely crushed Rails/Django times. Apache with mod_php auto-scales wonderfully.

The Google paper (https://arxiv.org/pdf/2511.19468) didn't seem too concerned with radiator mass/size when I skimmed it, but maybe I just missed it. My understanding is that if you run the chips relatively hot (and maybe boost with heat pumps? But then you're not quite as solid state, and maintenance is tough up there), radiative ability increases enough that you can make the radiators slightly smaller than the solar panels, and they'd sit on the dark side of the panels. Many people like to point to the ISS system and scale that up, but there's a big difference between a system assembled in space and meant to keep humans at human temps vs. one mass-manufactured on the ground and keeping things around 100°C.
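The temperature point can be sanity-checked with the Stefan-Boltzmann law, since radiated power scales with T⁴. A minimal sketch (the 0.9 emissivity is an assumed ballpark, and this ignores absorbed sunlight and Earthshine):

```python
# Back-of-envelope: why running chips hot shrinks the radiator.
# Stefan-Boltzmann: radiated flux per unit area P = eps * sigma * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power_per_m2(temp_kelvin: float, emissivity: float = 0.9) -> float:
    """Idealized flux radiated from one side of a flat panel into deep space."""
    return emissivity * SIGMA * temp_kelvin ** 4

p_room = radiated_power_per_m2(300.0)  # ~27 C, "human temps" like the ISS loop
p_hot = radiated_power_per_m2(373.0)   # ~100 C, hot-running electronics

print(f"300 K: {p_room:.0f} W/m^2, 373 K: {p_hot:.0f} W/m^2, "
      f"ratio {p_hot / p_room:.2f}x")
```

Since (373/300)⁴ ≈ 2.4, a radiator at ~100°C sheds roughly 2.4× the heat per square meter of one at room temperature, which is why the hot design can get away with smaller panels.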

Did you seriously register an account to leave that little jab?

A van dweller has a much smaller carbon footprint than the average American.


Maybe when they've also been doing their thing for almost 40 years, people will be past this phase for LLMs, too ;-)

Maybe the bottleneck for most high profile open source is PR review and not coding?

The most thorough review is done when you write the code; it takes much more effort to be as thorough when you read code as when you write it.

So if what you say is true, then agentic coding will be a net negative for code quality.


I feel like that ship sailed long ago with phone trees and hour-long support wait times becoming normal. Not that it's an ideal state of affairs, but I'd much rather talk to a chatbot than wait for an hour for a human who doesn't want to talk to anyone, as long as that chatbot is empowered to solve my problem.

Have you ever had a chatbot solve your problem? I don't think this has ever happened to me.

As a reasonably technical user capable of using search, the only way this could really happen is if there was no web/app interface for something I wanted to do, but there was a chatbot/AI interface for it.

Perhaps companies will decide to go chatbot-first for these things, and perhaps customers will prefer that. But I doubt it to be honest - do people really want to use a fuzzy-logic CLI instead of a graphical interface? If not, why won't companies just get AI to implement the functionality in their other UIs?


Actually, I have: Amazon has an excellent one. I had a few exchanges with it, and it initiated a refund for me; it was much quicker than a normal customer service call.

Outside of customer service, I'm working on a website that has a huge amount of complexity to it, and would require a much larger interface than normal people would have patience for. So instead, those complex facets are exposed to an LLM as tools it can call, as appropriate based on a discussion with the user, and it can discuss the options with the user to help solve the UI discoverability problem.

I don't know yet if it's a good idea, but it does potentially solve one of the big issues with complex products: they can provide a simple interface to extreme complexity without overwhelming the user with an incredibly complex interface and demanding that they spend the time to learn it. Designers have normally handled this by just dumbing down every consumer-facing product, and I'd love to see how users respond to this other setup.


I'm happy that LLMs are encouraging people to add discoverable APIs to their products. Do you think you can make the endpoints public, so they can be used for automation without the LLM in the way?

If you need an LLM spin to convince management, maybe you can say something about "bring your own agent" and "openclaw", or something else along those lines?


Yep, I'm developing the direct agent-access API in parallel as a first-class option. It seems like the human UI isn't going to be so necessary going forward, though a little curation/thought on how to use it is still helpful, rather than an agent having to come up with all the ideas itself. I've already spun off one of the datasets I've pulled as an independent x402 API, and I plan to do more of those.

What I mean is that I want to be able to build my own UIs and CLIs against open, published APIs. I don't care about the agent, it's an annoyance. The main use of it is convincing people who want to keep the API proprietary that they should instead open it.

I did think about this use-case as I was typing my first message.

I can see it working for complex products, for functionality I only want to use once in a blue moon. If it's something I'm doing regularly, I'd rather the LLM just tell me which submenu to find it in, or what command to type.


Yeah true, might be a good idea to have the full UI and then just have the agent slowly “drive” it for the user, so they can follow along and learn, for when they want to move faster than dealing with a chatbot. Though I think speech to text improves chatbot use speed significantly.

Amazon's robot did replace the package that vanished. I don't believe it ever understood that I had a delivery photograph showing two packages but found only one on my porch. But I doubt a human would have cared, either: cheap item, nobody's going to worry about how it happened. (Although I would like to know. Wind is remotely possible, but the front porch has an eddy that brings stuff; it doesn't take stuff.)

Yeah, as long as the chatbot is empowered to fix a bunch of basic problems, I'm okay with them as the first line of support. The way support is set up nowadays, humans are basically forced to be robots anyway, given a set of canned responses for each scenario and almost no latitude of their own. At least the robot responds instantly.

Yep, exactly, the problem comes when chatbots are used to shield all the people who can do stuff from interacting with customers.

I think it's pretty likely that "intelligence" is emergent behavior that comes when you predict what comes next in physical reality well enough, at varying timescales. Your brain has to build all sorts of world model abstractions to do that over any significant timescale. Big LLMs have to build internal world models, too, to do well at their task.

Hard to get the bandwidth of a 6000+ bit HBM memory bus out of a 512- or 1024-bit memory bus tied to DDR... I think it's also just tough to physically fit 512 GB close enough to the GPU to run at those speeds. But yeah, I wish there were a very competitive local option, too, short of spending $50k+.
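The bus-width gap dominates the arithmetic: peak bandwidth is roughly pins × per-pin data rate / 8. A rough sketch, where the per-pin rates are ballpark HBM3-class and LPDDR5X-class figures rather than exact product specs:

```python
# Rough comparison: why a 512-bit DDR-class bus can't match a wide HBM stack.
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (pins * Gb/s per pin) / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

hbm = bandwidth_gbs(6144, 6.4)   # 6000+ bit HBM-class interface, ~6.4 Gb/s/pin
lpddr = bandwidth_gbs(512, 8.5)  # 512-bit LPDDR-class interface, ~8.5 Gb/s/pin

print(f"HBM-class: ~{hbm:.0f} GB/s, 512-bit DDR-class: ~{lpddr:.0f} GB/s, "
      f"{hbm / lpddr:.0f}x gap")
```

Even with a faster per-pin rate on the narrow bus, the ~12× width advantage leaves roughly an order-of-magnitude bandwidth gap, which is the wall the 512 GB local boxes run into.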

Those were a lot easier to swap out. Oil is the foundation of modern society; CFCs were far from it.

But to put this in context, the average American family's carbon footprint is roughly 50,000 kg per year, and one flight is usually on the order of >1,000 kg, or ~300 kg (700 pounds) of milk, assuming the high-end figure of 3 kg CO2 per kg of milk. So if you like milk, there are probably other places you can cut first.

Does seem like a lot of carbon for a kg of plastic, though, how does that compare to normal plastic’s carbon footprint?


>... >1,000 kg, or 700 pounds of milk

Why do you mix your units like that?


Because I'm American, so I use metric in scientific contexts, and weird medieval units in everyday ones :-)

I'll edit a bit for clarity for you all who live in more consistent places.


I'm not above asking the barber to leave an inch on the top, but then I'm not going to ask him to leave 15mm on the sides. At least keep the system consistent within a sentence :)

lol fair enough.

