Hacker News | reducesuffering's comments

The only extant “X-risk” is, and always has been, economic collapse due to loss of energy. “Climate Change” is science fiction.

/s

It amazes me that climate change X-riskers scoff at denialists and then practice the exact same denialism with AGI. How many leading AI scientists (as with climate science) would it take to convince you?

"Our great religion, their primitive superstition"[0]

[0] https://imgur.com/EELDM6m


“AGI” was literally made up by a Harry Potter fan fiction community

> Stop burning everything! Fossil fuel, wood, plastic, garbage.

I don't understand the wood argument. Isn't it widely accepted that we need controlled burns to manage forests? Wood is a short-term carbon cycle: it releases carbon when it burns but frees up space to capture it again right after. When people live on rural plots and trees fall, should they burn them for heat (and lessen the need for other energy sources) or let them decompose and release the same carbon? It's not the same as extracting deeply buried carbon that would never reach the atmosphere if left untouched (fossil fuels).


Wood and plant burning requires a longer, more nuanced answer than the Hacker News format allows. Humanity must not cut forests or grow plants unnecessarily. If you must use wood to build a house (there are better and cheaper materials in terms of energy and climate-changing emissions; see for example Amory Lovins's book Reinventing Fire or his lectures on YouTube), then first grow those trees in a place that has no natural forest, and do not burn the wood after you demolish the house. Do not use wood from the forest; humans should let the forest manage itself.

Same with clearing the underbrush of Mediterranean and hotter-climate forests to prevent forest fires. If humanity had not managed those forests (grazing animals, building roads, harvesting) in the first place, there would have been no buildup of excess material to sustain wildfires beyond their natural rate.


The trick to forest management is to allow or create small, frequent burns that clear out dry, overgrown understories. Nature does this without our help, and some species even depend on it. If we interfere, eventually there's one big fire that levels the place instead.

Aren't people generally on board with preferred names? That's what the DoD wants to call themselves.

They don't get to choose.

Ya. The WeWork debacle and investing $300 million into Wag, an imploded Uber for dog walking, surely wasn't helping.

The idea being that you could at a random time call upon some random person to walk your dog? Someone wasted $300M on that? Must have been in a very sheltered environment.

Many engineers use Codex 5.3 and find it better, including Hashicorp's Mitchell.

i find codex 5.3 roughly on par with (though tbh still not quite as capable as) sonnet models, which are not even anthropic's flagship model family.

And the OpenClaw guy

who was bought by openai

What are you talking about? OpenAI's ChatGPT free tier (that everyone uses) answers this in the first sentence within a couple seconds.

"If your goal is to get your dirty car washed… you should probably drive it to the car wash "


That problem went viral weeks ago, so it's no longer a valid test. At the time it consistently tripped up all the SOTA models at least 50% of the time (you also have to use a sample size > 1, given the huge variation across attempts even with the exact same wording).
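The "sample size > 1" point can be sketched as a tiny harness. Everything here is an illustrative assumption, not anyone's actual eval setup; `query_model` is a hypothetical stand-in for whatever API you use, faked here so the sketch is runnable:

```python
import random

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call. This fake
    model answers 'walk' 70% of the time, purely so the harness
    runs without a real provider."""
    return "walk" if random.random() < 0.7 else "drive"

def failure_rate(prompt: str, expected: str, n: int = 20) -> float:
    """Ask the identical prompt n times and report how often the
    answer misses the expected one. A single sample says almost
    nothing given run-to-run variance."""
    misses = sum(query_model(prompt) != expected for _ in range(n))
    return misses / n

if __name__ == "__main__":
    rate = failure_rate(
        "The car wash is 50 meters down the street. Drive or walk?",
        "walk",
    )
    print(f"failure rate over 20 samples: {rate:.0%}")
```

With a real model you'd also want to sweep reworded variants of the prompt, since behavior can differ wildly even at fixed wording.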

The large hosted model providers always "fix" these issues as best they can once they become popular. It's a pattern repeated many times now, and the provider benefits from exactly this scenario: the issue seemingly gets "debunked" well after the fact. Often the original behavior can be reproduced by moving the wording/numbers/etc. sufficiently far from the original prompt.


For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.

This question is obviously ambiguous. The context here on HN includes "questions LLMs are stupid about, I mention boat wash, clearly you should take the boat to the boat wash."

But this question, posed to humans, is plenty ambiguous, because it doesn't specify whether you need the boat with you, or whether the boat is already at the wash. ChatGPT's free tier handles the ambiguity; note the finishing remark:

"If the boat wash is 50 meters down the street…

Drive? By the time you start the engine, you’re already there.

Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.

Walk? You’ll be there in about 40 seconds.

The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.

If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."


You can argue that the boat variant is ambiguous (though it's a stretch), but that's not really relevant: the point was to reveal that the underlying failure mode is unchanged, just concealed now.

The original car question is not ambiguous at all. And the specific responses to the car question weren't concerned with ambiguity either; the logic was borderline LLM psychosis in some examples, like you'd see in GPT-3.5, but papered over by the well-spoken "intelligence" of a modern SOTA model.


I don't understand what occasional hiccups prove. The models pass college acceptance tests on advanced topics better than 99% of the human population, and because they occasionally have a shortcoming, they're somehow worse than humans? Those edge cases are quickly going from 1% to 0.01% too...

"any human can instantly grok the right answer."

When you ask a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this, humans will trip up far more often than the frontier LLMs.


You're not thinking of the second-order meta-system here. ASI isn't just one instance of an LLM responding to you in a session; it's a datacenter full of millions of LLM instances interacting with millions of people in parallel.

Well in that case wouldn't that be millions of ASIs, each with contradictory goals?

I'm not saying that ASI isn't an existential threat, just that it probably won't present itself like the fanciful sci-fi scenario of a singular intelligence suddenly crossing a magic threshold and being able to take over the world. Most likely it will be some scenario we won't have predicted, the same way hardly anybody predicted LLMs.


Impressive levels of anthropomorphizing the models already. Time will tell whether this was extremely prescient or completely delusional.

It has been obvious since ChatGPT that the internet, including HN, will be flooded with AI-generated commentary, drowning out real people's voices (soon undetectably). How this is surprising to anyone is a mystery.

I remember there being a previous post about stylometric analysis of HN accounts, and people confirmed the top account correlations. It basically identified all the HN alt accounts.

And HN asked the author to take it down if I'm not mistaken.
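For context, stylometric matching of the kind described can be sketched with character n-gram frequency profiles compared by cosine similarity. This is a generic illustration under my own assumptions, not the method from that post:

```python
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Character trigram counts: a crude stylometric fingerprint
    of an author's writing habits."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two frequency profiles, in [0, 1]."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Accounts whose comment histories score unusually similar to each
# other are candidate alts of the same author.
sim = cosine_similarity(
    trigram_profile("It amazes me climate change X-riskers scoff at denialists"),
    trigram_profile("It amazes me how often commenters here scoff at denialists"),
)
print(f"similarity: {sim:.2f}")
```

Real stylometry uses far richer features (function-word frequencies, punctuation habits, sentence lengths) over large comment histories, but the pairwise-similarity shape is the same.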
