Hacker News | kubb's comments

It's just one team with like 4 people. They can lay off a lot of staff from Metaverse.

Where do you go when you need to escape but can't actually go anywhere?

Inwards. Imagination, media, substances, meditation, solitude.


To an extent I feel your position, and it can be true in many contexts. I would say, however, that taking a trip to a far-off country and dropping acid serve two different purposes. While I'm sure there's some overlap, you won't get the entirety of the experience from either one exclusively.

It feels like "Autonomous Coding Agents" are being astroturfed on the daily on HN. The same arguments and tropes are echoing through every thread.

It's hard to distinguish who's a bot, who's a narrative pusher and who's an enthusiast. Which is exactly what you'd want from an astroturfing campaign. There's a clear benefit: people in the industry are reading this, and in doing so they're granting mindshare.

There's one thing that could prevent inauthentic support campaigns - personal key signatures. But judging by how afraid people, especially in the US, need to be of their government surveilling them, this isn't going to catch on.


Yes. I’ve also been asking every engineer I know what they’re doing with AI and there’s a lot of people doing a lot of different things, but it’s a deep mismatch with the online rhetoric.

This phenomenon appears to be incrementally coming for every single topic and public platform.


I feel the same way. Most people I've talked to are using AI for better search. I don't know anyone using it heavily to do their main job (writing code). I think a lot of the accounts bragging about how much they are doing with AI are bots.

I'm even shocked when I hear people are using it for better search. I've found it to be terrible for search, and constantly fabricating things. It's distilled everything that is bad about new Google, where it prefers popular results to accurate ones - but with actual fabrication that becomes infinitely worse.

I literally ask it to look for something and then, immediately afterwards (before reading the long-winded result), ask it whether the results were real or fabricated. That's just how the cost-benefit analysis works out - a lesson I only learned after many rounds of reading the results, getting suspicious of a few, doing web searches to verify them, not finding them, and then coming back to ask if they were real.

"Sorry! It's absolutely fair that you called me out on that... It's important that you hold me to a high standard... You're absolutely right..."

I'm finding it valuable for compressing all of the docs in the world, so I don't have to look up what a function does or how to accomplish something in some framework or CLI. I find it capable of writing code if I move an inch at a time; build copious verbose debugging output that I feed back into it every time it screws up; and when it gets stuck in a stupid loop, just debug by hand rather than waste hours trying to get it to see something that it doesn't want to see.


There's a lot of money wrapped up in people thinking a certain way: AI is useful. Work should be done in a corporate office. The American Dream is attainable. Recession is not coming. War is good. The world is dangerous. Others want to harm you. Lots of investment in astroturfing these themes because a population who believes them will more easily be separated from their money.

What's interesting about that is that indeed, there are a lot of people pushing the 'autonomous coding agents are great' narrative but there is one crucial bit missing: they absolutely never show their code.

>It feels like "Autonomous Coding Agents" are being astroturfed on the daily on HN. The same arguments and tropes are echoing through every thread.

Isn't this exactly what you'd expect in a connected world? The best arguments from both sides proliferate, thereby causing "The same arguments and tropes are echoing through every thread".


> Isn't this exactly what you'd expect in a connected world?

I would expect a figurative war for human attention. With so much information being available, everyone would try to make people focus on what they want to communicate.

> The best arguments

Some of these tropes and arguments aren't really the best. There's a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.

> from both sides

The only real "side" is the one actively pushing for something. Everyone else isn't a camp - they're just random people.


>I would expect a figurative war for human attention. With so much information being available, everyone would try to make people focus on what they want to communicate.

How does this relate to online commenting? Are you expecting the "figurative war for human attention" to make comments more diverse?

>Some of these tropes and arguments aren't really the best. There's a lot of rhetorical gotchas, e.g. "that's exactly what I'd expect from a human" when an automated solution isn't up to par.

I think you're overestimating the epistemic rigor of the average internet commenter, eternal September, etc.

>The only real "side" is the one actively pushing for something

Are you implying the "astroturfing" is only on one side? If so, you might just be experiencing motivated reasoning and/or confirmation bias. Most of the astroturfing behavior can be applied to the anti-AI side as well, e.g. people complaining about electricity or water consumption in every thread about the impacts of AI, or "AI slop".


> How does this relate to online commenting?

A viable strategy is to disseminate messaging reinforcing a belief beneficial for the disseminating entity, in a way that invokes emotion (like fear or anger), especially in influential spaces allowing for anonymity.

But in general this line of questioning won't lead to a satisfying conclusion. The assumptions you requested (connected world) aren't specific enough to determine what we should expect from comments in online spaces (and by extension, to demonstrate that the current situation is a natural outcome).

> I think you're overestimating the epistemic rigor of the average internet commenter, eternal September, etc.

Yeah, but this place, quite frankly, is above average. Not to the point of being immune to manipulation, obviously.

> Are you implying the "astroturfing" is only on one side?

No. I'm pointing out that the "two sides" framing you insist on is a mistake. There is only one organized camp with a clear financial incentive to have people believe in "Autonomous Coding Agents", which justifies capital investments in that area.

People who are concerned about power consumption, people who don't like hardware unavailability, and people who think that LLMs are useful tools but nowhere close to autonomously delivering software systems are all distinct groups without financial incentives. But they do have the right to push back against the relentless messaging barrage from the camp.


It feels the same way on GitHub trending. I used to check it frequently to see what the hottest newest tech was and stay up to date. Now it's oversaturated by whatever the newest AI bubble is. It also doesn't help that MCP enabled products like OpenClaw star their own repo and artificially inflate their perceived value.

Interesting - claw faking the benchmarks... they match well with openA ideologically.

> There's one way that can prevent inauthentic support campaigns - personal key signature.

You would be surprised at how cheaply opinions can be purchased, especially globally.


I hate to sound like I’m turfing for cryptocurrencies, but isn’t there an identity solution that the crypto nerds came up with to keep identity verification anonymous and surveillance-proof?

Need to double check what is available, though I feel like that angle could work.

I’ve been wondering also if a simple lie & deception detection type system could be a useful angle. It’s complicated in practice, though human intuition seems to have figured this out millennia ago - I can’t tell you how many times my body has picked up on someone’s toxic negative vibe by feeling. And I think we probably understand this better than we realize, and could represent it in the computer space with analysis of signals and some follow-on questions. Hope I’m not too naive here.


I left an agent generating code over the weekend, so that I can get to 15 million.

What code? Code!


Thoughts and prayers.

Where I live it's not an issue.

Is it politics or ideology to recognize the flawed character of someone? How cultish his following is? His erratic behavior, the damage that he's doing?

Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.


Yeah and it’s not our fault every Elon discussion involves politics. It’s literally all he does all day, and all he seems interested in, anymore.

Well, even European Evangelicals are vastly different from their American counterparts. There are no megachurches, no prosperity gospel, no televangelism, and the religion is not as strongly intertwined with politics.

Poland is quite intertwined.

But Catholicism has its own government, which prevents individual Catholic countries from veering off too much.


Poland isn't Evangelical.

Out of curiosity, what grounds their belief that it's going to happen soon? Why not in a thousand years? As far as I know, there is no mention of the exact date in the Bible.

The land of Israel has been a vassal state or part of another state or empire for most of recorded history. Israel becoming an independent state in 1948 ties in with messianic prophecy.

No, but even the first Christians believed they were living in the end times. It's been believed for 2000 years.

For the first Christians, it made sense. But gradually, as it didn't happen, people adjusted their expectations.

New Apostolic Reformationists believe that there is an increasing number of "new apostles" receiving messages from God, which they see as evidence of the end times.

It is also common among these folks to believe that the end times don't just happen and that instead it is our responsibility to create the circumstances that enable the end times. This can either mean creating a state of instability and violence or creating a worldwide Christian theocracy that lasts for 1000 years. Both involve massive upheavals of global systems.


My favourite bit of Biblical trivia. Consider this passage from the Revelation of St John: https://www.biblegateway.com/passage/?search=Revelation%208%... describing, perhaps, events leading to the end of the world:

> The third angel blew his trumpet, and a great star fell from heaven, blazing like a torch, and it fell on a third of the rivers and on the springs of water. The name of the star is Wormwood. A third of the waters became wormwood, and many people died from the water, because it had been made bitter.

"Wormwood", a type of bitter plant, translates to Russian as "Chernobyl", and Ukrainian "Chornobyl": https://en.wikipedia.org/wiki/Chernobyl > Etymology


Sure, but this description doesn't correspond to what happened in Chernobyl, and none of the other trumpets have corresponding events.

So do the Evangelicals believe that the Chernobyl disaster triggered the apocalypse, and that it has been happening ever since? I don't think so.


Yes, you would have to extend some poetic license.

This was a bit of an ad lib; the US branch of Christianity follows its own logic and sadly I cannot answer the serious question.

I'm pretty sure there were some bits in the Bible about loving thy enemy and turning the other cheek. But maybe I misremember.


It's a Pascal's wager. If you're convinced Armageddon is going to happen at some point, then you should do all you can to prepare for it happening in your lifetime. And that approach is explicitly encouraged in the Bible: "You do not know the day or hour", etc.

Right, "you do not know the day or the hour", not "you know that the day will be sometime between 2026 and 2076". I understand being prepared and whatnot. I don't understand the certainty of the date. Even the Bible says that it's unknown.

Not only is there no date, it explicitly says the time and date is not known to us.

The closest we have to a "date" is Jesus claiming the current generation wouldn't pass away before the end times arrived, which obviously didn't happen. So even the "Son of God" got it wrong.

Wow thousands of years of theology all got it wrong, including Thomas Aquinas and some of the smartest people who ever lived. If only they had your brilliant HN thesis they could have saved so much time and understood so much more.

Owwie looks like I'm going to need some lotion for that sick burn.

Better to get used to the burning sensation

The same self-centeredness that drove man to think that Earth was the center of its Universe.

See also: bean soup / "what about me?*


I believe it was related to both Israel gaining statehood after WW2, and the panic of nuclear disaster leading up to the end of the Cold War. It feels like an idea that really took root in the minds of evangelical Baby Boomers and early GenXers, but likely has lost all meaning to millennials.

It would be more satisfying to learn why hash of nan is not guaranteed to be the same. It feels like a bug.

At the standards level, NaN payload propagation isn't guaranteed, regardless of any other issues.

> payload propagation isn't guaranteed

Yes and no:

"If an operation has a single NaN input and propagates it to the output, the result NaN's payload should be that of the input NaN (this is not always possible for binary formats when the signaling/quiet state is encoded). If there are multiple NaN inputs, the result NaN's payload should be from one of the input NaNs; the standard does not specify which."


My guess is that no one ever bothered to define hash(nan), which should, IMHO, be nan.

nan isn't anything. It's an early attempt at None when no/few (common) languages had that concept.

That python allows nan as an index is just so many kinds of buggy.


For binary operations, NaN values compare as unordered.

The IEEE 754 Specification requires that >,<,= evaluate to False.

Saying that two incomparable objects become comparable, let alone gain equality, would break things.

We use specific exponents and significands to represent NaNs but they have no numerical meaning.

I am actually surprised python got this correct, often NaN behavior is incorrect out of convenience and causes lots of issues and side effects.
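A quick sketch of those rules in Python, whose floats follow IEEE 754 on essentially all modern platforms:

```python
import math

nan = float('nan')

# Ordered comparisons involving NaN evaluate to False...
assert not (nan > 1.0)
assert not (nan < 1.0)
assert not (nan == 1.0)

# ...including comparison with itself; != is the one exception.
assert not (nan == nan)
assert nan != nan

# The reliable way to test for NaN is therefore a predicate, not ==.
assert math.isnan(nan)
```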


Probably just due to encoding. NaN is all 1s for the exponent and a non-zero mantissa, so that's 2^23 - 1 possible values per sign bit for f32.

The hash is the same. But a hash set has to use == in case of equal hashes (to resolve collisions).

It's not always the same:

  >>> hash(float('nan'))
  271103401
  >>> hash(float('nan'))
  271103657
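A sketch of what's going on, assuming a recent CPython (3.10+), where NaN hashes are derived from object identity rather than a fixed constant:

```python
a = float('nan')
b = float('nan')

# The same NaN object always hashes consistently with itself.
assert hash(a) == hash(a)

# NaN never compares equal, not even to itself.
assert a != b and a != a

# Set lookups check identity before equality, so the *same*
# NaN object is still deduplicated...
assert len({a, a}) == 1

# ...while distinct NaN objects each get their own slot
# (regardless of whether their hashes happen to collide).
assert len({a, b}) == 2
```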

Yes. The CPython hash algorithm for floats (https://github.com/python/cpython/blob/main/Python/pyhash.c#...) special-cases the non-finite values: floating-point infinities hash to special values modeled on the digits of pi (seriously! See https://github.com/python/cpython/blob/main/Include/cpython/...), and NaNs fall through ultimately to https://github.com/python/cpython/blob/main/Include/internal... which is based on object identity (the pointer to the object is used rather than its data).
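The infinity special-casing is easy to observe: CPython exposes the sentinel value through `sys.hash_info`, and it is literally the leading digits of pi:

```python
import sys

# CPython's sentinel hash for infinity: 314159.
assert sys.hash_info.inf == 314159

assert hash(float('inf')) == sys.hash_info.inf
assert hash(float('-inf')) == -sys.hash_info.inf
```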

Maybe it's that multiple bit patterns can be NaN and these are two different ones? In IEEE-754, a number with all the exponent bits set to 1 is +/-infinity if the fraction bits are all zero; otherwise it's NaN. So these could be values where the fractions differ. Can you see what the actual bits being set are?
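One way to check, sketched with `struct` to reinterpret a float as its raw 32-bit pattern. (Note: exactly which NaN payload you get when narrowing Python's double-precision NaN to f32 is platform-dependent, so only the exponent/fraction fields are asserted here.)

```python
import struct

def f32_bits(x: float) -> str:
    # Pack as IEEE 754 single precision, then read back the raw bits.
    (raw,) = struct.unpack('>I', struct.pack('>f', x))
    return f'{raw:032b}'

inf_bits = f32_bits(float('inf'))
nan_bits = f32_bits(float('nan'))

# Infinity: exponent all 1s, fraction all 0s.
assert inf_bits == '0' + '1' * 8 + '0' * 23

# NaN: exponent all 1s, fraction non-zero (the payload varies).
assert nan_bits[1:9] == '1' * 8
assert '1' in nan_bits[9:]
```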
