Hacker News | loganmhb's comments

Especially concerning in light of that METR study in which developers overestimated the impact of AI on their own productivity (even if it doesn't turn out to be negative) https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


I once saw an explanation, which I can no longer find, that what's really happening here is also partly that "man" and "woman" are very similar vectors which nearly cancel each other out, and that "king" is excluded from the result set to avoid returning identities, leaving "queen" as the closest remaining result. That's why you have to subtract and then add; single operations on their own don't work very well. Some semantic information is preserved that might nudge the result in the right direction, but not as much as the naive algebra suggests, and you can't really add up a bunch of these high-dimensional vectors in a sensible way.

E.g. in this calculator "man - king + princess = woman", which doesn't make much sense, and "airplane - engine", which has a plausible answer of "glider", instead gives "Czechoslovakia". Go figure.
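
A minimal sketch of the exclusion step, using made-up toy vectors rather than any real embedding model (the numbers below are purely illustrative):

    import numpy as np

    # Toy stand-in vectors -- purely made up for illustration, not taken from
    # any trained embedding model.
    vecs = {
        "king":  np.array([0.9, 0.1, 0.1]),
        "queen": np.array([0.8, 0.1, 0.3]),
        "man":   np.array([0.5, 0.5, 0.1]),
        "woman": np.array([0.5, 0.5, 0.2]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def analogy(pos, neg, exclude_inputs=True):
        # word2vec-style analogy: add the positives, subtract the negatives,
        # then return the nearest remaining word by cosine similarity.
        target = sum(vecs[w] for w in pos) - sum(vecs[w] for w in neg)
        candidates = [w for w in vecs
                      if not (exclude_inputs and w in pos + neg)]
        return max(candidates, key=lambda w: cosine(target, vecs[w]))

    print(analogy(["king", "woman"], ["man"]))                        # queen
    print(analogy(["king", "woman"], ["man"], exclude_inputs=False))  # king

With real embeddings the effect is reportedly the same: without the exclusion step, the nearest neighbour of king - man + woman is often just "king" itself, which is the point the explanation was making.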


I respect the forecasting abilities of the people involved, but I have seen that report described as "astonishingly accurate" a few times and I'm not sure that's true. The narrative format lends itself somewhat to generous interpretation and it's directionally correct in a way that is reasonably impressive from 2021 (e.g. the diplomacy prediction, the prediction that compute costs could be dramatically reduced, some things gesturing towards reasoning/chain of thought) but many of the concrete predictions don't seem correct to me at all, and in general I'm not sure it captured the spiky nature of LLM competence.

I'm also struck by the extent to which the first series from 2021-2026 feels like a linear extrapolation while the second one feels like an exponential one, and I don't see an obvious justification for this.



Plenty of people are ragging (justifiably) on Clean Code, but by contrast I really admire Ousterhout's commitment to balanced principles, and in particular to learning from non-trivial examples. Philosophy of Software Design is a great and thought-provoking read.


Philosophy of Software Design seemed more pragmatic to me.


Martin Kleppmann's book Designing Data-Intensive Applications is a great starting point if you're not familiar with it.


The roads might actually cost more than that:

https://www.strongtowns.org/journal/2019/4/30/my-city-has-ma...


I wonder how much of this is an illusion of precision that comes from pattern matching on content from filler sites like https://www.free-hosting.biz/division/16-divided-7.html (I do not recommend clicking the link, but the result appears there).


I was thinking the same thing, especially as we are talking about division, and the result is "correct" for 16/7 to a great number of digits.

See also the "x = x + x three times" example, where the result is not random but is what you'd get from doing the same thing two times instead of three (i.e. half the correct value). That heavily smells like it has read sites that had nearly the same code on them.
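
For reference, here's the exact arithmetic in plain Python (this says nothing about how the model produces its output; it just shows what the correct values are):

    # 16 / 7 as a float already carries ~16 significant digits of the
    # repeating decimal 2.285714...
    print(16 / 7)   # 2.2857142857142856

    # "x = x + x three times": three doublings multiply by 8; stopping after
    # two doublings multiplies by 4, i.e. half the correct result.
    x = 3
    for _ in range(3):
        x = x + x
    print(x)        # 24 (two iterations would give 12)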


The question is not whether perceptions are true -- that's irrelevant. Undoubtedly perceptions present a skewed and unreliable view onto reality. It's whether they exist at all. You can't trick someone who isn't looking.

> Except everybody knows there is no "I", you're just a bundle of atoms, and a bundle that's changing from moment to moment.

This is begging the question in the other direction.


> The question is not whether perceptions are true -- that's irrelevant. Undoubtedly perceptions present a skewed and unreliable view onto reality. It's whether they exist at all.

Perceptions are not experience. Nobody denies the existence of perceptions; the question is whether perceptions carry something "extra", something "ineffable" that we call "qualitative experience", something that cannot even in principle be captured by a third-person objective description.

Algorithms and machines arguably have perceptions but not experience. Eliminativism is the position that we don't have experience either, we're only a collection of perceptions arranged in such a way that it leads us to the conclusion that our experience is real.

> This is begging the question in the other direction.

I'm not begging the question, because I'm not saying eliminativism is true because matter is all we can measure. I'm simply saying that it's demonstrably true that, by every measure currently available, we are just a bundle of atoms changing from moment to moment. The only people who claim otherwise and are given any kind of credibility are people who cite fallacious thought experiments like Mary's room as "evidence". Not very compelling, frankly.


I have this feeling sometimes too, but I think there is an important aspect of complex board games, in particular strategy games, that is missing from computer games. When you play a board game, you are forced to understand the rules (because you are the one executing them), so you are able to consider their implications for strategy more fully. (Of course, the mechanics must be tasteful in addition to complex for this to actually be a benefit.) In a computer game, my experience is that it's much easier to revert to playing by feel and lose that effect, and much harder to design a game where the full mechanics are obvious to the players. As a wargamer this is the main reason I prefer board wargames, even though they can't simulate in nearly as much detail as computer wargames.


I am a wargamer too! Are you familiar with Arty's "Crossfire"? That's a marvelous wargame (the best WWII game in my opinion) with a very basic set of rules. The complexity comes from scenario design and gameplay itself -- the rules are trivial.


Another fellow wargamer here! I’ll second that Crossfire is fantastic. It totally captures the rhythm and feel of what it’s trying to represent, without getting bogged down in irrelevant detail. It’s also one of the only really innovative sets of wargame rules I’ve ever played, most of which are essentially the same mechanics combined in different ways.

Another old set with some great ideas is Loose Files and the American Scramble, for the American Revolution. Originally a magazine article (from the 80s I think?), you can find the pdf floating around online. Three pages of rules, and super tightly focused on what makes the AWI unique. 100% worth a look if you have any interest in the period.

I think you’re right about computer games when it comes to hardcore number-crunching simulation. Computers being better able to portray fog of war is also a huge advantage. But I think tabletop games can do certain things better, especially when it comes to things like command friction. On the tabletop, when an order fails to go through or an unlucky break sees your units dissolve in a rout, you can easily understand what happened and it just becomes part of the story of the game. But in a computer game not having precise control can be very frustrating, like you’re at the mercy of opaque mechanics and the RNG.


Thanks for the recommendation! I'll definitely look for the PDF, because a recommendation from a fellow Crossfire fan carries a lot of weight for me ;)


Hah! Us grognards gotta stick together. Loose Files is definitely a little rough around the edges and shows its age: for example, it mentions that officers can send orders, but what that means is left as an exercise for the reader. But with some common-sense adaptations it plays really well.


I have heard excellent things about Crossfire but alas have not had the chance to play it -- hopefully one of these days!


When reading about Elasticsearch I was fascinated to learn that the JVM can often use 32-bit object pointers to address about 32GB of RAM when running in 64-bit mode: https://wiki.openjdk.java.net/display/HotSpot/CompressedOops


As someone who comes from the .NET world, this is something that pissed me off about configuring Java applications like Elastic.

If I've got a box with 512GB of ram, it seems I'm supposed to spin up multiple instances to satisfy this, all because the JVM has a hissy fit if you go over ~30GB. This then means worrying about replication and ensuring we don't have both the primary and replicas sitting on the same box.

It seems insane that this is an actual issue in 2017.


Can't you just turn compressed object pointers off, or am I misunderstanding the issue?


> If I've got a box with 512GB of ram, it seems I'm supposed to spin up multiple instances to satisfy this, all because the JVM has a hissy fit if you go over ~30GB.

What is the actual technical reason why the JVM cannot (easily?) address more than 32 GiB of RAM?


I don't believe that's the case (another commenter notes they run Solr processes with heaps up to 160GB), but they may have run into the compressed oops optimisation, or more precisely the end of it: because Java is a very pointer-heavy language, if the maximum heap size is under 32GB many 64-bit JVMs use a variant of tagged pointers, storing 35-bit addresses in 32 bits (since the lower 3 bits are always zero, they can be shifted in and out).

Except once you breach the 32GB limit, your 32-bit pointers grow to 64 bits, and depending on your application you might need to grow your maximum heap into the high 40s of GB to get back the same room for objects: https://blog.codecentric.de/en/2014/02/35gb-heap-less-32gb-j...
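
The arithmetic behind the ~32GB ceiling, assuming the JVM's default 8-byte object alignment, is easy to check:

    # Object addresses are 8-byte aligned, so the low 3 bits of every pointer
    # are zero; the JVM stores the address >> 3 in a 32-bit field and decodes
    # it as base + (compressed << 3).
    alignment_bits = 3                      # default 8-byte object alignment
    max_heap = (2 ** 32) << alignment_bits  # largest heap reachable by a compressed oop
    print(max_heap)                         # 34359738368 bytes
    print(max_heap // 2 ** 30, "GiB")       # 32 GiB

(If I remember right, the -XX:ObjectAlignmentInBytes flag can raise the alignment and push the ceiling higher, at the cost of more padding per object, but the default is 8.)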


The ES documentation goes into more detail on this.

https://www.elastic.co/guide/en/elasticsearch/guide/current/...


Thanks.

