Hacker News | ActivePattern's comments

A “sufficiently smart compiler” can’t legally skip Python’s semantics.

In Python, p.x * 2 means dynamic lookup, possible descriptors, big-int overflow checks, etc. A compiler can drop that only if it proves they don’t matter or speculates and adds guards—which is still overhead. That’s why Python is slower on scalar hot loops: not because it’s interpreted, but because its dynamic contract must be honored.
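To make that concrete, here is a minimal sketch (the `Point` class is hypothetical, purely for illustration) of why `p.x * 2` can't be compiled down to a load-and-shift: the attribute access may run arbitrary code, and the multiply must handle arbitrary-precision integers.

```python
# Sketch: why `p.x * 2` can't legally become a simple load-and-shift.
# `Point` is a made-up class for illustration.

class Point:
    def __getattribute__(self, name):
        # Every attribute access may execute arbitrary Python code.
        if name == "x":
            return 2 ** 100                  # the value can be any object, incl. a big int
        return object.__getattribute__(self, name)

p = Point()
result = p.x * 2                             # dynamic lookup + arbitrary-precision multiply
assert result == 2 ** 101                    # no overflow: Python ints are unbounded
```

A compiler that emitted a 64-bit multiply here would silently produce the wrong answer, so it has to either prove this can't happen or guard against it.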


In Smalltalk, p x * 2 has that same flow, and even worse: suppose the value returned by the p x message send does not understand the * message. Execution breaks into the debugger, the developer adds a * method to the object's class via the code browser, hits save, and exits the debugger with a redo, and the execution completes successfully.

Somehow Smalltalk JIT compilers handle it without major issues.


Smalltalk JITs make p x * 2 fast by speculating on types and inserting guards, not by skipping semantics. Python JITs do the same (e.g. PyPy), but Python’s dynamic features (like __getattribute__, unbounded ints, C-API hooks) make that harder and costlier to optimize away.

You get real speed in Python by narrowing the semantics (e.g. via NumPy, Numba, or Cython) not by hoping the compiler outsmarts the language.
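As a rough illustration of what "narrowing the semantics" buys you: a NumPy array fixes the element type up front, so an elementwise multiply is one typed loop in C rather than a million dynamic dispatches and overflow checks.

```python
import numpy as np

# Narrowed semantics: the dtype is fixed at creation, so `xs * 2` runs as
# a single typed C loop -- no per-element attribute lookup, no big-int check.
xs = np.arange(1_000_000, dtype=np.int64)
doubled = xs * 2

assert int(doubled[10]) == 20
```

The trade-off is exactly the one described above: you give up the dynamic contract (an int64 here really can overflow) in exchange for speed.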


Python's JIT could do the same: it could check whether __getattribute__() is the default implementation and replace the call with a direct attribute read of p.x. This would work only for classes that have not been modified at runtime and that do not implement a custom __getattribute__.
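That check can even be expressed in plain Python (a real JIT would do it at the machine-code level with an inline-cache guard; `Plain` and `Hooked` are hypothetical classes for illustration):

```python
# Sketch of the guard a JIT could emit: the attribute access can be
# inlined only while the class still uses the default lookup machinery.

class Plain:
    def __init__(self):
        self.x = 21

class Hooked:
    def __getattribute__(self, name):
        return 0                     # custom hook: inlining would be unsound

def uses_default_lookup(obj):
    # True when neither the class nor any base overrides __getattribute__
    return type(obj).__getattribute__ is object.__getattribute__

assert uses_default_lookup(Plain())
assert not uses_default_lookup(Hooked())
```

The guard is cheap, but it must stay valid: if the class is mutated later, any machine code compiled under this assumption has to be invalidated.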

People keep forgetting about image-based development, the debugger, metaclasses, messages like become:, ...

Everything dynamic that gets used as an excuse for Python's slowness, Smalltalk and Self have as well, and then some.



Edit-and-continue is available in lots of languages with JIT runtimes.

First, we need to add the word 'only': "not ONLY because it’s interpreted, but because its dynamic contract must be honored." Interpreted languages are slow by design. This isn't bad, it just is a fact.

Second, at most this describes WHY it is slow, not that it isn't, which is my point. Python is slow. Very slow (esp. for computation heavy workloads). And that is okay, because it does what it needs to do.


Ironically, this comment reads like it was generated by a Transformer (ChatGPT, to be specific).


Is it the em dashes?


It's an OpenAI researcher who's worked on some of their most successful projects, and I think the criticism in his X thread is very clear.

Systems that can learn to play Atari efficiently are exploiting the fact that the solutions to each game are simple to encode (compared to real world problems). Furthermore, you can nudge them towards those solutions using tricks that don't generalize to the real world.


Right, and the current state of the tech - from accounts I’ve read, though not experienced first-hand - is that the “black box” methods of AI are absolutely questionable when delivering citations and a factual basis for their conclusions. As in, the most basic real-world challenge of getting facts right is still a bridge too far for OpenAI, ChatGPT, Grok, et al.

See also: specious ethics regarding the training of LLMs on copyright protected artistic works, not paying anything to the creators, and pocketing investor money while trying to legislate their way around decency in engineering as a science.

Carmack has a solid track record as an engineer, innovator, and above-board actor in the tech community. I cannot say the same for the AI cohort, and I believe such a distinction is important when gauging the validity of critique or self-aggrandizement by the latter, especially at the expense of the former. I am an outlier in this community because of this perspective, but as a creator who is knowledgeable enough about tech to see things through this lens, I am fine being in this position. Ten years from now will be a great time to look back on AI the way we’re looking back at Carmack’s game-changing contributions 30 years ago.


That sounds like an extremely useful insight that makes this kind of research even more valuable.


I am quite confident that an LLM will never beat a top chess engine like Stockfish. An LLM is a generalist -- it contains a lot of world knowledge, and nearly all of it is completely irrelevant to chess. Stockfish is a specialist tuned specifically to chess, and hence able to spend its FLOPs much more efficiently towards finding the best move.

The most promising approach would be to tune a reasoning LLM on chess via reinforcement learning, but fundamentally, the way an LLM reasons (i.e. outputting a stream of language tokens) is so much more inefficient than the way a chess engine reasons (direct search of the game tree).
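For a sense of what "direct search of the game tree" means, here is a minimal negamax sketch over a toy game (Nim standing in for chess, since a full chess search won't fit in a comment): players take 1-3 stones and taking the last stone wins.

```python
# Minimal game-tree search (negamax) over a toy Nim position.
# Players alternately take 1-3 stones; whoever takes the last stone wins.

def negamax(stones):
    """Return +1 if the side to move wins with perfect play, else -1."""
    if stones == 0:
        return -1                    # the previous player took the last stone: we lost
    return max(-negamax(stones - take) for take in (1, 2, 3) if take <= stones)

# Known result: multiples of 4 are losses for the side to move.
assert negamax(4) == -1
assert negamax(5) == 1
```

Every "thought" here is one tree node - a few machine instructions - whereas an LLM spends billions of FLOPs emitting each token of its reasoning. That's the efficiency gap the comment is pointing at.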


Wouldn't the extra stamina have been rewarded, assuming creatine allowed you to perform extra repetitions? All exercises were done to repetition maximum.


You're right, I read that too quickly.


You may want to delete your comment as it is spreading misinformation about this study.


They can't delete the comment because 1) it is past the 2-hour deletion window and 2) it has replies.


The study seems to have controlled for training intensity -- all exercises were done to repetition maximum.


If you read the study, you can see that they controlled for training intensity. All exercises were done to repetition maximum.


The idea of an asymmetric Chess starting position is very interesting, although it does introduce more risk of one side starting with a big advantage (perhaps this has been analyzed).

I also like that in this variant, castling works like normal -- that is one of the most unintuitive aspects of Chess960.


We fixed the castling issue with Chess 744. Thoughts?

https://sites.google.com/view/chess-744/todays-744-game


Yes, that appears to be another good solution to the castling trickiness! And probably how you assume castling works in Chess960 if you weren't given the rules.


But that's the problem with Chess 960 - that's not how you castle at all in that variant.


One option is bidding with points: player A looks at the position and bids X < 0.5 points for the privilege of picking the color. Player B either accepts or raises the bid; the process repeats until one of them accepts or bids 0.5 points. The match is then played for the remaining 1 - X points.

Or, more simply, player A is shown N random positions, picks one of them and lets player B pick the color.
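The bidding scheme can be sketched as a small simulation. Everything here is an assumption for illustration: `eval_a` and `eval_b` are each player's (hypothetical) estimate of how many points picking the color is worth to them, and a discrete bid step is assumed so the auction terminates.

```python
# Sketch of the points-bidding scheme. eval_a / eval_b are hypothetical
# valuations (in points, between 0 and 0.5) of the right to pick the color.

def run_auction(eval_a, eval_b, step=0.05):
    """Return (color_picker, price): who picks the color and what they pay."""
    bid, players = 0.0, ("A", "B")
    turn = 1                              # B responds to A's opening bid of 0
    while True:
        current, other = players[turn], players[1 - turn]
        value = eval_a if current == "A" else eval_b
        if bid + step > value or bid >= 0.5:
            return other, bid             # current player accepts: the other picks color
        bid = min(bid + step, 0.5)        # otherwise raise and pass the turn
        turn = 1 - turn

winner, price = run_auction(eval_a=0.30, eval_b=0.10)
# The player who values color choice more wins the auction, paying roughly
# the other player's valuation -- the match is then worth 1 - price points.
```

With arbitrarily fine steps the price converges to the lower valuation (or toward 0.5 when both sides agree the position is nearly decisive), which is the degenerate case raised in the reply below.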


I don't think that bidding system really works. If one side is strongly favored in the opening, the optimal bid would be essentially 0.4999999999... so that you can pick the color and win the game by a slim margin. Players then increase the bid with tiny steps ad infinitum.

The other idea works but is essentially just discarding all of the lopsided starting positions, in which case they might as well not be in the game.


Or you just run things like duplicate bridge. Everybody plays the same set of randomized boards.


> one side starting with a big advantage

I have never played this variant of chess, but on the surface it seems that having both bishops on the same color would be a sizeable disadvantage.

The other randomized pieces (queens and knights) can get to any square, so having two knights start on dark squares, for instance, doesn't seem to really matter.


The bishops are required to be on different colors.


Who chooses the "correct" use of words? Is it you? Wikipedia disagrees with you: https://en.wikipedia.org/wiki/Gross_margin. Maybe you should make your own encyclopedia.


"Gross margin" seems a suitable alternative term.


You mean some person on Wikipedia disagrees with him.



The answer will likely match what the reasoning steps lead it to, but that doesn’t mean the computations the LLM performs to get that answer are necessarily approximated by the outputted reasoning steps. E.g. you might have an LLM that is trained on many examples of Shakespearean text. If you ask it who the author of a given text is, it might give some detailed rationale for why it is Shakespeare, when the real answer is “I have a large prior for Shakespeare”.

