Hacker News: kenjackson's comments

I feel like people forget that they're still allowed to program. You're still allowed to create workflows tying together LLMs and agents if you want. Almost all the tools and technology that existed before LLMs are still available to be used.
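To make that concrete, here is a toy sketch in Python: ordinary, deterministic code can still orchestrate an LLM step like any other function. The `llm_call` below is a hypothetical stand-in, not a real API; in practice it would wrap whatever client you use.

```python
# Ordinary code orchestrating an LLM step alongside deterministic steps.
# llm_call is a hypothetical stub standing in for a real API client.

def llm_call(prompt: str) -> str:
    # Stand-in: a real implementation would call a model here.
    return f"summary of: {prompt}"

def extract_keywords(text: str) -> list[str]:
    # A fully deterministic, testable step -- no model involved.
    return [w for w in text.lower().split() if len(w) > 4]

def pipeline(document: str) -> dict:
    # Plain old control flow ties the pieces together.
    return {
        "summary": llm_call(document),
        "keywords": extract_keywords(document),
    }
```

The point is only that the glue is still regular programming: you can unit-test every step except the model call itself.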

The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs, we have an explanation, but it's just something we cook up in our brains. Whether or not it's the truth is far more complex.

Oddly, despite LLMs being huge networks with billions of parameters, we still probably understand them better than we understand our own brains.


>The fallacy here is the assumption that humans know why we do what we do. Much like modern LLMs we have an explanation

Human brains and cognition do not work like LLMs, but even setting that aside, it's irrelevant. Existing machines can explain what they did; that's why we built them. As Dijkstra points out in his essay on 'the foolishness of natural language programming', the entire point of programming is: (https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...)

"The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid."

So to 'program' in English, when you have a comparatively error-free and unambiguous way to express yourself, is, in his words, like 'avoiding math for the sake of clarity'.


Okay, so you've established that LLMs aren't programming. They are unlike existing machines. The closest analogy we have for them is the human brain, whose neural architecture they loosely imitate.

Now, physics says that everything can be explained mathematically, including the human brain. Obviously, on some level, an LLM can be explained. But despite hundreds of years of science, we still don't understand the human brain. Some systems are just really complex and difficult to understand.

Given all of that, I see no reason to assume that we'll be able to understand LLMs anytime soon, especially since we keep building ever more complex ones.


The suggestion that this is the entire point of programming is absurd. In fact, it goes back to my original point: I have no idea why Dijkstra would say something so nonsensical, and likely neither did he.

What do you mean, "likely neither did he"? I literally linked you the piece in which he said it. And of course he of all people would make that (correct) point, because he was always the strongest advocate of formal correctness in programming languages. Again from his article:

"A short look at the history of mathematics shows how justified this challenge is. Greek mathematics got stuck because it remained a verbal, pictorial activity, Moslem "algebra", after a timid attempt at symbolism, died when it returned to the rhetoric style, and the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole."

LLMs are nothing but the exact reversal of this. Going from the system of computation that Boole gave you to treating your computer like a genie you perform incantations on literally sends you back to the medieval age.


"what do you mean "likely neither did he", I literally linked you the piece in which he said it."

Offering an explanation and actually knowing why you did something are two different things. That's exactly my point.

And as for the Dijkstra quote -- a good quote, but I think you (not Dijkstra) are conflating precision with motivation.


Who is using this on the SAT when there is Desmos?

So is this a 2400% reduction in the number of NSF board members?

That would leave one remaining member lol. I guess it would be an infinite percentage reduction by his math.
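For what it's worth, the arithmetic: under the standard definition, a reduction of a positive count can never exceed 100%. A quick sketch (the 24-member starting size is assumed purely for illustration):

```python
def percent_reduction(old: float, new: float) -> float:
    # Standard definition: reduction relative to the starting value.
    return (old - new) / old * 100

# Cutting a hypothetical 24-member board down to 1 member:
# (24 - 1) / 24 * 100 is about 95.8% -- nowhere near 2400%.
```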

This is a reference to RFK Jr.'s pronouncement that Trump has a "different way of calculating percentages". Seems apt to me in this context.

Very much another "Emperor's New Clothes" situation.

If the pathology was entirely within his own privately-owned company that'd be one thing, but Americans are going to continue to get hurt because of it.


> At some point, because these models are trained on existing data, you cease significant technological advancement

What makes you think that they can't incrementally improve the state of the art, and, by running continuously at scale, do it faster than we humans can?

The potentially sad outcome is that we continue to do less and less, because they eventually will build better and better robots, so even activities like building the datacenters and fabs are things they can do w/o us.

And eventually most of what they do is to construct scenarios so that we can simulate living a normal life.


I agree the article is smarter than the title makes it seem. And honestly, much better than the comments on HN. The article keeps diving deeper and asking questions. The comments here latch onto a single theory without even considering the counterarguments the article mentions. This is probably the best example of "read the article, not the comments."


The HN comments are sadly mostly just people pushing their favorite thing, whether COVID denialism, "everything is going bad because people are atheists" or whatever, without engaging with the article at all.


> COVID denialism

> "everything is going bad because people are atheists"

I don't think I have ever seen anybody express either of these opinions on HN, and if they tried, they would immediately get downvoted to oblivion.


Here's one example of the second, in this very thread: https://news.ycombinator.com/item?id=47877926


[flagged]


Where do you see the word 'Evil'? I don't see the word in the title or anywhere in the article itself.


Substack tends to select for this kind of author. Not daily posts about their life and their latest hot take, but a few deep articles every few weeks that make you think "hey, that's interesting." Although there is not necessarily an easy way to know where the author is coming from, how relevant their expertise really is, etc.

Even the "superstars" (Krugman, etc.) are posting things that could have been posted on Twitter, with the same level of outrage and polarization, but at least the content is well structured, and they are allowed to use sentences in paragraphs, with quotes, figures, links, etc.

Yes, I know, it's called blogging. I'm saying that the new hot thing, in 2026, is blogging.


Thank you kindly.


What does this mean?


This was a masterclass in science writing for laypeople. Going to find more from this author.


I don’t think these companies are hurting for access to code.


You can set your prompt to do that. You can have it be extremely skeptical. You can even make it contrarian, if you wanted to be extreme. My current prompt challenges me often, and wants to find weaknesses in my argument.
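As a sketch, using the widely used chat-message format (system/user roles): the actual model call is omitted since it depends on your provider, and the prompt wording below is just an example, not my actual prompt.

```python
# Steering a model toward skepticism via the system prompt.
# Example prompt text only; the model call itself is omitted.

SKEPTIC_PROMPT = (
    "You are a rigorous, skeptical reviewer. Challenge the user's claims, "
    "point out weaknesses in their arguments, and never agree just to be "
    "agreeable."
)

def build_messages(user_text: str) -> list[dict]:
    # Standard chat-completion-style message list.
    return [
        {"role": "system", "content": SKEPTIC_PROMPT},
        {"role": "user", "content": user_text},
    ]
```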

