
Carmack did almost exactly the same thing with the Trinity / Quake 3 engine: IIRC it was LCC (maybe TCC), one of the C compilers you can actually understand in its entirety as an individual.

He compiled C (with some builtins for syscalls) and then translated that to his own stack machine. But he also had a target for native DLLs: the same safe syscall interface, but native DLLs can segfault, so you have to trust them.

Crazy to think that in one computer program (which still reads better than high-concept FAANG C++ from elite legends; truly unique) this wasn't even the most dramatic innovation. It was maybe the third most dramatic revolution in one program.

If you're into this stuff, call in sick and read the .plan files all day. Gives me goosebumps.


Carmack actually deserves the moniker of 10x engineer. His work has truly reached far outside his domain because of the quality of his ideas and methodologies.

I have a bit I do where I do Carmack's voice in a fictional interview that goes something like this:

Lex Fridman: So of all the code you've written, is there any that you particularly like?

Carmack: I think the vertex groodlizer from Quake is probably the code I'm most proud of. See, it turns out that the Pentium takes a few cycles too long to render each frame and fails to hit its timing window unless the vertices are packed in canonically groodlized format. So I took a weekend, 16-hour days, and just read the relevant papers and implemented it in code over that weekend, and it basically saved the whole game.

The point being that not only is he a genius, but he also has an insane grindset that allows him to talk about doing something incredibly arcane and complex over a weekend -- devoting all his time to it -- the way you and I talk about breakfast.


Another weird thing about Carmack, now that you mention it -- and Romero, coincidentally -- is their remarkable ability to remember technical challenges they've solved over time.

For whatever reason the second I've solved a problem or fixed a bug, it basically autopurges from my memory when I start on the next thing.

I couldn't tell you the bugs I fixed this morning, let alone the "groodlizer" I optimized 20 years ago.

Oh, btw, jank is awesome and Jeaye is a great guy, and also a game industry dev!


I like this word, 'grindset'

Linking directly to C++ is truly hell, even just considering symbol mangling. The syntax <-> semantics relationship is ghastly. I haven't seen a single project tackle the C++ interface in its entirety (outside of clang). It seems nearly impossible.

There's a reason Carmack tackled the C ABI and not whatever the C++ equivalent would be.
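A minimal sketch of why the C boundary is the one people target: a hypothetical `mylib_add` exported with `extern "C"`, so the symbol escapes C++ name mangling entirely (all names here are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// C++ mangles this symbol: the linker-visible name encodes the
// namespace and parameter types (e.g. _ZN5mylib3addEii under the
// Itanium ABI), and the encoding differs between compilers.
namespace mylib {
inline int add(int a, int b) { return a + b; }
}

// extern "C" suppresses mangling: the exported symbol is plain
// "mylib_add", callable from C or from any language with a C FFI.
extern "C" int32_t mylib_add(int32_t a, int32_t b) {
    return mylib::add(a, b);
}
```

Every FFI story (Python's ctypes, Rust's `extern "C"`, Lua, the Quake 3 VM's syscall table) goes through a boundary shaped like this, which is exactly why nobody tackles "the C++ equivalent."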


There is no single C ABI (Windows compilers do things quite differently from Linux ones, etc.), and there is certainly no C++ equivalent.

The C ABI is basically per-platform (plus variations, like 32- vs. 64-bit). But you can get by quite well pretending there is something like a C ABI if you use <stdint.h>.
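A hedged sketch of what "pretending" looks like in practice, with a hypothetical boundary struct: every field is an exact-width type, so field sizes match on every platform, though alignment and padding still come from the platform ABI.

```cpp
#include <cassert>
#include <cstdint>

// A hypothetical cross-platform message struct. The <cstdint>
// exact-width types guarantee field sizes everywhere; the layout
// rules (alignment, padding) are still the platform's business,
// which is why this only gets you *most* of the way to "a C ABI".
struct Message {
    uint32_t id;         // exactly 4 bytes on every platform
    uint32_t length;     // exactly 4 bytes on every platform
    int64_t  timestamp;  // exactly 8 bytes on every platform
};

static_assert(sizeof(uint32_t) == 4, "exact-width by definition");
static_assert(sizeof(int64_t)  == 8, "exact-width by definition");
```

Contrast with `long`, which is 8 bytes on 64-bit Linux and 4 bytes on 64-bit Windows: that single difference is a good chunk of the "windows compilers do things quite differently" complaint above.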

On Unix the C ABI is the System V ABI, since C was literally created alongside Unix. And that is the ABI followed by pretty much every Unix successor: Linux, Apple's OSes, FreeBSD.

Windows has its own ABI.

The divergence is pretty much legacy: the x86_64 ABI was built by AMD plus the Linux world, while Microsoft worked with Intel on the Itanium ABI.


> windows compilers do things quite differently from linux ones, etc

Fuck windows lmao. If windows wanted to be included they would have made different decisions 30 years ago. They can still make an effort.

Until then: fuck 'em, who cares, who would bother making the effort? Simple apps can target the web; the rest of us can target vaguely-Unix systems.


Just parsing C++ is already a freaking hell.

It's no wonder that every other day a new mini C compiler drops, while no one even attempts to parse C++.


There is one pretty serious C++ parser project: https://github.com/robertoraggi/cplusplus

Wow, thanks! I didn't know about this project.

To parse C++ you need to perform typechecking and name resolution at the same time. And C++ is pretty complex, so it's not an easy task.
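The classic illustration of why: the meaning of `T * x` depends on what `T` names at that point, so the parser can't even build a tree without doing name lookup as it goes. A minimal sketch:

```cpp
#include <cassert>

struct T {};  // from here on, the name T refers to a type

// With T naming a type, `T * x` declares a pointer...
bool declares_pointer() {
    T * x = nullptr;
    return x == nullptr;
}

// ...but if a variable named T is in scope, the *same tokens*
// parse as multiplication. The parser cannot tell which without
// name resolution, which is exactly the point above.
int multiplies() {
    int T = 6, x = 7;  // T now names an int, shadowing the type
    return T * x;      // an expression: 6 * 7
}
```

C has a milder version of this (the "lexer hack" for `typedef` names); C++ adds templates and argument-dependent lookup on top, which is where "not an easy task" becomes an understatement.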


Any particular year?

Quake III Arena was released in 1999. It was open-sourced in 2005.

https://github.com/id-Software/Quake-III-Arena

https://en.wikipedia.org/wiki/Id_Tech_3

(from the source release you can see benreesman remembered right: it was LCC)


We could also stop gerrymandering the use/abuse line and just make a call as a society: we use drugs (most people most places most of history) or we don't fuck with drugs (most of the Islamic world, certain subcultures).

America is a Puritan-origin society with a temperance faction that has been everything from writing the Constitution to largely ignored; standards for alcohol, cannabis, and scripts fluctuate like hemlines: a typical adulthood will see multiple incompatible regimes of acceptable use vs. unacceptable abuse.

None of that has anything to do with compassionate provision of high-quality medical care to vulnerable people (a strict ethical and practical good). Compassionate provision of high-quality support is both expensive and leaves no room for insider/outsider lizard-brain shit, i.e. not a very American thing to do in the 21st century.

Our society needs to get its shit together on this, not further weaponize it.


It's kind of a bummer that "skins/themes" never caught on for programming languages. You see it once in a while; I think some compiler people at one of the FAANGs did an OCaml skin/theme/alternative syntax (Reason, I think). And there's stuff like Elixir that's kind of a new language but also an interface to an existing world (very cool, Valim is a brilliant guy).

But you could do it for almost anything. I would love the ability to hit a key chord in `emacs` and see things in an Algol-family presentation, or an ML family, or a Lisp.

Seems like the kind of thing that might catch on someday. Certainly the math notation in things like Haskell and TLA+ was a bit of a barrier to entry at one time. Very solvable problem if people care a lot.


This is a huge thing already, though?

The JVM is a virtual machine for which Java/Kotlin/Scala/Groovy/Clojure/Flix and a dozen others are front-end syntaxes.

Same with the CLR and C#, F#, and Visual Basic; and technically LLVM/GCC, if you want to be pedantic.


Not sure if you have an idea of how, but it feels like an unsolved problem to me. E.g. it is easy to theme a data structure, but if the layout matters it can be very hard to theme while also allowing free-form edits.

I find the swings to be wild: when you win with it, you win really big, but when you lose with it, it's a real bite out of your week too. And I think 10x to 20x has to be figurative, right? You can do 20x by volume, maybe, but to borrow an expression from Steve Ballmer, that's like measuring an airplane by kilograms.

Someone already operating at the very limit of their abilities doing stuff that is for them high complexity, high cognitive load, detail intense, and tactically non-obvious? Even a machine that just handed you the perfect code can't 20x your real output, even if it gave you the source file at 20x your native sophistication you wouldn't be able to build and deploy it, let alone make changes to it.

But even if it's just the last 5-20% after you're already operating at your very limit, trying to hit your limit every single day is massive: it makes a bunch of stuff on the bubble go from "not realistic" to "we did that".


There are definitely swings. Last night it took about 2 hours to get Monaco into my webpack-built Bootstrap template; it came down to CSS being mishandled, and Claude couldn't see the light. I just pasted the code into ChatGPT o3 and it fixed it on the first try. I pasted the output of ChatGPT into Claude and voilà, all done.

A key skill is to sense when the AI is starting to guess for solutions (no different to human devs) and then either lean into another AI or reset context and start over.

I'm finding the code quality increases greatly with the addition of the text 'and please follow best practices because will be pen tested on this!' and wow... it takes it much more seriously.


Doesn't sound like you were writing actual functionality code, just integrating libraries?

That's right for this part of the work.

Most of the coding needed to give people CRUD interfaces to resources is all about copy/pasting and integrating tools together.

Sort of like the old days when we were patching all those copy/pastes from Stack Overflow.

Too little of full stack application writing is truly unique.


Is there a way to have two agentic AIs do pair programming?

I did experiment with this, where Claude Code was the 'programmer' and ChatGPT was the software architect. The outcome was really solid. I made it clear to each that it was talking to another AI, and they really seemed to collaborate and respect the key points of each side.

It would be interesting to set up a MCP style interface, but even me copy/pasting between windows was constructive.

The time this worked best was when I was building a security model for an API that had to be flexible and follow best practices. It was interesting seeing ChatGPT compare and contrast against major API vendors, and Claude Code asking the detailed implementation questions.

The final output was a pragmatic middle-ground between simplistic and way too complex.


yes, definitely. https://github.com/BeehiveInnovations/zen-mcp-server is one example of people going off on this, but i'm sure there are many others

Let's be serious, what percentage of devs are doing "high complexity, high cognitive load, detail intense" work?

All of them, some just don’t notice, don’t care or don’t know this line of work is like that. Look at how junior devs work vs really experienced, self-aware engineers. The latter routinely solve problems the former didn’t know existed.

What does being experienced in a field of work have to do with self awareness?

Also, I disagree. For web dev at least, most people are just rewriting the same stuff in a different order. Even though the entire project might be complex from a high-level perspective, when you dive into the components, or even just a single route, it ain't "high complexity" at all. And since I believe most jobs are in web/app dev, which just recycles the same code over and over again, that's why there's a lot of people claiming huge boosts to productivity.


Most components are routine work, that you kinda snooze through. I like them as a kind of mental break: write tests, write code, run tests/linter.

The difficult part is reading a thousand lines of unfamiliar code to measure the impact of a fix, finding the fix by reasoning about the whole module, designing a feature for long-term maintainability,…

Note that all of these require thinking and not much coding. Coding is easy, especially when you've done all the (correct?) thinking beforehand.


> Someone already operating at the very limit of their abilities doing stuff that is for them high complexity, high cognitive load, detail intense, and tactically non-obvious?

How much of the code you write is actually like this? I work in the domain of data modeling; for me, once the math is worked out, the majority of the code is "trivial". The kind of code you are talking about is maybe 20% of my time. Honestly, it's also the most enjoyable 20%. I would be very happy if that were all I worked on while the rest was done by AI.


Creatively thinking about what a client needs, how the architecture for that would look, general systems thinking, UX, etc., and seeing that come to life in a clean, maintainable way: that's what lights up my eyes. The minutiae of code implementation, not so much; that's just an implementation detail, a hurdle to overcome. The current crop of tooling helps with that tremendously, and for someone like me, it's been a wonderful time, a golden era. To the people who like to handcraft every line of code to perfection, people who derive their joy from that, I think they benefit a lot less.

> Someone already operating at the very limit of their abilities doing stuff that is for them high complexity, high cognitive load, detail intense, and tactically non-obvious?

When you zoom in, even this kind of work isn't uniform - a lot of it is still shaving yaks, boring chores, and tasks that are hard dependencies for the work that is truly cognitively demanding, but themselves are easy(ish) annoyances. It's those subtasks - and the extra burden of mentally keeping track of them - that sets the limit of what even the most skilled, productive engineer can do. Offloading some of that to AI lets one free some mental capacity for work that actually benefits from that.

> Even a machine that just handed you the perfect code can't 20x your real output, even if it gave you the source file at 20x your native sophistication you wouldn't be able to build and deploy it, let alone make changes to it.

Not true if you use it right.

You're probably following the "grug developer" philosophy, as it's popular these days (as well as "but think of the juniors!", which is the perceived ideal in the current zeitgeist). By design, this turns coding into boring, low-cognitive-load work. Reviewing such code is, thus, easier (and less demoralizing) than writing it.

20x is probably a bit much across the board, but for the technical part, I can believe it - there's too much unavoidable but trivial bullshit involved in software these days (build scripts, Dockerfiles, IaaS). Preventing deep context switching on those is a big time saver.


> When you zoom in, even this kind of work isn't uniform - a lot of it is still shaving yaks, boring chores, and tasks that are hard dependencies for the work that is truly cognitively demanding, but themselves are easy(ish) annoyances. It's those subtasks - and the extra burden of mentally keeping track of them - that sets the limit of what even the most skilled, productive engineer can do. Offloading some of that to AI lets one free some mental capacity for work that actually benefits from that.

Yeah, I'm not a dev but I can see why this is true, because it's also the argument I use in my job as an academic. Some people say "but your work is intellectually complex, how can you trust LLMs to do research, etc.?", which of course, I don't. But 80% of the job is not actually intellectually complex; it's routine stuff. These days I'm writing the final report of a project and half of the text is being generated by Gemini; when I write the data management plan (which is even more useless) probably 90% will be generated by Gemini. This frees a lot of time that I can devote to the actual research. And the same when I use it to polish a grant proposal, generate me some code for a chart in a paper, reformat a LaTeX table, brainstorm some initial ideas, come up with an exercise for an exam, etc.


Yes, things that get resolved very quickly with AI include fixing linting errors, reorganizing CI pipelines, documenting agreed-on requirements, building well-documented commits, cleaning up temporary files used to validate dev work, building README.md's in key locations to describe important code aspects, and implementing difficult but well-known code; e.g. I got a trie security model implemented very quickly.

Tons of dev work is not exciting, I have already launched a solo dev startup that was acquired, and the 'fun' part of that coding was minimal. Too much was the scaffolding, CRUD endpoints, web forms, build scripts, endpoint documentation, and the true innovative stuff was such a small part of the whole project. Of the 14 months of work, only 1 month was truly innovative.


> Offloading some of that to AI lets one free some mental capacity for work that actually benefits from that.

Maybe, but I don't feel (of course, I could be wrong) that doing boring tasks takes away any mental capacity; they feel more like fidgeting while I think. If a tool could do the boring things, it might free my time to do other boring work that allows me to think (like doing the dishes), provided I don't have to carefully review the code.

Another issue (that I asked about yesterday [1]) is that seemingly boring tasks may end up being more subtle once you start coding them, and while I don't care too much about the quality of the code in the early iterations of the project, I have to be able to trust that whatever does the coding for me will come back and report any difficulties I hadn't anticipated.

> Reviewing such code is, thus, easier (and less demoralizing) than writing it.

That might well be true, but since writing it doesn't cost me much to begin with, the benefit might not be large. Don't get me wrong, I would still take it, but only if I could fully trust the agent to tell me what subtleties it encountered.

> there's too much unavoidable but trivial bullshit involved in software these days (build scripts, Dockerfiles, IaaS). Preventing deep context switching on those is a big time saver.

If work is truly trivial, I'd like it to be automated by something that I can trust to do trivial work well and/or tell me when things aren't as trivial and I should pay attention to some detail I overlooked.

We can generally trust machines to either work reliably or fail with some clear indication. People might not be fully reliable, but we can generally trust them to report back with important questions they have or information they've learnt while doing the job. From the reports I've seen about using coding agents, they work like neither. You can neither trust them to succeed or fail reliably, nor can you trust them to come back with pertinent questions or information. Without either kind of trust, I don't think that "offloading" work to them would truly feel like offloading. I'm sure some people can work with that, but I think I'll wait until I can trust the agents.

[1]: https://news.ycombinator.com/item?id=44526048


Yeah, I don't fuck with Docker jank and cloud jank and shit. I don't fuck with dynamic linking. I don't fuck with lagged-ass electron apps. I don't fuck with package managers that need a SAT solver but don't have one. That's all going to be a hard no from me dawg.

When I said that after you've done all the other stuff, I was including cutting all the ridiculous bullshit that's been foisted on an entire generation of hackers to buy yachts for Bezos and shit.

I build clean libraries from source with correct `pkg-info` and then anything will build against it. I have well-maintained Debian and NixOS configurations that run on non-virtualized hardware. I use an `emacs` configuration that is built-to-specifications, and best-in-class open builds for other important editors.

I don't even know why someone would want a model spewing more of that garbage onto the road in front of them. Once you're running a tight, optimized stack to begin with, the model emulates to some degree the things it sees, and those are also good.


Ok, that's great for you. Most of us don't have the luxury of going full Richard Stallman in their day to day and are more than happy to have some of the necessary grunt work automated away.

I live in the same world as everyone else and have to make a living same as anyone else.

Lagged-ass electron apps are a choice: run neovim or emacs or zed, I have Cursor installed, once in a while I need vscode for something, but how often is someone dictating my editor?

I have to target OCI container platforms for work sometimes, that's what Arion and nix2container are for. Ditto package managers: uv and bun exist and can interact with legacy requirements.txt and package.json in most cases.

Anything from a Helm chart to the configuration for ddagent can be written from nixlang and into a .deb.

My current job has a ton of Docker on GCE running TypeScript, I have to emit compatible code and configuration, but no one stands over my shoulders to make sure I'm doing the Cloud Approved jank path or having a bash script or Haskell program print it. I have a Jank Stack Compatibility Layer that builds all that nonsense.

Job after job there's a little setup cost and people look at me funny; six months in, my desk is an island of high-velocity sanity that people are starting to use, because I carry a "glory days FAANG" toolkit around and compile reasonable plain text into whatever ripoff cloud garbage is getting pimped this week.

It's a pretty extreme workplace where you can't run reasonable Unix on your own machine and submit compiler output instead of typing for the truly mandatory jank integration points.


You'll see a fairly even split amongst S-tier, "possibly headed for standardization" level libraries. I'd say there's a skew toward `#ifndef` in projects that aspire to the standard library and toward `#pragma once` in projects that are focused on a very specific vertical.

`#pragma once` seems to be far preferred for internal code; there's an argument for being strictly conforming if you're putting out a library. I've converted stuff to `#ifndef` before sharing it, but I think heavy C++ people usually type `#pragma once` in the privacy of their own little repository.

- `spdlog`: `#pragma once` https://github.com/gabime/spdlog/blob/v1.x/include/spdlog/as...

- `absl`: `#ifndef` https://github.com/abseil/abseil-cpp/blob/master/absl/base/a...

- `zpp_bits`: `#ifndef` https://github.com/eyalz800/zpp_bits/blob/main/zpp_bits.h

- `stringzilla` `#ifndef` https://github.com/ashvardanian/StringZilla/blob/main/includ...
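For concreteness, here are the two idioms (with a hypothetical guard name), plus a simulated double inclusion showing what the guard actually prevents:

```cpp
#include <cassert>

// Classic include guard: strictly conforming. Survives double
// inclusion because the macro is defined the first time through.
#ifndef MYLIB_FOO_HPP
#define MYLIB_FOO_HPP
inline int foo() { return 1; }
#endif  // MYLIB_FOO_HPP

// Simulated second inclusion of the same header text: the guard
// macro is already defined, so this whole region is skipped.
// Without the guard, redefining foo() would be a compile error.
#ifndef MYLIB_FOO_HPP
#define MYLIB_FOO_HPP
inline int foo() { return 2; }  // never compiled
#endif

// `#pragma once` does the same job keyed on file identity rather
// than a macro name: one line, no guard-name collisions, but it
// is non-standard (though universally supported in practice).
```

The practical tradeoff: `#ifndef` guards can collide if two headers pick the same macro name, while `#pragma once` can misfire when the same file is reachable via multiple paths (symlinks, vendored copies); which risk you prefer tracks roughly with the library-vs-internal split above.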


I don't even think we know how to do it yet. I revise my whole attitude and all of my beliefs about this stuff every week: I figure out things that seemed really promising don't pan out, I find stuff that I kick myself for not realizing sooner, and it's still this high-stakes game. I still blow a couple of days and wish I had just done it the old-fashioned way, and then I'll catch a run where it's like, fuck, I was never that good, that's the last 5-10% that breaks a PB.

I very much think that these things are going to wind up being massive amplifiers for people who were already extremely sophisticated and then put massive effort into optimizing them and combining them with other advanced techniques (formal methods, top-to-bottom performance orientation).

I don't think this stuff is going to democratize software engineering at all; I think it's going to take the difficulty level so high that it's like back when Dijkstra or Tony Hoare was a fairly typical computer programmer.


If you combine models/agents with formal systems, you can get them to come back when they're in a corner today: https://gist.github.com/b7r6/b2c6c827784d4e723097387f3d7e1d8...

This interaction is interesting (in my opinion) for a few reasons, but mostly because the formal system is like a third participant in the conversation, and that causes all the roles to skew around: it can be faster to keep the compiler output in another tab and give direct edit instructions (do such-and-such on line X, on line Y, on line Z) than to do anything else, whether that's going to do the edits yourself or trying to have the model figure out the invariant violation.

I'm basically convinced at this point that AI-centric coding only makes sense in high-formality systems, at which it becomes wildly useful. It's almost like an analogy to the Girard-Reynolds isomorphism: if you start with a reasonable domain model and a mean-ass pile of property tests, you can get these things to grind away until it's perfect.
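A hedged sketch of the "domain model plus a pile of property tests" loop, reduced to its simplest form (all names are illustrative, not from the linked gist): random inputs hammer a function and check invariants, and an agent can be iterated against the failures until every property holds.

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Properties under test for std::sort as a stand-in subject:
// size is preserved, output is ordered, and sorting is idempotent.
bool sort_properties_hold(std::vector<int> v) {
    std::vector<int> once = v;
    std::sort(once.begin(), once.end());
    std::vector<int> twice = once;
    std::sort(twice.begin(), twice.end());
    return once.size() == v.size()                   // no elements lost
        && std::is_sorted(once.begin(), once.end())  // actually ordered
        && once == twice;                            // idempotent
}

// Drive the properties with randomized inputs; a counterexample here
// is exactly the kind of artifact you hand back to a model/agent.
bool run_property_tests(unsigned seed, int iterations) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> len(0, 64), val(-1000, 1000);
    for (int i = 0; i < iterations; ++i) {
        std::vector<int> v(static_cast<size_t>(len(rng)));
        for (int &x : v) x = val(rng);
        if (!sort_properties_hold(v)) return false;
    }
    return true;
}
```

The point of the structure: the properties encode the domain model once, and then the grind of finding and fixing violations is mechanical, which is the part that tolerates being offloaded.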


I don't have anything terribly insightful to say other than that I really like this project. I loved using Clojure back during its brief time in the sun, it was a great time, and jank has all the same awesome vibes. I haven't used it on anything serious yet, but it seems like it would be great for, say, a `zig`-flavored approach to C++ builds. I'm doing a big C++ project ground up at the moment; I think I'm going to get jank into it and see if I can automate some stuff.

The compilation strategy is very much what Carmack did in Trinity and whether you got your inspiration there or independently had a great idea, that's good company to be keeping.

Keep it up!


I hadn't heard of Carmack doing something similar, but I will take that company any day. Thanks for the kind words!

The alternative to mafia capitalism in the grips of what Trading/Finance/Crypto Twitter calls `#CrimeSeason` is markets refereed by competent, diligent, uncorrupted professionals and public servants: my go-to example is Brooksley Born because that's just such a turning point in history moment, but lots of people in important jobs want to do their jobs well, in general cops want to catch criminals, in general people don't like crime season.

But sometimes important decisions get made badly (fuck Brooksley Born, deregulate everything! This Putin fellow seems like a really hard worker and a strong traditional man.) based on lies motivated by greed and if your society gets lazy about demanding high-integrity behavior from the people it admits to leadership positions and punishing failures in integrity with demotions from leadership, then this can really snowball on you.

Just like the life of an individual can go from groovy one day to a real crisis with just the right amount of unlucky, bit of bad cards, bit of bad choices, bit of bad weather, the same thing happens to societies. Your institutions start to fail, people start to realize that cheating is the new normal, and away you go. Right now we're reaping what was sowed in the 1980s; Gordon Gekko and yuppies would love 2025 (I'd like to think Reagan would feel a bit queasy about how it all went, but who knows).

Demand high-integrity behavior from leaders. It's not guaranteed to work at this stage of the proceedings, but it's the only thing that has ever worked.


Right. The point is that in frothy market conditions and a general low-integrity regime in business and politics there is a ton of incentive to exploit FOMO far beyond its already "that's a stiff sip there" potency, and this leads to otherwise sane and honest people getting caught up in doing concrete things today based on total speculation about technology that isn't even proposed yet. A good way to really understand this intuitively is to take the present-day intellectual and emotional charge out of it without loss of generality: we can go back and look at Moore's Law, for example, and the history of how the sausage got made on reconciling a prediction of exponential growth with the realities of technological advance. It's a fascinating history; there's at least one great book [1], and the Asianometry YouTube documentary series on it is great as always [2].

There is no point in doing business and politics and money-motivated stuff based on the hypothetical that technology will become self-improving. If that happens we're through the looking glass, not in Kansas anymore, "Roads? Where we're going, we won't need roads." It won't matter, or at least it won't be what you think; it'll be some crazy thing.

Much, much, much, much more likely is that this is like all the other times: we made some real progress, people got too excited, some shady people made some money, and we all sobered up and started working on the next milestone. This is by far both A) the only scenario you can do anything about and B) the only scenario honest experts take seriously, so it's a double "plan for this one".

The quiet ways that Jetson Orin devices and shit will keep getting smarter and more trustworthy to not break shit and stuff, that's the bigger story, it will make a much bigger difference than snazzy Google that talks back, but it's taking time and appearing in the military first and comes in fits and starts and has all the other properties of ya know, reality.

[1] https://www.amazon.com/Moores-Law-Silicon-Valleys-Revolution...

[2] https://www.youtube.com/@Asianometry



