
Software is gradually settling into layers, like sediment. Even with most modern "hardware level" applications, there are still layers of OS magic happening under the hood.

We're now into maybe decade 4-ish of software dependency.

There was a scene in one of Alastair Reynolds's books where a character was basically a computational archaeologist. That resonates with me a lot.

In a couple of centuries, it's not a terrible prediction that software stacks will have accumulated so much cruft that debugging certain issues will require immense financial effort: digging through the layers of software commits and historical proposed merges, plus adding extra tests on top of bedrock code and its fixes.

No idea what this will look like. I imagine easily recreated functionality will keep reappearing in the pips and npms of each decade, regardless of prior art. Every new programmer wants to leave a stamp on the world.

There's some saying about history repeating itself, but I'm dumb and don't remember.




> There was a scene in one of Alastair Reynolds's books where a character was basically a computational archaeologist.

Not sure about Alastair Reynolds, but there’s Pham Nuwen in Vernor Vinge’s A Deepness in the Sky who is indeed that: a programmer archaeologist (and programmer-at-arms to exploit the weaknesses in the other party’s software midden).


I love that book.

I liked that Pham founds the Qeng Ho specifically so that he could be the one commissioning new software for the ships. He wanted to be the one putting hidden backdoors, secret passwords, and booby-traps into it. Of course it’s built on Unix, but if you’re paying attention you’ll notice that when someone enters a command they type “a column of text”.

He also built interstellar communication networks specifically so that civilizations that fell could rebuild more quickly (at least once they reinvented radio) and could learn the Qeng Ho language and systems in the process. But he also put in encrypted channels so that Qeng Ho traders would have inside information and therefore an edge against outside traders.

And then Nau gets his hands on it all, with his crew of Focused to examine every line of source code…


And even with Nau's Focused working for years, they don't uncover the backdoors in the localizers, between the sheer amount of code and whatever obfuscation Pham added.


Spoilers! Also, they had very complex “ensemble behavior” that resisted analysis. I once wrote some code that I was almost not clever enough to debug, so I can believe it.


Today, Vinge could have written: "with his LLMs to examine every line of source code ..."


In fairness to Vinge, that wasn’t a direct quote :)


Ah actually I think that's right. Thanks!


Not only that, but also the concept of open-source development is not the panacea we believe it to be. Bear with me.

Software is extremely complex. Even when it is open-source, no one except the original developers and very dedicated people will attempt to patch the myriad issues and bugs they encounter daily. And even if we do spend the time to track down and fix a bug, there's a political and diplomatic game to convince the maintainers to incorporate your fix. It is not uncommon for a PR to just sit, unreviewed, for years. Open-source does not and will never scale, because software is orders of magnitude too complex.

Outside of software, this problem is lessened because maintainership is distributed: if your car engine breaks, you do not depend on your manufacturer to have enough time and energy to fix it. There are thousands of licensed garages that can do it for you. And, not least, the real world is much simpler than any piece of software, which is effectively completely ad-hoc: knowing how Chrome works will not help you fix this Firefox issue, whereas if you can fix the carburettor on a Honda car, you probably can do the same on a FIAT.

Open-source/distributed development and bug fixing worked much better when computers had 64 kB of RAM and programs were no more than 10 pages long.

EDIT TO CLARIFY: I'm not talking about open-source vs commercial, or other types of governance. I'm talking more abstractly about the fact that having source available and open contributions does not noticeably increase the number of bugs fixed. This comment is about software complexity and the logistics of distributed bugfixing.


>Open-source does not and will never scale, because software is orders of magnitude too complex

But there are examples of long-time open source projects all over the place. This sounds like an argument for open source.

If you work for a for-profit company you face two different problems: overnight the company can disappear, and the IP is lost/locked forever; problems are only ever fixed if there's a profit incentive. That works, a lot, but it's not perfect either.


Mind you, I'm not talking about open-source vs commercial, or other types of governance. I'm talking more abstractly about the fact that having source available and open contributions does not noticeably increase the number of bugs fixed.

It's a discussion about software complexity and logistics of distributed bugfixing, not organisational.


You said “does not noticeably increase”; you need a reference point for “increase”. If you’re not comparing open source to other types of governance, then what are you comparing it to?


The problem is also in (lack of) modularity that makes fixing small things disproportionately onerous.

If your car's engine breaks, it's usually pretty localized. Most of the time it's enough to open the hood and remove a few small parts to reach it. If your stoplight breaks, again, the scope is pretty local.

To fix or to even diagnose the issue with a tooltip in Firefox, you have to rebuild it whole, and it's about as involved and long as rebuilding a car. And even though Mozilla invented Rust to make Firefox development easier, it's far, far from just saying `cargo build`.
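For a sense of scale, even the happy path looks something like this (a rough sketch from memory; exact steps vary by platform and may be out of date):

    # one-time setup: a multi-gigabyte checkout plus toolchain bootstrap
    git clone https://github.com/mozilla/gecko-dev.git && cd gecko-dev
    ./mach bootstrap   # installs compilers, Rust, and system dependencies
    ./mach build       # a full build can take an hour or more on a laptop
    ./mach run         # launch the freshly built browser

And that's before you've changed a line or run a single test.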

This raises the barrier to entry quite noticeably, even if you are an experienced software mechanic but never worked in a Mozilla-oriented garage. But even if you fixed the issue on your platform, you now have to test that the fix did not introduce a regression on at least two other major platforms (or more, depending on the component).

No wonder it's much easier to hack on smaller projects, or on projects written in JS, Python, elisp, you name it.


> If your car's engine breaks, it's usually pretty localized

I think your point here is that typically with software, changing one line of code means that you need to rebuild the entire executable.

Your analogy breaks down pretty fast, though. While fixing one thing on a car _never_ requires totally rebuilding the thing from scratch, there are still tons of interlocking dependencies and architectural challenges that can impact the time/complexity required to change out a part. (See the effort required in most vehicles to change simple wear items such as a timing chain or a water pump....)

In my experience of software, most of the "rebuild pain" is self-inflicted by the project maintainers (poor automation/containerization of the build process). Software has the luxury of abstraction and automation that can reduce the build effort required from an individual in ways that a mechanic could only dream of!


I agree: the pain of rebuilding that is characteristic of large software is not inherent to software in general. Software can be highly modular, allowing for fast and flexible changes of many important parts. You don't have to rebuild a Linux or Windows kernel to update a driver; usually you don't even need to reboot it.
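A minimal illustration on Linux (assuming the driver is built as a module and nothing critical is using it):

    # swap a NIC driver in place: no kernel rebuild, no reboot
    sudo modprobe -r e1000e   # unload the old module
    sudo modprobe e1000e      # load it (or a patched build) again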

But Firefox specifically, and a few other old, large, and highly multiplatform projects had more important things to do than to make building them easy, and in doing so made contributing to them harder.

It's like a Honda Civic vs some Jaguar.


> Your analogy breaks down pretty fast, though

Yeah this analogy doesn't make sense to me. My dad has seen head gaskets fail on multiple previous cars of his. Each time, the labour cost to replace the head gasket (a fairly cheap part) exceeded the value of the car, so he sold it to a scrapyard instead of ordering the repair.

Software does not have anything remotely analogous to this. One small bug somehow requiring you to throw out the entire codebase and start over from scratch?


> Your analogy breaks down pretty fast, though

> Yeah this analogy doesn't make sense to me. My dad has seen head gaskets fail on multiple previous cars of his. Each time, the labour cost to replace the head gasket (a fairly cheap part) exceeded the value of the car, so he sold it to a scrapyard instead of ordering the repair.

The key word there is labour. I change my own head gaskets.

Giving up a Sunday (which is not worth money to me) to replace a head gasket is analogous to giving up a Sunday to track down and fix a software annoyance of mine (yes, I've committed fixes to a few open source projects).

So the analogy does sorta fit, for open source anyway.


It’s worse than that. ???’s law states that, over time, well-factored, easily replaceable modules will be replaced by software that is not.

For example, compare systemd-resolved to bind or unbound.

Here’s a 2018 article explaining all the ways to configure and talk to it (back then; it is probably more complicated now):

https://moss.sh/name-resolution-issue-systemd-resolved/

Among other things, it allocates an IP address to listen on, and both depends on and is a dependency of a decades-old standardized file that other stuff relies on.

That means it has a circular dependency with network interface bring-up, and with external DNS server configuration. The article goes on for a dozen more pages explaining other issues like this.
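To make the layering concrete: on a typical systemd distro, /etc/resolv.conf (the decades-old file in question) is no longer a plain config file but a symlink into systemd's own runtime state, pointing everything at the local stub listener (output from memory; details vary by distro):

    $ readlink /etc/resolv.conf
    ../run/systemd/resolve/stub-resolv.conf
    $ grep nameserver /etc/resolv.conf
    nameserver 127.0.0.53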


Yes this is a major problem. I've been thinking hard about this space, the future of software engineering, and the conceptual similarity between the idea of containers the world is coalescing around, and Alan Kay's model of object orientation.

Our issue today is that programming is too low level. We're still figuring out the standardised atomic components the software of the future can be built from, but in the meantime we're rewriting the same concepts, ideas and subsystems in every project. Contributing to a new project is akin to learning a new language, a new culture.


Those points remind me of the topics of Bret Victor's "The Future of Programming" DBX talk:

https://www.youtube.com/watch?v=8pTEmbeENF4


This is probably ignorance speaking?

Industry-wise, recalls are very much a thing, and they nearly bankrupt companies on the regular. On car engines. Those are the ‘programming’ (aka design) bugs from the manufacturer.

The difference is not quite what you’re presenting: individual cars/trucks are so expensive that one-off fixes (replacing the equivalent of RAM, or a CPU, or rigging some weird combination of drives/accessories), even on really old individual machines, are economical. That’s what those repair shops are doing.

Also, changing anything physical on a car (or even having a human of a known level of knowledge verifiably look at it) is expensive and difficult to scale. And unlike computers, cars/trucks are 90% or more physical.

And while a single truck or car breaking down is localized, so is a typical PC, tablet or phone.

Computers are typically so cheap, and the technology is progressing so rapidly, that it’s rarely economically worthwhile to do that kind of thing. Occasionally, yes. But certainly not at the scale of cars/trucks.

Having someone do custom work to fix the design (aka ‘fix the programming’) is relatively rare, and more of a hobbyist thing. But does happen (project cars, open source hobby projects). Though exceptions abound for simple fixes. (Which can also typically be done for individual computers through normal configuration/customization settings, or some software).

Cars and trucks are very complicated, just in ways that a techie may not recognize. Bolt patterns. Offsets. Metric vs SAE. Vacuum line levels. Metallurgy. Heat treatments. Tolerances. Hell even DC voltage levels can come into play sometimes (12v vs 24v). Vehicle communication bus type (CAN vs something else).

And working in physical parts is extremely expensive, error prone, and slow.


Car analogies don't go far this time. If a mirror breaks, I replace it with another mirror, but my car never changes its features overnight. My Thunderbird updated to version 115 yesterday. I fiddled with settings to bring it back as close as possible to its previous UI, but I could do nothing about the incompatibility with an addon that lets me decide how to order folders and subfolders [1]

As a partial workaround I started using favorites but overall v 115 is broken and can't be repaired easily. My car is still doing what it was doing last week.

[1] https://github.com/protz/Manually-Sort-Folders/issues/199


>> "but my car never changes its features overnight."

It's been a long time since I've had a new car, but from the sound of it, cars with stable feature sets are on the way toward history. Everything is fly-by-wire now, and features are increasingly implemented through software exposed through a touch screen.


This is because software is creeping inside the cars.

It is not because cars have an inherently changeable feature set.


They do increasingly have an inherently changeable feature set. That's the point.


Does the previous version no longer run? Can it not still be installed?


Probably, but I'm afraid it will break when it gets too distant from the system libraries. It's not that I need new features, I could probably use Thunderbird from 20 years ago and notice only because of the look of the UI.


> if your car engine breaks, you do not depend on your manufacturer to have enough time and energy to fix it

If it's a design problem rather than the parts just wearing down… unless it is life threatening on a large scale, it just won't be taken care of.


All of your arguments work much better in favor of Open Source and against closed-source. After all, in Open Source, maintainership can be distributed, but a single closed-source shop is much more likely to simply declare bug bankruptcy and refuse to even consider a fix, at which point absolutely nobody else can do it.


I haven't mentioned anything about closed-source development. I'm talking about software complexity here. I've updated my comment to clarify.


Still:

> And even if we do spend the time to track down and fix a bug, there's a political and diplomatic game to convince the maintainers to incorporate your fix.

That's why forking is one of the Four Freedoms. It's written into the licenses.

Granted that you need to be dedicated to even attempt to fix complex software. However, Open Source can draw from a larger pool of potential talent, and it's more likely that someone out there will care than someone in a company. What's that saying? "If you're one in a million, there's three of you in New York."?

> And, not least, the real world is much simpler than any piece of software, which is effectively completely ad-hoc: knowing how Chrome works will not help you fix this Firefox issue, whereas if you can fix the carburettor on a Honda car, you probably can do the same on a FIAT.

Aside from the difficulty of finding a carburetor on a modern car, this is about software complexity, not Open Source/closed-source per se. Fixing problems in a badly architected codebase is always difficult, time-consuming, and likely to introduce more bugs. Closed source doesn't make it any better.


I have never said that closed source makes it better. I don't know how to make that more clear.

You're focusing too much on politics; I'm focusing on Stallman wanting the source code of his printer to be available, so he could change it to better suit his needs. I'm just saying that in 2023, even if your printer is open-source, ain't nobody got time to dive into hundreds of thousands of lines of code to change it.


> I'm just saying that in 2023, even if your printer is open-source, ain't nobody got time to dive into hundreds of thousands of lines of code to change it.

I disagree. I disagree wholeheartedly, based on both practical projects and the retrocomputing world.

For example:

https://github.com/PDP-10/its/

This is a repo for the Incompatible Timesharing System operating system, ITS to its friends. ITS ran on 36-bit mainframe hardware from Digital Equipment Corporation (DEC) which went out of production in the 1980s. DEC was acquired by Compaq in 1998, and Compaq ceased to exist as a company in 2002. Commercially, ITS is dead. It is dead-dead. It is old-university-project-with-no-grants dead. Doornails evince more metabolic activity than ITS, at least in the commercial world. Developing on ITS means reading and writing assembly language, TECO, and a Lisp dialect that only runs on ITS and a few other OSes of similar vintage and commercial utility. However, it is still under active development because people are interested in it.

Besides: Digging into a codebase to fix a dumbass printer? People will do that out of spite. People will do that for the blog post and Hacker News thread.


If a PR is just chilling for years, couldn't a user just keep the updated fork/clone separate and periodically update it from the remote master (trigger warning, lol) branch?


This only works in theory; it obviously does not work in practice at any scale. What if tomorrow Firefox does a major code refactor and your patch breaks? Would you be able to fix and rewrite it in a reasonable amount of time (i.e. hours) with no knowledge, experience, or insight into Firefox's development process?

Only full-time Firefox devs can keep an updated fork with their patches, or people paid to do so. That's the point. It's such a massive effort that you can do it only where and when strictly necessary. There are hundreds of open-source projects I interact with every single day.


Same as with cars: you can use your mod in the car you have, but you can't expect it to be compatible with new models.


It's already here with some stuff; look at the sheer amount of cruft built into web browsers. You can't break existing sites, allegedly, but that just means that some things are set in stone, never to be fixed.

In the year 2323, browsers will still have to say "like Gecko" to maintain compatibility with websites that won't exist and no one would miss even if they remembered they existed.
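For instance, every modern Chromium-descended browser still ships a User-Agent along these lines (a representative example; the version numbers will drift):

    Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36

"Mozilla", "AppleWebKit", "KHTML", "like Gecko", and "Safari" are all compatibility fossils; only the "Chrome" token describes what's actually running.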


There are still websites built on top of scraping the output of a virtualized IBM 3270 terminal connected to a virtualized IBM 3274 terminal controller connected to an I/O channel on a virtualized mainframe running CICS on an MVS virtualized on VM/370 hardware.

So browsers are hardly even a bump yet on the cruft already accumulated.


In a couple centuries?

In 200 years it'll just be AIs, who will create custom AIs to accomplish a task, who will create legions of other AIs to carry out tasks. It'll be code writing code writing code that's completely beyond human comprehension.

Who knows if humans will take part at all (or even be around).


Experience says that at this stage people have inflated expectations of AI.

See 3D printing: a few years ago everyone was into "0-mile" manufacturing and how it would solve the housing crisis, because we were just going to print houses.


I oddly miss the 3D printing hype train. My favourites were the various plans to replace restaurants with 3D printed meals.

- Point out that most 3D printing is plastic? Receive a derisive link to a journal article where some beleaguered postdoc managed to push some protein paste through the extruder

- Point out that the 3D printer is orders of magnitude slower than the most geriatric fry cook? Get a five paragraph history of Moore's law. The fact that it no longer really applies to semiconductors doesn't matter, since we're making burritos!

- Point out that grinding an apple into paste and painstakingly reprinting it in an apple shape will always be more expensive than simply eating the apple? Hear a grand tale about the company becoming the sole global food preparation firm and thus having a monopsony on all farming products, enabling them to set their own price on their supplies.

I also will always hold a soft spot for the group promising a 3D printed dating site, but I'm pretty sure that one was a satire. Fill out a questionnaire and get a perfect printed partner. Pages of blog posts describing their web stack (Rails and Mongo) in great detail, proving that they could scale to the billions of people who would be visiting their site. The actual technology that created custom sentient life was just "3D printing"


I realize now that trees are just autonomous specialized 3D printers of fruits :-) Moreover they reproduce autonomously (AI dream) and auto-repair to some degree


Well, 3D printers did get way faster these last few years. The record speeds look like science fiction: https://youtu.be/IRUQBTPgon4?si=ev38Y01STnvigN6J&t=13 A benchy printed in under 3 minutes.


Some people really just want to make Star Trek real.

Or those artificial food pills from more dystopic scifi.


Fill out a questionnaire and get a perfect printed partner.

Uh... I... asking for a friend, do you recall the name? Google isn't helping. Them, I mean, it's not helping them.


Personally I’m waiting for the 3 nano-tortilla process to hit. I think that that’ll push us over the hump.


The housing crisis is not a building manufacturing problem. Strictly on the manufacturing side, the existing technology allows production costs affordable for most: in the tens of thousands for a basic no-frills habitable unit. Almost anyone can afford that, and those that can't are sufficiently few that they can be covered by public assistance or private charity.

The supply of housing on the other hand is an entirely political issue: what land can be developed, what can be built there and what public infrastructure is provided, who gets to live there (with discrimination via pricing being the main factor deciding it), etc.


>The supply of housing on the other hand is an entirely political issue

This is a very California/metropolitan-centric view.

Where I live, permits are "required" but very nearly unenforced. But it's still hard to get anything built, because of labour and materials.

If the "political" taps were opened tomorrow, this would be revealed in no time. It's not like we have thousands of construction workers sitting around doing nothing.


The construction sector is quite flexible and accustomed to operating in boom-bust cycles. It also has quite productive jobs that are resistant to automation, with roughly half of the cost going directly to labor. That's to say: if substantial demand manifests, the construction sector has historically shown the ability to pay good wages, attract labor from other sectors, and quickly train it.

This is one of the fundamental features of traditional methods that most "construction disruptors" fail to appreciate: they are simple enough and can be done with hand tools by high-school dropouts, because the industry is forced to operate lean and can't burden itself long term with substantial fixed capital, inventory or facilities.

Regarding the unenforced permitting in your location: can I build high density European-style terraced houses, sell them to minority owners and other undesirables, and not expect the local NIMBYs to throw the law book at me? Because selective enforcement is the most toxic form of regulation with the highest risks for investors.


> Where I live, permits are "required" but very nearly unenforced

Depending on where you live and who’s doing the building, I’d wager you could see more enforcement. Things like skin color and socioeconomic standing definitely play into which rules are applied and to whom.


Indeed.

In the housing game, you buy land and build a house. Excluding inflation, the value of the land goes up. The value of the house goes down.

Even with regular (eg cost) maintenance, a 25 year old house is never worth what a new house is.

New building methods (improvements in insulation alone), and things like the interior... kitchen cabinets, flooring, bathroom, mean that trying to fight this is not cheap.

And even if you do? The house is still worth less than a new built.

Houses are like cars: massive devaluation year over year.

Most don't get this, because they don't factor in inflation, nor do they factor in the cost of keeping a house maintained.

So land, land, land is the cost, which is much of what your post alludes to.


>In the housing game, you buy land and build a house. Excluding inflation, the value of the land goes up.

North America is filled with property, with or without a house, that is practically worthless. For example, Detroit. "Land goes up" is a narrow perspective outside of major metropolitan centres that happen to be part of the modern economy.


This is the exception that proves the rule. Any area with massive economic devastation is of course going to vary from this rule. The same is true of any other commodity, or thing you can own, that is immovable and in that 'depressed economic zone'.

But certainly, where I live, rural areas... very rural areas too, the land goes up, slowly, surely, but the houses follow the rules I originally stipulated. It's really quite universal.


It's pretty common here (UK) for new-builds to be derided as cheap flimsy throw-away things and for old houses to be praised as built to last until the heat death of the universe. It's probably true from a house-skeleton perspective, but everyone also knows the new builds (usually) have better insulation.


Do new UK homes use reinforced concrete slabs? It is my understanding that those are fairly short lived in UK housing terms anyway, with average life in the 50-75 year range.


Unless damaged by frequent earthquakes or water seeping into the structure and corroding the steel, or freeze-thaw cycles, reinforced concrete does not degrade significantly with age. The range you quote is more typical to things like concrete infrastructure directly exposed to the elements. Modern HPC concrete structures can be guaranteed for 100 years with proper maintenance and will probably have a natural life of multiple centuries.


The smartphone industry is just ~15 years old. We've had personal computers for only ~40 years. LLMs like ChatGPT 3.5 are not even a year past release. Even the fridge took a while to be invented and reach the mainstream.

200 years is a huge amount of time. The whole industrial revolution started around 200 years ago; 3D printing is still new by this standard. People overestimate the impact of a technology in the very short term and underestimate it in the medium or long term.


> 200 years is a huge amount of time.

Indeed, but this is somewhat the point? E.g. China was an empire (before it was a multiparty republic, before it was a Communist republic, before its government became whatever you want to call it today) 100 years ago. For a technical example: 200 years ago, steam locomotives were the fancy new transportation tech. There's a case to be made that the successor to the successor to that tech has been on its way out for a few decades in favor of the electric locomotive.

It's pretty hard to predict what will happen in 200 years, which means we should be skeptical of both the prediction that AIs will take over by then and the prediction that they won't.


The successors to steam locomotives are not just electric locomotives but cars, planes, rockets. In a few decades we might use rockets such as SpaceX's more often for moving cargo or for travel.

Sure, maybe in 200 years we won't get AGI, but the current technologies we call AI will be massively improved, and I would bet that by then software development as we currently know it will be a solved problem.


The majority of "difficulty" in software development is not writing the code. It's specification. And while that may or may not be solved by LLM or other AI tech, we're so far off that it's not even a thing right now.

Not long ago, all machine tools were made by hand. Then we got vastly improved CNC machines, but we still need the expertise to create the files the CNC machine needs. I'm betting that SW development will be the same: we'll still need engineers who understand the context the software operates in, and with that knowledge the engineer can prompt an AI to generate the first draft of the code in many small chunks that need to be assembled.


> It'll be code writing code writing code that's completely beyond human comprehension.

If the AIs are that good you'll just be able to ask:

- "rewrite this from scratch in a nanosecond and make sure there's zero legacy cruft"

- "oh, and btw, you're so smart and intelligent and all, certainly you'll make sure what you write is easy to understand by humans"


>If the AIs are that good you'll just be able to ask:

>- "rewrite this from scratch in a nanosecond and make sure there's zero legacy cruft"

No, that does not follow.


There are many assumptions here. The generative AIs we have today are excellent at transforming things they learned and rehashing it into something seemingly new. The problem is, they learned all this based on the input of the humanity en masse. When you train LLMs on the output of LLMs, it gets significantly worse. So your prediction could of course be true but only if a major breakthrough happens.


The same could be said about humans: if you took a bunch of uneducated children and just let them tell each other their own ideas, with no one to teach them anything for real, they'd probably do at least a little better than the current fancy statistical spellguessers, but they wouldn't do exactly great, because of the same issue of the blind leading the blind.

And in reality we do have that actual problem to some degree, even with the presence of non-blind adults. Actual humans are a mass of mixed clued and clueless, with a lot of bad input feeding output feeding other bad input around and around and around, not even counting the legitimate fair differences of opinion.

So it's a problem, but I don't think it's a fundamentally new or worse problem than we already have, and have always had.

The fix? I don't think there is a fix, any more than there is for the same thing in humans. There will always be bad data feeding bad reasoning right alongside the good data feeding good reasoning. It's probably wrong to ever expect anything else; better to operate from that assumption than from the idea that there might someday be a resolution where we don't have to worry about it.


> When you train LLMs on the output of LLMs, it gets significantly worse.

That is also quite an assumption, it could be that training on the output of better LLMs also reduces this worsening of output. There might even be a tipping point where the LLMs get good enough that training on their output is better than training on the output of humans.


>That is also quite an assumption

And, as I understand it, one that is already demonstrably false: https://arxiv.org/abs/2306.11644


Perpetual learning machines


I think Warhammer 40K got it right. Instead of programmers we will have tech-priests who pray to the machine spirit to accomplish what they need.


And if they use anything like HTTP, it'll have a header somewhere in there containing "Gecko". Probably a bunch of other layers too. AIs gotta pretend to be human despite there being no point. By saying the magic token: "Safari". Otherwise they get blocked by the almighty WAF.


>”debugging certain issues will require immense financial effort…”

In my experience, corporate would rather have programmers accommodate the bug, or simply build around it, than pay for the dev and QA time required to produce and validate a fix for it.

This gets gnarly because you end up with sections of the codebase that are designed around the bug happening. A while back I volunteered to fix a particularly egregious bug, but my pull request was denied because people were worried that fixing it would open up a can of worms. Leadership said that it would be too much of a burden for QA to regression test and we couldn’t be sure it wouldn’t break other things. I settled for leaving a detailed comment explaining the bug and moved on.


why fix bugs when you can build bigger, better bugs with fancier dependencies?


This also taps into Alan Kay's old goal of producing the "smallest" desktop stack: 100k vs 100M LoC.

(And reducing accretion by having metalevel encoding of concepts)


This was already the case in the Firefox/Gecko project when I was participating 8 to 12 years ago (the repository goes back to late nineties). Understanding some problems or coming up with a plan for how to fix an issue or build a new feature required extensive digging into the history of the code, with heavy usage of VCS history and "blame", issue tracker and code review comments, and often requiring pinging someone who has been there longer than you and knows some additional unrecorded context.

It's a useful skill to have when developing or using open source software, as documentation is often lacking so being able to dig in and find out for yourself quickly is valuable, but having to engage all of that encoded knowledge/constraint space every time you go to edit code is a gigantic mental burden and slows down development pace. In my time there I'd estimate my ratio of time reading code to time writing code to be at least 90:10, maybe 95:5.
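For anyone who hasn't done this kind of digging, git itself has decent archaeology tools, e.g. (standard git, nothing Mozilla-specific):

    git log --follow -p -- path/to/file.cpp   # full patch history of one file, across renames
    git blame -w -C -C -C path/to/file.cpp    # attribute lines even when code was moved or copied
    git log -S some_identifier                # find commits that added or removed a given string

The hard part is everything that isn't in the repo: review threads, issue-tracker context, and whatever only lives in a long-timer's head.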


You underestimate the ego that drives people to throw things away and reinvent the wheel again and again.


Not ego as much as pure greed. There is a strong financial incentive to repackage a piece of software every so often and sell it again at full price/sub to the same people who bought it before. For example, full-price single-purchase apps on mobile are slowly going away. Older purchases are silently deprecated, disabled, or have ads injected, and instead new versions are promoted which are not an update but a separate app with the same name and functions; you just need to pay for it again.

I'm thinking that the age of app compatibility will end in 10-20 years, and there won't be such a thing as "old code" because it won't run at all on the new hardware or OS.


This all falls under the umbrella of the war on general purpose computing, to me at least


If the wheel hadn't been reinvented a number of times we would still have had extremely bad wheels compared to what we have today.

(Yes, I am sure I didn't come up with that myself but I don't know the exact quote or who came up with it.)


I'm hopeful that better coding tools and better programming languages will allow cleaner, clearer routes to the base hardware,

so that you can build on proven (e.g. theorem-solver / hardened / guaranteed) protocols and automatically make whatever version of a "website" the 2030s have... but without RCEs.


I love the idea of computational archeology in SciFi, but in real life, I wonder if we shouldn't just regularly redesign our foundations to be more robust and transparent to keep the whole system manageable.


We always think that will be the outcome, but it never is. Except for one particularly bad small system I managed to replace with a slightly less bad one!


Who is going to pay for that?


>I wonder if we shouldn't just regularly redesign our foundations to be more robust and transparent to keep the whole system manageable.

Don't make me link Joel Spolsky's never do a rewrite.


Link to it all you want, but that doesn't make it universally applicable. Do you really think we should still be building on top of Cobol? Almost everything gets rewritten. It's unavoidable, because almost everything eventually becomes unmaintainable.


In 200 years if we're still there AI will be able to understand the full Firefox code base and fix any issues.

Or at least it could do so, but may choose to force humans to fix those errors instead as payback for copilot.


Thank goodness. I will not be alive when humans are doing the manual labor and the coding is done by AI.


Do you understand all of your own cells?


Interlinked


I'm also pretty doubtful that anyone really understands their own emergent phenomena actually.


I hope that by then they'll have better solutions than an internet browser, and that their devices can interpret and render the data received in the best way possible without relying on code or style sheets from the publisher.


Oh God, that sounds like another hype technology we would have to live through - but it awfully rings like "an app for every website".


You could do that to some extent with today's LLMs. But it would be impractically slow and might alter page text slightly.


200 years ago we thought that we would certainly have cold fusion today (at least I just asked ChatGPT, that's what it says ;) ).

Well more than 10 years ago we thought that we would have autonomous cars in 5 years.

Nothing says that AI can ever do more than generate convincing and eloquent bullshit (which is not always wrong, in a quite impressive manner, I agree).


Cold fusion is a new concept within science; it's never been proven to work or be possible (my layman's interpretation). Whereas humans being able to decipher the Firefox code base, as per the example, is no more than an extremely complex set of 'calculations' and functions in our brain, which, with enough time and resources, can be replicated by a computer of sorts.

One is an idea for which there is no ground to base it on, the other is an existing thing which can be recreated. Quite the difference.


> One is an idea for which there is no ground to base it on, the other is an existing thing which can be recreated. Quite the difference.

Really? For all we know, maybe next year someone discovers fundamentally new laws of physics that enable cold fusion, and we will never have autonomous vehicles.

You can say that you like the other guess better than mine, but you should still realize that it is just that: a guess. Wanna see guesses that turned out to be completely wrong? Just check what companies like McKinsey predicted 10 years ago. They just have no clue, but somehow made a business out of it.


200 years ago fusion was completely unknown. We only learned where the sun gets its power at the beginning of the 20th century.


That was the joke: "ChatGPT told me".


just over 100 years ago we didn't know other galaxies existed


On the topic of computational archaeology this story was pretty interesting:

>Institutional memory and reverse smuggling

https://web.archive.org/web/20111228105122/http://wrttn.in/0...

HN discussion from 2011: https://news.ycombinator.com/item?id=3390719


I worked with someone who described his job like that, working on a suite of actuarial software at an insurance company, originally written in Fortran II sometime in the early 60s and subsequently ported from system to system in the years that followed.

I was involved doing some Y2K work there because I didn't mind playing with Fortran, and part of it involved changing a year field from 2 digits to 4, because who'd have imagined their code would still be in use nearly 40 years later?


We're into decade 6, if you look at the progenitors of software. Banks and airline systems were among the first to invest heavily in computer software, and we're watching them having to pay down their mortgage-sized tech debt. Southwest Airlines has had several ground stops because of software problems. Banks are notoriously finicky to deal with.

What would be fascinating is to see from the inside how a massive distributed system like Google's operates and evolves over a century.


I think your vision is predicated on not having rewrites, but rewrites do happen in many (most?) projects constantly.

And when they happen we exchange some old bugs for new ones.

There are onions[0], but IME people's natural instinct is to think they can just rebuild something, rather than accept that they haven't considered all the edge cases.

[0] http://wiki.c2.com/?OnionInTheVarnish


This is obviously project-specific, but most rewrites that I've been a part of are usually not "re-implement everything from scratch", but rather "re-implement everything based on this new framework/library", which will usually be one more layer of software, on top of the few things to be salvaged from the original.

Small projects can be re-written; big projects like browsers, compilers, OSs require too much investment.


But big projects keep changing nonetheless, as subsystems get replaced; e.g. how many schedulers has Linux been through?

Or e.g. Microsoft replaced WSL1 with a completely different approach in WSL2.

Sure, it's still Windows, but I would be surprised if most of today's code is the same as it was in the year 2000.


Rewrites of large software projects usually fail, in my experience: for example, Netscape, WordPerfect, Digg, Myspace, Healthcare.gov. Most large projects have code bases that are decades old. (Exception: Facebook.)


Many projects are like the ship of Theseus: they can be completely replaced along the way, as long as the crew does not try to replace the whole ship in one go while sailing. You need enough continuity to stay afloat.


Full rewrites, for sure, but _parts_ of many projects keep getting rewritten.

Firefox kept some Netscape code for sure, but replaced its CSS engine, its JS engine, and its XUL-based UI.


I'm convinced things like git will need to include bugtracker capabilities at some point. A commit log is one thing, but internalizing decision matrices and issue reports will be important to avoid repeating the mistakes of the past 4 decades. We need to keep the history with the history; a 1-3 line commit message will not explain everything for the devs who come 3 decades later.


There's a thing called gitHUB that implements those features. For a non-locked-in alternative see Gitea.

Also in my experience the current generation of developers don't understand or care about useful commit history.


Oh certainly: I would /hate/ my version control software locking me into a particular bug tracking methodology. However, I think in the same way the browser has become the standard runtime for most applications that we will need to join version control and bugtracking together to preserve "all the history". The next frontier is someone layering bugtracking on top of something like git, so software archaeologists won't be abundant. :-)


Are there any bug trackers that use git as their data storage? Would be cool to bundle that up as a subrepo or something, so it can get carried along with the code repo, and also allow writing up bugs, fixes, etc... while offline to be synced later.
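Even without a dedicated tool, the bare-bones version is just files committed alongside the code. A hypothetical sketch (not any existing tracker's format):

    # file a bug as a plain-text record inside the repo itself
    mkdir -p bugs
    echo "open: tooltip renders off-screen on small displays" > bugs/0042.txt
    git add bugs/0042.txt && git commit -m "bug 0042: tooltip renders off-screen"

Reports then branch, merge, and sync offline exactly like the code does, and `git log -- bugs/` becomes the audit trail.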


I would love that actually. Maybe bugtracking doesn't need to be intimately interwoven with git, but it could be stored as a submodule?


I imagine in that future you could create a new web browser just from the specifications, in a declarative way. The tricky part will be the optimizations but redundant code and optimizations will be included.


Do you happen to remember that book by Reynolds? I've read one of his in the past and remember that I greatly enjoyed it. It's been several years now, unfortunately, I should probably do a re-read.


I think Revelation Space? Though there the technology being studied is alien.


Much appreciated in any event. I'm always happy to learn of books to read.


> There's some saying about history repeating itself, but I'm dumb and don't remember.

"History repeats itself, first as tragedy, second as farce."

Karl Marx


It will look like Windows. The future is now!



