Hacker News

And not only for Instruction Set Architectures. I feel this might be the case for software too.

1. Want a simpler application

2. Write it

3. Realize that adding weird, complex code can really boost performance

4. Repeat 3 until someone thinks your application is overcomplicated and makes a new one

I guess the moral here is that computers are complicated and trying to avoid complexity is hard or infeasible.




Except it's usually not performance, it's just features. And then it gets bloated. And someone thinks they don't need all that crap.

But turns out they did.


I feel like there are massive industries that exist just because of this


Everyone uses about 5% of the features of their word processor, but it's a different subset for everyone, so all features are needed by someone and most get roughly equal usage.


I do not think that is true. There are definitely more and less commonly used features. Everyone uses basic formatting, but only a small minority of users use index or bibliography features. The features might matter a lot to that small minority, but they are not anything like equally used.


There is a small minority of features that everyone uses. It falls off quickly after that.


Ehh, the issue is features tacked on without regard to existing ones. Lotta apps like that end up with multiple ways to do the exact same thing, but with very slightly different use cases.


Jira has entered the chat...


Or programming language design. However, I think during the process things are still distilled, so that new common patterns are incorporated. I am sure null-terminated strings were not a particularly bad idea in the 70s, given the constraints developers faced at the time. It's just that we later had different constraints and had gained more experience, and thus finally realized that it is an unsafe design.


I expect it's basically always been understood that null-terminated strings were unsafe (after all, strncpy has existed since the 70s [1]); it's more that the various costs of the alternatives (performance, complexity, and later interop) weren't seen as worth it until more recent times. And it's not like they didn't get tried: Pascal has always had length-prefixed strings as its default, and it's a contemporary of C.

[1]: https://softwareengineering.stackexchange.com/a/450802


We are coming full circle back to Pascal strings, just that now we don't mind using 32 or 64 bits for the length prefix. And in cases where we do mind, we are now willing to spend a couple of instructions on variable-length integers.

But in the bigger picture, the wheel of programming languages is a positive example of reinvention. We do get better at language design. Not just because the requirements become more relaxed due to better hardware and better compilers, but also because we have gained many decades of experience of which patterns and language features are desirable. And of course the evolution of IDEs plays a huge role: good indentation handling is crucial for languages like Python, and since LSP made intelligent auto-complete ubiquitous, strong typing is a lot more popular. And while old languages do try to incorporate new developments, many have designed themselves into corners. That's where new languages can gain the real advantage: by using the newest insights and possibilities from the get-go, depending on them in standard library design, and leaving out old patterns that fell out of favor.

No modern language in its right mind would still introduce functions called strstr, atoi or swprintf. Autocomplete and large screens make such cryptic function names an antipattern. But C can't easily get rid of them.


I think saying "software" is much too broad, and you have to narrow the comparison to a small subset of software development for it to make sense. With software, typically you're dealing with vague and changing requirements, and the hope is that if you build five simple applications, four will be basically adequate as written, needing only incremental feature enhancements, and only the fifth needs significant work to rise to the emerging complexity of the problem. (The ratio can be adjusted according to the domain.)

In this case they're creating a new solution to a problem where all previous solutions have ended up extremely complex, and the existing range of software currently running on x86 and ARM gives them a concrete set of examples of the types of software they need to make fast, so they're dealing with orders of magnitude more information about the requirements than almost any software project.

The closest software development equivalent I can think of would be building a new web browser. All existing web browsers are extremely complex, and you have millions of existing web pages to test against.


Yeah, definitely this. We think that the specs are pretty much baked after a year or two of shipping, so the focus is now on making things faster, which requires very complex algorithms, but don't worry, once we get it done, it won't have to change anymore! Right? ... then new use cases come in. New feature requests come in. We want to adapt the current code to the new cases while still covering the existing ones and also maintaining all the performance boost we gained along the way. But the code is such a mess that it is just not feasible to do so without starting from scratch.


Your understanding of programming is superficial to the point that it can't be fixed by explaining why :(


Explain why please


Exempli gratia, your last statement is approximately equivalent to "I guess computer science is infeasible."

But it is not; the fact that a Debian system running the programming languages used today exists at all proves that we have already managed a significant amount of complexity. Compare that to programming on an infinite-size ENIAC: you would certainly rather use Debian than be stuck with the latter, given that you could not magically inherit the existing systems and had to write your whole program from scratch, whether on tape or on the Debian machine.

There are many other layers of issues and problems with the comment, but understand that no matter how intellectually honest and kind you are, you cannot reply to every person who is wrong by telling them why. Often a plain "you are wrong" may be more valuable, for both sides. I do not want to argue about this, though.


...or you can cheat and write the weird code as a separate, child app.



