
I still think it's immature. It takes time to accept these kinds of things: the larger part of the world lies outside your control, and you only become properly aware of that as you get older; fighting against it is like fighting against the tide. Fixing the systemic inefficiencies can only be done incrementally, but this rant literally suggests flushing the whole thing down the toilet at some point, and that's just childish.

You can't pause the world while you rebuild everything; it would take far too long to get to something better than what you're trying to replace. You can only repair one or two things at a time, and hopefully leave the world better for it; but the mindset espoused in the rant is more likely to result in a half-baked start on something new, abandoned once the scope of the whole problem is fully perceived.




What his rant literally suggested was to "flush boost and glib and autoconf down the toilet and never think of them again" when "the accumulated complexity of our existing systems is greater than the complexity of creating a new one." It is hardly childish to imagine that such a scenario might occur, and you have not argued against his thesis as he stated it.


I directly disagree that a complex system gets replaced by building a new complex system from scratch. I don't think that will happen, because I don't think the world works that way. What happens is that something slightly simpler is created to solve a simpler problem and gradually accretes more and more functionality until it displaces the old thing, in a kind of innovator's-dilemma process; or alternatively (and IMO more likely), one or two pieces of the complex whole are individually replaced by (perhaps) one thing which is simpler. But there's never a moment of high drama where we suddenly realize what a pile of crap we have and switch forthwith.

Just about everybody knows that all our software is imperfect crap on top of imperfect crap, from top to bottom. Everybody, when met with a new codebase above a certain size, thinks they could do better if they started over and did it "properly this time". Everybody can look at something as simple as a submit form in a web browser and sigh at the inefficiencies in the whole stack that gets what they type at the keyboard onto the wire in TCP segments: the massive amount of work, the edifices of enormous complexity in the tooling and build systems and source control and global coordination of teams, the whole lot of it, soup to nuts, just to assemble a working system that does the most trivial of work.

But this is not a new or interesting realization by any means. It's not hard to point almost anywhere in the system and think up better ways of doing it. Pointing it out without some prescription for fixing it is idle; and suggesting that it will be fixed by wholesale replacement by another complex system is, IMO, fantasy.


"While some see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do." (http://en.wikipedia.org/wiki/Think_Different)


I don't know, there may come a time. If we tried to replace the current system wholesale, then yes, that would be impossible, or at least a waste of time. But what if an architecture came about built on AI? Maybe quantum computing? DNA-based computers? Eventually there will be new hardware platforms that force the very change you are dismissing, even if it's 25-50 years from now, which is a blink of an eye in the grand scheme of things.


I'm a bit confused. The architecture is just abstracted away. Why does a programmer care if it's optical or DNA or whatever? See, for a less extreme example, the tools people use to develop for a single x86 machine vs CUDA vs massive clusters vs huge FPGA clusters vs ARM and so on. I honestly think that a revolutionary architecture will just lead to some new libraries and dev tools which get kludged onto existing dev systems. But I welcome more information, because I know I could be horribly wrong.
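
To make that concrete, here's a minimal sketch (all names hypothetical, not any real library) of how new hardware usually surfaces in practice: as one more backend behind an interface the programmer already uses, rather than as a reason to rebuild the stack.

    # Hypothetical backend abstraction: application code calls the same API
    # regardless of what actually executes the work underneath.

    class Backend:
        def matmul(self, a, b):
            raise NotImplementedError

    class CPUBackend(Backend):
        def matmul(self, a, b):
            # plain nested-loop multiply on the host CPU
            n, k, m = len(a), len(b), len(b[0])
            return [[sum(a[i][x] * b[x][j] for x in range(k)) for j in range(m)]
                    for i in range(n)]

    class ExoticBackend(Backend):
        """Stand-in for CUDA, an FPGA cluster, or some future DNA/quantum device."""
        def matmul(self, a, b):
            # in reality this would hand the work to a driver or accelerator
            # library; the caller never sees the difference
            return CPUBackend().matmul(a, b)

    def matmul(a, b, backend=CPUBackend()):
        return backend.matmul(a, b)

    # Application code stays the same whichever backend is plugged in:
    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]], backend=ExoticBackend()))

The point is only that the swap happens below an interface that already exists, which is why the dev tools above it tend to survive.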


I think you're on the right track: software tries to layer itself as much as possible. New architectures and capabilities will only change the concepts and tools exposed by the glue between the layers.

Unless the new thing offers something that can't be implemented with a Turing-complete system, it will be abstracted away at first, just so we have an environment to start building with and can start using it without reinventing every single wheel we have.


What does Turing-completeness have to do with it?


Turing-completeness means that the path of least resistance is to create a compatibility layer: if the new machine can compute everything the old one can, the cheapest move is to emulate the old one on it.

Without radically changing the paradigms at such a fundamental level, a start-over just wouldn't happen.
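
As a toy illustration of that path of least resistance (the instruction set here is invented for the example): if the new platform can host a general-purpose interpreter, existing programs keep running under emulation, so nobody is forced to start over.

    # A tiny made-up stack language interpreted on whatever host we have.
    # The "legacy" program survives unchanged; only the interpreter is new.

    def run_legacy(program, x):
        stack = [x]
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
        return stack.pop()

    # "Legacy" program computing 3*x + 1, now running on the new host.
    legacy = [("push", 3), ("mul", None), ("push", 1), ("add", None)]
    print(run_legacy(legacy, 7))  # -> 22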



