The Great Software Stagnation (alarmingdevelopment.org)
276 points by azhenley on Jan 1, 2021 | 299 comments



One of the big mistakes I see in the list is choosing to ignore the developments in software that are old. For example, the author cites Python as pre-1996, yet the adoption of Python as a mainstream language largely postdates 1996.

Taking a very narrow view of software (i.e., looking only at compiler-related technologies), I can easily list several advancements that the author neglects:

* Profile-guided and link-time optimization aren't really feasible until circa 2010.

* Proper multithreaded memory models in programming languages don't come into existence until Java 5, from whence the C/C++11 memory model derives.

* Compiler error messages have improved a lot since the release of Clang around 2011.

* Generators, lambdas, and async/await have been integrated into many major programming languages (see the small sketch after this list).

* Move-semantics, as embodied by C++11 and Rust.

* OpenMP came out in 1997.

* Automatically-tuned libraries, e.g., ATLAS and FFTW are also late 90s in origin.

* Superoptimization for peephole optimizers (Aiken's paper is 2006; souper is 2013).

* Heterogeneous programming, e.g., CUDA.

* Undefined behavior checkers, such as IOC, ASAN. Even Valgrind only dates to 2000!
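To make the generators/lambdas/async-await point concrete, here's a minimal sketch in Python - just an illustration of the language features themselves, not tied to any particular compiler work:

    import asyncio

    async def fetch(name: str, delay: float) -> str:
        # await suspends this coroutine without blocking the thread,
        # letting the event loop run other tasks in the meantime.
        await asyncio.sleep(delay)
        return f"{name} done"

    async def main() -> None:
        # Lambdas, generator expressions and async/await compose naturally.
        tasks = (fetch(name, delay) for name, delay in [("a", 0.2), ("b", 0.1)])
        results = await asyncio.gather(*tasks)
        print(sorted(results, key=lambda s: s[0]))

    asyncio.run(main())

None of this was expressible so directly in the mainstream languages of 1996.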


It's a good bit of rhetorical slight of hand. Anything that existed in any form prior to the date isn't new (even if it was only in an experimental, near-unusable form). Anything that exists today but hasn't yet had much impact on the world is a toy.

The only things that remain are those few which appeared seemingly out of nowhere and rose to prominence over a short period of time.


Apologies in advance for the nitpick: it’s sleight of hand.


I always appreciate people correcting others - I love it when it's done to me. It's the best way to learn, and it's mostly driven by good intentions.

Thanks for doing it.


Yeah. Sure, there were service-based architectures (SOA), some automated testing (so CI/CD I guess), containers have been around forever (BSD jails and then Solaris Zones), configuration management, etc., etc. But all of those things are vastly different than they were 25 years ago. [EDIT: Not 15. Can't do math this morning. Though even 15 is pre-iPhone/most mobile.]

This is an "Oh, the cloud is just timesharing" take.


Generally in agreement with most of this (and others' takes that are similar).

Except that Linux containers are a huge regression on what BSD jails gave us 20 years ago. They’re catching up for sure, but it’s still just badly reinventing a wheel that already exists, for philosophical, political, or selfish reasons.


> Profile-guided and link-time optimization aren't really feasible until circa 2010.

Had that in the 90s:

https://www.digitalmars.com/ctg/trace.html

The idea dates back even earlier, to a product called the Segmentor.

The Superoptimizer comes from a paper in the 1980s, and what it discovered was promptly integrated into several compilers.


Masalin! I hope she's doing well.


Yes, the real problem is the perspective and not the actual development. New languages are of no value by themselves. It is like counting the tools in your toolbox instead of the things you built with them.

So looking at the number of projects that are hosted on the internet (e.g. GitHub, SourceForge, etc.), we are likely to see a different story.


Yeah; but that's all so incremental. Better compiler error messages? Really? That makes the top 10 list from 25 years of work?

Async/await matters. But compared to inventions like threading, filesystems, java's write-once run anywhere, HTTP/HTML and the invention of the URL? I agree with the author. We've lost our mojo.

Computing is pretty unique in that we have near-complete freedom to define what a program even is. A sorting method implemented in Haskell is a different sort of expression than the same idea implemented in C. The abstractions we all take for granted - processes, applications, RAM & disks, databases, etc - all this stuff has been invented. None of these ideas are inherent to computing. And yet apparently we only know how to make three kinds of software today: web apps, Electron apps and mobile apps.

Here are some harebrained ideas that are only a good implementation away from working:

- HTML/React inspired UI library that works on all platforms, so we can do electron without wasting 99% of my CPU cycles.

- Opaque binary files replaced with shared objects (eg smalltalk, Twizzler[1]). Or files replaced with CRDT objects, which could be extended to support collaboratively editable global state.

- Probabilistic computing, merging ML and classical programming languages. (Eg if() statements in an otherwise normal program which embed ML models - see the sketch after this list.)

- SQL + realtime computed views (eg materialize). Or caching, but using an event stream for proactively updating cached contents. DB indexes as separate processes outside of a database, using an event stream to keep up to date. (Potentially with MVCC to support transactional reads from the index & DB.)

- Desktop apps that can be run without needing to be installed. (Like websites, but with native code.) And that can run without needing to trust the developer (using sandboxing, like web apps and phone apps).

- Git but for data. Git but realtime. Git, except with a good UI. Git for non-developers.

- Separate out the model and view layers in software. Then have the OS / network handle the models. (Eg SOLID.)

- An OS that can transparently move apps between my devices. (Like erlang but for desktop / phone / web applications)

- Docker's features on top of statically linked executables.
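To illustrate the probabilistic-computing item above, here's a minimal sketch in Python, using a hypothetical scikit-learn classifier as a stand-in for the embedded ML model (the names and data are made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical "embedded" model: a tiny classifier trained offline on
    # labelled examples of the condition the branch is supposed to test.
    model = LogisticRegression().fit(
        np.array([[0.1], [0.2], [0.8], [0.9]]),  # feature: spam score
        np.array([0, 0, 1, 1]),                  # label: is it spam?
    )

    def looks_spammy(score: float) -> bool:
        # The if() condition is a learned predicate, not a hand-written rule.
        return bool(model.predict(np.array([[score]]))[0])

    def handle_message(score: float) -> str:
        if looks_spammy(score):   # ML model embedded in ordinary control flow
            return "quarantine"
        return "deliver"

    print(handle_message(0.15), handle_message(0.85))

A real probabilistic language would go much further (propagating uncertainty instead of thresholding it away), but even this crude form already blurs the line between program logic and learned behaviour.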

The entire possibility space of what computing is is open to us. Why settle for POSIX and javascript+HTML? Our platforms aren't even very good.

[1] https://www.usenix.org/conference/atc20/presentation/bittman


> Yeah; but that's all so incremental. Better compiler error messages? Really? That makes the top 10 list from 25 years of work?

Aren't good error messages very hard to implement and also a _major_ productivity increase? They're also a boon to beginners.

Both are very important as the most powerful resource we have is people.


Sure; important and took a lot of work, but they’re still incremental improvements over what came before. I could name dozens of equally important improvements - typescript, css grids, llvm, http2, the M1 processor, WSL, C++ smart ptrs, io_uring, zfs, SeL4 and so on.

This is all important work, but none of it makes you think of computing in a new way. Not like the web did, or Haskell, or the idea of a compiler & high level language, a preemptive kernel, or TCP/IP. These were all invented by people, and a lot of the people who invented them are still around.

There are plenty of sibling ideas in the idea space. But for some reason computing cooled down and tectonic shifts of that scale don’t seem to happen anywhere near as frequently now. How long has it been since an interesting new OS appeared on the scene? Feels like a very long time. And even Haiku uses POSIX internally anyway.


Side comment: most new developers tend to ignore/not read compile-time or runtime error messages and ask for help before even realizing the cause of the problem they're trying to fix is right there in front of them. Better error messages for 'beginners' have questionable benefits for developers just getting started, until they get to the point where they realize error messages are actually useful.


There are two kinds of new developers:

a) new developers in general (totally new to programming)

b) developers new to this specific programming language

I'd be curious what the percentages are, but my personal hunch is that for any specific language there are more in category b) than a).

And for category b), those are definitely helped by error messages.


> HTML/React inspired UI library that works on all platforms, so we can do electron without wasting 99% of my CPU cycles.

There are React Native forks for Windows, MacOS and Linux. I have no idea whether any of them is a "good implementation" though.

> SQL + realtime computed views (eg materialize)

ClickHouse (OLAP DB) has materialized views (but only for inserts). Also Oracle and (I guess!) Materialize DB should have it too.

> Desktop apps that can be run without needing to be installed. (Like websites, but with native code.)

AppImage (and maybe Snap and Flatpak) is like this. Also technically, with Nix you can just run something like

    nix-shell -p chromium --command chromium
(without root), but it feels like cheating.

> Git but for data

https://github.com/dolthub/dolt (again, never tried it yet, but would like in future)


> java's write-once run anywhere

Java didn't invent that, it is only a new iteration on an old idea.

I remember that when they announced Java in 1994, I thought about the p-code system from UCSD I used in 1986 on a Z8000 Unix machine from Onyx (sp?).

More info at https://en.wikipedia.org/wiki/P-code_machine


Other commercial development tools popular in the same time period just before Java's release also used p-code, e.g. Powerbuilder.


> - HTML/React inspired UI library that works on all platforms, so we can do electron without wasting 99% of my CPU cycles.

Unfortunately this won't ever happen, at least not something production ready that will be usable on all 5 major platforms (Android, Windows, iOS, MacOS and I'm generously including Linux desktop here :-) ).

It's a very costly super long term investment and the incentives are not there as every platform (including desktop Linux!) is fighting against it.

React Native is the closest thing we have and the development experience is awful, from what friends that are using it are telling me, plus I don't think it's well supported on desktops. I, for one, don't think I've ever seen a Windows React Native app, for example.

So the best we'll ever get is modified browser engines, ergo Electron.


Flutter is React-inspired and it appears to be slowly getting towards cross-environment support.


I know about Flutter, but I would never build a product on top of Google tech unless I were confident I could easily migrate away from it.

Flutter is also the kind of project Google seems worst at: it will require a lot of tedious work going forward, to support all the platforms. Thankless work that doesn't foster promotions.

I'm not counting Go, I consider it a community project now.


>Git for non-developers

I was once amazed to learn that ClearCase was designed with the objective to be sold to lawyers, not developers.


The name makes so much more sense!


> Git, except with a good UI. Git for non-developers.

The reason we all suffer without this is not because we have failed to solve it. It's that Git itself is preventing any forward development in this space.

We had Mercurial 15 years ago, but Git is a massive usability dumpster fire that blocks all further progress (and GitHub continually pours gasoline on the fire so we can never escape).


The effect of high-volume randomized testing on compiler robustness is also very significant, I think. This was post-1996 (McKeeman in 1998).


CSmith [0] also made significant strides in that area.

[0] https://embed.cs.utah.edu/csmith/


Of course (and jsfunfuzz in 2007, and some others, and now a continuing stream of elaboration and improvement.)

I think this made it easier to firm up new language implementations. I feel like we started getting more of them because of this.


Tactile pocket computing - iPhone, etc - was a huge paradigm shift for users. The UX as a whole is completely different to the desktop experience.

In terms of software itself - it's not about typing code or about text UIs vs graphic UIs.

It's about programming being a fiddly and tedious pastime with explicit handling of every possible scenario, because the machine has no contextual background knowledge and isn't intelligent enough to interpolate or intuit the required details for itself.

So you get modern software which is basically just a warehouse full of virtual forms glued together with if/then/else - or simple pattern matching if you're really lucky - managed by relatively simple automation pipelines. Plus a UI/API glued on top.

And underneath that is TinkerLand, full of lovely text-langs which are different-but-increasingly-similar and command line environments which are like little puzzles which are not really hard - in the sense of Physics PhD hard - but are hard enough to give people who enjoy tinkering a sense of satisfaction when they (more or less) solve one.

You can do a lot with this - like ordering goods and services, selling advertising, collecting creepy amounts of data about users, and keeping people entertained.

But it's basically just automated virtual bureaucracy. It has no independent goal seeking skills, it doesn't improvise, invent, or adapt, and it's ridiculously brittle and not particularly secure.

Visual programming won't change this. Nor will ML (although it may contribute.) It needs a complete change of metaphor, and there doesn't seem to be much evidence of the research required to make that happen.


"automated virtual bureaucracy"

Oh boy you hit the nail so hard on the head that my spine is still tingling.

In the past 15 years of my career writing business applications, websites etc. I always had this nagging doubt that this is just me being a very active part in building moloch, adding to all the bureaucracy instead of reducing/simplifying it. Simply because wherever we built something to make things more efficient/easier, those gains are usually swallowed and overshadowed by the demand for more complexity that arises from latent potential that has been opened up.

"we've made taxes easy!"

"...could you also make tax evasion easy?"

"yes, but it would be more complicated"

"More complicated than before you were there?"

This is just a made-up example, but that's how it so often goes, and it really makes me feel like I should have become a hermit and written indie games while living on cup noodles instead.


Yes, the purpose of automation is scale; note that scale is what enables big salaries.


I think you managed to succinctly verbalize and give concrete form to what I've felt.

Especially the "faux-value-add" quality of many software systems, due to their puzzle-like nature, rings very true.

Perhaps software cannot be anything more. But there is lots that _can_ be done with virtual bureaucracy.

For example, 3D modelling tools do enhance the human capability to rapidly design complex forms. Text-processing tools beat the typewriter hands down at several tasks. Etc.


UI will always look simplistic because it has to communicate with users. But beneath that tiny surface is a huge depth of software, usually on the server. Think Google Now/Assistant, notifications, recommendation systems, and all the semi-autonomous server management stuff like Kubernetes and EC2. Also non-consumer stuff like protein folding.


Not a lot of support for his sentiments here it seems, but to understand where he's coming from you probably need to see the research he has been doing, largely aimed at entirely different modes of programming -- visualizing boolean logic trees and all kinds of other stuff, aimed at making programming more accessible and concrete to regular people.

You know in the movies where they have the futuristic user interfaces? Well we've been able to put graphics on the screen for decades, but "real" enterprise programming is still mostly pounding keys into text files and fussing over semicolons and curly braces. Visual interfaces are considered toys that "real" programmers don't need.

And yet there's a chasm of capability between programmers and users, and it's actually true that what programmers do isn't something most "normal people" will ever be capable of. But is that because programmers have some wildly special form of intelligence that most people don't have, or is it because our tools suck?

The divide between users and developers is as accepted and entrenched as it is archaic, not to mention a huge innovation and creativity bottleneck for the species, and by my estimation an embarrassing failure to innovate for our industry. Jonathan has been pushing back for decades. He's not grumpy or cynical, he's a relic from a time when the future of programming and "what programming even looks like" was still a wildly open question.

I wish more people thought like him.


This is like envisioning a world where everyone is building their own chairs and doing their own pottery. People who can do carpentry, or pottery, or programming are not in any way better or smarter than the average Joe. They have a different specialized set of skills than many other people, and they have the time to actually use those skills. Even if everyone knew carpentry and pottery and programming, they would not be using these skills in their day-to-day lives, because they have their own job to do. An accountant will not start programming their own accounting software, or even their own plugins - they will use some purpose-built software written by someone else, because it is a far more efficient use of their time.

It's also important to recognize that programming is a fundamentally tedious endeavor. Even with the most high-level imaginable PL, where there would be no machine details to think about, you need to exhaustively define every detail of your business goal, much further than you would ever think about it in regular work. I am a programmer and I personally enjoy this kind of deep dive, but I have been in many meetings where I was trying to extract those details from domain experts, and all the probing got increasingly frustrating for them. Even worse, you often hit points where you just can't get past some hand-waving, and now it's your job as a programmer to evaluate the trade-offs between different strict implementations of what is a fundamentally fuzzy requirement.

And getting out of the fantasy of the perfect PL, you also then have to work with a real computer with real limitations that someone needs to know how to work in while representing the problem domain (e.g. replacing real arithmetic with floating point arithmetic and dealing with the new problems that stem from that). You also have to spend large amounts of time researching what others have built and re-using it when and where it is possible, and working within the trade-offs those others have made - since there's no chance of writing every program from scratch, and there is no such thing as a general-purpose domain-specific library.


You're missing that the computer can help, so it's more like a world where everyone is designing their own chairs, and the computer ensures that the chair is constructable, won't collapse, will fit a range of humans and so on, and finally sends it off to the 3D printer.

The accountant example is a really good one: most accountants likely do write their own Excel macros to suit particular customers or one-off situations.


People build their own furniture all the time when they buy things they need to assemble from ikea, target, etc.

People wire up electrical components when they hook up their TVs, computers, surround sound, etc.

People do their own plumbing when they hook up garden hoses or put on a new p-trap.

Mounting a TV bracket into studs or even hanging a picture is some slight carpentry.

These things work because there are standard connections and good design, but in many cases it took a long time to get to the current designs that are simple enough to be intuitive and come with any tools needed.

Standards and design have made these things possible. Even building a computer is similar. Many people's automation needs aren't so complex that they need to venture out of reasonable constraints.


Very good points. The problem is that people are normally not using other people's shitty chairs to try to build a space station.


Exactly! This is what scares me about the modern day no-code movement.


You are using the wrong analogy. Computing isn't like building tools - it's about thinking better. Literacy is a better analogy - almost everyone is taught to read/write at an early age, and in fact not being able to do so is a significant hindrance to surviving in modern society.


Re. thinking better: I think we forget programming languages are just tools. (Almost) anyone can learn to program but it doesn't mean they're good at solving problems or are able to bring innovative/new/better solutions to solving current problems.


Reading and writing are just tools too.


I don't see anything wrong with it, just like everybody can do their own cooking.


> Well we've been able to put graphics on the screen for decades, but "real" enterprise programming is still mostly pounding keys into text files and fussing over semicolons and curly braces. Visual interfaces are considered toys that "real" programmers don't need.

You should at least consider the counter-hypothesis that C-style keyboard pounding is fundamentally more productive than visual interfaces.

This shouldn't be that surprising. Text is much more informationally dense than audiovisual multimedia. There's a reason why books are still the preferred medium for information transmission after thousands of years. Sci-fi style visual coding sure seems cool. But I highly doubt that it will ever be as productive as a skilled developer typing out variables, functions and classes.


Research has shown you focus more when writing by hand than when typing. Imagine if you could program by rough drafting how your function works on paper - with arrows and all sorts of messy things connecting the structure of your code, but leaving it entirely understandable to another human what your function was meant to do, without needing to understand a lick of code.

Take this example (1) from the R subreddit about how to do matrix math, and the arcane R foo required to actually do something that is pretty simply explained in the OP's image (2) to anyone with zero background in R or programming at all. Now, imagine how much more productive the world would be if the computer could take an instruction in a readable form like the OP's image, rather than the R jargon code actually needed by the computer to do the math described in the image. People would be learning to write their own functions right alongside learning how to do math on paper. Instead, people today pay six figures and spend four years to learn how to turn the math they learned by hand in high school into something that can be run on a computer, same as it was 30 years ago.

1. https://old.reddit.com/r/rprogramming/comments/kn2rgb/how_do...

2. https://i.redd.it/ngxi8665zb861.jpg


> Research has shown you focus more when writing by hand than when typing. Imagine if you could program by rough drafting how your function works on paper - with arrows and all sorts of messy things connecting the structure of your code, but leaving it entirely understandable to another human what your function was meant to do, without needing to understand a lick of code.

I've worked in a company where programs were originally written with Max/MSP which is exactly that. It was muuuch harder to decipher for other humans than normal C++ & Javascript code.

Everyone I know who uses patcher tools ends up switching to real programming after some years, and is much happier that way.


Such things have been tried. They usually fail because they underestimate the amount of ambiguity, the complexity management, and the performance problems.

SQL came pretty close - I mean, it was specifically designed for non-developers. Guess who's mostly using it nowadays.


I actually thought about buying a Wacom, because I make so many small scratch notes I throw away afterwards, only to later find myself really needing them again. Besides, having hand-drawn diagrams together with the code may serve as excellent comments.


Embedding diagrams in code seems like such a no-brainer. You can do it with a third-party extension in Visual Studio, but it’s still clunky and too hard to use.

This should be a basic thing that IDEs help you do.


Fair enough. I agree in the sense that typing isn't what takes up my programming time. 1000 lines of code at 100 wpm (which most coders could beat) is what, 10 minutes? I wish. I've spent hours on a single regex.

I think efficiency and accessibility are at odds with each other to some extent, and we've more neglected accessibility and flattening the learning curve, because we don't need it, we already know how to code. But custom software is basically a subscription service to the programmers that created it. I had a client that hired a guy who had invented his own programming language and built their whole system, then disappeared on not great terms. Boy were they in trouble.

But come to think of it, regex is a perfect example of something that could probably be interactively visualized and made more efficient. Let's ask google.

Yup: https://www.debuggex.com/

So I'm in vim or visual studio or whatever and I want to use this plugin and I... what? Google it in a browser and copy and paste regexes back and forth? This is so not 2021. The lack of a shared information model between tools/paradigms jumps out as a big shortcoming.


Text is also searchable. Can't stress enough how important that is.


And easily diff-able!


In the JetBrains IDEs, you can pop up a little inline interactive regex editor + tester widget, type in some candidate text, and edit your regex until the candidate text goes green.


> Text is much more informationally dense than audiovisual multimedia.

This has it backwards. Audiovisual multimedia have far higher raw bandwidth than text. I agree with your (implied) point, though, that text remains the preferred medium for technical use cases like programming for good reason: humans can't parse a firehose of audiovisual stimuli into precise mental constructs.

Text's typically linear structure and low information density are precisely why it has yet to be superseded, I think.


You are talking about bandwidth as if it was the same as information density.

I think the parent is talking about information as defined by Shannon. Essentially referring to entropy.

By that definition text is far more dense than AV.

https://en.m.wikipedia.org/wiki/Entropy_(information_theory)


I stand corrected; on this view text has more entropy due to limitations of our visual system and working memory (i.e., the encoding), no? We're pretty good at hashing short symbol strings to referents. If we could decode a grayscale image with the same precision, visual media would have higher entropy.


> I wish more people thought like him.

I don't, because instead of introspectively looking at why the world didn't turn out exactly like he wanted it, he blames others by dismissing their achievements as stagnation.

Those futuristic looking user interfaces never caught on because while they look good in movies they suck for actual productivity, even for non-technical users.

And when it comes to "non-programmer programming", just look at the success of tools like Zapier. I see tons of non-programmers using them to great success.


Why not, though? Programming itself is a layer of abstraction. Why did we invent this bespoke complicated and arbitrary syntax that can be explained succinctly with a comment in your native language, rather than just develop the computer to interpret commands in common native language syntax directly? I shouldn't have to remember that ls is list directory contents. I should be able to type all sorts of permutations of 'show me what stuff is here' in my language and not some bullshit syntax that someone came up with on a whim 50 years ago, if the computer was truly developed to be a tool for everyone.

I'm not confident it's going to change for the better and democratize what computers can truly do, simulate and model. People who've already paid the cost of memorizing all the ins and outs of complicated computer syntax don't really care how much of a hill they've had to climb anymore now that they've learned enough to be familiar. Now you have this memorization lock in, where you've spent years and perhaps thousands with university learning some dense programming language and start building your tooling around what you've learned, and in the process you force the next generation to have to learn that syntax if you are to hire them, and so on and so forth. That's why we've been typing ls since the 80s and we will still be typing ls in 50 years. I'm sure the authors of ls would find that horrifying.


This is all complaining about the insignificant problems of programming. Syntax is something you get over after you're out of school. A good Java programmer will also be a good Common Lisp or C or APL programmer within months of first seeing that language.

The complexity of programming lies in coming up with abstract models for real-life problems, and in encoding those models in sufficient details for a machine to execute them. It also lies in knowing the details of how the machine functions to be able to achieve performance or other goals (e.g. avoid parallelism pitfalls). Until we have AGI, none of these problems are even in principle solvable with better tools to the point that someone who hasn't invested in learning how to program for a few years will be able to do anything that isn't trivial. The best possible visual PL can't change this one iota.

What could be doable is creating more DSLs for laypeople to learn, rather than trying to do very complex things through a GUI. One of the few areas where there has been tremendous success in having non-programmers actually do some programming is in stuff like Excel. That is a very purpose-built system, with clear feedback on the steps of computation, very limited in what you can actually program, but comprehensive enough for its intended domain, and it is probably by far the most utilized programming environment, dwarfing something like vim by an order of magnitude.


>This is all complaining about the insignificant problems of programming

To me, syntax is a huge problem. Sure, you learn it in school. Computer Science school. Not everyone is a computer scientist. Some people are political scientists, environmental scientists, mathematicians, statisticians, etc., all of whom have their own difficult four years of schooling, and simply don't have the time to learn this complicated syntax without trading something else off. You don't think it's a big deal since you've already paid that price of time. Well, not everyone can afford to pay that time, and it's a damn shame that in order to make full use of all a computer can do, you need to spend this time learning this sometimes remarkably clunky syntax before you can even begin to work on your real-life problem. Imagine how much more technological and societal progress we would make if everyone could make computational models about their question at hand without having to spend years and thousands of dollars working with unrelated example data in undergrad, sometimes even using instructional languages that have no use outside of school. To me, that's the world the Jetsons live in.


> just develop the computer to interpret commands in common native language syntax directly?

We have that: e.g. Google Assistant. (If you don't mind needing an internet connection, and sending your voice to the cloud.)

> I shouldn't have to remember that ls is list directory contents.

Nobody's had to do that in 35+ years, except those of us who prefer that sort of thing. The rest click on some folder icon or something.


Google Assistant et al do not do that. I can't ask Siri to show the contents of my directory. I can't ask Siri to run anything on the command line at all. These voice assistants just do a number of predefined functions; they don't offer 1:1 control of your device the same as traditional inputs. Yeah, it's true that most people use a GUI, but the point still stands. You have to remember these distinct workflows to do something readily explained in person. An old person might have to take a computer class because they want to send an email, whereas an intuitively designed system would take that person's input, "email jake", and do all the behind-the-scenes stuff we do ourselves to compose emails for us. An ideal computer is a secretary that can do everything you can think of, since you are both versed in the native language. Siri is not a perfect secretary; Siri is like an elevator man, only able to press one of the couple dozen buttons on the wall for you.


True - but it seems like some middle ground might be possible - where the power of the shell is all available, but with the cognitive support that the GUI provides also present.


Even when telling an actual human being to do something there are misunderstandings and misinterpretations. The idea that it's feasible with our current technology to just design programming such that we can just write natural language and the computer will do what we envisioned is absurd. Natural language is very imprecise and squishy and not at all suited to programming. Attempts to make code look more like natural language such as AppleScript and COBOL suck.


> I shouldn't have to remember that ls is list directory contents

I mean, surely you use your own abbreviations when writing, no? How would you abbreviate "list directory contents"?

For instance I have `alias l='ls -lha'` - and if it was instead list-directory-contents I would have `alias l='list-directory-contents -lha'`

e.g. just look at how hard PowerShell sucks with those long verbose command names, it's unusable


Because human language is not specific enough. Code isn't complicated, it's unambiguous. You can always alias ls, but that doesn't change the model much, does it?


Isn't that essentially Microsoft and Apple’s business model and how they got to where they are today?


I wish more people thought like Zapier too. Their pipe flow designer thing is pretty star trek.

The world is chock full of shit software that people have to use eight hours a day and absolutely hate. I mean, seriously, hate to the point of shaking and tears at the idiotic hoops they have to jump through all day long, every day, for years. Genuine mental health impact. They come home miserable and snap at their kids. Maybe he's channeling their frustration. And maybe we deserve it.


That's the widespread failure to apply insights from Human Factors and HCI research, much of which has been around for decades.

A different problem than better programming paradigms.


Is it really? Programming is just really low level HCI.


Sounds like you're talking about corporate environments, where the people using the software aren't the same folk as those paying for it.


> And when it comes to "non-programmer programming", just look at the success of tools like Zapier. I see tons of non-programmers using them to great success.

Or Excel for that matter.


Go back to the page, look at the right-hand sidebar under "Links", it has:

    About me
    Email me
    Subtext
Click on "Subtext". Read up.


What are your thoughts on the overrepresentation of people in the spectrum in tech?

I think there's a very specific mindset (or "form of intelligence" per your words) that thrives working on a complex sandbox with very rigid rules and low tolerance for logical inconsistency, while most people simply have no tolerance for such inflexibility.

And I think there's no tooling that can change that, and if there ever is there goes programming as an industry because it will be able to interpret loosely defined requirements, ask the right questions and find a logical solution to it.


Yeah I think it takes a special mind to use today's dev environment, but the distance from where we are today, to an interrogative system creator (which is a pretty cool idea btw) is eons; there has to be something more in the middle.

I think it's always going to take someone with a systems-thinking mind to engineer a system that's any good, but lots of people have that, people who created their own business and thought through all the steps in their system and engineered the whole thing in their head. That's systems engineering, just not with code. But there's so much knowledge that's required to go from zero to custom software deployment that I'd characterize as non-essential complexity, today.

Yeah I think programmers tend to make tools for programmers, for themselves and the way they think, which is totally natural and to be expected. But man these poor users.

It's not so bad when they're using mainstream programs with millions of dollars in dev budgets behind them, but when you get further down the long-tail it just gets so awful and users are so helpless. The software an insurance agent uses, the charting app a therapist uses, the in-house warehouse management system people spend their entire day using, where if they could just make this field auto-focus or a million tiny other things, they would save so much time and frustration.

Generally speaking, users edit data and programmers edit schema, and never the two shall cross. Users execute code but only programmers change it. What hope do we have of democratizing programming when these roles are about as foundational to computers as it gets? What would we have to unlearn to change this?


I’m sorry, I still don’t buy it.

Entry level programming concepts are now presented to kids in literal toys.

“Real” enterprise programming has a heavy bit of tooling available now that wasn’t there in ‘96 with testing frameworks of all sizes, intellisense style code completion tools...

I predict the next major functionality that will come is pre-trained machine learning that can be dropped into code as simply as boolean statements are today.

Or wait that’s a dream that isn’t here yet so I should be cynical and lash out at my kids’ LEGO Boost set for its antiquated battery tech or something.


The idea of programming with pictures goes way back. I mean, to the 1950s. It's one of the classic mistakes, like fighting a land war in Asia or going in against a particular ethnic group when death is on the line.


I do not believe for a second it's a mistake to have visual representation or manipulation of software, it's just exceedingly difficult to make anything as versatile as text grammars and compilers. No one's done an adequate job because the ROI is terrible for building such things when you can't depend on massive adoption, which you can't.


I think the limitation that no-one has managed to solve is that visual programming paradigm approaches tend to focus on solving a specific type of problem (drag-and-drop GUI builders are a perfect example), and this means their usage is narrow and they've never been widely adopted. Building a generic visual programming tool that can be used to solve any type of business problem across any business domain is very difficult, if not (dare I say it) impossible.


I see it as just a very difficult problem. Flip the problem around and you can maybe see my point of view. IDEs for text programming are getting more intelligent, to the point where in certain languages, arranging function calls and assigning parameters could almost be done completely with autocomplete menus. Go a few steps further and you can connect these things without the frequent naming-of-things exercise that programmers undergo.

Part of the reason I am so bullish on visual programming is that it lets us skip the excessive naming-of-things that is part of text programming. I would rather copy-paste a pile of colors and shapes that describes an algorithm and connect it to the right inputs and outputs. I think nameless shapes would be more immediately recognizable than identifiers from all the various naming schemes people use in their code.


I think it's a proven mistake to use visual representations in place of textual representations for creating software. The information density is low, it's difficult to work with, browse, and search, and (a real killer) merging in version control systems is a nightmare.


Given the current day struggle to merge code with whitespace differences, merging differently arranged visual programming elements where the end result is the same but the positioning of the elements is just different sounds like a tough problem. Although linking of elements is probably key, rather than visual positioning. I imagine this is something IBM VisualAge had to deal with back in the late 90s. If I remember right local history was a key feature of VisualAge, a feature that carried across to Eclipse and is still there today.


Instead of complaining that the churning world of software is "stagnant" he could have just made a blog post talking about his ideas of what that futurist interface would be. That would have garnered a lot more of a positive attitude.


Step one is to realize you actually have a problem. Then you can start solving it.


>visualizing boolean logic trees and all kinds of other stuff, aimed at making programming more accessible and concrete to regular people.

Here's an art analogy. You make it easier to create art by making an algorithm that takes fuzzy beginner line art that is not following proportions and has all sorts of problems and turn it into professional line art via ML. People will struggle at the far more challenging skill of choosing what to draw because it innately requires knowledge of human anatomy, perspective and many other skills beyond knowing how to use a pencil. Programming languages are equivalent to a pencil, tablet or other any input method in this analogy.


> And yet there's a chasm of capability between programmers and users, and it's actually true that what programmers do isn't something most "normal people" will ever be capable of. But is that because programmers have some wildly special form of intelligence that most people don't have, or is it because our tools suck?

Every time I used a "No-Code" solution it got me 95% where I wanted to be. The remaining 5% took twice as long as the first 95% because now I had to "go behind the scenes" and figure out the custom API exposed by whatever tool I was using. And then figure out a workaround that would inevitably break with the next version of the tool.

And even with the No-Code, it's still being able to decompose a problem into smaller sub-problems, find the similarities and make sure they are well defined. I think there are a lot more folks with these abilities but you'll find them in math and engineering departments mostly.


Agree about no-code. But UI that generates code is a pretty cool paradigm. For most common patterns, end users don't need to know that there's a bunch of boilerplate code under the hood, but if that isn't sufficient, that remaining 5% can be reached by just jumping into some auto-generated code that's readable and well-commented. Mere mortals can do the 95% themselves and then call in a wizard for the hard parts, or try to figure it out on their own. Plus, any non-generated solutions can be highlighted, so it builds up a record of the places where the UI fell short.


Interface builders sort of work like that (creates the boilerplate for UI elements and formatting).

But again, NeXT interface builder predates 1996.


Lots of cynic-cynics in here :) I'll stand up for the author. The lists in the article are not great, but I still agree with the sentiment.

I recommend everyone watch Bret Victor's classic "The Future of Programming" https://www.youtube.com/watch?v=8pTEmbeENF4

Yes, we've had a trillion dollars invested in "How to run database servers at scale". And, we've had some incremental improvements to the C++ish language ecosystem. We've effectively replaced Perl with Python. That's nice. Deep Learning has been a major invention. Probably the most revolutionary I can think of in the past couple decades.

But, what do I do in Visual Studio 2019 that is fundamentally different than what I was doing in Borland Turbo Pascal back on my old 286? C++20 is more powerful. Intellisense is nice. Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days.

There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different" And, so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different". So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm. A lot of "GDB is terrible, don't even try", so many programmers are in a reverse-Stockholm-syndrome state where they have convinced themselves debuggers are unnecessary and debugging is just fundamentally slow and painful. So, we don't have in-process information flow visualization. And, so on.


> There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different" And, so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different". So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm.

We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages.


I've come to the opinion that "graphical" vs "non-graphical" is a red herring. I don't think it actually matters much when it comes to mainstream adoption. Is Excel graphical? I mean, partly, and partly not, but it's the closest we've gotten to a "programming language for the masses". Next up would probably be Visual Basic, which isn't graphical at all. Bash is arguably in the vicinity too, and again, not graphical.

Here's my theory (train of thought here); the key traits of a successful mainstream programming solution are:

1) A simple conceptual model. Syntax errors are a barrier but a small one, and one that fades with just a little bit of practice. You can also overlay a graphical environment on a text-based language fairly easily. The real barrier, IMO, is concepts. Even today's "more accessible" languages require you to learn not only variables and statements and procedures, but functions with arguments and return values, call stacks, objects and classes and arrays (oh my!). And that's just to get in the door. To be productive you then have to learn APIs, and tooling, and frameworks, and patterns, etc. Excel has variables and functions (sort of), but that's all you really need to get going. Bash builds on the basic concepts of files, and text piping from one thing to another.

2) Ease of interfacing with things people care about: making GUIs, making HTTP requests, working with files, etc. Regular people's programs aren't about domain modeling or doing complex computations. They're mostly about hooking up IO between different things, sending commands to a person's digital life. Bash and Visual Basic were fantastic at this. It's trickier today because most of what people care about exists in the form of services that have (or lack) web APIs, but it's not insurmountable.

I think iOS Shortcuts is actually an extremely compelling low-key exploration of this space. They're off to a slow start for a number of reasons, but I think they're on exactly the right track.


You're missing that the programming environment needs to be scalable to 50 or even 500 collaborators. Arguably bash and excel struggle at scaling non trivial problems to 15 collaborators. A surprising number of programming environments do even worse, notably visual or pseudo-visual ones, but even some textual ones.


I don't think it needs to, since none of the above do, but that would definitely help at least in the enterprise


“Superior” is meaningless out of context. There are domains where visual programming prevails, like shader design in computer graphics. Visual programming is a spectrum: on one end you’re trading the raw power of textual languages for a visual abstraction, and on the other end you just have GUI apps. UI design and prototyping programs like Sketch are arguably visual programming environments, and you’d have a hard time convincing me that working in text would be more efficient.


>We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages.

A lot of the time I spent doing the Advent of Code last month was wishing I could just highlight a chunk of text and tell the computer "this is an X"... instead of typing out all the syntactic sugar.

Now, there is nothing that this approach could do that you can't do typing things out... except for the impedance mismatch between all that text, and my brain wanting to get something done. If you look at it in terms of expressiveness, there is nothing to gain... but if you consider speed, ease of use, and avoiding errors, there might be an order of magnitude improvement in productivity.

Yet... in the end, it could always translate the visual markup back to syntactic sugar.


Low code environments as shipped today are actually quite impressive, and I'm saying that as a very long term skeptic about that field. This time around they're here to stay.


I agree.

I also agree with the rough time delineation. Starting with the dotcom bubble, the industry was flooded with people. So we should have seen amazing progress in every direction.

Most of those programmers were non-geeks interested in making an easy buck, rather than geeks who were into computers and happily shocked that we could make a living at it. And many of the desirable jobs turned out to be getting people to click on things to drive revenue.

Who can blame any of those people? They were just chasing the incentives presented to them.


Check out the Unison programming language (https://www.unisonweb.org/). The codebase exists as a database instead of raw text. It has the clever ideas of having code be content, and be immutable. From these 2 properties, most aspects of programming, version control, and deployment can be re-thought. I've been following its development for a few years, I can't wait for it to blossom more!


What problem does this solve?


> Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days.

Or use a debugger at all. Or write their code in a way that's easy to debug.

> "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different"

Just try sneaking a Unicode character into a non-trivial project somewhere.

> So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm

C did something right. It's still readable and simple enough it doesn't take too long to learn (memory management is the hardest thing about it).


I feel like this sentence holds the key to the author’s misunderstanding of things:

> This is as good as it gets: a 50 year old OS, 30 year old text editors, and 25 year old languages. Bullshit. No technology has ever been permanent.

Many successful software technologies are Ships of Theseus, where the collection of things with version N bears the same name as the version 1 product but nearly everything works differently or has been rewritten. Consider Modern C++ vs the original language.

Also the author just kind of ignores the whole Cloud Computing industry and virtualization.


I agree with your conclusion, disagree with your argument. Especially the last one.

Cloud computing was the first sort of computing widely available, from the 1950s: central systems which were accessed with terminals. A huge business up until the 1980s, at least.

Virtualisation dates from the 1980s, at the latest. I had a professor in the early 1990s who had worked on it for Amdahl.


I still then feel like the author's argument would basically be like "aviation has stagnated in the past 100 years!" because planes existed in 1920 and we're still flying in planes in 2020.


It's not that wrong though, you could certainly argue that not much has changed since roughly the end of WW2 and "mainstream" oceanic crossings. The de Havilland Comet was introduced in 1949 and is very similar to modern jetliners.

The only thing that would shock a traveler from 70 years ago is the awful service and in flight entertainment system.


And for someone who flew Concorde it probably looks like we did a step backwards.


What that shows is how misleading the wrong metrics can be.


> The only thing that would shock a traveler from 70 years ago is the awful service and in flight entertainment system.

...and the cheapness


The remote access part is similar, but in some ways 1950s systems were as far from cloud computing as possible.

1950s - computers are so expensive that I can't even get one to myself and have to share it with a bunch of other people

now - I'm using so many computers that I can't even handle managing them all myself and instead pay Amazon to do it for me


Cloud computing is only superficially comparable to mainframe computing. They’re both centralised, that’s pretty much it.


Shoot, virtualization was a thing on System/370 in the early 70s!


The author does a really poor job of providing evidence for why 1996 happens to be the date for "the end of progress" in software. Why not 1995, 1994, 1993, or the years before that? This sounds more like someone who went into the field in the 80's and now they are grumpy that the world cares about a completely different set of concerns than what was important back then.


bingo


This is just the normal course of technology. The jet engine hasn't fundamentally changed since the 50s or so, but we're still making incremental improvements to it (single crystal casting, various high temperature alloys, probably ceramic turbine blades in the future), and those changes add up to substantial improvements over time. Software is the same.

Edit: Just compare how much easier it is to make something substantial today than it would've been 15 years ago. You could make a fairly sophisticated SaaS today from scratch using Django, Postgres, Redis, Stripe, Mailgun, and AWS in about a month. That's progress, even if it is boring and obvious.


It's easier to make a very specific style of application that's very similar to what a lot of other people make. Anything outside that is probably significantly more difficult though, because of the massive complexity of everything your software has to interact with.


It's a little different imo. We've settled on the design of the jet engine through math and engineering. The shape of it is designed and not arbitrary. On the other hand, the syntax of a programming language is entirely arbitrary, settled upon by whoever wrote the tool using whatever convention struck them at the time. Programming languages weren't designed or arrived at through linguistic theory or psychology about what might be the best way to structure this stuff, or anything like that. It wasn't rooted in empirical evidence like the jet engine is. It was a programmer at AT&T deciding that grep seemed like a pretty cool acronym 50 years ago, and we are still teaching kids how to use grep today for no reason other than this arbitrary syntax now being the status quo, because everyone in the business has already taken the time to learn it. Not because it's good or because we have some evidence that this is a decent syntax; only because we are lazy and entrenched.


> the syntax of a programming language is entirely arbitrary [..] Programming languages weren't designed

This statement is quite simply wrong. Unless I'm misunderstanding you, you seem to be ignoring decades worth of programming language research and the years of work put into designing individual languages (whether by BDFLs, communities, or ANSI committees).

I agree with you that perhaps jet engines weren't the perfect metaphor. However, I think that that's because jet engines have much more stringent external (physical) constraints than software or programming languages. The latter do allow for a certain amount of "taste-judgment": it doesn't matter too much which way you do it, but some people at some times prefer one way over another.

So a more apt metaphor might be architecture: We've known how to build houses for millenia, but we're still improving our materials and techniques, while tradition and fashion continue to play a big role in what they end up looking like.


I get that programming languages are designed by committees and every decision is thought out, but the outcome is still arbitrary to whatever they decided. It really is arbitrary that a for loop is written like this in Python:

    for val in sequence:
        print(val)

and like this in R:

    for (val in sequence) {
        print(val)
    }

and like this in bash:

    for val in sequence; do echo "$val"; done

I mean, there is no paper that I can find showing that curly braces are especially appealing to the human eye, or that it's particularly intuitive to write "do" and "done". For loops resemble no linguistic sentence structure that I recognize; they are pretty much their own thing structurally, imo. These are all decisions made by people because they happened to make sense at the time, for any number of reasons; not because the R core team ran psychological experiments to empirically determine the most intuitive for-loop structure when they drafted their language. That data has never been collected; the R core team just followed what the S team had already done, and that's that. Arbitrary.


I’m an ML researcher, but did my grad work in systems. Here are several crucially important technologies invented since 1996 not mentioned in the article, which are fundamental underpinnings of the research community:

- Jupyter notebooks, for teaching machine learning. I use these for teaching.

- OpenCL and other libraries for running scientific simulations on GPUs, gpflow for training on GPUs.

- Keras and PyTorch (libraries for simple training of deep learning models). More than half of machine learning research exists on top of these libraries.

Let’s not even get into the myriad recent discoveries in ML and libraries for them.

Parquet file format and similar formats for mass data storage. Spark and Hadoop for massive parallel computation. Hive and Athena further build upon these innovations. A good portion of distributed computing literature is built on these.

Eventually consistent databases and NoSQL. There's so much here, it's hard to list everything.

ElasticSearch and Lucene and other such tools for text search.

Then there is all the low-level systems research: new file systems like BTRFS and ZFS. WireGuard is something that's just been built and seems foundational.

I am running out of words but let’s conclude by saying the premise of this article is laughable


>I’m an ML researcher, but did my grad work in systems.

>- Jupyter notebooks, for teaching machine learning. I use these for teaching.

>- OpenCL and other libraries for running scientific simulations on GPUs, gpflow for training on GPUs.

>- Keras and PyTorch (libraries for simple training of deep learning models). More than half of machine learning research exists on top of these libraries.

Would you share your usual workflow and the problems you face in that space with these tools?


I wish pip install automatically updated the requirements file like in some other languages.

I look up some boilerplate in stack overflow every week for matplotlib figures to look better.

CSVs generated in Excel cause errors when read in Python without a special incantation.
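
(For what it's worth, the incantation I end up using is roughly this, a minimal sketch assuming pandas and a hypothetical file name; Excel writes a UTF-8 BOM and sometimes ';' separators, which is what trips up the defaults.)

    import pandas as pd

    df = pd.read_csv(
        "export_from_excel.csv",  # hypothetical file
        encoding="utf-8-sig",     # strips the BOM Excel prepends
        sep=None,                 # sniff ',' vs ';' automatically
        engine="python",          # required when sep=None
    )
    print(df.head())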

It’s not super easy to set up multi machine training. AWS GPUs are extremely expensive.

sklearn models, once saved, are not easy to rework into a different format to be used with other languages.

I wish Python could do compile-time checking instead of running for 30 minutes and then reporting a type mismatch. mypy is still not very good and is in beta.
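
(You can get a taste of that today by running mypy over annotated code before kicking off the long job; a minimal sketch with a made-up function:)

    from typing import List

    def mean(values: List[float]) -> float:
        return sum(values) / len(values)

    # mypy flags the call below as an incompatible argument type without
    # executing anything; at runtime it would only fail once sum() hits
    # the strings, possibly 30 minutes in.
    result = mean(["0.1", "0.2"])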


OK. Asking because we've started building https://iko.ai as an internal machine learning platform to solve actual problems we have faced delivering machine learning projects for our clients these past few years. So far, it has the following:

- No-setup Jupyter environments with the most popular libraries pre-installed

- Real-time collaboration on notebooks: see cursors and changes, etc.

- Multiple versions of your notebooks

- Long-running notebook scheduling with output that survives closed tabs and network disruptions. You can watch the notebook running live even on your phone without opening the JupyterLab interface.

- Automatic experiment tracking: automatically detects your models, parameters, and metrics and saves them without you remembering to do so or polluting your notebook with tracking code

- Easily deploy your model and get a "REST endpoint" so data scientists don't tap on anyone's shoulder to deploy their model, and developers don't need to worry about ML dependencies to use the models

- Build Docker images for your models and push them to a registry to use them wherever you want

- Monitor your models' performance on a live dashboard

- Publish notebooks as AppBooks: automatically parametrize a notebook to enable clients to interact with it without exporting PDFs or having to build an application or mutate the notebook. This is very useful when you want to expose some parameters that are very domain specific to a domain expert.

Much more on our roadmap. We're only focusing on actual problems we have faced serving our clients, and problems we are facing now. We're always on the lookout for problems people who work in the field have. What do you think?


I'm not agreeing with the author, but he accepts ML as an exception, and Jupyter notebooks are the same as Mathematica notebooks except that they are open to languages other than Wolfram's.

He does cover this.

Again, I don’t think you are necessarily wrong, but the author does cover Jupyter and ML.


The author specifically calls ML non-human computing, and I believe he is talking about the advances in ML from a theoretical standpoint. I don't believe it is possible to lump the human-written software used to train models, generate adversarial examples, and perform other tasks on GPUs in as purely non-human computing. In fact, a lot of the recent advances in computing are specifically to enable this type of computing to be more efficient: easily parallelized training, analysis of bigger datasets, graph databases, and other "big data" technologies.

In any case, software notebooks definitely existed before Jupyter like you say (I'm not that familiar with Wolfram's, but Mathematica has a similar idea), but I don't believe they date back to 1996. I might be mistaken. I don't see the author mention them in the text.


Mathematica is mentioned and was released in 1988.

Also, I don’t see anything particularly new in how ML computing is organized.

We’ve had vectorization, and massively parallel processors since the 80s.

All that’s new is that those techniques are being used on desktop machines now because of Moore’s law.

I’m not saying there are no new discoveries in ML, just that the engineering isn’t new.


Well of course it's true that once a field gets to a moderate size, someone's been everywhere before. This comment section is littered with "ah but X looked at this in the 1980s".

What seems to have happened since the 90s is we have a lot of ergonomic improvements, which are still valuable. Want to run something in a cloud in 1995? Yes, you can do it, but it's vastly simpler and cheaper to do in 2020.

In the end the major things that have changed are that computing power got much cheaper, and computing has become part of a vast number of businesses. That's why a fashion retailer can now click some buttons and make a website. Or why there's an AI that can beat everyone at StarCraft II.


Ironic that he uses these examples as a non-incremental innovation.

> LISP, Algol, Basic, APL, Unix, C, Oracle, Smalltalk, Windows, C++, LabView, HyperCard, Mathematica, Haskell, WWW, Python, Mosaic, Java, JavaScript, Ruby, Flash, Postgress.

while using the following as incremental:

> IntelliJ, Eclipse, ASP, Spring, Rails, Scala, AWS, Clojure, Heroku, V8, Go, React, Docker, Kubernetes, Wasm

What he doesn't understand is that everything is based on everything else. Many languages got inspiration from another language, or some other technology. https://github.com/stereobooster/programming-languages-genea...

There is no such thing as fully independent innovation. Not just in tech, but every single piece of innovation in human history.


Would be nice for them to give examples of what they consider "real progress". Because the pre-1996 list:

> LISP, Algol, Basic, APL, Unix, C, Oracle, Smalltalk, Windows, C++, LabView, HyperCard, Mathematica, Haskell, WWW, Python, Mosaic, Java, JavaScript, Ruby, Flash, Postgress

Seems just as "weak" as the list they give after 1996:

> IntelliJ, Eclipse, ASP, Spring, Rails, Scala, AWS, Clojure, Heroku, V8, Go, React, Docker, Kubernetes, Wasm

Basically, after LISP and Algol everything was just a small incremental change over them, some might even say a step back.

From the way they phrase "progress", it seems the lineage would look like:

Machine Code -> Assembly -> Fortran -> Lisp -> Algol

That takes us back to 1958 then, not 1996. I could agree more if that was the argument: that since Lisp and Algol, nothing really new in programming languages has come out. And if making that argument, the hypothesis around why would be very different. It seems it might be more that software quickly got bootstrapped and then we reached a sort of maximum with it.


This is one of those canards that people who've never actually programmed in Algol or old Lisps (or who forget what it's like!) say. The thing to remember about Algol 60 is that it does not have nominal types! It only has (fixed) numerical and array types (it didn't even have records!). Nominal typing was invented by Niklaus Wirth and Tony Hoare for their Algol W, and then became real when Wirth released Pascal (it was also present in the Algol 68 spec, though Algol 68 didn't have real compilers until the mid 70s). That's not a small improvement over Algol 60... it was a massive conceptual change. Before 1978 and the lambda papers, Lisp was a very different language. Lexical scoping (which technically was invented before, in ML) was a massive change in how Lisp worked, because it made the functional style of programming possible.


That's so interesting. It is crazy (and frightening) how quickly we forget. I can't imagine a world without record types.

I was thinking about this the other day. When Dijkstra was coming up with his graph algorithm, how the heck did he represent the graph as a data structure??


Yes, it is also especially sad that the first almost-memory-safe systems language, ESPOL (later replaced by NEWP, still in use to this day on Unisys ClearPath MCP), dates back to 1961, and here we are trying to fix C, created 10 years later.


I thought we gave up on trying to fix C?


Companies like Microsoft and Google still believe that while POSIX is around, C needs some fixing as rewriting everything is too expensive.

"Checked C"

https://www.microsoft.com/en-us/research/project/checked-c/

https://www.youtube.com/watch?v=EuxAzvtX9CI

"Improving Your Android App to Prevent Security Vulnerabilities"

https://www.youtube.com/watch?v=zkoOD4hmiGE


There have certainly been attempts. The Bret Victor school of thought, and LunaLang, for example. It doesn't seem that there's a massive shortage of ideas - which points more towards a generalised industrial lack of effort as the main culprit.

I guess Microsoft's Lean theorem prover/programming language probably counts? I could see that becoming as big a thing as Mathematica.


VPRI's STEPS was a great research project in this direction

https://alarmingdevelopment.org/?p=229

https://www.hackernewspapers.com/2017/1146-steps-toward-the-...

But then they published their final report... did other stuff for a while... announced they'd brought together a great team with long-term funding... and they've been doing... something? Nine years now and I haven't heard of any more progress on this front.


I remember reading Alan say (perhaps on Quora, where he seems to be quite active) that he got tired of fighting to fund VPRI every year.


Ironically that group is oddly cynical about the importance of the UI/Visual representation.


I think this is just a sign that the field is mature. You could take someone from the 1800s and show them today's machine shop, and they would basically know what the tools do -- lathes, mills, micrometers -- but we're making pretty complicated things with them today. It's just that "the past" came up with pretty good tools and we're still using them.

In 1996, I'm not sure everyone was carrying around a super computer with a multi-day battery life that could shoot slow-motion 4k video and understand your speech. And yet, today we don't even think twice about that. Clearly something has happened in the past 25 years.


Absolutely. It strikes me that his understanding of (and certainly his appreciation for) the history of technology is quite limited if he expects ad infinitum exponential development.


They left out a lot of advancements in their list. I can't even see the common theme for what they decided to include or not.

Is it just the list of software tools they use?


Yup! just a laundry list of software.


What? I have to hard disagree.

There has been a plethora of brand new languages since 1996. Most of them are due to LLVM, which is impressive on its own. Not to mention Clang.

Search engines used to be considered an impossible problem (just using grep doesn't count), but it is now quite easy to solve thanks to ElasticSearch and Solr.
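
To make the search point concrete, here's a minimal sketch with the Python Elasticsearch client (assuming a 7.x client and a single local node; the index and field names are made up). Getting to this point used to mean building and tuning your own inverted index.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Index a couple of documents into a made-up "posts" index.
    es.index(index="posts", id=1, body={"title": "The Great Software Stagnation"})
    es.index(index="posts", id=2, body={"title": "Incremental progress in compilers"})
    es.indices.refresh(index="posts")

    # Full-text match query, with relevance scoring for free.
    results = es.search(index="posts", body={"query": {"match": {"title": "stagnation"}}})
    for hit in results["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])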

What about all those distributed databases we all take for granted? Open source DBs used to suck when it came to horizontal scalability.

As if the above are not enough, what about virtualization and containerization technology? Are they not impressive enough for OP? They single-handedly spawned multi-billion-dollar businesses and make deployment significantly easier.

Need I say more?


The ideas behind LLVM were first researched by IBM during their RISC research on the implementation of the PL.8 language, and later by the Amsterdam Compiler Kit.

C++ as a database and AST representation was used by Lucid in Energize C++ and IBM VisualAge for C++ version 4. Both products died because they were too resource-hungry for the hardware of the day.

Need I say more?


At minimum, C++11 and forward are revolutionary leaps. Combine that with Rust, and I think the author is missing some critical elements that would have stopped his hypothesis in its tracks.

He makes logical leaps that aren't justified.


I don’t see anything revolutionary about rust.

It is a very well-designed piece of engineering and an extremely useful and tasteful set of tools, but it's no paradigm shift.


I think the author has a point, perhaps poorly articulated. There is obviously a large universe of software out there so broadly speaking a lot of innovation has happened since and continues.

However, there is a general crappiness around software innovation (or the lack thereof) these days that hides behind "sexy" CLI tools with a few "sexy" options. Some of these tools are very popular and can be difficult to assail but the smell of mediocrity (or lack of ambition) around them is hard to shake off.

A comment left on the post does in fact hit on one of the reasons: the lack of interest in creating user empowerment technology. The obsession with CLI tools for instance is framed as simply facilitating better integration, while this is true, it is also a convenient excuse to avoid the difficult task of building user interfacing software.


No mention of distributed version control. That seems to me a more important development than most of the post-1996 list.


Today git is mostly used the same way as a centralized version control system, no?


I don't think so. Even if the main repository is organized in a linear style, it's very common for developers to make experimental branches on their own machines. This was inconvenient before DVCS, so you used to see things like conditional compilation or large blocks of commented-out code. People tend to work in the way best supported by their tools, and I'm convinced DVCS changed the way we work by making more things convenient. And that's not even counting large-scale truly distributed projects like Linux kernel development (the origin of git), which would be practically impossible without it.


The experimental branches point is a good one. I'm not convinced about "large-scale truly distributed projects" though. Larger scale projects than that are hosted on github.


The Linux kernel is developed in that way. git itself is another example.


You can use it that way if you want. That's not how everyone else uses it, however.


Actually, that is how most of the projects I have been on have decided to use it.

Most enterprise shops don't see any value in doing otherwise.


I feel the same for the browser environment. Other than a few new “exciting” APIs and canvas, developing for the browser is essentially still the same deal.

I’m tired of hacking documents into apps. I want the web to be split into Markdown-like web (i.e. pure content and some meta) and actual web apps that work like native apps.

I want Accept: application/web


I mean, if you'd like to put it in perspective, try developing for IE6. You don't even have a dev console.

I had a recent revelation when watching a video someone posted of a Symbolics Lisp machine and its legendary dev environment. That being: it's basically just modern browser devtools, but for the whole OS. Turtles all the way down.

But somehow, despite having the bird in our hand, we as an industry lost them, and regressed to having vastly-inferior tools to those Symbolics devtools, and only gradually rebuilt them over the course of multiple decades.

-------

As a harsh rebuttal to the OP article; I think the number one thing that's "whooshing" over that guy's head is that if someone develops a proof-of-concept for something, it doesn't invalidate the far-more-significant trick of making it widespread. Often it's the latter that's far more difficult, because it requires eliminating compromises that weren't acceptable to people before.

The strange thing about the software industry these days is that quite a few "truisms" about software development have been completely invalidated. Ironclad laws of how the world works have slowly been rendered completely false, and it's happened not through some sudden eureka development but through slow, sneaky, iterative improvement. The same sort of thing that made solar cheaper than coal, or SSDs cheaper than HDDs (I don't know if we've actually crossed that line yet, but it seems pretty inevitable that once the investment impetus behind HDDs collapses, we'll eventually get there). Flatscreens versus CRTs was another great example.


What 'truisms' about software development do you think have been completely invalidated?


I don't know how far back your view on web development goes, but the approach from 20 years ago really was documents being thrown around and lots of forms with a little bit of JS. That was hacking documents into apps.

Today's approach for building web apps is basically code up a fat client application which downloads to the browser, runs, and talks to some remote web API (i.e. REST) using JSON. This is much closer to native apps and very far from the document approach. The app and the pure content are cleanly split between the initial app bundle and the web APIs.


> This is as good as it gets: a 50 year old OS, 30 year old text editors, and 25 year old languages. Bullshit. No technology has ever been permanent. We’ve just lost the will to improve.

We’re at the better horses stage of the cycle. What’s the next car?


It is not all about the capacity of the software. If software has been around for a long time, people can gather experience about how to use it. Longstanding software, in the hands of someone very familiar with it, becomes a simple, comfortable, reliable tool. That has huge value to the individuals who use it.

They know they are blind to potentially better products, but they no longer have the time to gather the experience and confidence in the new products.


Content addressable, immutable code? https://www.unisonweb.org


Put everything in the database! Directories are dumb, the world is not a tree it's a graph. Files are dumb, they don't have constraints or decent metadata. Syntax is latent structure. Config files are just a featureless database. Stop it! Code wants to be data. (shameful self-plug: aquameta.org)


I'm somewhat peeved by his "No technology has ever been permanent" statement. Compared to most human technologies, 25 years is not exactly what you'd call a grandfatherly age...


Commercial quantum computers and true parallel programming languages.


Quantum computing is only likely to be an accelerator for a few specific kinds of problems. The bulk of programming (even the bulk of most quantum algorithms!) is still happening in classical computers.


I'm not convinced quantum computers are going to accelerate anything.


Problems in the complexity class BQP are mathematically proven to be more efficient on a quantum computer than on a classical computer.

You can still doubt that quantum computers are physically realizable. Perhaps the number of qubits needed for error correction will turn out to increase fundamentally faster than the number of useful qubits, or perhaps there is some other fundamental limit that we will hit.

But the maths is clear: if it is at all possible to create a quantum computer, it will be faster for certain specific problems than a classical computer (in particular, it will be faster than a classical computer at simulating quantum phenomena).


Since BQP is a subset of PSPACE, are you claiming that P != PSPACE has been proved?


Oops, you're right, I misspoke when I claimed this was proven. I mixed up BQP with the recent quantum supremacy demonstration, apologies.

Still, it is strongly suspected that BQP != P. This was especially reinforced by the MIP* = RE result, as far as I understand from Scott Aaronson [0].

[0] https://www.scottaaronson.com/blog/?p=4512#comment-1829033


I see your Scott Aaronson and raise you one Gil Kalai.

https://gilkalai.wordpress.com/2020/12/29/the-argument-again...


The top answer to Alan Kay's question on SO, "Significant new inventions in computing since 1980", https://stackoverflow.com/questions/432922/significant-new-i... is the Web.

And TFA argues the Web caused innovation to be pushed out by adoption and application, aka solving actual people's problems, aka money. So that fits.

[ TFA's date of 1996 just freezes the surface tech that got swept into power by the web, e.g. Java was deliberately not innovative, but a "blue-collar language". ]

Of course, it is possible that all the foundational stuff has already been done. It happened to physics, why not us?


Never mind languages... how has the size of software evolved since 1996? Are we now delivering larger programs faster? I think we are.

This came to mind because of a quote I came across the other day, in a paper from 1980:

"Interlisp is a very large software system and large software systems are not easy to construct. Interlisp-D has on the order of 17,000 lines of Lisp code, 6,000 lines of Bcpl, and 4,000 lines of microcode."

("Interlisp-D: Overview and Status", at http://www.softwarepreservation.net/projects/LISP/interlisp-... )


The simplification of user experience has reduced the opportunity for many individuals to learn. In the 1980s, many underachieving boys found the small rewards of computer programming addictive. This promoted self-learning in a way that no longer happens.


Doesn't it? You can google "how to make iphone app" (or whatever) and get a wealth of videos, blog posts, guides, tutorials, github repositories. You can join online communities to learn and ask questions. You can download all the tools for free.

My early days of programming were tinkering in BASIC and asking my parents to buy me "Learn C in 24 days" type books and hoping the compilers on the bundled CD would actually work. I had very few resources and had nobody to ask if I got stuck, except for other kids who didn't know much more than me. Seems a lot better now.


>I had very few resources and had nobody to ask if I got stuck

That's exactly the self-learning experience. You had simple foundations that demanded to be explored and conquered. My first programming book was a ~700 page Java tome because that was the hot thing. After I had read it start to finish I realized that I still don't know what programming is. Then I read K&R and the scales fell from my eyes. Programming is so simple and elegant! The OOP approach of Java teaches you to just put the things you think of into classes, which is like being handed a blank piece of paper. C has an internal logic and an emergent structure of possibility because it reveals limitations. This is even more so the case for BASIC. Nowadays getting into programming seems far worse than 2000's Java. Everything is a library, everything is answered on Google, all the languages are feature-rich and opaque, any computer is fast enough for the worst code. Why bother if there is nowhere to go?


The main restriction when I was starting out was the cost. Now pretty much any language you want to use is just a download away.


Rust is a huge omission from that list. Until about 5 years ago there was no safe, GC-free, practical systems programming language, and furthermore it was unknown how to build such a thing. Now we know.


> Until about 5 years ago there was no safe, GC-free, practical systems programming language, and furthermore it was unknown how to build such a thing. Now we know.

It was known [1], but only by theorists, and theorists aren't the ones building compilers and implementing real programming languages.

Obviously the Rust community has achieved an impressive feat of engineering, and I'm extremely grateful for that. But using a substructural type system to avoid needing a garbage collector is not a new idea, just one that Rust has successfully popularized.

It takes a long time for ideas from the programming languages theory community to reach the mainstream.

[1] http://www.cs.ioc.ee/ewscs/2010/mycroft/linear-2up.pdf


Theorists knew how to build certain features but did not know whether the resulting language would scale up to work in practice. To determine that, they would have had to do the experiment of trying to get mass adoption for a language. No-one did that experiment until Rust did.

And of course just implementing linear types and trying to get people to use the language probably would have failed. Borrow checking, compiler error message engineering, and various kinds of social engineering were probably essential to make Rust practical.


Also (as someone with a PhD in a PL-related area) I think the effort to implement a real programming language and drive adoption is a much more significant contribution than writing a dozen academic papers describing the theory behind its features. The academic research is helpful but often people say "not a new idea" and imply that the real work was done by whoever first wrote down the idea, which is often quite wrong IMHO.


Paraphrasing from Dave Herman in a 2011 lunch conversation about Rust, the goal of Rust is not to break new ground in PL theory, but rather to bring the existing ideas into a practical language.


Exactly. We should recognize that as the real contribution of Rust: not breaking new ground in what is theoretically possible, but making it practical for the masses.


Ada is an example of this that everyone likes to ignore.


No. Ada had no safe solution to dynamic memory deallocation until SPARK started supporting it, and ergonomic support for pointers in SPARK (borrowing) postdates Rust. Arguably we still don't have proof that SPARK is ergonomic enough for widespread adoption.

(I dated Rust's breakthrough to five years ago because although Rust had the features well before then, it was only about five years ago that it became clear they were good enough for programmers to adopt at scale.)


We’re also missing TypeScript and C#, both of which are quite popular.


It seems pretty unfair to talk about early history of programming languages compared to last 25 years.

I mean, Lisp and Algol were some of the first programming languages ever. Of course there was going to be lots of experimentation in different directions. The ecosystem then matures around a smaller number of solutions and tries to improve them in depth instead of breadth. That's just the natural product development cycle.

Sure, we don't have a lot of new groundbreaking languages, but are you really going to tell me that Haskell or JavaScript is the same now as it was in 1996? Would someone from 1996 even recognize it now? Their improvements are probably more unrecognizable to someone programming in that language in 1996 than Go would be to a C programmer in 1996.


I think the general thrust of the article, that CS innovation is harder and less prolific than in 1996, is generally true. I think it's a lot more nuanced than in the article, though.

Something not mentioned is that CS (and every field) has gotten larger with time, and the frontier for new discoveries and improvements is getting farther and farther away.

Another thing is that while there aren't a lot of fundamentally new things, much of our technological progress in the past 100 years has been refinement of established things.

> We’ve just lost the will to improve.

That statement strikes me as a little too broad. It's true that we are less risky in our pursuit of improvement, but I wouldn't say that we've lost it.


Reminds me of a friend in 2001.

He said my idea to become a dev was dumb and the money was in being an admin.

His reasoning was, he already had every software he needed.

eMule, Kazaa, ICQ, mIRC, TeamSpeak: these apps did all he wanted to do, so why would anyone pay money to develop new software?


The author sets up a short list of technologies developed since 1996:

> Since 1996 we’ve gotten: IntelliJ, Eclipse, ASP, Spring, Rails, Scala, AWS, Clojure, Heroku, V8, Go, React, Docker, Kubernetes, Wasm.

And then waves them all away as incremental or unimpressive.

The author holds up V8 as an example of something that isn't fundamentally new, which is missing the entire point of developing a runtime for an existing language. Not to mention, V8 is an impressive achievement in itself.

I'm lost as to why the author wants to work so hard to dismiss these technologies while simultaneously suggesting that nothing else noteworthy has been developed in the past two decades.

If an engineer in 1996 fell into a coma and woke up today, they'd be greeted with an entirely different world. We're driving around in electric cars, carrying around cell phones that are orders of magnitude faster than desktop PCs from 1996, and communication technology has evolved to the point that many of us are seamlessly working from home.

Technology only looks stagnant when you're steeped in it every day of every year. Up close, it looks slow. Take a step back and it's amazing what we have at our disposal relative to 1996. Don't mistake cynicism for expertise or objectivity.


As I get older myself, this brand of cynicism seems to be a projection of death anxiety or possibly the "loss of innocence" where grand visions of how the world ought to be were not met.

1996 was 24 years ago. You are totally correct that the progress and change in this period has been incredible on the historical scale. But 24 years on the timeline of a human life is quite a significant portion, and some may feel that their expectations of futuristic utopia have been subverted, making them cynical.


25! Happy new year!


>> some may feel that their expectations of futuristic utopia have been subverted, making them cynical.

i know i do!

lots of stuff about the future sucks, and i can't think of a way to fix anything important with faster garbage collection or better type systems... maybe that's the difference between makers and cynics. i guess i'll try harder :p


> and i can't think of a way to fix anything important with faster garbage collection or better type systems

I can, I am very hopeful better type systems will help heal the absolutely dreadful state of software robustness.

Better types (and by extension, better type systems) help eliminate many costly tests, improve knowledge transfer from senior engineers to junior engineers, make refactoring and adapting to changing requirements a much less error-prone process where the computer can guide you, and provide a more rigid way of breaking down problems into chunks instead of a procedural approach where good boundaries are harder to intuitively determine.

While education is what enables people to write better code with them, we need better type systems and languages that make this process easier for the average programmer to learn and use. Already languages like TypeScript, Kotlin and Swift are slowly getting many people to take a gander over the brick wall by exposing them to constructs like sum types and specific patterns like optionals and either/result types, that should fuel a little bit of hope at least!
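
(To keep with the Python examples elsewhere in the thread, a minimal sketch of the "optionals" idea; TypeScript's strict null checks and Kotlin/Swift optionals give you the same thing natively.)

    from typing import Optional

    def find_user(user_id: int) -> Optional[str]:
        # Hypothetical lookup that may fail.
        users = {1: "alice", 2: "bob"}
        return users.get(user_id)

    name = find_user(3)
    # Without the None check below, a strict type checker such as mypy
    # rejects name.upper(); the narrowing in the if/else is what makes
    # the "user might not exist" case impossible to forget.
    if name is None:
        print("no such user")
    else:
        print(name.upper())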

EDIT: That's some visceral reaction. Would be nice to see some disagreement stated in text, or if any particular point needs further explaining, or to see if just hope of things improving is what's angering people.


heh, well, by "important" i meant like freedom, justice, and the continued existence of humanity. but like i was saying up-thread, i think software quality is important, and in that sense, ubiquitous and accessible type safety definitely seems like a Good Thing. i guess they might even help keep us out of trouble in those "important" areas.

EDIT: actually, you kinda cheered me up :)


I see (at least) two technological trends that may strongly influence "important" problems and that may be interesting to compare:

"3d-printification" or democratisation of manufacturing capabilities. When advanced manufacturing become cheap, generic and small scale you need less capital to create and design important hardware.

"UX-ification" of advanced programming practices or democratisation of software creation capabilities. When advanced software design, composition and implementation becomes cheap, generic and scalable you need less capital to create/design/implement software and to be in control of the software you want/need.

When those things come together the importance of capital will greatly diminish for some vital parts of life. It will not solve all problems but some. And maybe create other problems. But it should at the very least greatly impact "important" problems.

The "UX-ification" of software creation is the part where software engineers can be part of.

Disclaimer: Perhaps I have a naive assumption underlying all this - the idea that increased flexibility eventually "will lead" to apparent simplicity. But aren't living organisms kind of a proof of this? The amount of complexity that an animal needs to understand to continue living and reproducing is dwarfed by the complexity of the animal itself. Well, also cars and computers.

Something like that. Maybe. Goodnight.


The problem is that, while improved development technology does raise the floor, it also raises the ceiling, so equality doesn't improve.

Some of us could host 2005 YouTube out of pocket, but it’s 2020, and people with fast connections expect 1080p video now, so it remains impossible for individuals to compete with corporations.


Yeah, that's a good point. I kind of expect that the ceiling should pop off at some point though. But I can't make a good argument for that.

Like, will it always make sense to add things to YouTube or will a more decentralised approach make more sense? Will single responsibility principle make sense at that scale? Can infrastructure costs be shifted around in flexible ways? Etc.

But yeah it's telling that Facebook was initially looking for a much more decentralised approach but gave up on it when adapting to reality and economics.


This article is just the IT equivalent of 'no good music gets made today' and so on. It's a combination of nostalgia and advancing age, the observation he's making is really no different. I remember software in 1996 too, and most of it really sucked. If you want to relive the 90s experience just go use some Oracle or IBM software. That will show you true stagnation.


> We're driving around in electric cars, carrying around cell phones that are orders of magnitude faster than desktop PCs from 1996, and communication technology

I'd argue that these are primarily hardware innovations, not software innovations. The OP mentions software, not technology in general.

See also Moore's law (hardware upgrades double computing speed every ~2 years) vs Proebsting's Law [1] (compiler upgrades double computing speed every ~18 years).

[1] http://proebsting.cs.arizona.edu/law.html


> The OP mentions software, not technology in general.

Hardware and software are closely intertwined. We couldn't design, develop, build, and run modern hardware without modern software packages. You're just not seeing the advances in software if you're only looking at famous web services technologies.

The whole article is extremely myopic, limited only to the author's narrow domain of web services and coding in text editors.


Okay - could you give some examples of where you think huge advances in software have been made?


Not OP, but: eBPF, Android (phone, TV, auto) and iOS, ChromeOS, VS Code, Go, Swift, PowerShell (it's a terrible piece of software but a vast improvement for Windows), virtualisation, public cloud platforms (virtualisation at huge scale with APIs), the HashiCorp stack, ZFS, LXC and later Docker, Kubernetes, Redis.


What you think of hardware innovations are often actually software innovations. Your iPhone doesn’t take a better picture because it has a better lens this year - it does because it has better software.


>> If an engineer in 1996 fell into a coma and woke up today, they'd be greeted with an entirely different world.

i was little and didn't know shit, but i was already learning to program, and i think about this all the time. like mind blown, every day. 8 threads and 2GB of ram in my cheap ass phone? crazy! and we got so good at writing compilers they give them away for free!

JS was way slow because MSIE wasn't supposed to compete with desktop apps :p also, transpilation, JIT, virtualization and containers seem like a pretty big deal to me -- virtualization was academic in the 90's.

also, a lot of that "old" stuff like python and ruby and even C++ really weren't as mature and feature-full. memory safety/auto-management and concurrency language features are prevalent now, and i think people don't always appreciate (or remember?) when your computer couldn't walk and chew gum at the same time.

it seems to me like our tools have clearly improved -- i think a more useful conversation could investigate how to apply what we now have to improving productivity, safety and security.


Virtualization isn't that recent. It at least goes back to the 60's with the IBM 360 mainframe.

For micros, it has been available, in some form, on x86 since 1985 when the 386 processor was first released. Remember the vm86 mode that let people run DOS apps under Windows 2.x?


> Remember the vm86 mode that let people run DOS apps under Windows 2.x?

That's not really virtualization. In the x86 world, hardware virtualization extensions don't crop up until 2003. Efficient dynamic binary translation (a key component for efficient virtualization without hardware support) I generally reckon to start with DynamoRIO (~2001), with Intel's Pin tool coming out in 2004.

(Do note that hardware virtualization does predate x86; I believe IBM 360 was the first one to have support for it, but I'm really bad with dates for processor milestones).


I agree it is generally not what people consider virtualization today, but it was still a form of early virtualization. It was amazing for the time, letting people run most of their DOS apps under Windows 386.

(Note VMware Workstation, as another poster pointed out, and even earlier "emulation" solutions like Bochs existed before 2003.)


Just a data point. VMware Workstation came out as a product in 1999, and did excellent x86 virtualisation without hardware support using dynamic translation and code scanning techniques.


> If an engineer in 1996 fell into a coma and woke up today, they'd be greeted with an entirely different world. We're driving around in electric cars, carrying around cell phones that are orders of magnitude faster than desktop PCs from 1996, and communication technology has evolved to the point that many of us are seamlessly working from home.

But these things were not driven by software innovations.


Having been an engineer in 1996: really, none of this wasn't foreseen by anyone with a feel for how things were going.

In 1996 a friend was dating an engineer with an electric car. She'd thrown a bunch of time and money into it. It had a range of about 80-90 miles and wasn't slow at all.

Around then there was a skunkworks a couple of buildings away from me. On the spectrum analyzer I could see spread-spectrum signals showing up at 900 MHz and 2.4 GHz.

Edit: Stay classy HN.


> I'm lost as to why the author wants to work so hard to dismiss these technologies

Some people just want to find negativity in the world.


To me it reads as bitterness about the uptake of his personal project, which he would presumably class with the cool tech from before '96. Some of the copy on that project's page has a similar feel to this blog post:

http://www.subtext-lang.org/

"This approach antagonizes both academics and professionals, but it is what I must do."


> Some people just want to find negativity in the world

Are you kidding? The author has (clearly) been dedicating his entire career to trying to solve these problems. He's not some drive-by hipster reminiscing about the good old days. The problem is that in a lot of ways programming is stuck in the "good old days".


I'm going to take a wiiiiild guess here that the author's personal peak was around 1996.

It's just a reaction to 'the kids' coming after him not being up to how good he (mis)remembers the past.

This negativity is a disappointing and growing trend among some grumpy old men in our field as that generation approach retirement. It disrespects people working today and I don't like it.


>This negativity is a disappointing and growing trend among some grumpy old men in our field as that generation approach retirement. It disrespects people working today and I don't like it.

Your contributions thus far in this thread have been negative with no substantive rebuttal to any points made in the article.


The key logical disconnect here is

> All of these latter technologies are useful incremental improvements on top of the foundational technologies that came before.

As if that wasn’t the case before. We’ve always built on what came before. What on earth does the author think was so revolutionary about Java? The language was an evolution of the C family. The VM was an evolution of Strongtalk.

The negative person says ‘all done before.’ The reasonable person says ‘step forward’ as we’ve been doing ever since.

And what’s the implication? That people have become stupid or lazy or ignorant? This is an attack on the work and integrity of a generation of people.

There is a consistent claim at the moment from CS men of a certain age that we don’t appreciate their genius enough. They bang on about how under appreciated their ideas are at their invited talks and on Twitter. I don’t think I’m the only one in the community thinking this. I think quite a lot of people feel this.

It’s not negative to call out someone else’s negativity.


>It’s not negative to call out someone else’s negativity.

That's not what I was referring to. I was referring to the actual negative statements you made about the author:

>Some people just want to find negativity in the world.

>I'm going to take a wiiiiild guess here that the author's personal peak was around 1996.

>It's just a reaction to 'the kids' coming after him not being up to how good he (mis) remembers the past.

>This negativity is a disappointing and growing trend among some grumpy old men in our field as that generation approach retirement.

That's not "calling out" negativity. That's a list of snark, broad insult, and generalizations with no supporting information. It's rude, discriminatory, and serves only to diminish the overall quality of discussion.


I don’t know what to tell you apart from

> Since 1996 almost everything has been cleverly repackaging and re-engineering prior inventions

seems like an ignorant attack on thousands of people’s professional competence and a veiled insinuation of deceit (the ‘clever repackaging and re-engineering’) and

> Suddenly, for the first time ever, programmers could get rich quick

sounds like a moral attack. All in all it’s a shitty thing to say to people. And there’s more and more grumpy people trying to tell young computer science researchers today that they’re useless and have no new ideas like this.

Have you been to a CS conference recently? You may be missing the context that these people are everywhere criticising everyone for not being as innovative as they remember they were.

It’s damaging and not true either. No thanks!


The great inventions only look non-incremental because of the way they are presented. Personally I find it fascinating to burst that bubble by looking into the rare histories of what led up to what are considered the great tentpole developments.

I really think everything is incremental in progress. Where things are closer to us, we can see the incremental steps. Where we think of the great things of the past, we are completely ignorant of the incremental steps that came before.


Realistically yes, but as a kid promised flying cars and off-world colonies it's less impressive. Blade Runner is now set in the past shrug. These are mostly hardware I suppose.

Instead we got social networking, ad tech, and surveillance capitalism.


Meh, people always bring up the "flying cars" bit but I'm going to call bullshit that these were ever really feasible, not from a tech perspective but from an economic, energy and safety perspective.

The tech for flying cars already exists, just not the tech for protecting people on the ground from flying car crashes.


Author failed to mention the ML ecosystem, which has made significant improvements to products used by real people every day. A number of items on their cool/before 1996 list seem to serve little practical purpose. Which is cool, but it does lead to the question of by what metric we're judging a technology's importance.


I would consider the advances in ML to be mainly mathematical and hardware rather than software. Being able to write the libraries to support ML operations wouldn't have been a problem in 1996 if the hardware & theory had existed.


What software do we have the theory, hardware, and use for, which hasn't been written? I guess I'm struggling to identify a useful delineation between these categories.


I totally agree with you. But, just as an anecdote on the other side, I recently did some Blender tutorials, the famous donut scene by Andrew Price. When I was done I was happy I had learned Blender better, but I was also disappointed that making 3D hasn't changed much since 1995. In 1995 I learned PowerAnimator (predecessor to Maya) and in 1996 I learned 3DSMax. The techniques used to build the scenes are exactly the same, still super tedious. What changed is better renderers and sculpting (ZBrush), but not much else.


They are nothing new, just re-iterations and refinements. Incremental innovation is no real innovation. True innovation is destructive; it removes whole fields from the map while creating new ones. I don't like being called a coward, even if the words are wrapped in velvet, but in this case the truth sticks.

PS: Machine learning is old too, although practical application, again due to incremental hardware improvement, has finally arrived.


Devil's advocate here but I sort of agree. I think that all of the hard problems with computers will continue to be hard problems until some groundbreaking new computing is discovered (quantum?). All of these technologies are mere transformations with the same hard problems with computing still present underneath.

A lot of these are also fads that will pass. Are there any problems today that were unsolvable in 1980 given enough time?

Wow, smaller transistors, more clock cycles, big whoop. We have more now but it's not fundamentally different is it?

>If an engineer in 1996 fell into a coma and woke up today, they'd be greeted with an entirely different world.

I'd also be disappointed. Oh we are on DDR4, Windows 10, multiple cores, cpp 20? Ok. So we basically just extended the trendline 25 years. Boring.


Yeah, lots of progress being made left and right: huge improvements in image processing and recognition, SLAM in every cleaning robot (and we have cleaning robots!), much improved machine translation, much better voice recognition and transcription, ubiquitous video conferencing software, super complex virtual worlds accessible to anyone and also most of what was desktop only now also available on mobile devices, which was far from a small task to achieve.


What's so impressive about V8? That billions of dollars have been poured into it to make it fast? For a problem that shouldn't have existed in the first place?


My guess is that the author is comparing the rate of change from 1971-1996 to 1996-2021 (25 years each), but didn't do a good job with explaining that.

In which case software has "stalled" in terms of "inventing anything fundamentally new" - and I'd agree.


In the grand scheme of things, there is nothing impressive about V8 - it's just a really good interpreter. Every technology we have today could easily have been programmed in pre-1996 languages; they haven't unlocked anything not possible before. We even use the exact same tooling. That is the stagnation.


This article sets up a very narrow view of "software" and then criticizes the state of "software" as if all that existed was what the author thought of as "software".


Software 2.0 is happening right now. GPT-3 and Tesla FSD are examples of this.

On the other hand, software 1.0 was a victim of its own success. People realized that they could build software companies with SQL hard-coded into Windows Forms buttons. These people have never even heard the word extensibility. Like the rest of corporate America, they are mostly focused on the next quarter's results and drown out any voices from people who want to innovate for innovation's sake.


> Software 2.0 is happening right now. GPT-3 and Tesla FSD are examples of this.

I agree with this. As an anecdote, I've spent the past decade explaining to clients that things like natural language question answering and abstractive summarization are impossible, and now we have OpenAI and others dropping pretrained models like https://github.com/openai/summarize-from-feedback that turn all those assumptions on their head. There are caveats, of course, but I've gone from a deep learning skeptic (I started my career with "traditional" ML and NLP) to believing that these sorts of techniques are truly revolutionary and we are only yet scratching the surface of what's possible with them.
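
(Not the OpenAI repo above, but as a rough sketch of how little code abstractive summarization takes now, assuming the Hugging Face transformers library; the default model downloads on first use.)

    from transformers import pipeline

    # Loads a pretrained abstractive summarization model.
    summarizer = pipeline("summarization")

    article = (
        "The Great Software Stagnation argues that almost all foundational "
        "software technology predates 1996, and that everything since has "
        "been incremental repackaging of earlier inventions."
    )

    result = summarizer(article, max_length=40, min_length=10, do_sample=False)
    print(result[0]["summary_text"])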


> People realized that they could build software companies with SQL hard-coded into Windows Forms buttons. These people have never even heard the word extensibility.

Is there anything wrong with this?


>> Is there anything wrong with this?

No. Building something useful is good. It's just not an example of technological progress. That's what they meant by a victim of its own success. When something advances enough to be useful, it's natural for a bunch of people to just make use of it as-is.


Depends. For a quick prototype or small project? Seems fine. For a project with changing requirements and planned long term use (read: most software), it's probably best to not mix all your technologies in a single layer.


From your examples, it seems like you mean AI applications. Is that what you mean by software 2.0, or am I missing something?


Wayland replaced X.

RISC-V, although it's not software.

Rust.

Tools for running C-like code on GPUs.

Those are just the things I think belonged on the list but are not. And all of them take many years to mature and gain widespread use.


> Wayland replaced X

The never ending revolution? I am still waiting to upgrade.

> Rust

Rust is great, but overkill for business logic. It's a bummer that people think they need to care about low-level details where it doesn't matter just to use a non-shit language. Meanwhile Haskell is waiting :).


If you think there have not been significant achievements since 1996 then you are not looking in the right places.

Then why stop at 1996? The Mother of All Demos by Douglas Engelbart (1968) probably had all the cool stuff anyway: graphical interfaces, videoconferencing, collaborative editing, word processing, hypertext...

A lot has been done since 1996. Could we have done more? sure. Computers have way more potential than what we use them for.


I think what the author misses is that the web has created a market for software developers, so many, many, many software developers have come online, and with each successive wave the average expertise goes down and the expectation of simple interfaces goes up. We simply need to wait for people to adopt software development practices. It's not over; we have slowed down to catch up with all of the people joining.


I heard this narrative before. Three thoughts:

- People concentrate on the tool (technology) vs. goal (value)

Let's say you needed to handle a million users in the past (read: "provide value to a million users"). It would have been an unbelievably huge effort back in 1996, requiring _A LOT_ of logistics. For most companies, it would take years to reach these million users. And god forbid you need to send out an update.

Fast-forward to 2021. One person can put the code in a Docker container, drop it into Kubernetes on AWS, scale it up, and be able to serve a million users by the end of the day. (BTW, I am not saying that you don't need to do marketing/sales and so on, but the purely engineering/distribution side ends up being way simpler.)

Oh, and BTW, all these people can use this new service on the go, because everybody has a computer in their pocket with access to the internet. And they get updates to it pretty much instantaneously.

I don't care whether a new language has curly brackets, or whether a new programming paradigm is invented each year or once a decade. I care how much value can be created. And the tools/languages/frameworks/infrastructure of 2021 provide a way to build way, way more value.

- You don't need to improve a hammer that much

The first hammers date back ~3 million years. There were relatively few improvements to them for a looong time. The idea is still the same (a heavy head and a long arm).

If the tool works well, you don't need to reinvent it that much. You invent other, more complex, and specialized tools.

- Ignoring real achievements

There were dozens of things mentioned here in this thread which weren't mentioned in the OP (LLVM, mobile, ML, and dozens and dozens of others).


Start printing the red Make Software Great Again hats.


Seems like there is just a general lack of emphasis on the improvements that have been made.

It makes sense that progress cycles from big breakthroughs to years of seeing how far we can push them.

Even so, the world is significantly different than it was in ‘96. “Stagnation” doesn’t feel like the right word.

What about advances in quantum computing? Is that not a large enough paradigm shift for the author to acknowledge?


Missing from the post-1996 list: Bitcoin, which solved the Byzantine generals problem without trusted third parties and gave us the first permissionless digital currency, along with all the innovation in the crypto space, such as zk-SNARKs, IPFS and so on.


As someone living and working in a developed western country, and someone who works in IT, I can't think of a single change or thing Bitcoin or blockchain tech has brought with it. The only difference I see is contained to the headlines of discussion sites like HN and Reddit.

Zero impact.


What about all the deep-learning-derived advances since 2010? We can do things today that we couldn't dream of in 1996. Think image search on your phone, automatic closed captioning, speech to text, etc.

Heck, at that time, Netflix was practically impossible; it wasn't until codecs and containers like H.264/MP4 became available that this changed.

Even cloud technologies alone completely changed the scale at which a single developer, or a few, can deliver software. We no longer need big iron (and the capital expenditure that goes with it) to go big.

We've seen plenty of breakthroughs in the past 25 years, some have not materialized fully yet, others are so magical and unobtrusive that it's hard to notice them.


Context matters to grok what he's saying. Read up on his "Subtext", go watch Engelbart's "Mother of all Demos", read Nelson's "Dream Machines". We are living in a shadow world compared to what might be...

http://www.subtext-lang.org/

https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

https://computerlibbook.com/


To pick only one example from the list, C++ as written in 1996 was massively different from how most people (except for the Troglodyte C++ movement, or whatever they call themselves this week) write it nowadays. At the time, it was risky to write code relying on the STL because support for it (especially the template mechanisms required to support it) was quite uneven across compilers. Truly avant-garde programmers were just starting to incorporate design patterns (and, of course, overdoing it).

It's hard to overstate how much STL based containers changed C++ code. And C++11 caused quite a bit more change.


On the one hand, I can no longer compile programs that were written ten years ago due to conflicts and library deprecations, but somehow I am supposed to be bothered that we have "30 year old text editors, and 25 year old languages." I don't care; I want things to work, and I don't want to have to make a job out of learning flavor-of-the-week tools just to make things that worked fine 10 years ago work today.

Truly one of the worst attitudes in technology: the desire for novelty is so in line with how unsustainable and ephemeral tech is.


Absolutely. My take on this is that we got too much too soon to learn how to use it all properly so now we are super inefficient with hardware that could do so much more.


How on earth does this post have this many upvotes? I find it nonsensical in almost every way, most importantly that the author just waves away or flat out ignores basically every major advance in software engineering of the past 25 years.

I mean, does he think the amazing revolution in machine learning, AI and neural networks just didn't happen? What about the absolute tidal wave of open source projects in general? In 1996 if there wasn't a library for some small piece of functionality you needed you either needed to pay for it (anyone remember the market for VB controls?) or build it yourself. These days usually the problem is figuring out which open source library is the best one for your needs.


We've made revolutionary advances in version control, package management, and CI practices too.

I think this is an example of "post something quite wrong and get lots of attention because it is so wrong".


You're totally correct, I'm thinking this is basically just a troll post.

I started my software career in the late 90s, and having worked at a company last year doing pretty much greenfield development, I was struck by how many big, hairy problems in software engineering that I experienced back then are finally just solved now:

1. You bring up version control, the first SCM system I used in the late 90s was CVS. I don't understand how any human that used CVS who now uses git+GitHub/GitLab/GitWebFrontendOfYourFancy can claim things are "stagnating".

2. Similarly, setting up a build and test process (the term "CI/CD" didn't exist in the late 90s) was a relatively huge undertaking, now it's trivial to get tests to run on every merge to master and then autodeployed with something like GitHub Actions.

3. Package management security, while not a fully solved problem, is leaps and bounds better than it was even a couple years ago. Again, I can automate my build process to run things like `npm audit` or other tools from providers like Snyk.

4. Umm, the cloud anyone? You may argue that this is a hardware change but it's really much more of a software issue - the cloud is only enabled by the huge amounts of software running everything.

I think the biggest thing I notice from the early 00s to now is that I'm able to spend the vast majority of my time, even at a small company, worrying about features for my users, as opposed to the huge amount of time I used to spend just on the underlying infrastructure and maintenance to keep everything working.


If we start with CVS in the 90s (incidentally my first version control system as well) everything looks like great progress.

But if we actually look at what was around, both in theory and practice, CVS was a giant leap backwards.

Examples:

1. Smalltalk ENVY (1990s, I think?): Automatic class/method-level versioning on save. Programmable, introspectable history to easily build CI-type stuff. See user comments here: https://news.ycombinator.com/item?id=15206339.

> you could easily introspect every change to code and by combining introspection of classes and methods quickly determine which changes were happening where. We built a test framework that could list the changes since the last run, then compute dependencies and run an appropriate selection of test cases to check. This cut down test time by 90%

2. DOMAIN Software Engineering Environment (1980s): a distributed source control and config system where the provenance of built artifacts to source files was maintained. More than that:

> DSEE can create a shell in which all programs executed in that shell window transparently read the exact version of an element requested in the user's configuration thread. The History Manager, Configuration Manager, and extensible streams mechanism (described above) work together in this way to provide a "time machine" that can place a user back in a environment that corresponds to a previous release. In this environment, users can print the version of a file used for a prior release, and can display a readonly copy of it. In addition, the compilers can use the "include" files as they were, and the source line debugger can use old binaries and old sources during debug sessions. All of this is done without making copies of any of the elements. (from Computer-Aided Software Engineering in a Distributed Workstation Environment, 1984, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.575...)

Note again, the above is from 1984.

3. PIE reports (1981): Describes a model of "contexts" and "layers" (roughly analogous to branches and revisions) for nodes (not files) that version methods, classes, class categories and configurations. On merging work from multiple authors:

>Merging two designs is accomplished by creating a new layer into which are placed the desired values for attributes as selected from two or more competing contexts.

(from An Experimental Description Based Programming Environment By Ira Goldstein and Daniel Bobrow, 1981, http://esug.org/data/HistoricalDocuments/PIE/PIE%20four%20re...)


The early 90s also had SRC's Vesta file system which had some interesting properties: https://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-106.htm... CMU's Gandalf project had some interesting "semantic" version-control features.

However: it's easy to build experimental systems with cool properties. It's really hard to make them appealing in practice when you have to give up simplifying assumptions.

In particular a lot of these systems were not well adapted to managing collections of text files in unknown formats, edited by unknown tools. That made sense for research purposes, since you can't have as many cool properties without understanding the file formats, but it made them substantially less useful in practice, which I think is partly why they didn't get adopted.


If you pick 1994/1995 as the cutoff, you then have Java, JavaScript and Ruby in the latter group. So I'm calling bullshit on an argument framed around one of the biggest years in software language advancement.

Not to mention ignoring PHP, C#, and the whole .NET framework.


But that's what I find so dumb about this post: so what if the original JavaScript came out in 1995? The current version is so vastly different (and improved), and used for such wildly different things, that it's barely the same language. I mean, heck, I was out of JS dev for just a couple of years in the early 2010s, and when I got back into it I could barely understand JS code: ES2015 added a ton of new syntax, and I was totally new to the Node/npm ecosystem and the whole JS web-app toolchain (Babel, webpack, etc.).
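
To make the "barely the same language" point concrete, here's a rough sketch of the kind of ES2015-and-later syntax that landed in that window (written as TypeScript; the fetchRepos helper is made up purely for illustration), none of which existed in everyday JavaScript of the late 2000s:

    // Arrow functions, template literals, destructuring, async/await, modules:
    // all post-ES2015 additions that changed how everyday JS reads.
    interface Repo { name: string; stars: number; }

    // Hypothetical stand-in for a real API call, just to keep this self-contained.
    async function fetchRepos(user: string): Promise<Repo[]> {
      return [{ name: `${user}/demo`, stars: 42 }];                 // template literal
    }

    const topStars = async (user: string): Promise<number> => {     // arrow function
      const repos = await fetchRepos(user);                         // async/await
      const [best] = [...repos].sort((a, b) => b.stars - a.stars);  // spread + destructuring
      return best ? best.stars : 0;
    };

    topStars("octocat").then(n => console.log(`top repo has ${n} stars`));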


>How on earth does this post have this many upvotes?

A silent majority? There may be more people than you think who generally agree with the author's thesis and this strikes you as perplexing because you hold a different view.


> A silent majority?

Highly doubtful as HN doesn't have downvotes on stories. Given the comment threads, I find it much more likely that a small minority of folks with a similarly curmudgeonly outlook as the author upvoted the story, and the majority is calling out the author on his BS.


>Highly doubtful as HN doesn't have downvotes on stories. Given the comment threads, I find it much more likely that a small minority of folks with a similarly curmudgeonly outlook as the author upvoted the story, and the majority is calling out the author on his BS.

Well that's just it, isn't it? The silent majority would be silent -- they would not be those commenting.


Lol, so your argument is the silent majority is doing the upvoting on the story but for some reason a different silent majority is upvoting the comments in these threads.


>Lol, so your argument is the silent majority is doing the upvoting on the story but for some reason a different silent majority is upvoting the comments in these threads.

No, that isn't my argument. My argument makes no mention of comment voting.


I, for one, totally agree with the author's thesis. Those of you like me who were building industrial and military software in that era know what I'm talking about. The web, JavaScript, etc., and even windowing systems were a huge step backwards in terms of software productivity. Those of you who were programming nuclear power plants, battleships, air traffic control systems, etc., please weigh in and tell me if you disagree. Those who didn't: I'm honestly not interested in your opinion.


Seems like a JVM developer who doesn't get out much. Two of the items on the list are Java editors. :D

How about the explosion of FLOSS?


Since I started coding professionally in 1987, I've had to re-tool myself four times:

* In 1989, to learn to write GUI apps on Windows.

* In 1998, to learn to write web applications.

* In 2008, to learn to write mobile applications.

* In 2014, to learn to write cloud applications.

In the next year, I'll start retooling to write AI applications. I think there's a remarkable rhythm to innovation, and it seems to be getting faster, not slower.


With a tip of the hat to Alan Kay, it's not that great leaps haven't happened since 1996 (or 1980). It's that those things were not part of the story until they were subsumed by the pop culture:

https://news.ycombinator.com/item?id=4956430


I'd consider Clojure a good iterative development of Lisp(s) — for me, this is when using Lisp became really practical for large-scale app development, both for server-side code and client-side code.

As a specific example of a huge advancement, the ability to share most of your data model code between your servers and client browsers is fantastic.
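
A minimal sketch of that idea, in TypeScript rather than Clojure (the file name and functions are just illustrative): one data-model module with validation logic that both the Node server and the browser bundle import, so the rules live in exactly one place.

    // shared/user.ts: a platform-neutral data-model module. Because it has no
    // Node- or browser-specific dependencies, the same code can run on the
    // server (guarding an API endpoint) and in the browser (instant form
    // feedback) without being written twice.
    export interface User {
      name: string;
      email: string;
    }

    export function validateUser(u: User): string[] {
      const errors: string[] = [];
      if (u.name.trim().length === 0) errors.push("name must not be empty");
      if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(u.email)) errors.push("email looks invalid");
      return errors;
    }

The server calls validateUser before persisting, the client calls the same function before submitting; this is roughly what .cljc files give you in the Clojure/ClojureScript world.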


It was probably actually _more_ common to share data model code between a client and server before the web, because the clients wouldn't have been JS running in a browser but more likely would have been written in the same language as the server.


Not sure if you're speaking from experience here, but the systems I worked with (IBM S/370 with VM/SP and 3270 terminals, various UNIX systems with VT100 terminals, UNIX systems with X11 terminals) didn't run much on the client side. Well, unless you consider the "rich" 3270 experience to be "client-side computing". I guess technically an X terminal does run code, too.

Client-side computing in my world started happening with Novell Netware systems (IPX). Not a big fan of that period, to be honest.


This is absolutely true, and the model was also usually one of fatter clients and thinner servers. In some cases the server was just a database.


Rust and GPU programming are notably missing.


The author is a researcher with experience in programming languages, but curiously omits LLVM from his list of post-1996 advancements.

LLVM alone is responsible for a Copernican revolution in compilers and program analysis; just the last decade has seen a proliferation of novel PL and analysis research as a direct knock-on effect of LLVM’s success.


> LLVM alone is responsible for a Copernican revolution in compilers and program analysis

This is hyperbole


It was a reference to Kant’s so-called Copernican Revolution in Philosophy.

Philosophy existed before Kant just like program analysis existed before LLVM; I would argue that the state of the field is both fundamentally different and significantly more accessible than it was pre-LLVM.


"Cambrian explosion" any better? :-)


I kinda have a bias that wants to agree with the author but _almost the entirety of quantum computing_ has been developed since 1996 - including the theory and building the things - which is such a ridiculously humongous exception that there’s little point taking this article seriously.


> No technology has ever been permanent. We’ve just lost the will to improve.

Except the wheel, fire, clothes, writing... Some things appear and never disappear, and the longer they have been around, the smaller their chance of disappearing.


Nitpick on the start - Haskell really got going only with Haskell '98 afaik.



The one big progression is that application development is becoming more democratized. And maybe this time the "no-code" / "low-code" meta-apps will stick...


Naw. Recently tried some again. Not gonna name names. I was so immediately blocked by lack of flexibility I just went back to code before going deeper into the sunk cost fallacy. It seems like these no/lowcode things need to approach the "problem" differently. I don't have the solution, I just know theirs isn't working for me.


>But maybe I’m imaging things.

You're not. You just don't know enough.


At the end of 1996 we had ASP (Active Server Pages), VB4, and Delphi 2. Those took a bit more time to mature, but I agree with what he says in terms of the desktop.


Strangely fails to mention Rust, Julia, Octave, Raku/Perl 6.

Maybe what he is really trying to say is that he is disappointed in the few new paradigms that are coming out.


> Haskell in 1996

The good thing is Haskell keeps on changing.

But yes on the whole the author is right and we should all feel embarrassed and indignant.


The author doesn't appear to care about the impact of technology, merely its existence.


Right. Computer architecture hasn't evolved since von Neumann. Operating systems - since Ken Thompson. Programming languages actually devolved from Lisp and C to JavaScript. The only fundamental improvement I can see is proliferation of GPUs, but it's still only marginal.


Since 1996 we've gotten deep learning, the iPhone, Google (search, maps, translate, youtube), bitcoin, landing rockets for re-use, social networks, and the ability for the whole world to switch to remote work without advance notice (and yes those are all software innovations).


Also the idea/protocol of BitTorrent.


Only if you don’t count several major technology families (.NET, Swift etc)


To make it worse, unbelievable software bloat makes most advances in hardware invisible to the end user.

And then we got an ugly business model: surveillance capitalism. Users expect everything to be free and happily (or unknowingly) pay with their privacy and with being open to manipulation.

Monopolies instead of interoperability and federation.

Computer science can't be proud of the current state of affairs.


At the end of the day though, consumers prefer free. Even if they say they want privacy preserving products, they really don't (otherwise we'd all be using Linux and Fairphones). I'd also be careful about the last statement: Computer science is much broader than internet advertising companies.


> Computer science is much broader than internet advertising companies.

Nobody would doubt that. But as a computer scientist I would have hoped that our discipline could have paved the way for something better to mankind than the current reality.


iOS and Android seem like huge omissions from that list.


Unix with lipstick?


That's a tremendous understatement. They're both incredibly huge projects with major advancements. It's not only hardware that helped create modern cellphones, the whole ecosystem and perf improvements are game changers.


Perhaps you've understated Unix as well.


And we're still using IPv4.


Last part from the article: "But maybe I’m imaging things. Maybe the reason progress stopped in 1996 is that we invented everything. Maybe there are no more radical breakthroughs possible, and all that’s left is to tinker around the edges. This is as good as it gets: a 50 year old OS, 30 year old text editors, and 25 year old languages. Bullshit. No technology has ever been permanent. We’ve just lost the will to improve."

Yeah, that's obviously bullshit. But we also didn't lose the will to improve. The leap from where we were 20 years ago to where we are now isn't mind-boggling. I still remember my Turbo Pascal days, my Delphi days. Sure, there were things to improve... but overall that experience was sufficient. I wouldn't really have trouble implementing any of the projects I worked on in the past 10 years with Delphi, or even Turbo Pascal. It would sure suck and take longer, but it wouldn't be a dealbreaker... That is pretty much the definition of non-revolutionary progress.

The problem is that the next step of software engineering is incredibly difficult to achieve. It's like saying "Uh, math/physics didn't change a whole lot since 1900." Well, they did, but very incrementally. There is nothing revolutionary about it. Einstein would still find himself right at home today. That doesn't mean progress stalled per se.

It means that to get "to the next level" we need a massive breakthrough, a herculean effort. The problem is also that nobody really knows what that will be... For me, the next step of software engineering is to specify only high-level designs and have systems like PlusCal in place that automatically verify the design, plus another AI system that codes the low-level implementation. We would potentially have a "cloud compiler" that continuously recompiles high-level specs and improves over time with us doing absolutely nothing, i.e. "software development as a service": you specify what you want to build, and AI builds it for you, profiting from continuous worldwide updates to this AI engine.


While everyone can have an opinion, not every opinion is valuable.


Because we're waiting for quantum computers and truly parallel programming languages. Until then, what we do in the meantime is "get rich and remix everything that is old on any new platform".



