Taking a very narrow view of software (i.e., looking only at compiler-related technologies), I can easily list several advancements that the author neglects:
* Profile-guided and link-time optimization weren't really feasible until circa 2010.
* Proper multithreaded memory models in programming languages didn't exist until Java 5, from which the C/C++11 memory model derives.
* Compiler error messages have improved a lot since the release of Clang around 2011.
* Generators, lambdas, and async/await have been integrated into many major programming languages.
* Move semantics, as embodied by C++11 and Rust.
* OpenMP came out in 1997.
* Automatically-tuned libraries, e.g., ATLAS and FFTW are also late 90s in origin.
* Superoptimization for peephole optimizers (Aiken's paper is 2006; souper is 2013).
* Heterogeneous programming, e.g., CUDA.
* Undefined behavior checkers, such as IOC and ASan. Even Valgrind only dates to 2000!
The only things that remain are those few which appeared seemingly out of nowhere and rose to prominence over a short period of time.
Thanks for doing it.
This is an "Oh, the cloud is just timesharing" take.
Except that Linux containers are a huge regression on what BSD jails gave us 20 years ago. They’re catching up for sure, but it’s still just badly reinventing a wheel that already exists, for philosophical, political, or selfish reasons.
Had that in the 90s:
The idea dates back even earlier, to a product called the Segmentor.
The Superoptimizer comes from a paper in the 1980s, and what it discovered was promptly integrated into several compilers.
So looking at the number of projects hosted on the internet (e.g. GitHub, SourceForge, etc.), we would likely see a different story.
Async/await matters. But compared to inventions like threading, filesystems, java's write-once run anywhere, HTTP/HTML and the invention of the URL? I agree with the author. We've lost our mojo.
Computing is pretty unique in that we have near-complete freedom to define what a program even is. A sorting method implemented in haskell is a different sort of expression than the same idea implemented in C. The abstractions we all take for granted - processes, applications, RAM & disks, databases, etc - all this stuff has been invented. None of these ideas are inherent to computing. And yet apparently we only know how to make three kinds of software today: Web apps, electron apps and mobile apps.
Here are some harebrained ideas that are only a good implementation away from working:
- An HTML/React-inspired UI library that works on all platforms, so we can do Electron without wasting 99% of our CPU cycles.
- Opaque binary files replaced with shared objects (eg Smalltalk, Twizzler). Or files replaced with CRDT objects, which could be extended to support collaboratively editable global state.
- Probabilistic computing, merging ML and classical programming languages. (Eg if() statements in an otherwise normal program which embed ML models)
- SQL + realtime computed views (eg materialize). Or caching, but using an event stream for proactively updating cached contents. DB indexes as separate processes outside of a database, using an event stream to keep up to date. (Potentially with MVCC to support transactional reads from the index & DB.)
- Desktop apps that can be run without needing to be installed. (Like websites, but with native code.). And that can run without needing to trust the developer. (Using sandboxing, like web apps and phone apps).
- Git but for data. Git but realtime. Git, except with a good UI. Git for non-developers.
- Separate out the model and view layers in software. Then have the OS / network handle the models. (Eg SOLID.)
- An OS that can transparently move apps between my devices. (Like erlang but for desktop / phone / web applications)
- Docker's features on top of statically linked executables.
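The "probabilistic computing" bullet is the one I find easiest to picture: an if() whose condition is a model score rather than a hand-written predicate. A toy sketch in Python — everything here, `SpamModel` included, is invented for illustration; a real version would load a trained classifier:

```python
# Toy sketch of a "probabilistic if": a branch whose condition comes
# from an ML model embedded in an otherwise ordinary program.
class SpamModel:
    """A fake 'trained model' that scores text between 0 and 1."""
    def predict_proba(self, text: str) -> float:
        # A real model would be loaded from disk; this keyword
        # heuristic just makes the sketch runnable.
        spammy = sum(w in text.lower() for w in ("free", "winner", "click"))
        return min(1.0, spammy / 3)

def handle_message(text: str, model: SpamModel, threshold: float = 0.5) -> str:
    # The hypothetical language feature would let you write
    # `if model(text):` directly; here it's an explicit call.
    if model.predict_proba(text) >= threshold:
        return "quarantine"
    return "inbox"

model = SpamModel()
print(handle_message("FREE prize, click now, winner!", model))  # quarantine
print(handle_message("Lunch at noon?", model))                  # inbox
```

The interesting design question is what the language does with the uncertainty — thresholding throws it away, which is exactly where today's ad-hoc ML integrations stop.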
Aren't good error messages very hard to implement and also a _major_ productivity increase? They're also a boon to beginners.
Both are very important as the most powerful resource we have is people.
This is all important work, but none of it makes you think of computing in a new way. Not like the web did, or Haskell, or the idea of a compiler & high level language, a preemptive kernel, or TCP/IP. These were all invented by people, and a lot of the people who invented them are still around.
There are plenty of sibling ideas in the idea space. But for some reason computing cooled down and tectonic shifts of that scale don’t seem to happen anywhere near as frequently now. How long has it been since an interesting new OS appeared on the scene? Feels like a very long time. And even Haiku uses POSIX internally anyway.
a) new developers in general (totally new to programming)
b) developers new to this specific programming language
I'd be curious what the percentages are, but my personal hunch is that for any specific language there are more in category b) than a).
And for category b), those are definitely helped by error messages.
There are React Native forks for Windows, MacOS and Linux. I have no idea whether any of them is "good implementation" though.
> SQL + realtime computed views (eg materialize)
ClickHouse (OLAP DB) has materialized views (but only for inserts). Also Oracle and (I guess!) Materialize DB should have it too.
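For what it's worth, the core of the incremental-view idea can be sketched in a few lines of Python (all names invented for illustration; real systems like Materialize also handle joins, ordering, and exactly-once delivery):

```python
# Minimal sketch of an incrementally-maintained view: instead of
# re-running "SELECT country, SUM(amount) ... GROUP BY country" on
# every read, a consumer applies each change event to a cached result.
from collections import defaultdict

class SalesByCountryView:
    def __init__(self):
        self.totals = defaultdict(float)  # the "materialized" result

    def apply(self, event: dict) -> None:
        # One event updates the view in O(1); no full table rescan.
        if event["op"] == "insert":
            self.totals[event["country"]] += event["amount"]
        elif event["op"] == "delete":
            self.totals[event["country"]] -= event["amount"]

view = SalesByCountryView()
events = [
    {"op": "insert", "country": "DE", "amount": 10.0},
    {"op": "insert", "country": "US", "amount": 5.0},
    {"op": "insert", "country": "DE", "amount": 2.5},
    {"op": "delete", "country": "US", "amount": 5.0},
]
for e in events:
    view.apply(e)
print(dict(view.totals))  # {'DE': 12.5, 'US': 0.0}
```

The delete case is exactly what insert-only materialized views (like ClickHouse's) punt on, and it's the hard part once aggregates aren't simply invertible.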
> Desktop apps that can be run without needing to be installed. (Like websites, but with native code.)
AppImage (and maybe Snap and Flatpak) is like this. Also technically, with Nix you can just run something like
    nix-shell -p chromium --command chromium
> Git but for data
https://github.com/dolthub/dolt (again, never tried it yet, but would like in future)
Java didn't invent that; it's only a new iteration on an old idea.
I remember that when they announced Java in 1994, I thought about the UCSD p-code system I used in 1986 on a Z8000 Unix machine from Onyx (sp?).
More info at https://en.wikipedia.org/wiki/P-code_machine
Unfortunately this won't ever happen, at least not something production ready that will be usable on all 5 major platforms (Android, Windows, iOS, MacOS and I'm generously including Linux desktop here :-) ).
It's a very costly super long term investment and the incentives are not there as every platform (including desktop Linux!) is fighting against it.
React Native is the closest thing we have, and the development experience is awful, from what friends who use it tell me; plus I don't think it's well supported on desktops. I, for one, don't think I've ever seen a Windows React Native app, for example.
So the best we'll ever get is modified browser engines, ergo Electron.
Flutter is also the kind of project Google seems worst at: it will require a lot of tedious work going forward, to support all the platforms. Thankless work that doesn't foster promotions.
I'm not counting Go, I consider it a community project now.
I was once amazed to learn that ClearCase was designed with the objective to be sold to lawyers, not developers.
The reason we all suffer without this is not because we have failed to solve it. It's that Git itself is preventing any forward development in this space.
We had Mercurial 15 years ago, but Git is a massive usability dumpster fire that blocks all further progress (and GitHub continually pours gasoline on the fire so we can never escape).
I think this made it easier to firm up new language implementations. I feel like we started getting more of them because of this.
In terms of software itself - it's not about typing code or about text UIs vs graphic UIs.
It's about programming being a fiddly and tedious pastime with explicit handling of every possible scenario, because the machine has no contextual background knowledge and isn't intelligent enough to interpolate or intuit the required details for itself.
So you get modern software which is basically just a warehouse full of virtual forms glued together with if/then/else - or simple pattern matching if you're really lucky - managed by relatively simple automation pipelines. Plus a UI/API glued on top.
And underneath that is TinkerLand, full of lovely text-langs which are different-but-increasingly-similar and command line environments which are like little puzzles which are not really hard - in the sense of Physics PhD hard - but are hard enough to give people who enjoy tinkering a sense of satisfaction when they (more or less) solve one.
You can do a lot with this - like ordering goods and services, selling advertising, collecting creepy amounts of data about users, and keeping people entertained.
But it's basically just automated virtual bureaucracy. It has no independent goal seeking skills, it doesn't improvise, invent, or adapt, and it's ridiculously brittle and not particularly secure.
Visual programming won't change this. Nor will ML (although it may contribute.) It needs a complete change of metaphor, and there doesn't seem to be much evidence of the research required to make that happen.
Oh boy you hit the nail so hard on the head that my spine is still tingling.
In the past 15 years of my career writing business applications, websites, etc., I've always had this nagging doubt that I'm just being a very active part in building Moloch, adding to all the bureaucracy instead of reducing/simplifying it. Simply because wherever we built something to make things more efficient/easier, those gains were usually swallowed and overshadowed by the demand for more complexity that arises from the latent potential that was opened up.
"we've made taxes easy!"
"...could you also make tax evasion easy?"
"yes, but it would be more complicated"
"More complicated than before you were there?"
This is just a made-up example, but that's how it so often goes, and it really makes it feel like I should have become a hermit and written indie games while living on cup noodles instead.
Especially the "faux-value-add" quality of many software systems due to their puzzle like nature rings very true.
Perhaps software can't be anything more. But there is a lot that _can_ be done with virtual bureaucracy.
For example, 3D modelling tools do enhance the human capability to rapidly design complex forms. Text processing tools beat the typewriter hands down in several tasks. Etc.
You know in the movies where they have the futuristic user interfaces? Well we've been able to put graphics on the screen for decades, but "real" enterprise programming is still mostly pounding keys into text files and fussing over semicolons and curly braces. Visual interfaces are considered toys that "real" programmers don't need.
And yet there's a chasm of capability between programmers and users, and it's actually true that what programmers do isn't something most "normal people" will ever be capable of. But is that because programmers have some wildly special form of intelligence that most people don't have, or is it because our tools suck?
The divide between users and developers is as accepted and entrenched as it is archaic, not to mention a huge innovation and creativity bottleneck for the species, and by my estimation an embarrassing failure to innovate for our industry. Jonathan has been pushing back for decades. He's not grumpy or cynical, he's a relic from a time when the future of programming and "what programming even looks like" was still a wildly open question.
I wish more people thought like him.
It's also important that programming is a fundamentally tedious endeavor. Even with the most high-level imaginable PL, where there would be no machine details to think about, you need to exhaustively define every detail of your business goal, much further than you would ever think about it in regular work. I am a programmer and I personally enjoy this kind of deep dive, but I have been in many meetings where I was trying to extract those details from domain experts, and all the probing got increasingly frustrating for them. Even worse, you often hit points where you just can't get past some hand-waving, and now it's your job as a programmer to evaluate the trade-offs between different strict implementations of what is a fundamentally fuzzy requirement.
And getting out of the fantasy of the perfect PL, you also then have to work with a real computer with real limitations that someone needs to know how to work in while representing the problem domain (e.g. replacing real arithmetic with floating point arithmetic and dealing with the new problems that stem from that). You also have to spend large amounts of time researching what others have built and re-using it when and where it is possible, and working within the trade-offs those others have made - since there's no chance of writing every program from scratch, and there is no such thing as a general-purpose domain-specific library.
The accountant example is a really good one: most accountants likely do write their own Excel macros to suit particular customers or one-off situations.
People wire up electrical components when they hook up their TVs, computers, surround sound, etc.
People do their own plumbing when they hook up garden hoses or put on a new p-trap.
Mounting a TV bracket into studs or even hanging a picture is some slight carpentry.
These things work because there are standard connections and good design, but in many cases it took a long time to get to the current designs that are simple enough to be intuitive and come with any tools needed.
Standards and design have made these things possible. Even building a computer is similar. Many people's automation needs aren't so complex that they need to venture out of reasonable constraints.
You should at least consider the counter-hypothesis that C-style keyboard pounding is fundamentally more productive than visual interfaces.
This shouldn't be that surprising. Text is much more informationally dense than audiovisual multimedia. There's a reason why books are still the preferred medium for information transmission after thousands of years. Sci-fi style visual coding sure seems cool. But I highly doubt that it will ever be as productive as a skilled developer typing out variables, functions and classes.
Take this example(1) from the R subreddit about how to do matrix math, and the arcane R foo required to actually do something that is pretty simply explained in the OP's image(2) to anyone with zero background in R or programming at all. Now, imagine how much more productive the world would be if the computer could take an instruction in a readable form like the OP's image, rather than the R jargon code actually needed by the computer to do the math described in the image. People would be learning to write their own functions right alongside learning how to do math on paper. Instead, people today pay six figures and spend four years to learn how to turn the math they learned by hand in high school into something that can be run on a computer, same as it was 30 years ago.
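Part of the gap is pure notation. A toy Python sketch (not R, and `Mat` is invented for illustration) of how close code can get to the textbook definition when someone bothers to design for it:

```python
# A tiny matrix type where "A @ B" reads like the "AB" on paper.
class Mat:
    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # (AB)_ij = sum_k A_ik * B_kj, straight from the definition.
        return Mat([[sum(a * b for a, b in zip(row, col))
                     for col in zip(*other.rows)]
                    for row in self.rows])

A = Mat([[1, 2], [3, 4]])
B = Mat([[0, 1], [1, 0]])
print((A @ B).rows)  # [[2, 1], [4, 3]]
```

Whether notation like this or a fully "readable" instruction form is the right target is exactly the debate here, but the distance between `A @ B` and base R's `%*%` incantations is a notation choice, not a law of computing.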
I've worked in a company where programs were originally written with Max/MSP which is exactly that.
Everyone I know using patcher tools end up switching to real programming after some years, and being much more happy like that.
SQL came pretty close - I mean, it was specifically designed for non-developers. Guess who's mostly using it nowadays.
This should be a basic thing that IDEs help you do.
I think efficiency and accessibility are at odds with each other to some extent, and we've more neglected accessibility and flattening the learning curve, because we don't need it, we already know how to code. But custom software is basically a subscription service to the programmers that created it. I had a client that hired a guy who had invented his own programming language and built their whole system, then disappeared on not great terms. Boy were they in trouble.
But come to think of it, regex is a perfect example of something that could probably be interactively visualized and made more efficient. Let's ask google.
So I'm in vim or visual studio or whatever and I want to use this plugin and I... what? Google it in a browser and copy and paste regexes back and forth? This is so not 2021. The lack of a shared information model between tools/paradigms jumps out as a big shortcoming.
This has it backwards. Audiovisual multimedia have far higher raw bandwidth than text. I agree with your (implied) point, though, that text remains the preferred medium for technical use cases like programming for good reason: humans can't parse a firehose of audiovisual stimuli into precise mental constructs.
Text's typically linear structure and low information density are precisely why it has yet to be superseded, I think.
I think the parent is taking about information as defined by Shannon. Essentially referring to entropy.
By that definition text is far more dense than AV.
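The Shannon framing can be made concrete with a unigram character-entropy estimate (a crude upper-bound sketch in Python; real estimates condition on context and put English prose near 1 bit per character, still far below raw audio/video bitrates):

```python
# Back-of-envelope Shannon entropy of a text sample, in bits per
# character, using single-character frequencies. The point: nearly
# every bit of text is meaningful, versus perceptual padding in AV.
import math
from collections import Counter

def entropy_bits_per_char(text: str) -> float:
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(round(entropy_bits_per_char("the quick brown fox jumps over the lazy dog"), 2))
```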
I don't, because instead of introspectively looking at why the world didn't turn out exactly like he wanted it, he blames others by dismissing their achievements as stagnation.
Those futuristic looking user interfaces never caught on because while they look good in movies they suck for actual productivity, even for non-technical users.
And when it comes to "non-programmer programming", just look at the success of tools like Zapier. I see tons of non-programmers using them to great success.
I'm not confident it's going to change for the better and democratize what computers can truly do, simulate and model. People who've already paid the cost of memorizing all the ins and outs of complicated computer syntax don't really care how much of a hill they've had to climb anymore, now that they've learned enough to be familiar. Now you have this memorization lock-in, where you've spent years and perhaps thousands of dollars at university learning some dense programming language and start building your tooling around what you've learned, and in the process you force the next generation to learn that syntax if you are to hire them, and so on and so forth. That's why we've been typing ls since the 80s and we will still be typing ls in 50 years. I'm sure the authors of ls would find that horrifying.
The complexity of programming lies in coming up with abstract models for real-life problems, and in encoding those models in sufficient details for a machine to execute them. It also lies in knowing the details of how the machine functions to be able to achieve performance or other goals (e.g. avoid parallelism pitfalls). Until we have AGI, none of these problems are even in principle solvable with better tools to the point that someone who hasn't invested in learning how to program for a few years will be able to do anything that isn't trivial. The best possible visual PL can't change this one iota.
What could be doable is creating more DSLs for laypeople to learn, rather than trying to do very complex things through a GUI. One of the few areas where there has been tremendous success in having non-programmers actually do some programming is in stuff like Excel. That is a very purpose-built system, with clear feedback on the steps of computation, very limited in what you can actually program, but comprehensive enough for its intended domain, and is probably by far the most utilized programming environment, dwarfing something like vim by an order of magnitude.
To me, syntax is a huge problem. Sure, you learn it in school. Computer Science School. Not everyone is a computer scientist. Some people are political scientists, environmental scientists, mathematicians, statisticians, etc., all of whom have their own difficult four years of schooling, and simply don't have the time to learn this complicated syntax without trading something else off. You don't think it's a big deal since you've already paid that price of time. Well, not everyone can afford to pay that time, and it's a damn shame that in order to make full use of all a computer can do, you need to spend this time learning this sometimes remarkably clunky syntax before you can even begin to work on your real life problem. Imagine how much more technological and societal progress we would make if everyone could make computational models about their question at hand without having to spend years and thousands of dollars working with unrelated example data in undergrad, sometimes even using instructional languages that have no use outside of school. To me, that's the world The Jetsons live in.
We have that: e.g. Google Assistant. (If you don't mind needing an internet connection, and sending your voice to the cloud.)
> I shouldn't have to remember that ls is list directory contents.
Nobody's had to do that in 35+ years, except those of us who prefer that sort of thing. The rest click on some folder icon or something.
I mean, surely you use your own abbreviations when writing, no? How would you abbreviate "list directory contents"?
For instance I have `alias l='ls -lha'` - and if it was instead list-directory-contents I would have `alias l='list-directory-contents -lha'`
e.g. just look how hard PowerShell sucks with those long verbose command names; it's unusable
The world is chock full of shit software that people have to use eight hours a day and absolutely hate. I mean, seriously, hate to the point of shaking and tears at the idiotic hoops they have to jump through all day long, every day, for years. Genuine mental health impact. They come home miserable and snap at their kids. Maybe he's channeling their frustration. And maybe we deserve it.
A different problem than better programming paradigms.
Or Excel for that matter.
I think there's a very specific mindset (or "form of intelligence" per your words) that thrives working on a complex sandbox with very rigid rules and low tolerance for logical inconsistency, while most people simply have no tolerance for such inflexibility.
And I think there's no tooling that can change that; and if there ever is, there goes programming as an industry, because that tooling will be able to interpret loosely defined requirements, ask the right questions, and find a logical solution.
I think it's always going to take someone with a systems-thinking mind to engineer a system that's any good, but lots of people have that, people who created their own business and thought through all the steps in their system and engineered the whole thing in their head. That's systems engineering, just not with code. But there's so much knowledge that's required to go from zero to custom software deployment that I'd characterize as non-essential complexity, today.
Yeah I think programmers tend to make tools for programmers, for themselves and the way they think, which is totally natural and to be expected. But man these poor users.
It's not so bad when they're using mainstream programs with millions of dollars in dev budgets behind them, but when you get further down the long-tail it just gets so awful and users are so helpless. The software an insurance agent uses, the charting app a therapist uses, the in-house warehouse management system people spend their entire day using, where if they could just make this field auto-focus or a million tiny other things, they would save so much time and frustration.
Generally speaking, users edit data and programmers edit schema, and never the two shall cross. Users execute code but only programmers change it. What hope do we have of democratizing programming when these roles are about as foundational to computers as it gets? What would we have to unlearn to change this?
Entry level programming concepts are now presented to kids in literal toys.
“Real” enterprise programming has a heavy bit of tooling available now that wasn’t there in ‘96 with testing frameworks of all sizes, intellisense style code completion tools...
I predict the next major functionality to arrive will be pre-trained machine learning that can be dropped into code as simply as boolean statements are today.
Or wait that’s a dream that isn’t here yet so I should be cynical and lash out at my kids’ LEGO Boost set for its antiquated battery tech or something.
Part of the reason I am so bullish on visual programming is that it lets us skip the excessive naming-of-things that is part of text programming. I would rather copy-paste a pile of colors and shapes that describes an algorithm and connect it to the right inputs and outputs. I think nameless shapes would be more immediately recognizable than identifiers from all the various naming schemes people use in their code.
Here's an art analogy. You make it easier to create art by making an algorithm that takes fuzzy beginner line art that doesn't follow proportions and has all sorts of problems, and turns it into professional line art via ML. People will still struggle with the far more challenging skill of choosing what to draw, because it innately requires knowledge of human anatomy, perspective and many other skills beyond knowing how to use a pencil. Programming languages are equivalent to a pencil, tablet or any other input method in this analogy.
Every time I used a "No-Code" solution it got me 95% where I wanted to be. The remaining 5% took twice as long as the first 95% because now I had to "go behind the scenes" and figure out the custom API exposed by whatever tool I was using. And then figure out a workaround that would inevitably break with the next version of the tool.
And even with the No-Code, it's still being able to decompose a problem into smaller sub-problems, find the similarities and make sure they are well defined. I think there are a lot more folks with these abilities but you'll find them in math and engineering departments mostly.
But again, NeXT interface builder predates 1996.
I recommend everyone watch Bret Victor's classic "The Future of Programming" https://www.youtube.com/watch?v=8pTEmbeENF4
Yes, we've had a trillion dollars invested in "How to run database servers at scale". And, we've had some incremental improvements to the C++ish language ecosystem. We've effectively replaced Perl with Python. That's nice. Deep Learning has been a major invention. Probably the most revolutionary I can think of in the past couple decades.
But, what do I do in Visual Studio 2019 that is fundamentally different than what I was doing in Borland Turbo Pascal back on my old 286? C++20 is more powerful. Intellisense is nice. Edit-and-continue worked 20 years ago and most programmers still don't use it. If you are super awesome, you might use a reversible debugger. That's still fringe science these days.
There is glacial momentum in the programming community. A lot of "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different", and so we don't have code-as-database. A lot of "I grew up learning how to parse curly bracket blocks. So, I can't accept anything different". So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm. A lot of "GDB is terrible, don't even try", so many programmers are in a reverse Stockholm syndrome where they have convinced themselves debuggers are unnecessary and debugging is just fundamentally slow and painful. So, we don't have in-process information flow visualization. And, so on.
We actually have tried several times to build programming languages that break out of the textual programming style that we use. Visual programming languages exist, and there's a massive list of them on Wikipedia. However, they don't appear to actually be superior to regular old textual programming languages.
Here's my theory (train of thought here); the key traits of a successful mainstream programming solution are:
1) A simple conceptual model. Syntax errors are a barrier but a small one, and one that fades with just a little bit of practice. You can also overlay a graphical environment on a text-based language fairly easily. The real barrier, IMO, is concepts. Even today's "more accessible" languages require you to learn not only variables and statements and procedures, but functions with arguments and return values, call stacks, objects and classes and arrays (oh my!). And that's just to get in the door. To be productive you then have to learn APIs, and tooling, and frameworks, and patterns, etc. Excel has variables and functions (sort of), but that's all you really need to get going. Bash builds on the basic concepts of files, and text piping from one thing to another.
2) Ease of interfacing with things people care about: making GUIs, making HTTP requests, working with files, etc. Regular people's programs aren't about domain modeling or doing complex computations. They're mostly about hooking up IO between different things, sending commands to a person's digital life. Bash and Visual Basic were fantastic at this. It's trickier today because most of what people care about exists in the form of services that have (or lack) web APIs, but it's not insurmountable.
I think iOS Shortcuts is actually an extremely compelling low-key exploration of this space. They're off to a slow start for a number of reasons, but I think they're on exactly the right track.
A lot of the time I spent doing the Advent of Code last month was wishing I could just highlight a chunk of text and tell the computer "this is an X"... instead of typing out all the syntactic sugar.
Now, there is nothing that this approach could do that you can't do typing things out... except for the impedance mismatch between all that text, and my brain wanting to get something done. If you look at it terms of expressiveness, there is nothing to gain... but if you consider speed, ease of use, and avoiding errors, there might be an order of magnitude improvement in productivity.
Yet... in the end, it could always translate the visual markup back to syntactic sugar.
I also agree with the rough time delineation. Starting with the dotcom bubble, the industry was flooded with people. So we should have seen amazing progress in every direction.
Most of those programmers were non-geeks interested in making an easy buck, rather than geeks into computers and happily shocked that we could make a living at it. And many of the desirable jobs turned out to be making people click on things to drive revenue.
Who can blame any of those people? They were just chasing the incentives presented to them.
Or use a debugger at all. Or write their code in a way that's easy to debug.
> "grep and loose, flat ASCII files were good enough for my grandpappy. I can't accept anything different"
Just try sneaking a Unicode character into a non-trivial project somewhere.
> So, so many languages try to look like C and are mostly flavors of the same procedural-OO-bit-of-functional paradigm
C did something right. It's still readable and simple enough it doesn't take too long to learn (memory management is the hardest thing about it).
> This is as good as it gets: a 50 year old OS, 30 year old text editors, and 25 year old languages. Bullshit. No technology has ever been permanent.
Many successful software technologies are Ships of Theseus, where the collection of things with version N bears the same name as the version 1 product but nearly everything works differently or has been rewritten. Consider Modern C++ vs the original language.
Also the author just kind of ignores the whole Cloud Computing industry and virtualization.
Cloud computing was the first sort of computing widely available, from the 1950s: central systems accessed with terminals. A huge business up until the 1980s, at least.
Virtualisation dates from the 1980s, at the latest. I had a professor in the early 1990s who had worked on it for Amdahl.
The only thing that would shock a traveler from 70 years ago is the awful service and in flight entertainment system.
...and the cheapness
1950s - computers are so expensive that I can't even get one to myself and have to share it with a bunch of other people
now - I'm using so many computers that I can't even handle managing them all myself and instead pay Amazon to do it for me
Edit: Just compare how much easier it is to make something substantial today than it would've been 15 years ago. You could make a fairly sophisticated SaaS today from scratch using Django, Postgres, Redis, Stripe, Mailgun, and AWS in about a month. That's progress, even if it is boring and obvious.
This statement is quite simply wrong. Unless I'm misunderstanding you, you seem to be ignoring decades worth of programming language research and the years of work put into designing individual languages (whether by BDFLs, communities, or ANSI committees).
I agree with you that perhaps jet engines weren't the perfect metaphor. However, I think that's because jet engines have much more stringent external (physical) constraints than software or programming languages. The latter do allow for a certain amount of "taste-judgment": it doesn't matter too much which way you do it, but some people at some times prefer one way over another.
So a more apt metaphor might be architecture: We've known how to build houses for millenia, but we're still improving our materials and techniques, while tradition and fashion continue to play a big role in what they end up looking like.
for val in sequence:
for (val in sequence)
and this in bash:
for val in sequence; do echo "$val"; done
I mean, there was no paper that I can find that found curly braces to be especially drawing to the human eye, or that it's particularly intuitive to write "do" and "done" or not. For loops look like no linguistic sentence structure that I recognize; they are pretty much their own thing structurally, imo. These are all arbitrary decisions made by people, because it just happened to make sense at the time for any number of reasons. The R core team didn't run psychological experiments to empirically determine the most intuitive for-loop structure when they drafted their language; that data has never been collected. They just followed what the S team had already done, and that's that. Arbitrary.
- Jupyter notebooks, for teaching machine learning. I use these for teaching.
- OpenCL and other libraries for running scientific simulations on GPUs, gpflow for training on GPUs
- Keras and PyTorch (libraries for simple training of deep learning models). More than half of machine learning research exists on top of these libraries
Let’s not even get into the myriad recent discoveries in ML and libraries for them.
Parquet file format and similar formats for mass data storage. Spark and Hadoop for massive parallel computation. Hive and Athena further build upon these innovations. A good portion of distributed computing literature is built on these.
Eventually consistent databases and NOSQL. There’s so much here, hard to list everything.
ElasticSearch and Lucene and other such tools for text search.
Then there is all the low level systems research: new file systems like BTRFS and ZFS. wire guard is something that’s just been built that seems foundational.
I am running out of words, but let's conclude by saying the premise of this article is laughable.
>- Jupyter notebooks, for teaching machine learning. I use these for teaching. - OpenCL and other libraries for running scientific simulations on GPUs, gpflow for training on GPUs - Keras and PyTorch (libraries for simple training of deep learning models). More than half of machine learning research exists on top of these libraries
Would you share your usual workflow and the problems you face in that space with these tools?
I look up boilerplate on Stack Overflow every week to make matplotlib figures look better.
CSVs generated in excel cause errors when reading in python without a special incantation.
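For the curious, the "special incantation" is usually about Excel's UTF-8 byte-order mark and its locale-dependent delimiter. A stdlib-only sketch of the workaround (the inline bytes below stand in for a real exported file):

```python
import csv
import io

# Excel typically prepends a BOM (\xef\xbb\xbf) and, in many locales,
# uses ';' as the field separator. Naive readers choke on both.
data = b'\xef\xbb\xbfname;value\r\nwidget;42\r\n'

# 'utf-8-sig' silently strips the BOM; csv.reader handles the delimiter.
with io.TextIOWrapper(io.BytesIO(data), encoding='utf-8-sig', newline='') as f:
    rows = list(csv.reader(f, delimiter=';'))

print(rows)  # [['name', 'value'], ['widget', '42']]
```

With pandas the equivalent incantation is passing `encoding='utf-8-sig'` and `sep=';'` to `read_csv`.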
It’s not super easy to set up multi machine training. AWS GPUs are extremely expensive.
sklearn models, once saved, are not easy to rework into a different format to be used with other languages.
I wish Python could do compile-time checking instead of running for 30 minutes to then report a type mismatch. mypy is still not very good and is in beta.
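As a small illustration of what annotations buy you (the function and numbers here are made up): a checker like mypy can flag the commented-out call below before the 30-minute run ever starts, while plain Python only fails at runtime, deep inside `sum`.

```python
from typing import List

def total_minutes(runs: List[int]) -> int:
    """Sum per-run durations in minutes."""
    return sum(runs)

print(total_minutes([10, 20]))  # 30
# total_minutes(["10", "20"])
# mypy: List item 0 has incompatible type "str"; expected "int"
```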
- No-setup Jupyter environments with the most popular libraries pre-installed
- Real-time collaboration on notebooks: see cursors and changes, etc.
- Multiple versions of your notebooks
- Long-running notebook scheduling with output that survives closed tabs and network disruptions. You can watch the notebook running live even on your phone without opening the JupyterLab interface.
- Automatic experiment tracking: automatically detects your models, parameters, and metrics and saves them without you remembering to do so or polluting your notebook with tracking code
- Easily deploy your model and get a "REST endpoint" so data scientists don't tap on anyone's shoulder to deploy their model, and developers don't need to worry about ML dependencies to use the models
- Build Docker images for your model and push them to a registry to use wherever you want
- Monitor your models' performance on a live dashboard
- Publish notebooks as AppBooks: automatically parametrize a notebook to enable clients to interact with it without exporting PDFs or having to build an application or mutate the notebook. This is very useful when you want to expose some parameters that are very domain specific to a domain expert.
Much more on our roadmap. We're only focusing on actual problems we have faced serving our clients, and problems we are facing now. We are always on the lookout for problems faced by people who work in the field. What do you think?
He does cover this.
Again, I don’t think you are necessarily wrong, but the author does cover Jupyter and ML.
In any case, software notebooks definitely existed before Jupyter like you say (I’m not familiar with Wolf but Mathematica has a similar idea), but I don’t believe they date back to 1996. I might be mistaken. I don’t see the author mention them in the text.
Also, I don’t see anything particularly new in how ML computing is organized.
We’ve had vectorization and massively parallel processors since the 80s.
All that’s new is that those techniques are being used on desktop machines now because of Moore’s law.
I’m not saying there are no new discoveries in ML, just that the engineering isn’t new.
What seems to have happened since the 90s is we have a lot of ergonomic improvements, which are still valuable. Want to run something in a cloud in 1995? Yes, you can do it, but it's vastly simpler and cheaper to do in 2020.
In the end the major things that have changed are that computing power got much cheaper, and computing has become part of a vast number of businesses. That's why a fashion retailer can now click some buttons and make a website. Or why there's an AI that can beat everyone at StarCraft II.
while using the following as incremental:
> IntelliJ, Eclipse, ASP, Spring, Rails, Scala, AWS, Clojure, Heroku, V8, Go, React, Docker, Kubernetes, Wasm
What he doesn't understand is that everything is based on everything else. Many languages took inspiration from another language, or from some other technology. https://github.com/stereobooster/programming-languages-genea...
There is no such thing as fully independent innovation. Not just in tech, but every single piece of innovation in human history.
Seems just as "weak" as the list they give after 1966:
Basically, after LISP and Algol everything was just a small incremental change over them, some might even say a step back.
From the way they phrase "progress", it seems the lineage would look like:
Machine Code -> Assembly -> Fortran -> Lisp -> Algol
That takes us back to 1958 then, not 1996. I could agree more if that was the argument: that since Lisp and Algol, nothing really new in programming languages has come out. And if making that argument, the hypothesis around why would be very different. Like, it seems it might be more that software just quickly got bootstrapped and then we reached a sort of maximum with it.
I was thinking about this the other day. When Dijkstra was coming up with his graph algorithm, how the heck did he represent the graph as a data structure?
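For what it's worth, the standard modern representation is an adjacency list plus a priority queue; a minimal sketch (the graph below is a made-up example, and Dijkstra's own 1959 setup certainly looked different):

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start; graph maps node -> {neighbor: weight}."""
    dist = {start: 0}
    pq = [(0, start)]  # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry, already found a shorter path
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {'a': {'b': 1, 'c': 4}, 'b': {'c': 2}, 'c': {}}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```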
"Improving Your Android App to Prevent Security Vulnerabilities"
I guess Microsoft and the Lean theorem prover/programming language probably counts? I could see that becoming as big a thing as Mathematica.
But then they published their final report, did other stuff for a while, announced they'd brought together a great team with long-term funding... and they've been doing... something? It's been nine years now and I haven't heard of any more progress on this front.
In 1996, I'm not sure everyone was carrying around a super computer with a multi-day battery life that could shoot slow-motion 4k video and understand your speech. And yet, today we don't even think twice about that. Clearly something has happened in the past 25 years.
Yeah, that's obviously bullshit. But we also didn't lose the will to improve. The leap from where we were 20 years ago to where we are now isn't mind-boggling. I still remember my Turbo Pascal days, my Delphi days. Sure, there were things to improve... but overall the experience was sufficient. I wouldn't really have trouble implementing any of the projects I worked on in the past 10 years with Delphi, or even Turbo Pascal. It would sure suck, and take longer, but it wouldn't be a dealbreaker...
This is the raw definition of a non-revolutionary progress.
The problem is that the next step of software engineering is incredibly difficult to achieve. It's like saying "Uh math/physics didn't change a whole lot since 1900". Well it did, but very incrementally. There is nothing revolutionary about it. Einstein would still find himself right at home today. That doesn't mean progress was stalling per se.
It means that to get "to the next level" we need a massive breakthrough, a herculean effort. Problem also is that nobody really knows what that will be... For me, the next step of software engineering is to only specify high-level designs and have systems like Pluscal, etc. in place that automatically verify the design and another AI system that code the low-level implementation. We would potentially have a "cloud compiler" that continuously recompiles high level specs and improves over time with us doing absolutely nothing. I.e. "software development as a service". You specify what you want to build, and AI builds it for you, profiting from continuous updates to this AI engine world-wide.
Is it just the list of software tools they use?
There has been a plethora of brand new languages since 1996. Most of them are due to LLVM, which is impressive on its own. Not to mention Clang.
Search used to be considered an impossible problem (just using grep doesn't count), but it is now quite easy to solve thanks to ElasticSearch and Solr.
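The core trick behind those engines, and the reason grep doesn't count, is the inverted index; a toy sketch (real engines like Lucene add analyzers, relevance scoring, and compressed postings lists on top):

```python
from collections import defaultdict

# Tiny made-up corpus: doc id -> text.
docs = {1: "the quick brown fox", 2: "the lazy dog", 3: "quick dog"}

# Invert it: term -> set of doc ids containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(*terms):
    """AND query: intersect posting sets instead of scanning every document."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(search("quick", "dog")))  # [3]
```

Queries touch only the postings for the query terms, so cost scales with result size rather than corpus size, which is what makes web-scale search tractable.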
What about all those distributed databases we all take for granted? Open-source DBs used to suck when it came to horizontal scalability.
As if the above were not enough, what about virtualization and containerization technology? Are they not impressive enough for OP? They single-handedly spawned multi-billion dollar businesses and made deployment significantly easier.
Need I say more?
C++ as database and AST representation was used by Lucid on Energize C++ and IBM Visual Age for C++ version 4. Both products died, because they were too resource hungry for the hardware of the day.
Most enterprise shops don't see any value in doing otherwise.
He makes logical leaps that aren't justified.
It is a very well designed piece of engineering and an extremely useful and tasteful set of tools, but it’s no paradigm shift.
I’m tired of hacking documents into apps. I want the web to be split into Markdown-like web (i.e. pure content and some meta) and actual web apps that work like native apps.
I want Accept: application/web
I had a recent revelation when watching a video someone posted of a Symbolics Lisp machine, and its legendary dev environment. That being - it's basically just a modern browser devtools, but for the whole OS. Turtles all the way down.
But somehow, despite having the bird in our hand, we as an industry lost them, and regressed to having vastly-inferior tools to those Symbolics devtools, and only gradually rebuilt them over the course of multiple decades.
As a harsh rebuttal to the OP article; I think the number one thing that's "whooshing" over that guy's head is that if someone develops a proof-of-concept for something, it doesn't invalidate the far-more-significant trick of making it widespread. Often it's the latter that's far more difficult, because it requires eliminating compromises that weren't acceptable to people before.
The strange thing about the software industry these days is that quite a few "truisms" about software development have been completely invalidated. Ironclad laws of how the world works have slowly been rendered completely false, and it's happened not through some sudden eureka development but through some slow, sneaky, iterative improvement. The same sort of thing i.e. that made solar cheaper than coal, or made SSDs cheaper than HDDs (I don't know if we've actually crossed that line yet, but it seems pretty inevitable that once the investment impetus behind HDDs collapses, we'll eventually get there). Flatscreens versus CRTs was another great example.
Today's approach for building web apps is basically code up a fat client application which downloads to the browser, runs, and talks to some remote web API (i.e. REST) using JSON. This is much closer to native apps and very far from the document approach. The app and the pure content are cleanly split between the initial app bundle and the web APIs.
We’re at the better horses stage of the cycle. What’s the next car?
They know they are blind to potentially better products, but they no longer have the time to gather the experience and confidence in the new products.
You can still doubt that quantum computers are physically realizable. Perhaps the number of qubits needed for error correction will turn out to increase fundamentally faster than the number of useful qubits, or perhaps there is some other fundamental limit that we will hit.
But the maths is clear: if it is at all possible to create a quantum computer, it will be faster for certain specific problems than a classical computer (in particular, it will be faster than a classical computer at simulating quantum phenomena).
Still, it is strongly suspected that BQP != P. This was especially reinforced by the MIP* = RE result, as far as I understand from Scott Aaronson.
However, there is a general crappiness around software innovation (or the lack thereof) these days that hides behind "sexy" CLI tools with a few "sexy" options. Some of these tools are very popular and can be difficult to assail but the smell of mediocrity (or lack of ambition) around them is hard to shake off.
A comment left on the post does in fact hit on one of the reasons: the lack of interest in creating user-empowerment technology. The obsession with CLI tools, for instance, is framed as simply facilitating better integration; while this is true, it is also a convenient excuse to avoid the difficult task of building user-interfacing software.
And TFA argues the Web caused innovation to be pushed out by adoption and application aka solving actual people's problems aka money. So that fits.
[ TFA's date of 1996 just freezes the surface tech that got swept into power by the web, e.g. Java was deliberately not innovative, but a "blue-collar language". ]
Of course, it is possible that all the foundational stuff has already been done.
It happened to physics, why not us?
This came to mind because of a quote I came across the other day, in a paper from 1980:
"Interlisp is a very large software system and large software systems are not easy to construct. Interlisp-D has on the order of 17,000 lines of Lisp code, 6,000 lines of Bcpl, and 4,000 lines of microcode."
("Interlisp-D: Overview and Status", at http://www.softwarepreservation.net/projects/LISP/interlisp-... )
My early days of programming were tinkering in BASIC and asking my parents to buy me "Learn C in 24 days" type books and hoping the compilers on the bundled CD would actually work. I had very few resources and had nobody to ask if I got stuck, except for other kids who didn't know much more than me. Seems a lot better now.
That's exactly the self-learning experience. You had simple foundations that demanded to be explored and conquered. My first programming book was a ~700 page Java tome because that was the hot thing. After I had read it start to finish I realized that I still don't know what programming is. Then I read K&R and the scales fell from my eyes. Programming is so simple and elegant! The OOP approach of Java teaches you to just put the things you think of into classes, which is like being handed a blank piece of paper. C has an internal logic and an emergent structure of possibility because it reveals limitations. This is even more so the case for BASIC. Nowadays getting into programming seems far worse than 2000's Java. Everything is a library, everything is answered on Google, all the languages are feature-rich and opaque, any computer is fast enough for the worst code. Why bother if there is nowhere to go?
It was known, but only by theorists, and theorists aren't the ones building compilers and implementing real programming languages.
Obviously the Rust community has achieved an impressive feat of engineering, and I'm extremely grateful for that. But using a substructural type system to avoid needing a garbage collector is not a new idea, just one that Rust has successfully popularized.
It takes a long time for ideas from the programming languages theory community to reach the mainstream.
And of course just implementing linear types and trying to get people to use the language probably would have failed. Borrow checking, compiler error message engineering, and various kinds of social engineering were probably essential to make Rust practical.
(I dated Rust's breakthrough to five years ago because although Rust had the features well before then, it was only about five years ago that it became clear they were good enough for programmers to adopt at scale.)
I mean, Lisp and Algol were some of the first programming languages ever. Of course there was going to be lots of experimentation in different directions. The ecosystem then matures around a smaller number of solutions and tries to improve them in depth instead of breadth. That's just the natural product development cycle.
Something not mentioned is that CS (and every field) has gotten larger with time, and the frontier for new discoveries and improvements is getting farther and farther away.
Another thing is that while there aren't a lot of fundementally new things, much of our technological progress in the past 100 years is refinement on established things.
> We’ve just lost the will to improve.
That statement strikes me as a little too broad. It's true that we take fewer risks in our pursuit of improvement, but I wouldn't say that we've lost it.
> Since 1996 we’ve gotten: IntelliJ, Eclipse, ASP, Spring, Rails, Scala, AWS, Clojure, Heroku, V8, Go, React, Docker, Kubernetes, Wasm.
And then waves them all away as incremental or unimpressive.
The author holds up V8 as an example of something that isn't fundamentally new, which is missing the entire point of developing a runtime for an existing language. Not to mention, V8 is an impressive achievement in itself.
I'm lost as to why the author wants to work so hard to dismiss these technologies while simultaneously suggesting that nothing else noteworthy has been developed in the past two decades.
If an engineer in 1996 fell into a coma and woke up today, they'd be greeted with an entirely different world. We're driving around in electric cars, carrying around cell phones that are orders of magnitude faster than desktop PCs from 1996, and communication technology has evolved to the point that many of us are seamlessly working from home.
Technology only looks stagnant when you're steeped in it every day of every year. Up close, it looks slow. Take a step back and it's amazing what we have at our disposal relative to 1996. Don't mistake cynicism for expertise or objectivity.
1996 was 24 years ago. You are totally correct that the progress and change in this period has been incredible on the historical scale. But 24 years on the timeline of a human life is quite a significant portion, and some may feel that their expectations of futuristic utopia have been subverted, making them cynical.
i know i do!
lots of stuff about the future sucks, and i can't think of a way to fix anything important with faster garbage collection or better type systems... maybe that's the difference between makers and cynics. i guess i'll try harder :p
I can, I am very hopeful better type systems will help heal the absolutely dreadful state of software robustness.
Better types (and by extension, better type systems) help eliminate many costly tests, improve knowledge transfer from senior engineers to junior engineers, make refactoring and adapting to changing requirements a much less error-prone process where the computer can guide you, and provide a more rigid way of breaking down problems into chunks, instead of a procedural approach where good boundaries are harder to determine intuitively.
While education is what enables people to write better code with them, we need better type systems and languages that make this process easier for the average programmer to learn and use. Already languages like TypeScript, Kotlin and Swift are slowly getting many people to take a gander over the brick wall by exposing them to constructs like sum types and specific patterns like optionals and either/result types, that should fuel a little bit of hope at least!
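To make the either/result pattern mentioned above concrete, here's a hedged sketch in today's Python (the `Ok`/`Err` names are my own, not from any particular library); a checker like mypy can then verify that callers handle both cases:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: int

@dataclass
class Err:
    message: str

# A poor man's sum type: a value is either an Ok or an Err, never both.
Result = Union[Ok, Err]

def parse_port(s: str) -> Result:
    """Return Ok(port) on success, Err(reason) on failure -- no exceptions."""
    return Ok(int(s)) if s.isdigit() else Err(f"not a port: {s}")

res = parse_port("8080")
if isinstance(res, Ok):
    print("port", res.value)
else:
    print("error", res.message)
```

Languages like Kotlin, Swift, and TypeScript make this pattern far more ergonomic with built-in exhaustiveness checking, which is the point the comment is getting at.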
EDIT: That's some visceral reaction. Would be nice to see some disagreement stated in text, or if any particular point needs further explaining, or to see if just hope of things improving is what's angering people.
EDIT: actually, you kinda cheered me up :)
"3d-printification" or democratisation of manufacturing capabilities. When advanced manufacturing become cheap, generic and small scale you need less capital to create and design important hardware.
"UX-ification" of advanced programming practices or democratisation of software creation capabilities. When advanced software design, composition and implementation becomes cheap, generic and scalable you need less capital to create/design/implement software and to be in control of the software you want/need.
When those things come together the importance of capital will greatly diminish for some vital parts of life. It will not solve all problems but some. And maybe create other problems. But it should at the very least greatly impact "important" problems.
The "UX-ification" of software creation is the part where software engineers can be part of.
Disclaimer: Perhaps I have a naive assumption underlying all this: the idea that increased flexibility eventually "will lead" to apparent simplicity. But aren't living organisms kind of a proof of this? The amount of complexity that an animal needs to understand to continue living and reproducing is dwarfed by the complexity of the animal itself. Well, also cars and computers.
Something like that. Maybe. Goodnight.
Some of us could host 2005 YouTube out of pocket, but it’s 2020, and people with fast connections expect 1080p video now, so it remains impossible for individuals to compete with corporations.
Like, will it always make sense to add things to YouTube or will a more decentralised approach make more sense? Will single responsibility principle make sense at that scale? Can infrastructure costs be shifted around in flexible ways? Etc.
But yeah it's telling that Facebook was initially looking for a much more decentralised approach but gave up on it when adapting to reality and economics.
I'd argue that these are primarily hardware innovations, not software innovations. The OP mentions software, not technology in general.
See also Moore's law (hardware upgrades double computing speed every ~2 years) vs Proebsting's law (compiler upgrades double computing speed every ~18 years).
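Taking both laws at face value, a back-of-the-envelope calculation shows how lopsided those doubling rates become over, say, 36 years:

```python
# Hypothetical compounding of the two "laws" over 36 years.
years = 36
moore = 2 ** (years / 2)        # hardware: doubles every ~2 years
proebsting = 2 ** (years / 18)  # compilers: double every ~18 years

print(moore, proebsting)  # 262144.0 4.0
```

Roughly a 262,144x speedup from hardware versus 4x from compilers over the same span, which is the usual argument that hardware, not software, carried most of the performance gains.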
Hardware and software are closely intertwined. We couldn't design, develop, build, and run modern hardware without modern software packages. You're just not seeing the advances in software if you're only looking at famous web services technologies.
The whole article is extremely myopic, limited only to the author's narrow domain of web services and coding in text editors.
i was little and didn't know shit, but i was already learning to program, and i think about this all the time. like mind blown, every day. 8 threads and 2GB of ram in my cheap ass phone? crazy! and we got so good at writing compilers they give them away for free!
JS was way slow because MSIE wasn't supposed to compete with desktop apps :p also, transpilation, JIT, virtualization and containers seem like a pretty big deal to me -- virtualization was academic in the 90's.
also, a lot of that "old" stuff like python and ruby and even C++ really weren't as mature and feature-full. memory safety/auto-management and concurrency language features are prevalent now, and i think people don't always appreciate (or remember?) when your computer couldn't walk and chew gum at the same time.
it seems to me like our tools have clearly improved -- i think a more useful conversation could investigate how to apply what we now have to improving productivity, safety and security.
For micros, it has been available, in some form, on x86 since 1985 when the 386 processor was first released. Remember the vm86 mode that let people run DOS apps under Windows 2.x?
That's not really virtualization. In the x86 world, hardware virtualization extensions don't crop up until 2003. Efficient dynamic binary translation (a key component for efficient virtualization without hardware support) I generally reckon to start with DynamoRIO (~2001), with Intel's Pin tool coming out in 2004.
(Do note that hardware virtualization does predate x86; I believe IBM 360 was the first one to have support for it, but I'm really bad with dates for processor milestones).
(Note VMware Workstation, as another poster pointed out, and even earlier "emulation" solutions like Bochs existed before 2003.)
But these things were not driven by software innovations.
In 1996 a friend was dating an engineer with an electric car. She'd thrown a bunch of time and money into it. Had a range of about 80-90 miles and wasn't slow at all.
Around then there was skunks works a couple of buildings away from me. On the spectrum analyzer I could see spread spectrum signals showing up at 900 and 2.4GHZ.
Edit: Stay classy HN.
Some people just want to find negativity in the world.
"This approach antagonizes both academics and professionals, but it is what I must do."
Are you kidding? The author has (clearly) been dedicating his entire career to trying to solve these problems. He's not some drive-by hipster reminiscing about the good old days. The problem is that in a lot of ways programming is stuck in the "good old days".
It's just a reaction to 'the kids' coming after him not being up to how good he (mis) remembers the past.
This negativity is a disappointing and growing trend among some grumpy old men in our field as that generation approaches retirement. It disrespects people working today and I don't like it.
Your contributions thus far in this thread have been negative with no substantive rebuttal to any points made in the article.
> All of these latter technologies are useful incremental improvements on top of the foundational technologies that came before.
As if that wasn’t the case before. We’ve always built on what came before. What on earth does the author think was so revolutionary about Java? The language was an evolution of the C family. The VM was an evolution of Strongtalk.
The negative person says ‘all done before.’ The reasonable person says ‘step forward’ as we’ve been doing ever since.
And what’s the implication? That people have become stupid or lazy or ignorant? This is an attack on the work and integrity of a generation of people.
There is a consistent claim at the moment from CS men of a certain age that we don’t appreciate their genius enough. They bang on about how under appreciated their ideas are at their invited talks and on Twitter. I don’t think I’m the only one in the community thinking this. I think quite a lot of people feel this.
It’s not negative to call out someone else’s negativity.
That's not what I was referring to. I was referring to the actual negative statements you made about the author:
>Some people just want to find negativity in the world.
>I'm going to take a wiiiiild guess here that the author's personal peak was around 1996.
>It's just a reaction to 'the kids' coming after him not being up to how good he (mis) remembers the past.
>This negativity is a disappointing and growing trend among some grumpy old men in our field as that generation approach retirement.
That's not "calling out" negativity. That's a list of snark, broad insult, and generalizations with no supporting information. It's rude, discriminatory, and serves only to diminish the overall quality of discussion.
> Since 1996 almost everything has been cleverly repackaging and re-engineering prior inventions
seems like an ignorant attack on thousands of people’s professional competence and a veiled insinuation of deceit (the ‘clever repackaging and re-engineering’) and
> Suddenly, for the first time ever, programmers could get rich quick
sounds like a moral attack. All in all it’s a shitty thing to say to people. And there’s more and more grumpy people trying to tell young computer science researchers today that they’re useless and have no new ideas like this.
Have you been to a CS conference recently? You may be missing the context that these people are everywhere criticising everyone for not being as innovative as they remember they were.
It’s damaging and not true either. No thanks!
I really think everything is incremental in progress. Where things are closer to us, we can see the incremental steps. Where we think of the great things of the past, we are completely ignorant of the incremental steps that came before.
Instead we got social networking, ad tech, and surveillance capitalism.
The tech for flying cars already exists, just not the tech for protecting people on the ground from flying car crashes.
PS: Machine learning is old too, although application - again due to incremental hardware improvement, has finally arrived.
A lot of these are also fads that will pass. Are there any problems today that were unsolvable in 1980 given enough time?
Wow, smaller transistors, more clock cycles, big whoop. We have more now but it's not fundamentally different is it?
>If an engineer in 1996 fell into a coma and woke up today, they'd be greeted with an entirely different world.
I'd also be disappointed. Oh we are on DDR4, Windows 10, multiple cores, cpp 20? Ok. So we basically just extended the trendline 25 years. Boring.
In which case software has "stalled" in terms of "invent anything fundamentally new " - and I'd agree.
On the other hand, software 1.0 was a victim of its own success. People realized that they could build software companies with SQL hard-coded into Windows Forms buttons. These people have never even heard the word extensibility. Like the rest of corporate America, they are mostly focused on the next quarter's results and drown out any voices from people who want to innovate for innovation's sake.
I agree with this. As an anecdote, I've spent the past decade explaining to clients that things like natural language question answering and abstractive summarization are impossible, and now we have OpenAI and others dropping pretrained models like https://github.com/openai/summarize-from-feedback that turn all those assumptions on their head. There are caveats, of course, but I've gone from a deep learning skeptic (I started my career with "traditional" ML and NLP) to believing that these sorts of techniques are truly revolutionary and we are only yet scratching the surface of what's possible with them.
Is there anything wrong with this?
No. Building something useful is good. It's just not an example of technological progress. That's what they meant by a victim of its own success. When something advances enough to be useful, it's natural for a bunch of people to just make use of it as-is.
RISC-V, although it's not software.
Tools for running C-like code on GPUs.
Those are just the things I think belonged on the list but are not. And all of them take many years to mature and gain widespread use.
The never-ending revolution? I am still waiting to upgrade.
Rust is great, but overkill for business logic. It's a bummer that people think they need to care about low-level details where it doesn't matter just to use a non-shit language. Meanwhile Haskell is waiting :).
Then why just look at 1996? The Mother of All Demos by Douglas Engelbart (1968) probably had all the cool stuff anyways: graphical interfaces, videoconferences, collaborative editing, word processing, hypertext...
A lot has been done since 1996. Could we have done more? sure. Computers have way more potential than what we use them for.
- People concentrate on the tool (technology) vs. goal (value)
Let's say you needed to handle a million users in the past (read: "provide value to a million users"). It would have been an unbelievably huge effort back in 1996, requiring _A LOT_ of logistics. For most companies, it would have taken years to reach those million users. And god forbid you needed to send out an update.
Fast-forward to 2021. One person can put the code in a Docker container, drop it into AWS Kubernetes, scale it up, and be able to serve a million users by the end of the day. (BTW, I am not saying that you don't need to do marketing/sales and so on, but the purely engineering/distribution side ends up being way simpler.)
Oh... BTW, all these people can use this new service on the go, because everybody has a computer in their pocket with access to the internet. And they get updates to it pretty much instantaneously.
I don't care that a new language has curly brackets, or that a new programming paradigm is invented each year or once a decade. I care how much value can be created. And the tools/languages/frameworks/infrastructure of 2021 provide a way to build way, way more value.
- You don't need to improve a hammer that much
The first hammers date back ~3 million years. There were relatively few improvements to the hammer for a looong time. The idea is still the same (a heavy head and a long handle).
If the tool works well, you don't need to reinvent it that much. You invent other, more complex, and specialized tools.
- Ignoring real achievements
There were dozens of things mentioned here in this thread that weren't mentioned in the OP (LLVM, mobile, ML, and many others).
It makes sense that progress cycles from big breakthroughs to years of seeing how far we can push them.
Even so, the world is significantly different than it was in ‘96. “Stagnation” doesn’t feel like the right word.
What about advances in quantum computing? Is that not a large enough paradigm shift for the author to acknowledge?
Heck, at that time, Netflix was theoretically impossible; it wasn't until the H.264/MP4 codecs became available that this changed.
Even the cloud technologies completely changed the scale at which a single or few developers can scale software. We no longer need big iron (and the capital expenditure that goes with it) to go big.
We've seen plenty of breakthroughs in the past 25 years, some have not materialized fully yet, others are so magical and unobtrusive that it's hard to notice them.
He said my idea to become a dev was dumb and the money was in being an admin.
His reasoning was, he already had every software he needed.
eMule, Kazaa, ICQ, mIRC, TeamSpeak: these apps did all he wanted to do, so why would anyone pay money to develop new software?
It's hard to overstate how much STL-based containers changed C++ code. And C++11 caused quite a bit more change.
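As a minimal, hypothetical before/after sketch (the function names and data are made up for illustration, not taken from any real codebase), this is roughly the kind of shift STL containers and C++11 lambdas brought: manual loops over raw pointers with explicit lengths give way to owning containers and standard algorithms.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Pre-STL style: the caller juggles a raw pointer array and an explicit
// length, and any mistake in either is a buffer overrun waiting to happen.
int count_longer_than_c(const char* const* words, int n, std::size_t len) {
    int count = 0;
    for (int i = 0; i < n; ++i) {
        if (std::string(words[i]).size() > len) ++count;
    }
    return count;
}

// STL + C++11 style: the container owns its memory, a lambda replaces the
// hand-rolled loop, and there is no separate length parameter to get wrong.
int count_longer_than(const std::vector<std::string>& words, std::size_t len) {
    return static_cast<int>(std::count_if(words.begin(), words.end(),
        [len](const std::string& w) { return w.size() > len; }));
}
```

The two functions do the same thing; the difference is how much bookkeeping the language takes off the programmer's hands.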
Truly one of the worst attitudes in technology; the desire for novelty is so in line with how unsustainable and ephemeral tech is.
I mean, does he think the amazing revolution in machine learning, AI and neural networks just didn't happen? What about the absolute tidal wave of open source projects in general? In 1996 if there wasn't a library for some small piece of functionality you needed you either needed to pay for it (anyone remember the market for VB controls?) or build it yourself. These days usually the problem is figuring out which open source library is the best one for your needs.
I think this is an example of "post something quite wrong and get lots of attention because it is so wrong".
I started my software career in the late 90s, and having worked at a company last year with pretty much greenfield development, I was thinking how so many big, hairy problems in software engineering that I experienced are only finally solved now:
1. You bring up version control; the first SCM system I used in the late 90s was CVS. I don't understand how any human who used CVS and now uses git+GitHub/GitLab/GitWebFrontendOfYourFancy can claim things are "stagnating".
2. Similarly, setting up a build and test process (the term "CI/CD" didn't exist in the late 90s) was a relatively huge undertaking, now it's trivial to get tests to run on every merge to master and then autodeployed with something like GitHub Actions.
3. Package management security, while not a fully solved problem, is leaps and bounds better than it was even a couple years ago. Again, I can automate my build process to run things like `npm audit` or other tools from providers like Snyk.
4. Umm, the cloud anyone? You may argue that this is a hardware change but it's really much more of a software issue - the cloud is only enabled by the huge amounts of software running everything.
I think the biggest thing I notice from the early 00s to now is that I'm now able to spend the vast majority of my time, even at a small company, worrying about features for my users, as opposed to the huge amount of time I spent in the past just on the underlying infrastructure and maintenance to keep everything working.
But if we actually look at what was around, both in theory and practice, CVS was a giant leap backwards.
1. Smalltalk ENVY (1990s, I think?): Automatic class/method-level versioning on save. Programmable, introspectable history to easily build CI-type stuff. See user comments here: https://news.ycombinator.com/item?id=15206339.
> you could easily introspect every change to code and by combining introspection of classes and methods quickly determine which changes were happening where. We built a test framework that could list the changes since the last run, then compute dependencies and run an appropriate selection of test cases to check. This cut down test time by 90%
2. DOMAIN Software Engineering Environment (1980s): a distributed source control and config system where the provenance of built artifacts to source files was maintained. More than that:
> DSEE can create a shell in which all programs executed in that shell window transparently read the exact version of an element requested in the user's configuration thread. The History Manager, Configuration Manager, and extensible streams mechanism (described above) work together in this way to provide a "time machine" that can place a user back in a environment that corresponds to a previous release. In this environment, users can print the version of a file used for a prior release, and can display a readonly copy of it. In addition, the compilers can use the "include" files as they were, and the source line debugger can use old binaries and old sources during debug sessions. All of this is done without making copies of any of the elements.
(from Computer-Aided Software Engineering in a Distributed Workstation Environment, 1984, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.575...)
Note again, the above is from 1984.
3. PIE reports (1981): Describes a model of "contexts" and "layers" (roughly analogous to branches and revisions) for nodes (not files) that version methods, classes, class categories and configurations. On merging work from multiple authors:
>Merging two designs is accomplished by creating a new layer into which are placed the desired values for attributes as selected from two or more competing contexts.
(from An Experimental Description Based Programming Environment By Ira Goldstein and Daniel Bobrow, 1981, http://esug.org/data/HistoricalDocuments/PIE/PIE%20four%20re...)
However: it's easy to build experimental systems with cool properties. It's really hard to make them appealing in practice when you have to give up simplifying assumptions.
In particular a lot of these systems were not well adapted to managing collections of text files in unknown formats, edited by unknown tools. That made sense for research purposes, since you can't have as many cool properties without understanding the file formats, but it made them substantially less useful in practice, which I think is partly why they didn't get adopted.
Not to mention ignoring PHP, C#, and the whole .NET Framework.
A silent majority? There may be more people than you think who generally agree with the author's thesis and this strikes you as perplexing because you hold a different view.
Highly doubtful as HN doesn't have downvotes on stories. Given the comment threads, I find it much more likely that a small minority of folks with a similarly curmudgeonly outlook as the author upvoted the story, and the majority is calling out the author on his BS.
Well that's just it, isn't it? The silent majority would be silent -- they would not be those commenting.
No, that isn't my argument. My argument makes no mention of comment voting.
How about the explosion of FLOSS?