I know this should be the default and not deserve a compliment, but I really appreciate it when an article on a website does proper sourcing.
Maybe ten years from now, I can recount the horrid tales of writing code in Notepad, which didn't even have a Ctrl+S shortcut, let alone code highlighting. :) Joking aside, I revere the veteran programmers who wrote without keyboards and coded mostly in assembly, but I can't help wondering whether it felt pretty normal to them, simply because that's the way things were done at the time.
There is an opinion I run into every now and then—that programming, especially web development, is getting ridiculously easier compared to the past. It has, to a great extent, considering the tools and the computing power we have now, but the complexity of software is also pacing at the same speed, briskly consuming that efficiency. The real question is: was programming in the past inordinately difficult when viewed against the software complexity the market demanded? Is software development easier today because the gain in programming efficiency outpaces the complexity of the software that needs to be built?
Esoteric things are great but you'll never find that they reach critical mass in a way that's meaningful. I love ham radio but I don't think you'll ever see a day where everyone has a 2m transceiver in their car/house :).
It's sort of like the fiction market: adding bad books to a bookstore or library does not decrease the number of good books available.
I guess you could make the argument that if the low-quality products are well-marketed, they might outcompete the high-quality products—so, as a competent developer, the existence of incompetent developers (and thus the existence of companies who can subsist off their talents) might be "stealing revenue" from you/your company?
It adds friction all over the place. It's more difficult for a company to find the competent programmers, more difficult to find good books among the dreck, more difficult to judge which product is excellent and which isn't (without significant effort, in some cases). Rating systems can (and will) be abused, and finding the needle is harder when the haystack is larger.
Looking at hiring, a company might have to spend considerably more effort sorting through the "bad" programmers. There will be false positives and negatives in their search. How much damage will be done to the "good" products and the productivity of the "good" programmers because someone was misjudged when they were hired? How many skilled and passionate but less marketing-savvy programmers end up working for the quantity-over-quality companies?
If we had cheap, fast, and reliable ways to actually gauge the quality/skill of media/people/whatever, then more options would always be good, even if they're of widely-varying quality. As it is, we don't, so the extra options are something of a mixed blessing.
I mean, direct-linking to things from a product site is still a thing; and—due to Steam technically being a website—discovery by search-engines indexing the page is still a thing. But also, there's no reason there can't be things like "App Store channels" where you can subscribe to a given app reviewer's "view of the world" (i.e. a store scope containing only the stuff they like, and—with less weight—the stuff the people they follow like, recursively); and then browse those, or a front-page that's the union of those. You could even automatically generate such channels from existing app-review sites/Youtube channels/whatever.
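The "channels" idea above—a store scope built from one reviewer's likes, plus (at less weight) the likes of the people they follow, recursively—could be sketched as a weighted walk over a follow graph. Everything here (names, data shapes, the decay factor) is hypothetical, just to make the idea concrete:

```python
# Sketch of a reviewer "channel": score each app by the reviewer's own
# likes (weight 1.0) plus the likes of reviewers they follow, with the
# weight decaying at each hop. Depth-limited to keep recursion bounded.

def channel(reviewer, likes, follows, weight=1.0, decay=0.5, depth=2, scores=None):
    """Return {app: score} representing one reviewer's view of the store."""
    if scores is None:
        scores = {}
    for app in likes.get(reviewer, []):
        scores[app] = scores.get(app, 0.0) + weight
    if depth > 0:
        for other in follows.get(reviewer, []):
            channel(other, likes, follows, weight * decay, decay, depth - 1, scores)
    return scores

likes = {"alice": ["AppA", "AppB"], "bob": ["AppB", "AppC"]}
follows = {"alice": ["bob"]}

ranked = sorted(channel("alice", likes, follows).items(),
                key=lambda kv: -kv[1])
# AppB is liked directly (1.0) and via bob (0.5), so it ranks first.
```

A store front page could then be the union of several such channels, and the same scoring would work for channels auto-generated from existing review sites.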
The problem is that you're increasing the workload on everyone who wants to use that market by forcing them to filter out more crap.
We're talking about programming, not hocus pocus.
Yes and no, and here is why. If we consider programming along a timeline from yesterday to today, there is unique complexity at both ends of that timeline, and unique simplicity at both ends as well.
Today there's the extra complexity of an abundance of choice. There's also choice in documentation, which often comes in the form of blog posts that may or may not be out of date. And by the time you start to get comfortable using something, the new version is probably out—which happened to me with Angular 2 (you knew Angular 4 was released, right?). So in many ways programming is more complex today. None of those kinds of problems existed when I was a kid programming on a TRS-80 in the late 70's. There wasn't much choice of hardware or software, or much risk of getting overwhelmed by too many learning resources. Basically everything you could know about the hardware and BIOS fit in a single book that Peter Norton wrote. So yes, it is more complex today.
But there's complexity at the other end, too. We used to spend a lot of time trying to cram data and operations into small amounts of memory. We spent a lot of time inventing our own serialization protocols, writing lower level communication primitives that we don't think about today.
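The kind of hand-rolled serialization the parent describes might look like the following—a modern Python sketch of the general idea (a length-prefixed wire format), not any specific historical protocol:

```python
import struct

# A minimal invented wire format: a 16-bit message id, a 32-bit payload
# length, then the raw payload bytes — the sort of thing every project
# once had to design and implement for itself.

def pack_message(msg_id: int, payload: bytes) -> bytes:
    # ">HI" = big-endian unsigned short + unsigned int (6-byte header)
    return struct.pack(">HI", msg_id, len(payload)) + payload

def unpack_message(data: bytes):
    msg_id, length = struct.unpack_from(">HI", data)
    payload = data[6:6 + length]
    return msg_id, payload

wire = pack_message(7, b"hello")
assert unpack_message(wire) == (7, b"hello")
```

Today a JSON library or protobuf schema replaces all of this, which is exactly the efficiency gain being discussed.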
The efficiency gain is more than worth the new complexity. A novice programmer today is able to accomplish a great deal more in a month than a novice programmer of way back when.
For that matter, power users of today can accomplish way more than a team of programmers could back in the day for many applications. Much of the software I wrote in my first years as a professional developer can be done with spreadsheets.
The trick to keeping the efficiency is to not get lost in all the choices and remember to get real work done. Change how you do things too often and you become inefficient. Change not often enough and you become obsolete and inefficient. The real efficiency gains come from working the same problem domain for a while with the same tools. You develop a bag of tricks for dealing with the kinds of problems you run into in that domain and you become very fast. Change too much and you don't. But once every problem you encounter is pretty easy it's probably time to move to a new domain.
Angular 1 to Angular 2/4 is a valid argument though.
(Not that this is really substantial -- it's the history of the project that not only warrants but demands skepticism about both the ease of the upgrade and its utility and learning curve. The tone-deaf nature of the version jump is simply frosting on the cake.)
So at this point, a few months into Angular 4, it's simply wrong to make the comment that the OP made.
I wanted to leverage Firebase, but the examples available were for Angular 1 at the time. Angular 2 support was new enough that I was blazing my own trail. RxJS was undergoing major upheaval as well. Using Redux ideas in Angular 2 (NGRX) is/was in a state of flux. Everything is in a state of change; when you find examples of how to integrate things, typically one or more of the libraries you're working with are at a slightly different level than the things you've already integrated into your codebase from the last set of examples and best practices you integrated.
I'm not saying the situation is impossible, just that it introduces additional friction and difficulty compared to working with a more mature less cutting edge toolset.
I can't search for "Angular", since then I get AngularJS (Angular 1) results, even though the current iteration of the framework is just called Angular. I can't search for "Angular 4" to filter Angular 1 out, since not many articles are labeled with Angular 4 yet. And I can't search for "Angular 2", since new articles for Angular 4 are not going to be labeled with Angular 2, and that search also pulls in tons of useless media about Angular at various release-candidate stages.
/rant but I guess most of the time I'm just reading the Angular docs anyways.
Programming has changed. As an old timer I of course think it has been for the worse, but I would not call it easier today. Consider that in most programming languages, two decades ago, getting a context to render pixels directly was pretty straightforward.
I was taught programming in BASIC. To draw a line in BASIC you just need a two-line program:

SCREEN 9 (initialise a graphic screen)
LINE (0, 0)-(100, 100) (draw a line)
Even Processing isn't that straightforward nowadays.
Of course, web programming will be cross-platform. Of course if done properly your per-pixel rendering will be displayed on a GPU-accelerated surface. Of course, you can share it easily on a web page.
Things were different. Not easier, not harder.
When I first learned assembly, I was surprised at how simple it is—probably the easiest language out there. It is simple, but very tedious to use. The only catch is that to use it, you can't avoid learning a bit more about the hardware it runs on.
Nowadays, especially in web programming, a lot of the lower layers are abstracted away. That removes some problems while opening a whole new can of them. You won't have to worry (much) about memory management and socket pools, but then you have interactions between your React update and your version of whatever is used for matrix computation in JS nowadays.
100 200 lineto
No, I have no idea why I tried to learn some Postscript either.
I wrote a program that could solve almost any formulaic problem in Chemistry, Algebra or Geometry. I shared it with a bunch of people, but ended up losing it because my link port broke. I had made a ton of money in High School charging $10 to fix broken link ports, but at that point I hadn't developed a method to fix a link port without disconnecting all of the batteries. And so I lost all my programs when the ram was cleared for a standardized test.
Programming on TI-84s was some of the most fun I've ever had, and I'm sure I learned more writing those programs than I did in class. Even the non-CS nerds were programming stuff.
Well, it just meant we had to make a 2.0 version that allowed you to do operations on the calculator within the app. That one we didn't distribute to friends, but just kept for ourselves.
Man, those were the days...
A friend and I wrote a chat program that used the GraphLink cable to send messages back and forth so we could chat in class. This was a neat hack, but considering the cable that came with the 85 was like 2 feet long, it was kinda useless in practice unless we were sitting at the same table. So I made a longer cable at home out of stuff from RadioShack.
Everything was cool until we got caught using the homebrew cable in class. Fortunately, once we showed the principal what we'd done, we were let off with a warning not to have the cable in class anymore. :)
You know, I think I might still have my TI-85 in a box somewhere. I wonder if any of my old programs are still there.
On my TI-83+, I could transfer RAM programs into flash, to "archive" them. Wasn't that an option on the TI-84+?
I think the TI-84 actually had a USB port (or a redesigned port of some sort), which eliminated this issue.
I feel for your loss though, whatever the circumstances were. The first graphing calculator I had access to was a TI-81. No flash space, RAM only. And since it had been my mother's when she went back to college, by the time I was using it, the button battery was pretty "iffy".
>considering the tools and the computing power we have now, but the complexity of software is also pacing at the same speed
I really don't think complexity has increased since the early 90's. How we interact with computers is fundamentally the same now as it was then, with multi-tasking probably being the last major improvement. A good, simple to use green screen application would be (almost) functionally identical to a good, simple to use web application.
Most of the additional complexity seems to be coming from either attempts to remove it or attempts to improve efficiency. I think they've been largely unsuccessful in their aims and now we just have complexity.
Where am I going with this comment... I have no idea. I guess our tools have scaled as the needs have scaled? More complicated webapps led to more complicated debugger tools (I personally would find it extremely hard to do my job without 1. Chrome element picker + CSS editor and 2. Chrome Debugger).
Other way round: even back in the day of the Atari 2600 smooth animation was important, despite only having 128 bytes of RAM and a 4k cartridge.
Smoothest low-latency game experience I've ever had was a Tempest arcade machine from the 80s.
It's funny how every programmer who got into programming before the age of ubiquitous laptops/tablets/phones/etc. has stories like that to tell; one of my old boss's favorite stories is accidentally deleting the boot sector while programming an old DOS machine, and having to rewrite it by hand, from memory, while the computer was still powered on. When he restarted the machine, everything worked, which was pretty much a miracle.
On one hand, I wonder if programmers who had to face these kinds of drastic challenges end up with a stronger understanding of their craft and tooling than the new generation, who learn languages and machines with very strong guard rails and never have to learn the deep insides of what they're working with; on the other hand, every generation bemoans the fact that the one after doesn't get things quite the way they do, but things usually turn out fine.
My plan was to always present a 3x3 grid with cells containing whichever language constructs were available from the current cursor position: after selecting one, the cursor position updates, and the options available in the cells update.
I've tried to imagine using it many, many times, but haven't been able to definitively conclude whether it would be usable or not...
For something similar to your idea, I'd suggest playing with the Japanese-language keyboard for iOS.
For a bit of background: The two main Japanese writing systems are syllabic—each character represents a consonant+vowel pair, with ~15 possible consonants and 5 possible vowels, though not all possible combinations of those exist.
So, instead of attempting to give you a ~70-key keyboard (wouldn't really fit on the screen), the standard Japanese IME for touchscreen devices instead displays one key per initial consonant (indicated with the character you'd get by completing the syllable with an "a"-vowel.) Tapping the key neutrally picks said "[consonant]+a" character; swiping in each compass direction instead picks one of the four other vowel options. And holding down one of the keys displays the options as a little plus-shaped menu.
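The flick scheme described above can be modeled as a tiny lookup table—one key per consonant row, five gestures per key. The kana values below follow the standard flick assignments (center = a, left = i, up = u, right = e, down = o), but the table is illustrative, not exhaustive:

```python
# Flick-input model: each key is a consonant row; tapping gives the
# "a"-vowel kana, and swiping left/up/right/down gives i/u/e/o.
FLICK = {
    "k": {"tap": "か", "left": "き", "up": "く", "right": "け", "down": "こ"},
    "t": {"tap": "た", "left": "ち", "up": "つ", "right": "て", "down": "と"},
}

def flick(row: str, gesture: str = "tap") -> str:
    """Resolve a (key, gesture) pair to a kana character."""
    return FLICK[row][gesture]

assert flick("t") == "た"        # plain tap on the t-row key
assert flick("t", "up") == "つ"  # swipe up selects the u vowel
```

Ten-ish keys times five gestures covers the whole syllabary, which is exactly why it fits on a phone screen where a ~70-key layout would not.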
Let's say you select "method definition", for instance. After that (assuming we're writing Java), you'd have to select an 'access modifier', 'return type', 'method name', and 0 - N 'method parameters', and so on.
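The grid idea amounts to a grammar-driven state machine: the current cursor position determines which constructs are legal next, and those become the (up to nine) cells. A hypothetical sketch, using a toy subset of Java-like method definitions purely for illustration:

```python
# Hypothetical sketch of the 3x3 grid editor: each state maps to the
# constructs selectable from that cursor position. This toy grammar is
# invented for illustration, not a real Java grammar.
GRAMMAR = {
    "start":             ["method definition", "field", "class"],
    "method definition": ["access modifier"],
    "access modifier":   ["public", "private", "protected"],
    "public":            ["return type"],
    "return type":       ["void", "int", "String"],
}

def options_for(state: str):
    """Return up to 9 choices to show in the 3x3 grid for this state."""
    return GRAMMAR.get(state, [])[:9]

# Selecting "method definition", then an access modifier, then a return
# type walks the cursor through the grammar one cell-pick at a time:
for state in ["start", "method definition", "access modifier", "public"]:
    print(state, "->", options_for(state))
```

Whether this is usable probably hinges on how deep the selection chains get—a full method signature is already four or five grid picks in this model.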
This is the premise of 'pataphysique, the science of imaginary solutions. This has been applied passionately not only to literature (oulipo) but to comics (oubapo), music (oumupo), etc., to much entertainment. I was delighted to find out at a Donald Knuth talk that he was into oumupo.
The programming was similar, but I do know that you weren't required to select the commands from a menu. You could also type the command using the alpha key combinations. This was a good shortcut for the shorter commands. Not sure if the 82 was the same way.
I seem to remember copy/paste functionality but it has been a long while.
Leaving aside the contention over whether visual programming languages have ever been fashionable, the author has a very distorted view of programming if he thinks that typing out code via an on-screen keyboard is the difference between traditional programming and "visual" programming.
I think it was Woz who said that an advantage of the generation of programmers who grew up in the 8 bit era was that they could have a mental model of the entire machine, which allowed them to squeeze out every last drop of performance and do some mind blowing things with very limited power by today's standards.
The solution to most of these woes is as simple as not allowing other people to run arbitrary programs on your computer, which even Nintendo programmers from way back when probably understood but seems to elude many today.
What struck me about this person's situation is: why didn't he and other engineers there work together and jerry-rig something with a PC to interact with the Twin Famicom and on-screen keyboard using the trackball's protocol?
Perhaps they eventually did!
You could then enter a new value on the hex pad and hit "enter".
Once you were satisfied, you'd point the reset vector to your program, and hit "reset" to start the chip going.