Hacker News
Early Nintendo programmer worked without a keyboard (arstechnica.com)
204 points by msh on Apr 27, 2017 | 72 comments



> As reported by Game Watch (and wonderfully translated by the Patreon-supported Source Gaming)

I know this should be the default and not deserve a compliment, but I really appreciate it when an article on a website does proper sourcing.


It's always nice to know about the little ways passionate organizations like Source Gaming are doing their part to deliver information around the world.


And for the people who want to read their original article about this (with the full translation), here's the link:

http://sourcegaming.info/2017/04/19/kirbys-development-secre...


> Sakurai, who was 20 at the time, says he simply thought this keyboard-free programming environment was "the way it was done," and he coded an entire functional test product using just the trackball.

Maybe ten years from now, I can recount the horrid tales of writing code in Notepad, which didn't even have a Ctrl+S shortcut, let alone code highlighting. :) Joking aside, I revere veteran programmers who wrote without keyboards and coded mostly in assembly, but I can't help wondering if it felt pretty normal to them because that's just the way things were done at the time.

There is an opinion I run into every now and then—that programming, especially web development, has gotten ridiculously easier compared to the past. It has, to a great extent, considering the tools and the computing power we have now, but the complexity of software has kept pace, briskly consuming that efficiency. The real question is: was programming in the past inordinately difficult relative to the software complexity the market demanded? Is software development easier today because the gain in programming efficiency has outpaced the complexity of the software that needs to be made?


I think you’ll find that the average programmer of decades ago would have been significantly more competent than the average programmer of today. Why? Because programming wasn’t nearly as popular back then, so it was mainly people with genuine aptitude for or interest in programming who pursued that path. These days, programming is more fashionable, and quite a number of mediocre people have jumped on the bandwagon, pulling down the average.


Programming has become one of the only paths to a middle-class life (as we've known the middle class in the past). No one I know from college who doesn't work in IT makes enough money to be old-school middle class like I am.


...you must not know many people, then. There's plenty of opportunity for a middle-class life in non-IT professions. Something I found pretty shocking after I left the SV bubble.


Such as?


Marketing, sales, numerous supporting jobs in entertainment, skilled trades and engineering, (as many mentioned) white-collar food industry jobs, logistics, accounting, statistics, mediation, community builders; I could go on and on. So many different types of jobs, even when you exclude all the shitty ones that make the world a shittier place, like low-information political shills and propagandists (tiny conservative talk radio shows and online communities do decently, moderate and liberal not so much), junk and junk-food marketers.


I would imagine any engineer or someone who went through trade school. Maybe they don't make enough to be considered middle class?


Many trade school disciplines.


Is that a bad thing? I think we should celebrate the diversity of developers and perspectives that they bring.

Esoteric things are great but you'll never find that they reach critical mass in a way that's meaningful. I love ham radio but I don't think you'll ever see a day where everyone has a 2m transceiver in their car/house :).


Not necessarily a bad thing, as it makes programming accessible to a more diverse range of people, as you mentioned, who may bring other kinds of benefits to the profession. But there is the drawback of a higher percentage of incompetent people. Can't have everything...


Is there any reason to measure by percentage? If, before, we had 100k competent programmers and 0 incompetent ones; and now we have 100k competent programmers and 1mm incompetent ones—we still have enough competent programmers to get all the stuff done. Some companies will bother to wade through 10 bad hires to find a good one, and so end up producing a stream of excellent products, just as existed before; other companies (that wouldn't have existed before) will take the leftovers, and produce bad products that everyone ignores in favor of the good ones.

It's sort of like the fiction market: adding bad books to a bookstore or library does not decrease the number of good books available.

I guess you could make the argument that if the low-quality products are well-marketed, they might outcompete the high-quality products—so, as a competent developer, the existence of incompetent developers (and thus the existence of companies who can subsist off their talents) might be "stealing revenue" from you/your company?


> Is there any reason to measure by percentage?

It adds friction all over the place. It's more difficult for a company to find the competent programmers, more difficult to find good books among the dreck, more difficult to judge which product is excellent and which isn't (without significant effort, in some cases). Rating systems can (and will be) abused, and finding the needle is harder when the haystack is larger.

Looking at hiring, a company might have to spend considerably more effort sorting through the "bad" programmers. There will be false positives+negatives in their search. How much damage will be done to the "good" products and the productivity of the "good" programmers because someone was misjudged when they were hired? How many skilled and passionate but less marketing-savvy programmers end up working for the quantity-over-quality companies?

If we had cheap, fast, and reliable ways to actually gauge the quality/skill of media/people/whatever, then more options would always be good, even if they're of widely-varying quality. As it is, we don't, so the extra options are something of a mixed blessing.


I can't speak directly for the programmer market, but this exact thing happened when Steam opened the floodgates with Greenlight. I used to browse the catalog to find new games; now there's so much crap in there that it isn't worth the time.


In my mind, that's just an argument for curation. There's no reason you can't let developers have their cake (i.e. have something up on Steam) while consumers eat it too (i.e. have a default view of Steam that hides all the crap games.)

I mean, direct-linking to things from a product site is still a thing; and—due to Steam technically being a website—discovery by search-engines indexing the page is still a thing. But also, there's no reason there can't be things like "App Store channels" where you can subscribe to a given app reviewer's "view of the world" (i.e. a store scope containing only the stuff they like, and—with less weight—the stuff the people they follow like, recursively); and then browse those, or a front-page that's the union of those. You could even automatically generate such channels from existing app-review sites/Youtube channels/whatever.


How do you expand that to a bookstore or an employment market without functionally denying the existence of some books/people?

The problem is that you're increasing the workload on everyone who wants to use that market by forcing them to filter out more crap.


You could make the same argument about literacy, how it allows dumber/more sinful people to interpret the Bible in erroneous ways.


Yecch. The bible. Please don't bring the bible into this.

We're talking about programming, not hocus pocus.


Competency isn't just aptitude though, today's programmers have massive training advantages. I mean you could make the same argument about (say) football players but NFL champions from the 20's would struggle to keep up with today's top high school programs.


> Is software development easier today because the gain in programming efficiency is more than the complexity of software that needs to be made?

Yes and no, and here is why. If we consider programming along a timeline from yesterday to today, there is unique complexity at both ends of that timeline, and unique simplicity at both ends as well.

Today there's the extra complexity of an abundance of choice. There's also choice in documentation, which often comes in the form of blog posts that may or may not be out of date. And by the time you start to get comfortable using something, the new version is probably out, which happened to me with Angular 2 (you knew Angular 4 was released, right?). So in many ways programming is more complex today. None of those kinds of problems existed when I was a kid programming on a TRS-80 in the late '70s. There wasn't much choice of hardware or software, or much risk of getting overwhelmed by too many learning resources. Basically everything you could know about the hardware and BIOS fit in a single book that Peter Norton wrote. So yes, it is more complex today.

But there's complexity at the other end, too. We used to spend a lot of time trying to cram data and operations into small amounts of memory. We spent a lot of time inventing our own serialization protocols, writing lower level communication primitives that we don't think about today.

The efficiency gain is more than worth the new complexity. A novice programmer today is able to accomplish a great deal more in a month than a novice programmer of way back when.

For that matter, power users of today can accomplish way more than a team of programmers could back in the day for many applications. Much of the software I wrote in my first years as a professional developer can be done with spreadsheets.

The trick to keeping the efficiency is to not get lost in all the choices and remember to get real work done. Change how you do things too often and you become inefficient. Change not often enough and you become obsolete and inefficient. The real efficiency gains come from working the same problem domain for a while with the same tools. You develop a bag of tricks for dealing with the kinds of problems you run into in that domain and you become very fast. Change too much and you don't. But once every problem you encounter is pretty easy it's probably time to move to a new domain.


Angular 2 to Angular 4 is not a good example of getting comfortable using something then the next version coming out. They are the same framework and it is trivial to upgrade Angular 2 to Angular 4. "Updating to 4 is as easy as updating your Angular dependencies to the latest version, and double checking if you want animations. This will work for most use cases." from https://angularjs.blogspot.com/2017/03/angular-400-now-avail...

Angular 1 to Angular 2/4 is a valid argument though.


Boy, nothing says "This upgrade is a totally backward compatible drop-in replacement for Angular 2" like not only bumping a major version number but actually skipping over the usual increment entirely.

(Not that this is really substantial -- it's the history of the project that not only warrants but demands skepticism about the ease of the upgrade and about its utility and learning curve. The tone-deaf nature of the version number is simply frosting on the cake.)


I agree that skepticism is important, but this is honestly really straightforward if you've been following the development and community discussion. IMO the Angular team made it rather clear why they were skipping the 3rd increment (router package was on 3 and they wanted all of the packages to be at the same increment) and that upgrading version numbers was just semantic versioning and not releasing of a new framework like Angular 1 to 2 was. There were also a few popular articles explaining that no one should be freaking out over the 2 to 4 change and a lot of discussion on HN.

So at this point, a few months into Angular 4, it's simply wrong to make the comment that the OP made.


It's true, the nature of the changes from angular 2 to angular 4 are actually smaller and easier to deal with than some of the changes between beta release candidates. But now we're poised to be able to actually build some applications without so much infrastructure change.


You are right, but it does lead to additional complexity in terms of using available documentation and examples. A big part of the efficiencies we have today is the ability to leverage multiple libraries to get things done.

I wanted to leverage Firebase, but the examples available were for Angular 1 at the time. Angular 2 support was new enough that I was blazing my own trail. RxJS was undergoing major upheaval as well. Using Redux ideas in Angular 2 (NGRX) is/was in a state of flux. Everything is in a state of change; when you find examples of how to integrate things, typically one or more of the libraries you're working with are at a slightly different level than the things you've already integrated into your codebase from the last set of examples and best practices you integrated.

I'm not saying the situation is impossible, just that it introduces additional friction and difficulty compared to working with a more mature, less cutting-edge toolset.


Yeah, the SEO for Angular is honestly a nightmare.

I can't search Angular since then I get AngularJS (Angular 1) results even though the current iteration of the framework is called Angular. I can't search Angular 4 to filter Angular 1 out since not many articles are labeled with Angular 4 yet. I can't search for Angular 2 since new articles for Angular 4 are not going to be labeled with Angular 2, and searching for Angular 2 includes tons of useless media about Angular at various stages of release candidate.

/rant but I guess most of the time I'm just reading the Angular docs anyways.


As a new programmer with an interest in hardware, this is something I've noticed quite a lot. With C/asm, I was able to learn what functions were available to me in a rather short amount of time, but actually using them is quite a lot harder. Whereas with a language like Python, I spend way more time learning how to do something than actually doing it; when I wanted to use a database, it took me almost as long to decide which database to use as it did to implement it.


> There is a opinion I hit on every now and then—of programming, especially web development, getting ridiculously easier compared to the past.

Programming changed. As an old-timer I of course think it has been for the worse, but I would not call it easier today. Consider that in most programming languages two decades ago, getting a context to render pixels directly was pretty straightforward.

I was taught programming in BASIC. To draw a line in BASIC you just need a two-line program:

SCREEN 9 (initialise a graphic screen)

LINE (10,10)-(100,200)

Even Processing isn't that straightforward nowadays.

Of course, web programming will be cross-platform. Of course if done properly your per-pixel rendering will be displayed on a GPU-accelerated surface. Of course, you can share it easily on a web page.

Things were different. Not easier, not harder.

When I first learned assembly, I was surprised at how simple it is. It is probably the easiest language out there. It is very simple, but very tedious to use. The only thing is, to use it you can't avoid knowing a bit more about the hardware it runs on.

Nowadays, especially in web programming, a lot of the lower layers are abstracted. It brings a whole can of new problems while removing others. You won't have to worry (much) about memory management and socket pools but then you have interactions between your React update and your version of whatever is used for matrix computation in JS nowadays.


10 10 moveto

100 200 lineto

No, I have no idea why I tried to learn some Postscript either.


Back in the '90s I didn't have a computer at home and made countless very long programs on my TI calculator. A few years later I finally got a computer and a TI-Link to edit programs on it. The tedious nature of it was okay, because you were creating amazing things that were impossible otherwise.


We created some really ridiculous programs on TI-84s ca. 2003-2007 for the exact opposite reason. We had computers we could program on almost anywhere except in class. So we'd write programs on our calculators.

I wrote a program that could solve almost any formulaic problem in Chemistry, Algebra or Geometry. I shared it with a bunch of people, but ended up losing it because my link port broke. I had made a ton of money in High School charging $10 to fix broken link ports, but at that point I hadn't developed a method to fix a link port without disconnecting all of the batteries. And so I lost all my programs when the ram was cleared for a standardized test.

Programming on TI-84s was some of the most fun I've ever had, and I'm sure I learned more writing those programs than I did in class. Even the non-CS nerds were programming stuff.


Funny you mention the RAM clearing -- a buddy in HS and I wrote a program to simulate the ram clearing screen so that we could keep using our programs and not lose them. Then the teachers caught on and then instituted a policy that they would use the calculator to make sure it had cleared.

Well, just meant we had to make a 2.0 version that allowed you to do operations on the calculator within the app. That one we didn't distribute to friends, but just kept for ourselves.

Man, those were the days...


Heh, this was me with my TI-85 in the late 90s. I had a computer at home, but not in class. So after finishing whatever thing we had to work on in class, I'd write programs and games on my TI-85.

A friend and I wrote a chat program that used the GraphLink cable to send messages back and forth so we could chat in class. This was a neat hack, but considering the cable that came with the 85 was like 2 feet long, it was kinda useless in practice unless we were sitting at the same table. So I made a longer cable at home out of stuff from RadioShack.

Everything was cool until we got caught using the homebrew cable in class. Fortunately, once we showed the principal what we'd done, we were let off with a warning not to have the cable in class anymore. :)

You know, I think I might still have my TI-85 in a box somewhere. I wonder if any of my old programs are still there.


> And so I lost all my programs when the ram was cleared for a standardized test.

On my TI-83+, I could transfer RAM programs into flash, to "archive" them. Wasn't that an option on the TI-84+?


This actually was a TI-83+; I just got the model number wrong. I don't recall the specifics, but I believe the flash didn't store everything I needed through reformatting, or my programs were too large to fit entirely within the flash.

I think the TI-84 actually had a USB port (or a redesigned port of some sort), which eliminated this issue.


The flash was a lot larger than the RAM, but I think you could only store programs there, maybe. Data tables, strings, and such might've needed to be stored in RAM.

I feel for your loss though, whatever the circumstances were. The first graphing calculator I had access to was a TI-81. No flash space, RAM only. And since it had been my mother's when she went back to college, by the time I was using it, the button battery was pretty "iffy".


Ooh, I did something similar but with Casio Basic. I eventually made a giant program with several minigames and passed it around to friends.


Funny how we revere this guy, but if we replaced him with an anonymous programmer and changed it to working without source control, we'd probably be admonishing him for being a clueless idiot.

>considering the tools and the computing power we have now, but the complexity of software is also pacing at the same speed

I really don't think complexity has increased since the early 90's. How we interact with computers is fundamentally the same now as it was then, with multi-tasking probably being the last major improvement. A good, simple to use green screen application would be (almost) functionally identical to a good, simple to use web application.

Most of the additional complexity seems to be coming from either attempts to remove it or attempts to improve efficiency. I think they've been largely unsuccessful in their aims and now we just have complexity.


People were building things for the hardware they had back then. No need for smooth animations when nobody's managed to display an image yet, etc. No need for flexible scaling web pages when all the screens are generally the same aspect ratio.

Where am I going with this comment... I have no idea. I guess our tools have scaled as the needs have scaled? More complicated webapps led to more complicated debugger tools (I personally would find it extremely hard to do my job without 1. Chrome element picker + CSS editor and 2. Chrome Debugger).


> No need for smooth animations when nobody's managed to display an image yet

Other way round: even back in the day of the Atari 2600 smooth animation was important, despite only having 128 bytes of RAM and a 4k cartridge.

Smoothest low-latency game experience I've ever had was a Tempest arcade machine from the 80s.


For our generation, typing BASIC or assembly code out on a graphing calculator is probably similar.


I feel like we discuss this to death, and it's rarely satisfying or enlightening because it circles around these concepts we don't actually interrogate. People love to mention the distinction between essential complexity and incidental complexity, but how do you distinguish between them rigorously? What even is "complexity"? The ambiguity allows "complexity" to become a shorthand for "what I don't like or consider inelegant". And it often builds up to just another argument about why Node sucks, or whatever.


This is very cool. Constraints are definitely the mother of all creativity, even in indirect ways. This reminds me of my first few years of programming as a pre-teen, on a TI-82 calculator (I had no computer at home at the time), where I'd input code directly with the tiny keyboard. You didn't type keywords (if, else, etc.) directly; instead you would pick them from a menu system and the editor would insert them into your code. You could use the number keys as shortcuts in the menu system; after a while you'd develop pretty good muscle memory and type relatively fast on the machine. When I graduated to assembly, it was a whole other ordeal. I also remember drawing sprites for my game on graph paper, and then painstakingly inputting the coordinates of the "on" pixels in the source.

It's funny how every programmer who got into programming before the age of ubiquitous laptops/tablets/phones has stories like that to tell; one of my old boss's favorite stories is accidentally deleting the boot sector while programming an old DOS machine, and having to rewrite it by hand from memory while the computer was still powered on. When he restarted the machine, everything worked, which was pretty much a miracle.

On one hand, I wonder if programmers who had to face these kinds of drastic challenges end up with a stronger understanding of their craft and tooling than the new generation, who learn languages and machines with very strong guard rails and never have to learn the deep insides of what they're working with; on the other hand, every generation bemoans that the one after doesn't get things quite the way they do, and things usually turn out fine.


Whoa, that's interesting. Your description of the input scheme for the TI-82 sounds very similar to something I started building for a program editor meant to be used with motion sensors (https://www.youtube.com/watch?v=tztmgCcZaM4&feature=youtu.be...).

My plan was to always present a 3x3 grid with cells containing whichever language constructs were available from the current cursor position: after selecting one, the cursor position updates, and the options available in the cells update.

I've tried to imagine using it many, many times, but haven't been able to definitively conclude whether it would be usable or not...
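For what it's worth, the update rule you describe (pick a construct, the cursor moves, the grid repopulates) can be modeled as a tiny state machine. Here's a toy Python sketch; the grammar table and construct names are entirely made up for illustration:

```python
# Toy model of a grid-based structured editor: each cursor state maps to
# up to nine selectable constructs (the 3x3 grid), and picking one moves
# the cursor to a new state. The grammar below is invented for illustration.
GRAMMAR = {
    "statement":       ["if", "while", "assignment", "call", "return"],
    "if":              ["condition"],
    "while":           ["condition"],
    "condition":       ["comparison", "boolean-literal"],
    "comparison":      ["statement"],   # back to statement level when done
    "boolean-literal": ["statement"],
    "assignment":      ["statement"],
    "call":            ["statement"],
    "return":          ["statement"],
}

def options(state):
    """Constructs available from the current cursor position (max 9 cells)."""
    return GRAMMAR.get(state, [])[:9]

def select(state, cell):
    """Pick the construct in the given grid cell; returns the new cursor state."""
    opts = options(state)
    if not 0 <= cell < len(opts):
        raise IndexError("empty grid cell")
    return opts[cell]

# Building "if <condition>" takes two selections:
state = "statement"
state = select(state, 0)   # pick "if"
state = select(state, 0)   # pick "condition"
```

The key usability question would then be how deep these selection chains get in a real grammar, and whether muscle memory forms for the common paths (as it did with the TI menus).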


You're basically talking about an IME or software keyboard.

For something similar to your idea, I'd suggest playing with the Japanese-language keyboard for iOS.

For a bit of background: The two main Japanese writing systems are syllabic—each character represents a consonant+vowel pair, with ~15 possible consonants and 5 possible vowels, though not all possible combinations of those exist.

So, instead of attempting to give you a ~70-key keyboard (wouldn't really fit on the screen), the standard Japanese IME for touchscreen devices instead displays one key per initial consonant (indicated with the character you'd get by completing the syllable with an "a"-vowel.) Tapping the key neutrally picks said "[consonant]+a" character; swiping in each compass direction instead picks one of the four other vowel options. And holding down one of the keys displays the options as a little plus-shaped menu.


Interesting. Yes, that is very similar to what I had in mind. The ideal solution for typing in Japanese is slightly different, though, since the things you're specifying are always composed of two parts, allowing the system to always be two-step. I think I would have to go with a sequence of taps, since the things being specified could have many parts.

Let's say you select "method definition", for instance. After that (assuming we're writing Java), you'd have to select an 'access modifier', 'return type', 'method name', and 0 - N 'method parameters', and so on.


> Constraints are definitely the mother of all creativity

This is the premise of 'pataphysics, the science of imaginary solutions [0]. It has been applied passionately not only to literature (Oulipo) but to comics (Oubapo), music (Oumupo), etc., to much entertainment. I was delighted to find out at a Donald Knuth talk that he was into Oumupo.

[0] https://en.m.wikipedia.org/wiki/%27Pataphysics


I also started on a TI calculator :) Specifically, the TI-83 family.

The programming was similar, but I do know that you weren't required to select the commands from a menu. You could also type the command using the alpha key combinations. This was a good shortcut for the shorter commands. Not sure if the 82 was the same way.

I seem to remember copy/paste functionality but it has been a long while.


There is a whole community formed around those calculators. http://www.ticalc.org/


Yup they've been around for a while! Cool thing that it's still running. I remember hanging out there quite a lot back in the day.


> As if the limited power wasn't bad enough, Sakurai revealed that the Twin Famicom testbed they were using "didn’t even have keyboard support, meaning values had to be input using a trackball and an on-screen keyboard." Those kinds of visual programming languages may be fashionable now, but having a physical keyboard to type in values or edit instructions would have probably still been welcome back in the early '90s.

Leaving aside the contention over whether visual programming languages have ever been fashionable, the author has a very distorted view of programming if he thinks typing out code via an on-screen keyboard is the difference between traditional programming and "visual" programming.


Something so tight and elegant about the construction of games back then. Just look at the sprite sheets. Nothing wasted.

I think it was Woz who said that an advantage of the generation of programmers who grew up in the 8 bit era was that they could have a mental model of the entire machine, which allowed them to squeeze out every last drop of performance and do some mind blowing things with very limited power by today's standards.


Seemed like an interesting article, but while I was reading, it disappeared from view and was replaced by a photo of a resort or something.

Oh well.


Ars used to be a decent computer-enthusiast website. A little less hardcore than anandtech or [H]ardOCP. Somewhat helpful forums, but often offtopic. Obsessed with water-cooling your overclocked Celerons. But that was 1.5e-1 centuries ago, before the Condé Nast acquisition.


These are the moments when the reader mode in Safari really shines.


DNS by PiHole and uBlock Origin combine to do a damn fine job of preventing the behavior you describe.



I have JavaShit switched off everywhere except where I explicitly allow it, so I never encounter these problems.

The solution to most of these woes is as simple as not allowing other people to run arbitrary programs on your computer, which even Nintendo programmers from way back when probably understood but seems to elude many today.


My brain couldn't even grasp the thought of it; I read the title as "worked out with a keyboard" five times and expected to see some sort of crazy exercise equipment when I opened the article. I even thought that ugly trackball mouse was some kind of workout machine until I finally read the first paragraph. That had to really mess with his hand, programming that way.


Ironically, you are correct [0]. Sakurai's health has become a bit of a meme in the Super Smash Bros. community over the last few years. I can't be 100% sure whether his issues were related to trackball programming, but I am sure it didn't help.

[0] http://www.polygon.com/2013/2/27/4035046/why-masahiro-sakura...


Trackball use alone, even for normal mousing, could play a big part. I really liked them, but I haven't used one since the late '90s because the fatigue was very noticeable, especially on the top of my hand. My hands feel weird just thinking about using one again.


Amazing dedication!

What struck me about this person's situation is: why didn't he and other engineers there work together and jerry-rig something with a PC to interact with the Twin Famicom and on-screen keyboard using the trackball's protocol?

Perhaps they eventually did!


The most primitive system I ever programmed on had a hexadecimal number pad and an LED display. You punched in an address, hit "addr", and it showed the current value at that address. There were also "next" and "prev" buttons.

You could then enter a new value on the hex pad and hit "enter".

Once you were satisfied, you'd point the reset vector to your program, and hit "reset" to start the chip going.
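For anyone who never used one: those front panels amount to a handful of operations over a flat memory. A toy Python model, invented for illustration (the 0xFFFC reset vector and the auto-advance on "enter" are assumptions, loosely in the 6502 style):

```python
# Toy model of a hex-keypad monitor: flat memory, a current address,
# and the handful of operations the front panel exposed.
class Monitor:
    RESET_VECTOR = 0xFFFC  # where the CPU reads its start address (6502 style)

    def __init__(self, size=0x10000):
        self.mem = bytearray(size)
        self.addr = 0

    def set_addr(self, addr):   # punch in an address, hit "addr"
        self.addr = addr & 0xFFFF

    def next(self):             # "next" button
        self.addr = (self.addr + 1) & 0xFFFF

    def prev(self):             # "prev" button
        self.addr = (self.addr - 1) & 0xFFFF

    def show(self):             # the LED display: current value at the address
        return self.mem[self.addr]

    def enter(self, value):     # key in a new value, hit "enter"
        self.mem[self.addr] = value & 0xFF
        self.next()             # assume the monitor auto-advances

    def point_reset_vector(self, target):
        # store the program's start address, little-endian 16-bit
        self.mem[self.RESET_VECTOR] = target & 0xFF
        self.mem[self.RESET_VECTOR + 1] = (target >> 8) & 0xFF

# Keying in a two-byte program at 0x0200, then pointing reset at it:
m = Monitor()
m.set_addr(0x0200)
m.enter(0xA9)   # 6502 "LDA immediate" opcode, as an example
m.enter(0x2A)
m.point_reset_vector(0x0200)
```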


And employees complain today if they don't have a fridge stocked with free Jamba Juice!


Managers beg you, "What can I put in the fridge?!" It's not a one-way street, because managers are just as fearful of losing engineers as engineers are fond of the perks.



I kinda thought the article was going to be about programmers who marked up a line-printer output and handed it off to someone who keyed the changes into a line editor (or onto punch cards).



I'll never complain about Centura again.


Obligatory xkcd: https://xkcd.com/378/



