How I coded in 1985 (jgc.org)
470 points by jgrahamc on April 29, 2013 | 118 comments



That headline got me to thinking...

  Date:     Monday, April 29, 1985
  Age:      29
  Location: Santa Ana, California.
  Company:  electronics manufacturer
  Setup:    dumb black & green 24x80 14" CRT terminal
  Hardware: Honeywell mini
  OS:       Honeywell proprietary
  DBMS:     Pick (Ultimate Flavor)
  Language: BASIC
  App:      Work Order Processing (I wrote from scratch.)

  Date:     Monday, April 29, 2013
  Age:      57
  Location: Miami, Florida
  Company:  aerospace manufacturer
  Setup:    Windows PC, 19" & 32" flat-screen monitors
  Hardware: who knows
  OS:       Unix
  DBMS:     Pick (Unidata flavor)
  Language: BASIC
  App:      Work Order Processing (I'm writing from scratch.)
That's right, everything I wrote in 1985 would compile and run perfectly in the environment I'm connected to in my other session right now. And everything I'm writing today would have also worked on that mini-computer back then. I wonder how many other programmers could make the same claim.

I have learned and continue to use many other client/server and web-based technologies since then but somehow, I always come back to "old faithful".


How did I know this story would have your comment and it would be up top? :)

To still add something to the discussion: I never coded on a black & green CRT, but I did learn programming in an environment doing 24x80 16-colour ... graphics? Text graphics. (This was ~1994.)

It was fun. For the first few years of my computing experience I never really left DOS and kept trying to reproduce the shiny Windows stuff in there. I still think that was better and more fun than when I briefly tried learning how to make "real applications" before jumping ship to the web.


My story is similar to yours, even the year 1994.

I was totally engrossed in writing custom batch files using ANSI.SYS escape sequences to create text "graphics."
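(For anyone who never met it: ANSI.SYS watched console output for escape sequences, so a batch file could ECHO colour and cursor-positioning codes. The sequences themselves are plain ANSI/VT100; here's a minimal sketch of the same codes in C, leaving out the batch-file mechanics of getting a literal ESC byte onto an ECHO line.)

    #include <stdio.h>

    /* The same escape sequences ANSI.SYS interpreted:
       ESC[2J clears the screen, ESC[row;colH moves the cursor,
       ESC[...m sets colours, ESC[0m resets. */
    int main(void)
    {
        printf("\x1b[2J");        /* clear screen                    */
        printf("\x1b[5;10H");     /* cursor to row 5, column 10      */
        printf("\x1b[1;33;44m");  /* bright yellow text on blue      */
        printf("Hello from 1994");
        printf("\x1b[0m\n");      /* reset attributes                */
        return 0;
    }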


My story is also similar to yours, but it involves QBasic, Mode 13h, and 1995. :)


How can you not go crazy doing the same thing over and over? My first commercial software project was in BASIC, back on a Commodore PET (a billing app for a shared telex system), but there's no way on earth I'd want to be still using BASIC, or Pick for that matter. The one constant for me over the years has been Unix (from Apollo, to HP-UX, to Sun, to Linux), but if you try new tech stacks you'd never want to go back to systems from 40+ years ago.

While I think there's money to be made in maintaining those old systems, I would just go insane not keeping up with technology and using modern stacks. CS has progressed for a reason, the level of abstraction today is so much higher that we don't have to write from scratch all the time, there are well tried and tested components.

Setup=Windows PC && OS=Unix doesn't seem to make much sense, unless you mean Windows as a terminal (putty et al.)

Now, back to Python...


> How can you not go crazy doing the same thing over and over?

Because I'm not doing the same thing over and over. I'm just using the same technology. I rarely encounter things I can't do with it. I love Pick more than anything else I've ever used.

> but there's no way on earth I'd want to be still using BASIC, or Pick for that matter

Pick is almost like a cult and for good reason. It's incredibly elegant, simple, and powerful. Most Pick programmers, even after learning newer technologies, still love Pick and use it whenever practical.

> but if you try new tech stacks you'd never want to go back to systems from 40+ years ago

Exactly the opposite of my experience. I've tried many tech stacks and have come to appreciate their pros and cons. But I love coming back to "old faithful". With Pick, I spend most of my time on the problem at hand, not the tech stack. Not so true for many more modern technologies.

> I would just go insane not keeping up with technology and using modern stacks.

I never said I didn't keep up. I do and I love to. The thing that would make me go insane: not having fresh customers with fresh problems.

> CS has progressed for a reason

Make no mistake about it; the biggest reason has been the internet. So much CRUD technology is so powerful and stable, there hasn't been as much need for "progress".

> the level of abstraction today is so much higher that we don't have to write from scratch all the time

I never write from scratch. I use the same 30 or 40 building blocks for every project.

> there are well tried and tested components

I'd say that Pick's 40 years and millions of apps make it well tried and tested, too.

> unless you mean Windows as a terminal (putty et al.)

I do. Putty on my left, Firefox on my right.

> Now, back to Python...

Oy.


> I wonder how many other programmers could make the same claim.

Probably a lot more than you think.

I've seen estimates of 1-2 million COBOL developers worldwide.


You're connected right now to a Pick system? From your point of view, how does Pick compare to the current wave of NoSQL databases, practically speaking? What are the advantages and disadvantages?


UniData/UniVerse were acquired by IBM and later sold off to Rocket Software. As I recall (this could be total folklore) the product team wasn't getting the love from IBM and managed to convince IBM to sell off the BU.

Rocket and IBM have done a bit with U2 to open up the multi-value data files and UniBASIC programs to "open systems", i.e., Java, .NET, SOAP, REST-ish, etc.

It's still a bit of a pain because values are strings, one has to extract MV/SV data by tokenizing along delimiters, there's a massive amount of data duplication (no FK relationships), multi-byte causes problems with older programs (and many, many of them are very old programs), and there's no concept of data queues as you might be expecting if you've worked in an OS/400|i Series|System i|i|etc. shop with RPG.
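(For readers who have never seen a multi-value record: the whole record is one string, with attributes conventionally separated by char 254, values within an attribute by char 253, and subvalues by char 252. The tokenizing mentioned above amounts to walking those delimiters; a rough C sketch, assuming those conventional marks, of the attribute.value positions that a Pick BASIC REC<A,V> extraction addresses:)

    #include <stdio.h>

    /* Conventional Pick delimiters (an assumption about the byte
       values used for attribute and value marks). */
    #define AM 0xFE   /* attribute mark */
    #define VM 0xFD   /* value mark     */

    /* Print each value with its attribute.value position. */
    static void dump_record(const unsigned char *rec)
    {
        int attr = 1, val = 1;
        printf("%d.%d: ", attr, val);
        for (; *rec; rec++) {
            if (*rec == AM)      { attr++; val = 1; printf("\n%d.%d: ", attr, val); }
            else if (*rec == VM) { val++;           printf("\n%d.%d: ", attr, val); }
            else                 putchar(*rec);
        }
        putchar('\n');
    }

    int main(void)
    {
        /* a name, two phone numbers as one multi-valued attribute, a city */
        dump_record((const unsigned char *)
                    "SMITH\xFE" "555-1212\xFD" "555-3434\xFE" "MIAMI");
        return 0;
    }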

That said, it runs happily on Linux and is quite robust.

Many of the problems relate to programming practice. As you can imagine, there are some "lifers" who don't stretch themselves, argue GOTO vs. GOSUB, copy-and-modify existing programs for new requirements, and don't use source control or automated unit testing. Then there are those who refactor 17 copied versions of programs doing basically the same thing into a modular package and slap an object-ish API on top.

With a rational approach, the environment is actually quite pleasant, but it does take some discipline. It is difficult to retrofit a more rigorous engineering process onto legacy code, but that's the case whether it's Pick/UniData or any other programming environment.


I haven't used NoSQL databases, so I don't have an opinion on that comparison. As for more traditional relational DBMSs (MySQL, Postgres, Oracle), there are many differences which could be considered either advantages or disadvantages, depending on who you ask. Examples:

  - level of normalization determined by programmer, not DBA
  - relationships (joins, etc.) enforced at program level, not schema
  - no typing (everything is a string)
  - DBMS performance is mathematically predictable & determined (hashing; see the sketch after this list)
  - everything is variable length
  - anything can be multi-valued or multi-subvalued
  - No schema needed. Just code.
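(On the hashing point above: a record ID hashes straight to one of a fixed number of groups, the file's "modulo", so a read touches one group no matter how big the file gets, provided the modulo is sized sensibly. The real Pick hash function differs; this toy C version just shows the shape:)

    #include <stdio.h>

    /* Toy stand-in for a Pick hashed file: key -> group number.
       The actual hash Pick uses is different; the point is only
       that lookup cost is one group, independent of file size. */
    static unsigned group_for(const char *id, unsigned modulo)
    {
        unsigned long h = 0;
        while (*id)
            h = h * 31 + (unsigned char)*id++;   /* any simple mix will do */
        return (unsigned)(h % modulo);
    }

    int main(void)
    {
        const char *ids[] = { "WO1001", "WO1002", "PART-7734" };
        for (int i = 0; i < 3; i++)
            printf("%-10s -> group %u\n", ids[i], group_for(ids[i], 101));
        return 0;
    }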


It sounds a lot like the benefits of one flavor of the NoSQL databases — thus my question. (Another flavor of NoSQL, stemming from BigTable, is more focused on horizontal scalability and partition-tolerance.)


but isn't it questionable that you're (seemingly) writing the same stuff, from scratch, for 28 years now? shouldn't there be a work order processing standard solution for aerospace by now?

like an engineer re-inventing the process to build bricks anytime a new building is requested.

industry observation, not personal.


I'm not writing the same stuff. I've written over a thousand different custom applications using the same technology. That was my point. This environment is so tried and true and so effective, it still kicks ass after 28 years. Pretty amazing when you think about how much has changed.

> shouldn't there be a work order processing standard solution for aerospace by now?

I'm sure there is. Just like there are blog, website, & CMS standard solutions. And plenty of people using them. I never see them because those people would never call me. Like many other programmers here, I imagine, we only see the customers who need something custom, almost always for good reason.


+1 for Pick. I've met several people who were/are Pick devotees. Also heard stories about how Dick Pick was kind of crazy.


My mentor knew Dick, and I also heard stories!


A shameless plug for people wondering "what is this PICK thing?" I host http://www.pickwiki.com, in an effort to capture some of those fascinating (to me) stories. The guy that created it really was called Dick Pick.

I wouldn't choose PICK for a brand new project today, but as Ed says, there's life in the old dog yet...


sounds like you need a new mentor


So how do you keep trying new environments and, after evaluating them, return to your preferred one?

Asking because 28 years is an eternity where IT is concerned - or at least that's what we hear day in, day out.


> shouldn't there be a work order processing standard solution for aerospace by now?

There are probably several, all mutually incompatible in some way, and chances are every company picks bits from each to make their own "local" standard (a local standard that gets reworked every time the name associated with a C*O title changes).

Sometimes we are not in a position to control the constant reinvention of the wheel and just have to get on with it & accept the paycheck.


The aerospace industry is a license to print money for any vendor involved.


I'm wondering, is the dialect and feature set of the BASIC you use now the same as the one you used back then? I imagine the largest factor in that list is the language.

I commonly have to code in older versions of my preferred language, and it's always a bit grating. I'm always pining for the newer features that I've grown accustomed to.


Damn, I love reading your comments. Although my early memories were from typing BASIC programs into a Commodore 64...from the back of a magazine! (I'm a little younger than you).


I last played with Pick and Pick BASIC back in 1999 (and I used the UniData flavor). Here is what I remember of Pick BASIC: an overly simplistic version of BASIC without the advancements of scoped variables and methods; you still had to use GOTO <numbered-label>.

I don't miss that language one bit.

Now PICK as a data store: not bad, but it did nothing especially well either. Could be I have the reverse of positive nostalgia on this one.


Strange how much code I wrote on an 80 char x 20 line Beehive terminal in RSX-11 and later on an 80 x 25 line Heathkit H29 (woo hoo, detachable keyboard!). Now with arbitrarily sized screens I really can't go back to just "one" (or in the case of the VT420, "two") screens' worth of text.


Holy crap. I'm the annoying chubby suit bringing in all the Java, Rails, and ecommerce people.

Small world.


This is neat stuff, but it also shows how far behind some schools can be in technology. Yes, I did the hex-keypad-assembler-in-your-head thing, but, by 1985 there were far better options. One word:

  Forth
Funnily enough, I built an eleven-processor parallel-ish computer in 1985 to control a large (three feet high) walking robot. That's in the days when you couldn't buy kits for this stuff. You had to design all of the mechanical bits, machine the parts, design the electronics, build the boards and then program it yourself.

The machine had eleven 6502's plus an additional one acting as the supervisor and comms link to a PC. The entire system was programmed in Forth, which I mostly rolled out from scratch. This included writing my own screen-based text editor. The PC ran APL.

The project was a bit more elaborate than that. As a very --very-- green college student I ended up presenting a paper on this project at an international ACM conference.

Once you bootstrap an embedded system with Forth the world changes. In '85 there were few reasons to slog it out using hand-coded assembler to automate a labeling machine. My guess is that he and/or his school did not have any exposure to the growing Forth embedded community and so they ended up doing it the hard way. That's still good. You learn a lot either way.


My experience so far is that my Forth is shorter, but my assembly has fewer bugs. Maybe this means that Forth wins for larger projects, because it's harder to read but easier to write?


Perhaps your last few words reveal the reason for the bugs? Forth --or any other language-- is hard to read only when someone doesn't know it or does not use it frequently enough. For example, I programmed in Lisp nearly every day for a couple of years. I could read Lisp without even thinking about it. That was fifteen years ago. Today I would struggle to make sense out of a non-trivial Lisp program for a couple of weeks.

If you use Forth as intended I can't see how you would introduce more bugs when compared to anything. Few languages are inherently buggy. Bugs can nearly always be traced to the programmer, not the language.


All bugs can be traced to the programmer, of course. I find Forth error-prone because it's untyped (not even dynamically typed) and you have parameter-passing errors, where you pass a parameter or get a return value from the wrong place. Traditionally, too, you don't have local variables, but I think that's not actually nearly as big a problem as it sounds.

That said, my last few words were exactly backward from what I meant to say. Harder to write than assembly, because you have to take more care to avoid stack-effect bugs and you don't have local variables; but easier to read, because your Forth isn't full of arbitrary noise about which register you're using for what.


You are not looking at Forth correctly. It isn't C, C++ or Java. It's a "raw" language that you should use to write a DSL. It's just above assembler yet many orders of magnitude faster to develop with.

Even with your comment backwards it makes little sense. I have developed in nearly every language between typing raw machine code with a hex keypad and Objective-C (not to imply that O-C is at the opposite end of the scale). I did years of Forth. Not one thing you are saying rings true. If you know a language you speak it. It's as simple as that.


Oh, I know that about the EDSL nature of Forth, and it's entirely possible that if I really knew Forth, my experience would be different.

On the other hand, if you were right that Forth is many orders of magnitude faster to develop with than assembler, we'd see people writing HTML5-compliant web browsers in Forth in a few hours. (I'm figuring: 100 person-years for e.g. Chromium, multiplied by ten for assembly, divided by six orders of magnitude (because five isn't "many") gives you about 8 hours.) So I suspect you're living in some kind of a fantasy world.


Nope. Wrong again. Writing a browser isn't the right fit for Forth. Could you? Sure. Should you? Nope.

Forth is great to develop with but comes with liabilities if used on the wrong project. I remember a product a friend of mine did using Forth. High-end embedded system with custom hardware, all selling for tens of thousands of dollars per unit. When he started to look into selling the business, Forth became a liability. Nothing wrong with it. The code worked well and was mostly bug-free. The company bidding to acquire his business did not do Forth and saw it as an impediment. They, like a million others, did C. Ultimately he sold the company for $16 million with a requirement to translate the code base to C under contract.

Today I use C for any time I need to do embedded work. Having a foundation in a diverse range of languages means that you tend to use a language like C as a toolset to create an efficient development environment through domain specific libraries, data structures and data representation.

I rarely find the need to get down to Forth these days. It made a lot of sense with 8 bit resource-restricted microprocessors. Today you can have a 16 or 32 bit embedded processor for a few bucks and have it run at 25, 50 or 100MHz. Different game.


I'm still skeptical about this "many orders of magnitude" nonsense. Even on an 8-bit resource-restricted microprocessor, what could you write with Forth in a day that would take half a century with a team of 60 people in assembly? Or what could you write with Forth in ten minutes that would take you 60 years in assembly? I think you're talking nonsense and being condescending to try to cover it up.

And if almost all projects are "the wrong project" for Forth (which seems to be what you're saying, although I don't agree with it) then does it make sense to compare it to a general-purpose language like assembly? You make it sound like a DSL like SQL, not like a framework for all DSLs.


You know. You are right. I don't know what I am talking about. Thanks for helping me sort that out. Good day.


I'm not accusing you of ignorance; I'm accusing you of talking nonsense, which is to say, writing a bunch of bullshit without regard to whether it's true or false. I'm sure you know perfectly well that you're talking nonsense, as does anybody else who's read this thread, and that's why you're responding with this kind of diversionary aggressive bluster.

But I find it disappointing, and I wish you'd engage the conversation at a rational level instead of bullshitting and sarcasm. It's a missed opportunity to share your knowledge — not with me, but with whoever else reads the thread in the future.


"I'm sure you know perfecly well that you are talking nonsense, as does anybody else who's reading this thread"

How do you expect me to answer something like that?

You don't know the language and are intent on having an argument with me about this?

When Forth is used correctly productivity gains become exponential (the limit being application dependent). So, yes, when used correctly you can go 10, 100 and even 1,000 times faster than assembler.

A good comparison would be to look at the productivity gains you get when writing raw Python vs. using a library such as SciPy. The productivity gains are orders of magnitude greater than the raw language. And even raw Python is orders of magnitude faster than writing assembler.

What I said was that [Forth] "It's just above assembler yet many orders of magnitude faster to develop with," and you seem to think this is nonsense. Well, you are wrong. What else do you want me to say? Go learn the language and we can have a conversation. Using it casually doesn't produce the same level of understanding you get from dedicated non-trivial usage. This is true of any language or technology.

I am not insulting or diminishing you. Perhaps you are taking it that way. Relax. It's OK to not know everything, I certainly don't and I've been designing electronics and writing software for a long time.

Here's a place to start:

http://www.ultratechnology.com/forth.htm

Here's a good Q&A:

http://www.inventio.co.uk/forthvsc.htm

If I have the time later tonight I might post a follow-up with an example of what development flow might be for, say, developing code for a multi-axis CNC machine.


The key argument behind this thread centered on my assertion that Forth can be orders of magnitude faster than assembler. That means 10, 100, 1000 --or more-- times faster to code a solution in Forth than in assembler.

I haven't done any serious work in Forth in about ten years, so coming up with an example for this thread would have consumed time I simply don't have right now.

The reality is that ANY language is orders of magnitude faster than assembler. The idea that this assertion is being challenged at all is, well, surprising.

I happen to be working on a project that, among other things, makes extensive use of nested state machines. The main state machine has 72 states and some of the children FSM's have up to a dozen states. Writing this in C it takes mere minutes to lay down a bug free structure for the execution of the entire FSM set. It should go without saying that doing the same in assembler would take far longer and result in a mess of code that would be difficult to maintain.
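(Purely as an illustration of that "mere minutes" claim, this is the table-driven shape such a design usually takes in C; the state and event names here are invented, not from any real project:)

    #include <stdio.h>

    /* Table-driven FSM skeleton: each (state, event) cell names the
       next state and an action.  A child FSM is just another table
       driven from a parent state's action. */
    typedef enum { ST_IDLE, ST_RUN, ST_FAULT, NSTATES } state_t;
    typedef enum { EV_START, EV_STOP, EV_ERROR, NEVENTS } event_t;

    typedef struct { state_t next; void (*action)(void); } transition_t;

    static void on_start(void) { puts("starting"); }
    static void on_stop(void)  { puts("stopping"); }
    static void on_fault(void) { puts("fault!");   }
    static void ignore(void)   { }

    static const transition_t table[NSTATES][NEVENTS] = {
        [ST_IDLE]  = { [EV_START] = {ST_RUN,   on_start},
                       [EV_STOP]  = {ST_IDLE,  ignore},
                       [EV_ERROR] = {ST_FAULT, on_fault} },
        [ST_RUN]   = { [EV_START] = {ST_RUN,   ignore},
                       [EV_STOP]  = {ST_IDLE,  on_stop},
                       [EV_ERROR] = {ST_FAULT, on_fault} },
        [ST_FAULT] = { [EV_START] = {ST_FAULT, ignore},
                       [EV_STOP]  = {ST_IDLE,  on_stop},
                       [EV_ERROR] = {ST_FAULT, ignore} },
    };

    int main(void)
    {
        state_t s = ST_IDLE;
        event_t script[] = { EV_START, EV_ERROR, EV_STOP };
        for (int i = 0; i < 3; i++) {
            table[s][script[i]].action();
            s = table[s][script[i]].next;
        }
        return 0;
    }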

Assembler has its place. I have written device drivers, disk controllers, motor controllers, fast FIR filters, pulse/frequency/phase measurement and a myriad of other routines across a number of processors all in assembler. These days I'd venture to say the vast majority of embedded systems are done in C. Coding is faster, far more maintainable and embedded optimizing compilers do an excellent job of producing good, tight and fast machine code. As much as I love and enjoy Forth this is one of the reasons I rarely use it these days.

I still think it is important to learn about TIL's as it adds a layer of thinking outside the box one would not otherwise have.


I agree that learning about threaded interpretive languages is important, despite the fact you allude to: that Forth compilers (at least those I'm familiar with) produce deeply suboptimal code.

I agree that writing in a high-level language is faster — quite aside from the availability of library functionality like hash tables and heaps and PNG decoders and whatnot, you have less code to write and to read, and less fiddly decisions about register assignment and memory allocation to make, possibly get wrong, and have to debug.

But that difference is a constant factor — it's not going to be even 1000, and it's never going to approach "many orders of magnitude", which was the nonsense claim I took most issue with. (Two or three isn't "many" in my vocabulary.) Typically I think it's about 10 to 30, which is nothing to sneeze at in the real world, but which would be insignificant compared to your absurd earlier claims.

You can get a "many orders of magnitude" boost — rarely — from libraries or static checking. Neither of these is Forth's strong suit.

Nested state machines are actually an interesting case — often by far the most convenient way to write them is with one thread per state machine. C by itself doesn't give you the tools to do that predictably, because you can't tell how much stack space you're going to need — it depends on your compiler and compilation options; assembly and Forth do. So it's one of the rare things that might actually be easier to do in assembly than in C. (But if you have some memory to spare, maybe setcontext() is plenty good enough.)
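(A sketch of the one-thread-per-state-machine idea using the setcontext() family mentioned above, assuming POSIX ucontext is available; the API is obsolescent but still works on Linux:)

    #include <stdio.h>
    #include <ucontext.h>

    /* One coroutine per state machine: the machine keeps its state
       implicitly in its own stack and yields back to the caller. */
    static ucontext_t main_ctx, fsm_ctx;

    static void fsm(void)
    {
        for (int step = 1; step <= 3; step++) {
            printf("fsm: step %d\n", step);
            swapcontext(&fsm_ctx, &main_ctx);   /* yield to main */
        }
    }

    int main(void)
    {
        static long long stack[8192];           /* 64 KB coroutine stack */

        getcontext(&fsm_ctx);
        fsm_ctx.uc_stack.ss_sp   = stack;
        fsm_ctx.uc_stack.ss_size = sizeof stack;
        fsm_ctx.uc_link          = &main_ctx;   /* where fsm() returns to */
        makecontext(&fsm_ctx, fsm, 0);

        for (int i = 0; i < 3; i++) {
            printf("main: resuming fsm\n");
            swapcontext(&main_ctx, &fsm_ctx);
        }
        return 0;
    }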


The difference between 'grok' and 'use'.


Oops, I meant "easier to read but harder to write".


It's kind of useful to write out code by hand, or at least pseudo-code... I'm not a neuroscientist, but I wouldn't be surprised if it turns on a different part of the brain. Last week I was sitting in the park with nothing to do and I reflexively pulled out my iPod to play Sudoku on it. Since it was one of our first nice days I felt bad about that, so I just wrote out the Sudoku grid on a pad of paper and solved it from there. It was surprisingly difficult, even though it was an easy puzzle (for starters, the iOS version highlights which minigrids on the main grid a given number appears in).

Anyway, anytime I think about what a pain in the ass writing, or even just typing out, code can be, I refer to the chapter in iWoz in which, lacking development software at the time, he "wrote out the software [for Little Brick Out] with pen and paper and then hand assembled the instructions into 4096 bytes of raw machine code for the 6502".

http://en.wikipedia.org/wiki/Integer_BASIC


I agree with you. I think the physical motion of writing is involved in the process, activating neurons that just flicking fingers doesn't.

Would be fascinating to see it in an fMRI.


There is a psychiatric condition known as 'graphomania' in which one has an obsessive compulsion to write, the content being secondary in importance. So it's conceivable that such a structure in the brain exists and can become overstimulated.

https://en.wikipedia.org/wiki/Graphomania


Brain debugger.

I honed mine early enough that the most I need from a program is a few Sprintf statements just to give the brain debugger some context when the application is dealing with external data or data transformation.

I personally find it hard to turn off the brain debugger and rely on software. Originally this was because you'd very much be used to the debugger being wrong. As in, "Ah, but in this case the debugger thinks X but Y is the case", and later just because the brain debugger had become so effective that not using it seemed like a lot of hard work for little extra gain.

How other programmers visualise or handle code in their brain, and how they put together new functionality in their head, is deeply fascinating. Even if you have self-awareness enough to grasp the basics of how you do it, you can be reasonably sure that each of us does it slightly differently.


I can often do that on code I've written or am intimately familiar with, but tons of the code I work on is code that someone else wrote and that I rarely interact with. As a result, I love my debugger.


In my experience this only works if the feedback cycle is quick. If working with a language like ruby or python and just refreshing webpages or running a script, no big deal. However, I then would have to wade through logger output to filter out the information I needed. I ultimately found I appreciated a debugger such as pry.

Then I started writing for Android and the time between compilation, installation, running the app, and then reproducing the scenario was too long and a debugger was a godsend.


Let's transcribe the code to text and test whether it runs?

I've made a start, but it's taking a little longer than I'd thought (the handwriting is often hard to read). Let's do some crowdsourcing?

https://docs.google.com/document/d/192TW6ghnmMFnVaNvIe98aqTG...

Edit (+12mins): I see lots of people viewing, but not a single edit over the past 10 minutes.

Edit (+60mins): One other anonymous user helped at last and the document is complete, but didn't leave his username in the contributors list yet. Anyway, thanks!

Now the big question is how to run this. If you have any idea how to run it, let us know!


There are manuals and an emulator here: http://www.floodgap.com/retrobits/kim-1/


Will accept pull requests.


The mental debugger is definitely one of the tools I'm glad I developed early.

For me, it was by reverse engineering. I started cracking software and eventually moved on to white-hat black box security auditing, and that quickly taught me how to evaluate execution flow mentally.

I find that even though I'm writing high-level Ruby web apps now, my ability to rapidly follow code around in my head lets me debug more quickly and effectively than many of my co-workers.

I firmly recommend trying reverse-engineering for anyone who hasn't - it will forcibly provide a lot of the same mental execution mapping abilities while feeling more relevant than writing machine code or assembler out on a piece of paper. And once you learn the basics, everything transfers back up to higher-level languages pretty well (with the exception of mental hex arithmetic, which will still come in handy as soon as you segfault your high-level language's runtime). Plus, when reverse-engineering, you can't fall back or get frustrated and use a compiler - unless you have an IDA + HexRays license for some reason, you're stuck figuring things out yourself.

As a side-note, it's always fun watching my dad work - he's an oldschool mainframe guy and he'll sometimes solve common setup issues using JCL or REXX stuff with a modified date sometime in the late 1980s.


This brings back a lot of memories ... Hand-compiling code for a COSMAC ELF (http://www.cosmacelf.com/docs.htm) and punching it into memory on a hex keyboard (the original had a switch for each bit and 4 push-buttons for load, run, etc).

By the time I was coding for 6502 and 8088 processors (still in assembly language - I was after all an embedded engineer), I had assemblers and an 80-column by 43-line text editor.

Aren't we spoiled today? I wouldn't want to go back, but I've also found that the low-level experience with machine code is something many "newer engineers" are missing ... it's an appreciation of the hardware that you can't get any other way.


Computer Engineering still offers this experience. It's the main reason I picked this major when I returned to school after many years as a programmer. Most of the graduates from here go on to software development. The low level experience makes them well suited to work on embedded systems, device drivers, and operating systems.


I did my bachelors (in India) in the mid-90s, and we all coded this way in our 8085 labs. Many of us even did final-year projects on similar-looking 8085 kits.

Given that such kits are still sold [1] in India, I guess quite a few engineering students still learn to code like that.

My bosses, though, had all worked on punched cards.

[1]: http://www.dynalogindia.com/products/education-solutions/808...


>>Given that such kits are still sold [1] in India, I guess quite a few engineering students still learn to code like that.

Did my Bachelors (in India) in the 2010s, and yes, even we coded this way in our 8085 labs!!! Nothing has changed. Talked to my cousin who is now in his 3rd semester; they too start out coding the same way. It's 2013 and NOTHING has changed.


Not to go all Yorkshire, but I used to dream of having a KIM-1. My first computer — the best thing I could afford — was a Quest Super Elf [ http://www.oldcomputermuseum.com/super_elf.html ] with 256 bytes of RAM, and a processor on which a recursive subroutine call took 16 instructions.

On the other hand, by 1985 I was doing QA for a 64-bit Unix environment.


You're joking; this was a wonderful machine. Especially the CDP1802 processor with its ability to use any register as a PC. I have very fond memories of this chip.


In some ways it was nice. For those unfamiliar:

The 1802 had sixteen general purpose 16-bit registers. Any one of these could be selected as the program counter. In a tiny embedded system (which is what the part was meant for), you might choose to designate one as the ‘normal’ PC and reserve a few others for important subroutines, which could then be invoked with a one-byte ‘SEP n’ instruction. Similarly you could implement coroutines or simple task switching by switching the selected PC between a pair of registers.

On the other hand, there was no conventional call instruction. The SCRT (“Standard Call and Return Technique”) for recursive or reentrant subroutines essentially involved defining (non-reentrant) coroutines to perform the ‘recursive call’ and ‘recursive return’ operations.


I remember playing around with machine instructions on the 6502. I wouldn't call it hand assembly; it was more like copying and groping around. I probably wrote less than 200 lines in my life.

But what a neat feeling! There's something about putting in these numerical codes and seeing a graphical result that I haven't experienced since then. Yes, higher-level languages rock, but dang, this is coding next to the metal.

After learning a bit about logic gates and playing around with the theory, there was an incredible feeling of discovery. Somehow I had deciphered the code of computers, and was finally speaking their native language. This brought back great memories. Thank you.


I know that feeling. My undergrad CS course had a rigorous electronics component where we coded in ASM for the 8086. One of my fondest moments of that time was when I wrote a program that would split the terminal screen into two halves of different color, and display what you typed on one side and the inverse case on the other.

This could probably be done with less than 50 lines in python, but doing it in assembly, calling all those interrupt routines, making sure your registers contained all the right values, man, when it finally worked, it was the greatest feeling in the world!


Wasn't it? And for that moment, you could look out and imagine "Damn, I could code the whole freaking thing like this!"

Then you realize it would take around 30 years, but still -- it was doable. You finally figured out how it all worked. You could make the computer do anything it was capable of doing. There was no more mystery there.

Fun stuff.


Do you really think it would take 30 years? The NAND-To-Tetris kids do most of it in a semester, and they're starting at the level of individual gates.

I mean, Chuck Moore has been doing it for about 30 years, but he had a working top-to-bottom system after less than five years.

Here's what I think it would look like:

Week 0: build a working interactive Forth system that can compile itself, in machine code. This is assuming you already have hardware and a way to boot it into some code you wrote. If not, that could be a matter of minutes or of months.

Weeks 1-3: build an interactive high-level language interpreter on top of it. Say, JS.

Week 4: enough of TCP/IP to do HTTP GET.

Week 5: write a bitmap font editor and a minimal filesystem.

Weeks 6-8: parse HTML (more or less), lay it out with a simplified box model, and draw it to pixels on a canvas, using a pixel font.

Week 9: cookies, HTTP POST, XMLHttpRequest.

Week 10: a more elaborate filesystem, maybe based on Git.

Weeks 11-12: some more HTML5 features, maybe including WebGL.

Weeks 13-14: TLS.

This is sort of cheating because you're taking advantage of our previous 70 years of experience to figure out what's worth building, and you're likely to end up spending a fair bit of time going back and retrofitting stuff onto previous iterations — JIT compilation, say, and multitasking — but it still seems like rebuilding the personal computing environment ought to be more like a one-semester project than a 30-year project.

You could argue that you get a huge speedup by not writing stuff in assembly, and I agree, but I think that's only a constant factor. You could easily write assembly code that maps directly onto JS semantics:

        ;;; invoke getContext("2d") on the canvas object
        .data
    sgetContext: .asciz "getContext"
    s2d: .asciz "2d"
        .text
        mov $sgetContext, %eax
        call GetProp
        mov $s2d, %edx
        call InvokeMethod
That's more code to write and read, but only by a linear factor. So it might take you two semesters, or four, but not 60.


I, too, know that feel bro.

At 12 I had learned how to program, in Z80 assembly, a routine to output to the graphics card of my dad's ancient-even-then Radio Shack model 16. I created a custom font that looked like the beautiful VGA font and used it to render 16-bit numbers in gorgeous decimal digits.


Dude wrote in pen. We are not worthy, brothers and sisters. We are not worthy.


As did most of my EE brethren in school in 2009. Trust me, writing code in pen does not a master programmer necessarily make.


Jokes aside, writing down something like OP's software in pen from scratch would require some degree of proficiency and confidence. I can still see some pencil around the paper, in any case :) (Not to diminish the achievement - As a young programmer, I'm always impressed when I see these things and I'd love to see more and more of them!)


>>Dude wrote in pen.

Most of us still do. Especially while dealing in areas which are not familiar to us.


i agree, you have to give serious props for that. anytime i write any kind of prototyping/design sketch in pen, it gets all scratched up and crossed out..


It probably wasn't his first pass at the problem. :-)


That's late for hand coding assembler (and very late for a KIM 1). Last time I did any hand assembly was 81/82 for a ZX81. Graduated to an Apple //e where I wrote my own assembler using the mini-assembler in the integer basic ROM and Apple Writer I as a text editor. By 1984 I had gotten my hands on the Apple ProDOS assembler so my bodged mini-assembler got put out to pasture.

Mind you, I've just got an Atari 65XE from eBay today which I will now have to explain to my wife (I always thought the Atari graphics were neat and wanted to play with the display list stuff but never did so at the time).


The comments made me go look for Compute! magazine; 168 issues are available on archive.org: http://archive.org/details/compute-magazine


That's really fantastic that he was able to save his work from 28 years ago. I really wish I had the foresight to do the same from only five years ago, though I expect the result would be less like unearthing buried treasure and more like finding what stinks in the fridge.



I am just beginning to worry about this sort of thing (#). What age were you in 1979/82 with your Sharp computer? I have assumed I will be trying to guide my son / daughter onto emulators (nand2tetris etc.) rather than real physical machines - or are there real physical machines which are simple enough, and which you have root on, to be early hacking machines?

(#)https://github.com/lifeisstillgood/importantexperiments4kids


"are there real physical machines which are simple enough and have root on to be early hacking machines?"

If you really want to re-enact the OP article, visit Briel Computers and get a Micro-KIM. Mine works perfectly. At the 2013 Midwest Gaming Classic someone (not me) was exhibiting an original KIM (as seen in the post) and a microKIM side by side in the retrocomputing room, which my family found entertaining because I also have one mounted on a wood plaque in my "office" area. How else are you going to display one?

I was always more of a Z80 guy... speaking of Z80 computers, one currently shipping project is the N8VEM CP/M SBC project which I found pretty trivial to assemble and use, not to forget the P112 SBC as recently seen on kickstarter.

Then there's my stack of FPGA devices ranging from the micronova mercury to my boring typical Spartan boards, which twists the borders of "real hardware" and emulation.

A link to the MicroKIM1

http://www.brielcomputers.com/wordpress/?cat=24


I gave my daughters TRS-80 Model 100 computers for their tenth birthdays. It has an 8085 processor and a really quite powerful BASIC (the last software Bill Gates worked on, and it's not bad) with a bit-mapped display. They run for weeks on 4 AA batteries and have a better keyboard than most modern laptops. There is an emulator available, but since they go for about $30 on eBay, I like having the real thing.


I bought a box of these recently and they are really great; I used to have a TRS-80 Model III and a 4, so when I saw a box (don't hit me) of Model 100s for 10 euros I bought it. They're a very nice addition to my museum and the batteries last forever. The box had a Sharp PC-1211 too. Talk about battery life :)

What always amuses me is that my computers from around 2000 are not working anymore (heck, most laptops I have from the past 10 years are not even booting anymore) while computers from my parents' basement which are around 30 years old work like they just came from the shop. Even the Philips computers from that time, which had known capacitor issues in the power circuitry, work like time didn't happen.


Arduino kits are great hits with the 8-15 year olds in the family. They can play with the sample code and tweak it and see something happen "in the real world", whether it's a flashing LED or "upgrading" their Lego to torment the cat.

It's similar to those Radio Shack "3000 electronic experiments" boards that were sold in the 80s(?) and yet can scale with the kids' understanding.

Sure, it's not self-hosting, but I've noticed the physical connection to caps, resistors, diodes, etc. in a "safe" environment makes them more proud of their accomplishments than their html/css projects.


There are a few machines designed to offer similar experiences:

There's something called the Maximite, which is a single IC computer running BASIC with output to VGA and SD storage (http://geoffg.net/maximite.html)

There's something called 'Petit Computer' for Nintendo DS (http://www.petitcomputer.com/)

But it'd be great if someone started a hub for this kind of thing.


I used an East German version of the KIM (LC80) for lab experiments. In 1999! Typing in code was slow and debugging even slower, but the hardware I/O was easy, no fussing around with RS232. Ahhh, good times.


I spent several weeks during the Summer of '82 living in dorms and taking college courses at UCF. My suitemate, John, was writing Petman in machine code, on and for his PET 16. An enduring memory for me, and one which reminds me that there are programmers who are 100 times more productive than others - I was at the stage of typing in code from magazines.


Wow. I should really dig out more stuff from my box of old programs. This one blog post is at almost 100,000 page views.


And this was just a small utility. Then think about things like the Apollo program being coded the same way (or more obliquely on punch cards).

There is a finite and limited amount of complexity this allows for. We've been given quite the catch-22: the ability to build increasingly complex systems while needing to maintain those increasingly complex systems.

On whatever project you are currently working on, how much less code do you think you'd be writing if you had to hand write it? Where would the project be better for it and where would it be worse?


This weekend for the Ludum Dare someone made an implementation of Pong on his C64. In a weird turn of events he had to resort to printing out screenshots of memory dumps and running those through OCR. But the OCR was buggy and he had to check every byte by hand.

http://www.ludumdare.com/compo/2013/04/29/ponkmortem/#more-2...

Reminded me a bit of this story..


As a relatively new (5 or so years experience) programmer, 1985 sounds like hell.


Software development basically peaked in the mid 1980's.

Macintosh Common Lisp circa 1987: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V..., specifically: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...

Firebug circa 2013: http://getfirebug.com/logging, specifically http://getfirebug.com/img/logging/consoleDir.png


In the 70s and 80s programmers were working on fixing the problems that programmers faced in the 70s and 80s: memory management, GOTO spaghetti, efficient assembly etc. In response we got heaps, garbage collectors, functional languages, virtual machines, compilers, debuggers, IDEs.

Now most programmers spend most of their time dealing with problems of persistence, networking, parallelism, security, and massive codebases. In response we are creating scripting languages with slightly different syntax, adding closures to Java, and arguing about vim vs emacs. We're so obsessed with solving the problems of the 80s, my way, that we've just gotten trapped there.


Sometimes I wonder how software would be today if Smalltalk or Lisp environments had become mainstream.

Oh well.


I have fond memories of programming my "Arnesh" 68 K board around 1995. We used them in our microprocessor course, and since I had so little time in the lab, I bought my own board. After the class ended I would lend it to friends until it came back fried. One of these days I need to dig it out and try to repair it, now that I feel a little more secure with a soldering iron.


Finally, some programming this old fart recognises. Why did it all get so complicated?

Yeah, I know. Rose tinted and all that.


> Why did it all get so complicated?

Two words: 'the web'.

Slightly longer: we're re-inventing the wheel through a very roundabout path. I think right now we've gone back in time to roughly the mainframe era; it won't be long before someone invents the minicomputer all over again, only this time it will be a local cluster in a box. Next after that, the PC: a 'personal cluster' with a few hundred nodes the size of the desktop machine you had last year.


I don't think it was the web. The pre-web applications with GUI clients running on PCs and servers on mainframes were already pretty complicated (remember Microsoft's ever-changing object APIs, or CORBA?). What made things increasingly complicated was the ever-growing power of computer and communication hardware, which made it possible to create larger and more complex systems. If you're limited by having to run your software on a machine with a total of 256K of memory (like the first machine I used Unix on in 1980), you just can't build something as complex as what you can build today on a cheap laptop.


We were building clusters of PCs in the 1980's using arcnet. Those were pretty large and complex systems, they used a very elegant message passing protocol. And that definitely wasn't the only game in town for large and complex systems (but it was one of the most productive ways of constructing those systems with a small team).


And it's not the first time the mainframe has been re-invented - I remember sitting through a presentation on the technical architecture of SAP (don't laugh) and I thought "it's a software mainframe".


nice 'syntax highlighting' ;)


I was thinking the same thing. I, too, wrote many sheets of hand-coded 6502 assembly back then, but I didn't think to use a multi-colored pen.


Yeah! I used to steal "NCR programming paper sheets" from my older bro (he worked for NCR) to write most of my assembly code... (for a Sinclair Z80). Ahh, the memories of PEEK & POKE :)


I wrote code like this in 1997. First year in college, microprocessor course. So I wrote a simulator for the 6800 chip, and an assembler, so I didn't have to wait until every Saturday at 8:30am to run my code. The professor wasn't impressed at all when I showed him my program.


This made a big wave of nostalgia wash over me; I learnt to write 6510 assembler on my C64.


Articles like these make me realize I am completely spoiled. I am in no position to complain about debugging when modern technology affords me a much easier time than 30-40 years ago.


I had a KIM-1 in the late seventies and for several years I would run into other people who used one, but 1985 seems really late for programming on one of these. For me, the KIM-1 was my first experience with programming and a computer of any sort. It influenced my taste for low-level concepts, not only through x86 assembly programming for graphics in the 90s but much later in the lambda calculus and combinatory logic. I'm tickled that the KIM-1 is seeing such a resurgence of interest.


It was late to be programming one but it was (a) what was available where I was and (b) it had good I/O for the interfacing.


But that FSK demodulator was incredibly finicky. More than one tape I recall I had to try to read a lot of times while tweaking the playback speed on the recorder!


Oh, the pleasure of writing hex dumps into monitors....


I was in high school in 1985 and our computer club built a Heathkit HERO-1, which had a 6808 processor and also had to be programmed through the hex keypad. After a few months of hand writing machine code and typing the opcodes into machine, I wrote a simple assembler on the Apple //e that would then download the assembled programs to the robot through the RS-232 interface. So much fun.


My dad had one of those. So cool; I enjoyed keying in "programs" on the head keypad to make it move and speak (I was around 9 or 10). I would have loved your tool (if I could have got it for the Osborne we had then...).


heh. i also programmed a stepper motor in ~1985 (after building the driver board). but it was z80-based (and it never ran smoothly - when i left i think they were going to try adding some noise to the timing, but that didn't sound likely to me - more likely i had a crossed wire in the hardware...). anyway, i am pretty sure i had an assembler (and this was working for a one-man company in the backwoods of n yorkshire). why were you hand assembling?


one of the good things about a formal CS education (well, mine anyhow) is that they force you to do things like write in assembly. obviously there's far less immediate practical application these days, but the exercise is definitely valuable, and probably not something I would have done voluntarily. Actually, it feels like something I would have thought sounded cool, tried, and then given up on when it got annoying.


We had to make adders, flip-flops, etc (on paper, not physically) and those were used to build an ALU and (I think) a memory-switch bank.

It was very cool to know that when the world ends I can build a new computer, from scratch.


What are you going to make the logic devices from? I've speculated about this extensively in http://lists.canonical.org/pipermail/kragen-tol/2010-June/00... but I still don't have a working scratch-built computer.


You can make transistors with some... hazardous components, but you may want to start with relays or tubes and resistors (which could, theoretically, just be varying lengths/gauges of wire) and build NAND gates. It would be big, hot, noisy, slow, and a huge waste of your time, but there were computers before semiconductors.
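(The reason NAND is the usual target: it's functionally complete, so one gate design in relays or tubes is enough to wire up everything else. A quick software stand-in for the truth tables, in C:)

    #include <stdio.h>

    /* NOT, AND and OR built only from NAND, the same way you would
       wire them from a single relay/tube gate design. */
    static int nand(int a, int b) { return !(a && b); }
    static int not_(int a)        { return nand(a, a); }
    static int and_(int a, int b) { return nand(nand(a, b), nand(a, b)); }
    static int or_(int a, int b)  { return nand(nand(a, a), nand(b, b)); }

    int main(void)
    {
        printf("a b | NOT a  a AND b  a OR b\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                printf("%d %d |   %d       %d        %d\n",
                       a, b, not_(a), and_(a, b), or_(a, b));
        return 0;
    }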


For the curious: here's how you make your own tubes from glass, metal wire and metal sheets: http://m.youtube.com/watch?v=EzyXMEpq4qw.


In a practical apocalypse, you might be able to recover transistors from garbage dumps, too.


(http://www.youtube.com/watch?v=w_znRopGtbE) for transistors, but it feels like cheating.

Relays are doable from scratch (mining and refining copper; winding coils, etc etc) but it's going to be pretty hard work and you're going to have a very slow computer at the end of it.


I was planning on going the relay/vacuum tube route.


Best coding I do is with a pen and paper.


That makes me wonder what we will think 30 years from now, when we look back at how we develop software today.


Articles like this inspire awe and shame in me at the same time. Is there an emotion called Aweshame?


"...typically I reach for the brain debugger before gdb".

Gdb, wow. So he's still coding like in 1985.





