Date: Monday, April 29, 1985
Location: Santa Ana, California
Company: electronics manufacturer
Setup: dumb black & green 24x80 14" CRT terminal
Hardware: Honeywell mini
OS: Honeywell proprietary
DBMS: Pick (Ultimate flavor)
App: Work Order Processing (I wrote it from scratch.)
Date: Monday, April 29, 2013
Location: Miami, Florida
Company: aerospace manufacturer
Setup: Windows PC, 19" & 32" flat-screen monitors
Hardware: who knows
OS: Unix
DBMS: Pick (Unidata flavor)
App: Work Order Processing (I'm writing it from scratch.)
I have learned and continue to use many other client/server and web-based technologies since then, but somehow I always come back to "old faithful".
To still add something to the discussion: I never coded on a black & green CRT, but I did learn programming in an environment doing 24x80 16-colour ... graphics? Text graphics. (This was ~1994.)
It was fun. For the first few years of my computing experience I never really left DOS and kept trying to reproduce the shiny Windows stuff in there. I still think that was better and more fun than when I briefly tried learning how to make "real applications" before jumping ship to the Web.
I was totally engrossed in writing custom batch files using ANSI.SYS escape sequences to create text "graphics."
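(Those ANSI.SYS sequences still work in most modern terminals, by the way. A minimal Python sketch of the kind of thing those batch files did; the escape codes are standard ANSI, the menu text is invented:)

    ESC = "\x1b"
    print(ESC + "[2J", end="")        # clear screen
    print(ESC + "[5;10H", end="")     # move cursor to row 5, column 10
    print(ESC + "[1;33;44m", end="")  # bold yellow on blue, peak DOS aesthetics
    print("** MAIN MENU **")
    print(ESC + "[0m")                # reset attributes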
While I think there's money to be made in maintaining those old systems, I would just go insane not keeping up with technology and using modern stacks. CS has progressed for a reason; the level of abstraction today is so much higher that we don't have to write from scratch all the time; there are well tried and tested components.
Setup=Windows PC && OS=Unix doesn't seem to make much sense, unless you mean Windows as a terminal (PuTTY et al.)
Now, back to Python...
Because I'm not doing the same thing over and over. I'm just using the same technology. I rarely encounter things I can't do with it. I love Pick more than anything else I've ever used.
but there's no way on earth I'd want to be still using BASIC, or Pick for that matter
Pick is almost like a cult and for good reason. It's incredibly elegant, simple, and powerful. Most Pick programmers, even after learning newer technologies, still love Pick and use it whenever practical.
but if you try new tech stacks you'd never want to go back to systems from 40+ years ago
Exactly the opposite of my experience. I've tried many tech stacks and have come to appreciate their pros and cons. But I love coming back to "old faithful". With Pick, I spend most of my time on the problem at hand, not the tech stack. Not so true for many more modern technologies.
I would just go insane not keeping up with technology and using modern stacks.
I never said I didn't keep up. I do and I love to. The thing that would make me go insane: not having fresh customers with fresh problems.
CS has progressed for a reason
Make no mistake about it: the biggest reason has been the internet. So much CRUD technology is so powerful and stable, there hasn't been as much need for "progress".
the level of abstraction today is so much higher that we don't have to write from scratch all the time
I never write from scratch. I use the same 30 or 40 building blocks for every project.
there are well tried and tested components
I'd say that Pick's 40 years and millions of apps make it well tried and tested, too.
unless you mean Windows as a terminal (PuTTY et al.)
I do. PuTTY on my left, Firefox on my right.
Now, back to Python...
Probably a lot more than you think.
I've seen estimates of 1-2 million COBOL developers worldwide.
Rocket and IBM have done a bit with U2 to open up the multi-value data files and UniBASIC programs to "open systems", i.e., Java, .NET, SOAP, REST-ish, etc.
It's still a bit of a pain because values are strings, one has to extract MV/SV data by tokenizing along delimiters, there's a massive amount of data duplication (no FK relationships), multi-byte characters cause problems with older programs (and many, many of them are very old programs), and there's no concept of data queues as you might expect if you've worked in an OS/400|i Series|System i|i|etc. shop with RPG.
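To make "tokenizing along delimiters" concrete: a Pick record is one big string, with attribute marks (0xFE), value marks (0xFD) and subvalue marks (0xFC) as the separators. A rough Python sketch, with the record contents invented for illustration:

    AM, VM, SVM = chr(0xFE), chr(0xFD), chr(0xFC)

    # a made-up work order: attribute 1 = status, attribute 2 = parts,
    # each part carrying a sub-valued quantity (everything is a string)
    record = AM.join([
        "OPEN",
        VM.join(["WIDGET" + SVM + "4", "BRACKET" + SVM + "12"]),
    ])

    def extract(rec, attr, val=0, sub=0):
        # 1-based indexing, in the spirit of BASIC's EXTRACT; 0 = "all of it"
        s = rec.split(AM)[attr - 1]
        if val:
            s = s.split(VM)[val - 1]
        if sub:
            s = s.split(SVM)[sub - 1]
        return s

    print(extract(record, 1))        # OPEN
    print(extract(record, 2, 2, 1))  # BRACKET
    print(extract(record, 2, 2, 2))  # 12 -- a string, of course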
That said, it runs happily on Linux and is quite robust.
Many of the problems relate to programming practice. As you can imagine, there are some "lifers" who don't stretch themselves, argue GOTO vs. GOSUB, copy-and-modify existing programs for new requirements, and don't use source control or automated unit testing. Then there are those who refactor 17 copied versions of programs doing basically the same thing into a modular package and slap an object-ish API on top.
With a rational approach, the environment is actually quite pleasant, but it does take some discipline. It is difficult to retrofit a more rigorous engineering process onto legacy code, but that's the case whether it's Pick/UniData or any other programming environment.
- level of normalization determined by programmer, not DBA
- relationships (joins, etc.) enforced at program level, not schema
- no typing (everything is a string)
- DBMS performance is mathematically predictable & determined (hashing; see the sketch after this list)
- everything is variable length
- anything can be multi-valued or multi-subvalued
- No schema needed. Just code.
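On the hashing point: a Pick file is a fixed set of groups (buckets), and a record ID always hashes to the same group, so a read costs one hash plus a scan of one small group no matter how big the file gets. A toy Python sketch of the idea (not the actual Pick hash function):

    MODULO = 101  # number of groups, chosen (ideally prime) at file creation

    def group_for(record_id, modulo=MODULO):
        # classic scheme: sum the character codes, take the remainder
        return sum(ord(c) for c in record_id) % modulo

    groups = [{} for _ in range(MODULO)]

    def write(record_id, record):
        groups[group_for(record_id)][record_id] = record

    def read(record_id):
        # one hash + one small-group scan: the "predictable" part
        return groups[group_for(record_id)].get(record_id)

    write("WO1001", "OPEN")
    print(read("WO1001"))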
like an engineer re-inventing the process to build bricks anytime a new building is requested.
industry observation, not personal.
shouldn't there be a work order processing standard solution for aerospace by now?
I'm sure there is. Just like there are blog, website, & CMS standard solutions. And plenty of people using them. I never see them because those people would never call me. Like many other programmers here, I imagine, we only see the customers who need something custom, almost always for good reason.
I wouldn't choose PICK for a brand new project today, but as Ed says, there's life in the old dog yet...
Asking because 28 years is an eternity in IT terms - or at least that's what we hear day in, day out.
There are probably several, all mutually incompatible in some way, and chances are every company picks bits from each to make their own "local" standard (a local standard that gets reworked every time the name associated with a C*O title changes).
Sometimes we are not in a position to control the constant wheel evolution and just have to get on with it & accept the paycheck.
I commonly have to code in older versions of my preferred language, and it's always a bit grating. I'm always pining for the newer features that I've grown accustomed to.
I don't miss that language one bit.
Now, PICK as data storage: not bad, but it didn't do anything especially well either. Could be I have the reverse of positive nostalgia on this one.
The machine had eleven 6502's plus an additional one acting as the supervisor and comms link to a PC. The entire system was programmed in Forth, which I mostly rolled out from scratch. This included writing my own screen-based text editor. The PC ran APL.
The project was a bit more elaborate than that. As a very --very-- green college student I ended up presenting a paper on this project at an international ACM conference.
Once you bootstrap an embedded system with Forth the world changes. In '85 there were few reasons to slog it out using hand-coded assembler to automate a labeling machine. My guess is that he and/or his school did not have any exposure to the growing Forth embedded community and so they ended up doing it the hard way. That's still good. You learn a lot either way.
If you use Forth as intended I can't see how you would introduce more bugs when compared to anything. Few languages are inherently buggy. Bugs can nearly always be traced to the programmer, not the language.
That said, my last few words were exactly backward from what I meant to say. Harder to write than assembly, because you have to take more care to avoid stack-effect bugs and you don't have local variables; but easier to read, because your Forth isn't full of arbitrary noise about which register you're using for what.
Even with your comment backwards it makes little sense. I have developed in nearly every language between typing raw machine code on a hex keypad and Objective-C (not to imply that O-C is at the opposite end of the scale). I did years of Forth. Not one thing you are saying rings true. If you know a language you speak it. It's as simple as that.
On the other hand, if you were right that Forth is many orders of magnitude faster to develop with than assembler, we'd see people writing HTML5-compliant web browsers in Forth in a few hours. (I'm figuring: 100 person-years for e.g. Chromium, multiplied by ten for assembly, divided by six orders of magnitude (because five isn't "many") gives you about 8 hours.) So I suspect you're living in some kind of a fantasy world.
Forth is great to develop with but comes with liabilities if used on the wrong project. I remember a product a friend of mine did using Forth. High-end embedded system with custom hardware, all selling for tens of thousands of dollars per unit. When he started to look into selling the business, Forth became a liability. Nothing wrong with it. The code worked well and was mostly bug-free. The company bidding to acquire his business did not do Forth and saw it as an impediment. They, like a million others, did C. Ultimately he sold the company for $16 million with a requirement to translate the code base to C under contract.
Today I use C whenever I need to do embedded work. Having a foundation in a diverse range of languages means that you tend to use a language like C as a toolset to create an efficient development environment through domain-specific libraries, data structures and data representation.
I rarely find the need to get down to Forth these days. It made a lot of sense with 8 bit resource-restricted microprocessors. Today you can have a 16 or 32 bit embedded processor for a few bucks and have it run at 25, 50 or 100MHz. Different game.
And if almost all projects are "the wrong project" for Forth (which seems to be what you're saying, although I don't agree with it) then does it make sense to compare it to a general-purpose language like assembly? You make it sound like a DSL like SQL, not like a framework for all DSLs.
But I find it disappointing, and I wish you'd engage the conversation at a rational level instead of bullshitting and sarcasm. It's a missed opportunity to share your knowledge — not with me, but with whoever else reads the thread in the future.
How do you expect me to answer something like that?
You don't know the language and are intent on having an argument with me about this?
When Forth is used correctly productivity gains become exponential (the limit being application dependent). So, yes, when used correctly you can go 10, 100 and even 1,000 times faster than assembler.
A good comparison would be to look at the productivity gains had when writing raw Python vs using a library such as SciPy. The productivity gains are orders of magnitude greater than the raw language. And even raw Python is orders of magnitude faster than writing assembler.
What I said about Forth was: "It's just above assembler yet many orders of magnitude faster to develop with." You seem to think this is nonsense. Well, you are wrong. What else do you want me to say? Go learn the language and we can have a conversation. Using it casually doesn't produce the same level of understanding you get from dedicated non-trivial usage. This is true of any language or technology.
I am not insulting or diminishing you. Perhaps you are taking it that way. Relax. It's OK to not know everything, I certainly don't and I've been designing electronics and writing software for a long time.
Here's a place to start:
Here's a good Q&A:
If I have the time later tonight I might post a follow-up with an example of what development flow might be for, say, developing code for a multi-axis CNC machine.
I haven't done any serious work in Forth in about ten years, so coming up with an example for this thread would have consumed time I simply don't have right now.
The reality is that ANY language is orders of magnitude faster than assembler. The idea that this assertion is being challenged at all is, well, surprising.
I happen to be working on a project that, among other things, makes extensive use of nested state machines. The main state machine has 72 states and some of the child FSMs have up to a dozen states. Writing this in C, it takes mere minutes to lay down a bug-free structure for the execution of the entire FSM set. It should go without saying that doing the same in assembler would take far longer and result in a mess of code that would be difficult to maintain.
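Not the poster's code, of course, but a minimal sketch of that nested-FSM shape, transposed to Python for brevity (states, events, and the machine/spindle example are all invented):

    class FSM:
        def __init__(self, initial, transitions, children=None):
            self.state = initial
            self.transitions = transitions  # {(state, event): next_state}
            self.children = children or {}  # {state: child FSM}

        def handle(self, event):
            # let the current state's child FSM try the event first
            child = self.children.get(self.state)
            if child and child.handle(event):
                return True
            nxt = self.transitions.get((self.state, event))
            if nxt is None:
                return False                # unhandled at this level
            self.state = nxt
            return True

    spindle = FSM("stopped", {("stopped", "start"): "running",
                              ("running", "stop"): "stopped"})
    machine = FSM("idle", {("idle", "job"): "cutting",
                           ("cutting", "done"): "idle"},
                  children={"cutting": spindle})
    machine.handle("job")    # machine: idle -> cutting
    machine.handle("start")  # routed to the spindle child
    print(machine.state, spindle.state)  # cutting running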
Assembler has its place. I have written device drivers, disk controllers, motor controllers, fast FIR filters, pulse/frequency/phase measurement and a myriad of other routines across a number of processors all in assembler. These days I'd venture to say the vast majority of embedded systems are done in C. Coding is faster, far more maintainable and embedded optimizing compilers do an excellent job of producing good, tight and fast machine code. As much as I love and enjoy Forth this is one of the reasons I rarely use it these days.
I still think it is important to learn about TILs, as it adds a layer of thinking outside the box one would not otherwise have.
I agree that writing in a high-level language is faster — quite aside from the availability of library functionality like hash tables and heaps and PNG decoders and whatnot, you have less code to write and to read, and fewer fiddly decisions about register assignment and memory allocation to make, possibly get wrong, and have to debug.
But that difference is a constant factor — it's not going to be even 1000, and it's never going to approach "many orders of magnitude", which was the nonsense claim I took most issue with. (Two or three isn't "many" in my vocabulary.) Typically I think it's about 10 to 30, which is nothing to sneeze at in the real world, but which would be insignificant compared to your absurd earlier claims.
You can get a "many orders of magnitude" boost — rarely — from libraries or static checking. Neither of these is Forth's strong suit.
Nested state machines are actually an interesting case — often by far the most convenient way to write them is with one thread per state machine. C by itself doesn't give you the tools to do that predictably, because you can't tell how much stack space you're going to need — it depends on your compiler and compilation options; assembly and Forth do. So it's one of the rare things that might actually be easier to do in assembly than in C. (But if you have some memory to spare, maybe setcontext() is plenty good enough.)
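For what it's worth, coroutine-style languages make the thread-per-machine version nearly free. A sketch using Python generators, where "the state" is simply where each coroutine is suspended (the door example is invented):

    def door():
        while True:
            event = yield "closed"
            while event != "open":
                event = yield "closed"
            event = yield "open"
            while event != "close":
                event = yield "open"

    d = door()
    state = next(d)  # prime the coroutine; starts "closed"
    for ev in ["open", "knock", "close"]:
        state = d.send(ev)
        print(ev, "->", state)  # open->open, knock->open, close->closed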
Anyway, anytime I think about what a pain in the ass writing, or even just typing out code can be, I refer to the chapter in iWoz in which, lacking development software at the time, he "wrote out the software [for Little Brick Out] with pen and paper and then hand assembled the instructions into 4096 bytes of raw machine code for the 6502"
Would be fascinating to see it in an fMRI.
I honed mine early enough that the most I need from a program is a few sprintf statements just to give the brain debugger some context when the application is dealing with external data or data transformation.
I personally find it hard to turn off the brain debugger and rely on software. Originally this was because you'd very much be used to the debugger being wrong. As in, "Ah, but in this case the debugger thinks X but Y is the case", and later just because the brain debugger had become so effective that not using it seemed like a lot of hard work for little extra gain.
How other programmers visualise or handle code in their brain, and how they put together new functionality in their head, is deeply fascinating. Even if you have self-awareness enough to grasp the basics of how you do it, you can be reasonably sure that each of us does it slightly differently.
Then I started writing for Android and the time between compilation, installation, running the app, and then reproducing the scenario was too long and a debugger was a godsend.
I've made a start, but it takes a little longer than I'd thought (handwriting is often hard to read). Let's do some crowdsourcing?
Edit (+12mins): I see lots of people viewing, but not a single edit over the past 10 minutes.
Edit (+60mins): One other anonymous user helped at last and the document is complete, but didn't leave his username in the contributors list yet. Anyway, thanks!
Now the big question is how to run this. If you have any idea how to run it, let us know!
For me, it was by reverse engineering. I started cracking software and eventually moved on to white-hat black box security auditing, and that quickly taught me how to evaluate execution flow mentally.
I find that even though I'm writing high-level Ruby web apps now, my ability to rapidly follow code around in my head lets me debug more quickly and effectively than many of my co-workers.
I firmly recommend trying reverse-engineering for anyone who hasn't - it will forcibly provide a lot of the same mental execution mapping abilities while feeling more relevant than writing machine code or assembler out on a piece of paper. And once you learn the basics, everything transfers back up to higher-level languages pretty well (with the exception of mental hex arithmetic, which will still come in handy as soon as you segfault your high-level language's runtime). Plus, when reverse-engineering, you can't get frustrated and fall back on a compiler - unless you have an IDA + HexRays license for some reason, you're stuck figuring things out yourself.
As a side-note, it's always fun watching my dad work - he's an oldschool mainframe guy and he'll sometimes solve common setup issues using JCL or REXX stuff with a modified date sometime in the late 1980s.
By the time I was coding for 6502 and 8088 processors (still in assembly language - I was after all an embedded engineer), I had assemblers and an 80-column by 43-line text editor.
Aren't we spoiled today? I wouldn't want to go back, but I've also found that the low-level experience with machine code is something many "newer engineers" are missing ... it's an appreciation of the hardware that you can't get any other way.
Given that such kits are still sold in India, I guess quite a few engineering students still learn to code like that.
My bosses, though, had all worked on punched cards.
Did my Bachelor's (in India) in the 2010s, and yes, even we coded this way in our 8085 labs!!! Nothing has changed. Talked to my cousin, who is now in his 3rd semester; they too start out coding the same way. It's 2013 and NOTHING has changed.
On the other hand, by 1985 I was doing QA for a 64-bit Unix environment.
The 1802 had sixteen general purpose 16-bit registers. Any one of these could be selected as the program counter. In a tiny embedded system (which is what the part was meant for), you might choose to designate one as the ‘normal’ PC and reserve a few others for important subroutines, which could then be invoked with a one-byte ‘SEP n’ instruction. Similarly you could implement coroutines or simple task switching by switching the selected PC between a pair of registers.
On the other hand, there was no conventional call instruction. The SCRT (“Standard Call and Return Technique”) for recursive or reentrant subroutines essentially involved defining (non-reentrant) coroutines to perform the ‘recursive call’ and ‘recursive return’ operations.
But what a neat feeling! There's something about putting in these numerical codes and seeing a graphical result that I haven't experienced since then. Yes, higher-level languages rock, but dang, this is coding next to the metal.
After learning a bit about logic gates and playing around with the theory, there was an incredible feeling of discovery. Somehow I had deciphered the code of computers, and was finally speaking their native language. This brought back great memories. Thank you.
This could probably be done in less than 50 lines of Python, but doing it in assembly, calling all those interrupt routines, making sure your registers contained all the right values, man, when it finally worked, it was the greatest feeling in the world!
Then you realize it would take around 30 years, but still -- it was doable. You finally figured out how it all worked. You could make the computer do anything it was capable of doing. There was no more mystery there.
I mean, Chuck Moore has been doing it for about 30 years, but he had a working top-to-bottom system after less than five years.
Here's what I think it would look like:
Week 0: build a working interactive Forth system that can compile itself, in machine code. This is assuming you already have hardware and a way to boot it into some code you wrote. If not, that could be a matter of minutes or of months.
Weeks 1-3: build an interactive high-level language interpreter on top of it. Say, JS.
Week 4: enough of TCP/IP to do HTTP GET.
Week 5: write a bitmap font editor and a minimal filesystem.
Weeks 6-8: parse HTML (more or less), lay it out with a simplified box model, and draw it to pixels on a canvas, using a pixel font.
Week 9: cookies, HTTP POST, XMLHttpRequest.
Week 10: a more elaborate filesystem, maybe based on Git.
Weeks 11-12: some more HTML5 features, maybe including WebGL.
Weeks 13-14: TLS.
This is sort of cheating because you're taking advantage of our previous 70 years of experience to figure out what's worth building, and you're likely to end up spending a fair bit of time going back and retrofitting stuff onto previous iterations — JIT compilation, say, and multitasking — but it still seems like rebuilding the personal computing environment ought to be more like a one-semester project than a 30-year project.
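To give a feel for why "week 0" is plausible: the heart of a minimal Forth is just a stack, a dictionary of words, and an outer interpreter. A toy sketch in Python (a real bootstrap would of course be machine code, and colon definitions would be compiled, not stored as text):

    stack = []
    words = {
        "+":   lambda: stack.append(stack.pop() + stack.pop()),
        "*":   lambda: stack.append(stack.pop() * stack.pop()),
        "dup": lambda: stack.append(stack[-1]),
        ".":   lambda: print(stack.pop()),
    }

    def interpret(source):
        for token in source.split():
            w = words.get(token)
            if w is None:
                stack.append(int(token))  # not a word: must be a number
            elif callable(w):
                w()                       # primitive
            else:
                interpret(w)              # colon definition, stored as text

    words["square"] = "dup *"             # like ": square dup * ;"
    interpret("7 square .")               # prints 49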
You could argue that you get a huge speedup by not writing stuff in assembly, and I agree, but I think that's only a constant factor. You could easily write assembly code that maps directly onto JS semantics:
;;; invoke getContext("2d") on the canvas object
sgetContext: .asciz "getContext"
s2d:         .asciz "2d"
        mov $canvas, %ecx        ; receiver (hypothetical canvas object)
        mov $sgetContext, %eax   ; method name
        mov $s2d, %edx           ; the single argument
        call invoke_method_1     ; hypothetical runtime helper: call eax(edx) on ecx
At 12 I had learned how to program, in Z80 assembly, a routine to output to the graphics card of my dad's ancient-even-then Radio Shack model 16. I created a custom font that looked like the beautiful VGA font and used it to render 16-bit numbers in gorgeous decimal digits.
Most of us still do. Especially while dealing in areas which are not familiar to us.
Mind you, I've just got an Atari 65XE from eBay today, which I will now have to explain to my wife. (I always thought the Atari graphics were neat and wanted to play with the display list stuff, but never did so at the time.)
If you really want to re-enact the OP article, visit Briel Computers and get a Micro-KIM. Mine works perfectly. At the 2013 Midwest Gaming Classic someone (not me) was exhibiting an original KIM (as seen in the post) and a microKIM side by side in the retrocomputing room, which my family found entertaining because I also have one mounted on a wood plaque in my "office" area. How else are you going to display one?
I was always more of a Z80 guy... speaking of Z80 computers, one currently shipping project is the N8VEM CP/M SBC project, which I found pretty trivial to assemble and use, not to forget the P112 SBC as recently seen on Kickstarter.
Then there's my stack of FPGA devices, ranging from the MicroNova Mercury to my boring typical Spartan boards, which blurs the border between "real hardware" and emulation.
A link to the MicroKIM1
What always amuses me is that my computers from around 2000 are not working anymore (heck, most laptops I have from the past 10 years are not even booting anymore) while computers from my parents' basement which are around 30 years old work like they just came from the shop. Even the Philips computers from that time, which had known capacitor issues in the power circuitry, work like time didn't happen.
It's similar to those Radio Shack "3000 electronic experiments" boards that were sold in the 80s(?) and yet can scale with the kids' understanding.
Sure, it's not self-hosting, but I've noticed the physical connection to caps, resistors, diodes, etc. in a "safe" environment makes them more proud of their accomplishments than their html/css projects.
There's something called the Maximite, which is a single IC computer running BASIC with output to VGA and SD storage (http://geoffg.net/maximite.html)
There's something called 'Petit Computer' for Nintendo DS (http://www.petitcomputer.com/)
But it'd be great if someone started a hub for this kind of thing.
There is a finite and limited amount of complexity this allows for. We've been given quite the catch-22: the ability to build increasingly complex systems while needing to maintain those increasingly complex systems.
On whatever project you are currently working on, how much less code do you think you'd be writing if you had to hand write it? Where would the project be better for it and where would it be worse?
Reminded me a bit of this story...
Macintosh Common Lisp circa 1987: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V..., specifically: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...
Firebug circa 2013:
http://getfirebug.com/logging, specifically http://getfirebug.com/img/logging/consoleDir.png
Now most programmers spend most of their time dealing with problems of persistence, networking, parallelism, security, and massive codebases. In response we are creating scripting languages with slightly different syntax, adding closures to Java, and arguing about vim vs emacs. We're so obsessed with solving the problems of the 80s, my way, that we've just gotten trapped there.
Yeah, I know. Rose tinted and all that.
Two words: 'the web'.
Slightly longer: we're re-inventing the wheel through a very roundabout path. I think right now we've gone back in time to roughly the mainframe era; it won't be long before someone will invent the minicomputer all over again, only this time it will be a local cluster in a box. Next after that the PC, a 'personal cluster' with a few hundred nodes the size of the desktop machine you had last year.
It was very cool to know that when the world ends I can build a new computer, from scratch.
Relays are doable from scratch (mining and refining copper; winding coils, etc etc) but it's going to be pretty hard work and you're going to have a very slow computer at the end of it.
Gdb, wow. So he's still coding like in 1985.