
Programming Modern Systems Like It Was 1984 - spiffytech
http://prog21.dadgum.com/201.html
======
informatimago
1984: MacOS GUI programming in (Lisa) Pascal; Lightspeed Pascal came the
following year, and MPW with Object Pascal a couple of years later.

OOP wasn't very common yet, but when you look at the Mac Toolbox, you can see
that it is definitely an OO design (eg. Window is a subclass of GrafPort and
Dialog is a subclass of Window, see also the various Control and
"subclasses").

Therefore I would say that 1984, with the Macintosh, starts the modern
programming age, marked by high-level programming languages (the Mac Toolbox
was written in Pascal! Only the Mac OS itself was written in assembler, and of
course QuickDraw was first debugged in Pascal and then hand-optimized in
assembly) and by object-oriented programming.

Of course, when people started to write Mac applications in C, it was a
setback, but then NeXTSTEP made a big advance with Objective-C and OpenStep
(now Cocoa, i.e. Mac OS X and iOS), which allowed the creation of the web (and
therefore JavaScript).

But really, OpenStep (Cocoa) is just a cleaned-up version of the Mac Toolbox
(which was itself a dumbed-down version of the Smalltalk developed in the 70's
at Xerox PARC).

So I would say not much is new since then. We've heard of an experiment where
a program was run on the Amazon cloud to debug programs at a cost of $8/bug
(which is presumably cheaper than a programmer debugging), but we don't have
those kinds of AI aids integrated into our toolboxes. We've heard of automatic
proof systems (Coq, ACL2, etc.), but management wouldn't let us invest in
their use (perhaps rightly so; they seem to be as much of a time sink as
programming itself).

We ARE still programming like it was 1984!!! (unfortunately).

Well, not exactly, ok. Now compilations are instantaneous instead of taking
hours, and just try to write your next program on a 512x342-pixel screen (or
an 80x25 terminal) with no Internet access for the documentation (all of it
printed in thick bibles resting on your knees; imagine the Android
documentation printed!).

Ok, there's still some progress: nowadays I use Common Lisp and emacs all the
time (a pale rendition of the 80's Lisp Machines), and people often use Ruby
or Python, or even Java, all of which run with garbage collectors. So this
invention from 1959 is finally gaining ground, helping us concentrate on
solving users' problems instead of managing computing resources.

------
pjmlp
Started coding in 1986. So no cryogenic sleep, rather living through the whole
thing.

Agree with all those points.

Actually, C was already primitive in 1984 and only mattered for those coding
on a UNIX system. Meanwhile, some of us had the chance to use Modula-2, Ada
and the myriad Pascal dialects that were available back then.

Always with a healthy amount of assembly code, given the quality of the
generated code, even in the case of C compilers, which produced anything but
fast code back then.

~~~
greenyoda
_" Actually C was already primitive in 1984 and only mattered for those coding
in an UNIX system."_

There was _lots_ of code getting written in C on IBM PCs (running DOS) back
then. This article in Byte Magazine from 1983 reviewed nine different PC C
compilers:

[https://archive.org/stream/byte-magazine-1983-08/1983_08_BYTE_08-08_The_C_Language#page/n135/mode/2up](https://archive.org/stream/byte-magazine-1983-08/1983_08_BYTE_08-08_The_C_Language#page/n135/mode/2up)

~~~
pjmlp
C on MS-DOS was an option among a myriad of languages. MS-DOS was written in
Assembly. All major compiler vendors had C, Modula-2, Pascal dialects, BASIC
dialects, Assemblers to choose from.

I only looked at C for the first time in 1993 and was dismayed at what it
offered vs. the alternatives.

C on UNIX was The Official System Programming language.

~~~
cturner
I'm setting up a DX4 to mess around with Norton books and asm at the moment,
so I'm quite interested in this. The general-purpose environments I know of
from then would have been C, Turbo Pascal, BASIC, asm, and then probably some
databasey things like Clipper and FoxPro. What do you feel was stronger for
general-purpose dev in that era than Turbo C? Even on OS/2 2.0/1, I can't
think of a general-purpose tool that would have been more effective than
Visual C (but I wasn't actively developing myself then).

~~~
ido
Back then the "default" languages for making DOS programs were Turbo Pascal &
QuickBasic. C was in 3rd place, I guess.

~~~
pjmlp
4th place.

Assembly came before it. MASM and TASM were quite good.

------
101914
It used to be that the major limitation in computing was hardware. Look at the
creativity that it spawned. We are still living off of ideas three decades
old.

Now what is the limitation? Is it programmer stupidity? Or maybe there is no
limitation? Maybe that is the problem.

History shows hardware kept getting faster, but there has been no decrease in
the amount of ever larger, slow, unreliable software. Where is all the lean,
"instantaneous" software running in RAM, extracting every last bit of
computing power, instead of mindlessly consuming it?

What exactly is the point of "programmer productivity", and of whatever
problems some programmers think it justifies?

Is the point to advance computing? The state of the art?

Or is the goal to peddle their junk to those with very low expectations, not
to mention those with zero expectations? (The latter were not old enough, or
even alive, to see how computing was done in the 80's and hence have nothing
with which to compare.)

It is unfortunate that giving programmers what they wanted -- faster hardware
-- has not resulted in software that is any more creative or powerful than the
software of the past; truly it is less so. Only hardware has improved.

Given the history so far, I would argue that the only proven path to true
creativity in computing is through limitation.

Perhaps present day computing's limitation is programmer stupidity, or to be
more gentle, programmer ignorance.

Agree 100% with all points in the blog post, except perhaps the last one. Well
done.

~~~
userbinator
_It used to be that the major limitation in computing was hardware. Look at
the creativity that it spawned._

This can be summed up in one word: demoscene. In fact I thought the article
would be about the demoscene, just from its title.

 _History shows hardware kept getting faster, but there has been no decrease
in the amount of ever larger, slow, unreliable software._

That's what I think someone who took this 30-year-leap would perceive if they
switched from a computer of the 80s to a new system today: "This app is _how
many_ bytes!? It's pretty and all, but it can only do this? Why does it take
so long to boot up?"

Interesting comparison here:
[http://hallicino.hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_Dua...](http://hallicino.hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Believe_Who_Wins)

 _What exactly is the point of "programmer productivity", and of whatever
problems some programmers think it justifies?_

I think it's more like a combination of laziness and selfishness: programmers
want to be more "productive" by doing the least work possible, and at the same
time are uncaring about or underestimate the impact of this on the users of
their software. Trends in educating programmers that encourage this sort of
attitude certainly don't help...

There's also this related item a few days ago:
[https://news.ycombinator.com/item?id=8679471](https://news.ycombinator.com/item?id=8679471)

------
fegu
Learn to trust virtual memory. This is the real reason not to use temp files.
Virtual memory handling is mature in all modern operating systems.
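
A minimal sketch of the alternative (Go here; the point is simply that the
intermediate result lives in an ordinary in-memory buffer and the virtual
memory system pages it out if it grows large, so no temp file is needed):

    package main

    import (
        "bytes"
        "compress/gzip"
        "fmt"
    )

    func main() {
        // Build the intermediate result in memory instead of a temp file;
        // if it grows large, virtual memory handles the paging for us.
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf)
        for i := 0; i < 100000; i++ {
            fmt.Fprintf(zw, "record %d\n", i)
        }
        zw.Close()

        fmt.Println("compressed bytes held in memory:", buf.Len())
    }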

------
sehugg
In some ways it's still 1984, especially for mobile dev:

> Highly optimizing compilers aren't worth the risk.

Rewrite slightly to "non-mainstream toolchains aren't worth the risk". Should
you write your next game in pure Java, or use JNI/C++, or use Unity, or
something more obscure? No matter which you pick, you'll be running to
StackOverflow (if you're lucky) to fix the periodic show-stopper. Using the
most popular tool gives you the highest likelihood of success without delving
into library source code.

> Something is wrong if most programs don't run instantaneously.

Forget startup time -- make a 1980s developer deal with iOS code-signing
issues and watch them switch occupations to animal husbandry.

> Everything is so complex that you need to isolate yourself

Back then documentation was delivered in thick paper volumes which often
contained more bits of information than the software being documented. Now we
just Google StackOverflow and cross our fingers.

------
mklencke
In some cases, 2x or 4x is worth the risk. Imagine having a warehouse-scale
computer where you can use 50,000 CPUs instead of 200,000.

~~~
lostcolony
'In some cases' is the key bit. In the 80s, it was much closer to 'all cases'.
Nowadays, by all means turn on the optimizations if you need them, but you are
adding risk (especially since you're probably using something like C if you're
that concerned about performance, with all its undefined behavior)... but
unlike the 80s, there's a decent chance you won't need them to hit your
performance goals.

------
lmm
Gah. An interesting thing to think about, but you do have to actually think.

> Highly optimizing compilers aren't worth the risk.

If they were buggy, sure. But, probably through the weight of every single
developer using them for absolutely everything, the bugs have been ironed out.

> Something is wrong if most programs don't run instantaneously.

Why? Do a straight-up cost/benefit analysis. How much extra would you pay for
a program that ran twice as fast? If anything I'd expect people pay more for
bigger programs that take longer to run, since they feel like they're doing
something important.

> Design applications as small executables that communicate.

The microservices people are sort of doing this. It's a great way to turn all
your problems into distributed problems. And we still don't have any sensible
way to send even basic data structures around: "pipes and sockets" send
streams of bytes. In 2014, if I want to send an object, a function even -
heck, if I want to send an inert struct - from one process to another, I'm
basically stuck. The least-bad option is probably to write out the object
definition in Thrift IDL, generate a bunch of code and then write a bunch more
code to map back and forth with my actual program datatypes. It's hard to
express quite how much it sucks, but many people have tried and failed to come
up with something better.
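
To put a little flesh on that, here is a minimal sketch of the single-language
"easy" case, using Go's encoding/gob over a local socket (the Point type, the
address, and the goroutines standing in for two processes are all made up for
illustration):

    package main

    import (
        "encoding/gob"
        "fmt"
        "net"
    )

    // Point is a made-up struct; both "processes" must share this definition.
    type Point struct{ X, Y int }

    func main() {
        ln, _ := net.Listen("tcp", "127.0.0.1:9000")

        // "Sender" process, faked as a goroutine: encode the struct onto the socket.
        go func() {
            conn, _ := net.Dial("tcp", "127.0.0.1:9000")
            gob.NewEncoder(conn).Encode(Point{X: 1, Y: 2})
            conn.Close()
        }()

        // "Receiver" process: turn the byte stream back into a struct.
        conn, _ := ln.Accept()
        var p Point
        gob.NewDecoder(conn).Decode(&p)
        fmt.Println(p) // {1 2}
    }

Within one language that is tolerable; the moment the other end is written in
something else, you are back to an IDL plus generated code plus mapping code,
which is exactly the Thrift dance described above.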

> Don't write temporary files to disk, ever.

If you're building new executables for everything you're either doing
something wrong, or you don't care about the space you're wasting. The right
way to do these tiny tweaks is in an interactive, workbooky environment like
ipython. If you like that style of programming.

> Everything is so complex that you need to isolate yourself from as many
> libraries and APIs as possible.

Absolutely backwards. Everything is so complex - and using libraries so easy -
that you should use libraries for everything. A good library provides a
simpler interface than the things it's built on top of, so you can just call
e.g. encrypt(data) rather than doing thousands of lines of bit-twiddling.
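
As a concrete sketch of that encrypt(data) idea, here is what it can look like
with Go's golang.org/x/crypto/nacl/secretbox standing in as the "good library"
(key management is deliberately elided; the random key here is only for the
example):

    package main

    import (
        "crypto/rand"
        "fmt"
        "io"

        "golang.org/x/crypto/nacl/secretbox"
    )

    func main() {
        var key [32]byte   // in real code, derive or load this securely
        var nonce [24]byte // must be unique per message
        io.ReadFull(rand.Reader, key[:])
        io.ReadFull(rand.Reader, nonce[:])

        // One call hides all the cipher and MAC details.
        box := secretbox.Seal(nonce[:], []byte("hello"), &nonce, &key)
        fmt.Printf("%d encrypted bytes\n", len(box))

        // And one call to get it back, verifying integrity along the way.
        plain, ok := secretbox.Open(nil, box[24:], &nonce, &key)
        fmt.Println(ok, string(plain))
    }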

~~~
AnimalMuppet
> > Something is wrong if most programs don't run instantaneously.

> Why? Do a straight-up cost/benefit analysis. How much extra would you pay
> for a program that ran twice as fast? If anything I'd expect people pay more
> for bigger programs that take longer to run, since they feel like they're
> doing something important.

This is one of the things going on with Go. What's the cost/benefit of a sub-
second compile time compared to a ten-minute compile time? Not all that much,
really. But when you multiply it times multiple team members, times multiple
daily compiles, times a code base that's going to live for the next 20 years,
those slower compiles add up to a _big_ waste.

~~~
lmm
Eh maybe. I don't find compile time is ever a limiting factor, but if it
became one there are ways around it (e.g. incremental compile servers). And
sacrificing a small amount of development time for fewer errors is almost
always a good tradeoff.

(What's your estimate of the cost of maintenance on the repetitive code that
Go's inadequate type system makes necessary?)

~~~
AnimalMuppet
That's hard for me to say. Can you give me a concrete example of "the
repetitive code that Go's inadequate type system makes necessary"?

~~~
lmm
I'm starting from an assumption that keeping track of effects is necessary,
and that these effects are things that aren't built into the language. E.g.
database access, structured logging (like an audit trail). (You're allowed to
say that you wouldn't want to carefully keep track of these things, that it
doesn't matter whether a particular function accesses the database or not, but
in that case rather than a repetitive code cost, I'd say you end up with a
testing/safety/maintenance cost). An example is necessarily pretty abstract or
complex, because it's only when you're doing these complex things that you
really need to abstract over them at multiple levels. But I'll try:

One of the fun things I can do in Scala is build up an "action" using
for/yield syntax:

    
    
        for {
          f ← auditedFunction1()
          g = nonAuditedFunction(f)
          h ← auditedFunction2(g)
        } yield h
    

I can do the same thing for database actions:

    
    
        for {
          results ← myQuery()
          modified = results map myTransformation
          saved ← modified traverse save()
        } yield saved
    

And then I can have my Spray directives know how to handle a generic action,
by writing a simple instance for each action. E.g. I can tell it to display
the audit trail if a particular header is present:

    
    
        implicit def auditOnSuccessFutureMagnet[T](
          inner: Auditable[T]) =
          new OnSuccessFutureMagnet {
            type Out = AuditTrail \/ T
            def get = header(MySpecialHeader) map {
              case Some("mySpecialValue") =>
                -\/(inner.written)
              case None => \/-(inner.value)
            }
          }
    

And I can do a similar thing for database actions, wrapping them in a
transaction at the HTTP request level - giving me the equivalent of session-
creation-in-view, but tied to the actual request rather than a thread, and
with safety maintained by the type system.

All of this stuff is typesafe. I can only use a contextual action that I can
provide an implementation for how to handle (there are a few defaults like
e.g. Future), and I can only return an object "inside" the context that I've
also provided how to handle (e.g. by providing a json formatter for that
type). And none of this is special or built-in (other than the
OnSuccessFutureMagnet interface that I'm implementing); all these things are
custom classes or from different libraries.

How could I do that in Go? I'd have to declare each of the different
combinations (each possible DatabaseAction[A] and also each possible
Auditable[A]) as separate types, right? And my framework certainly couldn't
offer the generic handling of F[A] that lets me reuse the same functions for
both, so I'd have to have separate copies for anything that I wanted to work
with both DatabaseAction and Auditable.
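
Roughly, that duplication would look something like the sketch below
(2014-era, pre-generics Go; every type and function name is hypothetical):

    package actions

    import "net/http"

    // Hypothetical domain types, only here to make the sketch compile.
    type User struct{ Name string }
    type Tx struct{}

    // Without parametric types, each "value in a context" needs its own
    // concrete wrapper...
    type AuditableString struct {
        Value string
        Trail []string
    }
    type AuditableUser struct {
        Value User
        Trail []string
    }
    type DBActionString struct{ Run func(*Tx) (string, error) }
    type DBActionUser struct{ Run func(*Tx) (User, error) }

    // ...and every generic operation gets copied per wrapper, because there is
    // no way to write one handler against a generic F[A].
    func handleAuditableString(w http.ResponseWriter, a AuditableString) {}
    func handleAuditableUser(w http.ResponseWriter, a AuditableUser)     {}
    func handleDBActionString(w http.ResponseWriter, a DBActionString)   {}
    func handleDBActionUser(w http.ResponseWriter, a DBActionUser)       {}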

------
bsaunder
Great write-up. I like the perspective of a 30 year jump. In retrospect it
feels like we are lobsters slowly boiling. When looked at from a distance,
these observations become much clearer.

> It's time to be using all of those high-level languages that are so fun and
> expressive

I think for most of us, programmer efficiency is more important than program
efficiency. IMHO, in these cases, there's much to be gained from the scripting
languages. For many programmers though, these seem to push some comfort zones
too much. There are obviously some domains where performance is still king and
the high-level languages won't cut it.

> Highly optimizing compilers aren't worth the risk.

For some they sure are. Probably would have been better stated with "not worth
the cost" rather than risk. I don't see much risk here, more so cost (in the
terms of effort and opportunity cost). I'd rather not worry about the details
of bit twiddling on things if my current problem is plenty efficient on the
small n's I'm dealing with. Compilation time can be a real cost as well. I
have much faster code-test cycles in scripting environments.

> Something is wrong if most programs don't run instantaneously.

I think many people don't comprehend how true this is. It's truly
mind-boggling to consider how many instructions per second are executed these
days. What could your program possibly be doing with all of them in one
second? Again, for some folks there are very good answers. For most of us,
it's loading and initializing layers upon layers of classes. I routinely see
stack traces dozens of lines deep. I'm sure each one is providing some useful
abstraction of something, but... really?

We have a Java based application that was written without any external
libraries, just straight up Java. It compiles down to a 120KB jar and provides
significant real-time functionality with phone systems. Meanwhile, some of our
Java-based web apps produce a 50MB war, probably because they use every
relevant Java library out there to help.

> Design applications as small executables that communicate.

Yes! I've gotten a lot of mileage out of this. Many would do well to read Eric
Raymond's book "The Art of Unix Programming". The only thing I'd like a better
solution for is a nice mechanism to set up and configure the deployment of
said "small executables that communicate". I'd like programming to be as easy
as building a chain of commands on a Unix command line. Yet setting up a
production environment with chains of commands seems a bit... ugly.
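
For the wiring itself, at least, a chain of commands is not hard to set up
programmatically; a minimal Go sketch (the two commands are just
placeholders, and the deployment/configuration question above is still the
ugly part):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Equivalent of: ls /etc | wc -l
        ls := exec.Command("ls", "/etc")
        wc := exec.Command("wc", "-l")

        // Connect stdout of the first small executable to stdin of the second.
        pipe, _ := ls.StdoutPipe()
        wc.Stdin = pipe
        wc.Stdout = os.Stdout

        ls.Start()
        wc.Start()
        ls.Wait()
        wc.Wait()
    }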

> Don't write temporary files to disk, ever.

Once upon a time, those in the know would set-up RAM disks. These days with
SSDs, it seems that's effectively what we are doing. Probably a negligible
gain to use RAM over SSD. The only slight advantage of RAM over SSD I see is
that things get cleaned up on a reboot (does anyone still reboot regularly?).

> Everything is so complex that you need to isolate yourself from as many
> libraries and APIs as possible.

I think this is behind the recent backlash against frameworks. Unfortunately,
many raw languages still lack some basic utilities that make it just a bit
painful to use without any libraries. Seems many people will be migrating
towards unobtrusive libraries that add value in the places needed without
requiring a cascading set of components and tight coupling with the
application being coded. This is a good thing.

> C still doesn't have a module system?

I've got nothing here. In general, I've been migrating towards a more data
driven approach rather than native code. I wish we'd start loading code
modules the same way we would load a piece of data from a file or database. At
the end of the day it's all the same. Obviously you need to be sure of the
sources you are loading code from, but you should be protecting your data the
same way.

UPDATE: Minor editing for clarity. UPDATE2: And math.

~~~
jfoutz
Optimizing compilers are pretty magical. It's easier than you think to write
code with undefined behavior, and once your code contains undefined behavior,
the compiler is allowed to do _anything_ with it.

Here's a pretty neat example [1]. There have been a few writeups on HN over
the years; I can't find some of the more elaborate, surprising behaviors.

[1] [http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html](http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html)

~~~
to3m
I drag these two out every time somebody mentions this stuff:
[http://blog.metaobject.com/2014/04/cc-osmartass.html](http://blog.metaobject.com/2014/04/cc-osmartass.html),
[http://robertoconcerto.blogspot.co.uk/2010/10/strict-aliasing.html](http://robertoconcerto.blogspot.co.uk/2010/10/strict-aliasing.html)

------
gbuk2013
> why isn't it possible to create and execute a script without saving to a
> file first

It is certainly possible in Bash, at least:

    
    
      CODE='echo Hello; echo World'
      RESULT=$(bash -c "$CODE")
      echo "$RESULT"
    

Very good article though! :-)

~~~
npsimons
Other more "obscure" options also come to mind: Emacs Lisp, Smalltalk, etc.
Wasn't there also something in Plan 9's GUI like this? The whole saving to a
file first thing is just a convention, and one that got me thinking: it's a
convention for a reason (reproducibility), and this argument ("why can't I
just run it without saving first?"), seems kind of contrary to the whole point
of the post. Disks are fast! Saving a file first is ridiculously fast in this
day and age. Just bind a keystroke to save and run in one button press. It
speaks more to lousy development environments than anything else if you can't
make this a one button press operation.

------
Roboprog
I started working part time while going to school in '85 (off by one year,
here).

I _did_ use a reasonably high level language, dBASE, rather than C/Pascal
(most of the time). Writing a CRUD app in dBASE/xBASE (Clipper, Fox) back
then, vs using a low level language, felt like using Ruby on Rails does now
(vs "Enterprise" make-work). Of course, in both cases, you sometimes run into
performance/capacity issues that make you rethink what you did with the higher
level platform :-)

So, I guess I disagree that everything was done at or near the assembly
language level back then.

------
npsimons
I don't have to imagine what a programmer making this jump would do; I can
look to my father, who started out writing programs on punch cards in high
school to be sent
to be sent to the university, and maybe a week later he'd get results back (if
his program worked at all). He retired from his day job writing Java a few
years back, and is still hacking on websites using Tomcat, etc, and he's never
looked back.

------
sehugg
How it might go:

"That web programming thing sounds pretty fun!"

(glances across bookshelf)

"So if you'll give me a copy of the spec, I can get started."

------
brians
So that's where Bernstein comes from.

This advice has a lot in common with his (and others') advice on how to write
safe programs with a small trusted base---and the other elements seem a matter
of shared taste.

------
xolve
> Why does this little command line program take two seconds to load and print
> the version number?

> Don't write temporary files to disk, ever.

> Why does a minor update to a simple app require re-downloading all 50MB of
> it?

Gets me!

------
vim-guru
> C still doesn't have a module system

There's a package-manager-ish option:
[https://github.com/clibs/clib](https://github.com/clibs/clib)

~~~
pjmlp
How does it solve naming clashes?

------
shoover
High level, instant startup, message passing, isolation from complex APIs
(interfaces), and a good module system. Sounds like he's advocating for Go.

------
Roboprog
Temp files are great - you can inspect your output. And, while they might suck
on Windows, on Unix, they are pretty much an in-memory data structure anyway.

~~~
gizmo686
On Ubuntu (14.04), /tmp is part of the root filesystem. /run seems to be the
main ramdisk.

~~~
Roboprog
What I meant is that a "temp" file may very well never hit the actual disk,
staying in memory buffers instead, between the time that app 1 writes it and
app 2 reads it.

------
Alupis
Several issues with this post:

> It's time to be using all of those high-level languages that are so fun and
> expressive

But then soon says:

> Something is wrong if most programs don't run instantaneously

Well, most higher-level languages are interpreted, which requires _some_ CPU
time at startup. Even with something not-so-high-level-but-sorta-high-level
like Java or C#, you still have an abstraction layer (JVM/CLR) that must first
boot, then interpret, then execute.

> Highly optimizing compilers aren't worth the risk

Today's optimizing compilers are truly works of genius, and can perform
optimizations on code that most programmers wouldn't think of. Even if they
were thinkable, the source code required to produce the same machine code
_without_ compile-time optimizations would be hairy to say the least, and
mostly unmaintainable (unrolled loops, bit-shifts instead of multiplication,
etc.). Not to mention that a 2-4x performance improvement is nothing to shake
a stick at.

> Design applications as small executables that communicate

I don't have any particular qualms with this statement in principle; however,
with some enterprise apps this simply isn't feasible. Sure, I may use a great
many separate apps, all working together in orchestration (my database,
authentication server, some 3rd party libs, etc), however for some apps, a
great deal of the codebase will end up being a monolithic piece, and that is
just fine so long as it's maintainable and extensible.

> Don't write temporary files to disk, ever

This depends on what the temporary file is (and what "temporary" really means
in context). If I'm writing a program that will generate some XML
document then ultimately send that off to some remote server, I might want a
local copy kept in the working directory so that in the event something goes
wrong, I can look at the file and see what the last produced content was (or
wasn't).

> Everything is so complex that you need to isolate yourself from as many
> libraries and APIs as possible

This seems to be in the context of writing system-level code. Even in that
context, if a library has done something already that will be useful to you,
use it. Taken to the extreme, this could imply one should re-write portions of
libc into their app to avoid any external API calls in a vain attempt to guard
against external dependencies and a potentially changing API surface. In my
projects, I typically have a loose rule that if something is relatively
maintained and recent, and it helps me get my job done quicker, I'll use it.
That is a very loose definition, but typically prevents me from including some
lib that hasn't been updated since 2002 just to parse some XML document.

> C still doesn't have a module system

And why should it? It's a low-level language usually (but not always) reserved
for low-level (i.e. system) programming. This is not the typical language of
choice for writing your pluggable enterprise app.

~~~
derefr
> Even with something not-so-high-level-but-sorta-high-level like Java or C#,
> you still have an abstraction level (JVM/CLR) that must first boot, then
> interpret, then execute.

There's no reason those shouldn't be resident daemons. Heck, if they were, why
should the OS be anything other than a hypervisor for managing those, plus
some abstractions like a "container" interface that each runtime execution
driver implements? We could have had something like Microsoft Research's
SingularityOS a decade ago if we had approached the problem from the other
direction—pushing the runtime lower in the stack, instead of trying to write a
kernel that does everything. Imagine your laptop booting into Xen, and then
loading a Smalltalk image that contains the desktop, while in parallel booting
up some background daemons into virtually-isolated Erlang nodes.

> Sure I may use a great many separate apps, all working together in
> orchestration (my database, authentication server, some 3rd party libs,
> etc), however for some apps, a great deal of the codebase will end up being
> a monolithic piece, and that is just fine so long as it's maintainable and
> extensible.

The author's complaint was less about factoring of components and more about
the way we (still) do IPC. As in, why are we directly-and-unsafely sharing
allocated memory handles between families of processes (i.e. using threads)
instead of just having processes push information to one-another's inboxes
over sockets? Anything that's currently threaded could be done with message-
passing instead. And like the article said, it'd be only a 2-4x difference in
overhead, minuscule compared to the speedups over the past decades.
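
A tiny sketch of that message-passing style (in-process here, with Go channels
standing in for the sockets and inboxes described above; the job/result types
are made up):

    package main

    import "fmt"

    type job struct{ id int }
    type result struct{ id, answer int }

    func main() {
        inbox := make(chan job)
        outbox := make(chan result)

        // The "worker" owns its own state and only ever sees messages,
        // never a pointer into someone else's memory.
        go func() {
            for j := range inbox {
                outbox <- result{id: j.id, answer: j.id * 2}
            }
            close(outbox)
        }()

        go func() {
            for i := 0; i < 3; i++ {
                inbox <- job{id: i}
            }
            close(inbox)
        }()

        for r := range outbox {
            fmt.Println(r.id, r.answer)
        }
    }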

~~~
Alupis
> There's no reason those shouldn't be resident daemons

Well, you can blame OS vendors for that. But would it be practical even if
implemented? For a system that will never execute Java code, why would a JVM
daemon be running in the background? Even if it were a daemon, it would still
require spool-up time to interpret the first bytecode before the JVM's
built-in optimizing compiler could kick in and compile down to native machine
code.

> hypervisor for managing those

We do have similar things nowadays, but it's worth considering that your OS
does a lot more than simply execute other code. But again, there's a
practicality trade-off. Containers aren't exactly easy, and it's unlikely that
even today's implementations will be the last attempts at getting it 100%
right. (BTW, Xen requires a kernel, and for the sake of argument, your laptop
does not require anything but the kernel to boot (actually less, you can boot
raw binary machine instructions); the rest is convenience for the user.)

> why are we using processes that share mapped memory allocations (i.e.
> threads) instead of just having processes push information to one-another's
> inboxes over sockets

Sockets still have overhead. IPC can be achieved in a number of ways, some
better than others, and some better in certain circumstances. Why bother
opening a socket when my program consists of only a small handful of child
processes, all running on the same host, that only require very small amounts
of communication (such as a keypoller subprocess)? Message queues are fine for
sending basic messages around, but even a message queue is unnecessary if I
can pass the info into the child process as I spawn it. Why over-complicate
things if not necessary?
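
For the simplest cases, that really can be as little as the sketch below (the
keypoller binary, its flag, and the environment variable are all
hypothetical):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // All the information the child needs is handed over at spawn time:
        // command-line arguments plus an extra environment variable. No
        // socket or queue involved.
        cmd := exec.Command("keypoller", "--device=/dev/input/event0")
        cmd.Env = append(os.Environ(), "POLL_INTERVAL_MS=50")

        out, err := cmd.Output()
        if err != nil {
            fmt.Println("child failed:", err)
            return
        }
        fmt.Printf("child said: %s", out)
    }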

> a 2-4x difference in overhead, minuscule compared to the speedups over the
> past decades.

That is, of course, relative. I think some large projects would argue a 2-4x
improvement in compile-time, for example, is a dramatic time savings. If we
snubbed every 2-4x performance improvement, we'd still have performance on the
scale of 1984.

~~~
pjmlp
The OS/400 mainframes use a JIT in their kernel.

All userspace code, including C, compiles to bytecode known as TIMI.

~~~
Alupis
Depends what version you are running and the applications running on top. The
AS/400 box at my company runs a load of RPG code, which is interpreted at
runtime.

~~~
lomnakkus
It's very likely that your _CPU_ interprets the machine code at runtime.
What's your point?

EDIT: "machine" -> "machine code". Doh.

