
Ask HN: People programming before 2000: why were 2 digit years a thing? - Jaxkr
Recently dug out one of my dad’s old Apple laptops, and was amazed to find it only supported 2-digit years. And I was around as a young child for y2k, and I’ve been thinking about it a lot lately.

Specifically: how was it ever a problem? What kind of thinking led to people making software with 2-digit years, even in the late 80s? Did it really save a lot of effort?

The idea of someone sitting down and making the conscious choice to design an OS that uses 2-digit years is hilarious to me, but was there a good reason?
======
jenkstom
There were several reasons. First, everybody used two-digit years for
everything. When writing checks you used two digits. When you saw a date on
television or in a book, it used a two-digit year. So it just made sense.

Second, there was the idea that it would save space. And really, it did. When
your computer had 3.5 KB of available storage new out of the box, those two
extra characters were very important.

And third, computers were new and things were changing so rapidly that
everybody assumed everything would be replaced in just a few years. It's a
bit difficult to explain, but electronic data was considered a lot "less
real" then than it is now. When you switched the computer off, it was gone.
The idea that digital data would persist for years was difficult to grasp for
a lot of people. Floppy disks had a realistic lifespan measured in months and
later in years, but single digits for sure. It wasn't until optical drives
became commonly available that it was possible for personal digital data to
last very long at all.

~~~
Jaxkr
Thanks for your reply. This is an interesting psychological perspective (the
other comment offered a technical one) and it makes a lot of sense.

------
yongjik
To expand on another good comment: space was precious. Really precious. If
you were only a kid in the 90s, you'll have a hard time appreciating it. (But
then again, ancient UNIX greybeards would say the same to me, I guess...)

For example, my first computer was an Apple II. It had 48 KB of RAM and 12 KB
of ROM, which contained the initial bootstrap code (the equivalent of a
BIOS), plus a BASIC interpreter, plus a REPL tool with a disassembler. All of
that, in 12 KB. The main page of Hacker News is about 42 KB, not including
icons and HTTP headers: that's already about three times the size of that
BIOS + BASIC interpreter + disassembler.

And the display was only 40x24 characters (or maybe 40x25, it was a long time
ago). Nobody was mad enough to waste _two_ columns, out of 40, just to print
two digits that would be forever stuck at 19.

As another example, around that time Turbo Pascal was immensely popular, and
every string started with a byte holding its length: so no string could be
longer than 255 bytes. Good enough: what kind of madman would waste precious
memory on a string of more than 255 characters?
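
Roughly what that looked like in memory (a sketch in C for illustration;
Turbo Pascal's actual string type was built into the compiler, so this struct
is just an approximation):

    #include <stdio.h>
    #include <string.h>

    /* Sketch of a Turbo Pascal style length-prefixed string: byte 0 holds
       the length, the next 255 bytes hold the characters. Because the
       length must fit in one byte, 255 characters is the hard limit.     */
    typedef struct {
        unsigned char len;
        char data[255];
    } PascalString;

    static void ps_set(PascalString *s, const char *cstr) {
        size_t n = strlen(cstr);
        if (n > 255) n = 255;            /* anything longer gets truncated */
        s->len = (unsigned char)n;
        memcpy(s->data, cstr, n);
    }

    int main(void) {
        PascalString s;
        ps_set(&s, "hello");
        printf("%d chars: %.*s\n", (int)s.len, (int)s.len, s.data);
        return 0;
    }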

If you go back to that period and think about the environment, using 4-digit
years would have been considered madness, not the other way around.

~~~
Jaxkr
Thank you very much for your reply. I definitely lack perspective on this. I
can really sympathize with the reasoning behind not printing a 4-digit year
on such narrow displays.

But I’m still curious about just how wasteful a 4-digit year would’ve been to
use internally (not to display). Wouldn’t it only take an extra 2 bytes of
memory, or less, per “date” structure?

~~~
lioeters
I don't have much personal experience in the matter, but I believe it's the
mentality of working with very constrained systems: limited memory, storage,
and processing. It was (and is) considered efficient and elegant to shave off
any unnecessary space or operations.

Those extra 2 bytes meant, and "cost," a great deal more than they do now. At
least, that's what I imagine was the reasoning behind the decision to use 2
digits to represent a year.
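
To put rough numbers on that (a hypothetical fixed-width record layout,
sketched in C; the field names are just for illustration, not any particular
system's format):

    #include <stdio.h>

    /* Dates were often stored as plain text fields in fixed-width records. */
    struct date_2digit { char yy[2], mm[2], dd[2]; };   /* "990131"   */
    struct date_4digit { char yyyy[4], mm[2], dd[2]; }; /* "19990131" */

    int main(void) {
        printf("2-digit year field: %zu bytes\n", sizeof(struct date_2digit)); /* 6 */
        printf("4-digit year field: %zu bytes\n", sizeof(struct date_4digit)); /* 8 */
        /* 2 extra bytes per date, times a few dates per record, times
           thousands of records per file, adds up fast on a tiny disk.  */
        return 0;
    }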

Future programmers will be asking us the same thing: why we thought it was a
good idea to use signed 32-bit integers to represent time, when we should
have known it would run out soon.

[https://en.wikipedia.org/wiki/Year_2038_problem](https://en.wikipedia.org/wiki/Year_2038_problem)
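
A minimal sketch of what goes wrong, assuming a platform where time_t is a
signed 32-bit count of seconds since 1970-01-01 UTC:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* The largest representable moment is 2^31 - 1 seconds after the
           epoch: 2038-01-19 03:14:07 UTC.                                 */
        int32_t t = INT32_MAX;
        printf("last representable second: %d\n", (int)t);

        /* One second later the counter overflows; with the usual
           two's-complement wraparound it lands on INT32_MIN, which
           decodes as a date back in December 1901.                        */
        int32_t wrapped = (int32_t)((uint32_t)t + 1u);
        printf("one second later: %d\n", (int)wrapped);
        return 0;
    }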

------
DrScump
In the earlier days of computing, storage was expensive. Saving two bytes on
each date field, multiplied by thousands of records across hundreds of files,
saved real money.

Another example of contortion for storage savings was packed decimal, used on
mainframes, where an 11-digit number (say) could be stored in 6 bytes rather
than 11. There were even IBM 360/370 assembler math instructions for packed
decimal, so values didn't have to be decoded and re-encoded for arithmetic.
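
A rough sketch of the packing idea in C (plain BCD-style nibble packing; real
S/360 packed decimal puts a sign code in the final nibble, which is how 11
digits plus a sign fit in 6 bytes):

    #include <stdio.h>
    #include <string.h>

    /* Pack decimal digits two per byte, with a sign code in the final
       nibble, in the spirit of IBM packed decimal: 11 digits + sign =
       12 nibbles = 6 bytes, versus 11 bytes as plain characters.       */
    static void pack_decimal(const char *digits, unsigned char *out, size_t outlen) {
        size_t n = strlen(digits);
        memset(out, 0, outlen);
        out[outlen - 1] = 0x0C;               /* sign nibble: 0xC = positive  */
        long nibble = (long)(outlen * 2) - 2; /* last digit nibble, just left
                                                 of the sign                  */
        for (size_t i = n; i > 0 && nibble >= 0; i--, nibble--) {
            unsigned char d = (unsigned char)(digits[i - 1] - '0');
            if (nibble % 2 == 0)
                out[nibble / 2] |= (unsigned char)(d << 4);  /* high nibble */
            else
                out[nibble / 2] |= d;                        /* low nibble  */
        }
    }

    int main(void) {
        unsigned char packed[6];
        pack_decimal("12345678901", packed, sizeof packed);  /* 11 digits */
        for (size_t i = 0; i < sizeof packed; i++)
            printf("%02X ", packed[i]);                      /* 12 34 56 78 90 1C */
        printf("\n");
        return 0;
    }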

