
Ask HN: Was the Y2K crisis real? - kkdaemas
There was very little fallout from the Y2K bug, which raises the question: was the Y2K crisis real and well handled, or not really a crisis at all?
======
dwheeler
Yes, the Y2K crisis was real, or more accurately, it would have been a
serious crisis if people had not rushed and spent lots of money to deal with
it ahead of time. In many systems it would have been no big deal if unfixed,
but there were a huge number of really important systems that would have been
a serious problem had they not been fixed. Part of the challenge was that this
was an immovable deadline: normally, if things don't work out you just spend
more time and money, but here there was no additional time that anyone could
give.

The only reason the Y2K bug did not become a crisis is that people literally
spent tens of billions of dollars in effort to fix it. And in the end,
everything kept working, so a lot of people thought it wasn't a crisis at all.
Complete nonsense.

Yes, it's true that all software occasionally has bugs. But when all the
software fails at the same time, a lot of the backup systems simultaneously
fail, and you lose the infrastructure to fix things.

~~~
skissane
From what I've heard, a number of IT departments used it as justification to
dump legacy systems.

For example, at a former employer of mine, a big justification for getting rid
of the IBM-compatible mainframe and a lot of legacy systems which ran on it
(various in-house apps written in COBOL and FORTRAN) was Y2K.

In reality, they probably could have updated the mainframe systems to be
Y2K-compliant. But, they didn't want to do that. They wanted to dump it all
and replace it with an off-the-shelf solution running on Unix and/or Windows.
And, for reasons which have absolutely nothing to do with Y2K itself (the
expense and limitations of the mainframe platform), it probably was the right
call. But Y2K helped move it from the "wouldn't-it-be-nice-if-we" column into
the "must-be-done" column.

~~~
marcus_holmes
COBOL programmers were charging a fortune to be hauled out of retirement to
work on this. There was a huge shortage of experienced COBOL devs. And devs
who actually understood the specific legacy system involved were even rarer.
If the company had customised their system and not kept the documentation up
to date or trained new programmers on the system, well, they didn't have a
choice. They had to replace it.

~~~
mbellotti
It's adorable that these answers are all in the past tense. This is still
going on. Many of these systems still exist.

~~~
giaour
To be fair, I've met a lot of young COBOL programmers since starting to work
on mainframe systems last year. I think the "elderly COBOL programmer coaxed
out of retirement by dumptrucks full of money" dynamic that supposedly
dominated the Y2K response is less prevalent these days, and companies that
still run mainframe systems just realize they have to hire regular programmers
and teach them COBOL and z/OS.

~~~
marcus_holmes
well, I witnessed it, so it's not really a supposition, is it?

~~~
giaour
Sorry, didn't mean any slight at you by that. I don't doubt your account, but
having no idea how widespread the practice was, its dominance on a large scale
is just a supposition on my part.

------
Diederich
To add a little nuance:

A global retailer you've definitely heard of that used to own stores in
Germany spent a lot of time preparing for Y2K. This was a long and painful
process, but it got done in time.

But problems still slipped through. These problems ended up not being that big
and visible because a large percentage of the code base had just been recently
vetted, and a useful minority of that had been recently updated. Every single
piece had clear and current ownership.

Lastly, there was an enormous amount of vigilance on The Big Night. Most of
the Information Systems Division was on eight hour shifts, with big overlap
between shifts at key hours (midnight in the US), and _everyone else_ was on
call.

As midnight rolled through Germany, all of the point of sale systems stopped
working.

This was immediately noticed (the stores weren't open, but managers were
there, exercising their systems), the responsible team started looking into it
within minutes, a fix was identified, implemented and tested within tens of
minutes, and rolled out in less than an hour.

Pre-Y2K effort and planning was vital; during-Y2K focus, along with the
indirect fruits of the prior work, was also critical.

~~~
fleetingmoments
Couldn't the rollover be simulated ahead of time by simply setting the date
forward? Seems crazy that you had to have people on the ground fixing things
as the clock struck 0.

~~~
davesmylie
Sure. On systems under your control.

It's the network of interacting systems that prevents this from being so
simple.

~~~
protomyth
Yep, the super fun of getting a Y2K-compliant specification of how the data
would come over the interface, only for it to fail with '1910' instead of
'2000' in the 4-character date field that had worked in test. People on site
to correct problems were a necessity for some, even after testing.

------
davismwfl
From someone who went through it and dealt with code, it was a real problem
but I also think it was handled poorly publicly. The issues were known for a
long time, but the media hyped it into a frenzy because a few higher profile
companies and a lot of government systems had not been updated. In fact, there
were still a number of government systems that were monkey patched with date
workarounds and not properly fixed well into the 2000's (I don't know about
now but it wouldn't shock me).

There was a decent influx of older devs using the media hype as a way to get
nice consulting dollars, nothing wrong with that, but in the end the problem
and associated fix was not really a major technical hurdle, except for a few
cases. It is also important to understand that a lot of systems were not in
SQL databases at the time; many were in ISAM, Pick, dBase (ouch), dbm's
(essentially NoSQL before the NoSQL hype) or custom db formats (like flat
files etc) that required entire databases to be rewritten, or migrated to new
solutions.

My 2 cents: it was a real situation that, if ignored, could have been a major
economic crisis. Most companies were addressing it in various ways in plenty
of time, but the media latched on to a set of high-profile companies and
government systems that were untouched and hyped it. If you knew any COBOL or
could work a VAX or IBM mainframe you could bank some decent money. I was
mainly doing new dev work, but I did get involved in fixing a number of older
code bases, mainly on systems in non-popular languages or on different
hardware/OS, because I have a knack for that and had experience on most
server/mainframe architectures you could name at that time.

~~~
_red
>dbase

At the time I was managing a dBase / FoxPro medical software package...we were
a small staff who had to come up with Y2K mitigation on our own.

Our problem was that we only had source code for "our" part of the
chain...other data was being fed into the system from external systems where
we had no vendor support.

Thus our only conceivable plan was to do the old:

    
      # Window the two-digit year: 00-09 -> 2000s, 10-99 -> 1900s.
      if ($year < 10) {
          $date = sprintf("20%02d", $year);
      } else {
          $date = sprintf("19%02d", $year);
      }

It worked in 99.9% of the cases, which was enough for us to limp through and
just fix the bad cases by hand as they happened. Eventually we migrated off
the whole stack over the next few years, so it stopped being a problem. I'm
sure many mitigation strategies did the same....

~~~
andyjpb
A much more insidious problem with the Y2K bug was the leap year calculation.
As you point out, the 2-digit-year thing was relatively easy to fix.

[https://en.wikipedia.org/wiki/Year_2000_problem#Leap_years](https://en.wikipedia.org/wiki/Year_2000_problem#Leap_years)

~~~
protomyth
I still love the fact that if you only implemented the first rule or had the
knowledge to implement all 3 rules, it would totally work, but if you
implemented 2 of the 3 rules you were wrong.

It taught me a great lesson about results not proving correctness as how you
got there could bite you later.
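
To make the trap concrete, here's a minimal sketch (my own illustration, in
Perl) of the three rules, and of why stopping at two of them is the one wrong
choice:

    
      sub is_leap {
          my ($year) = @_;
          # Rule 1: divisible by 4 -> leap year
          # Rule 2: ...unless divisible by 100 -> common year
          # Rule 3: ...unless divisible by 400 -> leap year after all
          return $year % 4 == 0 && ($year % 100 != 0 || $year % 400 == 0);
      }
      # For 2000: rule 1 alone says leap (right answer, shaky reasoning);
      # rules 1+2 say common (wrong); all three rules say leap (right).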

~~~
kccqzy
I don't get why people don't know all three rules. In elementary school, when
the calendar was taught, the teacher simply taught the complete leap year
rules. We even joked about people born on the 29th of February. Why wasn't
this taught everywhere?

------
protomyth
Yes, it was a real crisis. There's revisionist history now, with some saying
it was no big deal. It was a big deal, and many people spent many hours in the
90's ensuring that the financial side of every business continued. I am
starting to get a bit offended at the discounting of the effort put in by
developers around the world. Just because the news didn't understand the
actual nature of the crisis (Y2K = primarily financial problems) is no excuse
to crap on the hard work of others. It is sad that the people who got the job
done by working on it for years get no credit precisely because they got the
job done.

I see this as a big problem, because Y2038 is on the horizon and this "not a
big deal" attitude is going to bite us hard. Y2K was pretty much a financial
server issue[1], but Y2038 is in your walls. It's control systems for
machinery that are going to be the pain point, and that is going to be much,
much worse. The analysis is going to be painful and require digging through
documentation that might not be familiar (building plans).

[1] Yes, there were other important things, but the majority of the work was
because of financials.
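
To put a concrete date on the Y2038 point (my own illustration, in Perl, on a
build with 64-bit time support): a signed 32-bit time_t simply runs out of
seconds early on 19 January 2038.

    
      my $max32 = 2**31 - 1;                 # 2147483647 seconds since 1970
      print scalar gmtime($max32), "\n";     # Tue Jan 19 03:14:07 2038
      # One second later, a signed 32-bit time_t wraps negative, which naive
      # code renders as a date back in December 1901.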

~~~
joshstrange
This is always such an annoying problem:

1. Shine light on problem and make sure people hear about it

2. People respond and fix it

3. Outcome not as bad as you said it could be /because it was fixed/

4. Some time later: "That wasn't a big deal"

No, it wasn't that big of a deal because we worked hard to fix it!

~~~
umanwizard
Mark my words: if the containment measures work and COVID-19 goes away, this
is exactly what people will be saying. Drives me crazy.

~~~
gregmac
Yeah, I'm not sure if the original question was asked because of this, but
I've had a couple of conversations where we've compared the COVID-19 situation
to Y2K.

Even in this case, we have evidence of what happens when the response is
delayed (eg: China, Italy), yet if proactive steps like shutting down schools,
events and gatherings are effective, you'll get people complaining it was all
an overreaction.

------
saberworks
I was enlisted in the USAF and was assigned as part of a two-person group to
deal with the Y2K issue for all unclassified systems on the base. We were
outside the normal AF procedures because the base didn't have a wartime
mission. The two people assigned to the group were myself (E-3 or E-4 at the
time) and a very junior (but smart) lieutenant.

We basically inventoried every unclassified computer system on the base. If it
was commercial, off-the-shelf software that could be replaced, we recommended
they replace it. If it could not be replaced with a newer version (because it
ran software that could not or would not be replaced), we replicated and
tested it by changing the computer hardware clock. In all cases we recommended
shutting down the computer so it wasn't on during the changeover.

Most home-grown systems were replaced with commercial software.

One interesting case was a really old system, I think it had something to do
with air traffic control. It was written by a guy who was still employed there
and he was still working on it. I got to interview him a bunch of times and
found the whole situation fascinating and a little depressing. Yes, he was
storing a 2-digit year. He didn't know what would happen when it flipped. He
didn't feel like there was a way to run it somewhere else and see what would
happen (it's very difficult to remember but I think it was running on a
mainframe in the comm squadron building).

The people in charge decided to replace it with commercial software. Maybe the
guy was forced to retire?

Overall the base didn't have any issues but only because they formed the "y2k
program management group" far enough ahead of time that we were able to
inventory and replace most everything before anything could happen.

------
rabboRubble
Old Y2K project manager here. We had a multiyear project, found issues, fixed,
and deployed fixes. Both hardware and software issues. I faced off against
internal business, external clients, internal audit, internal compliance, and
external regulators. We had IIRC nine potential failure dates including
2000-Feb-29. My ultimate project documentation required twenty 5 inch binders
and a hand cart to deliver the package to our local CEO for sign off. Pretty
sure he didn't read anything and simply just signed the cover page. We rocked
the project, I passed all aggressive audits and earned myself a nice bonus
that year for having a successful Y2k rollover.

Then came 2000-Feb-29, and it happened: I had a risk management system hosted
out of the UK that just didn't work. I had to file the system failure through
to internal global management and domestic regulators.

I was thrilled. First, because that system owner had refused to conduct global
integrated testing, so I could blame the SO. I had the request, the
negotiation, and finally the outright refusal in writing. The failed system
was relatively trivial domestically: risk wasn't calculated for one day on a
global platform, and that risk didn't hit my local books. Ha ha, sucks to be
you. Most importantly, I was thrilled because I could point to the failure and
say "see, that is what would have happened x100 if we hadn't nailed the
project." It was a great example for all the assholes who bitched about the
amount of money we spent.

------
__d
I worked at a regional bank. Like many banks, we offered mortgages, so
starting in 1970 the 30-year mortgages being written had maturity dates in
2000 and beyond, and our bank had begun the process of adapting its systems
from 2-digit to 4-digit dates.

Basically all of our software was written in COBOL, and most COBOL data is
processed using what we'd consider today to be string-like formats. And to
save space (a valuable commodity when DASD (aka hard drives) cost hundreds of
thousands of dollars, and stored a few megabytes of data) two-digit dates were
everywhere.

I started in 1991. The analysis had been done years before, and we knew where
most of the 2-digit problems were, so it was just a matter of slowly and
steadily evolving the system to use 4-digit dates where possible, or to shift
the epoch forward where that made sense.

Every few months we'd deploy a new version of some sub-system which had
changed, migrate all the data over a weekend, and cross off another box in the
huge poster showing all the tasks to be done.

External interfaces were the worst. Interbank transfers, ATM network
connections, ATM hardware itself, etc, etc. We mostly tried to switch internal
stuff first but leave the APIs as 2-digit until the external party was ready
to cut over. Similarly between our internal systems: get both ready
internally, migrate all the data, and then finally flick the switch on both
systems to switch the interfaces to 4-digit.

Practically, it meant that our development group (maybe 30 people?) was
effectively half its size for 5 or 6 years in the early 90's, as the other
half of the group did nothing but Y2K preparation.

All of these upgrades had to be timed around external partners, quarterly
reporting (which took up a whole weekend, and sometimes meant we couldn't open
the branches until late on the Monday after end-of-quarter), operating system
updates, etc, etc. The operations team had a pretty solid schedule booked out
years in advance.

We actually had two mainframes, in two data centers: one IBM 3090 and the
other the equivalent Amdahl model. We'd use the hot spare on a weekend to
test things.

It was a very different world back then: no Internet, for a start.
Professional communication was done by magazines and usergroup meetings.
Everything moved a lot slower.

I left that job before Y2K but according to the people I knew there, it went
pretty well.

------
Sparkware
I worked for Columbia/HCA, now HCA Healthcare, at the time, and we started
gearing up for Y2K in January 1997.

Every system, every piece of hardware - both in the data centers and in the
hospitals - had to be certified Y2K compliant in enough time to correct the
issue. As I recall, we were trying to target being Y2K ready on January 1,
1999 but that date slipped.

A "Mission Control" was created at the Data Center and it was going to be
activated on December 15, 1999, running 24 hours a day until all issues were
resolved. Every IT staff member was going to rotate through Mission Control
and every staffer was going to have to serve some third shifts too.

I left Columbia/HCA in June, 1999 after they wanted to move me into COBOL. I
had no desire to do so and I took a programming position with the Tennessee
Department of Transportation.

I remember my first day on the job when I asked my boss what our Y2K policy
was. He shrugged and said "If it breaks, we'll fix it when we get back from
New Year's".

What a difference!!!

~~~
aaron_m04
> I remember my first day on the job when I asked my boss what our Y2K policy
> was. He shrugged and said "If it breaks, we'll fix it when we get back from
> New Year's".

I'm a little surprised. TDT is in a critical business too (transportation).

------
marcus_holmes
I worked for Hyder, the Welsh Water and Gas authority, on their Y2K project,
from March 1998 to November 1998.

Their billing and management system was written in COBOL, and contained
numerous Y2K bugs. If we did nothing, then the entire billing system would
have collapsed. That would mean Welsh people either receiving no bills, or
bills for >100 years of gas/water supply, depending on the bug that got
triggered. Very quickly (within days) the system would have collapsed, and
water/gas would have stopped flowing to Welsh homes.

Each field that had a date in it had to be examined, and every single piece of
logic that referenced that field had to be updated to deal with 4 digits
instead of 2.
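
To picture the kind of change involved, here's a minimal sketch (my own
illustration, in Perl rather than the actual COBOL) of widening a fixed-width
date field, where the data and every read of it have to change together:

    
      my $record_v1 = 'SMITH   311299';      # 6-character DDMMYY date field
      my ($name, $dd, $mm, $yy) = unpack 'A8 A2 A2 A2', $record_v1;
      
      my $record_v2 = 'SMITH   31121999';    # field widened to DDMMYYYY
      my ($name2, $dd2, $mm2, $yyyy) = unpack 'A8 A2 A2 A4', $record_v2;
      # Every unpack like these, and every piece of logic downstream of it,
      # had to be found, updated, and tested.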

I wasn't dealing with the actual COBOL, I managed an Access-based change
management system that catalogued each field and each reference that needed to
be changed, and tracked whether it had been changed or not, and whether the
change had been tested and deployed. This was vital, and used hourly by the
200+ devs who were actually changing the code.

We finished making all the changes by about December 1998, at which point it
was just mopping up and I wasn't needed any more. I bought a house with the
money I made from that contract (well, paid the deposit at least).

The cost was staggering. The lowest-paid COBOL devs were on GBP100+ per hour.
The highest-paid person I met was on GBP500 per hour, enticed out of
retirement. They were paid that much for 6-month contracts, at least. Hyder
paid multiple millions of pounds in contract fees to fix Y2K, knowing that the
entire business would fail if they didn't.

Still less than the cost to rewrite all that COBOL. The original system had
been justified by sacking hundreds of accounts clerks, replaced by the COBOL
system and hardware. By 1998 the hardware was out of date and the software was
buggy, but the cost-benefit of a rewrite made no sense at all. As far as I'm
aware, Hyder is still running on that COBOL code.

~~~
ficklepickle
Diolch! ("Thank you!" in Welsh)

------
jerf
I've also found myself musing on a similar question, but one where you may
have a different temporal perspective at this particular moment: In six
months, are we going to collectively believe that the Coronavirus was nothing
and we massively overreacted to it? Because if we do react strongly, and it
does largely contain the virus, that will also be "proof" (quote-unquote) that
it wasn't anything we needed to be so proactive about in the first place.

Unsurprisingly, humans are not good at accounting for black swan events, and
even less so for averted ones.

~~~
idlewords
Neither the current pandemic nor Y2K really fit the definition of a black swan
event, since they were completely predictable (and predicted).

~~~
jerf
In my opinion (emphasis opinion), "black swan" includes the concept of
_timing_... that a pandemic would occur is inevitable, but you have no idea on
the timing. Market crashes are inevitable, but you have no idea on the timing.
Volcanic eruptions are inevitable, but you have no idea on the timing. etc.

Things that are inevitable only when you encompass time spans longer than a
human life (it has been approximately _one and a half_ average human lifespans
since the previous pandemic) may be predictable at that large aggregate scale,
but on _useful_ scales they are not. Or, to put it another way, if you've been
shorting the market since 1918 for the next pandemic crash, you went bankrupt
a long time ago.

Y2K is only a black swan for those not in the industry, since that one is
obviously intrinsically timing-based. The UNIX timestamp equivalent is equally
predictable to you and me, but to the rest of the world it will seem even more
arbitrary if it's still a problem by then. (At least Y2K was visibly special
on the normal human calendar.) But I wouldn't claim the term for that; call it
a bit of sloppiness in my writing.

------
smoyer
I was involved in several of the efforts at the time, including building the
communications systems for the "studio NOC" at AT&T in NYC. I started hearing
about vulnerable systems about 5 years before 2000, and we were doing serious
work on those systems about 2 years before. I predicted (to friends and family
who didn't always care to believe me) that it would be a non-event, because
disruptions would be localized in smaller systems (we were expecting local
banks and credit unions). Even I was blown away by how few of those systems
had problems. So now, when people say Y2K was no big deal, they fail to
recognize the work that went into ensuring it was a non-event.

There's a very current equivalent - if we're good about social distancing,
people may talk about COVID-19 the same way.

------
GnarfGnarf
You're darn tootin' it was real. It was only through the dedicated, focused
efforts of thousands of unsung IT heroes that we averted catastrophe.

Just because it didn't happen doesn't mean it couldn't have.

Even in the late 80's I had to argue with some colleagues that we really
shouldn't be using two-digit dates anymore.

I worked with 80-column punched cards in the 70's, every column was precious,
you had to use two-digit years. When we converted to disc, storage was still
small and expensive, and we had to stay with two-digit years.

See:
[http://www.kyber.ca/rants/year2000.htm](http://www.kyber.ca/rants/year2000.htm)

------
wglb
Well, there were fallouts, but few disastrous ones.

First, enormous amounts of money were spent on repairs, to the extent that
they could be done. I know of some 50-year-old processes that no longer had
the original source. Significant consultant time was used in what at times
resembled archeology.

Second, there was a little downturn in new projects after the turn, as budgets
had been totally busted.

There was one consultant who preached doom and gloom about the collapse of
civilization when that midnight came. He went so far as to move his family
from NYC to New Mexico. He published on his web page all sorts of survivalist
techniques and necessities. When the time came, his kids, who apparently
didn't share the end-of-the-world view, woke him up and said "Dad!! New
Zealand is dark!!!" but of course it wasn't.

The lesson there was that there was tunnel vision about exactly how automated
stuff actually was. While there were enormous systems with mainframes, Sun
servers, and workstations doing all this work, the tunnel vision produced a
perception that excluded the regular human interactions with the inputs,
outputs, and operation of these systems. Not so fully automated after all.

There were a few disasters--I remember one small or medium grocery chain that
had POS systems that couldn't handle credit cards with expiration dates beyond
12-31-1999; such a card would crash the whole outfit, and the store was unable
to process any transactions until the whole thing was rebooted. They shortly
went out of business.

------
acdha
Yes. I worked for a COBOL vendor at the time and we had customers and
colleagues tell us how many things would not have functioned without the time
spent remediating it — not planes falling from the sky but, for example,
someone at a household-name credit card company saying they wouldn't have been
able to process transactions.

This was a victim of its own success: since the work was largely completed in
time, nobody had the huge counter-example of a disaster to justify the cost.
I'm reminded of the ozone hole / CFC scare in the 1980s where a problem was
identified, large-scale action happened, and there's been a persistent
contingent of grumblers saying it wasn't necessary ever since because the
problem didn't get worse.

------
csixty4
Yes and no.

There were a lot of two-digit dates out there which would have led to a lot of
bugs. Companies put a lot of effort into addressing them so the worst you
heard about was a 101 year old man getting baby formula in the mail.
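
That kind of mixup falls straight out of two-digit arithmetic. A minimal
sketch (my own illustration, in Perl):

    
      my $birth_yy = 99;                       # born 1899, stored as "99"
      my $now_yy   = 0;                        # the year 2000, stored as "00"
      my $age = ($now_yy - $birth_yy) % 100;   # -99 % 100 == 1 in Perl
      # The 101-year-old comes out as a 1-year-old: baby formula in the mail.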

The media over-hyped it, though. There was a market for books and guest
interviews on TV news, and plenty of people were willing to step up and preach
doom & gloom for a couple bucks: planes were going to fall out of the sky,
ATMs would stop working, all traffic lights were going to fail, that sort of
thing. It's like there was a pressure to ratchet things up a notch every day
so you looked like you were more aware of the tragic impact of this bug than
everyone else.

That's the part of the crisis that wasn't real, and it never was.

------
codezero
I worked New Year's 1999 when I was at Red Hat.

Leading up to the change over there was a lot of work to make sure all the
systems would be OK, and that underlying software would also be OK, but keep
in mind, auto-update on the Internet wasn't super common.

I ended up getting one call from a customer that night where they had a valid
Y2K bug in their software, and since it wasn't in Red Hat's system, they moved
along to their next support person to call :)

It was a thing, but much less of a thing because of the work put into getting
ahead of it.

------
ecpottinger
In 1998 we tested the new computers we sold, and many failed or gave odd
results when the date was changed to 2000. By mid 1999 almost none of the
computers had any problems if you advanced the date.

Also, one of the major results of the Y2K bug was that IT departments finally
got the budgets to upgrade their hardware. If they had not gotten newer
hardware, I am sure there would have been more problems.

Finally, in my area the main reason companies failed from IT problems was
problems with their database, where it turned out their backups were not good
or had not been done recently. Many companies tried to be cheap and never
updated their backup software, so even if they did back up their data, the
backup software could really mess things up if it used 2-digit dates to track
which files to update.

Things go very bad if you lose Payroll, Accounts Payable or Accounts
Receivable.

------
tachoknight
I worked at a bank at the time and can say that we started working on it back
in 1996, and all the systems were in place and tested by early 1999 so we had
no issues. It was _absolutely_ a crisis; back in 1995 one of the mainframe
programmers did an analysis on her own and determined that, not only was it a
problem, but that the system would be hopelessly corrupted if it wasn't fixed.
They spent, if not seven figures, at least high-six figures to get everything
ready. One thing that was drilled in from management was that no one was to
talk about it because it might be perceived that "real" work wasn't getting
done. :\

------
ebrenes
I was not working on fixing Y2K issues, but I did notice the impact it had on
systems that hadn't been patched. It's the typical IT conundrum: when you do
a good job no one notices and you don't get rewarded; the only recognition
comes when things fail.

Some historians seem to think that it was a real crisis in which the US
pioneered solutions that were used across the world:
[https://www.washingtonpost.com/outlook/2019/12/30/lessons-yk...](https://www.washingtonpost.com/outlook/2019/12/30/lessons-yk-years-later/)

------
SideburnsOfDoom
- Why did you catch that?

+ Because it was going to fall.

- Are you certain?

+ Yes.

- But it didn't fall. You caught it. The fact that you prevented it from
happening doesn't change the fact that it was going to happen.

Minority Report, 2002

[https://www.youtube.com/watch?v=IVGQHw9jrsk](https://www.youtube.com/watch?v=IVGQHw9jrsk)

People worked for years in the late 1990s replacing systems that were not Y2K
compliant with new ones that were.

It is becoming ever more common to question the veracity of disaster averted
through effort. And it is very dangerous.

~~~
dragonwriter
> It is only in the last few years that it has become common to question the
> veracity of disaster averted through effort.

No, it isn't. Questioning whether Y2K was overhyped started before Jan. 1,
2000 and accelerated on Jan 1, 2000 when there weren't major breakdowns. If
you are too good at mitigating a problem before it manifests, there's a good
chance lots of people will doubt there was a problem to mitigate. On the other
hand, if it's a _sui generis_ problem like Y2K, it's by definition too late
for their doubts to impact mitigation efforts for the one potential
occurrence, so it doesn't matter all that much. For a recurring problem, where
those doubts can impact preparedness for the next potential occurrence, that's
a bigger challenge.

~~~
SideburnsOfDoom
My apologies. I have edited the comment after that, because I didn't feel that
"It is only in the last few years" was entirely correct.

However, I feel that this tendency is getting worse. A denial of the role of
expertise. The question "was Y2K real?" is a political issue now, and it's not
because of Y2K specifically, it's as a comparison to more recent events.

------
AndrewDucker
Not only was it real, some of the fixes were kludges put in place that would
only last for 20 years. But that was fine, because surely we'd have done
longer term fixes by then?

Except we didn't:

[https://en.wikipedia.org/wiki/Year_2000_problem#On_1_January...](https://en.wikipedia.org/wiki/Year_2000_problem#On_1_January_2020)

~~~
dhosek
Well when most of the bad Y2K code was written they knew it was bad, they just
didn't think civilization would last until the year 2000. It was the Reagan
years, after all.

------
msla
If you're driving down the road, see an overturned cart in your path, and
safely avoid it, was the cart a danger to you? Nothing bad happened, so was
the cart a hoax? The Y2K problem was, for a number of organizations, precisely
such a cart, and it was successfully avoided, to the extent that nothing
seriously bad happened as a result of the bug (really, an engineering
trade-off which lived too long in the wild). So we can either count it as a
victory of foresight and disaster aversion, or we can say it was all a hoax
and there was never anything to it. Guess which conclusion will best let us
avoid the next potential disaster.

------
y-c-o-m-b
Like others have said, it was real and handled well. They knew about it for
years before so there was time to fix the issue.

The panic was also very real despite not being proportional to the actual
problem, but just like any other media-induced widespread panic, it served as
a means to make lots of profit for those in a position to do so. Media
companies squeezed every last drop of that panic for ratings... well into the
year 2000, when they started spreading the story that Y2K was the tip of the
iceberg and the "real" Y2K wouldn't actually start until January 1, 2001.

As an immigrant to the US, I got to see the weird side of American culture in
how people tend to romanticize (for lack of a better word) post-apocalyptic
America. Kind of like the doomsday hoarders of today are doing. It's like they
think a starring role on the Walking Dead is waiting for them, except in real
life.

------
f2000
I was employed by a large bank in their web division. We fixed many Y2K bugs
that would have been triggered. I was on duty New Year's Eve, and there were a
few Y2K bugs that surfaced, but nothing show-stopping. My anecdotal opinion is
that the panic to fix these bugs likely prevented some larger cascading
effect/catastrophe. Remember, this was 1999, when testing/QA practices were
not always de rigueur. For some shops, Y2K mitigation might have been the
first time their code base was subjected to any sort of automated tests :-))

~~~
ZainRiz
Fun story related to this:

During the Y2K panic Sun Microsystems (IIRC) announced that they would pay a
bounty of ~$1,000 per Y2K bug that anyone found in their software. As you
noted, there was very little automated testing at the time so these problems
were really hard to discover.

James Whittaker (a college professor at the time) worked with his students to
create a program that would parse Sun's binaries and discover many types of
Y2K bugs. They wrote the code, hit run, and waited.

And waited. And waited.

One week later the code printed out its findings: it had found tens of
thousands of bugs.

James Whittaker went to Sun Microsystems with his lawyer. They saw the results
and then brought in their own lawyers. Eventually there was some settlement.

One of James' students bought a car with his share.

------
blihp
It wasn't a crisis, but it was a real problem that needed to be, and was,
fixed in plenty of time. It didn't surprise anyone in the industry, as it was
well known throughout the 90's that it was coming. The biggest problem was
identifying what would break and either fixing or replacing it. Many companies
I dealt with at the time humorously did both: they had big remediation
projects, and as soon as they finished, decided to dump most of the old stuff
for shiny new stuff anyway.

------
PappaPatat
Very anecdotal, but here is my take:

For the place I worked at (a large international company) it was a G*d-sent
opportunity. All the slack that had been built up in the past by
"cost-reducing" management suddenly had a billable cost position that nobody
questioned.

Of course there were some actual Y2K issues solved in code and calculations,
but by and large the significant part of the budget was spent on new shiny
stuff, on getting changes approved, and on compensating workers for bonuses
missed in the previous years.

We had a blast doing it, and the biggest letdown was following the year
rollover from the dateline and seeing nothing like the expected and predicted
rolling blackouts.

------
0xff00ffee
I know this is redundant, but I have to add to the signal:

YES. It was real.

I was finishing an engineering degree (CSE) in 1992 and several of my peers
took consulting jobs to work on Y2K issues. For nearly a decade a huge amount
of work was done to review and repair code.

Y2K is the butt of many jokes, but the truth is: it didn't happen because the
work was done to fix it. Sort of ironic.

------
kitteh
There were parts of our telecom infrastructure that weren't ready but got
fixed before y2k. A certain mobile phone switching vendor (think cell towers,
etc.) ran tests a year before to see what happened when it rolled over and the
whole mobile network shut down (got in a wedged state where calls would fail,
no new calls, signalling died). They fixed it and got customers upgraded in
time.

~~~
TedDoesntTalk
I dont think I even had a mobile phone in 1999. Probably not a critical system
back then.

~~~
sgt
I got my first cell phone in 1996. It worked really well (GSM network) and
SMS's were available. So I can easily imagine that if you relied on it, it
seemed like a critical system to you personally or as a business.

------
franze
In 1999 to 2000 I was working as a freelancer at a state agency. After the
change from 99 to 00, I got paid 3 times a few days apart, always the same
amount. It later turned out that the indicator that a person had been paid was
not working, thanks to Y2K. So somebody clicked on payout a few times. They
had fixed it for the employees beforehand, but not for the freelancers. I gave
back the money, which was difficult in itself as there was no process for it.

------
billpg
Watch it happen...

[https://twitter.com/basiccomic/status/1099332074983641094](https://twitter.com/basiccomic/status/1099332074983641094)

------
fapjacks
I was the lead console operator during the midnight rollover on NYE for an
aging fleet of state-owned mainframes and minicomputers which were affected by
the bug. Some of the machines could not be updated with a fix, so we were
almost completely uncertain as to how these machines would behave. Testing on
many of those machines was a huge investment. A lot of effort went into
checking and re-checking job code. It was a lot of work for everyone. As
mentioned here by others, it could have been (would have been) worse were it
not for the heroic efforts of programmers to bug-proof their job code in the
run-up to the rollover. As the lead console op, my responsibility that night
and morning was to try and ride any trains that decided to jump the tracks.
The skills I developed then still serve me well today, and I will forever be
grateful to those grayest of graybeards for the trust I was extended when
chosen for that role. Everyone on the payroll was there for the rollover
except for the next shift's operators. For my part, it was in the end a lot of
preparation which thankfully was not needed. I must admit to having a drink
before that shift started. But when the rollover came, all was quiet. And
after a few nervous hours, we poured some champagne with the hero programmers
who were there in the room watching their jobs run without any issues.

------
Gatsky
Can anyone provide an example of a country or a company that was not at all
prepared for Y2K (there must be one somewhere?), and suffered disastrous
consequences? That would seem to be the best way to answer the original
question, but I haven't seen any such example provided.

------
nineteen999
I didn't run into any Y2K problems - for UNIX/Linux itself this was mostly a
non-issue due to times being stored in at least 32-bit time_t at that point.
Individual applications may have had their own Y2K related issues of course,
but I didn't run into any.

However, one issue I did run into nearly two years later was when UNIX time_t
rolled over to 1 billion seconds. The company I worked with at the time was
running WU-IMAP for their email server, plus additional patches for qmail-
style maildir support. We came into work on September 10th 2001 and all the
email on our IMAP server was sorted in the wrong order.

Turns out there was a bug in the date sorting function in this particular
maildir patch (see [http://www.davideous.com/imap-maildir/](http://www.davideous.com/imap-maildir/) - "10-digit unix date
rollover problem"). I think we were the first to report it to the maintainer
due to the timezone we were in. First time for me in identifying and
submitting a patch to fix a critical issue in a piece of open source software!
My co-worker and I were chuffed.
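
For the curious, a minimal sketch (my own illustration, in Perl) of that class
of bug: sorting timestamps as strings breaks the moment they grow from 9 to 10
digits.

    
      my @stamps = (999999999, 1000000000);     # 2001-09-09, one second apart
      my @wrong  = sort @stamps;                # default string sort puts
                                                # 1000000000 first ('1' lt '9')
      my @right  = sort { $a <=> $b } @stamps;  # numeric sort: correct order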

Of course, we swiftly forgot about it the next day when planes crashed into
the NY World Trade Center.

------
tobias2014
There's a very good ~20min documentary about the Y2K bug which discusses
exactly your question:
[https://www.youtube.com/watch?v=Xm5OiB3CPxg](https://www.youtube.com/watch?v=Xm5OiB3CPxg)
(by LGR)

~~~
perch56
Came here to say this. Great channel and a great explanation of Y2K.

------
jedberg
It was like most other big IT problems that are properly anticipated -- a ton
of work went into making sure it wasn't a problem, so everyone assumes there
was nothing to worry about and all the IT people were lazy and dramatic.

But that couldn't be more wrong.

------
DreamSpinner
Yes, it was real.

Keep in mind that it was also used as a significant contributing factor to
replace a lot of major legacy IT systems (especially accounting systems) at
big organisations (a lot of SAP rollouts in the late 90s had Y2K as part of
the cost justifications).

The company I worked for ran a Y2K Remediation "Factory" for mainframe
software - going through and changing dates to 4 digits, checking for leap
year issues, confirming various calculations still worked.

I worked on a full system replacement that was partially justified on the
basis of (roughly) we can spend 0.3x and do y2k patches, or spend X and get a
new system using more recent technologies and UIs.

There were still problems, but they were generally in less critical systems as
likely major systems had been tested, and were remediated or replaced.

Keep in mind that there was often much more processing that occurred on
desktop computers (traditional fat clients) - so lots of effort was also
expended on checking desktop date rollover behaviour. One place I worked at
had to manually run test software on every computer they had (tens of
thousands) because it needed reboots, and remote management was more primitive
(and less adopted) at the time.

------
barkingcat
I worked at IBM as an intern during the crossover, and there was a TON of
internal activity. It might seem like there was nothing going on, but in fact,
there were a lot of interns (like our entire intern class of about 200 that I
know of, across entire IBM in all disciplines, research, database,
microprocessors, AIX, QA, mainframes etc) who were basically doing the same
thing - Y2K readiness for our respective departments.

I worked in QA in one of their bank teller application development branch
offices, so all I did for weeks was enter in date times between 99 and 00 into
the software and test that the fixes were successful.

The unique thing about Y2K was that the problem was well understood and came
with an actual deadline, so you could project manage around it.

Any normal bug couldn't be project managed this way, and you can't just throw
interns at regular problems, whereas with Y2K, if you had the money, you could
just assign people to look at every line of code looking for date handling.

------
unoti
I worked hard on y2k issues in 1998-1999. It was a real thing for my company
at the time. It was a crisis averted. In the 80’s and 90’s I worked on many
systems where the equivalent of “max date” or “end of time” was expressed as
12/31/99. The way that these systems expressed, stored, entered, and
validated dates all had to be reworked in a series of major overhauls.

------
mncolinlee
I personally worked in Y2K support at the time on PC hardware. Most
motherboards we tested worked, but some needed BIOS updates and one model
needed a new BIOS fix which didn't exist. We swapped out the bad motherboards,
updated software, and had no problems.

In the UK, there were some medical devices (my memory says dialysis machines)
that malfunctioned over the issue.

There is an important lesson about the behavior of the media in this. They
whipped people up into a survivalist, doomsday-prepper frenzy over an issue
that could be solved simply by updating BIOS, software, and/or hardware.

With that said, the effort was very expensive because so much software and
hardware needed to be audited at every company.

------
bridgerj
There are really a couple of issues in this topic. First, was there a
potential problem due to dates? Second, was the massive scare campaign
justified?

The answer to the first question is yes. There was a potential problem.
However the companies and government departments that were affected had
started planning in the early 90s, and they prepared during the decade. Many
took the opportunity to embark on huge system upgrades. It was just one of
many issues CIOs dealt with.

The answer to the second question is no. The huge disaster scares were not
justified. Banks, airlines, insurance companies and government departments had
already fixed their systems, just like they fix other problems.

What happened was that consulting companies, outsourcers and law firms
suddenly realized there was a huge new market that they could scare into
being. They started running campaigns aimed at getting work from mid size
businesses.

The campaign took off because it was an easy issue for the media and
politicians to understand. It also played into the popular meme that
programmers were stupid. The kicker was the threat that directors who failed
to prepare could be sued if anything went wrong. Directors fell into line and
commissioned a lot of needless work.

In summary, there was the professional work carried out by big banks, airlines
etc, generally between 1990 and 1997, and the panic-driven, sometimes
pointless work by smaller firms in 1998 and 1999.

~~~
bkor
> There was a potential problem. However the companies and government
> departments that were affected had started planning in the early 90s, and
> they prepared during the decade.

I can point to several huge companies who did nothing until 1998 or even 1999.
The media scare helped with that (priority, money) a lot.

------
jarkkom
It was very much real; lots of effort was spent to make sure everything
worked correctly, especially around older billing systems which used only
2-digit years in fixed-width records.

Because of all the preparation and upgrades being done, I think the only
incident we had was when the Y2K migration manager sent out an "all clear"
email after the rollover - the Unix mail client he used formatted the date on
the email as "01/01/19100" - though I suspect he knew of the issue and didn't
upgrade on purpose, just to make a point.

~~~
dhosek
Ah, bad Perl code, outputting a four digit year by doing "19" . $year instead
of 1900 + $year. I fixed a lot of those in 1999.
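
For anyone who never hit it, a minimal sketch of that bug (Perl's localtime
returns the year as years since 1900):

    
      my $year = (localtime)[5];    # years since 1900, so 100 in the year 2000
      print "19" . $year, "\n";     # prints "19100" -- the bug
      print 1900 + $year, "\n";     # prints "2000"  -- the fix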

------
harry8
Across Fortune 500 cos, smaller cos, and all government depts in every
country of the world, nobody stuffed it up and had a total horror story of
death, destruction and bankruptcy.

Nobody. None.

Everybody got a good landing, in the pilot's sense of a good landing being one
you walk away from. Think of your boss, and all the people you work with and
ever have. And they all succeeded.

So crisis, no. There's no way everybody pulls it off if it were a real crisis.
But damn, it made sales easy for consultants to all of the above, who spent
big. "Planes will drop out of the sky!"

A very powerful way to sell is through fear. We got sold the Iraq war on
weapons of mass destruction that could kill us in our beds here! And this was
used and abused by consulting firms to make sales to managers and boards of
directors who have no clue what a computer is and what it does, and think
hackers can whistle the launch codes etc. That fear-based sales job happened
en masse and was a vastly bigger phenomenon than Y2K. But having said that,
there were so many people who bought the fear sales job that employed them
that they still believe it. Many will post here about it and you can weigh it
all up for yourself.

So yeah, there were Y2K issues; some got dealt with in advance, some didn't,
but nothing like the hype of 1999. Nothing like it.

------
rossdavidh
1) Certainly, there were real Y2K issues that had to get fixed.

2) However, what IT workers in general don't realize is how many of their
systems are broken all the time, in ways that people just learn to live with
or work around; this is not all of them, but it does include more than IT
folks realize.

3) I worked as an engineer in the semiconductor industry at the time, and
"it's not Y2K compliant, and cannot be upgraded" was a way to get obsolete
equipment replaced in a way that bypassed normal budget controls. Engineers,
salesmen, and managers all engaged in a sort of unspoken conspiracy to get the
new equipment purchased. However, this doesn't mean it wasn't a good thing
that it happened. Which makes one wonder whether a certain amount of
brokenness in accounting and software controls is not necessary for the
economy to function.

4) Countries like Japan, Russia, etc. spent a tiny fraction of the effort on
Y2K preparation, and they sailed through. This was in part because of the U.S.
overhyping it, but also because we were using interconnected computer systems
more than other countries were at that time.

So, it's a mix. It was real, it was well-handled, but there was also some
hype, and even some hype that served a real (covert) good purpose.

------
mattsears
Yes, it was real. While in college, I was working as a programmer (Perl
mostly) for a book publishing company and several database systems wouldn't
boot up after the new year. It turned out they didn't bother upgrading their
software. This was common, but by then, most software companies had patches
available - probably because of the hype. In this case, I believe the fear
provided by the media actually helped avoid a much bigger crisis.

------
ralphc
I'll add my own little data point to the pile. I changed jobs in 1998, so I
fixed problems at two companies, small to mid-size. At each place they would
have had problems if the bugs weren't found and fixed. At the second company
they asked for volunteers to get paid extra to stay the night, eat pizza,
watch movies and be prepared to fix things if they went south. They had enough
volunteers that no one had to be voluntold. They weren't needed.

------
adrianmsmith
I have always wondered the same thing. I came to the conclusion that it's
pretty difficult to determine that.

I lived and worked as a software developer through the Y2K "crisis" (although
I wasn't working on solving the crisis myself). Everyone was very worried
about it. Nothing really went wrong in the end.

Was that because there was no problem? Or because everyone was worried about
it and actually solved the problem? I don't think it's easy to tell the
difference really.

~~~
mathw
It's only hard to tell if you don't talk to the developers who were working in
the late 90s.

~~~
slantyyz
>> It's only hard to tell if you don't talk to the developers who were working
in the late 90s.

I think you mean "working ON it". Talking to developers as a broad group from
that time wouldn't necessarily produce any useful information.

The person you replied to was himself a developer working in the late 90s.
During the late 90s, I talked to a lot of developers, but only a small
percentage of them were on Y2K jobs.

~~~
sourcesmith
There were a lot of Y2K related tasks in the course of many developers'
general activity. Where I worked at the time, there were no developers solely
dedicated to Y2K work. The Y2K issue pretty much caused an employment boom for
software developers, though.

~~~
dhosek
I was fixing Y2K bugs at a company founded in 1997. Unless you never touched
date stuff, I can't imagine being oblivious to Y2K issues working in IT in
1999.

~~~
slantyyz
The startups I worked in around that time - we were using recent hardware with
4 digit years and unconcerned about Y2K issues.

So while we were -aware- of the Y2K issue, it didn't impact any of us in a
concrete fashion. We would talk about people we knew on Y2K projects, which
were mostly mission critical legacy systems.

So it's not inconceivable for devs in the 90s to have only cursory awareness
of the -real- issues that the people who worked on Y2K projects were facing
and solved.

------
vlan0
My prior UPS maintenance guy had some war stories from that time. He said
that prior to the event, it was the busiest he'd ever been. He was either
replacing affected hardware or performing software updates to solve the issue
on Liebert UPSs.

He spent New Years in a DC of a big financial firm in NYC. Apparently the firm
was so worried about a failure they shelled out big bucks to have UPS
maintenance staff onsite during the cut-over "just in case".

------
zhoujianfu
At the time I predicted it would all work out okay... just because every day
millions of computer systems broke because of unknown bugs and we were okay.
This bug we knew about 40 years in advance. My company actually ended up
having two pretty major y2k bugs we hadn’t fixed. Come January 2nd we fixed
them like we had to fix dozens of other bugs every day.

------
ninju
Yes... the Y2K bug was _very_ real and was (moderately) well handled.

Large disruptions in financial, real-time and other systems would have
occurred if not for the effort applied.

Unfortunately some problems require a certain level of media awareness and/or
hysteria before we devote the necessary resources to fix the problem _before_
it becomes a crisis.

------
gentle
Yes, it was a real potential crisis, and it was only ameliorated because lots
of companies spent tons of money testing and reviewing their systems, and
fixing bugs that they found.

Airplanes were probably never going to fall out of the sky at the stroke of
midnight, but I personally fixed tons of bugs with potential impacts in the
tens of millions of dollars.

~~~
gentle
And, like others have pointed out, teams started working on fixes well in
advance of the changeover (~4-5 years with the systems that I was familiar
with), but even with the extra money and the extra time, there were systems
that were not remediated until shortly before the changeover.

------
ArnoVW
Imagine that 1% of all software had an issue, across the entire economic
fabric of the developed world. Now imagine that this software started failing,
all on January 1st 2000, everywhere around the world. Or better still, not
failing, just silently corrupting data.

Just like the crisis we are currently facing in our health systems, it seems
unlikely that we would have had enough IT resources to deal with the issues in
real-time.

This is one of the cases of a "self-denying prophecy", much like acid rain.
There was an issue, we collectively dealt with it (better yet, we actually
anticipated it!), and now people are saying that in the end there was no
issue.

[https://www.bbc.com/future/article/20190823-can-lessons-from...](https://www.bbc.com/future/article/20190823-can-lessons-from-acid-rain-help-stop-climate-change)

------
ZacharyPitts
I was on call that night, working for Webcom, a forgotten pioneer of the early
web. Working so our servers at Exodus data center in San Jose would keep
functioning.... Nothing happened :)

Of course though, because we had spent the previous few months setting clocks
forward to see what broke, and fixing it.

------
andrewfromx
And the people that did all the coding? India! The Y2K bug put using remote
programmers in India on the map as something that works. And Y2K was perfect
for this, since the logic changes in 1000s of files were all the same. A very
easy bug to fix, but it needed humans who could understand code to do it.

------
Japhy_Ryder
Peter Gibbons: Well see, they wrote all this bank software, and, uh, to save
space, they used two digits for the date instead of four. So, like, 98 instead
of 1998? Uh, so I go through these thousands of lines of code and, uh... it
doesn't really matter.

------
eb0la
I had some SGI IRIX machines impacted by the Y2K bug: if you ran an unpatched
OS, nobody could log in after Jan 1, 2000 0:00:00Z.

One of them was running calculations 24/7 for a research group at the
university, and _fortunately_ they were able to stop the jobs in time for an
OS upgrade.

------
anigbrowl
It was indeed. Lot of diligent development work, lot of diligent field work
checking individual machines and updating software from CD-ROMs. USB storage
wasn't a thing, you burned stuff to CD or used floppy disks. You could also
use a laptop with a network cable but that was generally more trouble than it
was worth if you had to deal with Windows or Mac deployments. On large
corporate networks you could update each desk unit from a central server, but
that was impractical on many smaller/bespoke network setups. Linux system
administration was not as smooth as it is today, but it wasn't _so_ different.

------
zzo38computer
Many things worked fine despite that (some software/hardware was already Y2K
compliant even in the eighties); many people thought many more things would be
problems, even stuff that does not deal with the date at all, but of course
that doesn't make sense. Some things did cause minor problems, mainly display
errors with the date (sometimes causing columns to not line up properly, but
sometimes less severe than that). Some things would cause more problems, but
perhaps they were already fixed, or fixed soon afterward; I don't know much
about it.

------
CiPHPerCoder
The Y2K bug, in the public imagination, was premised on code like this
existing somewhere in our computers:

    
    
      const DATE_COMPUTERS_DID_NOT_EXIST = /* arbitrary */;
      
      /* snip */
      
      if (Date::now() < DATE_COMPUTERS_DID_NOT_EXIST) {
        Computer::selfDestruct();
      }
    

(See also: the Simpsons Y2K episode, which I think is a good representation of
what many non-tech people believed would happen.)
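
In reality, the bugs were far more mundane. A minimal sketch of the typical
failure mode, with all names illustrative:

    #include <stdio.h>
    
    /* Sketch of a typical real Y2K bug: a year stored as two digits.
       Nothing self-destructs; the arithmetic just goes quietly wrong. */
    int main(void) {
        int policy_start_yy = 99; /* 1999, stored as "99" */
        int current_yy = 0;       /* 2000, stored as "00" */
    
        /* Should be 1 year; two-digit math says -99. */
        printf("years elapsed: %d\n", current_yy - policy_start_yy);
        return 0;
    }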

I think it's a great lesson in the failings of the public imagination and
should serve as a warning to _not give in to moral panics_.

------
samch93
I really recommend this video from LGR about the Y2K crisis
[https://youtu.be/Xm5OiB3CPxg](https://youtu.be/Xm5OiB3CPxg)

------
yarrel
This is a question that is asked by everyone, up to and including politicians
who will have to tackle any future such crises.

Pre-Y2K I worked to fix loan systems that would have failed had their Y2K bugs
not been fixed. Not getting a loan isn't accidentally launching a nuclear
missile, but it affects your credit score and stops you buying a car.

Enough failures of this kind would have had a severe effect on the economy,
up to and including causing an economic crisis.

------
jason_slack
My first real job out of college was for a company doing only Y2K prep. That
is specifically why I was hired.

I looked at every system, decided on the fix, and coded it up in Delphi,
Access, SQL, VB, QBASIC, and C++.

It was quite enjoyable, and I was enjoying a glass of wine and a steak on the
dreaded evening, which was a Friday. Not a single phone call until Monday
morning, when my boss said to take a few days off but pay attention to my
pager. I put it in the refrigerator :-)

------
JJMcJ
I know of at least one company that had to replace some computers running an
out-of-support OS that was not Y2K compliant.

Big companies (banks, insurance, health care) had elaborate contingency plans.

There were some failures, but nothing to disrupt life for 99.9% of the
population, unless you call a website that says it's

    
    
       January 5, 19100
    

a failure.
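
For what it's worth, that "19100" usually came from one specific pattern:
gluing a literal "19" onto a counter of years since 1900. A sketch in C,
assuming the standard struct tm:

    #include <stdio.h>
    #include <time.h>
    
    int main(void) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
    
        /* tm_year counts years since 1900: 99 in 1999, 100 in 2000.
           Hardcoding the century prints "19100" in 2000. */
        printf("January 5, 19%d\n", t->tm_year);
    
        /* The fix: add, don't concatenate. */
        printf("January 5, %d\n", 1900 + t->tm_year);
        return 0;
    }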

There have been other problems, such as day 10,000 on VAX and Unix systems;
some programs had problems there too, once again mostly because they didn't
allow for the longer strings.

------
GekkePrutser
It's like the Ozone layer and acid rain.

Climate sceptics often use these as an excuse. "Yes but there was sooo much
hubbub about those and it proved to be nothing".

Well, yes it is nothing now because it was decisively and effectively handled.
The ozone layer is still recovering but on a steady path there, and acid rain
is reduced to the point of not really being an issue anymore (at least in the
western world!).

Stuff really would have gone wrong with Y2K. Maybe not Armageddon, but yes,
there was a problem.

------
FWKevents
My impression is the same as dwheeler's. Y2K could have been a crisis were it
not for the swift, expensive, and sometimes heroic actions of the coders who
saved us from calamity. I don't know that putting gallons of water in the
basement was a necessity (water treatment plants probably would have still
functioned), but even those plants are controlled by computers, so I'd say it
was collective action.

------
tstrimple
The company I'm consulting for just went through another Y2K scare: a fairly
large financial institution using two digits to track the year. They pushed
the problem out last time by updating the code to treat 00-20 as 2000-2020,
because obviously 20 years is enough time to put in the real fix, right?
Well, they've now bought themselves 30 more years, so maybe it'll actually be
fixed by 2050.
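
The stopgap they used is generally called date windowing: pick a pivot and
interpret two-digit years on either side of it as different centuries. A
minimal sketch (pivot values illustrative):

    #include <stdio.h>
    
    /* Date windowing: two-digit years at or below the pivot are read
       as 20xx, the rest as 19xx. */
    static int expand_year(int yy, int pivot) {
        return (yy <= pivot) ? 2000 + yy : 1900 + yy;
    }
    
    int main(void) {
        printf("%d\n", expand_year(5, 20));  /* 2005 */
        printf("%d\n", expand_year(50, 20)); /* 1950 */
        /* Raising the pivot from 20 to 50 "buys 30 more years":
           it just moves the cliff to 2050. */
        printf("%d\n", expand_year(50, 50)); /* 2050 */
        return 0;
    }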

------
lasereyes136
Y2K wasn't a bug at first; it was a cost- and space-saving feature. It only
became a bug because the practice never changed until very close to the
deadline.

Just a public service announcement that the decisions you make today for good
reasons can, in retrospect, be seen as a huge mistake. The future view of your
decisions will always have better knowledge than you have when you make that
decision.

------
GoldenMonkey
No. It was not. I was a software developer at a large US bank at the time. We
had already dealt with it years earlier for critical systems. All the banks had.

~~~
VLM
It was a VERY widely held belief by the general public that Y2K only applied
to real time clocks and wall time.

However, arguably most dates in corporate IT work are involved in some level
of forecasting, prediction, and future planning.

In reality, starting in 1970, anyone writing an amortization table program
for a 30-year mortgage had to work around Y2K. Anyone dealing in any way with
the expiration date of a twenty-year term life insurance policy had to start
caring about Y2K in 1980. Even a mere net-30 business-to-business payment
account either broke or didn't in November 1999. And on Dec 31, 1999, it was
hilariously charming how people all around the world assumed all computers
were located in THEIR timezone, and thus that any real-time-clock failures
would occur at precisely midnight local time where they lived, as opposed to
where the computer was actually located. Thanks to the miracle of UTC,
anything bad would have happened to our stuff early in the day, while I was
eating a late dinner, not during the local third shift, when the operations
center was overstaffed.
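
To make the mortgage case concrete, here is a sketch of why forward-looking
code hit Y2K as early as 1970 (numbers illustrative):

    #include <stdio.h>
    
    /* A 30-year mortgage written in 1970 matures in "100", which
       doesn't fit in a two-digit year field. */
    int main(void) {
        int origination_yy = 70; /* 1970 */
        int term_years = 30;
        int maturity_yy = origination_yy + term_years; /* 100 */
    
        /* A two-digit field keeps only the last two digits, so the
           maturity year becomes "00" and sorts before every other year. */
        printf("maturity year field: %02d\n", maturity_yy % 100);
        return 0;
    }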

I was working at a telco at the time, and we were very worried and
overstaffed over Y2K. Our stuff was fine, but we were pretty worried about
rioters and such if anyone ELSE failed, like maybe the power company.
Hilariously, the power company people were probably also overstaffed over
Y2K: despite knowing their stuff was fine, they were likely worried about
those telco goofballs failing and losing the SCADA links to their
substations, LOL.

In the end it seems pretty much nothing failed anywhere, as I recall. Or
rather, the failure rate for that day was no higher than for any other
average calendar date.

------
Springcleaning
It was real, and the most important issues got resolved because of the large
investments. But there was also a lot of scammy consultancy going on.

I remember visiting a smaller hotel in the UK in early 2000 where the
check-in terminal had a "Y2000 Approved" sticker with a serial number. That
made sense, but in the room, everything with a plug, including the hair
dryer, had such a sticker.

------
nsxwolf
Just imagine if no one had ever lifted a finger to fix any of the bugs. We
only talk about it being a scam because everyone collectively did such a great
job mitigating it in time.

~~~
arethuza
There is a certain class of work in the IT/software world that is utterly
thankless: if nothing goes wrong, people wonder what they pay you for; if
something goes wrong, the very same question gets asked.

~~~
Psyladine
It's not just IT/software, it's every field. It's called shitwork: work you
don't get credit for when it's done, but catch hell for when it isn't.

e.g. "what do we pay all these janitors for when this place looks spotless?"

~~~
StaticRedux
I don't think that's true. I've never heard that about any other industry,
certainly not about janitorial work. Doctors curing a patient, lawyers
winning a big case, secretaries keeping the schedule up to date, teachers
with kids who pass their tests, truck drivers who deliver on time: all of
that is plainly visible. IT is a field in which few people see the results or
understand the effort required.

~~~
Psyladine
It's true of any industry or organizational structure that is considered a
cost center.

As for your examples, those are all boolean: they either do it successfully
or they don't. If they do, it's what they're paid for; if they don't, they
catch hell. There is nothing intrinsic to IT that differs.

------
Fnoord
As a home user, I had software on my old computer (an 80286 XT from the end
of the 80s) which ended up saying we were in the year 00. I don't remember
whether the time/date was otherwise correct.

I do realize it could've been a lot worse were it not for the many efforts of
people in the tech industry.

And in case you wonder: I would bet the same is going to happen with regard
to 32-bit time and 2038.

~~~
mcswell
Did you go talk to Mary and Joseph and tell them what was going to happen?

------
spamizbad
Yes.

The media did sensationalize it with stuff like "Planes falling out of the
sky!", but there would have been massive disruptions due to systemic
date/time issues. Tons of money was spent testing systems to ensure they were
Y2K compatible, and if they weren't, they were either patched or hastily
replaced with something that was.

------
generalpass
My recollection is that there could be real problems, but the solutions were
already planned and appropriate resources allocated prior to media hype.

I recall being in Seattle on New Years Eve and there were cops everywhere in
full battle dress with armored personnel carriers and nobody was out partying
like it was 1999, which was a shame because the weather was unusually mild.

------
cronix
It was real, and we get to deal with it again in 18 years. Without a great
deal of thought, I'm wondering why we didn't address both at the same time.

[https://en.wikipedia.org/wiki/Year_2038_problem](https://en.wikipedia.org/wiki/Year_2038_problem)

------
andrewdubinsky
It was very real. One of the reasons there was not widespread failure was the
creation of software that scanned for two-digit dates and fixed systems at
scale. That, and people spent years preparing for it. A few systems failed,
but nothing on the scale of what was predicted.

------
ivanhoe
I remember walking around my home city on the morning of 1/1/2000, and about
half of the public digital clocks in the streets were showing invalid dates
or just errors, including the one specially put up for the New Year
celebration countdown and pompously named "The Millennial Clock".

------
altitudinous
At some point those of us who worked on Y2K will be dead. This question will
come up again, and there will be no one left to prove it or defend it. At
that point it will become a giant conspiracy. I'll be happy to have been part
of the legendary mysterious Y2K coverup!!!!

------
HackerLars
It absolutely was real. Countless software engineers, including myself, worked
our tails off updating code to avoid the worst of the issues. It worked. I
think it's one of software's great success stories (of course after an initial
lack of planning).

------
SkyMarshal
It was real, and there was very little fallout b/c tens of thousands of IT
staff worked for years leading up to 2000 to fix the bugs.

The Indian IT outsourcing industry was effectively created by the Y2K bug.
Those companies did a large amount of the bug fixing.

~~~
Thorentis
More details / sources on the outsourced fixing?

~~~
SkyMarshal
[https://www.google.com/search?q=y2k+indian+outsourcing&oq=y2...](https://www.google.com/search?q=y2k+indian+outsourcing&oq=y2k+indian+outsourcing)

------
strickinato
There's an amazing podcast that does a deep dive here:

[https://open.spotify.com/show/1VgCMwF8Pp4WRjchwVwApz](https://open.spotify.com/show/1VgCMwF8Pp4WRjchwVwApz)

------
justizin
whenever this comes up at the bar or with random friends i am amused at how
many people think that preparing for something was not worth it because the
problem did not happen.

this is basically ALL OF LIFE in IT OPERATIONS. lol.

------
JSeymourATL
Perception is reality, so goes the old line.

It's certainly true an absurd amount of resources went into 'fixing' the
problem.

Apply this to any crisis du jour: drugs/terrorism/climate/viruses etc...

Never let a Good Crisis go to waste.

------
nerfhammer
there were some public digital clocks at my university that displayed "1900"
afterward, so they were turned off and never fixed or turned on again. that
was the only effect of Y2K that I ever saw.

I could see why people who were worried that banking software written in
COBOL in 1983 would break had to spend significant sums of money making sure
it didn't. Since it was an extremely predictable problem with a specific,
known, non-negotiable deadline, everyone who had money to lose if there were
a problem had plenty of time and incentive to prevent it.

------
kentbrew
Spent that night in the NOC, for absolutely nothing. The only thing that broke
was our guestbook, which came from Matt's Script Archive and thought the year
was 19100 on New Year's Day.

------
phonebucket
Computerphile did a video on it which I enjoyed:
[https://www.youtube.com/watch?v=BGrKKrsIpQw](https://www.youtube.com/watch?v=BGrKKrsIpQw)

------
senthil_rajasek
Yes it was. I worked on fixing y2k bugs in telecom systems.

(Shameless plug, here is my humorous take on it)

[https://youtu.be/tbUg8-RdwXE](https://youtu.be/tbUg8-RdwXE)

------
wjdp
We integrate with a lot of third parties, with varying levels of legacy
conversion required to do so. At least one still stores years as two-digit
numbers. This is the payments industry.

------
snvzz
Meanwhile, Linux didn't move time_t to 64-bit despite all that warning.

They're still working on it now, and are not fully prepared.

The BSDs fare much better there, as most of them did this a long time ago.
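
For context, the 2038 problem in a nutshell: a signed 32-bit time_t runs out
of seconds on 2038-01-19. A quick sketch, using fixed-width types to stand in
for a 32-bit time_t:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    
    int main(void) {
        /* The last second a signed 32-bit time_t can represent. */
        time_t t = (time_t)INT32_MAX; /* 2147483647 */
        printf("%s", asctime(gmtime(&t))); /* Tue Jan 19 03:14:07 2038 */
    
        /* One second later no longer fits in 32 bits; on an unfixed
           system the counter wraps to -2147483648, i.e. December 1901. */
        int32_t wrapped = (int32_t)((uint32_t)INT32_MAX + 1u);
        printf("wrapped counter: %d\n", wrapped);
        return 0;
    }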

~~~
cesarb
That's the curse of backwards compatibility. AFAIK, on BSDs not only are the
kernel and the C library developed together (so neither needs to stay
compatible with older versions of the other), but also programs are expected
to be recompiled for every major release of the operating system (so there's
no need to stay compatible with binary-only software from the 90s).

~~~
snvzz
Yet Linux broke things several times around that era. I recall Linux 2.0
being very breaking; gcc 3.1 -> 3.2 also broke almost everything a few years
later.

Linux has had plenty of opportunities to fix this, all wasted.

------
sys_64738
Yes, but the media tries to paint it as an overblown reaction. Thanks to the
hard work and long hours of all the IT folks and programmers, it was reduced
to no real firefights.

------
pintxo
This might capture how some people envisioned it:
[https://youtu.be/WhF7dQl4Ico](https://youtu.be/WhF7dQl4Ico) /s

------
dreamcompiler
I worked on US national security Y2K issues for two years prior. Yes, it was
real. The fact that so many people now think it was a big nothing means my
team did its job.

------
HackerLars
Absolutely real. I, along with many others, worked our tails off updating code
so systems would still perform. It worked. It was a huge success.

------
pabs3
Yes, and the Y2038 crisis is real but just like last time we are working on
fixing it early instead of waiting for it to become a problem.

------
terrislinenbach
The 2038 bug is coming. Will the response be complacent because conspiracy
theorists think Y2K was fake news?

------
rconti
Not a programmer, but an Ops guy 6 months into my career.

All I know is, I plan to retire by age ~57 (before the 2038 bug hits :) )

------
hprotagonist
Perhaps the definition of a successfully managed crisis is that exactly this
question is asked after it was managed.

------
jkingsbery
It ended up not being a big deal. But there are still stories in the news
about bugs in software that assumes only two digits of the year matter (see
e.g., [https://www.theguardian.com/uk-news/2020/feb/12/home-office-...](https://www.theguardian.com/uk-news/2020/feb/12/home-office-tells-man-101-his-parents-must-confirm-id)).

------
eyegor
Follow up: for those who have dealt with both, which was more stressful? SAP,
or y2k?

------
keithnz
yes, I was working in the airline industry at the time (the logistics side)
and there was a bunch of stuff, across multiple vendors, that had to change
or it would've broken. It would've effectively grounded airlines.

------
bob33212
No, it was not a crisis. There were plenty of bugs in legacy systems that
needed to be fixed, but legacy systems have bugs all the time, for example
when the dates for daylight saving time changed. The general public was not
well informed and generally didn't have the software background to understand
the problem.

------
IshKebab
Not really. At least not like it was portrayed. The public thought that all
computers stored dates like `99` for 1999, so potentially all code that
handled dates/times would need to be fixed.

But actually most software uses epoch time or something similar. So the scope
of the problem was much smaller than the news implied.
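
For anyone unfamiliar: epoch time just counts seconds from a fixed instant
(1970-01-01 UTC on Unix), so nothing special happens at a century boundary.
A quick illustration:

    #include <stdio.h>
    #include <time.h>
    
    int main(void) {
        /* Unix time stores no two-digit year anywhere; midnight
           2000-01-01 UTC is just second number 946684800. */
        time_t y2k = 946684800;
        printf("%s", asctime(gmtime(&y2k))); /* Sat Jan  1 00:00:00 2000 */
        return 0;
    }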

------
kevin_thibedeau
Just wait for Y2100. There are still lots of RTCs storing two-digit years.
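
For the curious: typical RTC chips keep the year as two packed-BCD digits in
a register and leave the century to the host software, which is where Y2100
comes from. A sketch (register value illustrative, modeled loosely on common
parts like the DS1307):

    #include <stdio.h>
    #include <stdint.h>
    
    /* Convert a packed-BCD byte (e.g. 0x99) to its decimal value (99). */
    static int bcd_to_int(uint8_t bcd) {
        return (bcd >> 4) * 10 + (bcd & 0x0F);
    }
    
    int main(void) {
        uint8_t year_reg = 0x99; /* what the chip reports in any year xx99 */
    
        /* 1999? 2099? The register alone can't say; software has to
           supply the century, and much of it assumes 20xx forever. */
        printf("year within century: %02d\n", bcd_to_int(year_reg));
        return 0;
    }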

------
29athrowaway
Imagine having to pay 100 years in interest or late fees.

~~~
dhosek
I was working on billing systems in 1999. For our company at least, the danger
was in autobilling these amounts to a customer's credit card (I actually
accidentally put a bug into production that increased rather than decreased
the balance due after a payment, causing exponential growth in the amount the
customer owed, so I saw what happens in this circumstance). End result, angry
customer, reputational loss and likely the loss of a minimum of one billing
cycle's revenue (assuming customer didn't cancel outright).

From the customer's perspective, they would typically lose the use of their
billing credit card for a day until the charges were reversed. This was less
of an issue in 2000 than it would be today, as far fewer regular payments
happened via credit card, but it would still have been a major disruptor.

------
olliej
It was a crisis, but it was actually prepared for and handled, which is why
things didn’t go wrong.

The trick to avoiding predictable crises is to actually do something before
they happen, in order to avoid them.

------
Marazan
Yes it was a real crisis. Yes it was well handled.

------
Wiretrip
Yes it was real. My fav phrase to describe the work was 'KY2K Jelly - helps
you insert 4 digits where only 2 would go before' :-)

------
rhacker
Even if not, it was a pretty great payout event for older devs who are pretty
much in their 70s now.

------
thaumaturgy
I was a programmer for a large school district in the bay area from 1996 to
1999. What a great way to cut my teeth professionally!

Yeah, it was a big deal. Pretty much all dev work was done by me and one other
guy. How much dev work could a school district have back then? Oh, lots. Every
school, and in some cases individual educators, would send in requests for
various reports, and each one required configuring a mainframe job, running
it, and doing some kind of thing with the output (conversion to a "modern"
format on a 3.5" disk or printing it or something). Every payroll cycle
required a lot of manual labor, every report card cycle, there were daily
student attendance jobs, and this particular district had a rather advanced,
for the time, centrally-managed network across the entire district with
Solaris DNS.

So on top of all this regular workload, we had to go over pretty much every
single line of COBOL in that mainframe, visually, and search for Y2K-related
bugs. There were many. The Solaris box needed to be patched too, and the first
patches that came out weren't great and I didn't know what I was doing yet
either.

So we started on this in earnest in Summer of 1997, while everyone was out of
school. We ran a lot of test jobs, which involved halting all regular work,
monkeying around with the mainframe's date, and then running a payroll job to
see what blew up. By late 1999, my mentor there was pulling multiple all-
nighters. He had a family of his own too and it really impacted his health.

There were mountains of greenbar printouts in our little office, all code,
with bright red pen marks. Such was life when working on a mainframe at the
time. The school district also brought out of retirement the guy who had
written much of the key operating system components for our mainframe. I
believe he came on as a consultant at rates that would be pretty nice even by
today's standards.

In the end, school restarted after winter vacation and most things ran okay. A
few jobs blew up where we had missed something here or there, but everyone by
then had got sort of accustomed to the chaos and it just needed a phone call
to say, "it broke, we're working on it, call you tomorrow".

Rough estimate, there was probably over a thousand hours' worth of labor to
fix it all. Had that not been done, virtually nothing in that school district
would have worked correctly by the beginning of 2000. (Some things started
failing a month or two in advance, depending on the date arithmetic they
needed to do.)

And these weren't just "year 100" printer errors; a lot of things simply
barfed in some fancy way or another, or produced nonsense output, or -- in the
most fun cases -- produced a lot of really incorrect data that was then handed
off to other jobs, which then produced a lot of even more incorrect data, and
then saved it all to the database.

------
stretchwithme
An avoidable problem can become real if you don't take the actions required to
avoid it.

------
wrmsr
Was the cold war real?

------
mywittyname
We are lucky that the Y2k issue was so understandable by the public. I doubt
we will have such luck addressing the Y2038 problem.

~~~
criley2
>We are lucky that the Y2k issue was so understandable by the public.

As someone who was forced to spend Y2K in a "prepped" cabin on the side of a
mountain with two years of supplies buried underneath, I think you might
overestimate the quality of the public's response to Y2K.

The public did not maturely understand that software needed to be updated and
everything was OK.

There was some real panic out there. It was arguably the biggest "End Times"
event of the modern era, definitely IMO surpassing "2012" and other
"apocalypse panics".

The Y2K preppers and panic were, I think, the foundation of the modern
prepper movement and of the public's desire to flip from conspiracy to
conspiracy predicting collapse.

~~~
circlefavshape
!!!

I think you might be using an unusual description of "the public"

~~~
michaelmrose
I think you probably misunderstand the actual size of this public. Between
30% and 40% of Americans believe that the earth was created as-is between 6k
and 10k years ago.

[https://www.livescience.com/46123-many-americans-creationist...](https://www.livescience.com/46123-many-americans-creationists.html)

The view that the earth evolved over time through a completely natural
process is mostly gaining ground through losses from "god-guided evolution",
not from young-earthers, who are going strong.

~~~
s_gourichon
Not quite the same topic, but even popes have been saying, for at least the
past 23 years, that the Bible should not be taken literally.
[https://en.m.wikipedia.org/wiki/Evolution_and_the_Catholic_C...](https://en.m.wikipedia.org/wiki/Evolution_and_the_Catholic_Church#Pope_John_Paul_II)

------
dhosek
I was working at a telecommunications startup in 1999. They were founded in
1997. A big part of what I was doing was fixing Y2K bugs.

That said, none of the bugs would have been critical to the operation of the
services. Everything was in the billing systems, and I think that, if left
unfixed, it would have been more of a reputational hit than anything.

Also, "begs the question" doesn't mean what you think it means.
[https://en.wikipedia.org/wiki/Begging_the_question](https://en.wikipedia.org/wiki/Begging_the_question)

~~~
rriepe
Personally, I've given up on begging the question. It just means "raises" now.
The descriptivists always win in the end.

~~~
JdeBP
Look at it another way: a centuries-old mistranslation is finally being fixed.
(-:

~~~
nitrogen
Do we have a replacement phrase?

~~~
cygx
Some possibilities: Circular reasoning, assuming the conclusion, petitio
principii.

------
bjourne
What does the science say? I have seen exactly zero studies which claim that
the y2k bug would have led to disastrous consequences if action had not been
taken.

Compare that to the CFC situation in the 80's. Scientists agree that the
mitigating actions we took saved the ozone layer. Or compare it to the current
global warming crisis. Scientists tell us that if we do nothing, we will
suffer catastrophic climate change.

Media never tells you the truth, but the scientists usually do. So you listen
to them.

~~~
fanf2
Try looking for y2k articles here
[https://m-cacm.acm.org/magazines/decade/1990](https://m-cacm.acm.org/magazines/decade/1990)

~~~
bjourne
Which of those articles provide evidence that the y2k bug could have been a
major disaster?

