edit: Link to C++ N3344 paper we wrote, describing how we represent dates (times/datetimes are similar): http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n334...
edit2: Anyone interested in examples -- on the early side:
ISIN GB0000436070, UK Gilt, issued 1853, perpetual annuity
ISIN XS0560190901, Dong Energy A/S, callable 06/01/3010
ISIN US786581AA66, HSBC Holdings Luxembourg SA, callable 10/15/2997
I hope I live to see it.
Me? I'm waiting for the idiots who claim it's a worldwide geek conspiracy à la Y2K. Here's a haiku (senryu) I wrote for that:
Is Y2K real?
The problem's being solved by
Men who can't find dates.
In another 68 years or so, change them all back to unsigned.
Of course if you miss any...
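For anyone counting along: the same 32 bits read as signed run out in January 2038, but read as unsigned they last until February 2106, which is roughly where the 68 years come from. A minimal sketch of the two limits (C, assuming the host has a 64-bit time_t so the 2106 value is representable):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t signed_max   = (time_t)INT32_MAX;   /* last second of signed 32-bit time */
    time_t unsigned_max = (time_t)UINT32_MAX;  /* same bits, reinterpreted as unsigned */

    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&signed_max));
    printf("signed 32-bit limit:   %s UTC\n", buf);   /* 2038-01-19 03:14:07 */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&unsigned_max));
    printf("unsigned 32-bit limit: %s UTC\n", buf);   /* 2106-02-07 06:28:15 */
    return 0;
}
```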
It really does annoy me when idiots talk about the Y2K problem as if it were nothing at all, just a hyped-up non-event.
I, for one, worked on fixes for Y2K, and it all went seamlessly because we (collectively) worked hard and got it right: no reactors/missiles blew up, and the banks did not fail.
"Installed a new CloudStor 2.0TB Pro and no matter what I try on the System...Date/Time settings (manual or automatic), the device date (and ultimately new folder create dates) set themselves back to February 7, 2036 1:28AM. My device timezone is set for UTC-05:00 Eastern Time (US & Canana)."
So anti-lock brakes are going to stop working after the wraparound? Seems to me that either (a) they're relying on code that they shouldn't--why should ABS depend on date and time? or (b) the article is just giving examples of embedded systems that could fail.
If case (a) is true, I imagine (or would like to think) that they're just using the timestamp to perform some simple physics computations for controlling the brakes. If that's the case, they could just use the uptime of the processor instead (see the sketch below); then they would only hit this issue if the car ran nonstop for 60+ years, which it's reasonable to assume will not happen.
If case (b) is true, then the article needs a bit more research into examples of systems that actually rely on datetime.
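On the uptime idea above: if the controller only needs time deltas for the physics, a monotonic since-boot clock does the job and never touches the calendar. A sketch using POSIX CLOCK_MONOTONIC (assuming a POSIX-ish environment; a real ECU would more likely read a hardware tick counter):

```c
#include <stdio.h>
#include <time.h>

/* Seconds since boot: immune to wall-clock resets and to 2038,
   since it only wraps if the system runs for decades nonstop. */
static double uptime_seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double t0 = uptime_seconds();
    /* ... sample wheel-speed sensors here ... */
    double dt = uptime_seconds() - t0;  /* the delta the physics needs; no dates involved */
    printf("dt = %.9f s\n", dt);
    return 0;
}
```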
Sure you have to set the time on your microwave, but none of the ones I've had displayed a calendar date.
I never imagined the internal clock could affect camera focus, but it did.
Now that I think of it, I don't believe ECUs have internal batteries, so every time your battery dies the clock would be reset anyway.
Shameless plug: my project `fluxcapacitor` may also be useful for testing:
It's all the integration/glue code that's the problem.
The dark matter of the programming world.
There will be a LOT of patching.
Seriously though, in 20-25 years I think we cannot even fathom how fast and parallel databases will be; an ALTER will probably take a few seconds on a billion-row table.
compute martini equals gin plus vermouth.
Other times there would be absolutely critical systems which were developed, but were tucked away on an unknown server and we'd find out about them at the last minute and panic trying to figure out if they would work correctly.
And if you get into languages that practice severe bondage and discipline on datatypes, finding out that the subtly altered data that works in one system is blowing up another one just isn't fun.
What's worse: sometimes the logic just assumes that dates can't be past a certain time. Ooh, let's set the date to maxint of this data type and use that as the cutoff in our code. This code won't exist then, right? Naturally, the tests hard-code a datetime in them and it won't show up even when clocks are deliberately set forward to test the code. Of course, I've hacked on code that was written before I was born, so the whole "this code won't exist then" actually means "I won't be working here then".
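A minimal sketch of that antipattern, with hypothetical names (END_OF_TIME and subscription_active are mine):

```c
#include <stdint.h>
#include <time.h>

/* The "this code won't exist then" cutoff: INT32_MAX pressed into
   service as "forever". On 2038-01-19 03:14:07 UTC this stops
   meaning "never expires" and starts meaning "just expired". */
#define END_OF_TIME ((time_t)INT32_MAX)

int subscription_active(time_t expires, time_t now) {
    if (expires == END_OF_TIME)
        return 1;               /* sentinel: treat as never expiring */
    return now < expires;
}
```

And because the tests pin `now` to a hard-coded date, setting the system clock forward never exercises the failure.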
That's the most terrible code I've ever read. There are no units, and it doesn't specify the ratio of gin to vermouth. That's not a martini, that's a mess.
The actual time code will work fine with 64-bit integers, at least once it's compiled correctly in a 64-bit environment, which covers a huge percentage of servers right now.
I'm just too old school to waste 4 bytes on millions of rows (and on index storage, which does bloat) when I am not dealing with future dates.
Right now ALTER in most SQL databases is kinda slow and not done in parallel, but that is certainly going to change, and next-gen SSDs this decade will also change everything.
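On the "compiled correctly in a 64-bit environment" point: whether a given build is actually safe is one compile-time check away. A sketch in C11 (on 32-bit glibc you would also want to build with -D_TIME_BITS=64 together with -D_FILE_OFFSET_BITS=64, available since glibc 2.34):

```c
#include <assert.h>
#include <time.h>

/* Refuses to compile on platforms where time_t is still 32 bits. */
static_assert(sizeof(time_t) >= 8, "time_t is not Y2038-safe on this build");
```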
I'd think the number of cases in which it's a worthwhile tradeoff for new development to save 4 bytes times even tens of millions of rows (each million costs you 4 MB, the size of a song) is vanishingly small today.
I'm not saying optimizations like that are never worth it, but as a general rule the case would have to be made extremely well for me to ever sign off on it. And I bet in the huge majority of cases, the time cost of making that case would exceed the extra storage cost for using "unoptimized" data structures.
To save a similar amount of money on storage today, it'd of course probably have to be tens or hundreds of kilobytes wasted per record. That kind of optimization would very often still be worth looking into.
If you're still there in 2038. And remember to do it.
So keeping software up to date might not be enough. We'll also have to pressure our peers to update and to adopt new protocol versions (which might have other changes in addition to just the widening of the timestamps).
Heck, you can let it wrap forever and interpret timestamps as 'now plus or minus 50 years'.
It's precise enough for logging now and also timeproof.
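That sliding-window idea is the same trick NTP uses across eras: of all the 64-bit instants that match the 32-bit value modulo 2^32, pick the one closest to "now" (the window actually comes out to about plus or minus 68 years). A minimal sketch, assuming a non-negative 64-bit now:

```c
#include <stdint.h>

/* Reconstruct a full 64-bit timestamp from a wrapped 32-bit one,
   assuming it refers to a moment within ~68 years of 'now'. */
int64_t unwrap32(uint32_t stamp, int64_t now) {
    int64_t guess = (now & ~(int64_t)UINT32_MAX) | stamp;  /* same 2^32 block as now */
    if (guess - now > ((int64_t)1 << 31))
        guess -= (int64_t)1 << 32;   /* actually from the previous block */
    else if (now - guess > ((int64_t)1 << 31))
        guess += (int64_t)1 << 32;   /* actually from the next block */
    return guess;
}
```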
This is fortunate because there are huge legacy Java codebases. However, problems may still arise where Java talks to other things (databases, input/output).
1) Someone driven insane by too much exposure to IBM mainframes (seems quite likely)
2) A great performance artist - making us all think
3) A real time traveller
According to him, the farther back or forward in time he goes, the more things change.
So, with that said, we might still be screwed. But maybe we can find an IBM 5100 and solve this problem.
It sure (would have saved / will save) him a ton of trouble.
Unless he is John Titor!
I assume the more interesting aspect of Y2038 will be the societal one... Will we all be stocking our bunkers with 5-gallon pails of dried pinto beans and arming our personal attack drones?
I'd like to think we'll be a bit wiser, but somehow doubt it. Software is full of date related bugs and fails all the time. The world didn't end before, during, or after Y2K. I don't think it's ending in 2038.
In the early 1990s you could get four letter domains and you could get them for free. There was no registration fee until around 1995.
... was a very entertaining and interesting talk from linux.conf.au 2013. The first ten or so minutes are amazing theater and quite hilarious (at least they were when I saw it live), and the rest of the talk is also very informative.
It deals with quite a clever way to re-use the existing timezone databases that were designed to work with 32-bit dates, such that they become usable with 64-bit dates. This is important because it is already an extremely difficult task to keep the original 32-bit timezone databases up to date (what with all the political decisions to change daylight saving time at a few months' notice).
The talk is from the perspective of a Perl developer, but the discussion applies to anyone interested in solving the Year 2038 problem in their language of choice.
or a 91 MB MP4 download:
Using 64-bit integers is overkill: they will work until the year 584 billion, and it's twice as much data for every timestamp. Adding one byte (5-byte integers) is uncommon, but that already supports dates out past the year 36000, so it would be more than enough.
In full, till the year 584 942 419 325.
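The arithmetic behind those figures, using the same 365-day year (a quick back-of-the-envelope check, not production code):

```c
#include <stdio.h>

int main(void) {
    const double SECS_PER_YEAR = 365.0 * 86400.0;  /* 365-day years, as above */
    /* Unsigned ranges of 32-, 40-, and 64-bit second counters. */
    double ranges[] = { 4294967296.0, 1099511627776.0, 18446744073709551616.0 };
    int bits[] = { 32, 40, 64 };
    for (int i = 0; i < 3; i++)
        printf("%2d-bit unsigned seconds since 1970: good until roughly year %.0f\n",
               bits[i], 1970.0 + ranges[i] / SECS_PER_YEAR);
    return 0;
}
```

That prints roughly 2106 for 32 bits, 36835 for 40 bits, and 584942419325 for 64 bits, matching the figure above.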
Yes because we have so little memory and so many timestamps to store that scrounging on the size of timestamps is highly important and absolutely not something which has been a pain in the ass half a dozen times in the short lifespan of computers.
Wait, we don't and it has.
The arbitrary numerical limits are usually in the implementation, as many a buffer overflow bug has demonstrated.
A lot of mainframe code that was fixed for Year 2000 had two-digit years in the data, which were converted to four-digit years by adding 1900. The "fix" for a lot of that was to say: if it's less than 70 (or whatever cutoff handled all the dates for the application), add 2000, otherwise add 1900. 72 => 1972, 00 => 2000, 13 => 2013. This fails in 2070, with 2070 being reported as 1970 again.
That same software is untouched 13 years later... I don't have high hopes of it being updated soon.
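The windowing fix in miniature (pivot of 70, as described above):

```c
/* Classic Y2K windowing: two-digit years below the pivot are
   assumed to be 20xx, the rest 19xx. Correct until 2070, when
   2070 quietly comes back around as 1970. */
int expand_year(int yy) {            /* yy in 0..99 */
    const int pivot = 70;            /* application-specific cutoff */
    return yy < pivot ? 2000 + yy : 1900 + yy;
}
/* expand_year(72) == 1972, expand_year(0) == 2000, expand_year(13) == 2013 */
```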
You can't create an authenticated S3 URL without an expiration date, and it doesn't accept an expiration date after 2038.
I expected to be called out for this comment some day, but never have -- makes me wonder how often people read through code.
Get a Pi(e) and enjoy eating it as the world collapses...or when the world is just fine you will have just enjoyed a delicious pie.