> Years in the range 0..99 are interpreted as shorthand for years in the rolling "current century," defined as 50 years on either side of the current year. Thus, today, in 1999, 0 would refer to 2000, and 45 to 2045, but 55 would refer to 1955. Twenty years from now, 55 would instead refer to 2055. This is messy, but matches the way people currently think about two digit dates
I mean I understand if that two-digit year is a part of a form input (where the logic makes sense btw).
But why would a software developer decide to use two-digit years inside the application? Isn’t that like the first thing you would think about when you’re implementing the format?
For a lot of very old legacy applications: RAM and data-interchange size. For a lot of the Y2K remediation of those apps: not requiring every component that sees a date in a data structure to change. Instead, just the date-processing pieces consume new library functions that use a sliding rather than fixed interpretation of two-digit years, or some similar stopgap that allows the two-digit format to remain.
For newer code, well, that's how the developer is used to writing dates by hand, they have a library available that seems to do something reasonable with two-digit dates, and, look, the simple unit tests they wrote passed, so it must be okay.
It is definitely not intended for internal use of years though!
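The sliding-window rule quoted at the top can be sketched in a few lines (a hypothetical helper, not any particular library's API; the behavior at exactly 50 years out is ambiguous in the quoted description, so this sketch picks one side):

```python
def expand_two_digit_year(yy, current_year):
    """Map a 0..99 year into the window current_year - 50 .. current_year + 50."""
    # Start from the two-digit year placed in the current century...
    candidate = current_year - current_year % 100 + yy
    # ...then shift by a century if it falls outside the rolling window.
    if candidate < current_year - 50:
        candidate += 100
    elif candidate > current_year + 50:
        candidate -= 100
    return candidate

# In 1999: 0 -> 2000, 45 -> 2045, 55 -> 1955. In 2019: 55 -> 2055.
```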
Oh dear, there are far, far worse... especially when timezones come into play. I wouldn't wish them on my worst enemy.
Asked him about it, he said that they warned management not to window the code to 2020, and showed alternatives that would take longer to code but would be future proofed.
Most agreed with enough convincing. Some did not, usually with the argument of "the systems won't even be around in 2020!".
I think in many cases the thinking is: s/the systems/I/
And as it turned out in 2020 the system was still being used, but unlike in 2000, the source code was lost.
So there can be applications today that parse ASN.1 datetimes manually (1) but only expect the UTCTime format. They'll break when they encounter a cert with a notAfter in 2050 or later, because it'll be in the GeneralizedTime format instead.
Luckily this one will be detected over a period of time rather than happening at precisely 2050-01-01 00:00:00, so there's more time to fix it in each application that has the bug.
(1) if the application wants to parse it into a `struct tm` for manipulation, for example. For that specific case, openssl 1.1.1 added `ASN1_TIME_to_tm`, so it's only a problem for applications that don't use openssl or need to support older versions. One can hope that at least the latter will stop being a requirement as 2050 gets closer.
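For illustration, the two encodings differ only in the year field, so a parser that only expects UTCTime has nowhere to put a post-2049 date. A sketch (a hypothetical helper operating on the already-extracted time string, not openssl's API):

```python
from datetime import datetime, timezone

def parse_x509_time(s):
    """Parse an RFC 5280 validity time string."""
    if len(s) == 13 and s.endswith("Z"):        # UTCTime: YYMMDDHHMMSSZ
        yy = int(s[:2])
        # Fixed window per RFC 5280: 50..99 -> 19xx, 00..49 -> 20xx.
        year = 1900 + yy if yy >= 50 else 2000 + yy
        body = f"{year:04d}" + s[2:12]
    elif len(s) == 15 and s.endswith("Z"):      # GeneralizedTime: YYYYMMDDHHMMSSZ
        body = s[:14]
    else:
        raise ValueError("unsupported time encoding")
    return datetime.strptime(body, "%Y%m%d%H%M%S").replace(tzinfo=timezone.utc)

# UTCTime runs out at the end of 2049; 2050 onward must be GeneralizedTime.
```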
$ pem=''
$ while read -r line; do
>   pem="$(printf '%s\n%s' "$pem" "$line")"
>   if grep -q 'END CERTIFICATE' <<< "$line"; then
>     openssl x509 -inform pem -enddate -noout <<< "$pem" | cut -d= -f2 | date -f - -I
>     pem=''
>   fi
> done < /etc/ssl/ca-bundle.pem | sort -r | head -1
$ for x in /etc/ssl/certs/*.pem; do openssl x509 -in "$x" -dates -noout; done | grep After | cut -d= -f2 | sort -k4n
Oct 6 08:39:56 2046 GMT
Subject: C = PL, O = Unizeto Technologies S.A., OU = Certum Certification Authority, CN = Certum Trusted Network CA 2
I'll be just over 60 years old when that happens. With any luck, I'll be retired and not have to worry about it... I think having to deal with both Y2K and Y2K38 in one career is too much.
Retiring by 2038 is also my strategy :)
One of the signs of inexperience is using abs(int) in hash functions...
You mention it as a sign of inexperience, and I can't really disagree with you. Just adding a nuance that inexperience can be due to loss of experience as easily as lack of experience.
Pretty much all modern CPUs are two's complement, so the minimum integer (just the highest/sign bit set, all other bits zero) has no positive counterpart: negating it overflows back to itself.
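Concretely: in 32-bit two's complement, abs(INT_MIN) wraps back to INT_MIN, so `abs(hash) % nbuckets` can yield a negative index. A sketch simulating the 32-bit behavior (Python ints don't overflow, so the wrap is done by hand; in C the overflow is formally undefined behavior, and the wrap shown is what two's complement hardware does):

```python
INT_MIN = -2**31

def c_abs32(x):
    """What abs() does to a 32-bit two's complement int: negate, wrap to 32 bits."""
    r = (-x if x < 0 else x) & 0xFFFFFFFF
    return r - 2**32 if r >= 2**31 else r

def bucket(h, n):
    """Safer bucket index: clear the sign bit instead of calling abs()."""
    return (h & 0x7FFFFFFF) % n
```

c_abs32(INT_MIN) is still INT_MIN, i.e. negative, while bucket() is always in range.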
HN is a community. You needn't use your real name, of course, but you should have some identity for others to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?query=by:dang%20community%20identity...
Guidelines such as "throwaway accounts are ok for sensitive information, but please don't create accounts routinely" are naive. Non-sensitive information can reveal sensitive things when aggregated. We've all read articles about how anonymized data sets can still contain enough data to identify, or come close to identifying, individuals. I'm sure a number of those articles have been linked to from this web site. Practicing user account rotation is a useful tool for mitigating such risks.
Maybe it's time to rethink this particular site guideline. Let's not throw out the baby with the bathwater when it comes to users making contributions to the site.
They were most likely just kicking the ball down the road either thinking that a more permanent solution will be applied in the 2 decades to come, or just not caring at all because they won't be around in 20 years.
Technical debt tends to accrue this way and many times it's not in bad faith. It's meant as a short term solution to buy some time and ends up being permanent because someone doesn't understand what's the point in spending more money to fix an issue that was "obviously" fixed already.
Well, you know the old saying: save the grease for the squeaky wheels. When you face a major hurdle, sometimes the best course of action is to take a shortcut just to avoid it and fix it later when you can properly allocate resources for a solid fix. But most times after the fix it's hard to justify fixing it "again".
I've had managers who said "I understand the issue but we have a budget and more critical cracks to fix", and I've had managers who said "what are you going on about, looks good to me". Result is the same but the potential of each attitude is vastly different. The first kind of manager knows when "that" crack becomes a priority. The second kind of manager is unaware there's a crack.
For example, you can grep a Rust code base for "unwrap", "expect", and "unsafe", while in other languages, ignored return codes or unchecked exceptions are harder to detect.
Similarly, (if I am not mistaken) you can grep Swift code for "try", and find every call site that might throw an exception. Can't do that in Java, C#, or Python!
Tool designers can help by differentiating their products through how well they allow for tracking technical debt.
It is not irresponsible to use temporary fixes in the face of a hard deadline - what is irresponsible is not going back post-deadline to deal with them with a long-term view. Unfortunately, bonuses typically don't get paid for long-term results...
I remember my best friends older brother saying the same thing. "Well, this isn't a perfect solution, but it will give us about 20 years to figure it out." and then he chuckled quite a bit.
Essentially, my reaction was that they got paid to make sure the software avoided breaking, not to fix the core problem. There were so many software programs affected, companies didn't have time to completely fix them. Most knew it was just a patch to avoid a major financial setback.
However, in many cases such an issue comes from the core data store of the company and then flows into all derived systems. Updating this is a major project: you need to update all consumers, most likely by first adding a whole new API layer instead of direct data access, then update all the data (don't forget the process to work on the data from the tape archive!), and then move on.
And then comes reality: Y2K was fixed with hotfixes to the systems that do calculations, then different systems do their own workarounds, then one thinks "oh, there is this important business change now, but the refactoring has 20 years of slack" and it's pushed and pushed. Five years later somebody stumbles over the hotfix, wonders, asks management, which again prioritizes other tasks ... and suddenly it's 2020.
Fixing technical debt is often overlooked, as priorities are on features with immediate business value.
If I remember right, there is probably another group 20 more years out that used some date changes to get by for Y2K. I do hope someone replaces them.
2025, 2030, 2040, 2050, 2060, 2075, 2080, 2090.
Every developer will have picked some random round number that made logical sense to them.
So I can relate that it's easy to write buggy date handling code even today.
But at least we have tests. All those poor programmers that worked on Y2K didn't have them. And those that work on Y2K20 bugs probably still don't.
Guess which one we chose
"Sitting the problem out" is a valid problem solving strategy. :-)
In some cases “kicking the ball down the road” makes perfect sense, just fixing what is most urgent.
Given the amount of effort that Y2K required, and people thinking it actually wasn't a big deal, I have little hope that 2038 will go well.
> By 2038 I'll be retired, and probably using medical equipment containing 32 bit microcontrollers.
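The 2038 boundary itself is easy to compute: a signed 32-bit time_t tops out at 2**31 - 1 seconds after the epoch, and the next second wraps the counter to -2**31, landing back in 1901:

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
last = epoch + timedelta(seconds=2**31 - 1)   # last representable second
wrap = epoch + timedelta(seconds=-2**31)      # where the counter lands next

print(last)  # 2038-01-19 03:14:07+00:00
print(wrap)  # 1901-12-13 20:45:52+00:00
```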
He realized that Y2K was going to be a disaster on a cataclysmic scale. So he put himself into hibernation with a timer set to wake him up a couple years after Y2K when everything would be presumably fixed.
He wakes up and finds out that due to a Y2K bug in his timer, he has been asleep for much longer than he expected. It's way into the future. Bill Gates greets him. People can see virtual screens in mid air, and tap on invisible (to us) keys and make gestures in mid air. Life expectancy has been greatly extended so far that nobody is sure how long people will live. There is now plenty of energy for all and unlimited resources.
The programmer expresses that he's glad his timer woke him up. Bill Gates says, "oh, your timer didn't wake you up. It was permanently stuck on a Y2K bug. We chose to wake you up."
"But why?", the programmer asks.
Bill Gates explains, "Well, it's the year 9997, and the Y10K bug is right around the corner, a lot of critical systems need to be fixed, and it says in your records that you know COBOL."
I would say grab some popcorn, but I wouldn't trust that microwave.
At least the programmers writing code in the 80s had an excuse. It was the Reagan years. Nobody thought civilization would make it until the year 2000.
The correct fix is to add 1900.
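Assuming this refers to struct tm-style year fields, which count years since 1900: string-pasting "19" in front is the classic bug, adding 1900 is the fix. A quick demo (Python's time module stores the full year, so we subtract 1900 to mimic C's tm_year):

```python
import time

# 946_684_800 is 2000-01-01 00:00:00 UTC; mimic C's years-since-1900 field.
tm_year = time.gmtime(946_684_800).tm_year - 1900

print("19%d" % tm_year)   # the bug: prints 19100
print(1900 + tm_year)     # the fix: prints 2000
```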
Windows 95 is version 4.0 (Windows 98 is 4.10), XP is v5.1/v5.2, Vista is v6.0, Windows 8.1 is v6.3, and Windows 10 is v10.0.
This is the closest I can find to a definitive statement: https://www.reddit.com/r/technology/comments/2hwlrk/new_wind...