Of course, the singularity will be here by then and fix it for us just before turning us all into paper clips, yada yada.
I will advertise with the slogan "Epochalypse... NOW."
Or you can do the same as the Y2K people did: Y2K... TOMORROW!
Fernand Braudel (in The Structures of Everyday Life) talks of how it was the staple food of the poor in Turkey, and I think in Persia. US commercial yogurt is weak and sugary; the Eastern variety is much more lifelike.
Why make "artificial fertilizer" a goalpost? Yield is what matters, there's apparently a million things that go into improving rice paddy yields.
Epochs & date formats aren't only used to represent & display the current date, but dates in the future, e.g. think about reservation systems, graphing libraries that display things 10-15-50 years in the future etc.
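To make that concrete, here's a minimal sketch (assuming a 32-bit time_t; variable names are made up) of how a booking a couple of decades out goes wrong long before 2038 itself arrives:

  #include <stdio.h>
  #include <stdint.h>
  #include <time.h>

  /* Minimal sketch, assuming the platform's time_t is 32 bits: a reservation
     ~25 years in the future wraps into the past once truncated to 32 bits. */
  int main(void) {
      int64_t now = (int64_t)time(NULL);
      int64_t reservation = now + 25LL * 365 * 24 * 3600;  /* ~25 years out */
      int32_t stored = (int32_t)reservation;  /* what a 32-bit time_t would hold */
      printf("64-bit value: %lld\n", (long long)reservation);
      printf("32-bit value: %d (negative => 'before 1970')\n", stored);
      return 0;
  }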
The Long Now Foundation uses five-digit dates like 02017 in their work. :)
Hopefully they don't have any octal-related bugs.
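(The joke, for anyone who hasn't hit it: in C and friends, a leading zero makes an integer literal octal, so a five-digit year written as 02017 isn't 2017 at all. A tiny illustration:)

  #include <stdio.h>

  int main(void) {
      int year = 02017;      /* octal: 2*512 + 1*8 + 7 */
      printf("%d\n", year);  /* prints 1039, not 2017 */
      return 0;
  }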
Nobody will care about significant year digits in a few billion years
PIC 9(5) for 9995!
With suitable groundwork, there will be a willing and wealthy market looking for people to assuage their fears - a service I see myself as happy to provide. Much as it was in the "Millennium bug" area, a lot of the effort in getting the business is PR spadework, but I've got 15-20 years of prep time to position myself suitably. I also hope to provide a slightly more useful service than many Millennium consultancies did at the time.
select unix_timestamp('2038-01-19') returns 2147472000
select unix_timestamp('2038-01-20') returns 0
If this class of business is not seeing a problem this minute, it isn't a bug. And it won't be a serious-enough bug to spend money on until whatever workarounds they can think of start having negative effects on income.
Sources: experience with Y2K remediation, experience with small business consulting & software development, experience with humans.
So true. That said, thinking 38 years into the future is usually not sensible for most businesses, because it's very possible that they're bankrupt before then. Thinking 21 years into the future also offers poor ROI for the same reason.
Strikes me that big businesses would have thought "will we need to do this again in a few years?" and acted appropriately, and that there should be a trickle-down effect as large corps demand future-proofing in IT products.
Yes, negligence, ignorance, lack of foresight, corner-cutting, and other human traits feed into that.
If you know anything about programmers, you know they found that error at some point during development, thought about how other people dealt with it, remembered that Windows used to treat years >30 as 1900s and <30 as 2000s, and did the same. So the people in this thread planning their retirement around solving this will probably have to figure out which random Unix timestamp values are treated as pre-2038 and which as post-2038. And then they will have to undo all the last-minute spaghetti code tying it together in the original program.
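For what it's worth, the analogous hack for timestamps would look something like this sketch (the function name and pivot value are made up; it's the same "windowing" idea, just applied to a wrapped 32-bit counter):

  #include <stdint.h>

  /* Illustrative "pivot" hack: values below some cutoff are assumed to be
     post-2038 wraparound and get bumped forward by one 32-bit period. */
  #define PIVOT 0  /* assumed cutoff; a real system would pick its own */

  int64_t widen_timestamp(int32_t t32) {
      int64_t t = t32;
      if (t < PIVOT)
          t += (int64_t)1 << 32;  /* undo one wrap of the 32-bit counter */
      return t;
  }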
And, even if they happen to be safe, they do tend to pay well just for an audit to confirm that.
Now that I think about it, I probably had an Ericsson T28 in 2000 and it did kinda suck.
The problem with mainframes is that they can't be trivially upgraded or migrated to 64-bit like modern OSes on x86 hardware can be. Vendor lock-in, retirement of the OS, bare-to-the-metal coding, etc. caused this. If these mainframes were running a modern OS, it would have been trivial to upgrade them to a 64-bit version and make whatever small changes are needed to date storage in the old 32-bit apps. You won't need a wizened COBOL guy for this. A first-year CS student would be able to look at C or C++ code and figure this out. Modern languages are far more verbose and OO programming makes this stuff far easier to work with.
Comparing mainframes to Unix systems really doesn't make sense. It's two entirely different designs. Not to mention, the idea of running a 32-bit OS today is odd, let alone 20+ years from now, especially with everything being cloudified. You'd be hard pressed to even find a 32-bit Linux system in 20+ years, let alone be asked to work on one. That's like being asked to set up 1000 Windows 98 workstations today.
I have no idea why you think running 32-bit today is "odd." 32-bit desktops and small servers are still perfectly usable today. 32-bit microcontrollers are going to be around for a very long time (just look at how prevalent the 8051 remains), and a lot of them are going to be running Linux. It also makes a lot of sense to run 32-bit x86 guests on AMD64 hypervisors - your pointers are half the size so you can get a lot more use out of 4GiB of memory.
(disclaimer: IBMer, but not a mainframe person)
These installations exist, and (outside of tech startupland) aren't even that strange, although he is probably pushing things. The owner of that business is proud of how long he's made his IT investment last; his main concern is that dirt-cheap second-hand replacements that can run 98 are apparently getting harder to find.
Installation of win98 is so quick, so easy, compared to XP.
But websites don't render so well in the win98 version of IE. I don't think it knows about CSS.
(not available just yet)
E.g. what language do you use? Is it SW or HW that you "ship"? You probably perform some kind of verification and/or validation - what does the tool chain look like?
Do you perform model-checking on all possible inputs?
Lots of questions, and you do not have to go into detail, but I would appreciate your input, as it is an interesting topic.
The product is custom hardware built with off-the-shelf parts like microcontrollers, power converters, sensors, and memory. The Texas Instruments MSP430 family of microcontrollers is popular for this type of application. They are built around TI's 16-bit RISC CPU core with a bunch of peripherals like analog-to-digital converters, timers, counters, flash, RAM, etc.
I don't work on medical devices, so validation is more in line with normal product validation. We certainly have several very well staffed test teams: one for product-level firmware, one for end-to-end solution verification, others for other pieces of the overall solution. We are also heavy on testing reliability over environmental conditions: temperature, pressure, moisture, soil composition, etc.
The firmware is all done in-house, written in C. Once in a while someone looks at the assembly the compiler generates, but nobody writes assembly to gain efficiency. We rely on the microcontroller vendor's libraries for low-level hardware abstraction (HAL), but other than that the code is ours. The tool chain is based on GCC, I believe, but the microcontroller vendor configures everything so that it cross-compiles for the target platform on a PC.
Debugging is done by attaching to the target microcontroller through a JTAG interface and stepping through code, dumping memory, checking register settings. We also use serial interfaces, but the latency introduced by dumping data to the serial port can be too much for the problem we're trying to debug, and we have to use things like toggling IO pins on the micro.
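For anyone curious what the pin-toggling trick looks like, here's a rough sketch assuming an MSP430-style register interface (P1DIR/P1OUT from msp430.h); the real code obviously differs:

  #include <msp430.h>

  /* Toggle P1.0 around the code under test so a logic analyzer or scope can
     time it, without the latency of dumping data over a serial port. */
  void debug_marker_init(void) {
      P1DIR |= BIT0;           /* configure P1.0 as an output */
  }

  void section_under_test(void) {
      P1OUT |= BIT0;           /* marker high: section start */
      /* ... code being measured ... */
      P1OUT &= ~BIT0;          /* marker low: section end */
  }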
We don't model the hardware and firmware, and don't do exhaustive all-possible-inputs testing like one would do in FPGA or ASIC verification.
I need to go, but if you have more questions, feel free to ask, and I'll reply in a few hours.
I am surprised that you do not apply some kind of verification or checking using formal methods; however, it might be the case (at least that is my experience) that this is still too inconvenient (and so expensive) to do for more complex pieces of software.
For your pleasure, I did dig up a case study on using formal methods on a pacemaker since I think someone mentioned it upthread.
David Wheeler has the best page on tools available:
Here's a work-in-progress of my list of all categories of methods for improving correctness from high-assurance security that were also field-proven:
I am fairly new to this field and I share your surprise that more formal methods are not used in development. To be honest, the development process in my group and others I'm familiar with can be improved tremendously with just good software development practices like code reviews and improved debugging tools.
An example of the issues in this area:
For what it's worth, devices I work on have a few wireless interfaces while guaranteeing a 20-year lifetime: one interface is long-range (on the order of 10 km), two are short-range (on the order of a few mm). There is no way we could get to a 20-year lifetime doing WiFi (maintaining current battery size/capacity) for the long-ish range, and maybe not even BT for the shorter range.
and just like any other IoT, using generic chips and stacks is cheaper.
run QNX or Linux on it and walk away.
there are DIY insulin pump monitors out there already that use Linux on a Raspberry Pi - see here: https://openaps.org/
A RasPi has no chance of running for 20 years off a single A-size non-rechargeable non-serviceable battery.
Once we hit the ten-years-out mark, you're going to see things like service expiry dates roll past the magic number. The shit will hit the fan by degrees.
reek v. To give off a foul odor
wreak v. To inflict or execute
Once 64-bit processors became mainstream, the 2038 problem pretty much solved itself. There are only disincentives to building a 32-bit system today, let alone in 20+ years.
Unlike with Y2K, where there was nothing but incentives to keep using the Windows and DOS systems where the 2000 cut-over was problematic. The non-compliant stuff was being sold months before Jan 1, 2000. 32-bit Linux systems have been old hat for years now, let alone 20+ years from now.
Not to mention that those old COBOL programs were nightmares of undocumented messes and spaghetti code no one fully understood, even the guys maintaining them at the time. Modern C or C++ or Java or .NET apps certainly can be ugly, but even a second year CS student can find the date variables and make the appropriate changes. They won't be calling in $500/hr guys for this. Modern systems are simply just easier to work with than proprietary mainframes running assembly or COBOL applications that have built up decades of technical debt.
Those 70s and 80s programmers were working on mainframes with multi-decade depreciation. We work on servers and projects with 3-5 year depreciation when we aren't working on evergreen cloud configurations. Not to mention we've already standardized on 64-bit systems, outside of mobile, which is soon following and typically has a 2-year depreciation anyway.
Airline reservation systems run on software written in the 50s and 60s
Your views on evergreen this and disposable up to date that are very naive.
Embedded systems and business systems live for a VERY long time.
But some industrial- or military-spec ARMv7 core running a critical embedded system or two, in 2038? Twenty-year design lifespans (often with servicing and minor design updates) are definitely not unheard of, and successful systems often outlive their design lifespan.
Edit: NTP has something of a protocol issue to be addressed as well.
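(NTP's classic timestamp carries 32 bits of seconds counted from 1900, so its era actually rolls over before 2038. A rough back-of-the-envelope check, ignoring leap seconds:)

  #include <stdio.h>
  #include <time.h>

  int main(void) {
      /* Seconds from the NTP epoch (1900) to the Unix epoch (1970):
         70 years, 17 of them leap years. */
      const long long ntp_to_unix = (70LL * 365 + 17) * 86400;  /* 2208988800 */
      long long wrap = (1LL << 32) - ntp_to_unix;  /* Unix time when NTP era 0 ends */
      time_t t = (time_t)wrap;
      char buf[64];
      strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
      printf("NTP era 0 ends around %s\n", buf);   /* ~2036-02-07 */
      return 0;
  }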
Technically, "always" is a stretch. But it's shorter than "until the age of the universe is well over an order of magnitude bigger than it is now".
Even if there are issues, more than likely they'll be able to handle it internally. OO programming isn't going anywhere and modern languages and concepts are easier to work with than piles of undocumented COBOL from Y2K. They won't be calling you with help to change date fields. That's trivial stuff.
There will be Fortune 500 companies that have old NTP clients running somewhere in 2038... pretty much guaranteed. They'll also have apps with 32-bit time_t structures running as well, database columns that overflow, etc. Or maybe they won't, but aren't sure. You sell them a service that audits all of those things: scripts that look for troublesome stuff using source code greps, network sniffing for old protocols, static analysis, simplistic parsing of ldd, etc. And a prepackaged methodology, spreadsheets, PowerPoint to socialize the effort, and so on.
It was the same for Y2K. Fixes for many things were available well ahead of time. Companies had no methodology or tools to ensure that the fixes were in place.
Also, doesn't it depend on what the language/database does as much as the system?
When you look up "integer": https://www.cs.cmu.edu/Groups/AI/html/hyperspec/HyperSpec/Bo...
"An integer is a mathematical integer. There is no limit on the magnitude of an integer."
What happens when an integer overflows from a fixnum (single-word representation) is that it gets upgraded to a bignum behind the scenes.
IMO Common Lisp is the only programming language that handles time correctly out of the box, and, aside from Scheme (http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z...), is the only programming language with proper support for numbers.
Its datetime implementation, however, is implemented partially in C, and does not support arbitrary timestamps.
The AI's will be scrambling to fix the problem.
You have: (2^64 / 2) seconds
You want: years
(2^64 / 2) seconds = 292277265670.798 years
THE END IS NIGH1111!!!!!
The linux kernel (and many other applications) solve this with a tuple of 64-bit ints (seconds, nanoseconds) where 0 <= nanoseconds <= 999999999. Compare this to simply 64 bits of nanoseconds, which would run out in roughly 2554 CE.
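For illustration, the shape of that split (field names are made up, not the kernel's actual struct):

  #include <stdint.h>

  /* Illustrative (seconds, nanoseconds) pair: 64-bit seconds covers hundreds
     of billions of years, and the sub-second part stays in 0..999999999. */
  struct wide_timestamp {
      int64_t  sec;    /* seconds since the epoch */
      uint32_t nsec;   /* 0 <= nsec <= 999999999 */
  };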
Other systems still (perhaps most commonly) are using double floats for seconds. Under that scheme, nanoseconds were only representable until Feb 17th 1970. The last representable microsecond will be some time in 2106, and the last representable second won't be for another 150 million years or so.
Personally, I'm happy with the precision afforded by floats. Timing uncertainty (outside niche applications) is generally much larger than a single nanosecond, and even microseconds are a bit suspect.
Let's not even get into timeval which uses the same size field for microseconds.
Fortunately for me, a fixed OpenSSL was already available by the time I'd found the bug in it.
Does this seem horrible to anyone else? Why not fix stat()? Does this syscall have to be so highly preserved even when it will be broken?
One advantage of the OpenBSD approach of being able to make intrusive changes: their time_t was made 64-bit in the 5.5 release in 2014.
Admittedly this is much harder for Linux, as they can't make the change and verify it in a single place, due to the separation of kernel and userland and the fact that Linux has many distros.
The tricky part starts if you also have to keep the old libraries updated with security patches.
On BSDs, or Windows, or most every OS, there's a base "userland" library (e.g. libc) which serves as the kernel API and hides whatever ABI the specific syscalls use; Linux doesn't have that.
It also moves it to a place where you can have somewhat abstracted types (so chances are the client will work with just a recompile as it picks up the updated typedef), you can much more easily prepare the transition by e.g. using various flags or even trying alternatives beforehand (for instance OS X originally added a stat64 call before rolling back that choice and using macros to transition stat to 64-bit), and you are able to stop before a completely mangled syscall is actually attempted (by checking for some version symbol which must be set if you compiled for the 64-bit version of the various syscalls).
The libc (~POSIX) call is "pretty standard" and uses typedef'd pseudo-abstract types (OS X's stat has been 64-bit optionally since 10.5 and by default since 10.6, though the 32-bit version seems to remain available in 10.11 by compiling with _DARWIN_NO_64_BIT_INODE); the underlying syscall is not in any way.
If you updated stat() to write 64bit values into a different struct, any existing program calling it with a pointer to an old struct would get garbage data (and a potential buffer overflow).
Renaming the function also makes debugging easier - after statx() is in widespread use, stat() could be replaced with a placeholder that raises SIGTRAP, immediately flagging epoch-unsafe programs still in use.
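For reference, a minimal sketch of calling it (statx() is exposed by glibc 2.28+ on Linux 4.11+; treat this as an outline rather than gospel):

  #define _GNU_SOURCE
  #include <fcntl.h>      /* AT_FDCWD */
  #include <sys/stat.h>   /* statx(), struct statx */
  #include <stdio.h>

  int main(void) {
      struct statx sb;
      /* stx_mtime.tv_sec is 64-bit even on 32-bit userland. */
      if (statx(AT_FDCWD, "/etc/hostname", 0, STATX_MTIME, &sb) == 0)
          printf("mtime: %lld\n", (long long)sb.stx_mtime.tv_sec);
      return 0;
  }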
No direct insight, but -
a.out -> ELF, glibc2, and NPTL threads radically shifted threading and required kernel/userland cooperation in the Linosphere. We now have ELF symbol versioning widely supported, which should make this even easier, so I suspect there will be some sort of long-run transition period and some fun C #ifdef macro fun over the long haul - e.g. I could see this new statx() thing (which apparently has additional information) being the baseline, and then having some -DLINUX_2038 gizmo which redefines stat() in terms of this function when present, possibly with some sort of ld trickery to splice together the appropriate functions in the case of shared libraries, yadda..
64 bits would also allow you to cover the entirety of history, all the way back to 13.7 billion years ago when the Universe came into existence, but instead the UNIX time format is shackled to within ~68 years of 1970.
Add to that the fact that memory was very much not cheap at the time. Memory for the PDP-7 (the first computer to run UNIX) cost $12,000-20,000 for 4kB of memory. In 1965 dollars. In 2017 terms, that means that wasting four bytes had an amortized cost of three hundred to six hundred dollars. And that's for each instance of the type in memory.
I think the developers made the right choice.
The first definition was 60ths of a second since 1970-01-01T00:00:00.00, stored in two words (note that a word is 18 bits on a PDP-7!). That definition was later changed.
Also, Linus could have defined `time_t` to be 64-bit when he started Linux.
So GCC may have had 'long long' already when Linus started working on Linux.
I think the main reason nobody pushed back on a 32-bit time_t is that back then much less was done with date and time data. I don't think time rollover would have been perceived as a big problem, given that it would only happen every 100 years or so.
In the decades since we have become used to, for example, computers being connected to each other and so in need of a consistent picture of time; to constant use of calendaring and scheduling software; to the retention of important data in computers over time periods of many decades. None of these things was done or thought about much back then.
Even the 32-bit Unix versions shipped with this limitation for a very long time.
They could have made it unsigned instead of signed, which would have made it work until 2100 or so, but I think a 68-year horizon is more than most systems being built today have.
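(Quick check of the unsigned horizon: 2^32 seconds is about 136 years, so an unsigned counter started at 1970 would wrap around 2106.)

  #include <stdio.h>

  int main(void) {
      double years = 4294967296.0 / (365.2425 * 86400.0);  /* 2^32 seconds in years */
      printf("2^32 s ~= %.1f years -> unsigned rollover around %d\n",
             years, 1970 + (int)years);                    /* ~136.1 years -> 2106 */
      return 0;
  }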
C actually didn't have unsigned integer types in the beginning. They were added many years later and also not at the same time. For example, the Unix V7 C compiler only had "unsigned int".
>but I think a 68-year horizon is more than most systems being built today have.
That's a lot of time, especially if we see Linux breaking into the mainstream about 1995 or so. That's 43 years to worry about this. Meanwhile, we saw Microsoft break into the mainstream at around 1985, which only gave us 15 years to worry about Y2K.
It would be more accurate to say that "no language had a two-word integer type." 1960s CDC 6000-series machines had 60-bit words, and Maclisp got bignums sometime in late 1970 or early 1971.
In the late 1970s, the cutting-edge microprocessor was 16-bit. The first 32-bit Intel chip was the 386, which debuted in 1985.
The TRS-80, a common small computer in the late 70s, offered 4 KB-48 KB of RAM.
When using hardware with that capacity, overflowing time_t in 2038 is hardly a concern.
For example, if every file stores three timestamps (mtime, ctime, and atime), then that's an extra 12 bytes per file to store a 64 bit timestamp vs a 32 bit timestamp. If your system has five thousand files on it, that's an extra 60 KB just for timestamps. In 1970, RAM cost hundreds of dollars per KB, so this savings was significant.
It's why we have "creat" instead of "create", it's why file permissions are tightly packed into three octal digits (as one of the old systems Unix ran on was actually a fan of 36-bit machine words, so 9 bits divided things more evenly at the time). It's why C strings are null-terminated, instead of the more sensible in every way length-delimited, except that length delimited strings require one extra byte if you want to support the size range between 256-65535. Yes, the programmers of that time would rather have one extra byte per string than a safe string library. Pre-OSX Mac programmers can tell you all about dealing with one-byte-length-delimited strings and how often they ended up with things truncated at 255 chars accidentally.
In an era where "mainframes" shipped with dozens of kilobytes of RAM, yeah, they cared.
Hmm, every software gig I've had in the past 5 years that's exactly what I've been expected to do because the extra ten bucks a month for a bigger VM is wayyy less expensive than engineering time. Interesting times.
They may not have had a meeting about it, but I think it's exceedingly unlikely that whoever decided to assign a 32 bit int to store time didn't give some consideration to the date range it could represent. Otherwise how would they know not to use a 16 bit int?
5x 16-bit registers. So the ability to operate on 1x 64-bit number, and some change, if you loaded them all. How many instructions do you think it would take to add/subtract 2x 64-bit integers vs 2x 32-bit integers on such a machine? Not to mention having to implement and debug this logic in assembly on a teletype, vs using a native instruction.. (see "Extended Instruction Set (EIS)" in same link). No one would have considered 64 bits at all, because it would have been a huge hassle and not worth it, even beyond thinking ahead in this way..
Besides.. if 'the last OS I worked on' was probably the first or second interactive timesharing system ever written, give or take (e.g. MULTICS/ITS), and I worked on it at a low level, because that's what people did, then chances are I might have talked to the person who came up with the idea of how to store the time on that system.. who conceivably could be the 2nd or 3rd person ever to actually implement this, ever.. And if this is the case, don't you think that person would have thought about it somewhat?
Programmers at that time were often much better at these things.
See also: http://catb.org/jargon/html/story-of-mel.html
(which itself was posted in 1983 concerning the same topic...)
I'd suggest spinning up some SIMH VMs and mucking around for a while with early unices (v5, v7, 32V, 4.3BSD), and probably ITS or TOPS-10/TWENEX as well ... it is quite illuminating and very insightful.
When your machine has a 16 bit processor and a few dozen kilobytes of RAM you look to save wherever you can. 64 bit number support was primitive and quite slow as well.
It's in the same bag as IPv4 "only" supporting a few billion addresses, hindsight is always 20/20...
Moreover, even 64-bit timestamps wouldn't be good enough for certain applications that require sub-second precision. PTP (the Precision Time Protocol), for instance, uses 96-bit timestamps to get nanosecond granularity. You always have to compromise one way or another.
> Addresses are fixed length of four octets (32 bits). An address begins with a network number, followed by local address (called the "rest" field). There are three formats or classes of internet addresses: in class a, the high order bit is zero, the next 7 bits are the network, and the last 24 bits are the local address; [...]
7 bits of network times 24 bits of local address already gives more than two billion addresses.
IPv4 was running out of class B's, those 64k address chunks, when CIDR was introduced.
Pretty sure it was related to space being an issue. In every place where you needed to save time you likely didn't want to use more space than you had to. This was also a driving factor as to why years were stored with only the last two digits.
In 2017 we have no problem store-wise making it a 64-bit integer. But in the 90s and earlier? I think it would have been a hard sell to make a change that would future proof them beyond 2038 especially when so many play the short term money game.
A choice that gets you 40 years down the road, instead of millions of years down the road is a good choice, when you don't even know if you're going to have roads in 40 years.
/* Urbit time: 128 bits, leap-free.
** High 64 bits: 0x8000.000c.cea3.5380 + Unix time at leap 25 (Jul 2012)
** Low 64 bits: 1/2^64 of a second.
** Seconds per Gregorian 400-block: 12.622.780.800
** 400-blocks from 0 to 0AD: 730.692.561
** Years from 0 to 0AD: 292.277.024.400
** Seconds from 0 to 0AD: 9.223.372.029.693.628.800
** Seconds between 0AD and Unix epoch: 62.167.219.200
** Seconds before Unix epoch: 9.223.372.091.860.848.000
** The same, in C hex notation: 0x8000000cce9e0d80ULL
** New leap seconds after July 2012 (leap second 25) are ignored. The
** platform OS will not ignore them, of course, so they must be detected
** and counteracted. Perhaps this phenomenon will soon find an endpoint.
This is almost the same situation, except I assume slightly less understandable to a non-programmer (you have to understand seconds-since-1970 and why we'd do that instead of storing the date as text, powers of 2 and the difference between 32 and 64-bit).
“I asked CBO to run the model going out and they told me that their computer simulation crashes in 2037 because CBO can’t conceive of any way in which the economy can continue past the year 2037 because of debt burdens,” said Ryan.
I love politicians.
But if you took code that was compiled with each version, the binary data that they will produce/consume for dates between the epoch and 2038 is bit for bit identical.
The effects of this problem are closer than they seem - only 14 years away or less
time_t epoch = -100;
I don't understand, not knowing much about BSD. Is this an LTS/support thing?
Can someone explain?
The Linux kernel can't freely do this as then the ABI break is placed on various distro maintainers and software authors because there is no clear point in time they can say ABI $FOO will break on date $BAR.
But they can break the ABI because they do not want to maintain compatibility with old proprietary binaries. It's a source world, in the sense that any software can and will be recompiled if and when needed. That doesn't mean every user has to compile their own system.
1) for my definition of crappy, not compiling PostgreSQL support is the most common for me
http://www.macworld.com/article/2026544/the-little-known-app... (scroll down)
I do not speak, write, or search for things using Chinese characters. Seems as though this problem must have been heavily Google'd for by Chinese speakers - why else would it have popped up in my search recommendations?
Btw: Google Translate tells me 年問題 means "year problem"
Perhaps this information was important for ensuring the safety of Kylin, which started out as a sort of Chinese DARPA-style project to get the state off of MS Windows. Kylin was announced in 2006. It was supposedly based on FreeBSD 5.3.
Strange thing is, Kylin later became known to use the Linux kernel (with an Ubuntu influence). Google search recommendations, which should be based on a recent volume of searches, would be expected to yield "2038年問題 linux" rather than "2038年問題 freebsd" if they suggested anything about Kylin development. Maybe some of those FreeBSD-Kylin systems are still being heavily used.
Or perhaps there are a lot of embedded systems being produced in China which use FreeBSD.
I know maybe it sounds pedantic, or perhaps out there, but I think that token dates fixed and specified as positive or negative infinity give a mathematical value that can then be reasoned with formulaically, just like we do in calculus. We keep running into finite limits, which is what keeps causing problems such as Y2K, Y2K38, and the beginning of Unix time (1970-01-01) - maybe if we treated the beginning and end of time as infinity, some new method of reasoning about dates would become more apparent. I'm not sure, as I haven't gone all the way down the rabbit hole with this idea yet.
I've heard people talk about the risk to cars, but what other kinds of embedded systems will still be in use after 20 years? Maybe certain industrial machines?
All BSDs have, as far as I'm aware of, solved this years ago.
Short summary: Many systems (including Unix) store time as a signed 32 bit int, with the value 0 representing January 1st 1970 00:00:00. This number will overflow on 03:14:07 UTC on 19 January 2038.
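If you want to see that date for yourself, a quick check is to format INT32_MAX as a UTC date:

  #include <stdio.h>
  #include <stdint.h>
  #include <time.h>

  int main(void) {
      time_t t = (time_t)INT32_MAX;   /* 2147483647 seconds after the epoch */
      char buf[64];
      strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
      printf("%s\n", buf);            /* 2038-01-19 03:14:07 UTC */
      return 0;
  }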
Y2K is more of a formatting / digit representation problem than a pure data type overflow. The solution for Y2K was to switch the representation of year from 2 to 4 digits, along with coding and logic changes to go along with this.
For Unix / Linux, the solution for the 2038 problem involves changing time_t from 32 bits to 64 bits. At a higher level (e.g. what's in your C++ code), instinctively I don't think this in itself would involve as many code changes (maybe some data type changes, but probably fewer logic changes than Y2K, that's my guess). I believe several platforms have already moved towards 64-bit time_t by default... some support this by default even on 32-bit systems, such as Microsoft Visual C++ -- https://msdn.microsoft.com/en-us/library/3b2e7499.aspx
Since this involves a data type overflow issue, though, we're dealing more with platform-specific / compiler / kernel type issues. I don't know, for instance, how easily 32-bit embedded systems could handle a 64-bit time_t value. I understand that there are some technical issues with Linux kernels (mentioned in some of the comments) that prevent them from moving to a 64-bit time_t regardless of platform (time_t should always be okay on 64-bit platforms; it's the 32-bit platforms that will have the issue...)
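A quick way to see which width you actually get on a given platform/compiler is just to print sizeof(time_t); with Visual C++ it's 64-bit by default unless the project defines _USE_32BIT_TIME_T (per the MSDN page above).

  #include <stdio.h>
  #include <time.h>

  int main(void) {
      printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
      return 0;
  }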
The good news is we have 21 years to think about it...
The highest date I could make with node+chrome was 'Dec 31 275759', which cozies-up pretty close to that (8639977899599000)
From the ECMAScript Spec:
The actual range of times supported by ECMAScript Date objects is slightly smaller: exactly –100,000,000 days to 100,000,000 days measured relative to midnight at the beginning of 01 January, 1970 UTC. This gives a range of 8,640,000,000,000,000 milliseconds to either side of 01 January, 1970 UTC.
That work, he said, is proceeding on three separate fronts
I can't read that without thinking of the turbo encabulator.
The speaker warned that traffic lights would stop working. Maybe someone more techno-literate than me can explain why that would be a genuine concern, but from my perspective at the time it seemed like the guy was making money from fear mongering.
This is global adoption; some countries, including the United States, have already hit 20%.
I'm finding it really hard to believe, as someone in Guatemala, that 6% of requests to Google here are made over IPv6. Is there any way to gain more insight? e.g. what ISPs are responsible?
Edit: APNIC has similar data published here: https://stats.labs.apnic.net/ipv6/GT
> Also they tend to rely more on enterprise network appliances which have bad IPv6 support in my experience.
This I would believe.
> IPv6 is more of a boon to consumer applications since carrier-grade NAT is a nuisance and otherwise you need an IP per customer.
It would have been a slight boon at work, too. HR perennially makes me grab documents/data off my home machine, and I cannot wait for the day when I can just `ssh` to a domain name. My .ssh/config aliases are getting pretty good, but it still adds considerable latency to pipe everything through a gateway. (Alternatively, I could run SSH on non-standard ports, but I've yet to get to mucking around with the port-forwarding settings for that.)
There were also times when we needed to do stuff like employee laptop-to-laptop communications, and the network just wouldn't deal with it. I was never sure if this was NAT, or just that Corp Net liked to drop packets. (It seemed rigged to drop basically anything that didn't smell like a TCP connection to an external host. ICMP wasn't fully functional, which of course makes engineering more fun when you're having your personal desktop at home do pings or traceroutes for you, but that doesn't help if the problem is on your route.)
according to https://mobile.slashdot.org/story/16/08/20/2059216/ipv6-achi... it's only the US mobile carriers that are over 50%.
Well, I won't say you are stupid. But do you realize we are talking about a time format "designed" 40+ years ago? And some CPUs are still compatible with chips from the '80s and '90s. To imagine all that will go away and be solved in 20 years is not logical.
Also, the problem isn't with PCs (which will be upgraded) but the billions of IoT, industrial controls, and other embedded devices that lack easy upgrade paths. Things like elevators, pressure release valves, cars.
We're already bordering on it being a little late. Y2K sucked for the developers of 1999, but they had few computers to worry about. They were less interconnected, and all the Unix/Linux-based systems weren't at risk. Imagine trying to patch or replace just 20% of the embedded devices when we get to 2036.
What fascinates me is that we didn't start addressing the issue right after Y2K. Perhaps we would have if more computers had failed.
How different is computing now compared to 1996?
Do you consider 2017 vs 2038 to probably have more difference or less difference than 1996 vs 2017?
If more difference, do you expect technology to accelerate faster in the next 21 years than the last 21 years, and why?
Distributed computing went from being "something that is possible" to "the default". Otherwise, all the standard resources such as processors, RAM, disk, and networking all got faster and cheaper. USB revolutionized plugging in peripherals. All the CRT displays are gone.
From the Linux command line point of view, you're more likely to use tools written with python, perl, or ruby, but the big change there is package managers. You don't download source or binaries from FTP sites as often as you did in the old days.
Even if they don't run the actual hardware, they're likely to be running the same software and OS on an emulator running inside modern hardware.
It's simply too expensive and risky to rewrite all the software on a new platform.
Not stupid, just naive