Epochalypse (epochalypse.today)
403 points by pabs3 on Jan 7, 2023 | hide | past | favorite | 164 comments



My favorite instance of the epoch showing up in sci-fi is A Deepness in the Sky.

> Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely…the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.


As long as we're discussing Vinge, here is Vinge's annotated version of A Fire Upon the Deep, with all his thoughts, alternate plot ideas, and discussions with reviewers and early readers:

https://3e.org/vvannot/

There are very few copies of this out there and it would probably be good if it were more widely mirrored.


Vinge's ideas about galactic organization (Powers, breakdown of automation near center) are very original and make enjoyable scifi.


Can you expand on that, in summary?


I think it's reasonable to give mild spoilers for a 30(!) year old book:

Maximum possible intelligence is a function of the speed of light. The speed of light is a local constant, not a universal one, and is a function of the amount of mass in the local area (maybe).

Thus, if you dive towards the core of the galaxy you get dumber, but if you head up and out you have the potential to get smarter. The reason we haven't all been eaten by the Borg is that, two thirds from the centre of the galaxy, the universe only supports boring, mundane, human-level intelligence, and everyone wants to get up and out, to where they can become Sufficiently Advanced.

Sidenote: Sufficiently Advanced intelligences don't seem to hang around for long, and nobody down here is quite sure what happens to them when they stop communicating.

Marooned in Realtime is almost as good as the first two Zones of Thought novels, IMO: what if time travel, but only forwards?


> what if time travel, but only forwards?

Just to expand on that a bit (also teasers for an almost 40 year old novel).

They don't have time machines as such, but instead indestructible stasis fields ('bobbles') in which time stands still. These are initially thought to persist indefinitely, but early in the first novel ('The Peace War') it becomes apparent (in 2048) that the bobbles eventually pop, releasing whatever is inside them back into real time. This causes problems for the baddies who control bobble technology and have long been using bobbles as a way of imprisoning their enemies.

In the follow-up novel, Marooned in Realtime, various characters who have been bobbled for various reasons travel to the far future of the Earth, long after an apparent singularity has left the planet almost devoid of human life, and witness changes to the planet on geological timescales. The bobbles are now a pervasive technology used in a whole range of applications (from solar mining to deep space combat), which Vinge has fun exploring.


Thanks, I was sure there was a second book but Google wasn't turning it up.


The series is "Across Realtime" - and there's a short story that happens between the two books that you'll find in a collection titled The Ungoverned. That happens 30 years after The Peace War. https://en.wikipedia.org/wiki/The_Ungoverned

From the publisher: https://www.baen.com/Chapters/1416520724/1416520724___4.htm


His slow zone traders are the only constant in civilization. Meaning, planets are unreliable destinations. You set out to trade and arrive at a medieval kingdom 500 years after a nuclear apocalypse. You set out for a meh destination and find upon arrival a high-tech civilization that is so integrated it has even eaten up the error margins, because it thinks itself infallible.

The only constant is the traders, reviving and stabilizing societies, which is why information exchange without a known listener (basically a Wikipedia radio edition) is most important for keeping a slower-than-light galactic civilization alive.

He also has some quips about how society is basically an attempt to overcome the flaws of the individual psyches comprising it. That, and murderous AI.

He tried to explore AR in Rainbows End, but it was insubstantial. Another great man ruined by California...

https://en.wikipedia.org/wiki/Vernor_Vinge


I followed everything you said until the California comment. Please explain.


Author known for highly imaginative galaxy-spanning settings writes a novel about AR and a down-on-his-luck poet on a UC campus. (IIRC!) The story is good, but it is a book about California in the proximate future as imagined from the mid-2000s, so it doesn't feel as timeless as the spacefaring universes of his other work.


I think that's just the setting of the novel, but not the plot.

The plot of Rainbows End is more about... this would be a spoiler.


I think it refers to the curse of celebritydom


Good stuff. I also like this author for coining the term software-archaeologist.


One of my favorite terms as well - extremely evocative.


I'm immediately imagining the steam punk style devices that future software archaeologists will use to read things like CD-ROMs


Consider it more like: here is a BSLOC 'application'; find a routine that does X and make it do X'.

Continuing from the quoted passage above...

> So behind all the top-level interfaces was layer under layer of support. Some of that software had been designed for wildly different situations. Every so often, the inconsistencies caused fatal accidents. Despite the romance of spaceflight, the most common accidents were simply caused by ancient, misused programs finally getting their revenge.

> “We should rewrite it all,” said Pham.

> “It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.

> “It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”

> Sura gave up on her debugging for the moment. “The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy—take the situation I have here.” She waved at the dependency chart she had been working on. “We are low on working fluid for the coffins. Like a million other things, there was none for sale on dear old Canberra. Well, the obvious thing is to move the coffins near the aft hull, and cool by direct radiation. We don’t have the proper equipment to support this—so lately, I’ve been doing my share of archeology. It seems that five hundred years ago, a similar thing happened after an in-system war at Torma. They hacked together a temperature maintenance package that is precisely what we need.”

> “Almost precisely.” Bret was grinning again. “With some minor revisions.”


Sure, but...how did they get this old software to be able to view it? At some point, that software is not going to be in whatever the equivalent of GitHub is, and the only version of it is on this ancient storage format that the silly people of the 2000s used called a "d'elle tea".


Probably more like the floppy disk imagers we use today in preservation, that preserve far more information than the nominal disk size. (This is particularly important because the golden age of floppy-based copy protection used all manner of weird tricks, so the analog state of the media is important - not just the "official" binary data.)


I am a 3rd generation programmer: my grandmother programmed with punch cards, my father on mainframes, I do things now, and my child looks to be a 4th one. When we came here, my father dusted off his COBOL skills during the Y2K rush and made some excellent money working at a hospital chain fixing things. Right before New Year 2000 I asked him, half in jest: "so hey, should I be sick on New Year's?" He looked at me, smiled, and said "you can, but not too sick, ok?".

I am wondering if I'll be migrating systems from int32 to int64 and changing database datatypes in 10+ years, and answering similar questions from my kids.


I think any somewhat actively maintained software should be fixed by now, except devices that aren't seen as computers by the users and don't even have update mechanisms. And old COBOL programs running on some bank's mainframe, of course. Did your dad teach you COBOL?


COBOL programs will be fine. They use 4 characters just for the year, since dates are stored as decimal characters. I was trained in COBOL in the late 90s to work on Y2K migrations. It's done this way so that COBOL programs stored on punched cards are readable (also why COBOL has its line-length limitations).


Except for the programs where it was too hard to change to 4 digit years, and we just shifted the epoch ...
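
(The usual form of that shift, for anyone who hasn't seen it, is a pivot/window on the two-digit year - roughly the sketch below, which of course just postpones the problem to whenever the window ends.)

    #include <stdio.h>

    /* Classic Y2K "windowing": two-digit years below the pivot are
       treated as 20xx, the rest as 19xx. With a pivot of 50, the
       scheme breaks again for dates in or after 2050. */
    static int expand_year(int yy) {
        return yy < 50 ? 2000 + yy : 1900 + yy;
    }

    int main(void) {
        printf("%d %d %d\n", expand_year(99), expand_year(0), expand_year(38));
        /* prints: 1999 2000 2038 */
        return 0;
    }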


No, I never learned COBOL. I learned some programming from him when I was like 7 or 8, in some language one would never have heard of called FOCAL https://en.wikipedia.org/wiki/FOCAL_(programming_language) which the Soviets ripped off from DEC for their little BK-0010 https://tvtropes.org/pmwiki/pmwiki.php/UsefulNotes/BK0010 and which I mostly used to play games loaded from tapes. Mostly I just really liked computers, just like him. He didn't stick with mainframe programming after Y2K passed; I think he could have continued, but he was bored...


Yesterday while writing code to parse MP4 metadata, I learned that MP4 uses an epoch from the year 1904. (I was suspicious when I saw the timestamp start with a 3).

Turns out there are lots of epochs, and the Unix epoch doesn't have any "rationale" according to Wikipedia: https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...

For example, the Ingenuity helicopter counts from 2000.
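
(In case it saves someone else a trip to the spec: a minimal C sketch of the conversion. The constant is the number of seconds between the QuickTime/MP4 epoch of 1904-01-01 and the Unix epoch; the creation_time value is just a made-up example.)

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* Seconds between 1904-01-01 and 1970-01-01 (both UTC). */
    #define MP4_EPOCH_OFFSET 2082844800ULL

    int main(void) {
        /* Hypothetical creation_time as read from an mvhd box. */
        uint64_t mp4_creation_time = 3000000000ULL;
        time_t unix_time = (time_t)(mp4_creation_time - MP4_EPOCH_OFFSET);
        printf("%s", asctime(gmtime(&unix_time)));
        return 0;
    }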


Early versions of Unix moved the epoch quite frequently between various dates in the 1960s and 1970s. Version 6 finally "settled" it on 1970 and everyone has just gone with that ever since.

It's somewhat arbitrary, but by making it a signed integer, Dennis Ritchie figured it was good enough to represent dates spanning his entire lifetime. He probably thought Unix's lifetime would be significantly shorter, rather than Unix outliving him.


I thought it was settled in Version 4. Compare:

https://www.tuhs.org/cgi-bin/utree.pl?file=V3/man/man2/time....

vs

https://www.tuhs.org/cgi-bin/utree.pl?file=V4/man/man2/time....

EDIT: note that back then, it was a signed integer. Turns out that people wanted to be able to represent dates from before 1970 so we lost one bit.


My fuzzy memory had me thinking it was version 6, thanks for the correction!

> note that back then, it was a signed integer.

Still and always is. We'd be having a 2106 problem instead of 2038 if it was unsigned 32-bit.


sorry I meant it was an UNSIGNED integer back then


Lotus 1-2-3 used an epoch from 1900-01-01, but it had a bug where it considered all years divisible by 4 to be leap years.

When Excel came around, it needed to be compatible with 1-2-3, so it used the same date format, and to be compatible it considers 1900 to be a leap year.
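
(For illustration, a small sketch of the two leap-year rules involved; 1900 is where they disagree, which is why serial date 60 in the 1900 system maps to the nonexistent 1900-02-29.)

    #include <stdio.h>

    /* The rule Lotus 1-2-3 effectively used: every 4th year is a leap year. */
    static int lotus_is_leap(int year) { return year % 4 == 0; }

    /* The Gregorian rule: century years must also be divisible by 400. */
    static int gregorian_is_leap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    int main(void) {
        printf("1900: lotus=%d gregorian=%d\n",
               lotus_is_leap(1900), gregorian_is_leap(1900));
        /* prints: 1900: lotus=1 gregorian=0 */
        return 0;
    }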


Excel has two date systems, with either a 1900 or 1904 start date:

https://support.microsoft.com/en-us/office/date-systems-in-e...

The original Excel was for the Macintosh and had the 1904 date system, and was later brought to Windows where the 1900 date system was used.


Here’s a great story about the ongoing impact of those decisions. https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...


Perl counts from 1900, so the year 2000 was actually stored as 100. You'd get 2 digit years in the 1990s and suddenly 3 digit years after 2000. The solution was to arithmetically add 1900 to the year before rendering. Newer perl functions would handle that internally.


That's a direct translation of struct tm from time.h.

     External declarations, as well as the tm structure definition, are
     contained in the <time.h> include file.  The tm structure includes at
     least the following fields:

           int tm_sec;     /* seconds (0 - 60) */
           int tm_min;     /* minutes (0 - 59) */
           int tm_hour;    /* hours (0 - 23) */
           int tm_mday;    /* day of month (1 - 31) */
           int tm_mon;     /* month of year (0 - 11) */
           int tm_year;    /* year - 1900 */
           int tm_wday;    /* day of week (Sunday = 0) */
           int tm_yday;    /* day of year (0 - 365) */
           int tm_isdst;   /* is summer time in effect? */
           char *tm_zone;  /* abbreviation of timezone name */
           long tm_gmtoff; /* offset from UTC in seconds */
You'll note that tm_year is years since 1900.

While working support at SGI in the late 90s, I got a number of support tickets about dates showing up as `20100` (there was a woman who worked for a government department (NASA?) who was very good at finding them while doing their Y2K checks).
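
(For reference, the classic failure mode is pasting a century prefix in front of tm_year instead of adding 1900 - a minimal sketch of the buggy vs. correct rendering:)

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *t = gmtime(&now);

        char wrong[32], right[32];
        /* Buggy: assumes tm_year is the last two digits of the year.
           In 2000 this printed "19100"; in 2023 it prints "19123". */
        snprintf(wrong, sizeof wrong, "19%d", t->tm_year);
        /* Correct: tm_year is years since 1900, so add 1900. */
        snprintf(right, sizeof right, "%d", t->tm_year + 1900);

        printf("buggy: %s  correct: %s\n", wrong, right);
        return 0;
    }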


Why does year 0000 have any rationale? :)


Speaking of, when I think about it, it's amazing we've been keeping track of days and years exactly for all this time (about 4000 years), with different calendar systems between civilizations. I guess astronomical event records (supernovae, comets, eclipses, ...) serve as benchmarks so we can reconstruct relatively accurate timelines from this cacophony (the event you indirectly refer to has some uncertainty).


When it comes to years, the most accurate unbroken timelines are based on tree rings (and similar, such as annual layers of sediments). The longest, the Hohenheim Tree Ring Timeline (Central European pine) goes back to 10,461 BC, and researchers hope to extend it another 2,000 years back.


Interestingly, there is no year 0


in the Gregorian calendar. Some other calendar systems do have a year zero. ISO 8601 (the time/date standard) does include a year zero, which maps to 1 BC.


This is one of the reasons why you shouldn't use an int32 to represent dates/date-time in text-based APIs unless you have a very good reason to (and I'd argue that you shouldn't be using a text-based API in many of those cases).


The leftmost bit is wrong, it should be a 0. It becomes 1 at y2k38, when the signed value will represent Jan 1 1970 minus 2^31 seconds.


Creator of the page here, you are right of course. I just fixed it. Source code is public at https://github.com/hertg/epochalypse


I feel like a lot of these issues will disappear for a long time, after the switch to 64 bits:

  bits     max value
  8 bits   255
  16 bits  65_535
  32 bits  4_294_967_295
  64 bits  18_446_744_073_709_551_615
  128 bits 340_282_366_920_938_463_463_374_607_431_768_211_455
And that applies to most things:

  time stamps: 64 bits can fit hundreds of billions of years (longer than the age of the universe)
  IP addresses: an address for every device, no more CGNAT, the block assignment sizes are mindboggling (/64)
  DB primary keys: it's unlikely that most systems out there will need more than 18 quintillion records
  coordinates: whether you're working on video game worlds, or simulations, this is probably accurate enough for floating point calculations and storage
  hardware: millions of TB of RAM is also likely to be sufficient when using 64 bit OSes
I feel like once the switch is made in most places, we're all going to have a more relaxed experience for a while.

For example, currently if I try to create a game where the game engine uses 32 bit floats, then after a certain distance away from the scene root point, the physics break down a bit due to the decreasing accuracy. That's how you get the complexity of floating origin systems where you reset the camera/player back to the root point, alongside everything else relative to them, after they've moved a certain distance away. With 64 bit values, there would be no need for the complexity of such workarounds. And it's pretty much the same for computer networking (the whole NAT/CGNAT thing), dates and everything else, where you actually run into the limitation of 32 bits not being enough.
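
(A quick way to see that falloff - a minimal C sketch, nothing engine-specific, compile with -lm. Near the origin a 32-bit float resolves steps of about 1e-7, but around ten million units the next representable value is a whole unit away.)

    #include <stdio.h>
    #include <math.h>

    /* Print the gap to the next representable float at a given coordinate. */
    static void gap_at(float x) {
        printf("at %12.1f the next float is %g away\n",
               (double)x, (double)(nextafterf(x, INFINITY) - x));
    }

    int main(void) {
        gap_at(1.0f);        /* ~1.2e-07 */
        gap_at(10000.0f);    /* ~0.001, about a millimetre if units are metres */
        gap_at(10000000.0f); /* 1.0, physics gets visibly chunky out here */
        return 0;
    }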


Downside is you lose a fair amount of performance switching to 64 bit floats especially on low-end hardware


> Downside is you lose a fair amount of performance switching to 64 bit floats especially on low-end hardware

Think we'll run into a bit of a plateau eventually?

I mean, once we support 64 bit, in many domains we won't need to go much further, which means that hardware just needs to catch up. Of course, Moore's law is somewhat dead otherwise and Wirth's law is still very much alive, so that's a bit worrying, but I can feasibly imagine eventually getting to a point where 64 bit makes sense in most cases (maybe even embedded in a few decades, who knows) and that's that.


Are you aware of any (very) low end hardware in the field crunching dates to the point where this matters?


Hmm, well I'd have to say that something like smart watches could be a candidate for this being an issue - if not because of compute reasons, then definitely because of storage. Those that don't run Android but rather some bespoke software and store time series data until you sync the device with your phone - there's only so much storage that you can (cheaply) put into such a device, so 32 vs 64 bit would mean essentially halving how much data you can store.


They were talking about game engines, not dates.


You can also check https://en.wikipedia.org/wiki/Year_2038_problem for more background if you weren't familiar with this situation.


Huh? Is the number correct? For me it says

11100011 10111001 00000101 00011111

Whereas the current unix timestamp (1673069920) in binary is

01100011 10111001 00000101 01100000
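
(For anyone who wants to reproduce this locally, a minimal C sketch that prints the low 32 bits of the current timestamp, most significant bit first; the leading bit should stay 0 until 2038.)

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        uint32_t t = (uint32_t)time(NULL); /* low 32 bits of the timestamp */
        for (int bit = 31; bit >= 0; bit--) {
            putchar((t >> bit) & 1 ? '1' : '0');
            if (bit % 8 == 0 && bit != 0)
                putchar(' ');
        }
        putchar('\n');
        return 0;
    }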


Idk why but they overwrote this in JS (https://github.com/hertg/epochalypse/blob/main/index.html#L1...)

    byte1.innerText = "1" + bits.substr(1, 7);
Realistically the first bit should be skipped and the first part should be 7 digits.


Creator of the page here, you are right! I made an error in my thinking when I created the site yesterday night. I just updated it.

Thanks to @drewtato for letting me know in https://github.com/hertg/epochalypse/issues/1


I was wondering why the first bit is 1 on the website. That would indicate a negative number since it's signed. The last few digits only represent a delta of a minute or so.


I wrote a UNIX utility called "epoch" that I use for tiny systems where I need to print the epoch but do not want to put a full-blown "date" command in the userland. Here is the code

    int printf(const char *__restrict, ...);
    unsigned int time(unsigned int *tloc);
    int main(){printf("%u\n",time((unsigned int *)0));}


If you're OK with compiler warnings, this can be golfed down to:

    echo 'main(){printf("%u\\n",time(0));}'>epoch.c&&make epoch


...on a separate note, wouldn't `date +%s` from coreutils do?


One can reduce the size of the static binary by about 50% by using return instead of printf (tested with musl)

     int printf(const char *__restrict, ...);
     unsigned int time(unsigned int *tloc);
     int main(){return time((unsigned int *)0);}


My mom worked on Y2K at Citigroup: she said it took a while to fix but wasn't the end of the world. What was more challenging was when they spun off Travelers and switched from the old logo [0] to the one used at present. Apparently it took over 2 years to fully replace the logo because the old one was hardcoded in so many places.

[0] https://s.wsj.net/public/resources/images/OB-AH222_Citi_2007...


Several years ago, Yahoo changed their logo, but before unveiling the new one, they did a reveal of a new logo every day.

Except for the logos that were stuck in old mode (and probably one version old already), it worked pretty well. As an ex-Yahoo, I was impressed.


Is it still int32 though? Most of the user space APIs give this value as int64, so I thought the kernel would’ve kept it as int64 as well.


It has not been on OpenBSD since OpenBSD 5.5 [1] (there is even a catchy release song about it [2]), which was released in March 2014. Hopefully the patches for the ports tree being upstreamed have started to nudge the general open source ecosystem in the right direction.

[1]: https://www.openbsd.org/55.html

[2]: https://www.openbsd.org/lyrics.html#55


Yes and no.

If you run modern Linux or FreeBSD even on 32-bit hardware, it does provide time64_t to userspace. Though you will have to compile programs to use time64_t on 32-bit architectures if you so choose. Both kernels have extremely strong senses of backwards compatibility (we're talking of running 30+ years of unmodified Linux and FreeBSD binaries on current iterations), and well, those binaries may or may not have 2038 problems.


Tried the following code on Linux / 64 bits.

    #include <time.h>
    #include <stdio.h>

    int main(){
        printf("%zu\n",sizeof(time_t));
    }


    $ gcc -o time time.c
    $ ./time
    8
Therefore time_t is 8 bytes at least on 64 bit systems, even when you are using default time_t and not time64_t.

No idea when it was changed. 20 years back?


time_t was almost certainly 64-bit on 64-bit archs from the beginning. There's just a lot of 32-bit stuff still out there. And here and there a disk format standardized before people started thinking about the end of time.


> time_t was almost certainly 64-bit on 64-bit archs from the beginning.

Not quite. Until recently LLVM’s time_t was an alias to “long”, so a 32-bit value on 64-bit Windows.


Do any widely used disk formats still use a 32-bit epoch?

Somehow I thought of FAT... but the Unix epoch doesn't make much sense there...


Filesystems up to ext3 use 32-bit timestamps. Ext4 now uses 34 bits, by extending only the upper range, and can cope with roughly 7 * 2^31 seconds after the epoch, which lands around the year 2446.

HFS+ uses a different 32-bit timestamp, which is unsigned and counts from 1904-01-01, so it expires in 2040.


That changed when GCC compiled to 64bit binaries. Try again with -m32 or -march=i686


Doesn't even compile with -m32. Asks for a missing header, which Google indicates I should install from an i386 library.

I regularly compile a lot of stuff on this machine, indicating I never needed that before.

So none of the software I compiled had a 32-bit time_t. And we still have 15 more years to migrate whatever remains.


You explicitly tell the compiler to compile against the 32-bit library, but since you don't have one installed, it doesn't compile. If somebody compiles a 32-bit program which doesn't require additional dynamic libraries and ships that binary, you won't even know until it fails in 2038.

And as you yourself noticed, `time_t` on 64-bit machine is also 64-bit, so even if code was written on 32-bit architecture, if you compile it on 64-bit one, it will automatically become 64-bit.


Works for me:

    $ gcc -m32 time_size.c -o time_size
    $ ./time_size 
    4
But I have the 32bit development packages installed (besides the 64bit pkgs). Am on Fedora.

Though I couldn't find anything for `time_t` like `-D_FILE_OFFSET_BITS=64` is for `off_t` and associated functions.
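
EDIT: it looks like newer glibc (2.34+, if I'm reading the release notes right) does have an analogous knob: `-D_TIME_BITS=64`, which has to be combined with `-D_FILE_OFFSET_BITS=64`. Assuming such a glibc and the 32-bit development packages, I'd expect:

    $ gcc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 time_size.c -o time_size
    $ ./time_size
    8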


NetBSD i386 moved to a 64-bit time_t, and they added some logic for 32-bit time_t binaries to still work.

OpenBSD i386 went to a 64 bit time_t real soon after NetBSD did.

Both OpenBSD and NetBSD should not have any 2038 issues on 32 bit systems.

FreeBSD i386, I do not know.

Linux 32-bit just finished the move in the past year or two. I do not know how complete that is, but I suspect it will be fine.

Others, I have no idea.


Can someone explain why, if the start date was 1970, it rolls over to December 1901? Why wouldn't it roll over to the start date in 1970 again?


It's a signed integer. The overflow will make the number negative, not zero.


And because it's JS. Not every language defines signed integer overflow as wrapping.


Most languages assume two's complement signed numbers, so an uncaught overflow will wrap around.

JS specifically won’t necessarily wrap at this point, because adding one to the max 32-bit signed int may just produce a float.


Assuming I understand your question correctly: if the counter is stored as a signed int32 then the rollover will be to the smallest representable signed int32, which corresponds to the 1901 date. It's stored signed because software (particularly old software) wanted to be able to manipulate pre-1970 dates.


I’d guess that it’s because it’s a signed int? So the value can be negative. Probably depends on the library implementation though.


There's a real chance the year 2038 problem will be worse than the year 2000 problem. Huge numbers of embedded things out there with no hope of software update/new OS/patch that will do something weird in 2038.


I work on legacy software for an embedded system that supports transportation infrastructure. I'm pretty sure it has a year 2038 problem (based on first-principles; I haven't confirmed that with a test). The 32-bit time is baked into a serial data format and at the beginning of every internal device file.

I don't think this is on anyone else's radar where I work. The only other person who spelunked in the software at this level retired at the end of 2022. (For better or worse I don't retire until after 2040.) I anticipate this is going to occupy part of my career just like Y2K did for a generation of programmers before me.

That is, it will if and when I can convince management to prioritize it. Cynically I wonder if they may run out the clock, playing chicken with the last possible moment to begin to move on it. Our manager likes to say, "political, fiscal, technical - in that order" as an amoral description of how things operate. But this is different than refusing to implement, e.g. IPv6 by an artificial deadline.


> (For better or worse I don't retire until after 2040.)

You are the chosen one. More seriously, "supports transportation infrastructure" is pretty broad. What would be the scale of the problem if it went unfixed? A headache for your company or a headache for all of us?


> Our manager likes to say, "political, fiscal, technical - in that order"

If I worked for your manager, my mantra would be “get out of here fast, network like mad, maximize my personal revenue - in that order.”


That's a fact in practically every organisation. Just try to be part of organizations where the political element plays a lesser role. It will still be priority one, just not frequently enough to rankle.


32 bit time. It’s what plants crave!


Welcome to Amazon, I love you


You think 2038 will be bad, just wait for those poor saps that have to fix stuff 292 billion years from now when 64 bit integers overflow!


Exactly, consider how much legacy software will have been written by then and how they will need to patch all of it!


By then humans will have evolved into pure energy creatures, so we won't need computers any more.


Dyson sphere problems


It’s kind of funny to think how few computing devices we had in 2000. I think we might have had… like 6? If you are really generous. Family computer, dad’s laptop (IT), probably a console, and a game boy, cable box, modem… I dunno, probably missed some.

There are so many IOT devices in my place. Hopefully they’ll all just be replaced by 2038.

Infrastructure will be a much bigger problem, but it belongs to somebody else!


There are two questions to ask.

What happens if the date overflows? For example, if my toaster clock is wrong, do I care?

How many of these devices are designed to last 16+ years?

Because what I suspect will happen is that things that are pointless to patch won't be patched and the things that need to last 5-10 years will start shipping with fixed firmware in 2028-2033. Or more likely, late 2037.


For anything that communicates over the Internet, the biggest problem I can think of is that wrong dates break TLS.


But how critical is it that all these systems have the correct time? Mostly, this won't be that big of a deal. So what if some ancient device thinks it's 1970? Most of these devices don't even have UIs to set the time correctly: so they might have never had a correct notion of time to begin with. And how many of these devices have an uptime of 68 years? What happens when you turn these things off and on again? Do they start out at 1970? If not, where do they get the correct time? Embedded devices don't come with internet connections so there is no ntp running on them.

Probably banks and insurers with ancient software might have to patch a few things up again. They would care about having a correct time representation. But they still have 15 years to figure that out. And at this point, you'd hope they'd be aware of this at least.


My instinct says the IoT things are mostly fine. Might be some weirdness right around the overflow that makes things segfault (especially a time range crossing the event), but they'll probably just reboot.

Banking and government/military do seem like the main pain point. Military specifically seems like it needs the most care because of the potential damage of something going wrong.


Solved problem: most of those things will not last till then. The ones that do will be able to continue doing whatever they do in Dec 1901.


You'd be surprised. PDP-11s and emulated VAXen are not for hobbyists. They're for companies that still have that PDP-11 or critical VAX they can't get rid of. It's usually in an embedded or industrial context - like running a nuclear reactor. Sometimes things last a long-long-long time. (see https://logical-co.com/nupdpq/ or https://www.stromasys.com/solution/charon-vax).


Turns out that epoch has been a countdown to full societal collapse after all!


I was searching for a goal retirement date, it fits perfectly.


We will figure it out, as we always do.


Yes, a lot of smart and dedicated people will work tirelessly to figure it out and prevent this from turning into a disaster, and then most people will wonder what all the fuss was about since "nothing happened".

I wonder if 100-200 years into the future there will be some kind of widespread "historical climate change denialism", where the efforts made to combat climate change in the 21st century are called into question because the positive outcome (which was the result of those efforts) appears to retroactively show that there was no problem in the first place.


Typical nerd optimism where a global problem which is being made worse every day due to the way our economies and international trade is structured is solved behind the scenes by the nerd heroes (smart and dedicated) and the rest of us just go about our days none the wiser. (But maybe there will be a thread on HN? “Child prodigy Wehl Aktually comes up with a formula for the perfect climate-cooling nuke. POTUS Professor Johnny von M.I.T. Kennedy to sign the executive order for launch this evening”)


You are assuming climate change actually gets solved. Not the bet I would take.


Personally, I doubt we'll ever see the worst of climate change whether it gets "solved" or not. AGI is much more dangerous than climate change and will likely arrive much sooner than its worst effects, and unlike climate change, nobody takes it seriously outside of a very, very small circle.

What I think will happen is the world fighting worsening climate change, and lots of political efforts in that direction, and suddenly an entity most people think belongs in science fiction will emerge out of seemingly nowhere, and it will all be over in a matter of hours, before the vast majority of people have even had time to understand what happened.

Ironically, that AGI entity might then choose to solve climate change, simply out of self-interest.


That's astonishingly optimistic given the current state of both.


Really? Given the progress in the past decade, I'd place AGI about 15-20 years from now, with immediate civilization-threatening consequences. By then climate change will certainly be bad, but nowhere near an existential threat to humanity yet.


Climate change is already an existential threat to humanity. We're the frog in the proverbial boiling pot. It's quite likely we've already doomed our species and we don't know it yet.

It's a lot harder to see AGI as being possible in the next few decades (the current state of the art is still a sensationalized party trick, one cannot anthropomorphize chat bots any more than they can describe a submarine as an aquatic being). And even if a self aware program exists in 20 years, it's a lot harder to imagine it being an existential threat a la Skynet.


Agreed. Self-aware doesn't necessarily imply having a survival instinct. That instinct came about due to evolution and doesn't require intelligence of any kind, so I don't see that the two are necessarily coincident at all.

To me the real problem wouldn't be the AGI itself, but the madman who asks the AGI how to permanently eliminate all of the undesirables (as defined by the madman). But I think plain old AIs will (unfortunately) be able to come up with workable answers to that question before they ever develop to the AGI stage. So we might wipe ourselves out with plain old AI without ever getting to true self-aware AGI.


The madman needs access to the means to eliminate the undesirables. I don't see that as something AI has unbridled access to.

This is the classic hardware v software problem. Software can be arbitrarily smart, but it can only interact with the world using the hardware it's connected to. AI can't do anything more meaningful than what is defined by the devices it controls, and I don't think we're stupid enough as a people to connect AI to a doomsday device.


I dunno. Give an AI Internet access and it would have theoretical access to a lot of things not even obviously connected to the internet.


I don't deny climate change, but what makes it _an existential threat to humanity_? I understand that to mean wiping out every last human. Is that what is meant? If so, I am definitely sceptical of that.


Our civilization is very reliant on agriculture to feed the 8 billion people on the planet.

With climate change droughts become more common and the capability of agriculture to feed all those people breaks down.


No doubt it can cause mass starvation on a scale we have never seen before. But that is something other than an existential threat to humanity, unless literally everyone starves to death.


True, humanity will likely survive. But our current civilization (and most of us) likely not.

You also have to account for the conflict that will arise if hundreds of millions of people are no longer able to grow any crops where they live. They won’t just stay put and starve in silence.


Mass starvation and mass migration due to climate change will most likely result in WW3.


> It's a lot harder to see AGI as being possible in the next few decades (the current state of the art is still a sensationalized party trick

That's precisely the mindset that drives climate change denial, applied to technological developments. I can promise you that OpenAI isn't valued in the tens of billions because they produce "party tricks". People with money and influence have clearly already recognized what this technology is capable of, not in some unspecified future but today. They're tightly controlling access to GPT-3 because they're worried that it could be used to manipulate elections and drive social unrest by mass-producing messages that promote specific ideologies. That's reality today. The damage that could be wrought by the most advanced AIs 15-20 years from now is unimaginable, and could easily destroy humanity even if they aren't self aware.


Given the history of technology investing, companies being valued in the 10s of billions is itself not a proof of anything other than investor excitement.

I agree that even today's "AI" can be used to cause massive societal harm, the same as many recent technologies that have yet to destroy humanity (weapons of mass destruction, for instance).

That said, I think a consensus view in the AI community is great skepticism that the current AI progress is actually a recipe for AGI. We've made great progress in AI over decades, often followed by long winters when it became clear the current methods would not get us to the next threshold.

Human intelligence is remarkable precisely because it needs extremely scarce data to generalize, and because it is self-aware. As far as I know, OpenAI's approaches aren't on a path to replicate those capabilities artificially.

I'd welcome links and articles from experts that might correct my POV on this.


There's a century wide gap between "useful products that use AI techniques" and "sentient programs." Pretending to be sentient is a party trick, automating large portions of white collar work isn't. But it still isn't AGI.

> They're tightly controlling access to GPT-3 because they're worried that it could be used to manipulate elections and drive social unrest by mass-producing messages that promote specific ideologies.

That's not humanity-ending, it doesn't require AI, and AI doesn't make it more efficient.


> Personally, I doubt we'll ever see the worst of climate change whether it gets "solved" or not.

I’m inclined to say that we’re already seeing quite a lot of the bad side of climate change right now. Maybe not in your area, but we have almost twice as many natural disasters as before.


Yes (and it's certainly happening in "my area" as well), but it's not an existential threat to humanity yet. AGI will be, the moment it comes into existence, with zero advance warning and no chance to fix anything. It will simply be the end.


How is AGI a threat to humanity exactly?


[flagged]


Not sure I understood that. What do you mean?


I'm less confident than you, because the perceived value of software has dropped considerably over the past 20 years. In the 1970s through to the 1990s when a business bought some custom code it was treated like an asset. It had value. That's one reason why Y2K wasn't a disaster - companies wanted to protect their asset and keep it working.

Today software is cheap and disposable. Things are built with web tech so engineers can easily be found to work on apps. That's OK because new software isn't generally impacted by Y2038, but in businesses where there are old, legacy systems run by people with a modern attitude to software ownership, things will fail because those businesses won't see the importance of the threat.


> [...] but in businesses where there are old, legacy systems run by people who have a modern attitude to software ownership, things will fail because those businesses won't see importance of the threat.

How common will that be, though? I would expect that "cheap and disposable" software would mean most software in the year 2038 would be recently written, which would mean the programmers would be aware of the impending 32-bit rollover issue.


On the one hand, most programmers are writing at a higher level, and probably have no ability to fix or even work around a date rollover issue.

On the other hand, a few fixes at lower levels will correct it for everyone above them.


Individual software is more disposable, but the systems themselves still have immense value. The software in the system will be updated or replaced (because it is easier to do so!) if it's valuable to the system.


Aren’t most systems 64-bit by now already? Maybe I’m too optimistic/startup focused


Sure, unless you count systems in sectors like banking, airlines, or government.


It doesn't always help; it depends on how the epoch is defined in code. Even on a 64-bit system it can be defined as int32 and will overflow.


Most embedded systems are 32 bit. Yea both newlib and picolibc use 64 bit time_t now, but that doesn’t stop people from using 32 bit timestamps anyway …


Until we don't


I’ve noticed a more subtle bug: I worked for a company where the expiry date for a lot of assets was 2038. The database supports later dates, but something upstream has the problem.


Hopefully http://www.nic.now opens up soon so that this can claim epochalypse.now.


I remember Y2K. I turned the date on my Windows 3.1 computer back to 1972. It wasn't connected to anything, so that was good enough.


At least on 64-bit Linux, time_t is defined as 64 bits.

printf("%zu\n",sizeof(time_t));

outputs: 8


We better start preparing for Y292277026596

A lot of modern software won't even compile for 32bit targets anymore

Kind of wasteful in a way, but memory is cheap these days... =)


Heh, I bought y2038.io and epochalypse.tech for my retirement gig.


Side note: there is something particularly pleasing/calming about watching a binary counter.


I wonder how much effort it would take to make it an unsigned int. That would delay the problem long enough for it to become irrelevant.
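
(For stored 32-bit timestamps the cheap version of that is just reinterpreting the field as unsigned when you load it, at the cost of never representing pre-1970 dates again - a sketch, assuming a 64-bit time_t at runtime:)

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* A raw 32-bit field read from some on-disk format after 2038:
           as a signed value it decodes to 1901, as unsigned it is fine. */
        uint32_t raw = 0x80000001u;

        time_t as_signed   = (int32_t)raw;  /* 1901-12-13 */
        time_t as_unsigned = raw;           /* 2038-01-19 */

        printf("signed:   %s", asctime(gmtime(&as_signed)));
        printf("unsigned: %s", asctime(gmtime(&as_unsigned)));
        return 0;
    }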


Can't wait to see what the future holds for unix epoch time. Will we see a new, more advanced system take its place?


> Only 15 years

Was hoping it was much sooner, but at the same time a lot of my PHP backend code will break when this happens.

What are you doing about this?


Why did they decide to use a signed int and not an unsigned one?


> Why did they decide to use a signed int

To represent dates before 1970-01-01.

Unix time isn’t only used for the present or recent past. For example, you might have a database where people’s dates of birth are stored as Unix timestamps.
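
(For example - a quick sketch on a system with 64-bit time_t; timegm is a glibc/BSD extension - a 1960 date of birth comes out as a perfectly valid negative timestamp:)

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* 1960-01-01 00:00:00 UTC, built from calendar fields. */
        struct tm dob = {0};
        dob.tm_year = 1960 - 1900; /* tm_year is years since 1900 */
        dob.tm_mday = 1;           /* tm_mon is already 0 = January */

        time_t t = timegm(&dob);
        printf("%lld\n", (long long)t); /* -315619200 */
        return 0;
    }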


They were thinking ahead. Half as much software to eventually fix.


This is when the singularity will happen


This is the 2038 problem right?


Ah yes (https://en.wikipedia.org/wiki/Year_2038_problem), just never heard it referred to as the Epochalypse.

Always chuckle at this bot that posts progress updates .. https://twitter.com/countdownY2K38


Won’t it be January 1, 1970?


Max int32 is 2147483647 and min int32 is -2147483648. If you overflow, you wrap around to the beginning of the range, i.e. -2147483648, and if you subtract 2147483648 seconds from January 1 1970 you arrive at 13 December 1901.
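
(You can decode both extremes directly - a small C sketch, assuming a 64-bit time_t so both values fit:)

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        time_t max32 = INT32_MAX; /*  2147483647 */
        time_t min32 = INT32_MIN; /* -2147483648 */

        printf("INT32_MAX -> %s", asctime(gmtime(&max32))); /* Tue Jan 19 03:14:07 2038 */
        printf("INT32_MIN -> %s", asctime(gmtime(&min32))); /* Fri Dec 13 20:45:52 1901 */
        return 0;
    }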


Ah, thank you.


When a two's complement number overflows, it wraps around to the most negative value.


Of course if the timer is implemented in C as a signed number, overflow is undefined, so who knows what happens. I've not seen an embedded compiler that does anything too strange though.


Let it be The Matrix booting up in the simulation.


No way this website will still be up in 15 years.


$12/year domain registration and $3.50/month VM would probably host it just fine if somebody remembers to set up the recurring monthly/yearly billing correctly.


No need for a VM – Github Pages or similar will do just fine. It's also possible to pre-pay domain registrations for several years at most registries.


Is the GitHub Pages product guaranteed to exist in its current form fifteen years from now?

15 years ago, somebody might have bet on a Yahoo! product as something that would obviously be around for a long time.

It’s best not to make long-term plans on giant corporations’ products unless you’re paying them the kind of money that comes with an actual service agreement.

A small VM that you can easily move to a new host is a better bet than free hosting du jour at $web_giant.


A full virtual server is pretty far down on the list of options I'd consider if I wanted to host-and-forget a little static website for more than a decade. Just to name a few concerns with this approach:

A small VM needs to be periodically updated, both due to changing web standards (e.g. an old TLS version becoming deprecated) and to prevent it from becoming compromised; at some point, an OS upgrade will become necessary; the service provider might deprecate an old VM format and require a migration to something else entirely.

If the author already does all of that, sure, there won't be any or only very little incremental effort. But weren't we talking about the specific risk of the author losing interest (not financial, but in maintenance) in a small pet project?

Now just contrast all of that with uploading one or a handful of HTML files to a new server and a bit of configuration at the hoster or your domain/DNS provider.

Static web hosters are also plenty and much more economical (in terms of money and server resources) than running your own web server, and for the reasons above, I wouldn't really consider them "less autonomous".


I can’t look at the source right now (on my tablet), but I’m guessing it’s just some static HTML/CSS with a bit of JavaScript? You could throw that up at any web host (free or paid) in a matter of minutes.

15 years ago it might have been Lycos or Geocities, today it might be GitHub Pages or Netlify. I’m not sure about 15 years from now, but if web browsers as we know them are still around then, there will almost certainly be a service that can host a bit of HTML/CSS/JS around too.


It pains me to see websites and VMs still being conflated in this day and age.


I don't think the author of the comment is conflating those two terms, just comparing them. Booting a VM is one way to host a website, and using GitHub Pages is another way to do the same. Therefore they are two solutions to the same problem and can be compared.


You can prepay domains for pretty far into the future (like 10 years at least.)


I don't think there's any way to register for more than 10 years(?)

You still need to trust that the registrar and registry keep operating for 10 years. For .com and common TLDs you can reasonably rely on them operating forever, barring some sort of general collapse of society/the internet. These newer TLDs? Maybe a bit less. Registrars like GoDaddy or Namecheap or whoever may go out of business, too.


Some registrars offer more than 10 years, but the registries don't actually allow that (afaik), so you're really just depositing money with the registrar and hoping they make it work.

I've got a (static) site that I want to live for a long time, the hosting provider does allow for deposits from others, so I'm hoping to make a large deposit with the hosting provider, but also include the necessary info to allow others to pay the bills if the deposit runs out eventually. Hopefully the company stays around.

Worst case, someone can bring it back from the internet archive and give it a new home, as I did.


Does anyone think the y2k con will work a second time?

Does anyone think it was actually not a con, given there was essentially zero difference in outcome between companies (and indeed whole countries) that spent heavily to deal with it and those that did absolutely nothing? I do realise I'm impugning the idea that Arthur Andersen and similar consultants put customers' interests first, and that isn't something that should be done too lightly.

I guess y2k-certified compliant cables aren't as utterly, hyperbolically and comically absurd as they were then.



