My favorite instance of the epoch showing up in sci-fi is A Deepness in the Sky.
> Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely…the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.
As long as we're discussing Vinge, here is Vinge's annotated version of A Fire Upon the Deep, with all his thoughts, alternate plot ideas, and discussions with reviewers and early readers:
I think it's reasonable to give mild spoilers for a 30(!) year old book:
Maximum possible intelligence is a function of the speed of light. The speed of light is a local constant, not a universal one, and is a function of the amount of mass in the local area (maybe).
Thus, if you dive towards the core of the galaxy you get dumber, but if you head up and out you have the potential to get smarter. The reason we haven't all been eaten by the Borg is that, two-thirds of the way out from the centre of the galaxy, the universe only supports boring, mundane, human-level intelligence, and everyone wants to get up and out, to where they can become Sufficiently Advanced.
Sidenote: Sufficiently Advanced intelligences don't seem to hang around for long, and nobody down here is quite sure what happens to them when they stop communicating.
Marooned in Realtime is almost as good as the first two Zones of Thought novels, IMO: what if time travel, but only forwards?
Just to expand on that a bit (also teasers for an almost 40 year old novel).
They don't have time machines as such, but instead indestructible stasis fields ('bobbles') in which time stands still. These are initially thought to persist indefinitely, but early in the first novel ('The Peace War') it becomes apparent (in 2048) that the bobbles eventually pop, releasing whatever is inside them back into real time. This causes problems for the baddies who control bobble technology and have long been using bobbles as a way of imprisoning their enemies.
In the follow-up novel, Marooned in Realtime, various characters who have been bobbled for various reasons travel to the far future of the Earth, long after an apparent singularity has left the planet almost devoid of human life, and witness changes to the planet on geological timescales. The bobbles are now a pervasive technology used in a whole range of applications (from solar mining to deep space combat), which Vinge has fun exploring.
The series is "across realtime" - and there's a short story that happens between the two books that you'll find in a collection titled The Ungoverned. That happens 30 years after The Peace War. https://en.wikipedia.org/wiki/The_Ungoverned
His Slow Zone traders are the only constant in civilization. Meaning, planets are unreliable destinations. You set out to trade and arrive at a medieval kingdom 500 years after a nuclear apocalypse. Or you set out for a meh destination and find upon arrival a high-tech civilization that is so integrated it's even eaten up the error margins, because it thinks itself infallible.
The only constant is the traders, reviving and stabilizing societies, which is why information exchange without a known listener (basically a Wikipedia radio edition) is the most important thing for keeping a slower-than-light galactic civilization alive.
He also has some quips about how society is basically an attempt to overcome the flaws of the individual psyches comprising it. That, and murderous AI.
He tried to explore AR in Rainbows End, but it was insubstantial. Another great man ruined by California...
Author known for highly imaginative galaxy-spanning settings writes a novel about AR and a down-on-his-luck poet on a UC campus. (IIRC!) The story is good, but it is a book about California in the proximate future as imagined from the 2010s, and so it doesn't feel as timeless as the spacefaring universes of his other work.
Consider more of: here is a BSLOC 'application'; find a routine that does X and make it do X'.
Continuing from the quoted passage above...
> So behind all the top-level interfaces was layer under layer of support. Some of that software had been designed for wildly different situations. Every so often, the inconsistencies caused fatal accidents. Despite the romance of spaceflight, the most common accidents were simply caused by ancient, misused programs finally getting their revenge.
> “We should rewrite it all,” said Pham.
> “It’s been done,” said Sura, not looking up. She was preparing to go off-Watch, and had spent the last four days trying to root a problem out of the coldsleep automation.
> “It’s been tried,” corrected Bret, just back from the freezers. “But even the top levels of fleet system code are enormous. You and a thousand of your friends would have to work for a century or so to reproduce it.” Trinli grinned evilly. “And guess what—even if you did, by the time you finished, you’d have your own set of inconsistencies. And you still wouldn’t be consistent with all the applications that might be needed now and then.”
> Sura gave up on her debugging for the moment. “The word for all this is ‘mature programming environment.’ Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy—take the situation I have here.” She waved at the dependency chart she had been working on. “We are low on working fluid for the coffins. Like a million other things, there was none for sale on dear old Canberra. Well, the obvious thing is to move the coffins near the aft hull, and cool by direct radiation. We don’t have the proper equipment to support this—so lately, I’ve been doing my share of archeology. It seems that five hundred years ago, a similar thing happened after an in-system war at Torma. They hacked together a temperature maintenance package that is precisely what we need.”
> “Almost precisely.” Bret was grinning again. “With some minor revisions.”
Sure, but...how did they get this old software to be able to view it? At some point, that software is not going to be in whatever the equivalent of GitHub is, and the only version of it is on this ancient storage format that the silly people of the 2000s used called a "d'elle tea".
Probably more like the floppy disk imagers we use today in preservation, that preserve far more information than the nominal disk size. (This is particularly important because the golden age of floppy-based copy protection used all manner of weird tricks, so the analog state of the media is important - not just the "official" binary data.)
I am a 3rd generation programmer, my grandmother programmed with punch cards, my father on mainframes, I do things now, and my child looks to be a 4th one. When we came here my father brushed off his COBOL skills during Y2K stuff and made some excellent money working at some hospital chain fixing things. Right before New Year 2000 I ask him half in jest - "so hey, should I be sick on New Year"? He looks at me, smiles, and says "you can, but not too sick, ok?".
I wonder if, in 10+ years, I'll be migrating systems from int32 to int64, changing database datatypes, and answering similar questions from my kids.
I think any somewhat actively maintained software should be fixed by now, except devices that aren't seen as computers by the users and don't even have update mechanisms. And old COBOL programs running on some bank's mainframe, of course. Did your dad teach you COBOL?
COBOL programs will be fine. They use 4 characters just for the year since it's stored as the decimal characters. I was trained in COBOL in the late 90s to work on Y2K migrations. It's done this way so that COBOL programs stored on punched cards are readable (also why COBOL has its line-length limitations).
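(Not COBOL, but a minimal C sketch of the same idea, with hypothetical field widths: a date kept as fixed-width decimal characters, the way a PIC 9(4) year is, has no binary counter to roll over in 2038.)

    #include <stdio.h>

    /* dates stored as fixed-width decimal characters, roughly what a COBOL
       record with PIC 9(4)/9(2)/9(2) date fields amounts to in memory */
    struct record_date {
        char year[4];    /* "1999", "2038", ... */
        char month[2];   /* "01".."12" */
        char day[2];     /* "01".."31" */
    };

    int main(void) {
        struct record_date d = { "2038", "01", "19" };
        printf("%.4s-%.2s-%.2s\n", d.year, d.month, d.day);  /* 2038-01-19 */
        return 0;
    }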
No, I never learned COBOL. I learned some programming from him when I was like 7 or 8, some language one would never have heard of called FOCAL https://en.wikipedia.org/wiki/FOCAL_(programming_language) which was ripped off from DEC by the Soviets for their little BK-0010 https://tvtropes.org/pmwiki/pmwiki.php/UsefulNotes/BK0010 and which I mostly used to play games loaded from tapes. Mostly I just really liked computers, just like him. He didn't stick with mainframe programming after Y2K passed, I think he could have continued but he was bored...
Yesterday while writing code to parse MP4 metadata, I learned that MP4 uses an epoch from the year 1904. (I was suspicious when I saw the timestamp start with a 3).
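If you're converting between the two by hand, the offset between the QuickTime/MP4 epoch (1904-01-01) and the Unix epoch (1970-01-01) is 2,082,844,800 seconds. A minimal sketch (the timestamp value here is made up, not from a real file):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define MP4_TO_UNIX_OFFSET 2082844800LL   /* seconds from 1904-01-01 to 1970-01-01 */

    int main(void) {
        uint32_t mp4_creation_time = 3786480000u;   /* hypothetical value read from an mvhd box */
        time_t unix_time = (time_t)((int64_t)mp4_creation_time - MP4_TO_UNIX_OFFSET);
        printf("unix time: %lld\n", (long long)unix_time);
        printf("UTC:       %s", asctime(gmtime(&unix_time)));
        return 0;
    }

(The 32-bit form of that field is unsigned, so it runs out around 2040 rather than 2038; newer files can use a 64-bit variant.)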
Early versions of Unix moved the epoch quite frequently between various dates in the 1960s and 1970s. Version 6 finally "settled" it on 1970 and everyone has just gone with that ever since.
It's somewhat arbitrary, but by making it a signed integer, Dennis Ritchie figured it was good enough to represent dates spanning his entire lifetime. He probably thought Unix's lifetime would be significantly shorter, rather than outliving him.
Lotus 1-2-3 used an epoch from 1900-01-01, but it had a bug where it considered all years divisible by 4 to be leap years.
When Excel came around, it needed to be compatible with 1-2-3, so it used the same date format, and to be compatible it considers 1900 to be a leap year.
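The rule 1-2-3 got wrong is the century exception: years divisible by 100 are only leap years if they are also divisible by 400, so 1900 was not a leap year (2000 was). A minimal check in C:

    #include <stdio.h>
    #include <stdbool.h>

    /* Gregorian leap-year rule */
    static bool is_leap_year(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }

    int main(void) {
        printf("1900: %s\n", is_leap_year(1900) ? "leap" : "not leap");  /* not leap */
        printf("2000: %s\n", is_leap_year(2000) ? "leap" : "not leap");  /* leap */
        return 0;
    }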
Perl counts from 1900, so the year 2000 was actually stored as 100. You'd get 2 digit years in the 1990s and suddenly 3 digit years after 2000. The solution was to arithmetically add 1900 to the year before rendering. Newer perl functions would handle that internally.
That's a direct translation of struct tm from time.h.
External declarations, as well as the tm structure definition, are
contained in the <time.h> include file. The tm structure includes at
least the following fields:
int tm_sec; /* seconds (0 - 60) */
int tm_min; /* minutes (0 - 59) */
int tm_hour; /* hours (0 - 23) */
int tm_mday; /* day of month (1 - 31) */
int tm_mon; /* month of year (0 - 11) */
int tm_year; /* year - 1900 */
int tm_wday; /* day of week (Sunday = 0) */
int tm_yday; /* day of year (0 - 365) */
int tm_isdst; /* is summer time in effect? */
char *tm_zone; /* abbreviation of timezone name */
long tm_gmtoff; /* offset from UTC in seconds */
You'll note that tm_year is years since 1900.
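Which is where the classic Y2K-era display bug came from: code that pasted a literal century in front of tm_year instead of adding 1900. A small demonstration:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *tm = localtime(&now);
        printf("wrong: 19%d\n", tm->tm_year);        /* prints e.g. 19124, the infamous "19100"-style bug */
        printf("right: %d\n", tm->tm_year + 1900);   /* prints the actual year */
        return 0;
    }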
While working support at SGI in the late 90s, I got a number of support tickets about dates showing up as `20100` (there was a woman who worked for a government department (NASA?) who was very good at finding them while doing their y2k checks).
Speaking of, when I think about it, it's amazing we've been keeping track of days and years exactly for all this time (about 4000 years), with different calendar systems between civilizations. I guess astronomical event records (supernovae, comets, eclipses,...) serve as benchmarks so we can reconstruct relatively accurate timelines (the event you indirectly refer to has some uncertainty) from this cacophony.
When it comes to years, the most accurate unbroken timelines are based on tree rings (and similar, such as annual layers of sediment). The longest, the Hohenheim Tree Ring Timeline (Central European pine), goes back to 10,461 BC, and researchers hope to extend it another 2,000 years back.
There is no year zero in the Gregorian calendar. Some other calendar systems do have a year zero. ISO 8601 (the date/time standard) does include a year zero, which maps to 1 BC.
This is one of the reasons why you shouldn't use an int32 to represent dates/date-time in text-based APIs unless you have a very good reason to (and I'd argue that you shouldn't be using a text-based API in many of those cases).
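A sketch of the alternative, assuming a C service emitting JSON: format the timestamp as an ISO 8601 / RFC 3339 string instead of a raw 32-bit integer, and let the consumer worry about its own internal representation.

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        char buf[sizeof "1970-01-01T00:00:00Z"];
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", gmtime(&now));
        printf("{\"created_at\": \"%s\"}\n", buf);   /* e.g. {"created_at": "2038-01-19T03:14:07Z"} */
        return 0;
    }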
I feel like a lot of these issues will disappear for a long time, after the switch to 64 bits:
    bits        max value
    8 bits      255
    16 bits     65_535
    32 bits     4_294_967_295
    64 bits     18_446_744_073_709_551_615
    128 bits    340_282_366_920_938_463_463_374_607_431_768_211_455
And that applies to most things:
time stamps: 64 bits can fit hundreds of billions of years (longer than the age of the universe)
IP addresses: an address for every device, no more CGNAT, the block assignment sizes are mindboggling (/64)
DB primary keys: it's unlikely that most systems out there will need more than 18 quintillion records
coordinates: whether you're working on video game worlds, or simulations, this is probably accurate enough for floating point calculations and storage
hardware: millions of TB of RAM is also likely to be sufficient when using 64 bit OSes
I feel like once the switch is made in most places, we're all going to have a more relaxed experience for a while.
For example, currently if I try to create a game where the game engine uses 32 bit floats, then after a certain distance away from the scene root point, the physics break down a bit due to the decreasing accuracy. That's how you get the complexity of floating origin systems where you reset the camera/player back to the root point, alongside everything else relative to them, after they've moved a certain distance away. With 64 bit values, there would be no need for the complexity of such workarounds. And it's pretty much the same for computer networking (the whole NAT/CGNAT thing), dates and everything else, where you actually run into the limitation of 32 bits not being enough.
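A rough sketch of what a floating-origin rebase looks like (not any particular engine's API; the threshold and types are made up):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    #define REBASE_DISTANCE 4096.0f   /* how far the player may drift before we recenter */

    /* Shift the player and every object back by the player's offset, so float
       precision near the action stays high while relative positions are unchanged. */
    static void rebase_world(Vec3 *player, Vec3 *objects, int count) {
        if (fabsf(player->x) < REBASE_DISTANCE &&
            fabsf(player->y) < REBASE_DISTANCE &&
            fabsf(player->z) < REBASE_DISTANCE)
            return;   /* still close enough to the origin */

        Vec3 offset = *player;
        player->x -= offset.x; player->y -= offset.y; player->z -= offset.z;
        for (int i = 0; i < count; i++) {
            objects[i].x -= offset.x;
            objects[i].y -= offset.y;
            objects[i].z -= offset.z;
        }
    }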
> Downside is you lose a fair amount of performance switching to 64 bit floats especially on low-end hardware
Think we'll run into a bit of a plateau eventually?
I mean, once we support 64 bit, in many domains we won't need to go much further, which means that hardware just needs to catch up. Of course, Moore's law is somewhat dead otherwise and Wirth's law is still very much alive, so that's a bit worrying, but I can feasibly imagine eventually getting to a point where 64 bit makes sense in most cases (maybe even embedded in a few decades, who knows) and that's that.
Hmm, well I'd have to say that something like smart watches could be a candidate for this being an issue - if not because of compute reasons, then definitely because of storage. Those that don't run Android but rather some bespoke software and store time series data until you sync the device with your phone - there's only so much storage that you can (cheaply) put into such a device, so 32 vs 64 bit would mean essentially halving how much data you can store.
I was wondering why the first bit is 1 on the website. That would indicate a negative number since it's signed. The last few digits only represent a delta of a minute or so.
I wrote a UNIX utility called "epoch" that I use for tiny systems where I need to print the epoch but do not want to put a full-blown "date" command in the userland. Here is the code
int printf(const char *__restrict, ...);             /* declared by hand so no headers are needed */
unsigned int time(unsigned int *tloc);               /* note: this prototype assumes a 32-bit time_t */
int main(){printf("%u\n",time((unsigned int *)0));}  /* print seconds since the epoch, unsigned */
My mom worked on Y2K at Citigroup: she said it took a while to fix but wasn't the end of the world. What was more challenging was when they spun off Travelers and switched from the old logo [0] to the one used at present. Apparently it took over 2 years to fully replace the logo because the old one was hardcoded in so many places.
It is not on OpenBSD since OpenBSD 5.5 [1] (there is even a catchy release song about it [2]) which was released in March 2014. Hopefully the patches for the ports tree being upstreamed have started to nudge the general open source ecosystem in the right direction.
If you run modern Linux or FreeBSD even on 32-bit hardware, it does provide time64_t to userspace. Though you will have to compile programs to use time64_t on 32-bit architectures if you so choose. Both kernels have extremely strong senses of backwards compatibility (we're talking of running 30+ years of unmodified Linux and FreeBSD binaries on current iterations), and well, those binaries may or may not have 2038 problems.
time_t was almost certainly 64-bit on 64-bit archs from the beginning. There's just a lot of 32-bit stuff still out there. And here and there a disk format standardized before people started thinking about the end of time.
Up through ext3, timestamps are 32-bit. Ext4 now uses 34 bits, extending only the upper range, and can cope with roughly 3.5 * 2^32 seconds after the epoch, which lands around the year 2446.
HFS+ uses a different 32-bit timestamp, which is unsigned and starts from 1904-01-01, so it expires in 2040.
You explicitly tell the compiler to compile against a 32-bit library, but since you don't have one installed, it doesn't compile. If somebody compiles a 32-bit program which doesn't require additional dynamic libraries and ships that binary, you won't even know until it fails in 2038.
And as you yourself noticed, `time_t` on a 64-bit machine is also 64-bit, so even if code was written on a 32-bit architecture, if you compile it on a 64-bit one, it will automatically become 64-bit.
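A quick way to see what a given toolchain hands you; on 32-bit glibc (2.34+) you would additionally build with -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 to opt into the 64-bit time_t:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* 8 bytes means a 64-bit time_t; 4 bytes means the 2038 rollover applies */
        printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
        return 0;
    }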
Can someone explain why, if the start date was 1970, it rolls over to December 1901? Why wouldn't it roll over to the start date in 1970 again?
Assuming I understand your question correctly: if the counter is stored as a signed int32 then the rollover will be to the smallest representable signed 32-bit value, which corresponds to the 1901 date. It's stored signed because software (particularly old software) wanted to be able to manipulate pre-1970 dates.
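A small demonstration of the wrap (assuming the usual two's-complement behaviour when narrowing): one second past the signed 32-bit maximum lands on the smallest representable value, which interpreted as a Unix time is late 1901.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        int64_t one_past_max = (int64_t)INT32_MAX + 1;   /* 2038-01-19 03:14:08 UTC */
        int32_t wrapped = (int32_t)one_past_max;         /* narrows to INT32_MIN on typical targets */
        time_t t = (time_t)wrapped;
        struct tm *tm = gmtime(&t);                      /* needs a gmtime that handles pre-1970 times */
        printf("wrapped counter: %d\n", wrapped);        /* -2147483648 */
        if (tm)
            printf("as a date: %s", asctime(tm));        /* Fri Dec 13 20:45:52 1901 */
        return 0;
    }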
There's a real chance the year 2038 problem will be worse than the year 2000 problem. Huge numbers of embedded things out there with no hope of software update/new OS/patch that will do something weird in 2038.
I work on legacy software for an embedded system that supports transportation infrastructure. I'm pretty sure it has a year 2038 problem (based on first-principles; I haven't confirmed that with a test). The 32-bit time is baked into a serial data format and at the beginning of every internal device file.
I don't think this is on anyone else's radar where I work. The only other person who spelunked in the software at this level retired at the end of 2022. (For better or worse I don't retire until after 2040.) I anticipate this is going to occupy part of my career just like Y2K did for a generation of programmers before me.
That is, it will if and when I can convince management to prioritize it. Cynically I wonder if they may run out the clock, playing chicken with the last possible moment to begin to move on it. Our manager likes to say, "political, fiscal, technical - in that order" as an amoral description of how things operate. But this is different than refusing to implement, e.g. IPv6 by an artificial deadline.
> (For better or worse I don't retire until after 2040.)
You are the chosen one. More seriously, "supports transportation infrastructure" is pretty broad. What would be the scale of the problem if it went unfixed? A headache for your company or a headache for all of us?
That's a fact in practically every organisation. Just try to be part of organizations where the political element plays a lesser role. It will still be priority one, just not frequently enough to rankle.
It’s kind of funny to think how few computing devices we had in 2000. I think we might have had… like 6? If you are really generous. Family computer, dad’s laptop (IT), probably a console, and a game boy, cable box, modem… I dunno, probably missed some.
There are so many IOT devices in my place. Hopefully they’ll all just be replaced by 2038.
Infrastructure will be a much bigger problem, but it belongs to somebody else!
What happens if the date overflows? For example, if my toaster clock is wrong, do I care?
How many of these devices are designed to last 16+ years?
Because what I suspect will happen is that things that are pointless to patch won't be patched and the things that need to last 5-10 years will start shipping with fixed firmware in 2028-2033. Or more likely, late 2037.
But how critical is it that all these systems have the correct time? Mostly, this won't be that big of a deal. So what if some ancient device thinks it's 1970? Most of these devices don't even have UIs to set the time correctly: so they might have never had a correct notion of time to begin with. And how many of these devices have an uptime of 68 years? What happens when you turn these things off and on again? Do they start out at 1970? If not, where do they get the correct time? Embedded devices don't come with internet connections so there is no ntp running on them.
Probably banks and insurers with ancient software might have to patch a few things up again. They would care about having a correct time representation. But they still have 15 years to figure that out. And at this point, you'd hope they'd be aware of this at least.
My instinct says the IoT things are mostly fine. Might be some weirdness right around the overflow that makes things segfault (especially a time range crossing the event), but they'll probably just reboot.
Banking and government/military do seem like the main pain point. Military specifically seems like it needs the most care because of the potential damage of something going wrong.
You'd be surprised. PDP-11s and emulated VAXen are not for hobbyists. They're for companies that still have that PDP-11 or critical VAX they can't get rid of. It's usually in an embedded context - like running a nuclear reactor. Sometimes things last a long-long-long time. (see https://logical-co.com/nupdpq/ or https://www.stromasys.com/solution/charon-vax).
Yes, a lot of smart and dedicated people will work tirelessly to figure it out and prevent this from turning into a disaster, and then most people will wonder what all the fuss was about since "nothing happened".
I wonder if 100-200 years into the future there will be some kind of widespread "historical climate change denialism", where the efforts made to combat climate change in the 21st century are called into question because the positive outcome (which was the result of those efforts) appears to retroactively show that there was no problem in the first place.
Typical nerd optimism where a global problem which is being made worse every day due to the way our economies and international trade is structured is solved behind the scenes by the nerd heroes (smart and dedicated) and the rest of us just go about our days none the wiser. (But maybe there will be a thread on HN? “Child prodigy Wehl Aktually comes up with a formula for the perfect climate-cooling nuke. POTUS Professor Johnny von M.I.T. Kennedy to sign the executive order for launch this evening”)
Personally, I doubt we'll ever see the worst of climate change whether it gets "solved" or not. AGI is much more dangerous than climate change and will likely arrive much sooner than its worst effects, and unlike climate change, nobody takes it seriously outside of a very, very small circle.
What I think will happen is the world fighting worsening climate change, and lots of political efforts in that direction, and suddenly an entity most people think belongs in science fiction will emerge out of seemingly nowhere, and it will all be over in a matter of hours, before the vast majority of people have even had time to understand what happened.
Ironically, that AGI entity might then choose to solve climate change, simply out of self-interest.
Really? Given the progress in the past decade, I'd place AGI about 15-20 years from now, with immediate civilization-threatening consequences. By then climate change will certainly be bad, but nowhere near an existential threat to humanity yet.
Climate change is already an existential threat to humanity. We're the frog in the proverbial boiling pot. It's quite likely we've already doomed our species and we don't know it yet.
It's a lot harder to see AGI as being possible in the next few decades (the current state of the art is still a sensationalized party trick, one cannot anthropomorphize chat bots any more than they can describe a submarine as an aquatic being). And even if a self aware program exists in 20 years, it's a lot harder to imagine it being an existential threat a la Skynet.
Agreed. Self-aware doesn't necessarily imply having a survival instinct. That instinct came about due to evolution and doesn't require intelligence of any kind, so I don't see that the two are necessarily coincident at all.
To me the real problem wouldn't be the AGI itself, but the madman who asks the AGI how to permanently eliminate all of the undesirables (as defined by the madman). But I think plain old AIs will (unfortunately) be able to come up with workable answers to that question before they ever develop to the AGI stage. So we might wipe ourselves out with plain old AI without ever getting to true self-aware AGI.
The madman needs access to the means to eliminate the undesirables. I don't see that as something AI has unbridled access to.
This is the classic hardware v software problem. Software can be arbitrarily smart, but it can only interact with the world using the hardware it's connected to. AI can't do anything more meaningful than what is defined by the devices it controls, and I don't think we're stupid enough as a people to connect AI to a doomsday device.
I don't deny climate change, but what makes it _an existential threat to humanity_? I understand that to mean wiping out every last human. Is that what is meant? If so, I am definitely sceptical of that.
No doubt it can cause mass starvation on a scale we have never seen before. But that is something other than an existential threat to humanity, unless literally everyone starves to death.
True, humanity will likely survive. But our current civilization (and most of us) likely not.
You also have to account for the conflict that will arise if hundreds of millions of people are no longer able to grow any crops where they live. They won't just stay put and starve in silence.
> It's a lot harder to see AGI as being possible in the next few decades (the current state of the art is still a sensationalized party trick
That's precisely the mindset that drives climate change denial, applied to technological developments. I can promise you that OpenAI isn't valued in the tens of billions because they produce "party tricks". People with money and influence have clearly already recognized what this technology is capable of, not in some unspecified future but today. They're tightly controlling access to GPT-3 because they're worried that it could be used to manipulate elections and drive social unrest by mass-producing messages that promote specific ideologies. That's reality today. The damage that could be wrought by the most advanced AIs 15-20 years from now is unimaginable, and could easily destroy humanity even if they aren't self aware.
Given the history of technology investing, companies being valued in the 10s of billions is itself not a proof of anything other than investor excitement.
I agree that even today's "AI" can be used to cause massive societal harm, the same as many recent technologies that have yet to destroy humanity (weapons of mass destruction, for instance).
That said, I think a consensus view in the AI community is great skepticism that the current AI progress is actually a recipe for AGI. We've made great progress in AI over decades, often followed by long winters when it became clear the current methods would not get us to the next threshold.
Human intelligence is remarkable precisely because it needs extremely scarce data to generalize, and because it is self-aware. As far as I know, OpenAI's approaches aren't on a path to replicate those capabilities artificially.
I'd welcome links and articles from experts that might correct my POV on this.
There's a century wide gap between "useful products that use AI techniques" and "sentient programs." Pretending to be sentient is a party trick, automating large portions of white collar work isn't. But it still isn't AGI.
> They're tightly controlling access to GPT-3 because they're worried that it could be used to manipulate elections and drive social unrest by mass-producing messages that promote specific ideologies.
That's not humanity-ending, it doesn't require AI, and AI doesn't make it more efficient.
> Personally, I doubt we'll ever see the worst of climate change whether it gets "solved" or not.
I’m inclined to say that we’re already seeing quite a lot bad about climate change right now. Maybe not in your area, but we have almost two times as many natural disasters as before.
Yes (and it's certainly happening in "my area" as well), but it's not an existential threat to humanity yet. AGI will be, the moment it comes into existence, with zero advance warning and no chance to fix anything. It will simply be the end.
I'm less confident than you, because the perceived value of software has dropped considerably over the past 20 years. In the 1970s through to the 1990s when a business bought some custom code it was treated like an asset. It had value. That's one reason why Y2K wasn't a disaster - companies wanted to protect their asset and keep it working.
Today software is cheap and disposable. Things are built with web tech so engineers can be found to work on apps easily. That's OK because new software isn't generally impacted by Y2038, but in businesses where there are old, legacy systems run by people who have a modern attitude to software ownership, things will fail because those businesses won't see the importance of the threat.
> [...] but in businesses where there are old, legacy systems run by people who have a modern attitude to software ownership, things will fail because those businesses won't see the importance of the threat.
How common will that be, though? I would expect that "cheap and disposable" software would mean most software in the year 2038 would be recently written, which would mean the programmers would be aware of the impending 32-bit rollover issue.
Individual software is more disposable, but the systems themselves still have immense value. The software in the system will be updated or replaced (because it is easier to do so!) if it’s valuable for the system.
Most embedded systems are 32 bit.
Yea both newlib and picolibc use 64 bit time_t now, but that doesn’t stop people from using 32 bit timestamps anyway …
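For code you do control, a compile-time guard is cheap. A minimal C11 sketch that fails the build if the toolchain's time_t can't make it past 2038:

    #include <assert.h>
    #include <time.h>

    static_assert(sizeof(time_t) >= 8, "time_t is not 64-bit; 2038 rollover applies");

    int main(void) { return 0; }   /* nothing to do at runtime; the check happens at compile time */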
I’ve noticed a more subtle bug: I worked for a company where the expiry date for a lot of assets was 2038. The database supports later dates, but something upstream has the problem.
Unix time isn’t only used for the present or recent past. For example, you might have a database where people’s dates of birth are stored as Unix timestamps.
Max int32 is 2147483647 and min int32 is -2147483648; if you overflow you wrap to the beginning of the range, i.e. -2147483648, and if you subtract 2147483648 seconds from January 1, 1970 you arrive at 13 December 1901.
Of course if the timer is implemented in C as a signed number, overflow is undefined, so who knows what happens. I've not seen an embedded compiler that does anything too strange though.
$12/year domain registration and $3.50/month VM would probably host it just fine if somebody remembers to set up the recurring monthly/yearly billing correctly.
Is the GitHub Pages product guaranteed to exist in its current form fifteen years from now?
15 years ago, somebody might have bet on a Yahoo! product as something that would obviously be around for a long time.
It’s best not to make long-term plans on giant corporations’ products unless you’re paying them the kind of money that comes with an actual service agreement.
A small VM that you can easily move to a new host is a better bet than free hosting du jour at $web_giant.
A full virtual server is pretty far down on the list of options I'd consider if I wanted to host-and-forget a little static website for more than a decade. Just to name a few concerns with this approach:
A small VM needs to be periodically updated, both due to changing web standards (e.g. an old TLS version becoming deprecated) and to prevent it from becoming compromised; at some point, an OS upgrade will become necessary; the service provider might deprecate an old VM format and require a migration to something else entirely.
If the author already does all of that, sure, there won't be any or only very little incremental effort. But weren't we talking about the specific risk of the author losing interest (not financial, but in maintenance) in a small pet project?
Now just contrast all of that with uploading one or a handful of HTML files to a new server and a bit of configuration at the hoster or your domain/DNS provider.
Static web hosters are also plentiful and much more economical (in terms of money and server resources) than running your own web server, and for the reasons above, I wouldn't really consider them "less autonomous".
I can’t look at the source right now (on my tablet), but I’m guessing it’s just some static HTML/CSS with a bit of JavaScript? You could throw that up at any web host (free or paid) in a matter of minutes.
15 years ago it might have been Lycos or Geocities, today it might be GitHub Pages or Netlify. I’m not sure about 15 years from now, but if web browsers as we know them are still around then, there will almost certainly be a service that can host a bit of HTML/CSS/JS around too.
I don't think the author of the comment is conflating those two terms, just comparing them. Booting a VM can be done to host a website, and using GitHub Pages can be done to do the same. Therefore they are two solutions to the same problem and can be compared.
I don't think there's any way to register for more than 10 years(?)
You still need to trust that the registrar and registry keep operating for 10 years. For .com and common TLDs you can reasonably rely on them operating forever, barring some sort of general collapse of society/the internet. These newer TLDs? Maybe a bit less. Registrars like GoDaddy or Namecheap or whoever may go out of business, too.
Some registrars offer more than 10 years, but the registries don't actually allow that (afaik), so you're really just depositing money with the registrar and hoping they make it work.
I've got a (static) site that I want to live for a long time, the hosting provider does allow for deposits from others, so I'm hoping to make a large deposit with the hosting provider, but also include the necessary info to allow others to pay the bills if the deposit runs out eventually. Hopefully the company stays around.
Worst case, someone can bring it back from the internet archive and give it a new home, as I did.
Does anyone think the y2k con will work a second time?
Does anyone think it was actually not a con, given there was essentially zero difference in outcome between the companies (and indeed whole countries) that spent heavily to deal with it and those that did absolutely nothing? I do realise I'm impugning the idea that Arthur Andersen and similar consultants put customers' interests first, and that isn't something that should be done too lightly.
I guess y2k-certified compliant cables aren't as utterly, hyperbolically and comically absurd as they were then.