Is there any way to even know?
It was happily chugging away, running our shitty phone system, and hadn't been restarted in 10+ years.
Edit: This also happened to me at a utility company I worked at. We had a server that ran some critical calculations for us but nobody could recall where it was physically located. It's way worse now that everything's virtualized - the cruft just sits there for years until you suddenly run out of resources and start looking closely.
ah yes, the good old "responds to pings but where is it?"
In some sense, traceroute does often give you some location information.
However, if you're talking about GPS or other geolocation information: even now, a machine that supports IPv4 and/or IPv6 and isn't a phone likely doesn't know anything about its own location. The ICMP echo and echo-reply packets make no provision for geolocation information.
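That said, you can still squeeze rough location hints out of traceroute itself: reverse-DNS the hops, since router names often embed airport or city codes (lax, fra, and so on). A minimal sketch, assuming a Unix-like box with the traceroute binary installed (the target is just a placeholder):

    import re
    import socket
    import subprocess

    def hop_names(target):
        """Run traceroute and reverse-DNS each hop's IP."""
        out = subprocess.run(["traceroute", "-n", target],
                             capture_output=True, text=True).stdout
        for line in out.splitlines()[1:]:            # skip the header line
            m = re.search(r"(\d+\.\d+\.\d+\.\d+)", line)
            if not m:
                continue                             # hop didn't answer
            ip = m.group(1)
            try:
                name = socket.gethostbyaddr(ip)[0]   # e.g. ae1.lax2.example.net
            except OSError:
                name = "(no PTR record)"
            yield ip, name

    for ip, name in hop_names("example.com"):
        print(ip, name)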
It was a silly hack that turned out to be clever and incredibly useful.
You have attempted to access a blocked website. Access to this website has been blocked for operational reasons by the DOD Enterprise-Level Protection System.
Took us an afternoon to trace their origin and make sure they weren't malicious, but in the end they were just eating watts.
I logged into the server and started analyzing traffic. It turned out that traffic on an upstream VoIP switch didn't always match the traffic leaving the server I was debugging; it was as if an identical system was receiving and responding to parts of the traffic. After some more debugging I discovered that there was an older, identical system online somewhere in their server room. Years ago, all services had been stopped, the system backed up and migrated to a new server. One day there had been a power loss, and when the servers rebooted, the old system everyone had forgotten about launched its previously stopped services, causing the customer all sorts of weird VoIP problems.
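For anyone chasing a ghost twin like that today, one quick check is to broadcast an ARP who-has for the address and see whether more than one MAC answers. A rough sketch (not what I actually did back then), assuming scapy and root privileges, with a placeholder address:

    from scapy.all import ARP, Ether, srp

    def macs_claiming(ip):
        """Broadcast an ARP who-has and collect every MAC that replies."""
        answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=ip),
                          timeout=2, retry=2, verbose=False)
        return {reply.hwsrc for _, reply in answered}

    macs = macs_claiming("192.0.2.10")
    if len(macs) > 1:
        print("Duplicate IP - more than one box answers:", macs)

Two MACs for one IP means two machines, and the vendor prefix of the unknown MAC usually tells you what kind of hardware to go hunting for in the server room.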
As a bored government employee in the early 1990s, I became fascinated with the WWW. I was a network admin (NetWare 2.15c), and there was a fat, unused internet pipe and several unused phone lines. I started to mess around with Linux (Slackware, kernel 0.97 I think), and after two weeks I had it talking to a Hayes modem. Voila, instant ISP! A little while later I installed Slirp, and it became my personal dial-up connection for many, many years.
Before I left that job in 1995, I moved the server (headless) to a broom closet (wrong of me, I know, I know). Knowing gov't culture, no one would mess with something like that. It was up and running at least until 1998 or so, at which point I moved to another country, and when I moved back, I no longer had the dial-up number. I like to think it is running to this day.
I think we as an industry have gotten much better about this. In the old days, as small minicomputers and micros expanded into less and less technical businesses, wiring standards and server room design guides were not well-known or followed.
Often, people just sort of winged it. Some employees are more naturally methodical, have better memories, and are longer-tenured than others. Also, many of the stories are from universities, where long-term thinking isn't guaranteed (but embarrassing stories have always been popular to share!).
A quick DDG finds this story from the University of North Carolina:
USENET archives would be a great source for more of these.
<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.
And I think the downvotes are uncalled for, the general idea is excellent, even if Bitcoin mining is iffy in a corporate setting (very bad if you keep them, and the finance people won't know what to do with them, plus their tax treatment is not trivial).
Stuff was moved in hastily from an acquired company when there was no room in the server room, hooked up because it was important (mail server), and subsequently forgotten while departments were shuffled around the building and employees left over a period of many months.
A week after I started, the server began having issues. And nobody knew where it was... Finally found it in some storage room.
So I guess 4 years ago somebody migrated the servers to a virtual environment and then just forgot about them. The IPs were migrated to the VMs, and these servers were left without an IP address on their network connections, so apart from going to the data room and checking every machine (which is what they did during the maintenance), there was no way to find them. They were still up and running after 4 years.
I work at an ISP / housing / colocation company, and occasionally hardware goes missing (nobody knows where it is anymore, it's not where it's supposed to be). Maybe some of them are broken, some might be stolen, but I'm fairly sure others are still running, not serving any purpose.
And from time to time we stumble over some virtual machine (or even physical server) where nobody knows anymore what it's supposed to do; the standard procedure seems to be to firewall it off, wait for a few weeks or months to see if anybody complains, and if not, shut it down, maybe archive it.
They sit there idling and unattended, burning power and disks, until some script kiddie finds whatever default root password was used or how to exploit some random apache/ssh flaw.
At that point the possibilities are endless: bitcoin miners are quite unnoticeable in most environments, but DDoS/spam zombies, proxies, bittorrent seedboxes, botnet C&C, "warez" and HTTP servers serving drive-by exploits are fairly common.
Protip: ask your datacenter provider to power your servers down (be it VMs or dedicated gear) after racking them up. Powering them back up when you really need them will only take a minute, and you'll save big on power, bandwidth, security and peace of mind.
My father's @gte.net email delivery has recently become spotty. After hours of phone calls with Verizon, no one at any level of support can seem to find the old GTE mail servers.
Some Googling for smtp.gte.net found this IP address, 18.104.22.168, which seems to respond to pings but not smtp traffic.
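If anyone wants to reproduce the SMTP half of that check: a live SMTP server completes the TCP handshake on port 25 and greets you with a 220 banner before you send anything, so "pings but no SMTP" shows up as a refused or timed-out connection. A quick sketch (the address is the one quoted above):

    import socket

    def smtp_alive(host, port=25, timeout=5.0):
        """True if the host accepts TCP on port 25 and sends a 220 greeting."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                banner = s.recv(512)              # SMTP servers speak first
                return banner.startswith(b"220")  # 220 = service ready
        except OSError:
            return False                          # refused, timed out, unreachable

    print(smtp_alive("18.104.22.168"))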
And in large organizations, it's hard to keep track of who's using what and if they still need it.
Imagine working at a hospital or a bank, emailing people about a server, and everyone says it's not being used. Then you turn it off, and a week later something critical breaks at 3am, resulting in an emergency. Who gets blamed, you or the people who forgot the server was being used? The people who set it up may not even work there anymore.
Next year I'm going to find that system on the 20th anniversary of that page.
The owner of the BBS, handle "Kilroy" as I recall, hadn't logged into his own BBS in _years_. Nor anyone else's, for that matter.
We used to joke that he'd probably died, and Mom and Dad just left his Commodore 64 sitting there, wondering why their phone bill was twice as much as their neighbours.
After that I added a sign to the machine. "DO NOT TURN OFF UNDER ANY CIRCUMSTANCES." And made sure I wore headphones to block out the sound of the fan.
A month later, with 4 internal customers on board and 5 other apps installed, we got a new technical expert. The machine was there. He checked. No Weblogic server running. No Tomcat. No database software. Not even Microsoft Word. A clean machine. Format, reinstall...
The day after, Corporate gave us the budget for a server room. \o/
One of the university's Novell servers had been doing the business for years and nobody stopped to wonder where it was - until some bright spark realised an audit of the campus network was well overdue.
According to a report by Techweb it was only then that those campus techies realised they couldn't find the server. Attempts to follow network cabling to find the missing box led to the discovery that maintenance workers had sealed the server behind a wall.
Eventually Brad left that particular DC operator, leaving the startup with no inside contact - but the servers stayed up for years to come (and the company was never charged for the resources).
These machines were eventually "replaced" with newer hardware in a nearby facility, but the DC operator has no idea where these old machines are physically located within the facility (thus cannot remove them), so they remain active to this very day, sitting idly by...
They eventually found it covered in dust in a small utility closet. None of the fans were working either (other than the CPU/PSU ones), yet it still chugged along doing its job.
TL;DR: a SaaS system that not only auto-scales, but does so autonomously. When it has excess funds, it buys more VMs and provisions them with itself. If cashflow is low, it shuts some down. Add in the ability to create accounts (using bitcoin as currency) and you could very quickly lose the paper trail and have a SaaS owned by the Æther.
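As a toy illustration of that control loop (every name and API below is made up for the sketch; a real version would talk to an actual wallet and a cloud provider):

    import random

    VM_COST = 10.0    # made-up price of one VM per billing cycle

    class Wallet:
        def __init__(self, balance):
            self.balance = balance

    class Fleet:
        def __init__(self):
            self.vms = []
        def provision(self):
            self.vms.append("vm-%d" % len(self.vms))  # clone ourselves onto a new VM
        def destroy(self):
            if self.vms:
                self.vms.pop()

    def rebalance(wallet, fleet):
        """Grow when flush, shrink when broke - the entire 'owner'."""
        if wallet.balance > 2 * VM_COST * (len(fleet.vms) + 1):
            wallet.balance -= VM_COST
            fleet.provision()
        elif wallet.balance < VM_COST * len(fleet.vms):
            fleet.destroy()

    # Twelve simulated billing cycles: revenue in, rent out, fleet adjusts.
    wallet, fleet = Wallet(100.0), Fleet()
    for _ in range(12):
        wallet.balance += random.uniform(0, 30)     # SaaS revenue
        wallet.balance -= VM_COST * len(fleet.vms)  # VM rent
        rebalance(wallet, fleet)
        print("balance=%7.2f  fleet=%d" % (wallet.balance, len(fleet.vms)))

Nobody has to stay in the loop: as long as revenue covers the rent, the thing keeps itself alive.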
12 years later, the phone number still works, even though the company's phone system has been relocated to another room and replaced by another vendor. No idea how this happened.
It could very well be that somebody migrated all the relevant phone numbers off the test system and chose to preserve the numbers where they didn't know what they were needed for. That might be cheaper than risking losing functionality that somebody depends on (especially if the one doing the migration didn't know it was an abandoned test system).
I think these servers will outlive me.
We actually study this special case in Embedded systems engineering.
Another fun story happened years before. There was a large power outage at the data center of another large Brazilian portal. Three days later, someone called our office, asking if we remembered what kind of hardware the server was running on. The machine didn't boot and they needed to get to its console. Unfortunately, nobody knew where the machine was physically located or what it looked like. In the end, it was found inside a Cubix chassis, an early blade-like machine.
Almost a year after I thought I had tracked everything down, an odd error started cropping up where the domains of a couple of older sites wouldn't resolve on just some computers. After digging through it, we found there was some name server I had missed, but we still didn't know where it was until a thunderstorm killed our office router. In the process of replacing it, we traced one cable to a closet in the basement where, lo, the mystery name server was sitting.
Later, when the project ended, no one bothered to go through the whole formal process to get the machine out of the premises again. The machine was just left there, connected to the network.
In the months after the project ended different departments moved into the building. Different people with different tasks. Yet the machine stayed connected. I guess no one had the courage to shut down a machine they didn't know about.
For a year or so I still used to check occasionally if I still could log into the machine.
It disappeared eventually but unfortunately I don't remember how long exactly it lasted.
It was probably the largest image database back then. That website had thousands of photos of every female celebrity, categorized by name. The website owner collected all the celebrity images from public newsgroups. One day the updates stopped and everyone wondered what happened. Many months later the website vanished (ca. 2002), and I read in the news that a friend had found the owner's dead body and no one was paying the bills anymore. As usual, a domain grabber replaced it with a scam website shortly afterwards. I remember only a few frame-pages were backed up by Archive.org - though I forgot the URL of the website. Does anyone remember that formerly rather famous site?
Now that I think of it, I've got 3 for 2 different clients.
One of my favorites was this one, on Ars:
Having a perfect CMDB for hardware and following all procedures step by step could identify or prevent cases like that. But in reality, in a normal environment, it's going to happen.
Some of the data was so bad that we ended up having one guy whose job was to physically audit what hardware was really in the data centres, and then attempt to reconcile that with the CMDB data.