Hacker News
Ask HN: Are there forgotten servers out there?
144 points by forgottenacc56 on May 30, 2015 | 105 comments
I was wondering if there are "forgotten" servers out there. No longer doing anything, but still up and running.

Is there any way to even know?

Yes - in my network admin days back in 2007, we were decommissioning our datacenter as we'd virtualized everything, and as we were gutting the room we found our PABX/voicemail server under the floor tiles, happily running OS/2. Nobody ever knew where it was, and it had been installed 13 years earlier. The floor was on a UPS and generator, so the power had never been disconnected.

It was happily chugging away, running our shitty phone system, and hadn't been restarted in 10+ years.

Edit: This also happened to me at a utility company I worked at. We had a server that ran some critical calculations for us but nobody could recall where it was physically located. It's way worse now that everything's virtualized - the cruft just sits there for years until you suddenly run out of resources and start looking closely.

>nobody could recall where it was physically located.

ah yes, the good old "responds to pings but where is it?"

Does the ping protocol have any capability to provide location information? Because it should if it doesn't..

The usual method is to get the device's MAC address, use that to track down/identify the switch port it's connected to, then physically trace the network cable that's plugged into that switch port.
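The first step of that method can be sketched in a few lines. This is a rough illustration, not a real tool: `mac_for_ip` is a hypothetical helper that scans `arp -a`-style output, and the line format it assumes varies between operating systems.

```python
import re

def mac_for_ip(arp_output, ip):
    """Scan `arp -a`-style output for the given IP and return its MAC.

    Assumes the common "host (10.0.0.5) at aa:bb:cc:dd:ee:ff ..." line
    format; real output differs by OS, so adjust the parsing as needed.
    """
    for line in arp_output.splitlines():
        if f"({ip})" in line:
            # Match a colon-separated MAC address anywhere on the line.
            m = re.search(r"([0-9a-fA-F]{2}(?::[0-9a-fA-F]{2}){5})", line)
            if m:
                return m.group(1).lower()
    return None
```

With the MAC in hand, you'd query the switch's MAC address table to find the port, then go pull on the cable.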

Or, if the hardware supports it, eject the CD/DVD ROM drive.

Do servers have onboard speakers these days? Can you make them beep in a distinct pattern?

Many/most servers (or the ones I've dealt with the most, anyway) have an LED that can be activated remotely -- i.e. through IPMI, iLO, etc. -- but I can't remember what it's called at the moment. It can be very handy when you're remote and need to direct someone (physically present) to a specific server.

Not full-fledged speakers I don't think. But buzzers I assume most have.

Or light up the ID LED or similar (Fujitsu). There are often indicators to make sure it's the right server.

So often the switch cables disappear in a bundle into a cable tray in the ceiling.

That is a small hurdle when you have alligator clips and a multi-meter.

There isn't a separate ping protocol. Echo packets are part of the Internet Control Message Protocol (ICMP). ICMP echo packets are used by both ping and traceroute.

In some sense, traceroute does often give you some location information.

However, if you're talking about GPS or other geolocation information, even now, if a machine supports IPv4 and/or IPv6 and isn't a phone, it likely doesn't know any geolocation information about itself. No provision for geolocation was made in the ICMP echo or echo reply packets.
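You can see this from the packet format itself: an ICMP echo request is just type, code, checksum, identifier, sequence number, and an arbitrary payload. There's simply no field where location could go. A minimal sketch of building one (actually sending it would need a raw socket and root):

```python
import struct

def icmp_checksum(data):
    # Standard Internet checksum: one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident, seq, payload=b"ping"):
    # Type 8 (echo request), code 0, checksum placeholder, then id/seq.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A correctly checksummed packet has the property that recomputing the checksum over the whole thing yields zero, which is how receivers validate it.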

Check out how ping actually works and you'll discover...there's no such thing as a ping protocol!

It was a silly hack that turned out to be clever and incredibly useful.

Meh, WTF!? From .ch I get:

"You have attempted to access a blocked website. Access to this website has been blocked for operational reasons by the DOD Enterprise-Level Protection System.

APPLICATION: qos-mission-critical-pan"

There is the LOC record for DNS but that would probably require manual updating unless you attached a GPS receiver to your server/device.
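You can inspect one with `dig LOC <domain>`; the presentation format (RFC 1876) is degrees/minutes/seconds plus altitude. A rough sketch of converting the coordinate part to decimal degrees, assuming all six DMS fields are present (the RFC actually makes minutes and seconds optional):

```python
def loc_to_decimal(loc_text):
    """Convert the lat/lon portion of a DNS LOC record's presentation
    format to decimal degrees, e.g. "52 22 23.000 N 4 53 32.000 E -2.00m".
    """
    f = loc_text.split()
    lat = float(f[0]) + float(f[1]) / 60 + float(f[2]) / 3600
    if f[3] == "S":
        lat = -lat
    lon = float(f[4]) + float(f[5]) / 60 + float(f[6]) / 3600
    if f[7] == "W":
        lon = -lon
    return lat, lon
```

Of course, as noted, the record only says where somebody once claimed the server was.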


Same here, found abandoned routers in the ceiling at the school where I was doing junior sysadmin work.

Took us an afternoon trying to find out their origin and make sure they weren't malicious, but in the end they were just eating watts.

Prior to opening the comments I thought there was no way somebody had recently found OS/2. Not to one-up you or anything, but we just found over a dozen (~15, I think) OS/2 machines in our environment two weeks ago.

At a place I worked, we had a service that relied on a cron job running a script every hour. One day someone wanted to know where the server was so he could connect an external hard drive to it and copy some files in. Not a single person knew where the server was, or even what its IP address was. Since it had been set up years ago, everyone who had worked on it had since left the company, and no one had ever documented it. So somewhere, either in our office or in one of the three public clouds we used, was a server happily running a script every hour that no one could locate or stop. We eventually found it when we moved offices and a super old Dell box in a tiny dusty closet was unplugged and the script stopped running. Now I always document the physical locations of services too.

Oh, yes! In a previous job we developed VoIP servers (PBXes). One day a customer started experiencing some really weird problems and I was asked to debug it.

I logged into the server and started analyzing traffic. It turned out that traffic on an upstream VoIP switch didn't always match the traffic leaving the server I was debugging. It was as if an identical system was getting and responding to parts of the traffic. After some more debugging, I discovered that there was an older, identical system online somewhere in their server room. Years ago, all services had been stopped and the system backed up and migrated to a new server. One day there had been a power loss, and when the servers rebooted, the old system everyone had forgotten about launched its previously stopped services, causing the customer all sorts of weird VoIP problems.

I set up one of these deliberately in younger days...

As a bored government employee in the early 1990s, I became fascinated with the WWW. I was a network admin (NetWare 2.15c), and there was a fat, unused internet pipe and several unused phone lines. I started to mess around with Linux (Slackware, kernel 0.97 I think), and after two weeks I had it talking to a Hayes modem. Voila, instant ISP! A little while later I installed Slirp, and it became my personal dial-up connection for many, many years.

Before I left that job in 1995, I moved the server (headless) to a broom closet (wrong of me, I know, I know). Knowing gov't culture, no one would mess with something like that. It was up and running at least until 1998 or so, at which point I moved to another country, and when I moved back, I no longer had the dial-up number. I like to think it is running to this day.

There are many (possibly apocryphal) stories about servers lost in ceilings, locked closets, etc that just kept on serving, sometimes critical services.

I think we as an industry have gotten much better about this. In the old days, as small minicomputers and micros expanded into less and less technical businesses, wiring standards and server room design guides were not well-known or followed.

Often, people just sort of winged it. Some employees are more naturally methodical, have better memories, and are longer-tenured than others. Also, many of the stories are from universities, where long-term thinking isn't guaranteed (but embarrassing stories have always been popular to share!).

A quick DDG finds this story from the University of North Carolina:


USENET archives would be a great source for more of these.

This guy's server: http://bash.org/?5273

<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.

Couldn't he configure it to do Bitcoin mining, then go look for it with thermal imaging?

Back then he could have configured it to search for Mersenne primes (https://en.wikipedia.org/wiki/Great_Internet_Mersenne_Prime_...) or the like. There have always been capacious sinks for computer power ^_^.

And I think the downvotes are uncalled for; the general idea is excellent, even if Bitcoin mining is iffy in a corporate setting (very bad if you keep the coins, the finance people won't know what to do with them, and their tax treatment is not trivial).

There are numerous ways to heat a machine other than Bitcoin mining (actually, I doubt Bitcoin mining is the most efficient way).

fsvo efficient

He could, except this quote predates Bitcoin by at least five years. The oldest reference for this page in the Wayback Machine is from 2003. The quote may be even older than that.

Been there. Except in my case it was two 8 story office buildings.

Stuff was moved in hastily from an acquired company when there was no room in the server room, hooked up because it was important (a mail server), and subsequently forgotten while departments were shuffled around the building and employees left over a period of many months.

A week after I started the server started having issues. And nobody knew where it was... Finally found it in some storage room.

What to do in these cases? What are the options?

Disconnect cables from each switch and see when the ping replies stop :).

Unless it's on a WiFi connection, in which case things get substantially more complicated :)

Typically you can login to the router and see machines attached, and kick/ban MAC addresses.

That still won't help you find where the machine is located, though. With cables you can presumably just follow the cable when you realize disconnecting one stops the pings.

Wardriving software like Kismet will show you the RSSI of the AP. Is it possible with stock hardware to determine the RSSI of connected devices? Then it's just the hot/cold game. I'm sure actual radio triangulation/direction finding would be possible as well, but I'm not sure how difficult that would be.

You could have an electrician identify circuits and start throwing circuit breaker switches to achieve the same result? (obviously might be a bit chaotic, but it'd be a good time to test your UPS infrastructure at the same time, right?)

And now you have really lost the machine.

You can turn off the electric breakers one by one

Clean your apartment.

If the server has a "PC speaker" like desktop computers then you could make them beep to help find them.

Send it an ASCII 7?
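For the literal-minded, a sketch of that suggestion. ASCII 7 is the BEL control character; writing it to the machine's console rings the PC speaker, though this assumes you can already get a shell on the mystery box, which is rather the problem:

```python
import sys

# "\a" is the escape for ASCII 7 (BEL) in most languages.
BEL = chr(7)
sys.stdout.write(BEL)  # beeps on a console with a working speaker
sys.stdout.flush()
```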

This week I got a support call from one customer: "hey, we have a maintenance scheduled for our Dell servers, is it ok if we shut down the two pre-prod servers?" "what pre-prod servers?" [all the non-production servers were virtualized about 4 years ago] "well, there are two servers here, with pre-prod labels and running Linux; no applications are running on them"

So I guess 4 years ago somebody migrated the servers to a virtual environment and then just forgot about them. The IPs were migrated to the VMs, and these servers were left without an IP address on their network connections, so apart from going to the data room and checking every machine (which is what they did during the maintenance), there was no way to find them. They were still up and running after 4 years.

Auditing your racks periodically is good for many other reasons. RackTables and similar tools are invaluable!

Thank you for mentioning it, I didn't know about the tool!

Would an arp-scan work, or checking the ARP cache of switches for MAC addresses that weren't tied to an IP address? Presumably the servers attempt[ed] to get an IP address at some point (like broadcasting DHCPDISCOVER or some such) - wouldn't it keep operating even if an IP weren't acquired?

Not the ARP cache but the MAC address table, but yes, switches should know where they are.
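The table lookup is easy to script once you have the output. A sketch: `port_for_mac` is a hypothetical helper assuming Cisco-style `show mac address-table` lines (vendors format this differently, so treat it as illustrative):

```python
import re

def port_for_mac(table_output, mac):
    """Find which switch port a MAC appears on, given output whose lines
    look like: "  10    0018.b974.5a01    DYNAMIC     Gi0/12"
    """
    # Normalize aabb.ccdd.eeff / aa:bb:cc:dd:ee:ff styles to bare hex.
    want = re.sub(r"[^0-9a-f]", "", mac.lower())
    for line in table_output.splitlines():
        parts = line.split()
        if len(parts) >= 4:
            got = re.sub(r"[^0-9a-f]", "", parts[1].lower())
            if got == want:
                return parts[-1]  # port is the last column
    return None
```

Then it's down to following the cable from that port, tray in the ceiling permitting.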

What is server virtualization?

Oh god yes. A few years ago some labmates and I discovered that a post doc had set up an entire computer cluster of 30 workstations / associated infrastructure, colocated it at a data center nearby, then left without leaving any documentation. Everyone that knew the cluster existed left soon after, and so labmates that arrived later were running simulations on their personal laptops while this cluster sat idle. We discovered it only because my PI received a large bill for rack space and remembered it existed.

Sure, happens now and then.

I work at an ISP / housing / colocation company, and occasionally hardware goes missing (nobody knows where it is anymore, it's not where it's supposed to be). Maybe some of them are broken, some might be stolen, but I'm fairly sure others are still running, not serving any purpose.

And from time to time we stumble over some virtual machine (or even physical server) where nobody knows anymore what it's supposed to do; the standard procedure seems to be to firewall it off, wait for a few weeks or months to see if anybody complains, and if not, shut it down and maybe archive it.

Same experience here. Servers get ordered, racked, provisioned and left powered on waiting for a sysadmin to make use of them... which quite often doesn't ever happen, because team priorities change over time, people move on/get fired, entities go through reorgs, etc.

They sit there idling and unattended, burning power and disks, until some script kiddie finds whatever default root password was used or how to exploit some random apache/ssh flaw.

At that point the possibilities are endless: bitcoin miners are quite unnoticeable in most environments, but DDOS/spam zombies, proxies, bittorrent seedboxes, botnet C&C, "warez" and http servers serving drive by exploits are fairly common.

Protip: ask your datacenter provider to power your servers down (be it VMs or dedicated gear) after racking them up. Powering them back up when you really need them will only take a minute, and you'll save big on power, bandwidth, security and peace of mind.

My father has an email account hosted by Verizon with the domain @gte.net. General Telephone & Electric merged with Bell Atlantic in 2000 to become Verizon.

My father's @gte.net email delivery has recently become spotty. After hours of phone calls with Verizon, no one at any level of support can seem to find the old GTE mail servers.

Some Googling for smtp.gte.net found this IP address, which seems to respond to pings but not SMTP traffic.

I am sure there are tens of thousands if not more. Server admins are very risk averse and don't tend to turn off machines just to check if someone's using it.

And in large organizations, it's hard to keep track of who's using what and if they still need it.

Imagine working at a hospital or a bank, emailing people about a server, and everyone says it's not being used. Then you turn it off and something critical gets broken at 3am a week later resulting in an emergency. Who gets blamed, you or the people who forgot the server was being used? The people who set it up may not even be working there anymore.

Years ago I shut down a start-up, and as part of some business partnership, we had a physical server hosted in some big company's datacenter. They forgot about it, and I didn't have time or incentive to drive over there to fetch one old machine. So it just kept running there for a couple years, at first just continuing to serve my failed company's website, and then doing nothing, other than serving as a personal download proxy for me. That lasted for about 2-3 years until they finally found it and shut it off, so I took it home.

We came across one at work (NASA) a few years back, a server running in a far off room that no one remembered anything about. It had probably last been used by a grad student at least a decade ago, but was still up and running and doing, well, something. We eventually had it removed and turned the room into a collaboration space.

My 1996 homepage is still online under its original URL. Which likely means the system hosting it is still online. Except the original ISP no longer exists, nor does the ISP that bought the original ISP.

Next year I'm going to find that system on the 20th anniversary of that page.

My first websites are still somewhat online too, as Archive.org and/or the Archive Team managed to backup the site :)

I've always wanted to intentionally set up servers in hidden places just to see how long they stay online. For example, putting a Raspberry Pi in the back of a rack in a server closet. Maybe run a Tor relay on it or something like that.

Back in the late 80s / early 90s there was a BBS in my hometown (Windsor, Ontario, Canada) called GB Hotel. It wasn't very popular, but if all the other BBS lines were busy, it was something to do.

The owner of the BBS, handle "Kilroy" as I recall, hadn't logged into his own BBS in _years_. Nor anyone else's, for that matter.

We used to joke that he'd probably died, and Mom and Dad just left his Commodore 64 sitting there, wondering why their phone bill was twice as much as their neighbours'.

Seen it a few times. A company acquires another company, or an employee leaves. And then you find out years later, during an office refit, that the PC under their desk was running a critical business process.

Once I was an intern at an air force base. In my cubicle, stuffed into a corner, was an old SGI Octane workstation. It was absolutely covered in dust and looked neglected. Its fan was very annoying to me. One day, I unplugged the machine because I figured that it was completely purposeless. Less than 2 minutes later, someone I had never seen was standing at my desk, berating me for turning off a server that ran a set of critical data reduction processes for the nearby wind tunnel.

After that I added a sign to the machine. "DO NOT TURN OFF UNDER ANY CIRCUMSTANCES." And made sure I wore headphones to block out the sound of the fan.

Should've added a note saying what it does, too, seeing how you'd already done the old "power it down and wait for someone to get angry" trick.

Yes, I should have. But I don't remember if I did or not. This happened in the fall of 2005.

If you had physical access, I wonder if you could have set up fan control?

I had physical access to the machine, but did not have a login.

We started a corporate project in BigCorp. You know: once a corporate software package is installed, you claim you're in charge of it, you can hope to get the budget for it, and you're competing with the other European teams of BigCorp. So you install as much as you can and pretend to have been delegated by BigCorp's headquarters. So we installed Artemis in a VMware VM on a desktop and got it up and running in the first 16 hours of the team ;)

A month later, with 4 internal customers on board and 5 other apps installed, we got a new technical expert. The machine was there. He checked. No WebLogic server running. No Tomcat. No database software. Not even Microsoft Word. Clean machine. Format, reinstall...

The day after, Corporate gave us the budget for a server room. \o/

Happened to me: a guy left, and I found out after they tried to upgrade his machine.


One of the university's Novell servers had been doing the business for years and nobody stopped to wonder where it was - until some bright spark realised an audit of the campus network was well overdue.

According to a report by Techweb it was only then that those campus techies realised they couldn't find the server. Attempts to follow network cabling to find the missing box led to the discovery that maintenance workers had sealed the server behind a wall.

I know of one interesting story here. Early on in (what would eventually become a very successful) tech company's life, they acquired some collocated space for free thanks to a friend (call him Brad) of an early employee.

Eventually Brad left that particular DC operator, leaving the startup with no inside contact - but the servers stayed up for years to come (and the company was never charged for the resources).

These machines were eventually "replaced" with newer hardware in a nearby facility, but the DC operator has no idea where these old machines are physically located within the facility (thus cannot remove them), so they remain active to this very day, sitting idly by...

If it's in a DC, can't they track down the MAC of whatever is responding to their IP, and then trace it to a switch port?

A previous company I was at had a small server set up to run a few automated tasks every morning. It must have been running for at least 10 years before a new employee asked where it was physically located.

They eventually found it covered in dust in a small utility closet. None of the fans were working either (other than the CPU/PSU), yet it still chugged along doing its job.

I'm always fascinated by StorJ.


TL;DR: a SaaS system that not only auto-scales, but does so autonomously. When it has excess funds, it buys more VMs and provisions them with itself. If cashflow is low, it shuts some down. Add in the ability to create accounts (using Bitcoin as currency) and you could very quickly lose the paper trail and have a SaaS owned by the Æther.

There's the story about the Novell server left running in a walled-off room:


Not really "doing anything", but related: I had to set up a VoIP server for a test project in 2003, which was supposed to be terminated half a year later. Since it was a test project, I was advised not to document anything formally, just to place it in the rack and ignore everything else. I left this job soon after.

12 years later, the phone number is still working, even though the company's phone system has been relocated to another room and replaced by another vendor. No idea how this happened.

> No idea how this happened.

It could very well be that somebody migrated all the relevant phone numbers off the test system and chose to preserve the numbers where they didn't know what they were needed for. Might be cheaper than risking losing functionality that somebody depends on (especially if the one doing the migration didn't know it was an abandoned test system).

I heard a company forgot about their third datacenter (second backup), as several sys-admins left the company. It was found in an old bunker below the basement of a building.

I work for a pretty large enterprisey corp, and this is very close to what happened to us. In the early 2000s we had a DR setup for a specific platform at a 3rd-party hosting provider. In the mid-2000s there was a major transition of infrastructure folks. The DR site, being completely out of date and never used, was abandoned and forgotten. They continued paying the bill to the 3rd party for, no joke, 4 years until someone realized that it was useless hardware.

Intriguing question; I would think that a certain percentage of servers might have outlived their owners. Of course, there could be many additional reasons for abandoned servers.

AWS instances on the free tier that start to charge your credit card after a year.

Oh, I must pay attention to that.

We have several gorgeous AS/400s still going strong, quietly gathering dust while they wait for the next time we need to test something on them.

I think these servers will outlive me.

Well, every six months or so I review my various VPSes and I usually find at least one I'd forgotten about, so unless I'm unusually forgetful I'd say "yes" :)

Apart from normal servers, lots of embedded devices probably also have integrated servers. There must be many, many of them idling away somewhere.

Motes (wireless sensor nodes) have an average lifetime of 5 years, unless they are linked to a power grid and stable internet. Most of these are either broken, frozen, unreachable, or out of power.

We actually study this special case in Embedded systems engineering.

Right after I was hired by a large portal part of a much larger telco, we were decommissioning one of our data centers. In one of the planning meetings there was a discussion about an undocumented class C range that nobody knew what was, yet had some significant traffic. I never knew what that network was or what was running there. As far as I know, someone may have lost their spam relays or their phishing hosts.

Another fun story happened years before. There was a large power outage at the data center of another large Brazilian portal. Three days later, someone calls our office asking if we remember what kind of hardware the server was running on. The machine didn't boot and they needed to get to its console. Unfortunately, nobody knew where the machine was physically located or what it looked like. In the end, it was found inside a Cubix chassis, an early blade-like machine.

I took over a job at a marketing firm that had lost its senior developer about a year before. They had been running on luck and whatever quick fixes their front-end developers could hack together when anything went down. Needless to say, no one had any idea where the servers were or even how many servers we had. Took a while to track them down.

Almost a year after I thought I had tracked everything down, an odd error started cropping up on our sites: the domains of a couple of older sites wouldn't resolve on just some computers. After digging through it, we found there was some nameserver I had missed, but we still didn't know where it was until a thunderstorm killed our office router. In the process of replacing it, we tracked one cable to a closet in the basement where, lo, the mystery nameserver was sitting.

Many years ago I did a job for a big corporation with a big IT department. Servers were in the data center, of course, and to get one you had to fill out a lot of different forms. When the project was on fire, we were allowed to bring in our own machine and use it as a server.

Later, when the project ended no one bothered to go through the whole formal process to get the machine out of the premises again. The machine was just left there connected to the network.

In the months after the project ended different departments moved into the building. Different people with different tasks. Yet the machine stayed connected. I guess no one had the courage to shut down a machine they didn't know about.

For a year or so I still used to check occasionally if I still could log into the machine. It disappeared eventually but unfortunately I don't remember how long exactly it lasted.

Maybe some remember a UK website with "celebs" in the URL in the late 1990s (black background with small stars; the site used frames).

It was probably the largest image database back then. That website had thousands of photos of every female celebrity, categorized by name. The website owner collected all the celebrity images from public newsgroups. One day the updates stopped and everyone wondered what happened. Many months later the website vanished (ca. 2002), and I read in the news that a friend had found the dead body of the owner and no one had paid the bills. As usual, a domain grabber replaced it with a scam website shortly afterwards. I remember only a few frame pages were backed up by Archive.org, though I forgot the URL of the website. Does anyone remember that formerly rather famous site?

Of course there are; the question, though, is how "forgotten" and what counts as a server. There are websites out there running on servers that haven't been changed or updated in years. Presumably they are forgotten in one sense or another.

I've got one that I set up for a client who then decided they didn't need it. Still running and paid for. I imagine this happens a lot.

Edit: Now that I think of it, I've got 3, for 2 different clients.

The reason I enjoy these kinds of stories is the uptime. I know it's not a real indicator of anything, but I enjoy seeing machines that have been running for years. (I used to frequent a website where people actually tracked their uptimes.)

One of my favorites was this one, on Ars:


A few years ago, when I upgraded to a new, more powerful box at my hosting provider, I noticed half a year later that the old one (which I had wiped and done some testing on after transferring) was still chugging along, serving the silly 403 page I'd set up. After a few days of talking to the confused support, it got shut down.

I have a forgotten server happily chugging away somewhere; it still sends me mail sometimes. It might be lonely.

Yes, it happens. From small companies to corporations, those can be VMs, old servers, racks of servers...

Having a perfect CMDB for hardware and following all procedures step by step could identify/prevent cases like that. But in reality, in a normal environment, it's going to happen.

I'm curious - what IS the perfect CMDB software?

My belief is that the only way to get perfect CMDB software is for your company to write it itself. Every CMDB system winds up being twisted into machinations of the original vendor's object model, to the point where the vendor can't even help their customer anymore. I've witnessed CMDB implementation projects at least a dozen times and have seen 0 actually succeed at what the customer really wanted, because people are sold unicorns and puppies in the ITIL model that no vendor can actually deliver, because it's theoryware. Almost every CMDB or equivalent system-of-record database has been RDBMS-based in my experience on the enterprise IT side. One exception is Facebook, which has their own that's not from some random acronym vendor, with pluggable clients such as Chef's in place of the provided ohai.

My first proper software development job was to consolidate the 7 different CMDBs that the company had - they were an ISP/hosting provider who had done a rapid succession of acquisitions, all of which came with their own database of hardware.

Some of the data was so bad that we ended up having one guy whose job was to physically audit what hardware was really in the data centres, and then attempt to reconcile that with the CMDB data.

I'd assume like 20% :-)

I used to administer a website that was running on a server set up by the original site owner at a hosting company he worked at. The site kept running for 5 or 6 years after he stopped working there.

Yeah, a department I worked in had over 1,000 VMs up with nothing installed. That's what you get when you have high turnover, huge cash flow, and management that doesn't care.

Somehow this thread reminds me of WALL-E.

I wonder what the carbon footprint of all this unused compute power is?
