
I set up a dialup gateway for a company with NetBSD 1.3 in 1998, on a Compaq Pentium 90 desktop with 32 MiB of RAM. I got a call last year (!) from the owner saying it had stopped working suddenly. I pulled the disk, plugged it into an IDE/USB adapter and read the syslog, as I had no PS/2 keyboard and the machine didn't have USB.

I suspected a hardware failure, given the machine's age.

Max uptime: 8 years, 122 days!

It was still being used (on dialup). It stopped working because the dialup company discontinued the service, not because of a hardware failure.

It has been replaced by a cheap ADSL connection and router. Ironically, that had only been an option for about six months, due to the rural location and the lack of a DSLAM at the local exchange.

Wonderful OS, although I'm ashamed to say I left telnet open to the public internet.

The same can actually be said for Windows NT4, which tends to show up unexpectedly, quietly sticking things together.

Edit: some other notes that might be of interest to long-running UNIX admins:

1. The log files had eaten up nearly all the disk space (2 GB), so rotate them (see the sketch below).
2. The clock had drifted by about 5 days, so use NTP.
3. Don't assume that something you leave alone will still be sensibly secure a few years later; systems need to be kept religiously up to date.
4. Plan for connectivity modes to change over time and keep them current; the company was down for 4 days whilst BT got their arse in gear (not that they cared, as they had 3G that worked reasonably well).
5. Buy good quality hardware - it does last!
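
On NetBSD, the first two fixes are each a couple of lines of config. A minimal sketch, assuming a reasonably modern NetBSD (the thresholds are illustrative, and if memory serves the 1.3-era rc.conf knob was xntpd rather than ntpd):

    # /etc/newsyslog.conf - rotate /var/log/messages at ~1000 KB,
    # keep 7 old copies, gzip them (the Z flag)
    /var/log/messages    644  7  1000  *  Z

    # /etc/rc.conf - start ntpd at boot so the clock stays sane
    ntpd=YES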

"...I'm ashamed to say I left telnet open to the public internet."

How many people are really running war dialers these days? :)

Very good work; you should definitely have that on your resume.

According to the logs, there were quite a few telnet attempts until about 2003, then it tailed off. The connection was up for around 2-3 hours a day. No incoming calls or wardialers, as the modem was set not to answer.

No statistics on SSH, though, as it wasn't even running and possibly wasn't even installed. I didn't check! :)

(not putting that bit on my resume ;)

Very interesting comment; I especially liked the content of your edit.

Say, if you were setting up a system intended to last over a decade right now would you use NetBSD?

No, I wouldn't, unfortunately.

I would, probably controversially, use Windows Server 2012 on mid-range HP DL or ML series kit. Since Windows Server 2008 R2 came along, the scriptability provided by PowerShell and PowerShell DSC has made it a better compromise between usability and automation than anything else I've seen so far. Not only that, it has a huge supported lifecycle.

Bear in mind I come from a very strong UNIX background, going right back to the M68K Sun3 era, through Solaris/HP-UX and Linux, and have used all of them on the desktop.

But in this case the server seems to have been a "set up and forget" kind of deal, and that also implies that it was a smaller company...

I don't think PowerShell (and DSC) is that relevant in that kind of setup; scripting and automation tend to become relevant when you have a larger environment. It sounds like there wasn't much to automate anyhow (maybe system upgrades).

Also, doesn't Red Hat at least offer a 10-year support cycle?

Personally I would raise one issue with Windows that seems to make UNIX(-likes) a better choice for this kind of setup:

System updates are problematic on Windows because of the need to reboot, whereas on UNIX systems you of course only need one after kernel upgrades. I would wager that a reboot is what would eventually "break" an unmaintained box such as this one, some day. Then again, the updates themselves are another candidate.

It was a small company (a family farm-machinery outfit), but their requirement is far more common than a massive software deployment. There are literally millions of servers churning away on trivial jobs in people's offices and the like.

PowerShell allows automated maintenance and edge cases to be dealt with efficiently. DSC allows repeatable deployments, i.e. you can use it to encode all the basic tenets of a reliable, secure system (see the sketch below). You don't build a system and then throw it into the field with fingers crossed that you remembered every step when you built it. I have personally written hundreds of pages of procedural documentation on this, and I can now script that knowledge up easily.
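
To make that concrete, a minimal DSC sketch (the two resources below are illustrative tenets, not the actual baseline described above): you declare the state you want, compile it to a MOF, and apply it; re-running it converges the machine back to that state.

    Configuration ServerBaseline {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'localhost' {
            # Hypothetical tenet: no telnet on the box
            WindowsFeature TelnetClient {
                Name   = 'Telnet-Client'
                Ensure = 'Absent'
            }
            # Hypothetical tenet: time service always running
            Service W32Time {
                Name        = 'W32Time'
                StartupType = 'Automatic'
                State       = 'Running'
            }
        }
    }

    ServerBaseline -OutputPath 'C:\DSC'                  # compile to a MOF
    Start-DscConfiguration -Path 'C:\DSC' -Wait -Verbose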

Red Hat is an option, but the ecosystem is fragmented and inconsistently documented; Microsoft's documentation is absolutely wonderful in comparison. Not only that, I can easily hire a competent sysadmin for the Windows platform here in the UK. The Linux guys are few and far between and, in my experience, generally cowboys (even at the £60k level).

Windows rarely needs rebooting unless you are doing something wrong. Not all updates need to be applied: we pick individual ones in scope for what is actually deployed and in use, and push them out ourselves rather than using Windows Update. This is standard practice on servers. Desktops get Windows updates on schedule, as the attack window there is usually far larger.
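
For auditing which updates actually made it onto a box, the stock Get-HotFix cmdlet covers most of it; a small sketch (the KB number is a placeholder, not a real update):

    # List installed updates, newest first
    Get-HotFix | Sort-Object InstalledOn -Descending |
        Select-Object HotFixID, Description, InstalledOn

    # Check for one specific update by ID
    Get-HotFix -Id 'KB0000000' -ErrorAction SilentlyContinue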

It is feasible not to update Windows Server at all if it's configured properly and sits on a corporate LAN, for example.

Windows has monthly security updates that need to be taken seriously. This is probably more of an issue on the desktop than on servers, but it's still the reason we meet on Patch Tuesdays to discuss testing and rollouts.

PowerShell is awesome for Windows/Exchange admins and SQL Server DBAs. But as a developer/admin/analyst, I still bump into its limitations and have to turn to either C# or Python to accomplish my tasks. And PoSh is not cross-platform.
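
For what it's worth, one way to bridge that gap without leaving PowerShell is to compile a C# snippet inline with Add-Type; a minimal sketch (the TightLoop helper is made up for illustration):

    # Compile a small C# helper on the fly for the hot path
    $src = 'public static class TightLoop {
        public static long Sum(int n) {
            long total = 0;
            for (int i = 1; i <= n; i++) { total += i; }
            return total;
        }
    }'
    Add-Type -TypeDefinition $src

    [TightLoop]::Sum(1000000)    # call it like any other .NET type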

Increasingly, the problem we have with Microsoft isn't the core technology but the stupid management decisions over the last decade. For our small clients, licensing costs become an issue, and the lack of trust in Azure means that there are plenty of opportunities for OSS.

Our senior tech guys do the Patch Tuesday meeting, but a big chunk of our kit is internet-facing. If you're offline or behind a (decent, content-filtering) firewall, then the impact is potentially lower or non-existent.

Impact analysis is essential to discover whether or not the issue is serious.

Our dev team use PowerShell for all sorts of things, from data processing and cleanup to fuzzing. It's great because it both tries to preserve the Unix semantics and introduces an object model, which means it bridges the world of raw data and COM etc., which has traditionally been pretty sticky. It can be quite slow in some circumstances, though, which is my only concern (this is usually because strings are immutable on the CLR). I used Python for a bit as well, but it doesn't hit the spot for COM.
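
On the immutable-strings point: every += on a string allocates a fresh copy, so building a large string in a loop goes quadratic; a StringBuilder appends into a growable buffer instead. A small sketch:

    # Slow: each += allocates a brand-new string on the CLR
    $s = ''
    1..10000 | ForEach-Object { $s += "line $_" }

    # Much faster: StringBuilder mutates an internal buffer
    $sb = New-Object System.Text.StringBuilder
    1..10000 | ForEach-Object { [void]$sb.AppendLine("line $_") }
    $s = $sb.ToString()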

Regarding cost, the only major issue for us is SQL Server licensing, but we argued with them, waved a PostgreSQL server around, and got a hefty discount ;)

We don't use Azure at all mainly due to security concerns (we store financial data). I use it for a couple of side projects as it's cheaper than a couple of dedicated servers.

I've not encountered any major trust issues with Azure despite the whole NSA controversy. I don't get much love for OSS other than from people who want to cut costs, and they usually don't pay up.

How would you say PowerShell DSC compares to *nix tools like Puppet or Ansible?

It's equivalent to Puppet from a conceptual point of view. Puppet is better library-wise, but it's had longer to mature. Not much in it, though.

Ah, I remember the days when you could buy rock-solid desktops. What would I buy now that would last a decade? HP owns Compaq, but they are not what they used to be.

Indeed. HP's Z workstations are pretty good though.
