What’s the jankiest piece of tech you’ve seen a company depend on? (twitter.com/_brohrer_)
118 points by fortran77 on Dec 16, 2021 | hide | past | favorite | 169 comments


In the server room of a state government agency, sitting on top of a server rack, was an old, yellowing AST 128 desktop PC (Pentium 128). It sat there with its little green light glowing and no one paid much attention to it. One day, a newer employee unplugged it and put it in the excess equipment pile. Later that day, people were trying to track down a state-wide outage of the business license issuance process. They traced it back to the shared IBM mainframe, then to an RPC service, and finally back to the AST PC that had been generating the unique license numbers on the HP Non-Stop and sending them to the IBM.

To prevent such an outage from occurring again, a yellow sticky saying "don't turn off" was attached to the AST.


The entire World Wide Web was protected by a similar sticky note, for a time:

https://commons.wikimedia.org/wiki/File:First_Web_Server.jpg


To be fair, if the entire World Wide Web had gone offline then, it would have probably affected Tim Berners-Lee, Robert Cailliau, and maybe one person who was reading http://info.cern.ch/hypertext/WWW/TheProject.html


Hmm. Makes a good story, but it sounds a bit apocryphal. What kind of new employee unplugs anything in a server room without significant vetting? Perhaps they hired an idiot, but most new employees wouldn't be that aggressive.


I'm imagining that server room to be a dumpster pile in itself, and perhaps they were moving equipment around and forgot to plug the relic back in.


The new employee working under the direction of the slightly less new manager working to meet the demands of the recently appointed director to "clean out that server room, it looks unprofessional!"

Nobody knows anything, and documentation is rare.


I could imagine their manager telling them to unplug it, and that if it was still used they would know it soon enough.

That’s a way to build knowledge I guess :)


management


They tore down Chesterton's fence before they knew why it was there. oops!


Any idea why they would use this architecture?


Have you ever tried to connect an IBM mainframe to an HP Non-Stop (Tandem) mainframe? Some developer figured out a way to do it without spending millions on new mainframe software. The key piece of the kludge was a Windows interface to the IBM RPC service that both had a USB dongle and wouldn't run on anything newer than Windows 2000.

This was something she had done as a maintenance request and no one else really knew anything about it, until it failed 10 years later. The server people did ask everyone who should know about the PC, so it wasn't just a plug pull.


>In the server room of a state government agency


Nice


Oof. As someone who has never participated, every time a post like this shows up I'm reminded of how weird the twitterverse is. Such a microcosm of weirdness, the handles, the academic references (I don't care if you went to MIT for undergrad), the ego jockeying, the shared language of "almost-memes" like 'not all heroes wear capes', 'this is why we can't have nice things', and 'this is what peak performance is' etc et al. And tech bros navigate it as a game of sorts trying to embody some online peak performance version of themselves to signal "yes I'm part of the tech master race".

But! External cultural tear downs aside, I'd like to show some internal cultural self awareness:

Being mostly an academic but having done stints 'in industry' or collaborating with 'real' coders, I can say without a doubt that all the jankiest tech lives, and thrives, in academia. Scripts that were written by some professor with no coding background in the early 90s continuing to output data for analysis today. Entire mega projects written by single-minded individuals trying to be the academic version of a 10x developer, but they're creating 100x spaghetti code (I'm looking at you, RHIC/CERN). Single MATLAB scripts that are tens of thousands of lines long that take voltages and currents and output a number with an "uncertainty" that nobody has ever really bothered to externally validate or quality-control besides the grad student who never finished his thesis 10 years ago (from an optics lab I worked in).

The list goes on and on. Hilariously, the most beautiful pieces of tech/code I've ever seen have also been in academia, and they will never see the light of day. There was this ~1000 line piece of OO code that ran a Kalman filter in near-real time on all of the particles produced in gold-gold collisions (and simulations!) at RHIC that was probably the most elegant piece of work I've ever seen, written by a coffee chugging burnt out grad student that sat alone in the corner day in and day out.
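For anyone who's never peeked inside one: a Kalman filter in its simplest one-dimensional form is only a few lines. This toy sketch (made-up noise parameters, nothing like the real multi-particle tracking code) shows the predict/update loop:

```python
# Minimal 1-D Kalman filter: estimate a slowly varying value from noisy
# readings. All parameters here are illustrative, not from any real code.

def kalman_1d(measurements, q=1e-4, r=0.1 ** 2):
    """q: process noise variance, r: measurement noise variance."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q               # predict: uncertainty grows by process noise
        k = p / (p + r)      # Kalman gain: how much to trust this reading
        x += k * (z - x)     # update: pull the estimate toward the reading
        p *= (1 - k)         # uncertainty shrinks after each update
        estimates.append(x)
    return estimates

readings = [0.39, 0.50, 0.48, 0.29, 0.25, 0.32, 0.34, 0.48, 0.41, 0.45]
print(kalman_1d(readings)[-1])  # settles near the mean of the readings
```

The elegance of the production version was presumably in scaling this idea to full state vectors and covariance matrices per particle, but the core loop really is this small.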


Speaking of BNL/CERN, there is the code SixTrack, which is/was used for both RHIC and LHC for collimation studies and long-term tracking. That's the worst I've ever seen. There were a few files of "normal" length, but the main SixTrack source file was fifty-five thousand lines long and featured directives for a custom-written preprocessor throughout (since Fortran has no preprocessor, or for some other reason) that allowed different features to be enabled at compile time. And this was used to design the LHC collimation system and to check the machine's dynamic aperture. I should also add that this is recent/still the case.
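For the curious, the trick itself is simple: a custom conditional-compilation preprocessor is just a line filter over the source. A toy version (the `+if`/`+ei` directive syntax here is invented for illustration, not necessarily SixTrack's actual one) looks like:

```python
# Toy conditional-compilation preprocessor: lines between "+if FLAG" and
# "+ei" are kept only when FLAG is enabled. Directive syntax is invented.

def preprocess(source, enabled_flags):
    out, keep = [], [True]
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("+if "):
            flag = stripped.split()[1]
            keep.append(keep[-1] and flag in enabled_flags)
        elif stripped == "+ei":
            keep.pop()
        elif keep[-1]:
            out.append(line)
    return "\n".join(out)

src = """\
      PRINT *, 'ALWAYS'
+if collimation
      PRINT *, 'COLLIMATION BUILD'
+ei
"""
print(preprocess(src, {"collimation"}))
print(preprocess(src, set()))
```

Harmless enough in fifteen lines; the horror comes when dozens of undocumented flags interact across a fifty-five-thousand-line file.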

P.S. agreed about Twitter, I always wonder what these people are like in real life.


I imagined some people would crop up with experience at RHIC/CERN. I never touched the SixTrack code directly but I've heard legend. The compile-time features were a constant on all the projects I touched there and were such a hilarious relic. "The code is flexible! Just know these completely undocumented flags or spend months grokking the dense/obtuse/scattered source files and you can make the code do anything you want, all these hard coded constants will change".

As ugly as things like SixTrack were, I vaguely trust them just because there were usually a lot of eyes on them... even if they were hideous. The more obscure chunks of code that were somewhere in the analysis chain were the ones that really terrified me. I worked on a lot of the code for simulating detector upgrades (I won't get more specific than that) and that was a terrifying place. Designing entire multi-billion-dollar detector systems based on hundreds of different pieces of code and output data that were glued together with no over-arching design or integration. The code 'review' usually came down to "that output looks good"... or not...

Related to this, I now work in the atmospheric sciences, and a completely-unrelated-but-oddly-similar problem exists in the various modeling communities. So many fragmented pieces of code glued together and untouched or touched with no real computing oversight. So many wasted computing cycles. So many failure points.


RHIC/CERN seems like the perfect storm of incredibly intelligent and motivated people, coupled with time pressures and a lack of software development knowledge. Add in the fact that their "business" logic requires a PhD, and you've got an extremely limited subset of software developers who even could work on the code.

Most places, you'd figure people just wouldn't be smart enough to Rube Goldberg their way into a functional program. But someone who does high energy physics as their primary job is assuredly capable of torturing a compiler as thoroughly as they want.

F.ex. look at the issues: https://github.com/SixTrack/SixTrack/issues

> remove gamma_e from calculation of kick by elens. the gamma_e factor comes from moving from the electron frame to the lab frame. It is actually compensated by the lorentz transformation applied to the electron current density

You know you're in for fun when the docs start with "SixTrack is wonderful, but it is bloody complicated." https://twiki.cern.ch/twiki/bin/view/LHCAtHome/SixTrackDoc


God, you are making me miss my academic days.

Our department's sysadmin was the real hero. He kept people's software running for years after they left the department. We were involved in some global-level public health data, so these weren't trivial use-cases.


Ooooooooooooooooooo. The health research sphere is something I've always wondered about. I imagine in some ways it could be the absolute worst, being a terrifying nexus of privacy, academic, and health concerns.


Ha! I know a current coffee chugging grad student who is in a similar position.

They have some survey software in a game engine which was written by a previous grad student. They re-implemented textboxes using a bunch of key-capture commands and if statements, in one giant file with almost no functions.

Another program he showed me cleans survey data from excel and is one giant script.

I pray for him every day.


Coming from the other side, as a professional SWE that helps my friends in academia with reviewing/debugging their code sometimes... You're absolutely right, some of the most mindfucking code I've seen was from academia. Some are pretty decent and elegant but mostly it's a janky mess of FORTRAN libraries being called in weird ways, code that isn't written at all to be read later on, etc.


I have many stories. Here's a fun one.

I worked for a company a long time ago that traded with companies all around the world. A lot of their customers were small businesses in rural Africa, South America, etc. and they in turn had local banks they used to facilitate payments to and from.

A large number of these banks did not have a reliable internet connection, or if they did, they most assuredly didn't use it for payment instructions. Instead, I had to jerry rig a system that used TELEX -- yes, TELEX -- connected to IBM's Lotus Domino/Notes suite of mail transport software. That way we could "email" domino and it'd connect to some fantastically expensive TELEX hardware which then went over the wire on whatever the hell network they'd use for that in the 2010s.

We had carefully crafted email addresses with the TELEX numbers embedded (ever wonder why email address specs are so complicated? So people like me can do dumb shit like this!)

We'd then send them very carefully formatted instructions in uppercase ascii, to conform with (I forget which, SWIFT perhaps?) payment instructions that were clearly invented in the typewriter era and then hoisted, kicking and screaming, into the digital age with nary a change.
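To give a flavour of what those instructions looked like (field names and widths invented here; a real SWIFT MT message is quite different, this is just the fixed-width-uppercase spirit of it):

```python
# Sketch of formatting a payment instruction as fixed-width uppercase ASCII,
# typewriter-era style. Field names and widths are invented for illustration;
# a real SWIFT MT message has its own tagged format.

FIELDS = [("ref", 16), ("currency", 3), ("amount", 15), ("beneficiary", 35)]

def format_instruction(values):
    parts = []
    for name, width in FIELDS:
        text = str(values[name]).upper()
        if len(text) > width:
            raise ValueError(f"{name} exceeds {width} chars")
        parts.append(text.ljust(width))   # pad to the fixed column width
    line = "".join(parts)
    assert line.isascii()                 # TELEX could carry nothing fancier
    return line

msg = format_instruction({
    "ref": "INV-2010-0042",
    "currency": "usd",
    "amount": "1250.00",
    "beneficiary": "Acme Trading Co",
})
print(repr(msg))
```

Fixed columns meant the receiving end could parse by position alone, which is exactly what you want when the transport is a teleprinter.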

Fun times. Worked really well and tied in with all the other stuff they'd hang off of Lotus Notes. If you ever wonder why a lot of businesses struggled to move away from it? There's your answer.


> instructions that were clearly invented in the typewriter era and then hoisted, kicking and screaming, into the digital age with nary a change

Based on experience, any time I see a static number of fields of a specific type on a digital record, I assume it's because that's how many blanks there were on the preceding paper form.


It's also because COBOL and the languages of that era had fixed widths for most of the "records" they handled, for good reasons that many modern programmers are rediscovering every day...
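A sketch of what that looks like in practice; the record layout here is invented, but the slicing-by-column-offset approach is how COBOL copybook records get parsed:

```python
# Parsing a fixed-width record by column offsets, copybook-style.
# The layout below is invented for illustration.

LAYOUT = {                 # field: (start, end) column slice
    "account": (0, 10),
    "name":    (10, 30),
    "balance": (30, 39),
}

def parse_record(line):
    rec = {field: line[a:b].strip() for field, (a, b) in LAYOUT.items()}
    rec["balance"] = int(rec["balance"]) / 100   # implied two decimal places
    return rec

# Build a sample record with the same fixed widths.
line = "0000123456" + "DOE JOHN".ljust(20) + "000004250"
print(parse_record(line))
```

No delimiters, no escaping, no parser ambiguity: every field is exactly where the layout says it is, which is why those formats survived so long.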


You mean the paper form with 10 “blanks” for your address? Thereby ensuring the address would never fit in the space provided?


I mean the paper form that assumes no more than X things ever need to be in a record. I.e. ICD


Yowzers. I've not seen any TELEX equipment in over 30 years. Even then it was outdated.


A long time ago I worked as the infrastructure & systems manager of a biostatistical institution at a medical university. The institution had a large staff of highly qualified professors and researchers and others, doing cutting-edge work on huge datasets, even by today's standards. It was a really well funded research operation.

One day the phone started ringing off the hook, the mail blew up, and all of a sudden I had a large group of very upset people outside my office (no open work space there!). It seemed that almost all of them were relying on this quirky internal service for some input of sorts to what they were coding on/with, and all the computations and calculations and modelling broke without it. And apparently, it had broken. So ofc I started looking into it.

All I got was an IP address, and my system lists and IP registers showed nothing. I went on to look in an old patch panel registry, and found a reference that might be worth following up regarding where it physically might be located. The patch panel wasn't in use anymore, but I knew they had moved it more or less 1:1 to a new panel in the far corner of a basement. I got a new lead on where the patch terminated, and went there only to find an empty room. A lone network cable ran from the connection on the wall, through a hole in the back wall to the next room. The other room had no marking, and my key pass didn't work. I called the maintenance guy, who came running, and his keys didn't work either. So we took the decision to simply remove the lock by force.

Once inside, I came upon a very strange sight. Again an empty room, with only a very, VERY dusty desk and chair, with an ancient Unix machine and monitor. No one had touched that thing in many, MANY moons. It was disgusting. Someone had set it up to do its work and then left the building, without notice or documentation, and it had been forgotten. It was a very Tron: Legacy kind of moment. I had a look at it and it said that the RAID was degraded but still working, though it had somehow halted the machine. I took a chance and rebooted it, and after a while it came back online. All the researchers were happy again! I eventually took the liberty of moving all the source code off the machine and got help from a co-worker to set it up in a new Linux environment. It worked almost out of the box; my co-worker made some minor fixes to make it compile. For all I know, it's still running.

This is an old war story that I will never forget, very fun to talk about :)


Made me think of the Red Door episode in the IT Crowd

https://www.imdb.com/title/tt0609852/


Some time ago I was working as first mate on a ship ostensibly for civilian transportation, but most of the business was spent as a shipping company. The long haul freighter that was the core of the business was maybe 60-70 years old, and we rode it hard. Often it would break down in transit, and we'd have to make unscheduled stops for repair. The ship used analog computers to control propulsion and navigation, and the damn compressor was constantly on the fritz. Once we took on a ragged pair of passengers at a backwater port, an old man with a snot-nosed kid. When they got to the hanger, the boy says, "what a piece of junk," and the captain explained "she may not look like much, but she's got it where it counts, kid."


I'll weigh in outside of Twitter. Back when I worked for a large telecom, we decided to move a datacenter from a mid-west, barely staffed site out to one of our larger datacenters.

This was a year-long, several-hundred-rack problem that culminated in a hellish weekend of running cables and powering things up. It was "all hands" in IT, and it didn't matter that I worked in a capacity that kept me far away from hardware most of the time -- I had past expertise in that area, so I was called upon.

Everything went relatively smoothly until about three months later when I received a phone call from our department VP. Apparently a vendor that we billed for Cost of Access[0] had not paid us since we moved DCs; they hadn't received any bills and were happy not informing us of the problem. A long investigation resulted in the discovery that the bills were submitted electronically, by modem (this was 2008, not 1998). The modem phone line was moved and located at the new datacenter, hooked up to a USRobotics 56K modem, which faithfully dialed a number, connected at 1200 baud (?!) and submitted the bill. And we're talking a bill that's big enough to be noticed on quarterly reports.

The offending computer had been brought to my office building in the suite downstairs, and my VP said "You've got some experience with old hardware, can you just take a stab at it?" It wouldn't boot; the CMOS battery was bad. I noticed three numeric values printed on a label affixed to the case and realized, immediately, that these were IDE (ATA) drive geometry settings. So the CMOS battery failed a long time ago, and the engineer solved it by printing the numbers on the front of the case.

And what a PC it was ... an IBM NetVista circa somewhere near the year 2000. Running PC-DOS. It sat in a rack on a shelf surrounded by servers, completely ignored since it was hooked up to UPS, not accessible to the corporate network, and was never patched or rebooted[1]. It was an accidental somewhere-near-3/4-million-dollar-desktop-server.

[0] Cost of access, for the kind of telecom that we were, was, I think, the largest non-staff expense the company had, but it also worked the other way -- we billed people CoA for our network.

[1] The device could not be accessed on our network except from one other host, via one port; it was as air-gapped as it could be and other than the few minutes spent "sending the bill" each month, it didn't carry any data of value.


Nice! Somewhat disappointed that the solution wasn't dialing with the correct area code instead of local (or somehow acquiring the local line matching that number and setting up call forwarding), but recognizing drive geometry settings is even more awesome!


> by modem (this was 2008, not 1998)

I'm mildly amused you consider modems obsolete.

Here in the OT world, where systems need to be air-gapped [1], modems are absolutely used to provide separate, secure, non-IP links. They are not going away any time soon.

[1] Example, the dial-up time servers provided by the NPL for offline systems https://www.npl.co.uk/getattachment/products-and-services/Ti...


I should have clarified -- the modem was really obsolete for the purposes it was being used.

The carrier involved was one that everyone has heard of. All of our other carrier billing was handled using a variety of authenticated links/services to the carrier. And this particular carrier had other, better, ways of doing it, too. For whatever reason, we had never altered that process. It was permanently fixed a few months after we got that host working again.


I too have a need to keep old computers online and it's frustrating when the CMOS battery dies and all those super important motherboard settings that make it boot from a hard drive in the first place are lost, over and over on every reboot.

Not everyone knows that old computers do not necessarily boot from the hard drive by default.


Why not just replace the CMOS battery so you only have to do it once every 4th blue moon vs every time the thing reboots?


You can even replace most CMOS batteries on a running machine without causing any issues. The power from the system will keep the BIOS settings intact, and it's a one-minute job for many servers: slide out the rack, pop the top, thin Phillips screwdriver to the CMOS battery to pop the old one out, pop the new one in, and button her up.

Heck, you can buy 2 CMOS batteries for a buck at the Dollar Tree. They'll last a year or more with usage that light.


Assuming the battery is not soldered in.


This model (and every other that I've seen, though, admittedly it's been a while since I've looked at a typical rack mount server) had an easily removed battery.


In machines of this age, I don't think anyone would have considered soldering something in like that. That didn't happen until halfway through the 2000s.


Not exactly a CMOS chip, but the Dallas Semiconductor DS1287[1] real-time clock chip had a built-in battery that was notorious for failing.

[1] https://www.classic-computers.org.nz/blog/2009-10-10-renovat...


I'm not sure what that engineer's reason was specifically, but I'll chalk it up to laziness.

The CMOS batteries, themselves, have a shelf life and for whatever reason, we probably had none of the specific size required on hand. Being that the device was on battery backup in a rack, it probably led to the engineer thinking "it'll never be rebooted, a label will do"


Not all batteries on 486's are replaceable and serviceable.

Yes, we've soldered airplane batteries to 486's to keep them running longer


I've seen some 1980s shit running THIS YEAR.

I'm working on porting a FoxPro database from 1988 that's running an active business, which I won't talk about much before it's done, and I've actually encountered in Costa Rica an auto parts shop in one of the sketchiest parts of the capital city with a green-on-black phosphor screen running what looked like dBase III on old IBM PCs.

It's pretty crazy what's out there still. I think the one everyone here is familiar with but might not know is really ancient is the travel booking systems for your plane tickets and accommodations, dating back to the 1960s:

Karsten Nohl - Where in the World Is Carmen Sandiego? (33c3) [1]

[1] https://www.youtube.com/watch?v=vjRkpQever4


>I'm working on porting a FoxPro database from 1988 that's running an active business, which I won't talk about much before it's done, and I've actually encountered in Costa Rica an auto parts shop in one of the sketchiest parts of the capital city with a green-on-black phosphor screen running what looked like dBase III on old IBM PCs.

This is amazing value for money. A simple system for a small business working reliably for 30-40 years. In our world of tool churn (often seemingly for its own sake), it's unfathomable. But this is often what the customer really wants from us, they're just afraid to ask out loud for fear of being laughed at.


I work in software for manufacturing. We were doing a customer panel and someone remarked they weren’t in the market because they “just upgraded their software”. Follow up question revealed that happened 8 or 9 years ago, so you’re pretty spot on.


Sometimes they're good value, other times they're roughly equivalent to using paper.


Sometimes paper is great value too.

I'm not saying there aren't horror shows out there, but technologists drink far too much of their own Kool-Aid, as a class.


Yeah, but that’s still the case with modern computers.

Marketing: “With this app, you can now buy pizza on your phone!”

Snark: *slaps a yellow pages on the desk along with a rotary dial landline*


A buddy of mine works for a company in a similar situation that rebuilds transmissions for industrial equipment. They were not allowed to upgrade their computers because the one piece of software that makes the whole company "work" is some ancient dBase system that is accessed through DOS. The bids they received to "modernize" it were astronomical, mainly due to nobody being around who actually understood the business logic well enough to rebuild it.

I asked what happens if the PSU/HDD/etc of the machine running it dies? I was told that everyone retires. It was an actual conversation that had been given serious thought. At that time, the company was 30+ years old, privately owned and no employee had been there less than 15 years.


Sounds like something that could be exported to a virtual machine image and hosted with remote access, unless it runs some proprietary hardware with a PCI interface or something.

No need to hang your whole company on a single box's longevity.


Speaking of booking systems, here's an interesting video about a campground in Germany which has been using an Atari ST with custom booking software every day since 1985:

https://www.youtube.com/watch?v=6LxPEz9x2fs


Old isn't janky... hmm... to me janky means duct tape and bubble gum.


This basically only runs on A: old hardware + operating system (a no-go; they need to do other workstation tasks at that company), or B: a Windows 7 virtual machine with DOS compatibility set up. Anything else doesn't work, and the Windows 7 validation servers were recently taken down, leaving things in a precarious state of support. That's basically what I would call held together by duct tape at this point.


Kind of like Docker, you mean?


Someone: Operating systems are too complicated.

Docker: rebuilds virtualized operating system


Oh yeah, man, FoxPro... for a year or so in 2008 I was responsible for a FoxPro database of music venues and artists for an agency. Felt really interesting to maintain a system that's older than me.

When we modernized all desktops in the office, I set up Win2k in Qemu on each of them, which loaded the FoxPro thing from a shared network mount. I'm just realizing 19-year-old me never checked if FoxPro supported simultaneous access. It surely did, at least I hope so :-)

edit: However, 19-year-old me was smart enough NOT to touch that Solaris or whatever-it-was server hosting the FoxPro thing.


I'm working right this second on an application that originated in the mid-late 1980s in dBase II, then moved to FoxBase, then FoxPro and is currently Visual Foxpro. (and I've been developing/supporting this all along, since 1988) Might finally get phased out in 2022... maybe. It does what it does very, very well.


re: FoxPro - It's Visual FoxPro for Windows, which makes it a lot less ancient than 1988, but a significant number of voter registrations in Ohio are handled by a piece of software written in Visual FoxPro.

Amusingly the vendor had a mandate to add "2FA" to the application last year. They added TOTP to the "login". All the data is stored in DBF files that all users have read/write access to anyway (because the database engine runs in the user's context on the client PC-- the server is just a dumb disk drive).


AdImpression.php is a multi thousand line file in Facebook’s monorepo that supports billions of dollars in revenue. It has the best/worst ratio (depending on your perspective) of code quality/readability to revenue generated that I’ve ever seen.

What’s impressive is how many attempts, both successful and less successful, have been made over the years to improve it. Refactoring this file isn’t easy because any change could potentially lead to millions being lost. But people keep at it.

It actually looks pretty decent nowadays compared to a few years ago. I believe it has type annotations now but I haven’t seen it in a long time.


Fortune 1000 company: our mail transfer agent for sending out certain mass mailings consisted of four Macintoshes. Worse, they were never upgraded to Mac OS X, because they were so legacy they couldn't be. They were also not officially supported by either desktop support or server support, although server support had to get involved when our IP address space was reassigned (they got their IP addresses pre-NAT).

At a Fortune 100 company I had to spend a day debugging a print job sent from an IBM mainframe in New York City to a Unix print server, out to a printer in Mumbai.

At one VC-funded startup, the first server the company ever bought had no backup system and could not be turned off. We had to rearrange the server room once, and someone had to hold the server in the air, due to the way it was plugged in, as the room was rearranged. It took years to get the budget to upgrade it, and the day the new server came in it was hijacked by the analytics group.


> It took years to get the budget to upgrade it, and the day the new server came in it was hijacked by the analytics group.

And for better or for worse this is why teams love using the cloud. It goes a long way toward bypassing “old school” IT departments :-)


One of my customers is a world-famous cultural institution. Around 2007, IIRC, they asked me if I could devise a way to back up their payroll system, because I was their go-to Unix/Linux expert and "it runs on some Unix thing".

So I went with the CIO to the HR office to check the "thing". The setup was as follows: between the two office windows there was a small table, and on it a Sun SS20 workstation of 1993 vintage, connected to a bunch of 1 or 2 GB SCSI drives, all covered with a 5 cm hairy blanket of dust. A tangled nest of serial cables (yep, the RS232 style) ran from a serial expansion MBUS card to all of the HR people's PCs in the room (through USB adapters). They were connecting to the thing using good old HyperTerminal...

The machine also had a modem that was used, eons ago, when it was under some form of support from the supplier of the solution, to monitor the setup. Of course nobody had done any update nor any system maintenance since 1998, so it was still running Solaris 2.x.

Of course I didn't have the root password. The thing was running a database (Informix or Oracle, not sure) that I planned to backup to a Linux server through NFS (keep it simple). So I basically had to write a shell script using a syntax compatible with this antique kit to run through cron... Fortunately they had a spare SS20 that I used for this purpose!

The pay of several hundred people (maybe a thousand) was relying upon this system... several million € of pay checks every month!


I worked at a well known UK newspaper about 6-7 years ago. I found out that the daily print newspaper was edited in QuarkXPress without a CMS. After publishing the newspaper each night, a team of night workers would come in and copy-paste articles into the tablet edition CMS before morning (for iPad & Android tablet subscribers). Entirely by hand. The main website was on a totally separate CMS, so the next day, news articles were copy-pasted into that CMS by an offshore team in India. Also by hand. There was also an archive team whose job it was to copy-paste everything into some other separate archival system. By hand.


This is janky. Most of this thread describes OLD tech, but this is janky tech.


If you haven't worked at a place that entirely runs from a giant Excel file created by Janice who worked there for 2 years back in the 90s, then you just haven't lived.


That spreadsheet where Janice records the Mercedes Benz, the colour TV, and a night on the town?


Good one, he said wistfully, realizing exactly how old he was.


The original original DigitalOcean was a mechanical turk of Perl and bash scripts with cron jobs and a giant MySQL DB. It would randomly shout things like "RUN THE ARIN REPORTS". (Not a slight, btw; it was janky, sure, but it worked..) :)


> mechanical turk

I don't think that means what you think it means.

(a fake machine, operated by human)


I know what the phrase means and chose to use it. :) His name is Jeff Carr, and I very much doubt he or anyone else from then would object too much to me calling him the mechanical turk of DigitalOcean. The guy almost literally didn't sleep for 2/3 years; probably the most unique, loving and amazing human I've ever met, ever.


Microsoft Equation Editor 3.0. An add-in for MS Word that had been there since 1995 and was finally removed in 2018 due to a security problem. Yet it's still everywhere in work related to high school maths.

And there aren’t better replacements. I could type in the old equation editor blindfolded. Every keystroke has predictable behaviour. Ctrl-F for fractions, Ctrl-R for square roots. No other equation editors offered that. The new equation editor is not the same, I need to press so many unnecessary arrow keys just to get what I could have done before in half of the time.


I seem to remember Microsoft in the past also manually patched the binary for it because they probably lost the source code to it. [1]

[1] https://blog.0patch.com/2017/11/did-microsoft-just-manually-...


Not sure if this counts, but I worked at a small casino and the servers managing the floor could barely function. The payment system was a mess too, not just due to technical issues but also "territorial" ones.

The servers were IBM eServer machines, old and out of date when they were purchased; the warranty had been extended multiple times. There was a Debian cluster for slot machine control, and a database server that was also the domain controller, running a Windows 2000 cluster.

The clusters always failed, and auto-failover failed too. So I, not officially IT, would have to manually fail over the cluster or the casino couldn't operate. There was also a Java app called "process initialization" which also failed, and I had to restart that manually too.

I was also the only person in the building/province/400 km region who knew the admin passwords for the systems. I still recall them.

Add to that, the debit machines would lose connection, and at lunchtime for a 200-person buffet the debit machines would be offline. Sometimes a reboot of the DSL modem fixed it, but most often it was a trip to the third-floor ventilation room/cross-connect. The five-port switch had six cables, and people would pull one out and put in the cable for their own system. That's the territorial part: "my department is more important than yours", so cables got yanked out with no warning.

As a system it was terrible, and a multi-million-dollar business relied on it, and relied on me as a non-IT person to maintain it. It wasn't anywhere near my job description and I got nothing extra for it, but management demanded I fix it. They ended up laying me off suddenly one day, two hours into my shift, after 13 years, no explanation.


>They ended up laying me off suddenly one day two hours into my shift after 13 years no explanation.

This is really the key thing we all need to remember... it's good times right now, but it may not always be good times. So it's important to remember that regardless of how important our jobs might be in the moment, when the corporate winds change they won't even blink to can us. The first time I got laid off, I was in the middle of a production fix affecting the homepage of one of the busiest sites on the internet. Didn't matter for shit, because my number, along with that of thousands of others, got called.


You might enjoy this Andy Weir short story:

https://www.goodreads.com/book/show/49661162-randomize


You did that job for 13 years? Looking back, what would you have done different?


In a way I did the job for 32 years, but 13 of it with that company. I started out at age 14 fixing grey-area poker machines, pinball, and arcade games. My first day, I drove a manual transmission truck, at age 14, to a bootlegger's to empty an illegal slot machine, and I was handed a beer. I was self-taught in computers, mostly Windows and Linux, but never took the leap into IT in the mid-1990s in my 20s.

It was a no-win situation: if I had refused to do what was really not my job, I'm sure they would have found a way to fire me. I was being paid an OK wage, double minimum wage. I did enjoy the technical work; I saw it as experience in IT, not just playing with simulators or reading books, but running actual systems.

It's hard to say what I should have done differently, but certainly I should have taken night or part-time classes in IT. After being laid off I ended up doing that at a technical college which was only 500 yards away. I got $13K severance, and the government paid for one year of my classes plus unemployment. I pretty much had a two-year vacation to study computers. After I was gone, I heard the place emptied of many long-term staff; I'd like to think I was the spark that did that.


Anything that doesn't involve the red stapler guy is not enough


A Pentium 4 laptop hooked up to a Makita battery and a couple of PCMCIA serial ports, a few of which were connected to 'thin client' serial terminals (Axel branded?) and a variety of chemical sampling/testing machines connected to the rest, including the parallel port for a results printer (with tractor paper).


My sister was called to do some work on code she'd written in the 1980s. She found her original change log notes in some of the files!


I worked for a job search company that bought all of their listings from a third party. Every day, they would get an XML feed of active job listings that was a few GB in size, from what I remember. To update their database, they had a program (written in C++ for speed!) that would go through each of today's postings and scan the entire old list to see if it was new, then write update instructions to a SQL script that would be executed at the end of the run. (According to a senior developer there, they didn't just interact with the database directly because the database couldn't handle updating large amounts of data (???))

It would take about 8 hours every day for the program to run. So for 8 hours of every day, the company would be returning stale listings. I rewrote it in PHP and brought the time down to ~20 minutes, which could have still been significantly improved. But I didn't stick around for too long after that.
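The speedup isn't surprising once the diff is keyed by listing ID instead of rescanning the old list for every posting. A hedged sketch of that idea (the function name and the {id: listing} shape are my own illustration, not the company's actual format):

```python
def diff_listings(old, new):
    """Compare two snapshots of listings, each a dict of {listing_id: data}.

    Dict lookups are O(1), so the whole diff is linear in the number of
    listings, versus rescanning the entire old list for every new posting.
    """
    inserts = [lid for lid in new if lid not in old]
    deletes = [lid for lid in old if lid not in new]
    updates = [lid for lid in new if lid in old and new[lid] != old[lid]]
    return inserts, updates, deletes

# Example: one new posting (4), one removed (1), one changed (3).
old = {1: "dev", 2: "ops", 3: "qa"}
new = {2: "ops", 3: "sre", 4: "pm"}
print(diff_listings(old, new))  # ([4], [3], [1])
```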


The physics department where my girlfriend did her Master's thesis had some really big issues with "technology management". A big optical setup used for measurements every week worked thanks to a yellowing 90s MS-DOS PC which had been programmed by a now-retired professor to automate the calibration of some lenses. The script running on this old PC had no backup, wasn't understood by anyone else in the department, and there was no documentation for it. Moreover, the PC and the script had no way of interacting with a network (as new lab automation equipment does), so there was no way to interact with the setup from other rooms, let alone from home. All this because the professor was too proud of his programming back then to even think it necessary to migrate to something more maintainable.


1999. A very small hosting company. We relied on a guy who ran our three servers out of his apartment in San Jose, CA. They were just mini towers. During the time of the rolling blackouts in CA, he illegally ran a generator on his balcony to keep things online. He also apparently had multiple window air conditioners running to keep his apartment cool. We weren't the only servers he was running!

Then there were the awful scripts he wrote. I didn't know Linux at the time, but I remember there was a bash script for removing accounts from a server. You'd run /bin/removeaccount.sh $username. But woe betide you if you forgot to give it an argument. The script was simply

   #!/bin/sh
   userdel $1
   rm -rf /home/$1
Right. So if you forgot the username, it runs rm -rf /home/. That was fun.
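A couple of guard lines would have defused it. A minimal sketch of a safer variant, assuming the same userdel/home layout as the original (wrapped in a function here; not the actual script):

```shell
#!/bin/sh
# Safer variant: validate the argument before anything destructive runs.
remove_account() {
    if [ "$#" -ne 1 ] || [ -z "$1" ]; then
        echo "usage: removeaccount.sh <username>" >&2
        return 1
    fi
    # ${1:?} aborts the expansion if the argument is somehow empty,
    # so rm can never be handed a bare /home/.
    userdel "$1" && rm -rf "/home/${1:?}"
}
```

With no argument (or an empty one) this prints a usage message and stops; the original happily ran `rm -rf /home/`.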


The Command and Control system for the Metropolitan Police (London, UK) is the baggage handling system from Heathrow Airport in the 1970s, with 'Luggage' renamed to 'Police Car' (or something similar).


Not a company per se, but last week, after going through the Terminal 2 security checkpoint at San Diego Airport, I passed a shelf holding a Windows laptop - probably 7-8 years old - open, running an application, with a sign taped to the shelf reading 'DO NOT CLOSE OR UNPLUG THIS LAPTOP!!!'.

I really, really, really wanted to close it to see what happened but I didn't want to spend the holidays in jail so I just chuckled and moved on.


I'm not proud to say that I was technical lead on a project that ran a customer's business for about 6 months from an old laptop under the dev's desk. This happened last year!

In our defense, it was because the server we wanted to move the code to was under control of another of the client's vendors and they were dragging their heels about giving us access. But still!


I would have said Excel, but that would be a lie. You can do a lot of stuff with Excel when done right. So I'd say the following:

Convoluted Excel sheets without documentation doing things the ERP system that is in place is supposed to do. Especially ones that are not feeding data back to said ERP.


The next financial apocalypse will definitely be caused by an errant VLookup without the appropriate ISERROR around it


Ha! I see you got the shirt as well!


Back in the early 2000s I was the networking lead for a start-up managed services company. We made heavy use of Extreme Networks switches at the time. Though our experience with the hardware was largely positive, we did have a batch of rack switches that were consistently flakey, so we took them out of service and stacked them in a closet awaiting RMA. As we had the unfortunate timing of starting business shortly before the dot-com implosion, the company didn't stay afloat for long and I jumped ship.

Fast forward 18 months and I found myself working as a systems engineer for a pharmacy benefits provider. I was not part of the networking team, but the networking lead approached me to help troubleshoot some pernicious network stability issues with their POS systems. They were also an Extreme Networks shop, so they figured I might have some insight. The networking lead tried to set me up with access to the switches and was confused when I appeared to already have an account provisioned. Once I connected and began looking at the switch logs, I had simultaneous deja vu and "are you kidding me???" reactions. These were the very same switches I had personally configured and later scrapped during my previous gig. Apparently my new employer had bought them during a fire sale and put them into production with little to no configuration, and they were just insanely lucky that their IP ranges and VLANs overlapped what I had previously used.

I didn't stay very long at this company. Thankfully the haunted switches didn't follow me to my next job.


About five years ago, I was assigned the task of writing a script that would automagically pull data out of Salesforce once a week and create mailing labels for catalogs. Seems simple enough.

Except that the catalogs were in a warehouse a few miles away from the company office. Inside the warehouse were ten label printers, each hooked up to a different computer.

My script had to organize the mailing label data into ten groups, based on USPS bulk mailing criteria, then dial each computer in the warehouse and send the right labels to the right computer, as each computer was assigned a different bulk mailing region, again based on USPS rules.

The people in the warehouse would pull each label off of the printer, slap it on a catalog, and drop the catalog into one of the bins below the corresponding printer so they could be taken to the regional bulk mailing facility all pre-sorted.

The warehouse computers couldn't be hooked up to the internet because the cable company wouldn't wire that side of the street. It did the other side because it was on a block with homes, but didn't see the cost benefit of doing the side with a single warehouse. A one-off line from the cableco was cost-prohibitive. I was told that before I joined the company, they tried DSL, but it was unreliable.

It's surprising how a technology we considered so fragile back in the 70's and 80's is sometimes the only thing that works today.


Web sites running on Mainframe, OS390. If any line of code/HTML is longer than 80 chars, it only reads the first line then stops. Because punch cards only took 80 chars.


Fortran had a 72-column limit on punch cards (the last 8 columns were used to number the cards in the deck). Even when we got VT100s there was some caution about going past column 72 (ISTR some of the VDUs would have a marker at column 72, and possibly the compiler gave a warning too).


That is amazing.

I love quirky old tech-debt that people still work around. It means there's either a sysadmin that is super into it and knows everything from inside-out, or they're a sysadmin that's pulled all their hair out but damned if they don't keep it running!


I want to visit this site and marvel at this achievement.


A large insurance company as of X years ago implemented iteratively reweighted least squares in pure SQL running against the claim database to fit their risk models. Honestly their implementation was impressive, in a WTF way, but doing literally thousands of queries and then manual adjustments of coefficients meant it took days to fit models that would have taken seconds to download the relevant data and train a model with actual statistics software.
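For readers unfamiliar with the algorithm: IRLS is just a loop of weighted least-squares solves, which is why proper numerical software does it in seconds. A hedged sketch for logistic regression in NumPy (the insurer likely fit other GLM families; this is my illustration, not their code):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit logistic-regression coefficients by iteratively reweighted
    least squares. Each pass solves one weighted least-squares problem,
    the step the SQL version spelled out as thousands of queries."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # predicted probabilities
        w = np.clip(mu * (1.0 - mu), 1e-9, None)   # IRLS weights
        z = eta + (y - mu) / w                     # working response
        XtW = X.T * w                              # X^T W without forming W
        beta = np.linalg.solve(XtW @ X, XtW @ z)   # weighted LS solve
    return beta
```

Even on hundreds of thousands of rows this kind of loop finishes in seconds on a laptop, which is the gap the commenter is pointing at.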


I worked for a health insurance claims benefit management software company whose flagship was written in a language called DB/C (random bonus, the two co-owners of the company felt that two spaces for a tab stop was too small, four too large, so had standardized on a three space tab).

DB/C (or Databus) is a COBOL-esque language (https://en.wikipedia.org/wiki/Programming_Language_for_Busin...).

Over time, TCP/IP, sockets support had been bolted on, with varying degrees of rigidity to spec (and a whole ton of unimplemented "we don't need THAT" stuff).

But the prize for eye-bleeding went to the co-owner with a megalomaniacal, masochistic NIH obsession. He could be heard, and I quote word for word, saying things like "People are idiots, I don't know why they would write [software X] that way. I can do it in a weekend" (and he would do "it", too, in 60-hour benders. But not well. And with the whole pile of arbitrary bullshit you'd expect).

He wrote a fully functional XSL parser and interpreter. In a COBOL-esque language.

And while functional, oh boy, you've probably seen few things uglier in your life.


> (random bonus, the two co-owners of the company felt that two spaces for a tab stop was too small, four too large, so had standardized on a three space tab)

That's how I indent my code. Not just because I think it looks best, but as a small rebellion against the whole tech industry's over-fixation on powers of two.


Your insurance company used SQL? Luxury! My insurance company had everything in a collection of inter-linked Excel spreadsheets on a shared file server, millions of dollars a day being moved around with VBA macros :P

(Though for the specific use-case of statistical analysis, perhaps excel is actually the lesser evil...)


I worked at a shop that developed its own custom, in-house templating engine implemented as an Apache extension. In the 90s that was rather common, before everybody settled on PHP or one of the JavaScript frameworks. That's not the bad bit.

The bad bit was, the one person in the company who knew how to compile the extension had left years ago, so it was only available by copying a magic binary .so. It only ran on SPARC Solaris. While it had grown an embedded Python interpreter for more modern (for the time) development, the latest Python version it could run was 2.1, meaning a lot of Python 2.x features I was used to were missing.

This isn't really a janky dependency but... I also did consulting development with a friend for a large lab equipment company. It was impressed upon me what a really super important client this was, as they were one of a few companies licensed in the US to handle cocaine. So you'd think their security would be through the roof. When I got to their offices, I found that every staff member's desktop PC had Bonzi Buddy and several other pieces of malware cruft on it.


I used to work at a regional telecom company, and one of the jobs I did there involved keeping part of the order processing system alive. It consisted of something on the order of 100 PCs running OS/2 (this was in the mid-90s), with 3270 emulators that would connect to the mainframes, scrape information on the order forms, and plug them into forms on Unix boxes for another part of the order process. And the reverse.

But, this software was notoriously unreliable. There were two of us trying to keep this system moving, and we couldn't go to lunch together because we couldn't step away from the system for more than half an hour without getting paged.

We had one person at the data center where these systems lived that would walk the aisles of machines, looking at the screens, and trying to decide if the PC was hung and needed rebooting. I wrote some code that would watch logfiles these machines generated and would alert if the log for a specific system hadn't updated in ~30 minutes, and we could then ask the person on the floor to check on it. But, some orders could take an hour to process, and weren't really hung. So the person would have to go eyeball the screen and see if it looked like it was working or not.

This was developed by the same team that spent a year developing a tool called "the blue GUI", and I went with them out to the service center that was going to be using it so they could show it off. If memory serves me, it was a giant window with a shitton of input fields laid out in columns and rows. How this was different from a spreadsheet, I really never understood, but a team of 10 had been working on this for a year.

The lead developer I went with was sour on the 4 hour drive back, because the management of the service center declared that it was totally useless.


A small industrial automation shop, 40 people total, 5 of them software people. The "IT department" was some kid who hadn't worked in IT before. He spent most of his time learning how to manage Exchange by messing around with our live, locally hosted Exchange server.

Their controls were written in C. The development process was you checked a computer out of the warehouse, installed Windows, then some janky realtime OS addon, visual studio, and MySQL. You wrote the control software on the only computer it would ever run on. To keep it on the up and up, they literally bought a copy of VS for every system they sold. They even paid for MySQL licenses (long before Sun or Oracle was involved).

The version control was a homebrew system that you would upload zip files to. It would timestamp them and put them on the only hard drive in the company that was backed up. The backups were a couple of cheap portable hard drives. Once a week the IT guy would swap which one was plugged in and take the other one home for offsite backups.


> The backups were a couple of cheap portable hard drives.

Which places them light-years ahead of many operations.


> To keep it on the up and up, they literally bought a copy of VS for every system they sold

Their dedication to licensing is admirable, but I wonder if this was because they didn't know which DLLs were redistributable and which weren't?


Good old sneaker net backups


Honestly, this sounds like the epitome of "If it looks stupid, but it works, it isn't stupid".

Custom IT system, planned out so that any scriptmonkey with enough bananas could figure it out, with robust and redundant backups? That's a dream. Any MSP worth their salt would salivate over a contract with that setup.


Oracle. I'm not joking. I've seen actual companies use this stuff as a database server.


Seriously, it takes Oracle thirty minutes to create a new database. Stuff that Postgres or MariaDB can do in seconds.

Also, why does the database have 5 GB of dependencies? Postgres and MariaDB are both < 100 MB (maybe 10 MB) downloads from the repos.


I like how the arguments used to import the database need to match the arguments used to export it. I can recall two separate occasions when we received a database dump from a client site that took us two weeks to figure out how to import successfully. The second time, we wound up writing a series of bash scripts that ran the strings command on the dump file and used grep/sed/awk to pull out enough information to reconstruct the arguments needed to import it. We took it that far because the other two offices that worked on the product had DBAs and ours didn't; their process for replicating bugs customers reported was to pester the DBA to import a DB, which would take several days, while ours was to import it instantly via our magic script. Fun times.


Back in the late 90's, one of the largest financial services firms in the US was still running its order-routing network on PDP-11s. One reason was that the code relied on the physical layout of the RK05 disks. They had people around the country scouring salvage auctions for spare parts that were no longer being manufactured.


My boss at the gear company had software he used on his computer to do setups for bevel gears, which his dad had written in about 1984. He kept an old 32-bit computer around because it could still run TK!Solver version 1.0 and thus those scripts (actually, a set of equations).

I virtualized it all in DOSbox, and set the printer output to spool to a real file, which then got printed if it wasn't blank, once the DOSBOX exited.

Fairly janky, but future proof for the most part.


Pretty much my first on-site service job: the company I was working for made motion-control systems for animation, controlled by a PDP-11. The customer was an animation studio, so dead in the water without the system working. Problem was it wasn't booting, so I had to go on site, open the side of the cabinet and reposition the goose-neck lamp used as the light source for the paper-tape reader. Problem solved, happy customer.


Not quite “running a company”, but a memory of bodging things together…

When I was in college, I ended up working for college IT. Fun job. A requirement came from somewhere that we needed to allow students to edit web pages (personal home pages) using the Windows lab PCs, with Microsoft Frontpage. Students had a home drive on H and a web drive on W (I think, it’s over 2 decades ago). Thing is, those weren’t on NT or a box running SAMBA, they were on a Netware server.

I cobbled together the combination of a Linux “server” (read spare PC, probably Pentium 60/90 era) running Apache, with the Frontpage extensions for Apache. So far so good.

Then I got an account set up in NDS for access to the data volume, and made IPX/SPX work on the Linux box. Turns out the perms were a bit wrong, and I could write files to SYSVOL, but that was easily fixed.

A bit of config magic in Apache, and student web pages could be edited in Frontpage, saved to Netware, read from Linux, and all the Frontpage bits we cared about actually worked. Faculty teaching web page design/building were very happy.

Also the era of Netscape Enterprise Server with roaming profiles on a RS/6000 box, but that was easy.


Same place where we built a Student Information Service using REXX and whatever monstrosity IBM provided that let web pages render green screen entry forms. So you write stuff in RPG, add some REXX for data manipulation, and it spat out web pages. Crazy, but functional.


When I worked at BBC online in 1999, I had the job of encoding radio and TV clips for the website using the free version of RealPlayer.


I remember it was nearly impossible to download the free version of RealPlayer back in the late 90s. They'd do everything they could to hide the link or obscure its location in an attempt to upsell you. Funny that the beeb would rely on the free one, too!


Today, many many things in online video related "things" rely on code released by the beeb, so at least they gave back!


Speaking of janky tech... the installer code for RealPlayer was "special"... DAMHIK.


A business critical application that was continuously pulling data from Novell file shares, DCOM remote controlling Excel and really long .BAT files to manipulate it, and FTPing it to other places with some abandonware FTP app. All on 3 shitty old PCs with "don't touch me" post-it notes on the front and hard monitor burn-in. In a break room next to a copier.


I remember the first time I saw a BAT file that had a conditional in it. Mother of god. I needed eye bleach.

If your BAT file has a conditional, you are using the wrong tool. BAT files should be completely top to bottom with no branching. Anything else is madness.


I do have to give MS credit for PowerShell. There are things I don't like, but it's clearly well thought out and purpose made to fill a gap that existed for a long time.


Pretty tame compared to some stories out there, but I recently worked for a company that had the most amazing process for importing customer data when customers migrated to our SaaS from a competitor.

First the data dump provided by the customer is run through a VB6 conversion program that imports the data to an Access97 database. Then the Access db gets opened using a legacy VB6 version of our product from roughly the year 2000 where they verify that everything was converted and imported correctly. Then they run the Access database through a second VB6 conversion program to migrate it into SQL Server so it could be used with the current SaaS product.

The source for the legacy product was at least in source control, but only one engineer still working for the company had ever touched that application at all. From what I was told the source for both of the conversion programs was lost years before I joined. The company's ability to import data for new customers was 100% dependent on this rickety pipeline.


Most of the answers are about software. Here's one about hardware, from my personal files.

In one of my early businesses we manufactured these small boards with about 50 through-hole components. Think mid-80's range.

I didn't have the money to buy any real manufacturing equipment at all. This was all self-funded, which, at that time in my life, meant thousands of dollars rather than hundreds of thousands or more.

I bought a really old Bridgeport CNC machine for $1800. It worked. The transmission sounded like it had rocks in it, but it worked. I used this machine to make a lot of our fixtures and tools. The prior owner had replaced the control system with one that took DXF files and cut one depth layer at a time. I had to design the toolpaths using AutoCAD rather than any tool that would output G-code.

I remember at least three things we made.

The first was a fixture that consisted of two rails with a groove cut into them so you could place about 25 boards into the fixture and stuff the components. No budget for metal, so it was made out of pine and CNC machined. This allowed the component leads to go through without touching the table.

The second was a set of component benders, also made out of wood, so you could bend the various component leads as required with reasonable repeatability. The jig to bend MOSFET leads to 90 degrees was this contraption with a hinge, a bunch of springs from Home Depot and a pocket for the transistor to fit into. You'd push down on this lever and it would bend the leads to 90 degrees. I wish I had pictures. We bent thousands of MOSFETs that way.

Soldering. Boy, was this a horrific thing. Again, no money for a real soldering setup. What we did instead was build a preheating box with some stuff I got from a surplus store. We would then dip the boards (just the bottom) into flux. After that we'd dip them (again, just the bottom) into a molten solder bath. The pot was about 4 x 4 inches, just enough to hold enough solder and maintain temperature. From there they went back onto the wooden frames for slow cooling. Talk about janky!

Finally, the through-hole leads had to be cut. This was not something you wanted to do by hand. We did, for a batch or two, and then decided that was enough.

We made a jig (again, wood, this time I believe it was oak) to hold about 50 boards upside-down on the CNC mill. We then got a small circular thin blade (not an end-mill, that does not work at all as we discovered) and programmed a slow lead cutoff cycle. Think of it like a cutoff wheel from a Dremel tool but without the screw sticking out of the bottom. We could cut the leads to within one or two mm from the board with reasonable consistency.

There's more. Everything in that garage was janky as can be. We eventually made enough money to move out of the garage, buy some SMT equipment, redo the designs in SMT, and hire a few more engineering students to help. Crazy as can be. Hard as can be. Fun too.


Ingenious; This is Creativity!


Back in the early 2000s, when Avanade was started as a joint venture between Microsoft and Accenture, I was helping them get off the ground. They had a rented Regus office in downtown Seattle with bunches of trestle tables and people just doing all kinds of crazy things trying to bootstrap the company. I was tired of crappy wireless and brought in a cheap 4-port 10/100 Netgear switch, some consumer-grade piece of something that was just lying around. After a few days, someone commandeered it and a couple of my yellow cables, and for about a month it ran their public web site from the server in the corner room, with big yellow stickies saying not to unplug the yellow cables. It was probably the most action this poor little switch ever had.


My dad used to work at a cable company back in the day. One of his on-call jobs was to ensure the satellite dishes were clear of snow; this was accomplished with a broom duct-taped to a 30' pole. It didn't take much snow to kill the signal, so this was a critical tool.


When I was at Siemens, who had an astounding multi-national PKI on smart cards that integrated with everything, we had to use some software from Hitachi to change our passwords. It rarely worked IME.

Hitachi is a name that I think of for power tools and heavy equipment, but not for software.


Hitachi-san is a huge conglomerate.

SEGA built a lot of hardware around Hitachi's SH CPUs. First time I visited SEGA I was surprised to see that they also had...Hitachi elevators in the building. I noted that I was giving my talk using a Hitachi OHP and Hitachi whiteboards and furniture. I looked up to see how much time I had and observed I was using Hitachi time.

As it happens SEGA was never our customer though we did so much work for them; Hitachi paid our bills.


I've been on a kick of reading books about computer history of the 80s/90s, and one of the weird things that I didn't know before I started was the fact that Hitachi got pretty deep into mainframes and they were a major IBM competitor for a time.

IIRC, they have reorganized a lot but are still around and have followed IBM's path into consulting and selling other people's hardware.


The microprocessor arm of Hitachi merged with Mitsubishi's chip division and is now called Renesas.

https://www.renesas.com/us/en


They bought Pentaho, the ETL and BI tool maker, in 2015: https://en.wikipedia.org/wiki/Pentaho


Hitachi is the first name I think of when I think hard drives; back when HGST still existed they were always topping Backblaze's reliability charts.


Around 1999 I did a bit of work for a small shop with DOS 3.3 machines sending data to CNC machines over SCSI cables. They'd bought this stuff when it was end of life upgraded by the previous owners, and gotten the technician that knew the system along with it... a decade before that.

The tech had died several years ago, and they were fully aware that when they hit a problem with these machines that they couldn't fix, the business was done. They made parts for the fuses of artillery shells, so the actual business of their business was trivial compared to the connections and compliance paperwork anyway.


SCSI? Pretty fancy, most systems like that are on DB25 RS232 connections.


Why couldn’t they upgrade any of the hardware?


"that company don't exist anymore" kind of software stack and also they were running everything to death explicitly. If they could have upgraded the electronics to modern, they wouldn't have bothered.

Would be fun gear to get surplus now. I'm pretty sure there's easy, cheap hardware now to put a Pi on either side of a SCSI bus, and then one could poke at the hardware capabilities.


Once upon a time, I worked on a small project that involved updating a chunk of Fortran code that did some sort of engineering calculation. The company hired some crusty old engineering professor somewhere who had no clue how to use any kind of source control to do the actual code update. My part of the project was to fix/update the part of this tool where the DLL built from this Fortran code was used by an Excel VBA script - you entered the input data in Excel, then some VBA fed it into the DLL, and stuck the output in some more cells. Getting all that to work right was an adventure.


I love this Thread; Human problem-solving ingenuity on full display!

See also: https://en.wikipedia.org/wiki/Jugaad


In 1988, I started working for a company whose software was written in dBase II and Foxbase. That was pretty cutting edge at the time. But although they had run coax cable for an Arcnet Novell network, none of the workstations (8086 and 80286 running MS-DOS) had network cards yet. So they were carrying a large (full size/height) 80MB ESDI drive from machine to machine on a padded tray, plugging it into external ESDI ports on the workstations to copy customer data to and from it.


An expect script I wrote that navigated an interactive ncurses-like menu on an AIX host and started a batch job. That batch job processed all the financial transactions and closed the 'ledger' for the day. If it didn't run, the bank would be completely fucked the next day.

It was running on a VM I set up and I was pretty much the only person who knew it existed and what it was doing. Pretty janky lol


Nothing major: an AS/400 app running emulated on Windows 95 (custom patched) machines. This was in 2007, in a large public office. I didn't expect to become a Win95 user again, and I enjoyed the useless login GUI a lot.

ps: the setup was twisted, but this was the app I mention often in nostalgic threads. Nicest application I've used.


Threads like this always remind me of this gem from The Register in 2001:

https://www.theregister.com/2001/04/12/missing_novell_server...


Terminal pass-through printing being used in 2018.

https://www.techrepublic.com/forums/discussions/passthrough-...


Microsoft Access.


You think you dislike Access? What if I told you that me losing my virginity was delayed solely because of MS Access? Now think of the printer scene from Office Space!


I know someday I will be punished for some of the evil, uncalled-for cruelty I have inflicted on poor Microsoft Access when it was all I had available to me. In my defense, I was on an informal development team (ahem) in a company with a dysfunctional tech culture, and we made do with what we had while fighting to get our hands on more appropriate tools.


JIRA


My previous team lead hated JIRA and would frequently rail on it during our agile ceremonies. I like coming up with acronyms, so I started referring to it as that "Janky Irritating React App" during those ceremonies. Good times.


Janky Is Really Alright.


Two years ago I was hired at a fairly medium-sized company; they were rewriting their front end for the third time, using a new framework (the previous two were deprecated). I offered to create our own component-based framework, but they dismissed me; soon I was off to a better job anyway. They should be on their 4th front end refactoring by now...


It had not occurred to me to describe application architectures as Jenga towers, but I'm going to start doing so!


Bash as a CGI. It took all of a couple minutes to work out my first ever remote execution bug.
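The classic way this goes wrong: CGI hands the script attacker-controlled environment variables like QUERY_STRING, and one `eval` or careless expansion later you have remote command execution. A toy sketch of the pattern (illustrative, not the poster's actual script):

```shell
#!/bin/sh
# A toy CGI handler: the web server passes the raw query string in
# the QUERY_STRING environment variable, fully attacker-controlled.
echo "Content-Type: text/plain"
echo
# The janky-but-common pattern: eval on user input. A request like
# ?q=x;id makes the server run `id` as the web server user.
eval "echo You searched for: $QUERY_STRING"
```

Even without an explicit eval, pre-Shellshock bash would execute function bodies smuggled in via environment variables (CVE-2014-6271), which is why bash-behind-CGI was such a rich target.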


I ran a small web hosting "company" with some college buddies. Our main router was an old PC running FreeBSD with no case, sitting on the floor of our apartment. Interestingly, it never went down once in the 3 or 4 years we were operating.


The tech we invent for ourselves and our one-offs is usually the jankiest we have.


ACH transfers. And, everyone in the US depends on them.


* SharePoint
* iTunes
* Crystal Reports
* Amazon Alexa


Most internally developed, private microservices.


A rollup (unrolled) map over the server racks, because of a leaky drain pipe from the floor above.


Excel


Windows.


TYPO3 CMS


us-east-1


More generally, modern cloud


Are you saying that cloud used to be better? (Real question, I don't know if cloud is getting worse)


It used to be simpler. Subjectively, I think it used to be less janky. Or, at least fewer things used to run on cloud, so the jankiness was a smaller problem.

Not sure, maybe my comment is a case of "grass used to be greener" ;-)


edays (aka E-Days)


Linux before 2005.


Managers ;)



