Hacker News
The US nuclear forces’ messaging system finally got rid of its floppy disks (c4isrnet.com)
175 points by sohkamyung on Oct 18, 2019 | 122 comments



I know of several military systems that still use floppy disks, and that comforts me for a few reasons:

* Their maintenance is well understood - every fault that is likely to happen has happened and has been recovered from or repaired. It's very unlikely there will be new surprises down the road.

* Their degradation is well understood - it's been around long enough that parts have failed and have been repaired.

* They aren't easily communicated with through modern tech - no USB ports, no network access - people aren't picking up floppy disks in the car park and plugging them in.

* Robustness to electromagnetic interference - the bits are so big that a few electrons being knocked out of the way won't cause an issue.

* Software well understood and tested over time - the only way to _really_ trust software is by having it work without failure.

* Simple implementation - because there's no computational space or power, it only does the minimal it needs to. It's not rendering a GUI, it's not running some complicated neural network.

Old hardware isn't sexy, but it does work. I still use an oscilloscope with a phosphor display and a programmable power supply with instructions on a floppy disk. Floppy disks occasionally fail, but there's no substitute for making good backups over several mediums/locations.


And the downside: you will only have a few technicians who really understand how things work, and hiring new ones is next to impossible, as younger people have seen neither the hardware nor the software that is running on it. It scales terribly to new locations. I recall a few years ago NASA was looking for 286 processors for the Space Shuttle. I cannot imagine your spacecraft is getting safer because of outdated hardware.()

Then again, armies are known to prepare for the previous war. In WW-1 they used trenches and then came the machine-gun and the gas. In WW-2 all soldiers were prepared against gas, the French dug the Maginot line and the Germans just skipped around. Now we have amazing tanks but the next war is online...

() I have to correct myself: NASA was looking for 8086 chips in 2001 - not 286 chips a few years ago. https://www.geek.com/chips/nasa-needs-8086-chips-549867/


Since human knowledge isn't passed down through genetics, people generally don't know about something until they have worked with it. That's what education is for, whether it's on their own or formally in a classroom. I assume anyone being groomed for these positions learns it like any other trade.


Yes, but you'd rather not have to train at all for these positions, and have a robust hiring pool of people who already have training because that training is applicable to a wide range of jobs.

You're also less likely to be able to find people willing to train in something they know is obsolete and will have no utility outside the job at hand, which means you have to spend more money to find and train these people.

I guess all this is less of a problem in the military, where you do as you're told, but still.


We teach them every detail of everything they need to know in their technical school when they come into the military.

Old equipment is actually interesting to work on -- difficult, but interesting. It's also very valuable for understanding modern hardware at a deep level that isn't often taught anymore.

How many people do you know who have actually replaced and aligned the heads on a UYH-3 or run end to end tests on serial data channels that traverse thousands of feet and multiple switchboards from a UYK-43?


> How many people do you know who have actually replaced and aligned the heads on a UYH-3 or run end to end tests on serial data channels that traverse thousands of feet and multiple switchboards from a UYK-43?

I think that's interesting and valuable on its own from an academic/research/nostalgia perspective, but not when we're talking about maintaining systems critical to the functioning of a nuclear weapons arsenal.


Perhaps you're responding to a different discussion?

Read my comment in context and it will become quite clear.


UYK-7 maintenance... fault code lists and swap cards... love those days underway.


I missed the UYK-7 by a couple of years. But I lived the 43 and Q-21 for years.


Older technology is probably much simpler though. It might actually be possible to train someone, or for that someone to learn from a manual.


Whenever I want to be depressed about the loss of human knowledge, I think about how no one will know how to build a VCR, or all the "hacks" and things that were needed to get it working. I'm sure they'll understand the concepts very well. It will be "simple" physics because it was discovered first (Like how a HDD uses earlier physics than a SSD).

But then I think of all the amazing people (that are on youtube) who restore old technology, and build replicas, etc, and I feel okay again about human knowledge.

But I think it is possible that as time goes on, knowledge about these systems can get lost, even if they're simpler to understand.


I imagine there's more than one way to build a "VCR" or a thing that replicates its purpose: to read magnetic tape.

As long as there's engineers, there will be hope ;)


Hiring people experienced with floppies isn't harder than hiring people experienced with nuclear weapons.


The difficulty in finding people with experience with nuclear weapons is inherent to the task.

The difficulty in finding people experienced with floppies is a choice made, a burden taken on due to a lack of willingness to use more-current technology.

Not saying bleeding-edge whiz-bang tech is a great idea for nuclear weapons control and comms systems, but when it's hard to source parts, and hiring new people usually requires extensive training on technology they've never seen before and will never see again, that's a problem.


They must have either bought too many or excessed some chips that didn't test perfect, because when I did my NASA internship around 2007-08 an electronics surplus store in Houston had boxes of 8086 chips, and 8088s as well. I still have a couple I bought with the intention of making some hobby SBCs (single board computers).


> I cannot imagine your spacecraft is getting safer because of outdated hardware.()

Given the possibility of hardware bugs (e.g. the Pentium F00F and FDIV bugs), outdated hardware is well-tested hardware, and as such is arguably safer than newer, less-well-tested hardware.


I doubt there's a material difference in well-testedness in hardware from the 70s/80s vs hardware from, say, the early 00s. Certainly, don't incorporate brand new off-the-shelf commercial hardware designed in 2019 for such a task, but a 15-20 year lag should give you similar (if not better) reliability than a 30-40 year lag.


It's the military. They don't need people with experience. They'll train as many as they need.


Any technically competent person can get up to speed on old tech in pretty short order. Many people do it as a hobby. There's a heck of a lot less "stuff" you need to understand to work with it.


Absolutely. When you have a system that absolutely must not fail ever, you want it to be set up so that all the code involved--not just the application but the system it runs on--can be manually reviewed in a practical timeframe. If you're running on any modern OS, even a really good, stable one, that's millions of lines of code that you're just hoping don't have any weird, undiscovered edge case interactions with your program.

On the other hand. Based on the article, it sounds like maintaining the nuclear launch system frequently involves working with soldering irons and microscopes, manually replacing individual wires, and it takes years of training to reach an acceptable skill level. That has the potential to be nearly as dangerous as unreliable code.


> If you're running on any modern OS, even a really good, stable one

I suspect here you're thinking about mainstream OSes, like Linux, one of the BSDs, etc.

But there are quite a few very small, well-tested (RT)OSes that are actively maintained and suitable for the "absolutely must not fail ever"[0] use case.

[0] Which is, of course, impossible, but you'll get a lot closer with a modern realtime OS written to purpose vs. a Linux-type deal.


absolutely must not fail ever

I know I'm being pedantic, but no such system exists. The best you can do is reduce the probability of failure to an acceptable limit.

Especially since this was a military system, some team, somewhere, estimated the Mean Time To Failure for this system and was satisfied with the answer.
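To make the MTTF point concrete, here is a minimal sketch of my own (the numbers are made up, not from the article): under the usual constant-failure-rate assumption, an MTTF figure translates directly into a probability of surviving a mission without failure.

```python
import math

# Under a constant failure rate, reliability over time t is exp(-t / MTTF).
# This is the standard exponential model, not anything specific to SACCS.
def survival_probability(mttf_hours: float, mission_hours: float) -> float:
    """Probability the system runs for `mission_hours` without failure."""
    return math.exp(-mission_hours / mttf_hours)

# Hypothetical numbers: a 100,000-hour MTTF over a 1-year (8,760 h) mission.
p = survival_probability(100_000, 8_760)
print(f"P(no failure in 1 year) = {p:.3f}")  # 0.916
```

This is exactly the kind of calculation a reliability team would run, and "acceptable" is then a policy decision about how close to 1.0 that number has to be.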


This could be a sign that a new generation of technologies are reaching a similar level of reliability.


This could be wishful thinking.


I'm asking myself what does it look like when the baseline tech is superseded, though I'm not arguing one way or another. I think that this kind of trend switch would be a gradual movement with ups and downs, where people would often be asking themselves "is this a serious trend", "is this wishful thinking", etc. None of these things prove that a meaningful long-term transition is happening, but I think that they're there when it does.

Actually, I'm not even thinking about floppy disks vs X. Rather about qualitative changes as a whole.


> While SACC’s hardware is decades old, its software is constantly refreshed by young Air Force programmers who learn software development skills at Offutt’s Rapid Agile Development Lab.

I am not so sure I want “Rapid Agile” development for something that literally controls nuclear missiles.


`git commit -m “added factoryboy to generate random target coordinates to test the launch sequence.”`

`git commit -m “Fuck fuck fuck, forgot to replace the launch API endpoint with a mock one.”`

Scrum master: “Let’s implement the stuff with the highest customer value first.”

Developer A: “I guess that means being able to launch the nukes.”

Developer B: “Yeah. Auth can come later.”

Developer C: “Nobody answered us about how they want the abort sequence to work. I guess we’ll defer it to the next sprint.”


Just set the password to 00000000 for now and we can change it later....

https://en.wikipedia.org/wiki/Permissive_Action_Link#Develop...


The Kanye West method


So, who thought it was a good thing to have CI/CD on this anyway? Is that NORAD on the phone?


"Oops, we nuked our staging database. Literally."


True story— the launch codes were 00000 for decades.


Gatherer 21, or gatherer 24, does naught approve.

Hunter/Gatherer , hunter12, Get IT.

lol and ROL. my gentlemen. women. polar bears, whatever.

I approve. Just making a joke.


A guide from the US DoD on "detecting agile BS" popped up here several months ago, so I think someone there might agree with you on that (edit: nevermind -- looks like they embrace agile but this document is about detecting waterfall or spiral development pretending to be agile)

https://media.defense.gov/2018/Oct/09/2002049591/-1/-1/0/DIB...


To be sure, that draft was written by the Defense Innovation Board[1], who are about as far removed from actual cogs in the machine as you can get.

The final release of that paper along with other SWAP study publications can be found here[2].

[1] https://innovation.defense.gov/Members/

[2] https://innovation.defense.gov/software/


Indeed. "Move fast and break things" might not be the best idea in this particular case.


Isn't this the very definition of an ICBM?


Interestingly, "move fast and break things" is the last thing an ICBM should be sneaking off and doing.

Yet it definitely is designed for that.

I wonder if it ever feels frustrated.


The value proposition is to be a deterrent, I’d say they have performed really well in that regard, no need for frustration.


Actually launching? YAGNI. We can easily implement that if it ever comes up.


As long as the potential enemies believe it is implemented, that would be ideal.


Don't be ridiculous. Our nuclear missile launch systems need to be redeveloped with Rapid Agile methodology, and using all the latest Javascript frameworks, MongoDB databases, Kubernetes microservices, and a great web interface that can be easily accessed over the internet.


I think we should just rewrite the entire stack in Rust. It will make us safer, right?


This! And deploy it on Arch Linux.


I don't see why Agile or other similar approaches can't handle reliability and safety as the primary goals. Things change often in most other software because that's what people demand of it.

No one will be changing the nuclear launch systems so often that never-ending iteration becomes a threat.


It's all about how you define "Agile."

I work in safety-critical software, and my biggest problem with agile (and other popular development methodologies) has always been that there is too much equivocation and too many semantic games by its advocates for it to turn into something useful to me.

When we have a development tool vendor on site they often ask if we're "waterfall or agile?" I hate this question because the real answer is "neither, but sorta both, and can we just move on because it's not going to fit into your simplistic model of the world?"


I would love to see an aviation reliability level plan for use with agile software development.

The one thing agile embraces that introduces the most instability into a project is the ability to constantly alter scope. It takes a lot of good management to make sure that changes in scope don't have secondary or tertiary effects; that would be my focus for a high-reliability agile process.


Oh, I hear you: fail early, fail hard, fail often...wait...


"LEAN" army suddenly takes on a way-too-proper meaning.


Don't forget this is the US military, home of the tortured acronym.

I mean (man), this is totally RAD.

They're making Star Wars and Star Trek references, so 'rad' is cool too, right?

Or it's an early/late 1st April story.


RAD is an Agile methodology. Very Agile, e.g. daily sprints.


So when we upgrade the hardware, how much of it will be sourced from China?

Is it even possible to fully secure the supply chain?

Introducing any outside code or hardware is a liability. Unless there is a really urgent problem, I think they should just freeze their entire computer platform, that way they can eliminate any chance of any new threats.

I cannot possibly imagine any cost justification for upgrading. I have no problem with taxpayer money being spent to keep ancient computers running forever. Keeping our nuclear missiles secure is priceless.

What new features do they even need that they don't have now?

The thought of some ux monkey using npm anywhere near our ICBMs makes me physically ill.


"The thought of some ux monkey using npm anywhere near our ICBMs makes me physically ill."

Almost spilled my coffee reading this one.


Sorry Sir, we can't launch because that jerk removed left-pad from the npm registry!


> So when we upgrade the hardware, how much of it will be sourced from China?

FYSA[1], would it surprise you if that figure was precisely zero?

[1] https://www.dmea.osd.mil/trustedic.html


That is good, but, supply chains are still a major attack vector. I think we should try to minimize that.

I am quite certain that there are no Chinese back doors in hardware that was put in the ground in the early 1980s.

Security wise, we can only go downhill from there.



"What new features do they even need that they don't have now?"

That's not the question you should be asking. The question you should be asking is, what features would the Commander in Chief want?


Twitter integration probably.


In the 80's, I was the project engineer for the Air Force DSP satellite ground communications network upgrade. This system used a 20 MB Bernoulli Box to store the early warning messages. It was replaced in 2005.

This was the system used to pass early warning to the Patriot anti-missile (Scud) batteries during the first Gulf War.

Internetworking the Air Force system to an Army system was considered revolutionary in 1991. Today, it would be equivalent to "stone knives and bearskins".


If your team had Link 16 integration requirements, I extend my deepest sympathies.


Nothing that complex. The link between Buckley and the Patriots was over AUTODIN.


Also in the 80s, when I served in the secure communications group, we had a Burroughs computer from the first generation not to use tubes; it was the main computer for this application. It could boot by switches or paper tape.

Not that the base computer for personnel and other records was much more modern; cards were still used till '89 for many applications. This was on a Sperry 1100/70, I think. I remember my first Turbo Pascal program was used to replace a Sperry program that uploaded card images to the system; we still had paper cards for some departments.


The system we replaced consisted of an IBM Series/1 and some unknown Xerox computer the size of a refrigerator. The Xerox box was being kept alive by cannibalizing other boxes that had been taken out of service elsewhere.


While SACC’s hardware is decades old, its software is constantly refreshed by young Air Force programmers

Well that sounds like a disaster waiting to happen.


Unlike Silicon Valley, the armed forces spent a lot of time training their programmers.

After a tour any one of them could be a VP of Engineering at a FAANG or Unicorn, and they generally choose to remain in the public sector because money isn't the most important thing in their lives.


Long term reliability metrics can only be gathered for technologies that have already been deployed in the long term. That’s why many mission critical applications like this one use “obsolete” technologies.


Well, maybe it would be a good reason, but do we have any evidence that it was intentional?


> That’s why many mission critical applications like this one use “obsolete” technologies.

...or perhaps it's a much more simple risk probability v. consequence severity decision?


> You can't hack something that doesn't have an IP address.

That's a very risky assumption to make. There are many attack vectors besides TCP/IP: social engineering, side-channel attacks, EM pulse, backdoors in ICs...


Remote sabotage from anywhere and spies roaming the corridors are quite different things.


Well, that's the funny thing.

The claim is: "No IP address == unhackable."

So, for a second stop assuming we are in a nuclear bunker. There are many unconnected systems that are still hackable. For example, the Stuxnet worm was able to infiltrate an unconnected SCADA system using USB exploits. Many systems radiate data in EM and ultrasonic (side-channel). And obviously, there are known and unknown backdoors in PCBs, ECs, and ICs. All of these do not require an IP address.

I would even argue that having an IP address can improve security by increasing monitoring, improving continuous security updates and decreasing a false sense of security. This was the original goal of DARPA all along.


This guy needs to go back and watch (or rewatch) WarGames.


Amazing what you can do with just a telephone and a modem.


Just read up on Stuxnet to see a real world case. No network connection did not stop them.


To be perfectly honest, I'd rather they used floppy disks than the latest trendy JavaScript frameworks.


> “highly-secure solid state digital storage solution”

So not just any thumb drive, an expensive thumb drive.


I understand the 747 still gets nav updates via monthly diskettes.

Most have replaced the unit with a floppy simulator that takes an SD card (or USB key?).

The load takes just as long. But it doesn’t fail (forcing you to get an undamaged disk and start over).

I wonder if that’s why they say they got rid of the floppy disk. Maybe they maintained the underlying tech.


Can confirm, that's how it works (for the 747).


I was kind of hoping they were replacing the floppy disks with CDs or something else equally old school and hard to hack.


As far as I know, we don't even know the length of time CDs will last because they haven't been around long enough. I would love someone to correct me if I'm wrong.


CD-R are maybe 10 years. Floppies are more reliable. I have plenty of C64 and Apple II floppies that work fine.


There are CD-R and DVD-R made for archival.

    MAM-A/Mitsui has tested their Gold reflective layer CD-Rs with their Mitsui dye, to withstand
    the full spectrum of light, same as the sun, for 100 continuous hours without damage.
    Using data from tests like these, industry standard guidelines predict that MAM Gold CD-R
    will last greater than 100 years! (In fact, if you extend the data, MAM-A predicts a lifetime of
    up to 300 years before failing at the Orange Book limit of 220 CPS)


Please avoid using code blocks for quotation. It creates side-scrolling windows. Just a sideways caret > or italics is sufficient.


Floppies are more reliable. I have plenty of C64 and Apple II floppies that work fine.

5,25" floppies (especially lower density variants like the ones you mention) are quite reliable indeed, but the most recently common 1.44 MB 3,5" floppy is terrible.


Nope, when I was using 1.44MB floppies, they were extremely reliable.

However, after most people had moved on, when I did use more recently-manufactured ones, they were indeed terrible.

My theory is that, when Zip disks and CD-Rs took over and people stopped using 1.44MB floppies, the manufacturing of them changed, and both the disks and the drives became much crappier than they used to be. The drive manufacturing moved from Japan to Thailand, for instance, for some of the drives I looked at, and the components were noticeably crappier (drive motors much smaller, for instance). These were the days when every PC had a 1.44MB drive just for Windows XP drivers, and no one ever used the drive for anything else because it was just too small, so the OEMs really cut back on the quality since the drives weren't expected to be used much.

But back in the early 90s, it wasn't like that; those disks were totally bulletproof, just like all the other floppies.


Here's me replying to a 12 day old comment, but...

You are right that at some point they started making (much) worse quality 1.44MB floppies (and drives, probably, though I have had nothing but good experiences with mid-2000's floppy drives so far), but I still say they're much less reliable overall than most 5,25" formats.

I say this partly because I have a retro-computing hobby and when acquiring random floppies from various places my experience is that 360K floppies from the mid-80's almost invariably still work fine while DD and HD 3,5" floppies from the late 80's to mid 90's are about 50-75% unreadable garbage at this point. They just don't survive. I also remember having to throw out a lot of 3,5" floppies (name-brand, too; Sony, Verbatim, etc...) due to them becoming unreadable in just a few years in the late 90's.

I mean, it's not surprising. Each bit is simply much physically larger on a 5,25" floppy.


I'm having the same issue with CD drives. The quality now is garbage compared to the days of caddy loading. I recently bought 4 at $18 each, and will just throw them out when they break.

Even the Windows support of CDs has gone to shit. Windows 10 usually can't figure out if the burned CD was already ejected and inserted with a fresh one, and randomly retains files from the previous burn.

Ripping audio CDs is slower than it used to be as the drives seem to become unbalanced.


Punched cards are for me one of the most reliable media ever, in terms of being proven to last without media decline (though keep them away from snails).

So be thankful they didn't reinvent those, only using plastic instead of paper-based card. Though plastic punched cards would last a very long time and also be immune to EMP, water... many things that other forms of media fall foul of.

I just don't get the storage capacity angle, though: if they are using systems that fit their data on a floppy, CD-based media would perhaps be overkill.

As for CD quality, that can vary a lot with some lasting since day one, others - rusting away (literally I have old CD's from the early days, some fine, others have rust holes due to impurities in manufacture).

But like most things, you don't appreciate the quality of the product until time has past and it still just works.


Floppies from the last decade of their reign were absolute garbage. I had many that didn't last more than a week before they started throwing errors. That's when I switched to FTPing schoolwork to my free 10 MB ISP space, despite still being on 56k.


M-DISC claims a much, much longer lifespan for writable DVD/Bluray discs. Readable by normal drives.

https://en.wikipedia.org/wiki/M-DISC


Pressed CDs should last much longer than CD-R.


This is true, but tape lasts much, much longer. I know people have issues with floppies, but I wonder if that is more to do with dirty floppy drives that need cleaning, plus the fact that digital storage on tape is less forgiving than analog: you have to read encoded signals and recover exact digital values, whereas a cassette recording or a VHS tape tolerates a few small dropouts.

Floppies are really just tape in a disc format, and use heads to read high and low signals and deduce the 1s and 0s.

It stands to reason they should last quite a long time considering audio/video/data tapes can last over 50 years.
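As a toy illustration of that "high and low signals" point (my own sketch, not any drive's actual firmware): most floppy formats use MFM encoding, which interleaves clock bits with data bits so the read head can stay synchronized with the flux transitions.

```python
# MFM writes one clock bit and one data bit per cell. The clock bit is 1
# only between two 0 data bits, which guarantees transitions are never too
# far apart for the head to stay in sync, and never too close together.

def mfm_encode(data_bits):
    """Expand data bits into MFM channel bits (clock, data, clock, data...)."""
    out, prev = [], 0
    for b in data_bits:
        clock = 1 if (prev == 0 and b == 0) else 0
        out += [clock, b]
        prev = b
    return out

def mfm_decode(channel_bits):
    """Recover the data: it sits in every odd-indexed channel bit."""
    return channel_bits[1::2]

data = [1, 0, 1, 1, 0, 0, 1]
assert mfm_decode(mfm_encode(data)) == data
print(mfm_encode(data))  # [0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]
```

The same clock-recovery idea is why a degraded floppy fails the way it does: lose a few transitions and the decoder falls out of sync for a whole sector, not just one bit.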


One of my bucket list projects is to use an SDR to forward error correct and modulate raw data into something that is compatible with NTSC VHS recorders.

I wonder what the capacity of VHS tape is, given the restrictions of the NTSC modulation? I believe, back in the 80’s, someone made a ISA card for PC backup to VHS.
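Here is a back-of-envelope estimate; every number in it is an assumption of mine (usable bandwidth, spectral efficiency, FEC overhead), not a measured figure:

```python
# Rough capacity estimate for data-on-VHS: assume ~3 MHz of usable luma
# bandwidth, a conservative ~0.5 bit/s per Hz after modulation margins,
# and half the raw rate spent on forward error correction.
luma_bandwidth_hz = 3e6
bits_per_hz = 0.5
fec_fraction = 0.5
tape_seconds = 2 * 3600                      # a standard T-120 at SP speed

raw_rate = luma_bandwidth_hz * bits_per_hz   # 1.5 Mbit/s raw
net_rate = raw_rate * fec_fraction           # 0.75 Mbit/s after FEC
capacity_bytes = net_rate * tape_seconds / 8
print(f"~{capacity_bytes / 1e9:.1f} GB per 2-hour tape")  # ~0.7 GB
```

That lands in the same general ballpark as what the VHS-backup products of that era (like the ISA card you mention) reportedly achieved per tape.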


https://www.youtube.com/watch?v=TUS0Zv2APjU

NTSC video signals offer a lot of bandwidth but very very few promises about the quality of the reproduction of the stored waveform


CDs already get disc rot, sometimes just after a few years if they are cheap.

Don't forget, CDs have been out since 82, and they were not the first optical disc format around, either.

Tapes however, can last 50+ years. It's a much better medium than most think it is, for analog or digital storage. Heck, a reel to reel tape unit offers the best analog sound quality you can get.


I have CD's from that period - some fine as, others disc rot most evident. Even had rot in gold layered ones (few VCD's I had). Gets down to manufacturing quality and in just the same way silicon chips are made, not all chips are equal and work perfect forever. Same with media.

Tape has done well; however, look at how it was used then versus how it is used today. The density of data is much tighter today than in the 80s, so the tolerance for errors back then was greater and more forgiving: data was stored over a larger area, and a few atoms going adrift wouldn't stand out. When you get to densities where a few atoms are your data, errors are statistically more prone to happen.


But the optical discs from before the CD era used gold as the base material, CDs use Aluminum.


Most likely that would be what they use due to its security. It's what most of the air force has transitioned to for backups and transferring data in classified and unclassified environments alike.


They explicitly said solid state. I suspect they are trying to future-proof their tech stack somewhat to ensure reliable sourcing.


Old school tech is ridiculously easy to hack. New tech is much more hardened. The only advantage old tech has is that it doesn’t have peripherals that connect to the internet. At the end of the day the most secure computer is a new one, that isn’t on the internet, and is physically secure.


Unless it's been tampered with in the factory or in transit for delivery; the NSA has been known to modify hardware in transit for spying purposes.


So it's either have an old system that is trivially compromised, or have a new system that has the potential of an unknown backdoor.


Old computers don't have whole computers built into the motherboards running inaccessible operating systems that can't be switched off.


POWER9 doesn't either. I'd buy a laptop with that for 3~4k next summer if it existed...


Only for people who know what they're doing. If a hacker faces hardware they've never seen before, it will take some time to read up on it, especially if datasheets and such are not available online.

I fully concede it’s security by obscurity, though. A technician possessing any experience with that hardware is probably going to make short work of its security measures.


We need to fire whoever decided to make this decision, and then fire whoever decided to hire that person, and then reevaluate the entire existence of the agency that managed this entire process. Nuclear weapons are not a joke. These are the kinds of things that actually threaten our immediate existence. This whole thing makes me seriously sad, anxious, and scared. I don't know what else to say.

From what I understand, military personnel generally "operate" equipment and technologies that are built and designed by private industry. I can only imagine the idiotic buzz-word sales pitch that some whiz-kid fast-talking slime-ball in a suit from a fancy looking tech company made to some meat-head, decorated military general with no understanding of what a digital threat model is, who barely knows how to check his email on a Windows 7 machine in IE, all for a quick federal contract. I hate it.


Just getting my Amiga 500 and Commodore 64 3.5"/5.25" setups back up and running, 75% of the data has been demagnetized away. Looking around, sourcing any of these formats is more or less impossible today. HOW did they ever manage to keep 8-inch disks in a readable and writable state, without exhausting the possible read/re-writes?

Secondly, how does one replace an 8-inch drive (most likely an MFM interface) with a solid state disk?


You know how you can buy an SD2IEC device[0] for your C64?

Probably something like that.

[0] https://www.c64-wiki.com/wiki/SD2IEC
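The core trick of such image-backed emulators can be sketched in a few lines. This is a hypothetical illustration of mine using standard 1.44 MB geometry; a real 8-inch emulator would use different constants and would also have to speak the drive's bus signaling:

```python
# Translate a cylinder/head/sector request into an offset in a raw disk
# image file, instead of moving physical heads over real media.
HEADS = 2                  # sides
SECTORS_PER_TRACK = 18     # 1.44 MB 3.5" geometry
SECTOR_SIZE = 512          # bytes

def chs_to_offset(cyl: int, head: int, sector: int) -> int:
    """Sectors are 1-based on floppies, hence the `sector - 1`."""
    lba = (cyl * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)
    return lba * SECTOR_SIZE

def read_sector(image_path: str, cyl: int, head: int, sector: int) -> bytes:
    """Service a 'read sector' request by seeking into the image file."""
    with open(image_path, "rb") as f:
        f.seek(chs_to_offset(cyl, head, sector))
        return f.read(SECTOR_SIZE)

print(chs_to_offset(0, 0, 1))  # 0: first sector of the disk
print(chs_to_offset(1, 0, 1))  # 18432: first sector of cylinder 1
```

Everything past that mapping is electrical: the emulator still has to present the drive-select, step, and read-data lines the old controller expects, which is the genuinely hard part of replacing an 8-inch drive.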


I can’t wait for them to get Internet access!


Reliability over sexiness. There's a reason spacecraft launched before most of the people on this board were born still work. https://voyager.jpl.nasa.gov/mission/status/#where_are_they_...


I don't know what the right solution is, but I recently watched this documentary about a serious accident at a missile silo in Arkansas:

https://www.pbs.org/video/american-experience-command-and-co...


Doesn't matter which tech they use, when the most ignorant in society are in charge.


" Air Force software development hubs like Boston-based Kessel Run or Los Angeles-based Kobayashi Maru"

Is this article a troll, or is the Air Force just trying to be 'cool'?


is the Air Force just trying to be 'cool'.

Why wouldn’t the Air Force be full of geeks?


I know someone in our DnD group who, when working for the MOD in the UK, got into trouble for naming a set of servers after the kings of Rohan.


You might be interested to know that they also nicknamed Rainbow Canyon as "Star Wars Canyon", and the various branches fly their jets through at high speed for training purposes.[0]

In other words, the Air Force (and other branches) can, indeed, be "cool."

[0] https://www.youtube.com/watch?v=W0NHXiB-R58


Surely Kobayashi Maru should be a Navy development hub.


I think an unwinnable scenario works quite well with the nuclear weapons theme.


But no more. At long last, that system, the Strategic Automated Command and Control System or SACCS, has dumped the floppy disk, moving to a “highly-secure solid state digital storage solution”

Did SuperMicro make the board?


That story made a big splash but was never proven. Bloomberg claimed sources it conveniently couldn't reveal. I'm just as suspicious of China as the next guy, and they've probably got something in the supply chain, but probably not that. Besides, a ME or BMC 0day does the job just as well with very little risk.


Did SuperMicro make the board?

Suppose for the moment that they did. Surely you're not going to stake your reputation on an unconfirmed rumor from Bloomberg for which no single shred of hard evidence (or evidence at all, frankly) was ever offered, because that would just be silly. So what's your point?



