All United Flights Grounded Due to Mysterious Problem (wired.com)
217 points by throughnothing on June 2, 2015 | 205 comments



I was flying from Dublin to Newark on Saturday on a United flight. At some point during our flight the entertainment system needed to be rebooted. When it came back up, the splash screen hit me with a huge amount of nostalgia. It was RedBoot with a kernel build date from 2004.

Obviously this is the entertainment system and not something more critical, but it's telling. There is a huge cadence mismatch between software cycles and capital-goods replacement cycles. Airplanes, factories, HVAC systems, even home appliances last for decades. Software on these systems needs to get upgraded; I can't even imagine the number of security patches that have gone into the Linux kernel in the last 11 years.


This is the Achilles heel of the entire Internet of Things and smart appliance trend, and I think this will bite everyone bad. After 50+% of these vendors go out of business in the next decade, their products won't get updated, and people will wonder why their "smart" TV can't watch movies from whatever replaced Netflix/new whizbang video service. They won't be as likely to buy any "smart" thing again.


This is exactly why I prefer buying "dumb" TVs and simply adding a Chromecast/Apple TV/Fire TV, etc. Much better experience.


My wife and I still have our LCD TV from 2005. It's about 5 inches thick but it works great. The TV is even starting to look very cool/retro and we've been complimented on how it completes the look of our living space!


I still have a CRT TV. Do I win, or is there someone around with a B&W set? :)


I think it's the balance between new and old. You know, if your TV is either too old or too new it looks like you're trying too hard.


I have a Mac 128k in my basement, does that count?


I only have a Mac 512KE. :( But it's featured prominently in one of my rooms if that gets me any bonus points.


My Amstrad CPC 464:

https://vimeo.com/63990420


beware the complisult.


Hahaha! My thoughts exactly. There's a fine line between a compliment, a jibe and an insult. All subjective, of course.


It will take more than that to get me down!


I opted for a Chromebox running Ubuntu+Kodi. It's quite easy to set up! http://forum.kodi.tv/showthread.php?tid=194362


I am running my home server on a ChromeBox (Linux Mint running Samba, attached to a Bravia TV). I don't think I can get something better.


It doesn't really matter that much if it's dumb or smart. You can use a Chromecast with either, and refusing to buy one kind of limits your choices.


I think his point is based on the (very reasonable) assumption that an equivalent quality Smart TV is going to be more expensive, which isn't worth it in general since you're not using the features that cause the difference in cost.

There of course may be realities of the market to cause it to not work out this way, but it's a pretty sensible assumption.


Otherwise-equivalent Smart TVs are usually less expensive than displays (whether TVs or monitors) without the "Smart" features. This is probably largely a matter of economies of scale -- because that's what manufacturers think the consumer market wants, that's what they make most of and flood consumer channels with, and dumb alternatives are a specialty product.


Good to know, thanks. I haven't bought a TV for about five years and I've never bought (or even been interested in) a Smart TV, for roughly the same reasons described here.

EDIT: hullo's comment below would seem to contradict this; 32" TVs from Samsung seem like a fair data point to look at (albeit just one data pt), since it's a very mainstream manufacturer and a non-niche size.

"Picking a manufacturer (Samsung) and size (32") at random, I see the smart TV for $499 and non-smart options for 219, 269, 299. http://www.samsung.com/us/video/tvs/all-products Just for example."


My last three "TVs" have been commercial display panels. These are the NEC panels that you see in the airport (turned on their side) for departure/arrival boards.

They are more expensive than a Best Buy model. Not terribly so, however; otherwise the airport couldn't buy 200 of them.

They are incredible displays.


How would one go about finding them? (search terms, manufacturers...)


"Commercial display panels NEC" led me to this: http://www.necdisplay.com/category/large-screen-displays



All else being equal, would I skip the smart features? Sure. You don't get that choice though. At least, I never have. Aspects like display technology, color reproduction, contrast, latency, etc are much more important to me than smart vs. dumb, and those things are how I make my decision.


That assumption is not reasonable at all. Not only are "dumb" TVs no cheaper, hardly anyone even sells them anymore.


Picking a manufacturer (Samsung) and size (32") at random, I see the smart TV for $499 and non-smart options for 219, 269, 299. http://www.samsung.com/us/video/tvs/all-products

Just for example.


It's not a direct comparison.

Smart TVs, even without the "smarts", are generally higher-end TVs.

Though it's not indicated on the main screen, the $299 option is actually a smart TV if you click through. You have to get down to the $269 option to get a non-smart TV.

The difference between the $269 and the $299 seems to be about the cost of putting a processor in a TV and making it smart....

(Note I don't claim that smart TVs are cheaper, but for the most part the cost is being absorbed into higher-end devices, so you get the smarts "for free" on better sets.)


Oh, that's a good spot; I missed the $299. But still, $30 is the cost of a Chromecast or Fire Stick, to the GP's original point.


I can get a 50" LG tv ("dumb"; just connect a Roku over HDMI) for $700 on Amazon.


I'm pretty sure you can't get a 2015 model LG that's 50" or larger that's not a Smart TV.

The cheapest 50" TV they have is $799 and has interactivity. http://www.lg.com/us/tvs


I think you're confused about the meaning of the terms "reasonable" and "assumption". An assumption isn't unreasonable just because it turns out to be untrue. As I said, there may be realities of the market that invalidate the assumption (and apparently there are).


What makes an assumption unreasonable, in your opinion?


I'm not who you're replying to, but I'll try:

An assumption is reasonable if it follows logically from facts we know about the world to some conclusion. For example, cars with more features usually cost more than cars with fewer. A washing machine with a detergent dispenser and 11 different wash modes costs more than one with 3 modes and no dispenser. A thermostat that just sets the temperature and does nothing else will almost always be cheaper than one with wifi connectivity and a companion iOS app.

Based on that knowledge, it's reasonable to assume that a TV with more features will cost more than one with fewer. This assumption is wrong, of course, but it wasn't unreasonable.

So that means an assumption is unreasonable if there's no reason you'd make it in the first place.


There is no reason to make this assumption (about smart TVs costing less) in the first place. You can retroactively justify anything by twisting and cherry picking facts. A product with more features may cost more if it costs more to produce or customers are willing to pay more for it because they see it as more valuable. Neither of those applies to smart TVs.


> There is no reason to make this assumption (about smart TVs costing less) in the first place.

Sure there is. 99% of everything else I've ever experienced in my life has had a positive correlation between features and price. I'm actually struggling right now to think of another product where—in general—more features are cheaper than fewer. Other than television sets, I cannot think of one right now. (Maybe if I spend some time on it, I can think of another.) Therefore, knowing what I know about prices of things, it is totally reasonable to assume that TVs follow the pattern.


I don't mean to argue because basically I agree with you, but a great example of more features costing less is the average midrange AV receiver, which probably offers a tuner, digital inputs, and some sort of DSP as well as multiple amplifiers for 5.1 or greater outputs, compared to a 2-channel stereo amp aimed at the audiophile market - which, although it may measure better, probably doesn't sound any better in most people's living rooms.


For that 99% of everything else, those features were probably useful things that users wanted.


Perhaps a more concrete definition of an analogous concept would make this clearer: Are you familiar with the concept of a prior distribution[1]? The point of a prior is that, before you get any of the evidence, you can still have some sense of how likely each event is. If you asked me whether a microwave with more features would be more expensive, I would feel pretty comfortable saying "yea probably". As mentioned downthread, this is based on the fact that 99.9% of the time, in every single industry, products that are more functional will be more costly (all else held equal, obviously).

[1] http://en.wikipedia.org/wiki/Prior_probability
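
As a rough sketch of what that kind of prior update looks like numerically (the specific prior and likelihood values below are invented purely for illustration, not anything from this thread):

  # Toy Bayesian update: a strong prior that "more features => higher price",
  # revised after observing one counterexample (a cheaper feature-rich product).
  prior = 0.99                 # P(more features usually costs more)
  p_obs_given_true = 0.05      # assumed: chance of seeing the counterexample anyway
  p_obs_given_false = 0.60     # assumed: chance of seeing it if the prior is wrong

  evidence = p_obs_given_true * prior + p_obs_given_false * (1 - prior)
  posterior = p_obs_given_true * prior / evidence
  print(f"belief after one counterexample: {posterior:.2f}")   # ~0.89

Even a single clear counterexample moves the belief noticeably, which is roughly the dynamic playing out upthread.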


Like I said, that prior is wrong regardless of how comfortable you feel about it. The real question you should be saying yes to is whether a microwave with more USEFUL features would be more expensive. A microwave with the "feature" that "it has a 10% chance to self-destruct each time it finishes heating something" would cost less, since no matter how hard you try to spin that as a feature, a reasonable person would see it as a defect.


On the increasingly minuscule chance that you're not just trolling (posterior probability!), it should be trivially obvious that what makes TV-smartness a "feature" but not stochastic microwave self-destruction is not some objective measure of usefulness, but rather the fact that only the former is marketed as beneficial with the reasonable expectation (on the part of the marketers) that some/many consumers will believe it.

If you truly don't understand this distinction, then this whole conversation (indeed, this entire posting/thread) is hopelessly beyond your comprehension.


A bold claim coming from someone who thinks "It's not a bug, it's a feature" is successful marketing rather than a punchline.


> A bold claim coming from someone who thinks "It's not a bug, it's a feature" is successful marketing rather than a punchline.

Thanks for confirming my suspicion; the fact that "it's a feature not a bug" is pretty unethical has nothing to do with whether it can be successful or not. It's a useful logical tool to learn to distinguish "is" from "ought"; whether or not you think Smart TVs are better than dumb TVs has no bearing on whether they're marketed as such and, most importantly, believed as such by the majority of consumers.

Hell it's not even one of the worst heuristics in play: (higher) price and popularity of a product as a heuristic for quality may be even worse than "more features" as a heuristic, and these two are extremely prevalent. And yet just because I don't like them doesn't mean I pretend that these tendencies simply don't exist.


Depends on the market. Where I live, Smart TVs are easily about 2X the price for a similar "dumb" TV.


All well and good until the "smart" part fails and makes the "dumb" part unusable. It's only a matter of time until it happens. I would rather take my chances with a "dumb" TV and add the peripherals I want.


Smart TVs tend to have more complex UIs than dumb ones, which are annoying if you don't intend to use the smart functionality, as well as confusing (in terms of duplicated functionality) to less technical users.


And also designed by people who have no business doing UIs in the first place, which makes them crappy. It's a similar situation to the "value-added" software from printer vendors.


You can use a Chromecast with either, but one of them can also spy on you without your consent.


If you don't trust the TV manufacturer not to include spying devices in their product, there is no reason to believe the "dumb" TV you bought from them is any less capable.


And how exactly is a dumb TV with no WiFi radio, ethernet port, or any other forms of outgoing communication going to phone home? Please enlighten me.


How can you prove that your TV doesn't have a WiFi antenna inside? There's plenty of space to hide something like that in.


... and what network will it be connecting to? Sure maybe a free hotspot here or there, but many of these TVs will not have access to even those.

And hell yeah you can prove it's not in there, either with a quick wireless scan or even a simple physical tear down of the housing which can then be put back together.

Show me a dumb TV that has been wired up to spy on people that has been in the wild before.
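
For what it's worth, the "quick wireless scan" part is easy enough to sketch. Here's a minimal example in Python with scapy, assuming a wireless card already put into monitor mode as "wlan0mon" (the interface name and setup are assumptions, and it obviously only catches things that are actually transmitting):

  # Print every unique 802.11 transmitter MAC heard for 60 seconds, e.g. while
  # the suspect TV is the only powered-on device nearby. Requires root and a
  # monitor-mode interface (created beforehand, e.g. with airmon-ng).
  from scapy.all import sniff, Dot11

  seen = set()

  def note_transmitter(pkt):
      if pkt.haslayer(Dot11) and pkt.addr2 and pkt.addr2 not in seen:
          seen.add(pkt.addr2)
          print("transmitter:", pkt.addr2)

  sniff(iface="wlan0mon", prn=note_transmitter, timeout=60)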


You can do those things, but you're not going to.


> You can do those things, but you're not going to.

This is a 100% trash point. I have yet to see an unknown device on my network from a dumb screen, let alone any additional microphones or cameras on said dumb screen. I have also not seen or heard of any reports of one becoming a spying-type device. You mean to tell me the dumb panel I bought, which is from a major manufacturer with known teardowns and a ton of buyers, managed to sneak this hardware in (even something like a cellular radio) and nobody noticed?

I'll repeat this for you so that perhaps it will sink in this time: Show me a dumb TV that has been wired up to spy on people that has been in the wild before.

Please come equipped with citations, references, and examples before commenting.


No, because you're not worth spying on. Show me a unicorn. Please come equipped with a living sample and a detached horn before replying.


> Please come equipped with a living sample and a detached horn before replying.

You cannot show up to a thread and demand that a negative be proven. That's not how that works, and the burden of proof is on you.

> Show me a unicorn.

If you're not going to be constructive here, just stop commenting.


I did not demand a negative be proven. The only one here who cares about proving anything is you. I was only using a rhetorical device to illustrate the absurdity of what you said. Sorry if that wasn't clear, I realize English is probably not your native language.


I don't trust the TV manufacturer to secure the initial release of their "smart" OS, and I surely don't expect them to keep it patched. Anybody who breaks into your video-enabled TV can see everything you're doing...


Not if it isn't connected to your network.


> Not if it isn't connected to your network.

Not if it isn't connected to a network. But it's hardly as if devices with manufacturer-paid cellular connectivity built in and preconfigured don't exist, so there's no reason that it has to be your network.

To the extent that manufacturers are either monetizing networked services or deriving useful data from them, making them independent of end-user networking choices has a pretty clear benefit, and I wouldn't be surprised to see it become a common thing in Smart TVs.


Valid point. But then you're left with a bunch of crap in the UI that is unnecessary and annoying at best.


Really? I'm not. My smart TV defaults to just displaying whatever the incoming signal tells it to.

And if some manufacturer decides to clutter things, I just won't buy it. Problem solved.


It's not too much of a stretch for smart TVs to start including cellular connectivity (and advertising it as "zero-setup").


And who will pay for that cellular connectivity?

This is getting well into ad absurdum (a common problem on HN). Smart TVs are perfectly fine for the person who doesn't want a smart TV. Just don't use any of the smart TV features and don't connect it to the network. Problem solved.


Nobody thought Amazon would put a cell phone antenna into an eBook reader, but look what happened! :)


That's exactly the example I was thinking of as precedent.


In my experience, "smart" tvs almost always offer a degradation of user experience.


A smart TV can be more expensive and have unwanted features and bugs... a hidden webcam is one example, and another is when they stop sending you software updates for your TV because it is "too old".


Projector!


Roughly OT: I have yet to have seen anything that makes me look forward to the IoT, and when I talk to the voices of vendors in my head I'm basically telling them "Stay out of my refrigerator, my furnace, my toilet and my smart doorlock that I'll never have."

I just don't feel that anything relevant to the IoT is missing from my life. At all.


I can see "Check if I left the stove on and turn it off remotely" being a neat thing to have. But not neat enough to let a 10 year old appliance with no software updates and the capability to burn my house down be accessible to the internet.


Yeah, but chances are you won't use the functionality often enough to configure it in the first place.

That's the real problem with the Internet of Things: most of the things we own are not all that useful when we're not in close proximity to them. Thus, not only are users and manufacturers unlikely to update them in the future; users are just as unlikely to connect the thing in the first place.

Home automation through things like light switches, etc. has a use case, but those products have been available and Internet-connected for over a decade and we still haven't seen wide adoption. I recently priced it out -- it would cost me over $5000 to swap out the outlets and switches in my house for Insteon devices. And that's just the hardware; not the electrician required to connect it all or the time I would spend configuring everything. Home builders aren't going to spend that kind of money building this into anything but the most high-end homes -- the IoT hardware alone blows through the fixtures and appliances budget that most home builders allocate.

People want systems that "just work". IoT does not "just work", and none of the current or announced implementations address the big problems around configuration (namely, every house is different so every implementation is custom). And in some cases like a stove or a refrigerator, any amount of configuration is going to be too much.


About home automation of light switches: I put in ~10 X-10 devices, plus the ~20 controllers to activate them, circa 2002. I did it myself (I'm an EE), so the installation cost was 0.

It worked OK for a while, but the setup was not robust and eventually some controllers would not work some devices at some times. The annoyance factor in going from 0 errors to a 1% error rate is HUGE.

Five years passed, and I have been slowly replacing all these X-10 devices with hard-wired switches or with Insteon. Of course, the original 1959 wiring paths (12 gauge Cu FTW) still work fine.

Now, when I see connected/automated homes in design mags, all that tech seems more like a long-term maintenance headache than a desirable feature. If I had an unlimited budget, I would not build those features in, I would just install conduit and run old-school copper wires through it.

My lessons: (1) The design life for a home is decades, the refresh rate for home automation devices is years; (2) Upgrading/tinkering is fun the first time -- but only the first time; (3) Your spouse hates it more than you do; (4) The existing device does one thing without fail, replacing it with a device that does more things but sometimes fails is not a net gain.


Agreed on all counts. The only ones I can see taking off in any sense are easy to retrofit things like light bulbs and the occasional wireless plug controller like a WeMo if you have a particular load that needs one, not as a standard on every outlet.

But prices need to come down, light output needs to go up (wireless bulbs seem to top out around 60W equivalent), and switches need to not be a $60 optional accessory (looking at you, Hue Tap). Controlling lighting with your phone is neat, only being able to control lighting with your phone sucks.

Long term, I'm sure I'll end up with more IoT devices. But it'll be because they got shoved down our throats and I didn't want to pay more to avoid new "features," not because I wanted a wireless microwave.


Insteon seems to have the best ecosystem of IoT stuff. But you're right; the lack of compatibility across all these different "open" standards is frustrating as a user. You basically have to pick one platform and stick with it.

But I'm personally not convinced by IoT. Every implementation I've seen adds complexity without really improving functionality. I have a few Insteon switches in my house, but only in a few places where I need them (e.g. on the lights in front of my house so they can be turned on with a timer when I'm out of town).

I actually have a "connected refrigerator" made by Samsung. I tried for 5 minutes to get it set up before I gave up. I really couldn't think of what I would need to use a connected refrigerator for. IMO this is the usage model for the vast majority of IoT devices: if I, a major geek about this stuff, am not willing to spend more than 5 minutes setting it up, who actually cares enough to bother with any of it?


My guess is that for each person some small thing will make sense, which is sufficient to drive the costs down and the technology forward.

For example, good sleep is really important to me. So I used the Philips Hue bulbs and a NUC to build a smart home lighting system that behaves much more like daylight. When I describe it to people, quite a number really want it. Not bad enough to get my code off GitHub and install it, but they'd pay something extra for it. And I now see pre-packaged commercial products getting proposed, so I'm sure they'll have options.

The interesting thing happens when that cycle drives the costs a fair bit lower. Look at phones, for example. Smartphones were a weird, exotic thing. Then they were a high-end consumer thing. Now they're the default. A couple years back I went into a store looking for a cheap phone and asked for one without a web browser. The clerk looked puzzled and said, "Well, they all come with web access." I'm sure I could have ordered a dumbphone somehow, but it would have been harder and cost more.

So my question is: how much extra will you pay for your house not to be connected? Because that's the real test for me about where the IoT thing will end up.


I don't think so. Cost is one side of the equation, but usability is another. And at the end of the day, I don't think there's enough consumer demand to push IoT where it could be. I think eventually we'll get there -- at some point in the future, it will just be cheaper to buy an Android SoC than a handful of cheap analog microcontrollers -- but for the next 10+ years we're going to keep hearing about IoT as "the next big thing" because there's no consumer demand for it.

The situation with your daylight system is a perfect example. People think "Oh, that's cool" but won't actually get off their ass and spend a few hours setting it up and configuring it to their liking. Unless it comes out of the box, it's a non-starter.

IoT devices as they are today require a systems integrator to come in and tie everything together and configure them. So your lighting system is integrated with a presence system that is also integrated with your thermostat. Each of these has to be configured on a case-by-case basis, because no two homes are alike. But if your products require an integrator, suddenly the integrator is your customer, and you begin sacrificing end-user focus for things that make the integrator's job easier.

And then you realize that the largest consumer of IoT platforms isn't consumers themselves, it's installers like ADT. Their focus is on selling simple products that require minimal support, not feature-rich ones. Why? Because users simply aren't interested in paying any amount of money for advanced features. Until that changes, IoT is going to be a niche hobbyist market. Even if it's built in to every device you buy, if it's not worth anything to you, you're not going to configure it in the first place.


I think we might agree entirely. I think mobile phones were a relatively fast cycle because people were pretty happy to replace their phones every few years. That's not true with, say, washing machines or furnaces. So it will definitely take a while.

But if you look at devices with cycles in between, I already see it happening. 10 years ago when I bought a stereo receiver, I bought something entirely dumb. Last year I replaced it and ended up with something that was internet connected. Not because I really cared, but because the equivalent model came with that. And in retrospect I'm glad; their phone/tablet app is a way better remote control than punching a bunch of mysterious, no-feedback buttons. The same thing happened with my DVD player; when the old one died I just bought whatever Consumer Reports recommended and it too is internet connected.

I think that you're right that the development of pluggable Android is what will push this forward. And Google clearly agrees; their project Brillo is surely one effort among many.


Right; but in the case of a stereo receiver (or any TV-connected devices), the IoT model makes sense: this is a device you interact with on a regular basis. There is already an ecosystem of devices that support things like Bluetooth, AirPlay, etc. so when your receiver has them, it's actually kind of useful.

I've always thought that the reason for keeping Android as open source was to enable this use case. Eventually, Android-capable SOCs will come down to a very low price point, and for these use cases they don't need to push much more than a few kb per second. But it retains the advantage of having a development ecosystem that is well understood and widely available (you can't throw a rock in China or India without hitting an Android developer).


> I used the Philips Hue bulbs and a NUC to build a smart home lighting system that behaves much more like daylight

Neat. Did you try to coordinate room lighting color changes with computer screen colors controlled by f.lux?


Flux can actually do that directly! https://justgetflux.com/news/pages/bigupdate/


I didn't coordinate it, but they are roughly in sync because they're doing similar things. Code here:

https://github.com/wpietri/sunrise

It's minimally sufficient for my needs, but I'd be glad to work with others to expand it.
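
For anyone wondering what driving the bulbs toward a daylight-like schedule looks like in practice, here's a minimal sketch using the third-party phue library; the bridge IP, light name, and the crude hour/mired schedule are assumptions for illustration, not how the linked repo actually does it:

  # Nudge a Philips Hue light toward cool/bright during the day and warm/dim
  # at night. Run periodically (e.g. from cron).
  import datetime
  from phue import Bridge          # pip install phue

  bridge = Bridge("192.168.1.2")   # press the bridge's link button on first run
  bridge.connect()

  hour = datetime.datetime.now().hour
  if 7 <= hour < 17:               # daytime: cool white, full brightness
      ct, bri = 230, 254           # ct is in mireds (153 = coolest these bulbs go)
  elif 17 <= hour < 22:            # evening: warmer and dimmer
      ct, bri = 400, 150
  else:                            # night: very warm, very dim
      ct, bri = 500, 40

  # transitiontime is in deciseconds, so 600 = fade over about a minute
  bridge.set_light("Living room", {"ct": ct, "bri": bri, "transitiontime": 600})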


I would say only the DIY part of the IoT is something to look forward to. When all it takes is $20 and a few lines of Lua to set up a display, some sensors etc. that's connected either to the actual internet or just your private wifi, that's actually a cool thing. Basically any time you've thought "Hey, I could use an RPi and automate this.. Nah, they're too big and expensive."


That will only matter if companies don't train customers to expect frequent upgrade cycles. It's worked for consumer electronics, but there will be strain in previously "came with the house" objects. You'll need to sidestep people's habits, in the way that the microwave did, perhaps.

And we're only worried about vendors going out of business because it's the early days and it's largely startups pushing the trend. With a Samsung or Apple, it's more that they'll quickly (by home equipment standards) stop supporting whatever doesn't stick to the wall.

There is a case to be made for self-contained objects that don't derive most of their value from an ecosystem, but work normally with no network. Work up from a toaster, not down from a computer.


Are you arguing that frequent upgrade cycles are a good thing?


Definitely not, for the consumer. For the manufacturers, sure. Supporting a cheap piece of hardware for several years when consumers are going to demand it works with HomeKit, Google @Home, or whatever it's called now, and their descendants, and Microsoft's play, and the many competitors yet to come — that'll suck.

Traditional hardware makers are going to have to factor the support of this software component into their prices now.


I bought an LG smart TV a while ago. It stopped receiving firmware updates after a couple years and is now way behind current models in terms of features--and, I can only imagine, security patches.

I will go out of my way to buy a dumb TV next time.


Mine updated itself to add a big banner advertisement to the main screen. It originally had no ads.


> I will go out of my way to buy a dumb TV next time.

How? I don't see any dumb TVs for sale.


Isn't dumb TV just another word for monitor these days?


> Isn't dumb TV just another word for monitor these days?

A dumb TV still has a TV tuner, so they aren't equivalent.

But unless you plan on plugging it directly into an antenna for OTA broadcasts, they are basically equivalent.


They certainly seem to be similar! But, a number of differences do arise from intended use.

- TVs don't need anywhere near the same level of display quality. They are viewed from ten feet away and do not render small text, so they don't need as clear a picture. They also don't really have to go over 30fps, and latency is less of a concern. Basically, they have looser constraints in many ways, making them cheaper

- TVs have a plethora of inputs of many formats

- TVs have remote controls


Because you no longer need a tuner? Could be.

But I doubt you could buy a 60" glass front monitor for the price they sell TVs :)


Heh, I own a Google TV (from Sony.) It's pretty amusing to me that a Google branded TV is currently running Android version 3.2


But, why do you care about the lack of updates if you would be happy with a 'dumb' TV? Just plug your desired source into the input HDMI socket of your 'smart' TV, obsolescent or not, it won't matter, and move on with your life.


They don't even need to go bankrupt. I have an LG television that has been promising new widgets ever since I first got it, and no new widget has ever appeared.

The impetus there was that LG changed its net-connected TV platform in 2011, and instantly dropped all support for older devices. One would think that a final update could remove that "coming soon" box from their proprietary added-feature screen, but they haven't even bothered to do that.

So I can watch NetFlix and YouTube on that device, but not Amazon instant video, or Crackle, or Crunchyroll, or Vimeo, or any of the dozens of selections available to better supported platforms. Having learned my lesson, and aware of the increasingly stalkerish behavior of "smart" televisions, my next TV purchase was very specifically a dumb screen. If I want an internet-connected service now, I use the Wii, or XBox, or the extended desktop from the nearest computer.

I will likely refuse to buy any network-enhanced appliance in the future, unless I am able to root/jailbreak it and install software without the manufacturer's stamp of approval. I probably wouldn't do much beyond installing ChillBox, or FridgeBSD, or CryogenMod, or whatever, but it feels like the possibility might keep them a little more honest. Because you know that refrigerator hackers would be capturing and picking apart every packet that thing sends out, quickly discovering that every time someone closes the door, it sends a tattle out to fridge-use.org about how long you stood there with the fridge door open, along with before-and-after photos of your food.

Though it would also be embarrassing if they marketed value models of a product line by disabling features in software/firmware, and some NetBSD-loving punks could come along and write a simple script that turns the doohickey that retails at $200 into the one that sells for $800.

So it's already too late for me. "Smart" appliances are just another low-capability computer that I will have to support as the in-home IT guy. And I will have to presume that they come pre-loaded with all manner of crapware and spyware. I would forever need to be checking on chipsets and revision numbers and compatibility lists. No thanks. It's hard enough managing the congestion on the home WiFi already.


Probably the only reason why your ca. 2005 Linux media PC won't work in your current living room is that its hardware can't support your new 4K flatscreen. Open software could go some way towards addressing the update problem, but it would need to have a much more robust code-signing infrastructure behind it, to avoid becoming an even worse morass of security problems than the morass we have today. (Generally, not in open software.)


Heck, just look at the huge number of Android phones that you can buy at retail with an out-of-date version of Android with known security holes which will never be patched or updated. And those are relatively complex devices which are trivial for users to update if given the option, based on the adoption rates for e.g. iOS updates.

A company doesn't have to be out of business to not do security updates; they can not do security updates starting day one. There was an article a while ago about tons of home router vendors with insecure software from a third party, where the third party had resolved security issues years ago but the vendors had never bothered to update, leaving hundreds of thousands of devices vulnerable over the last few years.


Actually, one month ago in Amsterdam I used a Philips smart TV that complained about "YouTube is not supported anymore" yadda yadda. Planned obsolescence at its best (this TV probably wasn't more than 3 or 4 years old).


I know it is just an OS kernel and security is far more comprehensive than having a secure kernel, but this is an amazing start:

https://sel4.systems/

I'm surprised IoT conversations are still happening with Linux as a contender for the OS, let alone Windows.



There is a huge cadence mismatch between software cycles and capital good replacement cycles.

How well would you say that Tesla is coping with this as a company? For that matter, what about Apple?

I can't even imagine the number of security patches that have gone into the Linux kernel in the last 11 years.

Let's take a step back and think about this statement. Isn't this insane? We know enough to be able to build something much better than this. The reason that we don't, is that we've just kept on pragmatically building on what we had before. We're like a corporation that keeps pouring money into its "stovepipe" system because we keep on making short-term decisions. (Somehow "stovepipe" has come to mean "vertically isolated," but I seem to remember that it also used to refer to the tendency of iron stovepipes to corrode and need constant patching.)


How well would you say that Tesla is coping with this as a company? For that matter, what about Apple?

Apple just kind of assumes that you have the latest shiny, because why wouldn't you? This induces a phenomenon I call the Apple Turnover: when a software update aimed at new Apple things comes out and makes your old Apple thing not run so good anymore. Sluggish iPhones are the hallmark example today, but I was bitten badly by this in the mid-2000s when Panther would no longer compile C++ files. You see, one of Apple's OS updates for Panther came with Tiger's libstdc++, which used the new Itanium ABI. This was so Xcode for Tiger could compile programs to run on Panther, but without heroic efforts to set up compiler flags in every package you built to link against the old static libstdc++, compiling on Panther would link against the new libstdc++ by default and fail horribly, rendering C++ code uncompilable. (Deleting or renaming the new libstdc++ was not an option; it was a heavily depended on system component and I think even the header files were changed for the new library.) And a lot of stuff depended on C++, including C-API stuff like SDL. And Apple did fuck all to fix it.

So if you buy a shiny Apple toy, your choices are to commit to upgrading early in the new product cycle or risk an Apple Turnover rendering your purchase, if not useless, then degraded in functionality even relative to the same device when you bought it.

And the pisser is during the 80s and 90s, Apple gear was legendary for running well, and being supported, many years if not more than a decade after its purchase date.


Eh, I think Apple is doing ok with Macs and iPhones at least.

I'd guess the typical hardware replacement cycle for a computer is 3-5 years. My 2009 MacBook Pro is on Mountain Lion. It runs just fine, and the OS continues to receive security updates. Both 10.9 and 10.10 also officially support my machine, I just haven't bothered to upgrade. Rumor is that support for older machines is one of the areas of focus for 10.11. We'll see in a week or two.

The typical hardware replacement cycle for a smart phone is probably 2-3 years because of contract upgrades. My iPhone 5 is running 8.2 and runs just fine. 8.3 supports it as well, but I need to clear some photos off to make space to run the installer.

Looking at the entire Apple installed bases for computers and phones, Apple users seem to do a very good job of keeping up with supported OS versions.

I don't know as much about the iPad. At work I have an iPad 2 that is running iOS 8 and seems to work fine.


For devices that _can_ have the OS upgraded - stats seem to show a remarkably high level of upgrading by users, at least for major version numbers:

http://david-smith.org/iosversionstats/

If you scroll down to "Device Breakdown (sorted by Usage)", you see a bunch of devices with 90+% stuck on the "last supported OS version" - 97% of iPad1G on iOS5, 97% of iPhone 3GS and iPod Touch 4G on iOS6, 91% of iPhone 4 on iOS7. Even including those devices, they're showing 75% of devices on iOS8 and 20% on iOS7.

I'm in the "left behind" category - with both my iPad1 and single-core Mac Mini being "stuck" at iOS5 and OS X 10.6 respectively. I'm somewhat disappointed at the lack of patches for the known security holes in iOS5 - especially since those numbers still show over 3% of the iPads in use are not upgradable past iOS5...


When updates are free, what are the realistic alternatives? Android devices, for example, just reach end of life much more quickly. Given the option to update at the expense of performance or no option at all, it at least seems the better of the two choices.


Maybe we need to start being more realistic about what things "cost" in performance and start evaluating the cost/benefit ratio. It's OK to say no to a feature if it is going to degrade the user experience. Just because we have a phone with a quad core 2 Ghz processor does not mean we need to design the OS to require it.


I didn't say the Android situation was good, although with CM I am generally able to squeeze a few months to years of additional life out of an Android device.


I think this is a strategy by Apple, and will eventually backfire. But then again what do I know about consumer preferences, I am typing this on a Toshiba Satellite PS25-607--that is literally held together with tape, and opened so many times I can visualize the screw holes?


I honestly think most companies don't care about longevity anymore? I look at all the appliances in my home, and just plan on throwing them out when they break down. I remember years ago, a car manufacturer did a study on what consumers care about when buying a vehicle. It wasn't engine size, or the stuff they were advertising; it was cup holders. I don't think we have evolved that much? (Vehicles were forced to last longer because they were such a huge expenditure, and even the most thoughtless buyer was forced to look for longevity.) I literally think durable-goods manufacturers (with the exception of most automobile companies) design products to fail within three years? I can hear the CEO now, "Get the name in their head, and we have them for life--my life as a CEO?". Worst case scenario, the product becomes a joke like Fiat did in the 70's; the CEO is long gone, but during his reign, the books looked good?

The appliances in my house were bought because of all the bells and whistles. I didn't buy them, but am forced to work on them when a sensor fails. They have gotten so complicated, parts so expensive, and service manuals so hard to get that I just throw them away when they fail. I don't like it.

I think, if consumers start demanding it, we will go back to buying based on longevity, and not on the newest feature? Every time my dryer's alarm goes off, I am reminded it's a durable good. It's almost reminding me to save up?


100% agree. What I'm really getting at it is that you need to have an upgrade strategy. Obviously these planes didn't. The military actually has a very good strategy these days simply because they were burned badly by this problem in the 80s and 90s already. I imagine the commercial airlines are currently learning this lesson.


Exactly because of this I think Fred Wilson is spot on in saying expensive things will stay dumb and the smarts will be in the cheaper replaceable gadgets[1]. Cars have already done that with audio (bluetooth works reasonably well). Hopefully one of the video casting solutions will solve video as well so we can stop mounting smartphones on the dash and just cast the images into the car's screen.

[1] http://avc.com/2011/12/cheap-willl-be-smart-expensive-will-b...


This reminds me of the rumours that Apple would release a TV. Frankly I never believed it and (IMHO) you have to be pretty ignorant of Apple to think the rumours had any credence whatsoever.

The upgrade cycles are simply too long for TVs.

But more importantly, Apple's whole model is to treat things like this as just "dumb complements". Your mobile device, from the carrier's perspective, is increasingly becoming a dumb Internet pipe (first with the App Store, later with the likes of iMessages and how LTE works, etc).

The TV for Apple is simply a dumb display to stick an Apple TV into. A sub-$100 device you can replace every other year if need be. A $3000 TV is replaced a whole lot less often.

Why would Apple want to be in the business of (eventually) supporting 5+ year old TVs for such a low-margin business? Or what makes you think users would pay for the Apple brand and/or upgrade more often to make it worthwhile?

So as far as IoT goes, I have trouble seeing a future where someone says "I need to buy new lightbulbs because mine don't get firmware updates anymore" or "I need to buy a new fridge because it can't talk to my new phone".


Smart TVs doubtless sound like a great idea if you're a TV manufacturer who desperately wants to shorten upgrade cycles and sell higher-margin "value add" in a world where so many people already have HD TVs that are pretty much as big as they have room for. For everyone else? Not so much.

Not only does it make sense to put the smarts and connectivity in devices that are cheap, that people already have anyway, and that can be easily upgraded, but the user interface on a phone/tablet/etc. tends to be far better than a typical remote.

In general, I tend to prefer the Chromecast model of just casting video from a general purpose device, but the Kindle stick and Apple TV are OK as well. By contrast, I rarely used the Smart TV features on my Panasonics because they were just so painful to use.


Considering the ridiculous amount of testing required (rightly so) for any change on the jetliners, they probably cannot keep up with the patches.

One Windows-running company I worked for a long time ago simply didn't apply the patches. They said it broke things...


Depends on the risk it poses and the attack vectors available. If you can assume jetliners use custom hardware with no web tie-ins and no physical access (USB/otherwise), then security patches covering attacks that require those vectors are kind of moot, aren't they? A whole lot of testing and verification would go into applying a patch that is rendered useless by other security precautions.


I've had demos ruined by iOS 8.x patches! (By changes to pretty well established code, like NSTimer and UIImageView updating.) I could easily believe that Windows patches would break things.


> Obviously this is the entertainment system and not something more critical

According to recent reports, the entertainment system is not fully isolated from the plane's navigation systems. However, Boeing has denied this.

http://www.cnn.com/2015/05/17/us/fbi-hacker-flight-computer-...


That's because the researcher didn't do that on an actual plane. In fact it wasn't even a simulator, it was a virtualized simulator where he put the systems all on the same bus. On top of that the guy was only able to listen in on traffic.

The entire story is bogus.

The systems are isolated just fine.


This particular device was mentioned in a discussion here awhile back: http://www.teledynecontrols.com/productsolution/ned/overview...

One of the collateral PDFs for the device shows it connected to "IFE (In-Flight Entertainment)" (http://www.teledyne-controls.com/pdf/NED_Brochure.pdf).

Just the existence of such a device gives me reasonable cause to believe that there's more interconnection than "isolated just fine". While I have a lot of respect for the engineering processes that go into aviation systems, I've also had enough practical experience in IT security to know that blind faith in things being done right is foolish.


Can you provide a source?

[edit] I've seen the claim made more than once in this thread without a source. I understand that the article was probably very wrong. I'm just interested to know if a reputable publication has confirmed this or if it's just conjecture.


I know this doesn't mean diddly squat, but my sister worked for a manufacturer of airline wiring harnesses up until very recently, and when I asked about it, she mentioned that they were not interconnected or exposed to one another. I don't believe they did the really huge jets, but regional and private sized jet harnesses instead, so it could be different in that aspect also.


Can you provide a reputable source to prove that President Obama is not a disguised lizard alien? ☺ I mean the original article was so obviously nonsense on the face of it that no refutation should be necessary.


The IFE system on a 747 I was on a couple of years ago was running Windows 98, complete with graphical boot screen :-) (Edit: Removed airline, I like them too much.)


Okay well then you must be lying because that's just not possible (the airline thing, not the Win98)


I think he may be trying to tell us that they're holding him hostage and we should call the police.


19 January 2038 will be a very interesting day, since that's when signed 32-bit Unix time will overflow. I don't want to think about how much software relies on that, especially critical embedded hardware.
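
For anyone who wants to see the exact rollover, the arithmetic is simple (plain Python, nothing embedded-specific):

  # A signed 32-bit time_t counts seconds from the Unix epoch; 2**31 - 1 is its
  # maximum value, and one second later it wraps around to -2**31.
  from datetime import datetime, timedelta, timezone

  epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
  print(epoch + timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07+00:00
  print(epoch + timedelta(seconds=-2**31))      # 1901-12-13 20:45:52+00:00 after the wrap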


An embedded system with a 30-year lifespan and no maintenance plan?


Interesting. I worked on the support team for the software that creates and files the flight plans for UAL. It was a horrible piece of software/architecture with many outages. We tested in production daily and had direct access to the databases. I'm pretty sure they never changed their policies... So yeah, this sounds very much like it!

Edit: Just got confirmation, this software was the root cause. No hacks whatsoever!


This is what I love about HN. There are people that actually worked on x system-in-the-news wandering around. Thanks!


Good thing no one on the internet ever lies.


Interesting: United added a pretty lucrative bug bounty program (a rarity among airlines) a couple of weeks ago. http://www.united.com/web/en-US/content/Contact/bugbounty.as...

It would be ironic if the bug bounty program directly or indirectly led to this.


Except:

  Bugs that are not eligible for submission:  
  * Bugs on internal sites for United employees or agents (not customer-facing)  
  * Bugs on onboard Wi-Fi, entertainment systems or avionics


The onboard wi-fi and IFE are provided by external vendors (Panasonic, LiveTV, etc) and United probably doesn't want to pay for their bugs.


More importantly, I think they are trying to avoid giving any incentive to hack a plane mid-flight. The policy also stated that criminal actions may be pursued in cases of attempting to access these systems.


Whether or not they pay bounties for them, they're definitely paying for them.


Ironic? Maybe "fitting" instead?


I think the use of the word "ironic" was quite fitting.

Ironic: A state of affairs or an event that seems deliberately contrary to what one expects and is often wryly amusing as a result.


I wouldn't want to be paid in miles.


A lot of people seem to be jumping to the conclusion that their systems are malfunctioning because of being hacked rather than their systems malfunctioning on their own. Hard to know what is happening from the outside, but their systems may just be bad.

That said, the plane communication protocols aren't terribly secure, so it's certainly feasible someone is playing around with them. Maybe they'll decide it's in our interest for us to know at some point.


Is this what an actual cyberattack / cyberwar looks like?

Imagine how much money is being lost right now as a result of this disruption. Somewhere hackers are popping champagne.


UAL stock took a $1 nosedive (hah) at 10am.


Find out who made money by shorting it, there's your list of suspects, or at least co-conspirators (if this turns out to be a hack).


Or people who use twitter and like to gamble.


Or bots who subscribe to Twitter feeds and open trades based on sentiment


In that case the suspects could be the ones buying it after it dropped $1, since it went up afterward.

As long as securities react to hacks, there will be a massive incentive to 1) hack, and 2) overstate the hack's significance. Furthermore, as bots become more sophisticated, confusing them becomes easier. If you know bots will short CompanyX when "CompanyX hacked" hits the headlines, then you have an unfair advantage just by being the first to know of the hack.


That's why I said "list of suspects" instead of "culprit".


Why wouldn't the stock drop more?


More likely real hackers (programmers), working for united, were trying to roll out to production and experienced an issue.


It's interesting that the Wired article mentions the recent controversy about claims that aircraft systems can be hacked, but explicitly ignores any possible relation to the incident last week, where a bigoted employee denied a Muslim passenger an open can of Diet Coke because it "might be used as a weapon" while giving the passenger in the adjacent seat an open can of beer.

The response from United was unapologetic and absolutely disgraceful: https://hub.united.com/en-us/News/Company-Operations/Pages/s....


The blowback hit them and they changed the response (and removed the original response)

>>> UPDATED: Jun 3, 2015 at 1:45PM

While United did not operate the flight, Ms. Ahmad was our customer and we apologize to her for what occurred on the flight.

After investigating this matter, United has ensured that the flight attendant, a Shuttle America employee, will no longer serve United customers.

United does not tolerate behavior that is discriminatory – or that appears to be discriminatory - against our customers or employees.

All of United’s customer-facing employees undergo annual and recurrent customer service training, which includes lessons in cultural awareness. Customer-facing employees for Shuttle America also undergo cultural sensitivity training, and United will continue to work with all of our partners to deliver service that reflects United’s commitment to cultural awareness. <<<


A few things about that:

1) The beer isn't free, the passenger paid for the entire can or used a 1K drink chit.

2) UA flight attendants are famous for making up rules and many try to avoid handing out entire cans of soda, and this one wasn't even a United flight attendant.

3) What on earth would that have to do with today's event?


Whether or not the beer was free has absolutely nothing to do with this. It's not about the beer or diet coke, but about the blatant bigotry exhibited by an employee against a passenger on a United flight, as well as inexcusable behavior by another passenger.

Similarly, whether or not this was a United flight attendant is also of absolutely zero relevance. They may have technically been an employee of Shuttle America, but were part of the cabin crew and a representative of United on that flight, working under the United brand and wearing United uniforms. Therefore, when United releases a statement making no apology for abhorrent behavior exhibited by their representative, it reflects directly on them.

It may have nothing to do with this event, just as Chris Roberts tweeting that he hacked into the in-flight entertainment system may have nothing to do with this event. It's merely interesting that Wired explicitly ignored the actions of United as having any possible relationship to this event.


Hope it's not due to people using unopened cans of Diet Cokes as weapons.


Pretty sure that if it was a hack or credible bomb threat, that they would not have been flying again after only an hour. A scenario like that would suggest just a normal IT glitch and a reboot/restart to fix and validate that it's back to normal.


I'm honestly surprised that critical software at the core of more operations-heavy companies does not go down more often. Possible causes range from a software bug to database master failover to a data center outage, but realistically there are single points of failure at delivery companies, airline companies, and more that could stall everything. I'm surprised this software doesn't break more often. When was the last time that UPS had delivery delays due to a software outage?


I was one of the people grounded this morning. They said they were having mechanical problems. They took the plane out to the runway and taxied it around "to try to figure out what's wrong," then let us on.


From what I understand of the Chris Roberts fiasco, their avionics systems weren't airgapped from the other systems. If that is the case and not just hype, then no fucking wonder shit like this can happen.


So it's clear, Chris Roberts was referring to tests conducted in a simulator, not a plane. In fact it was a virtualized simulator at that, and one where the systems were purposefully put on the same bus in contradiction of what you'd find on an actual plane. On top of that, his claims to affect the engine were bunk and the most he accomplished was to listen in to network traffic, even with such a contrived scenario.

The whole fiasco is BS. Airplane networks are as safe as it gets.


I'd agree here, and only add that if, in fact, he had accomplished manipulating command and control packets on a live plane, I doubt the FBI would have let him walk with a stern talking to and the loss of his personal computer effects.


Thanks for the info, that's why I added the qualifier about if, because I suspected it might be BS.


What does "air gapped" mean?


"Air gapped" means that there is a physical separation between two networks (i.e., there's "air" between them). If two networks are physically connected, there is always a chance that someone could bypass any software security restrictions. If the two networks aren't physically connected, there's no way to gain access to one network from another.

In this instance, the inflight entertainment network and the avionics network were physically connected, and the security researcher was able to gain access to the avionics network by connecting to the inflight entertainment network.


The phrase no longer makes literal sense in this day of ubiquitous wireless communications.

"If the two networks aren't physically connected, there's no way to gain access to one network from another" is no longer true.


That's true, it's a bit of an anachronism. But everyone knows what it means.

Like skeuomorphism. Nobody uses floppy disks anymore, yet the floppy logo is universal for "save".


It's depressing that the various rounds of Win 10 logos floating around all use the floppy-for-save concept. Floppies haven't been in vogue at all this century.


But "floppy for save" has been ubiquitous. Its not so much a skeumorph as a well-recognized ideograph.


The phrase makes literal sense. Radio is a physical connection. Computers can be built without the wireless transceivers in order to avoid that physical connection.


As long as we're being overly literal and pedantic, I'd like to point out that wireless connections are still physical, in that they utilize physics to make the connection.


> An air gap or air wall is a network security measure that consists of ensuring that a secure computer network is physically isolated from unsecured networks, such as the public Internet or an unsecured local area network. It is often taken for computers and networks that must be extraordinarily secure. Frequently the air gap is not completely literal, such as via the use of dedicated cryptographic devices that can tunnel packets over untrusted networks while avoiding packet rate or size variation; even in this case, there is no ability for computers on opposite sides of the air gap to communicate.

The case above is an example cited as a life-critical system:

> Computers used in aviation, such as FADECs and avionics

https://en.wikipedia.org/wiki/Air_gap_(networking)


You can also accomplish a one-way air gap (a data diode) using optics that can only pass data in one direction.


The security of such a thing assumes that:

1) The system can't be compromised through input data.

2) There's no other output mechanism by which an attacker could retrieve data (or watch the plane crash, which technically counts as output).

I'm not sure I buy the idea for contexts where a genuine security airgap is required.


You normally use such a scheme to pass data from the more critical to the less critical system. For example, in nuclear power plants (that's where I know this scheme from), your low-safety systems can use data acquired by a highly safety-relevant system (e.g. to display the pressure, temperature, and power generation of the reactor core without being able to influence the safety systems that primarily acquired those values).


Ah! Thanks, that clarification makes way more sense.


Can you elaborate a bit more please?

This sounds almost as if you configured a network adapter to only send and not receive, but you used "optics", so I'm guessing you're referring to some sort of fibre-based device that physically prevents light from travelling in the other direction, and thus the security is enforced by physical means?


Google for something like "fiber converter". Normally, if you run a copper Ethernet cable between two buildings it's a lightning magnet, and the next thunderstorm you'll get a big enough ground loop to blow up the gear on one side or the other, if not both. So for $50 (seriously) you can buy a gadget that's been around for at least 20 years, with Ethernet on one side and two fiber SC connectors on the other side of the box; you run non-conductive fiber between the buildings and sneer at the lightning gods, who will simply take their vengeance in another way, LOL.

All you need to do is run just one fiber jumper, the TX from one side to the RX on the other, not the two you're "supposed to" use. SNMP traps run on UDP and work just fine; old-style syslog runs over one-way UDP too. Note that your production/secure network can DDoS your IT/insecure network if enough people on the secure side try to use a DNS or NFS server on the insecure side, since it'll just spam packets forever and never get a response, so it's not like firewalls are completely pointless.

In the really old days we'd do something similar with RS-232 cables: physically yank out pin 2 or pin 3 (conveniently, TX/RX are swapped between 9-pin and 25-pin RS-232). PPP negotiates connections; SLIP doesn't care and works great.

(Edited to add: another thing we did, back when 10 meg Ethernet was "new" and just replacing thinnet and thicknet, was pulling the appropriate pins for TX. Before auto-negotiation existed, before 100 meg Ethernet even existed, you could get away with that... I suppose if you had two smart-ish switches that could be forced to 10 meg with no negotiation, you could still do this today...)

Also, there were production machines that output error, alert, and log messages to (theoretically) directly attached parallel-port printers, and there were converters that could go from parallel port to serial port (presumably for serial printers, which really did exist in the '80s). That makes a pretty good unidirectional connection from a secured device to a semi-secured logging server. Bidirectional parallel ports didn't really exist until maybe 1990 or so, and they never really standardized.

I made a lot of money implementing this kind of stuff in the 90s. It was fun.
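
If anyone wants to picture the software side, here's a minimal sketch (Python; the collector address, port, and hostname are made up for illustration). The point is that a UDP sendto() never expects anything back, so it's perfectly happy when only the TX strand is connected:

    import socket
    import time

    LOG_HOST = "192.0.2.10"   # hypothetical log collector on the insecure side
    LOG_PORT = 514            # classic UDP syslog port

    def send_log(message, facility=1, severity=6):
        # Build an RFC 3164-style syslog line; no ACK is expected or possible,
        # so a lost datagram just means one missing log line on the far side.
        pri = facility * 8 + severity
        line = "<%d>%s secure-host %s" % (pri, time.strftime("%b %d %H:%M:%S"), message)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(line.encode("ascii", "replace"), (LOG_HOST, LOG_PORT))
        sock.close()

    send_log("pump 3 pressure nominal")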


Thanks for the informational post!


> so I'm guessing you refer to some sort of fibre-based device that actually impedes light to go in the other direction, and thus the security is via physical means?

Correct. To simplify, imagine a light-emitting diode on one side and a photodetector on the other. In a microcontroller circuit, this would be an opto-isolator (http://en.wikipedia.org/wiki/Opto-isolator).

Of course, the higher layers of the protocol stack you're using need to support this sort of physical layer. It's typically used on very primitive, low-bitrate connections (mostly sensors, although I've seen it used in highly sensitive installations with SCADA equipment).

Bruce Schneier has a great piece on air gaps. I've included a link to it below.

https://www.schneier.com/blog/archives/2013/10/air_gaps.html


It's really not all that complicated.

You get a couple of these Ethernet to 100FX converters: http://antaira.com/products/media-converters/unmanaged-conve...

Then you hook them up with just one fibre strand (instead of the usual two). You obviously can't use TCP over that, since TCP requires a two-way connection, but UDP works just fine. You'll probably want to wrap your data in some error correcting code, too.


> You'll probably want to wrap your data in some error correcting code, too.

I've found that using hashes and sending duplicates of the data seems to weed out any transmission issues in a one-way system.
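
To make that concrete, a rough sketch of the framing (Python; the 4-byte sequence number and SHA-256 digest are just choices I've made for illustration, not any standard format). The sender transmits each frame twice; the receiver keeps whichever copy passes the digest check:

    import hashlib
    import struct

    def frame(seq, payload):
        # 4-byte sequence number + 32-byte SHA-256 digest + payload bytes.
        digest = hashlib.sha256(payload).digest()
        return struct.pack("!I", seq) + digest + payload

    def unframe(data):
        # Returns (seq, payload) if the digest checks out, otherwise None.
        if len(data) < 36:
            return None
        seq = struct.unpack("!I", data[:4])[0]
        digest, payload = data[4:36], data[36:]
        if hashlib.sha256(payload).digest() != digest:
            return None   # corrupted copy; hope its duplicate arrived intact
        return seq, payload

    # Sender side: push every frame across the one-way link twice, e.g.
    #   for _ in range(2):
    #       sock.sendto(frame(seq, payload), (DEST_HOST, DEST_PORT))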


Sure, but at a communication cost of 2N+k bits, where k is the hash size... Error-correcting codes allow much shorter error-free communication, with a lower probability of failure than your method.

Consider an N-bit stream of data, with a probability mu of any given bit being flipped. Then the probability of a single copy containing an error is gamma := 1-(1-mu)^N. Since you're sending the identical stream twice, the transmission is unrecoverable only when both copies are corrupted, which happens with probability gamma^2; the hash will tell you that a copy failed, but not how to correct it. Furthermore, there's a probability that your hash has a bit flipped somewhere, too...

An error-correcting code guarantees that if up to m symbols are flipped, the original message can be recovered exactly. A Reed-Solomon code can correct up to m errors while adding just 2m check symbols to the message length; even with a pretty conservative upper bound on the number of expected errors, N+2m should be way less than 2N+k.
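
To put rough numbers on it, a back-of-the-envelope sketch (Python; N, mu and m are made-up example values, and the ECC failure probability uses a Poisson approximation of the binomial):

    from math import exp, factorial

    N, mu, m = 10000, 1e-5, 8     # message bits, bit-flip probability, correctable errors

    gamma = 1 - (1 - mu) ** N     # P(a single copy contains at least one error): ~0.095
    p_duplicate = gamma ** 2      # both copies corrupted, so hash+duplicate fails: ~0.009

    # P(more than m bit flips), i.e. the coded scheme fails; Poisson approximation.
    lam = N * mu
    p_ecc = 1 - sum(exp(-lam) * lam ** k / factorial(k) for k in range(m + 1))

    print(gamma, p_duplicate, p_ecc)   # the last figure is vanishingly small

So the coded version adds roughly 2m check symbols instead of another N bits plus a hash, and with these example numbers it fails many orders of magnitude less often.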


I really don't understand why Schneier is so popular. The article in the link doesn't have a clear audience: some of it is written for complete security novices, other parts are for power users ('turn off autorun services'), and some fundamental parts are flat-out wrong for both audiences ('you can't set up a computer without connecting it to the internet') or misleading (implying a 'small' USB pendrive of only 1GB will help).

And if you want to 'leave no space', why fill up a disk with 'random files' rather than a single file that uses up all the space? If you're going to the lengths of encryption + airgap + cloud antivirus (!) + etc., you're a power user at this point, so why not just consume all the space rather than collecting files together?

And hell, if you're being that paranoid, then use the paranoid OS for your desktop: OpenBSD. Why go to all the effort in the article and not take the extra step of getting familiar with an OS that has an earned reputation for security, and that most exploit writers don't target? I mean, OpenBSD does OOo and PDFs as indicated in the article; what's this airgapped PC going to be doing that requires Windows in particular?

It's ironic that the author people turn to for commentary against 'security theatre' writes articles that amount to the same thing.


I love HN!

Thanks for all the useful responses. I always learn something new by reading the comments :)


How can that ever work? If you're using TCP/IP, you always need to receive an acknowledgement that what you sent actually got to the destination, right?

Or does raw fiber use different protocols?


You're right, TCP wouldn't work. UDP would work fine, though, as would anything lower level. You'd just continually send packets at a rate slow enough that there'd be no reason for the other side to drop them.

Imagine that the designers want to send information about the plane's expected arrival time to the in-flight entertainment system. You could send a packet once per minute, without any knowledge of whether or not anyone is even listening. If an update is missed, it doesn't matter since the information rapidly becomes stale.
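
The receiving side is just as simple: it listens, keeps the last value it heard, and treats a missed datagram as slightly stale data. A minimal sketch (Python; the port number and message format are invented for illustration):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))   # hypothetical port the avionics side sends to
    sock.settimeout(120)           # two missed once-a-minute updates = data is stale

    last_eta = None
    while True:
        try:
            data, _addr = sock.recvfrom(1024)
            last_eta = data.decode("ascii", "replace").strip()   # e.g. "ETA 17:42Z"
        except socket.timeout:
            pass   # nothing to ACK, nothing to resend; keep showing the last known ETA
        # ...refresh the seat-back display with last_eta here...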


The GP probably means an opto-isolator.

It obviously wouldn't work for carrying something like TCP/IP, but rather much lower-level signaling.

http://en.m.wikipedia.org/wiki/Opto-isolator


I was at a security trade show earlier this year, and some companies were marketing an "FTP diode": one-way FTP transfer.

I didn't have time to discuss it with them, but how a TCP-based protocol could be "one way" just does not make sense to me...


Maybe it's a variation of TFTP (which uses UDP) that doesn't ack the connection or the file? That would be rather amusing: broken TFTP sold as a security app.


I read about one of these devices once; the vendor info said it runs a small, "verified" kernel (probably seL4 or something like that) that does the TCP handling and takes care not to let any real information flow in the other direction. Apart from FTP, it supports SMTP, and probably others I've forgotten. It did not sound very convincing to me. Even if the kernel is totally secure, couldn't you still have a side channel via ACK timing?


This sounds an awful lot like a standard FTP server configured with a write-only directory.


Completely disconnected. It refers to the physical metaphor of having a gap of air (no wired connections) between critical systems (like the flight software) and other systems (like the entertainment system), although it's sometimes implemented via software means such as cryptographic tunneling rather than a physical gap.


>although it's sometimes implemented via software means such as cryptographic tunneling rather than a physical gap.

I've never heard the term "air gap" refer to something that is actually only segmented in software, crypto or no.


The Department of Defense requires air gaps between red and black[1] networks, but blackers[2] exist that can tunnel red traffic over black networks.

[1]https://en.wikipedia.org/wiki/Red/black_concept

[2]https://en.wikipedia.org/wiki/Blacker_(security)


It means they are not physically connected to the other systems; there should be no possible way to reach the avionics from, e.g., the in-flight entertainment system.


You can transfer files across an air gap using:

http://en.wikipedia.org/wiki/Sneakernet


You can try. Though sneakernetting your malicious payload into the cockpit while in-flight is not recommended.



Roughly, not connected in any way.


That makes me think of my local power plant, which just installed valves to remotely control gas pressure in the pipelines... I hope no one hacks them.


What I really HATE about things like this one is that United will not refund us (I was heavily affected yesterday), or will offer insultingly small amounts of dollars/miles as compensation.


Could be an issue similar to the recent electronic flight bag problem; it certainly reads that way from the article. They seem very quick to blame a hack when it could very easily be a bug in the system or an administrator error.

http://www.theguardian.com/technology/2015/apr/29/apple-ipad...


Have they actually officially blamed anything?


The initial descriptions sound more like someone pointed a testing tool at the wrong environment than a hack.


Not sure if it's related, but the United website was down this morning for a while as well.


Spoke to folks at another airline a couple of years back. Flight registration, SABRE, and aircraft maintenance were all managed through the same systems. I found that both surprising and unacceptably risky.


Well, if 9/11 is any indication, commercial jets can deliver pretty destructive payloads. Avi Rubin summarized some hacks in his TED talk[0], where hackers gain complete control of vehicles. It is unwise to spread FUD this early; however, if these systems bear any resemblance to cars (and it is likely that they share many of the same characteristics, i.e. digital control of key steering/speed/avionics functions), then it is possible someone has the information needed to control a fleet of missiles on American soil. Unlike 9/11, these people will not be the US government, and could be actual terrorists.

Who knows, maybe this is just a 16-year-old who got accosted going through security and wanted to blow off some steam.

[0] https://www.youtube.com/watch?v=BHHCvcCUOWU


Could this be related to the recent articles about a researcher taking control of a plane via the entertainment system?

http://www.wired.com/2015/05/feds-say-banned-researcher-comm...



