McLaren needs Compaq laptops with bespoke CA cards to maintain the remaining F1s (jalopnik.com)
227 points by mstats on May 1, 2016 | 150 comments



This is a glimpse of what we'll see much more of in the IoT future. And while someone will take care of a super-expensive car, the same is not necessarily true for your home control system, washing machine, or (insert other technology with a lifetime far beyond normal IT cycles), which may rely on an ancient app that no longer runs on any modern system.


Glimpse? It's already here: see what happened to Nest. The last episode of the Bad Voltage podcast basically features a long discussion on what IoT devices should be able to do when they lose the "I" for one reason or another.

Tbh, I think we'll eventually need laws: something saying "companies should guarantee serviceability and functioning of advanced devices for 10 years, rain or shine (bar nuclear holocaust)". For all the loathing this idea gets, it's the only way to tell the market to "not be stupid". Once businesses start planning for extended serviceability, they'll work on things like durable interfaces that can be easily implemented on newer platforms, offline modes, etc., which will be innovative and will greatly benefit consumers. As these sorts of systems become routine, the 10-year window will naturally extend itself.


> For all the loathing this idea gets, it's the only way to tell the market to "not be stupid".

No, the market will be just fine.

IoT is a fad. Much of it is just taking things that were simple and reliable and making them complex and flaky, in order to market as "smart" or "advanced" or "high tech".

Fire alarms that inexplicably run Linux. Thermostats that brick themselves one morning after an auto update. "Smart fridges" with a big touch screen that nobody will actually use. [1]

Buyers will get burned. IoT will lose its "wow" factor and all this will be seen, in hindsight, as comically unnecessary and embarrassingly tacky. Consumer Reports will give out a bunch of black circles. The market will correct.

[1]: https://twitter.com/internetofshit http://favstar.fm/users/internetofshit


As I posted in another comment, DLink has the right idea with their WiFi cameras.

They've got all the IoT goodness - an app that lets you view from anywhere, etc. But they also function perfectly if they're not allowed to access the internet, and they expose their video stream in a standard way. It's the best of both worlds.

I've set up my cameras on a WiFi network that can't talk to the outside world. If DLink goes out of business, everything will keep running as though nothing has changed.


This kind of marketing is what needs to be trademarked (a la WiFi Alliance).

Where one of the stipulations of using an IoT mark is clearly differentiating on the packaging what functionality is "unconnected" and what is "connected", and then having graceful degradation.

The larger problem is that the code behind most of these embedded devices is the most atrocious copy & paste & hack from StackOverflow just enough to get it working, then shipping.


I agree, this would be ideal. Knowing what the product does when the cloud service inevitably goes away is a good thing.

I get the impression that some part of DLink understands that this is a feature that some people want. Even the people who design the packaging have made vague references to not requiring the cloud service. It's not spelled out as clearly as I'd like, but that might come in time.

As for the code quality, I honestly have no idea. I've had a couple of them running for months on end serving an MJPEG stream to a computer running 'motion', only powered off when they needed to be moved around. They do what I need them to without needing any babysitting.


Telephones worked a lot better before the internet. They still haven't recovered.

I assume the same thing will happen here. Most people do not research. Reliability will fall and, while it may recover a bit, will not recover fully.


To the extent that telephones are worse now than they were before (I'm not actually sure what you're thinking of?), I'd attribute it first to the cratering demand for telephones.


On POTS, call quality was arguably better, calls didn't get dropped, a handset and physical dialpad are ergonomically better, and the system in a region was less likely to go down because of a power outage. There are many ways in which POTS is worse, but there is no inherent reason why the good things about it had to be lost.


Has any of that changed? The fact that people aren't using telephones doesn't mean they've gotten worse. What was better before the internet and is worse now?


I was mainly comparing landlines to cell phones and VOIP but even the traditional landlines may be a bit worse as they've gone from a switched to packetized infrastructure.


Well, comparing cell phones to landlines and calling it worse seems to miss the addition of a key feature: mobility. It's kinda like comparing a home kitchen to one in an airplane and finding the airplane version woeful and without any improvement over what the microwave brought us in the 50's, all while neglecting to mention that one version is able to fly...


At the moment, the traditional phone network largely runs on a circuit switched architecture. Different companies have different plans for it; some seem perfectly content with it the way it is, others can't get out of it fast enough.


Yes, there absolutely is: packet switching is vastly more efficient than circuit switching.

Almost all of the good things of the PSTN directly equate to circuit switching; almost everything that sucks about VoIP: packet switching.


I don't know, conference call capacity is now ubiquitous and in Japan 4G phones now work over VoIP allowing for much higher quality calls.

And landlines still work...


That's not true. I remember the first GSM networks in my country, from the nineties, along with available phone models, and they weren't that great.


He probably means landlines or maybe analog cell phones. VoIP and GSM aren't great.


"Buyers will get burned. IoT will lose its "wow" factor and all this will be seen, in hindsight, as comically unnecessary and embarrassingly tacky."

The tacky part is the most important one.

Circa 2000-2003, an LCD screen, or a flat panel TV, was somewhat interesting and exotic (and expensive). Now they are cheap and everywhere, and one's ability to distinguish oneself, or one's surroundings, is now based on not having a screen.

The Tesla 17" monitor that they use to fill up all the space they didn't design is, in my opinion, "peak display panel".


My Nest has saved me money and helped the power grid and the environment. I don't think that one is a good example of complexity for its own sake. Plus, using the Nest is just a lot nicer than the expensive pure garbage that came before it.

Security cameras, a remote deadbolt, door/window sensors you can remotely monitor (a lot cheaper than a subscription to ADT or whatever), these are all pretty nice as connected devices. I don't see going back.

I don't want to be throwing money down the drain either though. And the market will fail here. Most Nest users that got burned will buy another Nest. Of course you will because there's maybe one other thermostat at the home store that isn't garbage? And they still cost about as much.


> door/window sensors you can remotely monitor (a lot cheaper than a subscription to ADT or whatever)

ADT's value is that they continuously monitor the system status, contact you when it changes, and dispatch the police for true alarms. Unmonitored alarms have been a thing for decades.

> Of course you will because there's maybe one other thermostat at the home store that isn't garbage?

What is your definition of "garbage"? I have had no issues buying thermostats, either before or since the Nest.

> And they still cost about as much.

Huh? The Nest is $250. I never have, and probably never will, pay more than $100 for a thermostat, and usually pay less than $50.


I don't much care if police or fire is dispatched while we're out of the house. That's what insurance is for. I much prefer having the ability to monitor myself without an exorbitant monthly fee (I'm guessing it varies, but in my area it's typically about $100/month). It's mostly just to ensure I didn't drive off and leave the garage door open again anyways.

I bought several high-end 5 or 7 day programmable thermostats. Looking at Home Depot right now most of those are still pushing $200. And they're "Indiglo" pieces of junk that will be almost entirely illegible in a year's time IME.

The Honeywell Lyric for example can't turn on/off the fan on the device. You also can't schedule on the device. Need the app for that. And the app appears broken for a lot of users.

Nest just got a lot of things right. My only real complaint is the scheduling UX is kinda goofy. But it does work, and it's much quicker than any other I've tried. Otherwise it does everything I want and it's saved me money. That's a convenience and utility worth paying a bit more for to me.

This stuff mostly comes down to saving money to me. In the same way I buy my iPhone outright and spend $45/month for a 5Gb/unlimited-minutes phone plan, I like the idea of spending a bit more up-front to be more liquid later.

But back to thermostats. After reading the reviews this morning, unless you want to buy into a Kickstarter or something (with its own risks), the market still seems to be mostly "garbage" IMO. Maybe the Ecobee is a decent Nest alternative? I dunno. I've had my Nest for about 5 years now. It's paid for itself. Hopefully I don't have to find out what else is out there anytime soon.


> I don't much care if police or fire is dispatched while we're out of the house. That's what insurance is for. I much prefer having the ability to monitor myself without an exorbitant monthly fee (I'm guessing it varies, but in my area it's typically about $100/month). It's mostly just to ensure I didn't drive off and leave the garage door open again anyways.

I care, because it's the reason I have a security system in the first place. I don't care much for a system that simply tells me if I left a door open. Also, there are things insurance can't replace. I'd rather pay for the risk reduction of having a monitored system than have to go through losing a bunch of personally important but otherwise middling-value stuff again.

BTW, I only pay $58/month for full monitoring (burglar and fire), and could cut it to about $40/month if I shopped around.

> I bought several high-end 5 or 7 day programmable thermostats. Looking at Home Depot right now most of those are still pushing $200. And they're "Indiglo" pieces of junk that will be almost entirely illegible in a year's time IME.

I don't buy high-end thermostats. I put a $50 Honeywell (http://www.homedepot.com/p/Honeywell-5-2-Day-Programmable-Th...) in my mom's house and it has worked flawlessly for six years. I have the same basic thermostat in my own house, though that one is almost 20 years old. If it failed, I would get another one of those $50 Honeywells (or build my own). I'm not going to buy anything in Home Depot's "WiFi" or "Touch Screen" categories until those come way down in price.

> The Honeywell Lyric for example can't turn on/off the fan on the device. You also can't schedule on the device. Need the app for that. And the app appears broken for a lot of users.

The Lyric is a piece of shit and I would never buy it.

> But back to thermostats. After reading the reviews this morning, unless you want to buy into a Kickstarter or something (with its own risks), the market still seems to be mostly "garbage" IMO. Maybe the Ecobee is a decent Nest alternative? I dunno. I've had my Nest for about 5 years now. It's paid for itself. Hopefully I don't have to find out what else is out there anytime soon.

I think I understand your "garbage" comment now. You are only looking at the advanced, connected thermostats. In that case, I agree that everything in that segment except Nest is crap. Thing is, I don't care about that segment at all. Other than remote control I don't see anything any of those thermostats give me over standard 5-2 programmables that justifies the extra $200.


> I care

And that's cool for you. My MIL is the same way. Wouldn't live without a security system. I'm not one of those people so I don't want to pay those prices.

> BTW, I only pay $58/month for full monitoring (burglar and fire), and could cut it to about $40/month if I shopped around.

I live in a house in a relatively high-crime area of Dallas. It's not that cheap for me according to what neighbors say. But anything at all really is too much for me I suppose so eh.

> I don't buy high-end thermostats.

Ok... well, then in my poorly insulated 1970's home in Dallas I'd probably be spending an extra $50+ every month during the summer cooling the house when I don't need to. Thinking about it I suppose the value is highly variable on your home and location.

Yes, who doesn't love the old dial thermostats? That's some of the charm of the Nest's design I suppose. But efficiency isn't their strong suit. And I like being able to adjust the thermostat in the dark on my way to bed without turning on the lights.

I don't find much value in doing anything online however. I'd be perfectly happy with a Nest that only did the scheduling and motion sensing without any of the "OMG you can turn on the heat on your way home from work!" stuff. I don't mind being less than optimally comfortable for an hour or so while my A/C warms/cools the house once we get home. If Nest started charging for it I certainly wouldn't pay for it.

But that's just my experience.


> Unmonitored alarms have been a thing for decades.

Not in quite the same way, as people haven't been carrying around constantly connected mobile devices they could be alerted on. (You can't remotely view a camera on a pager.)


I've got a handful of DLink WiFi cameras that, in my mind, are almost the perfect way to do IoT.

They've got an app and can do all their stuff through the cloud, but I have them running on a separate WiFi network that is unable to access the internet. The cameras don't actually need that access, I just lose the cloud features.

This is, for me anyway, a really good way to do it. Consumers that want the stuff that the cloud gives them (easy remote monitoring, etc) can have it. Consumers like me that would rather roll their own recording system and access the cameras by VPN can do that too. This is possible because the cameras serve up a MJPEG stream at a specific URL, in addition to however they communicate with the cloud stuff.

If DLink does go out of business (I hope they don't because I'm going to buy more of these cameras), all of my stuff will just keep working. I'd love to see more IoT stuff that works this way.
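Those MJPEG streams are just multipart HTTP responses, which is why rolling your own recorder is feasible. A minimal sketch of splitting such a stream into individual JPEG frames, with the boundary name and payload bytes made up for illustration (a real camera's URL and boundary will differ):

```python
def extract_frames(data: bytes, boundary: bytes) -> list:
    """Split a multipart MJPEG byte stream into individual JPEG frames."""
    frames = []
    for part in data.split(b"--" + boundary):
        # Each part is: headers, a blank line, then the JPEG payload.
        header_end = part.find(b"\r\n\r\n")
        if header_end == -1:
            continue
        payload = part[header_end + 4:].strip(b"\r\n-")
        # Real JPEG data starts with the SOI marker FF D8.
        if payload.startswith(b"\xff\xd8"):
            frames.append(payload)
    return frames

# Demo with a synthetic two-frame stream held entirely in memory.
jpeg = b"\xff\xd8fake-jpeg-bytes\xff\xd9"
stream = (
    b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg +
    b"\r\n--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg +
    b"\r\n--frame--\r\n"
)
assert extract_frames(stream, b"frame") == [jpeg, jpeg]
```

With a real camera you'd read the HTTP response incrementally and feed chunks through a parser like this; tools such as 'motion' (mentioned above) do essentially the same thing for you.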


Similarly, my house has two Honeywell networked thermostats. These are networked devices controllable via smartphone or web app, etc. But if the company stopped providing the service behind it, they're still perfectly adequate programmable thermostats. They'll keep on working fine, albeit requiring manual programming, rather than turn into paperweights.

I fail to see the excitement about the Nest, and I claim that yours hasn't saved you any money compared to mine, even adjusted for effort involved. The time it took me to set up my device - when I actually know what our work/home schedule is, so no training is involved - is minimal. And there's no confusion in the device when, at this time of year, I want to just use outside air through open windows to keep my home cool. A device that can leverage the human mind's power is great. A device that simply replaces one flavor of (trivial) mental effort with a different one isn't terribly valuable.


I agree with your assessment of the Nest. The number of times I've wanted to turn the heat on or off when I'm not at home is exactly zero.

I've got a manually-programmable Honeywell and it's been working fine for over five years. Apart from the very occasional adjustments (daylight saving time, resetting the furnace filter reminder, and replacing batteries) it needs very little care and feeding.


So... you prefer "Internet" of things that do not connect to the Internet?

(go ahead and downvote - just could not resist)


Yes, many of us would like the "I" in "IoT" to stand for "intranet". Cloud dependence and routing all that data through the Internet is insecure, wasteful, and completely idiotic from an engineering standpoint. It's done because companies are testing a new way of extracting money from a non-tech-savvy but tech-liking population.


Ironically, in some cases that's exactly what I want! But what it really boils down to is that the Internet piece should be optional and the device should still function (perhaps in a reduced capacity) if whatever it's accessing on the Internet goes away.


Absolutely!

There are many useful things I might want to do privately on an isolated network, that are not really compelling if I have to send everything to a third party with a 10 page privacy policy agreement.


I want them connected to my personal network. If they connect to the Internet, it'll be through the gateway I wrote.


Actually, I would not mind a fridge with a built-in RFID reader that can figure out its contents and dump me a list when I stand there wondering what to buy for the evening.


That's the sort of thing that will seem stupid right up until the point where it works great with no effort.

I feel like a lot of IOT stuff is like that.


I don't know about everyone else but I don't always want the same things in my fridge other than a few standards like milk, eggs, butter. And I can see at a glance if I need those. Everything else I buy more on the basis of what I feel like eating that night or the next couple of days.


Yeah, I'm not that enthusiastic about it. But it brings up interesting things like a system showing recipe ideas you can make without going to the store or whatever. Which maybe that is stupid too, but there are lots of possibilities. So in some sense, the 'works great' up there has to include a compelling use case.


And it doesn't need the "Internet" part to work at all. Just show me the list, or copy it to my phone over bluetooth or a local network.


Yeah, it would require that all products that today have a barcode come with an RFID tag, so it's not something that would happen overnight.


And you would need a way to reliably read an NFC tag in an ocean of other tags.


And you'd need the tags to be embarrassingly cheap. If you want your box of cereal to have an RFID tag, the RFID tag has to be a fraction of the price of a piece of cardboard.


I think you're dead wrong on IoT being a fad, but I agree on the popular consumer examples you've provided. There are some gems out there now, but most of the rest are worthless without the internet. These products will wind up getting skipped over eventually, and companies will begin making their products more robust in order to maintain a presence in the market.

As (home) automation protocols settle down, devices will not need the Internet directly themselves, they'll communicate exclusively through a hub (like my house and Z-Wave, at the moment). These devices won't/don't need the internet to perform their basic functions, but you might lose some advanced functions without it, and that should be okay.

The only IoT devices I have in my house that actually talk to the Internet are my home automation hub (for the app on my phone) and the Nest Cams I've got. Had I not gotten them for free I would be using something else not directly connected. I didn't perform some wizardry to make this happen, I know the space and learned what was available and how to piece it together. I bought half the stuff from Lowe's/Home Depot and the other half from Amazon. This is all consumer grade stuff, nothing fancy. When the internet goes out my whole house still works, but I lose fancy features like chained events (or scenes) and I can't use my phone to control the house. Everything else works just like the house should.

That is where we will get to for everyone.


I like to think of it in terms of garage door openers.

Very few people will retrofit their garage door opener to be connected for $50.

Almost everyone will have a connected garage door opener when there aren't any being sold that do not have connectivity.


There have been burglaries in my neighborhood. The thief does a drive by with one code and sees which door will open. Then waits a few days to revisit the garage to steal bikes.

I wonder what would be more profitable: Stealing the bike? Or: leaving a note that links to a personalized sales pitch, including a vid of their door being opened?


Yeah, but that's a problem even without an internet connected opener.

For example, my leftmost garage door opener happened to get onto the same frequency/code cycle as the opener of the neighbor across the cul-de-sac about a year ago, and their opener opens/closes my door EVERY TIME. Mine is an old Craftsman piece of crap, but it's the door I never use, so I just leave it unplugged rather than trying to actually fix the problem.

If instead of RF with half assed code hopping we could use more secure tokens through a gateway device that introduced very little lag, we'd have way more secure garage doors than what we have now.

Unfortunately those Internet module additions to garage doors are just adding weak Internet security to weak garage door security.
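For what "more secure tokens" could look like: below is a minimal challenge-response sketch using an HMAC over a fresh random nonce, which defeats both replay attacks and the fixed-code drive-by scanning described above. This is purely illustrative (the function names and message format are made up, not any real opener's protocol):

```python
import hashlib
import hmac
import secrets

# Shared secret provisioned once, when the remote is paired with the opener.
SECRET = secrets.token_bytes(32)

def remote_response(secret: bytes, challenge: bytes) -> bytes:
    """The remote proves knowledge of the secret without ever transmitting it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def opener_accepts(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The opener recomputes the expected MAC and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The opener issues a fresh random challenge on every attempt, so a
# recorded exchange is useless to a thief doing a drive-by.
challenge = secrets.token_bytes(16)
assert opener_accepts(SECRET, challenge, remote_response(SECRET, challenge))
assert not opener_accepts(SECRET, challenge, b"\x00" * 32)
```

A low-latency local gateway could run exactly this exchange over any radio link; no internet round-trip is required, which is the point of the comment above.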


The European Union already has these laws, the goods must be fit for purpose for a "reasonable" duration. In the UK the maximum is 6 years, but it depends on the item and how much it is used.

However, up to now most things break because they wear out, or from a manufacturing fault. That's quite different from the manufacturer switching off some supporting servers, since most people would expect most old devices to work for longer than the minimum.


I recall the Norwegian consumer agency and Nokia were in a row back around 2000 about the length of mandatory warranty on mobile phones. Nokia wanted it to be 2 years, the same as most electronics, but the agency wanted a couple more years added, or some such.


That doesn't improve the accessibility of the devices, which is part of the problem. Access is a feature that depends on the ability of the consumer and is therefore often not considered, but it is also not promoted by producers. Simply legislating that products shouldn't fail doesn't improve the authority of the consumer much, now does it? That's carte-blanche legislation and is in theory not viable. 10 years seems like a rather arbitrary rule, anyhow.

You are probably not actually a lawyer, neither am I. I could still agree with the desire for intervention when the majorities on both sides of the problem are unreliable.


>For all the loathing this idea gets, it's the only way to tell the market to "not be stupid"

Ah, the old "stay out of the business of the things I like, and regulate the things I don't" argument.

>greatly benefit consumers

This is why the "market" will innovate. If these companies don't provide benefit to you, or you don't like their practices, don't buy the product. There's no need for government to step in to write a bunch of arbitrary rules.


The free market is going to work everything out. That's why we don't need to regulate child labor/family medical leave/net neutrality... I could go on. History proves this argument a fallacy.


>The free market is going to work everything out

Are you actually trying to tell me that in places where, say, child labor is a problem they have free markets? That free markets caused child labor issues?

Show me anywhere there's remotely close to a free market, as opposed to massive government meddling and coercion, and we'll determine how well off the people are.

>History proves this argument a fallacy.

You have a bizarre view of history. I look back and see your vaunted governmental powers causing tremendous human suffering.


Yes, because standard business practice of fucking your customers over to the extent allowed by law is of course a result of regulations, and if governments didn't interfere then companies would not be fucking people over all the time.

Think about it. You probably owe your life to regulations ensuring your food is edible, and your medicine isn't poison.


>Think about it. You probably owe your life to regulations ensuring your food is edible, and your medicine isn't poison.

This is just laughable. Yeah, nameless, faceless companies who earn profit from consumers eating and drinking their goods are just going to poison everyone. It's not like they are composed of actual people with friends and families too, right? They want to kill everyone.

Again, show me the greatest historical body-counts being business related, not governmental or politically motivated.


The usual example is Bhopal: https://en.wikipedia.org/wiki/Bhopal_disaster

For food poisoning, see https://en.wikipedia.org/wiki/2008_Chinese_milk_scandal : "A [WHO] spokesman said the scale of the problem proved it was "clearly not an isolated accident, [but] a large-scale intentional activity to deceive consumers for simple, basic, short-term profits."

It's worth looking at the various incidents in the US leading up to the establishment of the FDA, such as https://en.wikipedia.org/wiki/United_States_Army_beef_scanda...


According to what you linked there, the factory of record was 49% owned by government entities. Is that the "free market" at work?

Some businesses have done bad things. Many more do good things. Not sure what the point is? Do some bad accidents mean the free market doesn't work?

Meanwhile, across the way, HNers are screaming about the Brazilian government enforcing laws regarding WhatsApp...


Even worse, even when the hardware is all still perfectly functional the devices will stop working as soon as the company that sold them gets acqui-hired and shuts down the web services they use.


You can probably just look at smartphones that drop out of support almost as fast as they ship to get a feel for where IoT will end up.


I was going to say the same thing. Apple supports iPhones for about 3-4 years and that's considered exceptional in the industry. People own cars and appliances for decades.


Well cars and certain appliances used to be fairly mechanical devices, and thus easily maintained with hand tools and a bit of effort.

My dad used to love working on cars until they got CAN and all that, and he maintained their washing machine for nearly 3 decades, until the frame rusted apart.

The latter he could do because it was a simple construction of two drums, a couple of engines, and a simple analog control unit.


Cars at least are incredibly more reliable now than they were just twenty years ago, not to mention twenty years before that, and go back another twenty years and you have the heyday of "planned obsolescence".


Don't single out McLaren. Countless much more important systems can only be accessed through ancient tech. The Stealth bomber (the B-2) is a product of the same technological era as the F1. Would anyone here be surprised if the news ran a story about them being grounded because the one last laptop capable of talking to their systems finally gave up the DOS ghost?


I was not surprised when the news ran a story[0] around two years ago about the B-2 bombers getting computer soft- and hardware upgrades.

> “We’re re-hosting the flight management control processors, the brains of the airplane, onto a much more capable integrated processing unit. We’re laying in some new fiber optic cable as opposed to the mix bus cable we are using right now. The B-2’s computers from the 80s are getting maxed out and overloaded with data,” Single said.[0]

[0] http://www.dodbuzz.com/2014/06/25/b-2-bomber-set-to-receive-...


Ya, but what's the air conditioner running on? Flight management is the shiny end of things. How does the mechanic on the ground talk to the hydraulic system to perform a check?


There is an entire programming language called ATLAS that still runs on a lot of different computers. I assume the hydraulics could be tested with that.


If only the writers [0] of Silicon Valley had experience with the software for ground support equipment on aircraft like the F/A-18.

Actually, most of the F-22s still use i960MX [1] processors and I've heard funny stories about the techs raiding old laser printers to find spare chips for the ground diagnostic equipment.

[0] http://www.wired.com/2014/04/mike-judge-silicon-valley/

[1] http://www.militaryaerospace.com/articles/print/volume-12/is...


I guess I'd expect that it could be virtualized. Or, at the very least, that the hardware-specific API was somewhat-generalized or at least generalizable, so that it could be updated or used via a VM with a similar (even if dated) piece of hardware. Is it a lack of familiarity and developers for the code, or is it just not well-engineered?

At those dollar amounts, it seems like any proven-solved "problem" is solvable again, likely "easier" due to the technology improvements...


In those days, a proper API would have been a .sys driver loaded in config.sys, or a TSR (terminate and stay resident) program leaving some routines in memory, hooking a software interrupt as the entry point.

The (even then) ancient laboratory equipment I used in the 2000s, and often moved to more modern machines, never did this. It was all monolithic applications directly talking to parallel ports (port read/write to 0x378) or to memory mapped on the ISA bus. On the other hand, most stuff came with schematics, and from this most of the logic (short of a few PALs with blindingly obvious functionality) could be deduced. It all was quite simple stuff.

But as long as hardware is still available (as are the Compaq computers in the article), investing the time isn't worth it. If I had to do this today, I'd probably run the applications in DOSBox, add an emulation of the old hardware to the emulator, and pass the commands via USB or network to an FTDI USB-to-something chip, or to a BeagleBone/Raspberry/... with its GPIOs connected to the real hardware (mind the 5V levels).
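To make the DOSBox idea concrete: the emulator's port I/O hook would intercept writes the old application makes to the parallel port at 0x378 and forward them to whatever modern transport drives the real hardware. A toy sketch of such a shim follows; the class and method names are invented for illustration, not DOSBox's actual internal API:

```python
class ParallelPortShim:
    """Forwards emulated writes to the LPT1 data port (0x378) to a
    real-world transport (FTDI chip, GPIO pins, a network socket...)."""

    LPT1_DATA = 0x378

    def __init__(self, transport):
        # transport: any callable that accepts one byte value
        self.transport = transport

    def port_write(self, port: int, value: int) -> None:
        if port == self.LPT1_DATA:
            self.transport(value & 0xFF)
        # Writes to the status (0x379) and control (0x37A) ports
        # would be handled here in a fuller implementation.

# Demo: capture forwarded bytes in a list instead of real hardware.
sent = []
shim = ParallelPortShim(sent.append)
shim.port_write(0x378, 0x41)  # data port write - forwarded
shim.port_write(0x379, 0x00)  # status port write - ignored by this sketch
assert sent == [0x41]
```

The transport callable is the only piece that changes between an FTDI-based setup and a BeagleBone/Raspberry GPIO one, which is what makes this approach attractive.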

Unrelated to software running on old PCs, here's a guy who replaces the old CPU in the lab equipment with an embedded board running an emulated version of the original software:

http://www.jks.com/5370/5370.html


Ha!

This stuff is old enough it might need real mode with the A20 gate. Intel, in a rare fit of backwards incompatibility, removed the A20 gate a while back. EPT might be able to emulate it, but that assumes a hypervisor author tried.

And how are you going to plug that bespoke adapter into a modern system? CardBus? EISA? Who knows?


Yeah, I recall reading about some particle sensor at a university lab that was running on some ancient DOS beast because anything newer would throw some timings off.

Never mind that the manufacturer was long gone, new sensor would be expensive, and the lab would have to suspend all experiments up to a year while running a series of baseline stuff to align old and new results.


I had to buy a very old laptop because the software for configuring Motorola Sabers depended on the CPU for timing communication with the radio during the flash process. People have bricked their radios trying to emulate old clock speeds. I'd hope that they'd have a more robust protocol in expensive aerospace platforms, but I wouldn't be surprised if that wasn't the case.


Do you seriously think virtualisation was a thing in the 1980s when the B-2 was being designed and built? It wasn't even a thing when the F-35 was being designed and initially built.

It will be single-threaded apps running natively and talking directly to various bus adapters.

The most advanced processor available to them was an M68020, maybe. Multi-tasking was still hard; Linux and Windows 3.0 were still a decade away when the systems engineers were designing the B-2.

And that's assuming they were able to change the hardware platform in the 1980s - the designs would have been drawn up in the 1970s.

There is absolutely no way on Earth that anybody would have thought "what we need here is a VM". I'd be amazed if it even had a Hardware Abstraction Layer, and that had a bit of historical form, albeit not in real-time operating systems yet.


It certainly was a thing in the '80s; IBM was selling it in the late '60s. [0]

I don't know how difficult it would have been to put in an airframe though... I'm guessing: very.

[0] https://en.wikipedia.org/wiki/Virtual_machines#History


The first commercial computers were not PCs. You sound like you don't know anything about the history of computing. Almost every system technology we use was invented in the 60s and 70s.


You think they were sticking mainframes into the B-2?


The '020 was Popek-Goldberg compliant at least! Good chip.


You should write an article about the B-2, I think that would be great.


Why can't such systems use emulators?


Because you can't emulate the physical interface, you need to build an actual one. And if it has some custom ASIC in it, you're SOL.


Yeah, but it's from 1992. I wouldn't be surprised if it could be bit-banged with an Arduino.


But you'd have to know what it does for all possible states.

Not impossible but non-trivial.

They are the kind of people that could have an ASIC ground down and reverse engineered. Non-trivial.

They have extremely skilled engineers and lots of money.


Well, given that this is a connector from 1992, I have strong doubts that it's anything complicated.

And you only have to know the states/transformations being used. Easily scoped.


Whodathunkit? "CA card" isn't exactly a Google-friendly phrase, even with the quotes. Which means this article doesn't say much.


From the comments: "Conditional Access - basically a 'dongle' card that responds to a security challenge with an appropriate response."

So the horrors of DRM strike again.


Interesting. I had a consulting gig for a client with a similar problem (though his dongle was LPT-based and wouldn't talk to anything but an actual original hardware port). Extremely specialist software system core to the business, protected by a hardware dongle. Permanent license, but the vendor went away. As did the dongle OEM.

The extent of their problem in replacing the current system is really determined by just how bespoke the dongle is. I had the luxury of working with a Windows-based stack for the dependent software, albeit 16-bit. Not quite sure how I'd start on DOS; probably time to crank out SoftICE.

First step is to hook into the bitstream between SW and HW (write a filter driver on Windows). It's likely encrypted, so one must RE the routines out of the program disassembly and put them in the filter driver (or whatever the DOS-equivalent hack is). Thanks to __asm (no longer allowed in the Win kernel?) in the MS compiler, this may be slightly easier than you'd imagine.

Once you're looking at the raw challenge/response, you're back to tracing through the disassembly in the debugger to find the places where the response is checked. If the system in use is one that was widespread, most likely the challenge/response system was implemented in a predictable way (after the code was written, often not by the original programmers, usually following step-by-step guides from the dongle OEM), which in some circumstances makes it possible to emulate a sufficient portion of the hardware without knowing any more about it than how it responds to a small subset of challenges.

This is 'easy mode', and it only takes a tiny bump in complexity to ramp it up to 'you'll be lucky' territory. Completely custom hardware and/or non-extant vendors will do it.

People have mentioned decapping the chips, but you don't do this to legacy gear that is in your core business path. One of the 'fun' parts of these kinds of gigs is that the client will likely only allow you limited access to the hardware, as their business relies on continued access.


That was my thinking too. Chances are, the dongle is not some kind of pinnacle of crypto. People were emulating Aladdin HASP dongles for ages, for example, and those were quite tough. I doubt that something done for DOS would be much harder.


Yup, this is precisely how the HASP emulators work. You plug them in and they install a filter driver that sits between the app and the driver and listens to all the challenge/response pairs. Once you're done, the filter driver can just play back the responses and you can remove the dongle. This technique works because the dongle basically works like this [0]:

  dongle(challenge:UInt8[4]) -> Response:UInt8[4] {
    return do_some_sekret_bit_shifts(challenge)
  }
When you implement the protection scheme, you are advised by the manual to use a small number of hard-coded challenges for which you know the response ahead of time, which is what most people did. No API was provided to compute challenge responses without sending them to the dongle (that would reveal the sekret sauce[1]). There is of course no reason that you couldn't just pull the 4 LSBs off the clock every now and then, submit them to the dongle, and store the result to be used later, which defeats this type of emulation, as the C/R pairs vary across sessions. For some reason most implementations didn't. I guess programmers don't like dongles much more than users do.
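A toy version of that record-and-replay scheme, just to show the shape of it (the pair values here are made up):

```python
class RecordReplayDongle:
    """Record challenge/response pairs while the real dongle is
    attached, then answer from the table once it's removed. This only
    works when the protected app reuses a fixed set of hard-coded
    challenges -- clock-derived challenges defeat it."""

    def __init__(self):
        self._pairs = {}

    def record(self, challenge, response):
        # Called by the 'filter driver' while the real dongle answers.
        self._pairs[bytes(challenge)] = bytes(response)

    def replay(self, challenge):
        try:
            return self._pairs[bytes(challenge)]
        except KeyError:
            # A challenge we never observed: the case that breaks
            # pure record/replay emulation.
            raise LookupError("unseen challenge") from None

emu = RecordReplayDongle()
emu.record(b"\x01\x02\x03\x04", b"\xaa\xbb\xcc\xdd")
```

Once the table covers every challenge the app ever sends, the dongle can stay in the drawer.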

There are some families of dongles (and some implementation patterns) that are way more complicated than this, but essentially the threat model they are designed for is 'people who don't have a dongle'.

[0] I might be misremembering the size, but this was definitely a common pattern for several families of dongle.

[1] IIRC there were a few models of some lines for which the implementation of 'do_some_sekret_bit_shifts' was known or discovered enabling full emulation, but I can't recall which ones off the top of my head.


From the Jalopnik comments[1] it appears to be the docking port which connected to an optional "Automobile Adapter" #4 in this image[2].

[1] http://jalopnik.com/it-s-a-completely-proprietary-interface-...

[2] https://i.kinja-img.com/gawker-media/image/upload/xazddfqwsv...


While I don't doubt that the dock connector is in play, the "automobile adapter" is a charging accessory (per pages 12 and 13 of the manual: http://www.elhvb.com/mobokive/edwin/laptops/Compaq/Compaq%20...).


Aahh. Good info. But, yeah some sort of proprietary connector/accessory.


The "Automobile Adapter" indeed is only a switching power supply. I still own mine, with the proprietary Compaq connector chopped off. It's lying in my junk drawer, and inside is just a boring switched step-up converter.

It could connect to the docking station just as easily as to the laptop itself; the connectors are identical on both.


Totally agree, to the point that the Arduino is running a "bespoke" MCU... I wonder if offering it in an artisanal mason jar would improve the appeal.


If it ain't broke, don't fix it. Especially when the computers are relatively cheap compared to the bespoke hardware. My favourite example of this was the school district which used an Amiga to control the district's HVAC systems. Replacing parts occasionally on a 30-year-old computer is far cheaper than an entirely new and unproven system.

http://hackaday.com/2015/07/23/this-little-amiga-still-runs-...


But it is broken. This is not about rewriting a working server component in Rust just because or chasing the latest Javascript framework-du-jour.

What will it cost McLaren to renege on their maintenance obligations on those cars if one of these laptops is dropped or broken or stolen or just bites the dust (they are "getting less and less reliable") and they can't source a working one?


The article does state that they are working on an interface compatible with modern laptops.


Especially as all but the floppy is solid state.

I think they were running into problems with the radio modem rather than the computer, btw.


Solid state doesn't mean that it won't wear out. Electromigration will eventually render ICs unreliable. To be fair, that's less of an issue at the process sizes of 90s semiconductors - but it still happens.


From an IT perspective, it's nice to see that McLaren still continues to provide support for this old consumer product with only a hundred running installations. They could have just issued an end-of-life statement and asked customers to upgrade to a newer version.


Yeah, but that's one hundred installations worth $10M+ each. It's worth being in the business of supporting them when repair bills can top one million dollars.

http://jalopnik.com/5982805/rowan-mr-bean-atkinsons-insuranc...


Imagine their service fees for a car with a present-day market value of $13 million.


I have a friend who owns a 2007 BMW 745. The car is an electronics nightmare and needs BMW's service software to keep it running properly. For example, installing a new battery requires a system reset through the service software. Problem is that the software runs on an old laptop that dual-boots into a custom image of Windows XP or some Linux-based system. The machine needs an RS-232 port, which is not rare in itself yet, but you also need a special OBD-II to Ethernet to RS-232 adapter. The maintenance software is a nightmare to work with and requires constant restarts to get the network connection to close in order to run some other test. You can get the computer on eBay with the software and additional hardware, but they are getting rare and old.


You are probably referring to the BMW INPA software, which can be executed on any Windows 7 machine. It is hard to set up, which is why the usual solution is to just use a complete image of an OS (with the necessary software already installed) inside of a VM.

I personally use it all the time. The only issue I've encountered so far is that I need to use older versions of the RS232 to USB driver. The current ones don't work.


Not only INPA, but the whole suite of programs for the 745.


I don't know about this one. I was able to do some hacking on my 2015 M3 using just my Apple laptop running Boot Camp Windows, an OBD-II adapter (found on eBay), and some bootleg proprietary BMW software. Are you saying it requires more than this?


Different cars and software requirements. :)


A while ago I met someone with a side business in maintaining the Aston Martin Lagonda's high-tech dashboard. It's a set of CRTs driven by proprietary 70s 6v logic, and he claimed to be the only person still in business who knew how it worked.


Given they are widely considered to never have really worked, and they almost bankrupted AM, I suspect he may have just re-engineered them somewhat.


Sorry for being picky, but is there any context where the word "bespoke" provides more information than "custom"?


Custom implies customised - a standard product which is modified.

Bespoke implies, especially around tailoring, something created from scratch.


> Custom implies customised - a standard product which is modified.

No, not really:

> 1. Created under particular specifications, specially to fit one's needs: specialized, unique, custom-made

> 2. Own, personal, not standard or premade

https://en.wiktionary.org/wiki/custom


Glad I'm not the only one who had trouble with the word.

It looks like the word only ever applied to custom-made clothing. Yet somehow it was reappropriated to apply to computer hardware/software as well. Somehow this is an improvement over just saying "custom" -- a word everyone already understands :/


There's a difference.

A bespoke suit is made from scratch to specification or set of measurements. A custom guitar will be a variation on a production model.

"Bespoke" is more custom than custom.


No.

And to make things worse, any phrase containing the word "bespoke" just sounds grammatically incorrect. "Bespoke" is the past tense of the verb "bespeak", but that means the participle should be "bespoken". The phrase ought to be "a bespoken CA card", not "a bespoke CA card", in exactly the same way that one would write "a broken CA card", not "a broke CA card". But unfortunately, people insist on this ungrammatical-sounding word because they think it sounds fancy.


In my opinion, only as a dad joke referring to some custom bike spokes.


Geez, tough crowd...


> but MSO’s team understands that they can only remain the most desirable modern supercars ever made if they work on keeping them functional, drivable and just as fast as they were back in 1992.

JavaScript frameworks could learn something from this. In the frontend web development world, backward compatibility and stability are severely underrated.


> JavaScript frameworks could learn something from this. In the frontend web development world, backward compatibility and stability are severely underrated.

I'll play the devil's advocate for the HN's favorite whipping boy: anyone's decision to keep up with the latest and greatest JS libraries is entirely voluntary. It's like complaining that fashion magazines keep declaring new "colors for the season" every summer when you can keep wearing jeans and white tees from 4 years back.

The "web development world" is fashion driven (aesthetically), and currently "flat" is in. A site that looks like it's from the 90's is a signal that the site is stale (and hence less current), or sometimes an overt signal by owner to say "we don't care about aesthetics, we provide overwhelming value in other areas" (HN and craigslist come to mind). The web is immature compared to automobiles: the standards are always evolving to keep up with real-world usage. 'border-radius' is an improvement on background images from the 90's (which still work, btw).

Browsers cannot be accused of not being backward compatible - that is why websites from 1992 still render correctly in modern browsers, sans <blink> (thank goodness)


Sorry since this is not exactly "on-topic", but wanted to share my favourite video about the F1 "Ferrari Enzo versus Mclaren F1 - Fifth Gear" https://www.youtube.com/watch?v=2kLlmxUAB5A

I absolutely love this car. Definitely built for a purpose, not much hand-holding, just the sight of this manual switchboard, the position of the seat, the minimalistic dashboard, ... <3


Maybe not exactly on-topic, but this is the kind of digression we welcome here: specific and intellectually curious.

It's only the generic, predictable tangents (typically flamewars) that we try to weed out. Tangents that go someplace interesting are fine, and weirder is usually better.


What does "CA" mean in this context?


Conditional Access


Our control system at work needs a DOS program to perform diagnostics on the modems controlling communication to our subsea wells. (System made in the mid 90s).

Now when we service these modems, the OEM vendor comes with DOS running in a VM on a normal PC. When you know what we rent this PC for (a few $K per month), I just can't help but laugh. This PC was also not available for purchase from the OEM.

#oil


Next time you rent it, copy the VM. Problem solved.


Yeah, that's obvious, but we (the operator) aren't legally qualified or allowed to touch this software or communications network. Only the OEM is allowed to use it. So even if we had the VM, we would legally not be able to use it. We rent the ability for an OEM technician to use the laptop when we send the technician to our site. An added irony is that the protocol is just serial comms over a standard RS-232 port.

But then on the other hand, you type the wrong thing and you can lose communication with a non-trivial amount of a country's oil production (2.5%).


I think we have expectations that hi-tech cars from the likes of McLaren are 'hi-tech' through and through - they are not!!!

The McLaren F1 is a car for the track, the few examples that exist do go out and race. Over a race weekend I imagine the car is taken apart and put back together again in a multitude of ways, e.g. wheels taken off and different ones put on. Note how those wheels are held on with just the one big bolt that has to be tightened massively. That is not 'hi-tech', that is using the appropriate race-grade technology for the job.

I have only stared into the bowels of a McLaren F1 once, but I bet that beyond the gold there are lots of things held together with nuts, bolts and clips that look crude compared to bicycle technology with bearings that really are cruder than on a bicycle. Yet these parts can be swapped in and out and adjusted easily.

My point being that high-end race cars are not entirely high tech, under the hood there is stuff that is 'bits of bent tin'.


The F1 was never designed to race. It was built as a road car from the outset.

The GTR (racing) programme was spurred on by a customer, subsequent to the release of the road-going F1.


A lot of banks still run mainly on FORTRAN or COBOL. I guess it's "if it ain't broke, don't fix it" attitude.


I've worked at a financial place that ran COBOL for back-end services. Worked great. Maintenance sucked. But it was pretty impressive.


Financial institutions also advertise job openings for "senior COBOL developers". They don't bother to look for "junior" ones.


Serious question - why isn't this stuff, including today's manufacturing, using easily interchangeable modules with standard interfaces? Why doesn't everything just fit into a single box that can then be replaced with box v2 in a few years, adding more features and performance and removing this maintenance nightmare?


https://en.wikipedia.org/wiki/On-board_diagnostics OBD-II was made mandatory shortly after this car was built for exactly this reason: manufacturers wanted to maintain their lock-in, so in the interest of the customer it had to be mandated by regulation.
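And the standardised side of OBD-II really is simple to work with by hand. For example, two of the well-known SAE J1979 mode 01 decoders (engine RPM, PID 0x0C, and vehicle speed, PID 0x0D) are just:

```python
def decode_rpm(a, b):
    """Mode 01, PID 0x0C: the two data bytes A and B give
    RPM = ((256 * A) + B) / 4, per SAE J1979."""
    return ((a << 8) | b) / 4

def decode_speed(a):
    """Mode 01, PID 0x0D: vehicle speed is the single byte A, in km/h."""
    return a
```

Any $10 adapter plus a decoder table like this gets you basic diagnostics on any post-mandate car — exactly the interoperability the proprietary pre-OBD-II interfaces lacked.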


Sure, but that's more industry wide.

Why don't manufacturers just do this themselves so that they can easily replace their own components in the future?


Did you type this from a macbook?


No. I'm not sure what your point is but if you're talking about macbook components, they can still be changed out.

Regardless, the macbook itself should/would be the replaceable module since it comes with standard interfaces and is just a computing unit.


It sounds like McLaren didn't get very good documentation from whoever designed the interface and management software (and now the information may be lost). Spending more money on that documentation up front would have likely left them in a lot better situation today.


"We'd like documentation on this proprietary interface. How much does that cost?"

"No."

"What do you mean?"

"That's our proprietary IP and not for sale."


I've been in this situation. You arrange for a law firm to hold a copy of the code in escrow, to be released to the customer only if the company ceases to exist. Pointless, of course, IMHO, since they'd also need a complete replica of your build environment etc., but many customers insisted on it anyway, and we billed them for it. Everyone's a winner!


Indeed, the required knowledge usually extends far beyond the code. How does escrow work in the event of an acqui-shutdown?


I'm sure the lawyers billed sufficient hours to cover all eventualities - but when we were acquired, our product replaced the acquiring-company's product in the same space, and all our customers came with us, so it was never needed in the end.


It works about as well as a backup procedure without a tested restore procedure. You just don't know until you try to restore.


Then choose a different vendor. I gotta believe McLaren had a healthy gross profit margin on these vehicles and could just shovel money at the problem. I also find it hard to believe that only a single vendor could provide what they needed. Based on my experience I'd blame lack of foresight before I'd blame physical impossibility.


So what? I'm sure they could replace them if they felt like it was worth it. Clearly they don't.


Just about any aftermarket engine control unit would be able to take the place of the system in the F1


Meanwhile, millions of people receive their paycheque compiled on a 70s Tandem.


The price of failing to keep your technology up to date.


Why can't the system be emulated? Am I missing something?


If it has a custom ASIC in the connector card, then it would be rather difficult. Possibly "grind the top off the chip and recreate the mask" difficult.


In theory, absolutely. In practice I've seen ASIC-based challenge/response systems on industrial software implemented in such a way that they could be sufficiently 'emulated' simply by knowing a handful of challenge/response tuples. Believe it or not, this behaviour is (was?) actually specified in many dongle OEMs' implementation guides. Basically they say: chuck out a bunch of fairly random challenges, more or less ignore the responses, and only really check at a few critical points in the program flow.

This gives an insight into the threat model dongles are supposed to protect against. If you have access to the running system, the dongle and a halfway decent interactive debugger/disassembler cracking dongles is simply a matter of time.[0]

I've seen as low as 12 distinct challenges and a single significant response. This was in software for designing systems orders of magnitude more complicated and expensive than an F1.

[0] And if you don't have access to the dongle, it's mainly a question of more time and maybe a bit of code patching - but do you really want to patch the code that runs your chemical plant?
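To illustrate the pattern (the pair table below is entirely made up): the program fires off many decoy challenges but only ever inspects the response at a few critical points, so an emulator that knows just those pairs passes.

```python
# Hypothetical recorded pairs. In cases like the one described above,
# as few as ~12 distinct challenges with a single significant
# response were enough to emulate the dongle.
KNOWN_PAIRS = {
    b"\x00\x11\x22\x33": b"\x5a\xa5\x5a\xa5",  # the one that's actually checked
}

def emulated_dongle(challenge):
    # Answer from the table; return junk for anything else, which is
    # harmless because the caller ignores the decoy responses anyway.
    return KNOWN_PAIRS.get(challenge, b"\x00\x00\x00\x00")
```

This is why "grind the ASIC down" is rarely the first move: the weakness is usually in how the checks were wired into the software, not in the silicon.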


I can only imagine that they must be clever enough to have tried this, surely.

Or perhaps they are so full of recent comp sci and aero graduates that have never busted out a logic analyser that no-one ever said "we should just sniff the protocol, how hard can it be"


I make no comment as to the difficulty of their specific case. They (or more likely the consultants they hire) may well try/have tried this and find they have a more complex problem set. But bear in mind that 'sniffing the protocol' will require reverse engineering the encryption in use between the software and the driver and the whole deal requires someone who is both comfortable working at disassembler level and familiar with the platform and APIs and with the device driver model. Because it's not enough just to sniff the protocol, you must also locate the response checks in what might be a large piece of complex software - especially when you're looking at it disassembled.

And of course, you have to know all of this before you even know the right questions to ask, or what kind of skill set you need to buy in.

To those of us who grew up with +Orc, fravia and woodman, sure, this is like the first thing we'd try, but even for that generation this is a relatively esoteric skill set.

There's not enough info in the article to get anywhere near assuming this protection model applies in this specific case and I'm absolutely not suggesting that it does, only that I have encountered legacy or orphaned software systems which have been protected in this way and been able to successfully transition them away from legacy hardware keys.


Yes, I was agreeing with you and you went to the effort of explaining why :)

There are quite a few people ITT saying "just emulate it" and I imagine they have never lifted a soldering iron. Getting the thing to run when you designed it yourself can be hard enough, let alone one that is hostile to analysis!


Who said it can't? It's economically non-viable in the face of $300 replacement laptops.



