Self-Driving Cars Must Meet 15 Benchmarks in U.S. Guidance (bloomberg.com)
251 points by etendue on Sept 20, 2016 | 297 comments

From the regulations: "Fall back strategies should take into account that—despite laws and regulations to the contrary—human drivers may be inattentive, under the influence of alcohol or other substances, drowsy, or physically impaired in some other manner."

NHTSA, which, after all, studies crashes, is being very realistic.

Here's the "we're looking at you, Tesla" moment:

"Guidance for Lower Levels of Automated Vehicle Systems"

"Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997). SAE Level 2 systems differ from HAV systems in that the driver is expected to remain continuously involved in the driving task, primarily to monitor appropriate operation of the system and to take over immediate control when necessary, with or without warning from the system. However, like HAV systems, SAE Level 2 systems perform sustained longitudinal and lateral control simultaneously within their intended design domain. Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public. And, systems’ expectations of drivers and those drivers’ actual understanding of the critical importance of their “supervisory” role may be materially different."

There's more clarity here on levels of automation. For NHTSA Level 1 (typically auto-brake only) and 2 (auto-brake and lane keeping) vehicles, the driver is responsible, and the vehicle manufacturer is responsible for keeping the driver actively involved. For NHTSA Level 3 (Google's current state), 4 (auto driving under almost all conditions) and 5 (no manual controls at all), the vehicle manufacturer is responsible and the driver is not required to pay constant attention. NHTSA is making a big distinction between 1-2 and 3-5.

This is a major policy decision. Automatic driving will not be reached incrementally. Either the vehicle enforces hands-on-wheel and paying attention, or the automation has to be good enough that the driver doesn't have to pay attention at all. There's a bright line now between manual and automatic. NHTSA gets it.

I don't understand this anti-autonomy cheerleading. It's like people on HN live in a parallel universe where there have been a bunch of deaths from cars running Autopilot, whereas in the world I live in, it seems to be somewhat safer than a human alone. Like, people can mess up either way, but they seem to be less likely to do so when the car is also looking out for them. What am I missing?

You have to compare the one death using autopilot to one death of people driving Teslas without autopilot. Musk tried to compare it against the universe of drivers (Teslas, kids driving crappy cars, etc), which was a complete false comparison.

The reason it was a big deal is that it was a high-profile fatality. Tesla drivers are generally a pretty safe bunch. Statistically, if autopilot hadn't been engaged, that death would not have occurred. Autopilot makes Tesla drivers less safe, not more safe.

Also, the government is doing the self-driving industry a huge favor. These fatalities could screw over the whole industry if they get out of hand. Musk is giving self-driving a bad name.

Two deaths using Tesla's autopilot. [1]

[1] https://www.youtube.com/watch?v=fc0yYJ8-Dyo

Why was this not widely reported?

The dash cam video was only released last week, in conjunction with a lawsuit. Now it's on all the mainstream news outlets, from the Wall Street Journal to the New York Times to Fox News.

This is yet another "Tesla hit slow/stopped vehicle on left of expressway" accident. There are now three of those known, two with video, one fatal. Watch the video. The vehicle is tracking the lane very accurately. Either the driver is very attentive or the lane following system has control. Then, with no slowdown whatsoever, the vehicle plows into a stopped or slow-moving street sweeper.

Here's one of the other crashes in that situation.[1] This was slower, so it wasn't lethal. There's another one where a Tesla on autopilot sideswiped a vehicle stopped at the left side of an expressway.

[1] https://www.youtube.com/watch?v=qQkx-4pFjus

Jeez, that's pretty bad. This seems like the most basic case that autopilot is supposed to solve, and yet it still crashes?

IMHO, an autonomous car should use at least two autopilot systems from independent vendors, to avoid a single point of failure.


"A man with a watch knows what time it is. A man with two watches is never sure."

Two is not enough for voting. If you have two clocks you don't know what time it is.

It's enough to catch errors so the system can alert the human and brake.

Sounds like we need three, à la Minority Report.

Dangerous. What if they end up with different interpretations? Which one is actually correct?


If you are on a highway at full speed, a sudden brake can lead to an accident (folks behind you not reacting in time).

So don't brake suddenly when it's unnecessary, only in a genuine emergency. In other cases, just slow down to a full stop and alert the human driver in the process, giving them more time to react.
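The policy sketched in this thread (two independent systems cross-checking each other, with disagreement triggering a driver alert and a gentle slowdown rather than a panic stop) could look roughly like the following. All interfaces, names, and thresholds are hypothetical, invented purely for illustration; real systems would fuse continuous trajectories, not single steering values:

```python
# Illustrative dual-redundant cross-check: two independent autopilot
# "vendors" each propose a steering angle (degrees) and a target speed
# (km/h). If they disagree beyond a tolerance, we don't brake hard --
# we alert the driver and shed speed gently toward a stop.
# All constants are made up for this sketch.

STEER_TOLERANCE_DEG = 2.0
SPEED_TOLERANCE_KPH = 5.0
GENTLE_DECEL_KPH_PER_TICK = 3.0

def arbitrate(cmd_a, cmd_b, current_speed_kph):
    """Return (steer_deg, target_speed_kph, alert) for one control tick."""
    steer_a, speed_a = cmd_a
    steer_b, speed_b = cmd_b
    agree = (abs(steer_a - steer_b) <= STEER_TOLERANCE_DEG and
             abs(speed_a - speed_b) <= SPEED_TOLERANCE_KPH)
    if agree:
        # Systems agree: average their commands, no alert.
        return ((steer_a + steer_b) / 2, (speed_a + speed_b) / 2, False)
    # Disagreement: hold neutral steering, decelerate gradually,
    # and raise the driver alert instead of slamming the brakes.
    target = max(0.0, current_speed_kph - GENTLE_DECEL_KPH_PER_TICK)
    return (0.0, target, True)
```

For example, `arbitrate((1.0, 100.0), (8.0, 100.0), 100.0)` disagrees on steering and returns `(0.0, 97.0, True)`: neutral steering, a small speed reduction this tick, and the alert raised.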

IMHO, emergency braking must be mandatory for every new car with top speed greater than 60 km/h.

Last I heard, nobody has been able to verify if the car was actually in autopilot mode. However, the emergency braking also clearly failed, if the police report that no attempt to stop was made is true.

In regard to safety, it's equally important to report doubtful cases, as they may be a sign of something occurring.

It's a classic single point of failure: a fault in just one subsystem leads to a crash.

Surely the car keeps a log?

In the most famous Tesla case, the driver was a huge advocate and YouTuber who intentionally pushed Autopilot to its limits, not being a good copilot.

It's not anti-autonomy.

It's the offence that an engineer feels about something being marketed as something it's not.

Tesla is fooling the public. The opinion of the general public who don't drive Tesla's cars is that automated driving is already here and Tesla is leading the way.

In your reply to a well-argued post, you offer no similarly well-argued refutation of its points, just an emotional appeal based on generalities (I say 'emotional' because you take an explicit 'it is us against them, you are either for us or against us' position.) What you are missing is that opposition to simplistic arguments and false dichotomies where safety is an issue is not opposition to autonomy. What you are missing is that one can look forward to autonomy while advocating reasonable caution.

Autonomy will be great. We aren't there yet. Tesla is/was deceptively marketing their capabilities so they can risk their customers' safety with the opposite of informed consent (misled consent? [0]). They are doing it in order to collect data that will get them to real autonomy first. That's fucked up and greedy. The other comment demonstrated that it is in fact risking their customers' lives. It's safer than some random human alone, but not safer than a comparable human alone.

[0] You can say they tell you to keep your hands on the wheel and all that, but they themselves manufactured/fanned a ton of hype to the contrary. It's like arguing that you should have paid more attention to the EULA.

I don't see any anti-autonomy in that post?

He's definitely anti "disguising level 2 as autonomy" though.

I'm not sure they understand just how many intermediate steps requiring non-obvious technical progress are going to be required between 2, 3, 4, and 5. On a premapped track, 3 is just fine at present, but Google is nowhere near 3 for the sort of adverse conditions and unmapped road alterations that are common in much of the road network. This is going to be an iterative design process if it's moving forward at all.

A likely future is one where automation is only enabled for consumers as an option on a minority of roads (starting with the Interstate Highway System) that have been heavily mapped and managed, and we work from there, developing the algorithms at high sample size, then slowly extending out into the state highways and arterials. The roads and maintenance actions will likely also, as the tech progresses, have some modifications made to increase reliability.

These cars are going to need a large number of sensors. The Uber self-driving car has "something like 20 cameras, a 360 degree radar, and a bunch of laser [rangefinders]", and this is a decent start; a Tesla and even a Google car is simply not equipped for enough edge cases to let a consumer near without making them hands-on-wheel liable to take over.

99.99...9% of my driving takes place on heavily mapped well-managed roads (occasional pothole notwithstanding) that are heavily trafficked by other cars. As far as I'm concerned, if Google can do this, their vehicle is fully autonomous.

Only like three towns are "heavily mapped" to the level required for a Google self-driving car to work. And the problem is that if the car relies on the map, the first car to see a new road, or a change in the road, won't know what to do.

"As far as you're concerned" doesn't mean a whole lot.

For the Tesla comment, what more could you expect from them right now? If you have Autopilot on, your hands have to be on the wheel or it vibrates and then eventually starts slowing down, since it knows it's not safe.

People can work around any such system, but measures like this force them to do it consciously. Seems pretty reasonable on Tesla's part.

You could expect them to not call it autopilot, and instead call it lane keeping, like it really is. The name does a really good job of setting a terrible expectation, almost to the point of undoing any attention-grabbing mechanism meant to keep users engaged.

The NHTSA is hinting in that direction, indicating that manufacturers must clearly distinguish between driver assistance systems (levels 1 and 2) and real self driving (levels 3 - 5). There will probably be some standard on this. There has to be; consider what happens when car rental fleets start having some of these features.

There was an accident with a Volvo where someone was showing off the pedestrian safety system, and hit a pedestrian. It turned out they hadn't purchased the pedestrian safety system.[1] Something is needed to prevent problems like that.

Then there's mode switching trouble. Classic problem with aircraft control systems. Tesla disengages the "autopilot" if the driver touches the brake. The trouble is that this also disables automatic braking, as the driver is assumed to now be in control. So tapping the brake without applying it fully in a hazardous situation causes a crash.
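The mode-coupling hazard described here, where a single driver input silently disables an unrelated safety function, can be shown with a toy state model. This is purely illustrative logic written for this comment, not any manufacturer's actual implementation:

```python
# Toy model of the mode-coupling hazard: a single "brake tap" event
# disengages lane keeping AND automatic emergency braking together,
# on the assumption that the human has fully taken over.
# Hypothetical logic for illustration only.

class DriveModes:
    def __init__(self):
        self.lane_keeping = True
        self.auto_emergency_brake = True

    def on_brake_tap(self):
        # The hazard: one input clears BOTH automated functions at once,
        # even though the driver may not actually be in control yet.
        self.lane_keeping = False
        self.auto_emergency_brake = False

    def will_auto_brake(self, obstacle_ahead):
        return self.auto_emergency_brake and obstacle_ahead

car = DriveModes()
car.on_brake_tap()  # driver taps the brake without fully applying it
print(car.will_auto_brake(obstacle_ahead=True))  # False: AEB is off too
```

A decoupled design would treat the brake tap as disengaging lane keeping only, leaving emergency braking armed until the driver demonstrably has control.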

All these driver behavior problems with shared control authority are hard. Maybe harder than going for level 3 and letting the automation do it.

[1] https://www.youtube.com/watch?v=_47utWAoupo

It reminds me a little of the way in which airbags are always described as "SRS" (for "Supplementary Restraint System"), to try and make it clear that they're supposed to be used with seatbelts, not instead of them.

I think the name isn't to blame; it's the misinterpretation of it (and possibly Tesla overselling it).

"Autopilots do not replace a human operator, but assist them in controlling the vehicle, allowing them to focus on broader aspects of operation, such as monitoring the trajectory, weather and systems."


I've used an autopilot on a yacht and didn't expect it to dock or avoid ships. A plane autopilot doesn't freak out when the pilot takes their hands off the controls. So there seems to be room to allow the name but tighten how it's used.

> You could expect them to not call it autopilot, and instead call it lane keeping, like it really is.

That is all that most autopilots in planes do, however. I don't get where "autopilot" somehow came to mean full autonomy in cars but not in planes (and other vehicles like boats) where the term was used previously.


Yes, but with planes and ships (on ships it's almost always called track pilot and speed pilot, by the way) you can let go of the controls for extended periods of time. In cars, right now, that's a recipe for disaster. Functionally they may be the same, but practically speaking it's very different.

This is the key point. People don't care about what it does in detail, they think about the user experience.

Autopilots in planes can take you across the world. The car analogy would be only having to pull into and out of your driveway. And while it isn't used all that often, aren't most of the planes most people fly in (big commercial aviation) certified to autoland themselves? The connotation of autopilot is absolutely that it can fly itself. Same with Tesla's Autopilot.

An autopilot in a Cessna can most definitely not take you around the world. Also, autopilots even in a commercial jet keep the plane steady along a route, they don't deal with collisions at all...you could call it "lane keeping" if you want to sue the airline industry for misleading us over a term they created.

Autolanding is not autopilot. In fact, autolanding is not even certified if multiple autopilots are not available to provide redundancy.

Most people who have flown don't even know what a Cessna is, much less its capabilities. It's totally irrelevant. Same with dealing with collisions, which aren't analogous challenges between flying and driving. To a non-engineer, the point is that autopilot gets you from point A to point B, not the details of how it has to achieve that.

> Also, autopilots even in a commercial jet keep the plane steady along a route, they don't deal with collisions at all...

Yes, they do. Then the problem is when the 2 autopilots of the 2 planes take the same evasive manœuvre...

Ah, the awkward old dance where two people try to walk through the doorway and end up mirroring each other's movements trying to get out of each other's way...

The aviation world actually solved this problem a long time ago. Everyone turns to their right. FAR 91.113:

> (e) Approaching head-on. When aircraft are approaching each other head-on, or nearly so, each pilot of each aircraft shall alter course to the right

(among other collision-avoidance regs in that section)

I'm pretty sure the aviation world inherited this protocol from the nautical world.

The planes talk to each other and run a common protocol that tells the pilots what to do to resolve the conflict. The problem is when one of the pilots doesn't listen to the nice robot voice yelling at him to DESCEND NOW!...

TCAS has the unit with the higher serial number yield to the one with the lower serial number.
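As an aside, TCAS II coordinates over the Mode S transponder link, and the tie-break is based on the aircraft's unique 24-bit Mode S address rather than a literal serial number. The deterministic-ordering idea can be sketched like this (a toy illustration, not the real negotiation protocol):

```python
# Toy illustration of a TCAS-style tie-break: when two conflicting
# aircraft might otherwise choose the same maneuver, a fixed ordering
# on a unique identifier (TCAS II uses the 24-bit Mode S address)
# deterministically assigns complementary resolutions.

def resolve_conflict(addr_a, addr_b):
    """Return {address: maneuver}; the lower address takes priority.

    Simplified: real TCAS negotiates climb/descend "senses" over the
    transponder datalink; this only shows the deterministic ordering
    that prevents both aircraft from picking the same evasive move.
    """
    low, high = sorted([addr_a, addr_b])
    return {low: "CLIMB", high: "DESCEND"}
```

Because both units compute the same ordering from the same two addresses, they can never mirror each other's maneuver the way two pedestrians in a doorway do.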

The name is not the problem. Regardless of what you call it, once people find out empirically that their attention is not needed most of the time, even the most well-intentioned minds will wander.

They don't have to implement SAE 2, and if their implementation isn't safe, doing the best they can might not be good enough.

I think there may be a fundamental flaw with lane keeping. It removes the driver from doing anything but still requires constant vigilance. That might be asking too much. My ADD is too strong to watch the road without having to do any part of the driving. I suspect a lot of people are the same way.

If most drivers are just keeping their hand on the wheel while daydreaming, Tesla should be forced to just disable the feature until the tech is ready for Level 3. Or use the Level 2 tech as a backup only.

Heck, I refuse to use cruise control at all because it makes me bored. My personal solution to avoid boredom while driving is to drive faster. Clearly I am not a safe driver, but I'm going to need full automation to help me out.

There's a good chance that's what happens. Auto-braking (level 1) is likely to become standard, like anti-skid braking. Full automatic driving (levels 3-5) will be options. Semi-automatic steering (level 2) may disappear as the higher levels start to work. The shared responsibility between driver and control system is too messy at level 2.

Not totally accurate. You actually get two warnings before autopilot disengages. You have to put a hand (not hands) on the wheel at certain intervals, but you certainly can be hands-off a majority of the time.

They'd never do it but I imagine a system that was loud, flashy, and embarrassing, that activated when autopilot is misused (i.e. auto-pilot is demanding user intervention and intervention is not given) would be most effective in incentivizing drivers to change their behavior.

I.e. a loud beeping noise that annoyed pedestrians and other drivers until you took the wheel. Kind of like how accidentally triggering your car alarm in the parking lot will lead to a very hasty correction on your part.

The car already cuts the stereo for a moment and beeps at you. If you ignore the first warning, it mutes the stereo and beeps until you resolve the situation.

It sounds like they are suggesting something more like 'cuts the stereo and begins announcing on internal and external speakers that the driver is not paying attention and the car will slow to a stop in 5,4,3,2,...'.
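An escalation ladder like that can be modeled as a simple lookup over time-without-input. The thresholds and wording below are invented for illustration, not any real vehicle's calibration:

```python
# Illustrative escalating-alert ladder: the longer the driver ignores
# the takeover request, the more intrusive the response. Any driver
# input would reset the timer. Thresholds are made up for this sketch.

ESCALATION_STEPS = [
    (0, "none"),
    (5, "chime, mute stereo briefly"),
    (10, "continuous beep until resolved"),
    (15, "announce on internal/external speakers, begin slowing to a stop"),
]

def alert_level(seconds_without_input):
    """Return the most severe action whose threshold has been reached."""
    level = ESCALATION_STEPS[0][1]
    for threshold, action in ESCALATION_STEPS:
        if seconds_without_input >= threshold:
            level = action
    return level
```

So at 7 seconds of inattention the car is still only chiming, but past 15 seconds it escalates to the loud, embarrassing, externally audible stage the parent comment proposes.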

Having your hands on the wheel is not a reliable indicator that you are paying any attention and are ready to take control.

An eye-tracking system might work.

They would be better off making SAE 1-2 features game-like.

You'd get positive points for avoiding hazards the system notices (but that stay below the audible warning threshold) and for correctly reacting to its input (warned, but not yet in the automated fail-safe).

Now if only they could automatically make cars exiting a rolling slowdown on the freeway actually get back up to the indicated speed of travel in an expedient manner.

Excellent analysis. It makes me optimistic that we as a society are going to be able to work this out. We might not of course, but it's possible.

"Automatic driving will not be reached incrementally."

Not true. There are other ways of doing this incrementally. For example, slow speeds, closed roads (no pedestrians or other cars), only in good weather, ideal conditions, etc.

What he's saying is that it's not good enough to be able to control the car 90% of the time. Either it needs to be robust enough to operate safely without human intervention 100% of the time or it needs to somehow enforce that the driver is alert and capable of taking over 100% of the time.

We can't have an autonomous car that expects a driver to take over in a dangerous situation if that driver hasn't had to maintain control the entire time. For instance, there are youtube videos of drivers moving to the passenger seat in a Tesla with autopilot on.

There are other ways of doing this incrementally.

Yes, and they're all undesirable, unworkable, or useless, as your own post points out.

They are not undesirable at all. There are plenty of circumstances where these are of great use. Airports, for example. Downtown cores. Smart cities developed from scratch. Resorts. The list goes on.

I'm sure when you open Jurassic Park you'll enable autopilot.

Resorts are irrelevant; smart cities are an oxymoron and would arrive way past the time when level 5 would be worked out.

Downtown cores are pretty much the worst possible place for level 2 systems at this time.

The idea that Google is anywhere near level 3 is merely some incredibly good marketing, and a fat pile of deception. Google has an impressive tech demo, not a product nor a reliable technology.

Nobody is above level 2.

Google's self-driving system basically only works with the route preplanned and premapped ahead of time, specifically for that car. Even small changes in the environment are potentially devastating. And it isn't prepared to handle even mundane weather changes.

It should be well understood that if the only people who can safely handle the vehicle are professional test drivers on a preplanned route, the car isn't ready to say it's at the level it claims it is.

Can you back that up?

Circa 2014, highlights below: http://www.slate.com/articles/technology/technology/2014/10/...

"These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway."

"But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps are too inefficient to work in the country as a whole."

"To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey."

"The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again."

"Chris Urmson, director of the Google car team, told me that if the car came across a traffic signal not on its map, it could potentially run a red light, simply because it wouldn't know to look for the signal."

Google's entire business advantage is based on cloud. I welcome anyone to prove that this has changed.

Google is definitely level 3. Their presentation at SXSW this year shows it.


Nobody's saying their cool little 3D renders aren't awesome looking, but that doesn't really mean much. Google has PR down to an art form, but if you asked Google to drive one of their cars to Chicago, they couldn't do it, no matter what the weather. They only work in a small nearly closed course environment.

> if you asked Google to drive one of their cars to Chicago, they couldn't do it

Nobody here is making that claim.

If Google operated a taxi service within Austin, we would say the car operates at level 3. SAE levels say nothing about where the car is operating:

"At SAE Level 3, an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests"

It wasn't mentioned in the Bloomberg article, but the 15 areas covered are:

  • Data Recording and Sharing
  • Privacy
  • System Safety
  • Vehicle Cybersecurity
  • Human Machine Interface
  • Crashworthiness
  • Consumer Education and Training
  • Registration and Certification
  • Post-Crash Behavior
  • Federal, State and Local Laws
  • Ethical Considerations
  • Operational Design Domain (operating in rain, etc)
  • Object and Event Detection and Response
  • Fall Back (Minimal Risk Condition)
  • Validation Methods
Not sure if they're specifically ordered, but it seems positive that Data recording and Privacy are up at the top.

This seems suspiciously like government getting something right... with regards to a quickly-evolving new technology market...

has this ever happened before?

Can you think of a concrete example where the government got it wrong?[1] For the sake of argument, the federal government in the last 50 years? Maybe the encryption export ban. Or the CDA, but that was quickly reversed and the part that's left (Section 230) was really instrumental in the rise of the modern web.

[1] And I don't mean wrong as in "NSA spying" because you disagree with the policy. I mean like, "regulations mandated everyone use Beta tapes and laser disk even though they quickly became obsolete."

ITAR comes to mind. It basically bans export of dual-use (mil/civ) technologies. It has done incalculable harm to our aerospace industry and other industries that produce things classed as dual-use (encryption used to be one of them). This is the same law that banned encryption.

You have companies in Aviation Week (a big aerospace industry mag/site) running full-page ads for sensors and other aerospace items proudly claiming they're ITAR-free (meaning not made/designed in the US). A company I worked for bought a high-power (2.5 kW) laser from Germany. It failed and cannot be sent back to Germany for repair due to ITAR (the tooling needed to fix it cannot be easily moved and would probably fall under ITAR). High-end CNC machine tools will brick themselves if they are moved without the manufacturer specifically blessing the move, due to ITAR regulations (earthquakes can trigger the "I've been moved without permission" response).

There are countless other harms it has caused that I have no direct experience with. ITAR is fairly easy for the "bad guys" to get around, because they can just not buy US goods.

How about the DMCA? It's routinely abused; it's easy to issue takedown notices and difficult to defend against them.

e-cigs. We don't know their exact relative harmfulness/safety, but there is fairly broad consensus that 'vaping' is better for your health than cigarettes.

Clearly more work needs to be done to rigorously investigate and evolve the space, but recent Federal regulations have essentially put small innovators out of business.

In sum: a harm-reducing item is being regulated into the ground.

Some HIPAA regulations that pre-date the rise of shared virtual servers in "the cloud" are quite outdated and cause quite a bit of trouble for no real benefit.

> Some HIPAA regulations that pre-date the rise of shared virtual servers in "the cloud" are quite outdated and cause quite a bit of trouble for no real benefit.

What HIPAA regulations are you talking about? Other than HITECH guidance (which can sort-of be seen as a "HIPAA regulation"), HIPAA regulations don't generally specify technologies at all, and I can't think of any that I would describe as outdated or troublesome due to the rise of shared virtual servers and "the cloud", whether they predate it or not.

The biggest thing is that we can't run software with unencrypted PHI on physical hardware that is simultaneously running other people's code. In practical terms this means that we have to pay AWS some $ to get dedicated instances and also we can't use ELBs in the standard (easy) way. There are some other things as well.

> In practical terms this means that we have to pay AWS some $ to get dedicated instances

This is a feature, not a bug. It also is neither HITECH nor HIPAA; it is instead AWS's requirement in order to sign your BAA.

> we can't use ELBs in the standard (easy) way

Also neither HITECH nor HIPAA. ELBs are used in a PHI-related scenario identically to any other scenario. Unless you are referring to using it as an SSL terminator, in which case I would say "the standard (easy) way is always wrong".

> The biggest thing is that we can't run software with unencrypted PHI on physical hardware that is simultaneously running other people's code.

There is, AFAICT, no regulation under HIPAA or related law that requires this. Certain service providers may have determined that they cannot provide guarantees of privacy/security without this technical restriction.

That seems like a fairly reasonable thing given you're talking about unencrypted PHI... it's some extra $ for a considerable reduction in overall attack surface when processing the most sensitive type of personal data.

I don't think this meets OP's definition of "wrong".

In practical terms I don't agree that the threat of someone doing all of the following things is worth worrying about (in comparison to many other more likely failures):

1) determine what physical hardware in AWS the target is running code on

2) somehow get the AWS virtual machine manager to let the attacker run their malicious code on the same hardware

3) somehow pierce the protections of the virtual machine to read memory being used by the target application

4) figure out how the data is stored in memory in order to make sense of anything that was read

> In practical terms I don't agree that the threat of someone doing all of the following things is worth worrying about

In AWS's case, this is an AWS rule about when they will sign a HIPAA BAA, even though there is no HIPAA regulation that specifically prohibits the arrangement at issue. AWS clearly thinks it is worth worrying about.

When you run your own public cloud, you can determine what risks are worth accepting potential liability for.

Yes, I agree that Amazon is behaving perfectly rationally given the legal environment. My point is that the legal environment has been designed in a suboptimal way from a technical perspective. Identifying such a situation was rayiner's request.

> Yes, I agree that Amazon is behaving perfectly rationally given the legal environment.

I'm not commenting on Amazon's rationality (I haven't actually evaluated the security concerns that would determine that.)

> My point is that the legal environment has been designed in an un-optimal way from a technical perspective.

And you haven't pointed to anything in the legal environment that is suboptimal from a technical perspective. You haven't even pointed to anything in the legal environment at all.

Amazon (as a BAA) has certain administrative responsibilities for putting administrative and technical safeguards in place to prevent breaches, and certain obligations and liabilities in the case of breaches. HIPAA and related laws and regulations do not specify the specific administrative or technical safeguards, though they do specify areas that must be addressed.

Amazon has decided that the particular technical arrangement you prefer is too high of a risk, but you haven't pointed out anything that indicates that this is the result of an outdated regulation that results in poor technical choices rather than technology-neutral regulation and a reasonable evaluation of the security concerns of the particular technical arrangement you would prefer.

People said the same thing about cold boot attacks against encryption keys. Yet today the police and others are using that and other NAND attacks regularly.

HIPAA is a very easy compliance standard to meet. If it seems difficult to meet those requirements with your standard tool configurations, you should think about what that means with respect to the integrity of your data.

I would like to see a case of a cold boot attack by the police.

Memory forensics is a thing.

Google around with terms like forensics and "Volatility" or "Volatility toolkit" and you should find some presentations and other references.

I know what memory forensics is, and I use Volatility and Second Look and quite a few other things pretty often. I asked specifically about an instance of a cold boot attack, which you hyperbolically claimed are used regularly, or at all, by the police.

You know what, I don't need a case. Please find me a jurisdiction in which cold boot attacks have passed forensic certification; e.g. a link to the process, like one from a body equivalent to the ASTM https://www.astm.org/Standards/forensic-science-standards.ht... would suffice.

I asked my doctor to email me my records and was told it is illegal due to HIPAA. It makes no fucking sense, but that's what it is.

It isn't illegal to email records under HIPAA. But your doctor probably doesn't have a system set up to securely email records (such things do exist), and their practice has probably adopted privacy policies that don't allow emailing for that reason. Doctors aren't generally compliance experts, and are much more likely to know what the policies of their workplace allow than the distinctions between what HIPAA allows and what their employer has adopted as policy, based on the particular technology they've decided to adopt, their particular level of risk tolerance, and other factors.

Such as? HIPAA generally has to do with organizational access controls and not specific technologies.

Also, certain provisions of FERPA precluding use of cloud accounts for holding student data. I think those may be the archetypal examples.

The law that allows the government to access all cloud-hosted emails that are older than 6 months, without a warrant.

> Can you think of a concrete example where the government got it wrong?

Stem cell research.

I think 2bitencryption's point is more along the lines that the government regulates fields and that regulation quickly becomes unworkable because it's a "quickly-evolving new technology market." For example, if the government had mandated that all computers be Windows 9x-compatible, that might have prevented the rise of the iPhone.

The embryonic stem-cell research ban didn't have anything to do with the underlying science and technology--it was based on a moral objection to the practice of destroying embryos for research. If the government had, for example, mandated the use of some testing methodology that soon became obsolete, that might be more on point.

Ah, I see. Thanks for clarifying, I thought it was a broader question.

The VA healthcare debacle. The government was unable to provide healthcare in a timely fashion, and instead of trying to fix the problem they hid it and covered it up by falsifying records. The new system they made is completely incompetent; doctors and patients are forced to spend days on the phone with people who have no clue what they are doing.

Software patents.

Well there was the whole Federal Government inventing the internet thing at one point.

Yes. There's been a decades-long smear campaign against public management of common infrastructure.

It is truly unfortunate that among those perpetrating the smears are the public managers themselves, in the guise of doing their jobs.

What do you call a smear campaign against a target that's actually incompetent?

What do you call an indiscriminate smear campaign against a large target, parts of which are incompetent, and parts of which are competent?



Frequently. Passenger air travel. The highway system. Many others.

The FAA and NTSB have really worked well.

Interesting: "Federal, State and Local Laws". So what happens when a self-driving car violates a law and a police officer pulls it over? Who gets the ticket? If there is no steering wheel, can it even get pulled over?

If I were to guess, once self-driving cars are widespread, cars won't be pulled over any more for driving-related issues (ie, cops won't radar any more). However, they will probably be required to have a kill switch that can pull it over for other reasons (ie, if the cops thought your car was the getaway vehicle from a bank robbery)

This is of course once almost all cars are self-driving so it'll be interesting to see what happens in the midterm.

> If I were to guess, once self-driving cars are widespread, cars won't be pulled over any more for driving-related issues

There will still be black people in cars

Solution: since you don't need to see out of the car anymore, all windows have 0% VLT tint unless it's stopped.

No problem, they'll just unload .45 rounds into the vehicle until they hit something that makes it stop.

Cops pull over cars for traffic infractions for three primary reasons. The first is for driving safety, the second is for revenue, and the third is because it leads them to arrests of idiot criminals who can't be bothered to fix their tail lights.

The revenue part is coming under a lot of scrutiny recently, since it's being proven to have very regressive effects. The Federal government will make it more expensive than the revenue they generate from it.

I imagine it would be like radio tag toll lanes now. If you go through and do not have a tag, they mail the vehicle owner the ticket. Most violations would not need stops. I don't think private vehicle ownership will survive self driving cars.

NHTSA level 1 and 2, the driver. NHTSA level 3-5, the vehicle manufacturer.

The cynic in me immediately interpreted "Post-Crash Behavior" as the exact opposite priority from "Privacy".

I was thinking it meant the car would not drive itself away and hide after a collision :-)

That could actually be an interesting problem. It might sound like "in the event of an impact >Xg, stop, shut down, and wait for police/NTSB/etc to come investigate". But if, say, you hit a deer on a wilderness road in the winter, that behavior could lead to the passengers all dying of exposure.

Do self-driving cars have a button labeled "fuck your rules and DRIVE"?

A self-driving car that refuses to move after a minor accident on a wilderness road sounds like a great opening scene for a horror movie.

Probably any self-driving vehicle should have a button you can push that amounts to consenting to "The car will now record that you initiated a manual override. You are now in full control. Anything you do is your responsibility." Insurers will throw a fit if you push this button but it's better than being stuck in a bad situation because your car can't figure out a way out of it.

Likewise, in most discussions of self-driving cars, it is noted that they probably won't work well in the snow. Someone (presumably not from a snowy area) will then say that the car will pull over and wait, as you shouldn't be driving in a snowstorm anyway. They never say what's supposed to happen next, with the highways full of people whose cars have stranded them. Are they all supposed to call for cabs? But wait, cabs have been replaced by self-driving cars...

I'm pretty sure they'll eventually figure out how to get self driving cars working in snow, or rain, with protocols on when to stop that match when humans should.

Yes, unless it's level 5. Then you walk.

That's a very short section, but it looks more like how the driving system itself responds if a sensor is damaged in a crash. Basically, that it should hand control back to the driver. And also if you crash and then repair the system, it must somehow be validated/tested before being put back in service.

The data collection "black box" side of it is in a different section.

> it must somehow be validated/tested before being put back in service

I pity the mechanics for that one. You just know the car manufacturers are not going to want some unwashed shade tree mechanic, or even a legitimate independent garage, to have access to do that.

Let alone self-maintenance. There goes the right to repair...

If you are driving a modern car, it has a blackbox that records accident data.

Thanks for listing them, I was curious indeed.

By the way, in which area do the following requests fall:

- Yielding to an emergency vehicle with sirens on.

- Moving backwards to a safe and large enough spot when the route is too narrow to fit self-driving car and oncoming huge lorry (and there is no line marking the limit between road and ravine).

- Upon instructions from authority, recognize that the highway is closed due to an accident and, no matter what the driving code says, you actually have to make a U-turn on the highway and follow the crowd. Alternatively, just take that route (yes, the one with the large no-entry sign at the beginning) or that narrow path in the wood (yes, it exists, even if Google Maps isn't aware of it). At the bare minimum, park yourself off the road and let the others move on.

- Verify whether a queue is forming behind you. Listen to the honkers; they may be right. When you are an obstacle to most of the traffic, moving to the side and letting others pass from time to time is sincerely appreciated.

Now if they could just come up with some "Data Recording and Privacy" regulations for all electronic devices, so Google and Facebook can stop creeping me out. They're like the creepy neighbors that are always looking out the window to see what I'm doing.

Do you need to use their services? I've been (mostly) Google- and Facebook-free for a long time, and I don't live under a rock. Maybe you should try the "FB&G"-free diet too...? (It's not for everyone :-)

The problem is communicating with people that use those services

I'm astounded that it seems like these regulations are going to be sensible and promote the technology. It's a good thing that these are going into place, since autonomous vehicles should definitely not be legislated on a state-by-state basis.

> I'm astounded that it seems like these regulations are going to be sensible...

Was that hyperbole? I would say the majority of regulations (at least in OECD countries) are sensible, and many that are not are intended to be, are outdated, or are politicized.

    > I would say the majority of regulations (at least in
    > OECD countries) are sensible
I think it can be shocking to non-Americans just how much the Americans distrust and think their lawmakers and -- especially shockingly, their civil servants -- are both incompetent and have malicious intent.

American friends have found it incredible -- for example -- that something like NICE[0] can exist and people don't assume it's trying to kill them all; cf "death panels".

I also wonder in what other developed countries Jade Helm 15 would have been controversial[1]...

[0] https://en.wikipedia.org/wiki/National_Institute_for_Health_... -- especially their guidance on how much a year of life is "worth"; see the "Cost Effectiveness" section

[1] https://en.wikipedia.org/wiki/Jade_Helm_15_conspiracy_theori...

Regarding your Jade Helm 15 question: around this neck of the woods we host the world's largest cold-weather military exercise every year, with 15,000-20,000 soldiers from all across NATO. We literally have US Marines and other foreign forces playing invasion right where we live, using air force, army, and naval assets. There have never been any conspiracy theories or fears about this. Perhaps mainly because we appreciate you guys having our backs and knowing how to fight in snow, just in case Ivan comes over for a "visit".

On a related note, there is a truly hilarious story from a guy over on Reddit who served in the Marines. They were stationed in North Carolina and had never been in snow, and when they came to this exercise they of course got their asses truly handed to them in a snowball fight by a bunch of Norwegian schoolkids. Highly recommended reading; first comment after the OP here:


Here's a non-mobile link to the comment itself:


> I think it can be shocking to non-Americans just how much the Americans distrust and think their lawmakers and -- especially shockingly, their civil servants -- are both incompetent and have malicious intent.

The EU is often criticized (e.g., Brexit) as being something that promulgates useless regulations (e.g., curvature of a banana).

    > The EU is often criticized (e.g., Brexit) as being
    > something that promulgates useless regulations
Sure. Some people hate single-payer healthcare too. Nothing like watching Food, Inc. to remind you why the EU loves you.

The "curvature of a banana" thing is a myth; the regulation just says it should have a good appearance.

I quite like the human rights and environmental protections afforded me as a EU citizen. British people see the beginnings of their work rights already in the crosshairs since Brexit. Glad I live in the Netherlands now.

> The EU is often criticized (e.g., Brexit) as being something that promulgates useless regulations (e.g., curvature of a banana).

Yes but those almost always turn out to be made up by the Daily Mail.

The EU regulation is often criticized as bad and dumb, but not as malicious.

At first I thought you were making up a super funny and clever thing, but by god, you're only referencing a pre-existing funny thing [1]. Although funny in a different way.

[1] https://en.wikipedia.org/wiki/Commission_Regulation_(EC)_No....

Hey, nevermind that the article you linked expressly notes that the entire policy is around a standard for the classification of produce, and that the EU is first and foremost an economic union with the goal of harmonized trade regulations.

You know, the type of thing where standard gradings and classification of produce and manufactured goods would be fairly important? (and you know, in no way different to any other modern nation or industrial group).

In fairness, Britons don't think NICE can exist -- see this case where an American went on a radio show and praised its CBA approach, while the British participants vehemently denied that it does exactly what it actually does:

>>...Britain had achieved cost-effective treatment for everyone, at the cost of some people missing very expensive treatments that might help them. I was rather congratulating myself on this answer, because NICE is beloved of health wonks everywhere; Obamacare’s Independent Payment Advisory Board (IPAB) is an attempt to sort of replicate it. Pointing out something the British health system can do that the American system can’t, and doing so in dryly factual tones, seemed like a good way to endear myself to the British audience.

>>The other guest, a British health official, interrupted to basically accuse me of lying; the British health system, he said, did no such thing.

>>Now I reiterate: I had not called NICE a death panel, or said that it was bad; I had simply described what NICE does, which is keep the NHS from blowing its budget on very expensive treatments that deliver relatively little value per pound spent. You can read NICE describing what NICE does on its website; the description is not significantly different from the one I gave. Being told that this was flat out wrong was surreal. Things got even more surreal when I began again to explain what NICE does, thinking that perhaps I had been unclear, and the host interrupted me and said something like “As you know, that’s false.”


Many people automatically assume a government can't make up sensible regulations. There are a lot of them in the US. It's a meme you hear all the time, especially in a POTUS election year.

One problem is that regulations tend to accumulate. I like what Canada does with its "one for one rule" which removes one piece of old outdated regulation for every new regulation made. In fact, at first when British Columbia implemented this law, they did 2 for 1 to clean out old laws, until later switching to 1 for 1.


I'd never heard of one-for-one, but it's such a brilliant idea! Thanks for sharing

Economists, based on empirical research, by and large agree that "Regulatory Capture" will normally make regulations work in the interest of the major companies in an industry, rather than the public interest.

This is a social science result, not a meme.

Belief in the pervasiveness of regulatory capture is really less the product of empirical research and more a restatement of fundamental principles of liberal economic theory as old as Smith, buttressed by some good anecdotes from certain markets. When it comes to actual empirical research, disentangling the interests of the public and established corporations is pretty difficult: often they are shared, particularly when it comes to safety regulations.

It'd be easy enough to show that a future testing regime increases the market share of domestic self-driving car manufacturers and pushes the market price up; less easy to show that it wasn't also in the public interest to have that testing regime in place.

Regulatory capture will, by definition, always result in regulations working in the interest of the major companies (or special interests) in an industry, rather than the public interest.

Definition: Regulatory capture is a form of government failure that occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating.

The question is does regulatory capture always happen?

It tends to happen when the following are true:

a) Previous government experience is highly valuable to private sector employers

b) Government pay is less than this value

b) affects regulatory capture in two ways: it allows civil servants the opportunity to get massive raises by going private (and incentivises them to be nice to future employers) and cripples the recruitment of highly talented individuals who are less dependent on industry advice. I don't think attacking a) is feasible in a modern regulatory state, but b) is readily doable if a government is willing to significantly deviate from standard salary scales for high-value industries. For example, SEC salaries would have to be much, much higher than Department of the Interior salaries. AFAIK, Singapore already does this and has very high talent retention rates. Even within the US government, it isn't entirely unprecedented, since an E-3 Navy special forces operator probably makes 8x the salary of an E-3 Army public relations specialist.

That covers materialist capture, but there is also non-materialist capture:

>Materialist capture, also called financial capture, in which the captured regulator's motive is based on its material self-interest. This can result from bribery, revolving doors, political donations, or the regulator's desire to maintain its government funding. These forms of capture often amount to political corruption. Non-materialist capture, also called cognitive capture or cultural capture, in which the regulator begins to think like the regulated industry. This can result from interest-group lobbying by the industry. Another distinction can be made between capture retained by big firms and by small firms.[11] While Stigler mainly referred, in his work,[12] to large firms capturing regulators by bartering their vast resources (materialist capture) - small firms are more prone to retain non-materialist capture via a special underdog rhetoric.[11]


Or a better question: does a free-market alternative exist?

I think it might be deeper than that. I don't feel that the US government, on its own, is incapable of drafting reasonable legislation. The problem is that the US government is 100% for sale to the highest bidder, and corruption runs deep (we just call it "campaign contributions" as if that makes it better). If sensible regulation is proposed, it'll last 30 seconds before the good senator from [some self-driving car company's home state] has turned it into a document crafted to drive business to his "contributor".

This isn't a political statement as it cuts across both parties, which renders it all the more insidious.

Surely this is based on 0 personal direct experience with the people that write these kinds of regulations.

I have worked with engineers that write technical regulations. They are generally focused on doing a good job at the task at hand. To think some mid level person that is hired into a normal job and never meets a politician in their career cares about campaign contributions is asinine.

What do you think the people at NASA and NAVSEA and NIST do all day?

This is not an informed opinion. This is an opinion carefully shaped by the same influences from different industries over the last 30 years who generally benefit from the removal of their regulatory environment (miners, oil industry).

The real tell is in your proposed commitment to improving government process: you don't have one. You think it's hopeless. You're apathetic. Which is what everyone, pushing any agenda, wants from you.

And on top of that you have obstructionism (from both sides) and deliberate efforts to sabotage the other side.

Or even if the politician isn't influenced by the campaign contributions, they'll just run one-sided ads against the other side that usually have a marginal factual basis at best.

My biggest gripe is overreach. You start with sensible building codes, and eventually the city council is telling you what color bricks you have to use before they'll approve your plan. Yes this happens.

In, say a relatively historic area where all the buildings are the same, what is wrong with mandating brick colour?

If it is on my property, then I think brick color is free speech.

Except that you signed a contract when you bought the property.

Search for the "unconstitutional conditions doctrine". Government can't get around constitutional limits via contract requirements.

Since when was choice of brick colour a constitutionally protected right?

Above, someone suggested they consider the color of the bricks on their house free speech, which is a constitutionally protected right. I don't necessarily agree (although, I don't think some shitty HOA should be able to dictate the color of my damn house), just clearing up your confusion.

I'd ask, in any area, why does your form trump my function?


I lived near a small historic town with many buildings standing since the 1850s. Tourism is a HUGE industry that brings in dollars to local businesses. It's in the town's interest to preserve that income, so they mandate color and style codes for new buildings as well as restoring older buildings. This is complemented with many folksy festivals and re-enactments as part of drawing in tourist dollars.

When people balk at these codes, and they do all the time, there are several other larger and modern cities nearby where they aren't restricted in any design sense.

As long as the codes are written by people who know their architecture and architectural history 100%, I am completely comfortable with that. It's just that I've had far too many experiences with historical preservation codes written by amateurs with limited knowledge of architectural history who effectively ban any attempts at making a building more airtight and efficient while letting aesthetic travesties like asphalt shingle roofing and poorly proportioned window trim stand.

This I understand. I can tell you that this little town is maintained by both historical and design professionals, and new buildings (and renovations to a practical extent) have all the modern conveniences and safety while preserving the historic aesthetic.

There is no clear sharp line between the two; form has always been part of the function of architecture. What your house looks like affects the rest of your community; therefore there is a democratic process that gives them input.

As someone living in a country where there are very lax building codes, I would welcome regulation that mandates what colors you're allowed to use!

Why is that? I can see why there would be regulations on brick quality, but why color?

I used to live near an old historic town with buildings standing since the 1850s. New people and businesses are moving in all the time and stuff needs to be built. The town uses color codes and other design elements to preserve both the older buildings as well as making newer buildings match the historic tone. This is explicitly done to promote and keep tourism flowing into the town, which is a HUGE part of their income.

If an industry or contractor balks at these ideas (and they do every now and again) there are several other larger modern cities a few miles away with access to the Interstate and train yards. These don't share the "historic preservation" codes of this little town.

If the town allowed a free-for-all on design it would wreck its main source of income and likely cause decay over the years as tourism dropped off.

Why does the government need to prioritize the interest of people who profit from tourism [hereafter I will call them "tourism people"]?

Why don't "tourism people" just pay people constructing buildings to use the colour "tourism people" want? That should be fair to both parties.

The town itself has fewer than 3,000 people. Tourism is its major industry, and without it the town would disappear.

> Why don't "tourism people" just pay people constructing buildings to use the colour "tourism people" want?

That implies they could ignore the rule at any time. Reimbursement programs increase paperwork, which many would simply ignore for convenience. This would give the town a tragedy-of-the-commons problem, erasing its historic character (and primary revenue source), and it would become another run-down town like many others in the region.

Is that fair to those who invested heavily in keeping their businesses and homes in that area? Their answer is a resounding "No"

If somebody balks, just like this, there are other, more modern and relaxed cities within a few miles that can accommodate their building ideas. These cities even have better access to freeways and trains, so economically it makes sense to put their businesses there.

Instead, the primary draw of this town IS its historical authenticity, and thus the people living there keep it maintained through its building codes. There is no other reason a business or homeowner would locate in that area, so it makes sense to keep with its character. If that's too onerous, then perhaps your motivations for building should be reexamined.

Some people love teal and pink. Having to look at or try to sell a house next to a monstrosity can be pretty horrible (or the neighbors with junker cars all over the lawn). Having been in a couple of regulated areas, I'm not much of a fan of the micromanagement that happens. However, having had my value/quality of living majorly degraded by an industrial operation moving in next door in unregulated BFE, the risk of living somewhere without rules is higher than the cost of compliance, at least for me.

The regulation on brick color mentioned above is at the city level, not federal or state. You have districts that enforce aesthetic rules, typically to preserve the look of a neighborhood.

And that is the fundamental belief of libertarians: that governments can do nothing right, either morally or pragmatically.

That's not accurate. They seem more concerned that every bit of power given to government to do something right will eventually be used to do something wrong.

To be fair, there's a long and repeated history in the US of


GOV: We know this law is overreaching, but we promise we'll only use it the "right" way.

... 2 years goes by ...

GOV: If you don't <plead guilty | accept this plea bargain>, we'll tack on a charge of breaking <this law that is overreaching>, even though you didn't violate what it's supposed to be about, and add 20 years to your sentence.


It's seen over and over. The US citizen's distrust of government getting more power than it absolutely needs isn't paranoid, it's based on the actions of the government.

Conversely, and more relevantly to regulations on self-regulating cars, there's a long and repeated history of


GROUP: The regulation is anti-business, anti-freedom and massively outdated. We should sweep it all away and deregulate this sector as much as possible. The market will take care of the bad companies.

...2 years goes by...

GROUP: Do you know how important it is that this industry survives? Please give us some money to fix it. And some of the behaviour of some companies in our industry is unethical and dangerous and really should be stopped. Why didn't you step in earlier?

That's exactly what the libertarians on HN post. I seriously hope it's a very vocal minority though.

I'm (mostly) a libertarian, and I disagree. I think the government is fundamental at keeping the society sane, secure and organized.

I agree. I find myself leaning libertarian as of the last few years. To me it isn't about no regulation, just no "nanny" regulation.

No, that is simply not true. Just because someone doesn't think the war on drugs is a good idea, or thinks that prostitution should not be criminalized, etc., doesn't mean they think "governments can do nothing right".

libertarians != anarchists

You're right, we probably should be celebrating our lead tainted water, fracking, telecommunications monopolies, $300 Epipens, unaccountable financial institutions, pipelines across native land, corrupt campaign contributions, infrastructure decay, lack of decent health care...

I just assume that the government won't do anything without a benefit for itself. In this case they are probably getting some nice benefits from the automakers.

I say this as someone who has failed to even get a non-form letter answer from any of my elected officials state level or higher. I'm convinced that money is the only way to affect policy.

A regulation that seems sensible to one cultural group might seem like ridiculous overreach to another cultural group. The US is one of the most polycultural countries in the world, so any regulation is bound to piss a lot of people off. It's better to be judicious with regulation in such situations.

Gun control is a great example that seems to confuse a lot of non-Americans. To your average San Franciscan, who has never used a gun and has no particular reason to use one, restrictions on e.g. magazine size probably seem quite reasonable. But go to an agrarian Texan rancher, and the situation is entirely different. Good luck thinning out a stampeding herd of wild hogs with a ten round fixed magazine. Similar situation with pot; the average SF resident is probably fairly familiar with it, whereas the rancher probably isn't. In either case, ignorance breeds irrational fear, which is a bad (but unfortunately likely) foundation for laws.

So yes, many regulations are not sensible, and it's harder to get away with in the US because the US isn't a monoculture. Even those regulations that are sensible (by whatever metric you like) are likely to anger some non-negligible group.

“Democracy must be something more than two wolves and a sheep voting on what to have for dinner.” ― James Bovard, Lost Rights: The Destruction of American Liberty

I think lately far too many people already have the answer before there is any discussion.

I think democracy only functions when people are open minded and willing to put themselves in others shoes.

I'm ignorant on ranching. Are wild hog stampedes a non-trivial threat to ranchers?

The stampedes aren't, but hogs are a horrible invasive pest that causes many billions of dollars of destruction per year. They've tried trapping, poisoning, everything. The only thing that has any effect on the population is some serious firepower. The problem is so bad that you can hire people to fly around at night in helicopters with sniper rifles and thermal vision to shoot hogs. I've helped a few friends with hog control, and you'll often startle 10-20 at a time and then they'll go into hiding and you're done for the day. You need to put a lot of bullets downrange fast to have any effect.

I would say that anyone who thinks they have a handle on any significant percentage of the regulations in even one country is fooling themselves or uniquely talented. The sheer bulk neatly precludes it.

I'm not sure that anybody after the Code of Hammurabi, or the Twelve Tables, or the ten commandments could even pretend to understand all the laws and regulations that their brethren have enforced on each other. Law is almost like a gas, expanding in every direction to fill whatever space and attention it is allowed.

How can a website where the majority of people likely live in Silicon Valley actually believe that the majority of government regulation is good? Any regulation needs to be intensely scrutinized because the implications are not even completely understandable when they are created.

While I agree that regulation should be intensely scrutinized, and that the implications of a piece of regulation are usually not fully understood until after it has been implemented, the idea that the majority of regulation is "bad" is myopic.

Well-written regulation (and I would argue that the majority of regulations in the US are well-written) serves the public interest. Two immediate examples that come to mind are the Glass-Steagall Act, which separated commercial banking from speculative trading until it was repealed by GLB in 1999 (opening the door for the financial crisis), and the FDA. I would prefer to live in a country where Glass-Steagall was still in place and the FDA was even stronger than it is today.

I get where you're coming from, but I live in a state where, until very recently, brewers were not allowed to sell beer directly to customers. There are still large swaths of my state that do not allow the sale of alcohol on Sunday, period. And when brewers were allowed to sell beer on tours of their brewery, suddenly the government started messing with the regulations again to basically undo everything.

The majority of regulations in the US are sensible.

But regulations that are computer-focused? Less so.

I think the biggest benefit for self-driving cars is that there is really no big corporation or lobbying group that is against this technology. Car manufacturers probably would have put up the biggest battle, but they are all pushing for the adoption of the technology. Had car manufacturers felt this technology was a threat to their business model, you'd definitely see a lot of pushback and crazy stipulations.

We will see what happens when auto sales decline due to really cheap auto-taxis.

Hopefully, car companies will deal with reduced demand by going upscale with more fancy cars for a smaller market.

Of course, someone needs to build all of those auto-taxis. They are going to do very well for themselves.

Interesting point -- presumably this means that the existing manufacturers think they can compete in this market, or steer it someplace where they can compete. I wonder if they're right.

There's no "Car neutrality" law so far.

> autonomous vehicles should definitely not be legislated on a state-by-state basis.

Why do you say that? I have no opinion either way, just curious

Cars move. Splitting the laws by state would mean also splitting the market, and allowing cars on the road which would be illegal to e.g. drive into Texas.

Really, it should be international.

But this kind of fragmentation already exists today with different driving laws between states. A legal student driver in one state might not be allowed to drive in another.

Now that I'm thinking about it, it's strange that vehicles are regulated at the state not federal level. They're a big component in interstate commerce, and therefore ought to be within the jurisdiction of Congress to regulate, even under a relatively strict reading of the Constitution.

And it is a massive problem.

For example vehicle window tinting laws vary wildly from state to state (and arguably they're more liberal in states that get hotter, and more restrictive in states with gang issues) so you can own a vehicle that is legally tinted in your home state, but gets ticketed when it crosses a state border.

Daytime running lights are another example: some states require them, while others do not. So you can buy a brand-new vehicle that could get ticketed because it lacks DRLs.

Are there any states where turning on the headlights wouldn't satisfy the law? Using headlights all the time probably won't kill the bulbs in a year, but none the less, is $40 a year really a massive problem?

Similarly, most people don't care about tint. Those that do but are agonized about being able to travel to other states can simply figure out the maximum allowed in the region they plan on traveling in. I guess that reaches the level of irritating, but what are the massive consequences for Joe Driver if he can't darken his windows?

> window tinting laws vary wildly from state to state

Looks like they're strictest in Alaska, California, D.C., Delaware, Iowa, New York, Pennsylvania, and Rhode Island.



Which US states require DRL now?

To answer my own question, according to Wikipedia:

"Several states on the Eastern seaboard, the Southeast, and Gulf Coast (except Texas) have enforced vehicular laws since the early 1990s that require headlights to be switched on when windshield wipers are in use. This prompted the phasing in of DRLs in the affected states (from Maine to Florida including Louisiana, Mississippi, and Alabama)."

So it appears that DRL aren't required, but frequently standard equipment in states that require headlamps on if windshield wipers are on... Wikipedia does not list any states requiring use of headlamps all the time, though.

Vehicles are regulated at the federal level. That doesn't preclude states from applying additional regulation, such as the California emission standards.

A lot of safety features and emissions requirements are regulated at the federal level.

I think it actually has the effect that a state like CA or TX becomes the de facto regulatory-maker just based on size of the car market.

Yup, California ruins everything for the rest of us. They are the reason we can't have nice things. One notable example is the CA regulations on gasoline cans, which have driven out the design that worked well and replaced it with "spill-proof" designs that are terrible and, ironically, much easier to spill with.

Well, worse (and more troubling) is the effect on textbooks used in schools.

yes. it reminds me of California car exhaust/emissions standards which, AFAIK, became the national standard eventually.

States rights is more important for progress than you might think. Each state is an incubator for ideas.

Single-choice monopolies impede progress, whether governmental or corporate. It's better to have states naturally group together than to force it with some top-down measure.

I was under the impression that most of the requirements only applied to registering a car in that state, not to operating the vehicle?

So, for example, NY requires yearly safety inspections and you'll get a ticket if your inspection lapsed. But you don't have to get a safety inspection to drive in NY if your car is registered in a state that doesn't require safety inspections.

I could be mostly off base on this one.

Though some laws are so local that it's sometimes impossible for an out-of-towner to know them. Going right on red, as far as I know, is illegal in NYC but legal... everywhere else? How is someone from Texas supposed to know that?

> Though some laws are so local that it's sometimes impossible for an out-of-towner to know them. Going right on red, as far as I know, is illegal in NYC but legal... everywhere else? How is someone from Texas supposed to know that?

You've pretty much picked an outlier. And I might be inclined to argue that someone from Texas trying to drive in Manhattan for the first time has other problems :-)

There are a few other things like whether you can pass on the right on an interstate and the aforementioned when headlights need to be on (though I often see this last point signed). But these are usually getting into corner cases and don't really affect how the average person has to approach driving.

It should also be mentioned that many of the divergent cases tend to be bad ideas for safety reasons anyways (passing on the right, using your cell phone when driving, even right-turn-on-red--don't do that in pedestrian-heavy areas).

Places with divergent laws make some effort to inform visitors of the divergence--you'll sometimes see electronic noticeboards saying that using your cell phone is illegal here, and sometimes permanent ones too (e.g, on entry into Virginia on interstates, you are immediately informed that radar detectors are illegal).

Right. And that fragmentation is probably a bad thing.

I'm now wondering about a world without the left-hand/right-hand drive split. Maybe even a UK that drives on the right?

I'm not. Autonomous cars will face strong competition from regular cars, and the manufacturers have got to be worried about liability. Hence, the private sector in this case has its interests more or less aligned with the regulators, so you should expect fairly effective regulation, like with the FAA and commercial airlines.

Unlike healthcare exchanges?

Pages 14 - 30+ of the embedded report (pages 16 - of the PDF) are particularly interesting and promising, especially the portions about transparency around privacy and ethics issues.

The report recommends that "Manufacturers and other entities should develop tests and verification methods...". Does anyone know whether verification here means software verification, or does it mean something else in this context?

Edit: Just noticed that I got to the PDF via elicash's comment and not via the linked article. Here's a link to the PDF: https://www.transportation.gov/sites/dot.gov/files/docs/AV%2...

The report makes reference to "Assessment of Safety Standards for Automotive Electronic Control Systems" by NHTSA, which itself reviews ISO 26262, MIL-STD-882E, DO-178C, the FMVSS, AUTOSAR, and MISRA C.

In this context, they mean verification and validation in the systems engineering sense. Software would be included in that it is a part of the whole system.

I have a hard time understanding the current AV SW stack.

On one hand, at the low level (sensors, motor control, etc.) you likely have traditional hard real-time/MISRA C code, but at the higher layers you probably have things like DNNs and image recognition, which are much less deterministic.

So I am not sure how you reconcile these two worlds and prove the system is safe and always works in a timely manner.

It seems the only sound approach would be to validate the whole system on a real road.

A few comments on this:

First, as etendue says, it is not easy. The problem of mixing “Boolean” verification with probabilistic, less-deterministic verification is especially hard. I discussed this a bit in [1], if you care to take a look.

Also, I think most current AVs are not driven by DNNs at the top level (comma.ai [2] is one exception). See [3] for some discussion of that, and of verifying machine-learning-based systems.

Finally, one possible way to check that AV manufacturers “do the right thing” in correctly verifying the combination of DNNs, Misra C, digital HW, sensors and so on is perhaps to create a big, extensible catalog of AV-related scenarios, which ideally should be shared between the manufacturers and the certifying bodies – see [4]. I think there is some hint of that in the DOT pdf – still working my way through it.

[1] https://blog.foretellix.com/2016/07/22/checking-probabilisti...

[2] http://www.bloomberg.com/features/2015-george-hotz-self-driv...

[3] https://blog.foretellix.com/2016/09/14/using-machine-learnin...

[4] https://blog.foretellix.com/2016/07/05/the-tesla-crash-tsuna...

Thanks for your input, really interesting topics on your blog as well.

Thanks. I did a second pass through the policy paper, and put a summary of the verification implications here: https://blog.foretellix.com/2016/09/21/verification-implicat...

I think the simple answer is that it is not easy. To start, rigorous design processes with risk analysis upfront are certainly necessary, as are well-defined operational contexts for the autonomous functionality, and a very disciplined approach to clearly defining safety-critical subsystems and minimizing their surface area.

There's a surprising amount of work in the literature that serves as a guide for using neural networks in safety-critical contexts, e.g., http://dl.acm.org/citation.cfm?id=2156661 and http://dl.acm.org/citation.cfm?id=582141.

Now you understand the job of systems engineering :)

Verify components, validate the entire system is the typical approach.

That sounds pretty much like web application development, or any other front-end, user-facing development. You can verify internal components through testing, but once you introduce non-deterministic variables like the browser software your user is running, and your users themselves, all you can do is validate the entire system through real-world testing and hope you've covered the edge cases you need to handle and will fail gracefully for the ones you missed.

The point I was trying to make is that if you have actuators running MISRA C that are going to be driven by something written in TensorFlow, does it still make sense to require MISRA C for the low-level part in the first place?

I'd be very wary of using complex SOUP like TensorFlow, even if brought under my quality system. I think a good answer here is that once one goes under design control the subset of functionality needed should be implemented in-house under the organization's SDLC.

Of course these things are meant to be used (1) to train the system, (2) as a player in the prototype. Exactly like in the old school ML-based systems: you train in Matlab or CudaConvNet, and then you load the trained classifier into the custom-made player highly tuned to your hardware and problem domain.

Most certainly - safety should be guaranteed at the lowest level, even if Tensorflow gets borked.

Think of it as a failure cascade - if Tensorflow breaks, the car can safely stop. If the low level stuff breaks, the car may not be able to stop (or go).
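That failure cascade can be sketched roughly as follows. This is a hypothetical illustration, not any vendor's actual architecture: a low-level controller that trusts the high-level planner only while its commands are fresh and sane, and otherwise degrades to a controlled stop.

```python
import time

class LowLevelController:
    """Hypothetical safety layer: trusts the planner only while its
    commands are fresh and valid; otherwise falls back to a safe stop."""

    TIMEOUT_S = 0.2  # planner must refresh commands at >= 5 Hz (assumed)

    def __init__(self):
        self.last_cmd = None
        self.last_cmd_time = 0.0

    def on_planner_command(self, cmd, now=None):
        # Reject obviously invalid output from the (less deterministic)
        # planning stack before it ever reaches the actuators.
        if not (-1.0 <= cmd.get("steer", 99.0) <= 1.0):
            return
        self.last_cmd = cmd
        self.last_cmd_time = now if now is not None else time.monotonic()

    def actuate(self, now=None):
        now = now if now is not None else time.monotonic()
        if self.last_cmd is None or now - self.last_cmd_time > self.TIMEOUT_S:
            # Planner stale or broken: degrade to the safe state.
            return {"steer": 0.0, "brake": 1.0}
        return self.last_cmd
```

The key design point is that the safe-stop path depends only on the simple, verifiable low-level code, so a borked TensorFlow layer can never do worse than trigger it.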

Info here: https://www.transportation.gov/AV (including noon Eastern livestream)

This is excellent news! Guidelines to follow implies that if the manufacturers can meet these guidelines then they could plausibly have a somewhat legal basis for putting self driving cars on the roads.

N.B., this policy is mainly concerned with Highly Automated Vehicles (HAVs), which are defined as SAE Level 3 ("capable of monitoring the driving environment") and above.

edit: as to SAE Level 2, it has this (and more) to say:

> Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997).


> Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.

I'm surprised that self-driving technology is focusing on replacing the driver as an autonomous actor, processing visual and radar/lidar signals in order to know about its surroundings. I've always thought we'd get further faster by having automobiles also talk to other vehicles nearby, and design roads to support the computer driven vehicles.

Two examples are:

1) If the vehicle is talking to the cars in front of it, it can know they are braking before it senses that visually. Also, the vehicles can speed up in a gridlock scenario more in unison, like a train.

2) On the interstate, markers in the pavement can be specifically designed for computer sensors rather than human eyeballs. Also, cars can draft together to save fuel.
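To put rough numbers on example 1 (all figures below are illustrative assumptions, not measured values): even a small head start from a radio message translates into meaningful braking margin at highway speed.

```python
def braking_head_start_m(speed_mps, perception_delay_s, radio_delay_s):
    """Extra distance available for braking if a lead car's brake event
    arrives by radio instead of being inferred from onboard sensors."""
    return speed_mps * (perception_delay_s - radio_delay_s)

# Assumed numbers: 30 m/s (~67 mph), 0.5 s to confirm braking visually,
# 0.05 s over a radio link -> about 13.5 m of extra margin.
margin = braking_head_start_m(30.0, 0.5, 0.05)
```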

While networked cars are interesting, there is also a massive security issue here.

Hackers will easily figure out a way to spoof the communication, and could play with traffic.

There are mitigations for most issues, but it's a complex topic.

Just imagine some scenarios:

-) Spoof an emergency brake advisory that causes tailing cars to also do an emergency brake. (could be mitigated by first observing that cars in front are actually slowing down before braking)

-) Spoof a command from a smart traffic light at an intersection to stop immediately for police / other emergency traffic. (need to check if traffic light is actually red)

-) Spoof speed restrictions issued by a smart highway traffic jam prevention system.

-) A system for police to force a car to stop immediately and pull over, eliminating car chases. Just spoof this signal and stop anyone you want. (mitigate by checking if there is a police car trailing you, and ignore otherwise).

And so on...

A way around this would be to maintain a national database with public keys for each registered vehicle, and make cars accept messages signed only by those keys. But that would be hard to maintain, and hackers could still get hold of some key.
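A minimal sketch of that registry lookup, with hypothetical names. For brevity the "signature" here is an HMAC stand-in; a real deployment would use an asymmetric scheme (e.g. Ed25519) so the registry holds only public keys, and would also need replay protection, which is omitted here.

```python
import hmac
import hashlib

# Hypothetical national registry: vehicle ID -> key material.
REGISTRY = {"VIN123": b"vehicle-key"}

def sign(vehicle_id, payload, key):
    # Stand-in for a real asymmetric signature over (sender id, payload).
    return hmac.new(key, vehicle_id.encode() + payload, hashlib.sha256).digest()

def accept_message(vehicle_id, payload, signature):
    key = REGISTRY.get(vehicle_id)
    if key is None:
        return False  # sender not in the national registry
    expected = hmac.new(key, vehicle_id.encode() + payload,
                        hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

Even with this in place, as noted above, a stolen key defeats the scheme, which is why signed messages can only ever be one input among many.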

In the end, the driving system will always have to correlate such car-to-car communication with observations it makes itself.

And an autonomous system can react almost immediately anyway, so coordination doesn't give you all that much.

-- There are some useful ideas though, like:

-) Traffic lights can announce an ideal speed for a route, taking into account traffic and traffic light timings, so you can optimize throughput and minimize fuel consumption
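That last idea is simple enough to sketch. The numbers and function name here are illustrative assumptions: given the distance to the light and the time until it turns green, the announced speed is just distance over time, clamped to a sane urban range.

```python
def advisory_speed_mps(distance_m, seconds_until_green,
                       v_min=5.0, v_max=13.9):
    """Speed that would arrive at the light just as it turns green,
    clamped to an assumed urban range (v_max ~ 50 km/h).

    If the required speed falls outside the range, the clamped value
    means 'you will catch a red; slow down / don't bother speeding'.
    """
    v = distance_m / max(seconds_until_green, 0.1)  # avoid divide-by-zero
    return min(max(v, v_min), v_max)
```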

All good points. Seems like you could get around it by using these other-car communications as noisy signals and weighting evidence against the world as the car sees it, e.g., if you get a spoofed emergency brake advisory, and the car's own percepts suggest there's no reason to brake, the resultant action may be to not brake. The signal from the other car[s] becomes just another feature.
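A toy version of that weighting, with made-up weights and threshold: the V2V message is just another noisy feature, so a spoofed alert alone cannot cross the brake threshold when the car's own sensors see nothing.

```python
def should_brake(sensor_evidence, network_evidence,
                 sensor_weight=0.8, network_weight=0.2, threshold=0.5):
    """Fuse own-sensor evidence (0..1) with V2V evidence (0..1).
    Weights are illustrative; a real system would learn them and
    use a proper probabilistic model rather than a linear blend."""
    belief = sensor_weight * sensor_evidence + network_weight * network_evidence
    return belief >= threshold

print(should_brake(0.0, 1.0))  # False: spoofed alert, sensors disagree
print(should_brake(0.9, 1.0))  # True: sensors confirm the hazard
```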

Considering the millions of miles driven each day, even if the networked signal wasn't heavily weighted, a spoofed emergency brake advisory signal could still do significant damage.

>Hackers will easily figure out a way to spoof the communication, and could play with traffic.

It's far far easier and quicker to throw a brick off a highway bridge but that surprisingly happens very infrequently.

I don't think that's really all that relevant. The two activities at hand (hacking networked cars vs. throwing a brick off a bridge) appeal to two different types of people. Also one can (theoretically) be done from the comfort of one's own house/office/bedroom which can be anywhere in the world, while the other requires going to the specific location of the traffic.

The reason is that there are and for the foreseeable future will be things that are not networked. Lots of legacy cars, bicycles, pedestrians, trains. In some areas probably even animals on the road. If cars don't work in these environments they are pretty much useless.

It's interesting that this NHTSA statement doesn't mention car-to-car communications at all. There are parties pushing for that, but Google's Urmson was against it. His point is that the troublesome roadway obstacles, from kids to road debris, won't be on the net. So you have to have good sensors, and they can see other cars just fine.

The first self-driving cars will have to coexist with humans driving old cars without such communications.

That is happening too, but it's an independent technology that can also be very useful in human driven vehicles.


As others have mentioned, vehicle-vehicle communication only works if you trust the other vehicles. This sort of thing is almost certainly coming for fleets of trucks though (Google "truck platooning"), where a known set of vehicles can communicate with each other.

Car manufacturers were discussing this kind of thing about 15 years ago.

We were working on diagnostic and emissions checking standards but there was the expectation that we would be able to make use of secure network links to cars at some point in the future.

The question at the time was which would come first. Would a requirement to do emissions testing under real-world conditions push the introduction of radio networks that could also be used for cars to talk to each other or would the road-train type applications be the initial use case.

I think that concern over malicious communicators will at least slow this down. If not implemented correctly, it could give hackers a dangerous amount of control over traffic.

And when will an old vehicle no longer be allowed on the road? I can't see laws banning old vehicles for at least 40 years... You can still go without a seatbelt if your car was manufactured without one, so I would think anything permitting only autonomous vehicles would be met with public outcry.

The effort is focused where the money is. Car companies will sell these cars and make a profit. States have a hard time keeping dumb roads in good condition, they have no money to make roads smart or keep them repaired.
