NHTSA, which, after all, studies crashes, is being very realistic.
Here's the "we're looking at you, Tesla" moment:
"Guidance for Lower Levels of Automated Vehicle Systems"
"Furthermore, manufacturers and other entities should place significant emphasis on
assessing the risk of driver complacency and misuse of Level 2 systems, and develop
effective countermeasures to assist drivers in properly using the system as the
manufacturer expects. Complacency has been defined as, “... [when an operator] over-
relies on and excessively trusts the automation, and subsequently fails to exercise his or
her vigilance and/or supervisory duties” (Parasuraman, 1997). SAE Level 2 systems differ
from HAV systems in that the driver is expected to remain continuously involved in the
driving task, primarily to monitor appropriate operation of the system and to take over
immediate control when necessary, with or without warning from the system. However,
like HAV systems, SAE Level 2 systems perform sustained longitudinal and lateral
control simultaneously within their intended design domain. Manufacturers and other
entities should assume that the technical distinction between the levels of automation
(e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
And, systems’ expectations of drivers and those drivers’ actual understanding of the
critical importance of their “supervisory” role may be materially different."
There's more clarity here on levels of automation. For NHTSA Level 1 (typically auto-brake only) and 2 (auto-brake and lane keeping) vehicles, the driver is responsible, and the vehicle manufacturer is responsible for keeping the driver actively involved. For NHTSA Level 3 (Google's current state), 4 (auto driving under almost all conditions) and 5 (no manual controls at all), the vehicle manufacturer is responsible and the driver is not required to pay constant attention. NHTSA is making a big distinction between 1-2 and 3-5.
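The responsibility split described above can be sketched as a simple lookup. This is a paraphrase of the NHTSA/SAE definitions for illustration, not official wording:

```python
# Illustrative sketch: who must monitor the road at each automation level,
# per the NHTSA split described above (levels 1-2 vs. 3-5).
RESPONSIBILITY = {
    1: "driver",   # driver assistance (typically auto-brake only)
    2: "driver",   # combined lateral + longitudinal control, driver supervises
    3: "vehicle",  # vehicle monitors environment; driver takes over on request
    4: "vehicle",  # automated driving under almost all conditions
    5: "vehicle",  # no manual controls at all
}

def driver_must_pay_constant_attention(level: int) -> bool:
    """The bright line NHTSA draws: constant vigilance only at levels 1-2."""
    return RESPONSIBILITY[level] == "driver"

print(driver_must_pay_constant_attention(2))  # True
print(driver_must_pay_constant_attention(3))  # False
```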
This is a major policy decision. Automatic driving will not be reached incrementally. Either the vehicle enforces hands-on-wheel and paying attention, or the automation has to be good enough that the driver doesn't have to pay attention at all. There's a bright line now between manual and automatic. NHTSA gets it.
So the reason it was a big deal is that it was a high-profile fatality. Tesla drivers are generally a pretty safe bunch. Statistically, if Autopilot hadn't been engaged, that death would not have occurred. Autopilot makes Tesla drivers less safe, not more safe.
Also, the government is doing self driving industry a huge favor. These fatalities could screw over the whole industry if they get out of hand. Musk is giving self driving a bad name.
This is yet another "Tesla hit slow/stopped vehicle on left of expressway" accident. There are now three of those known, two with video, one fatal. Watch the video. The vehicle is tracking the lane very accurately. Either the driver is very attentive or the lane following system has control. Then, with no slowdown whatsoever, the vehicle plows into a stopped or slow-moving street sweeper.
Here's one of the other crashes in that situation. This was slower, so it wasn't lethal. There's another one where a Tesla on autopilot sideswiped a vehicle stopped at the left side of an expressway.
"A man with a watch knows what time it is. A man with two watches is never sure."
IMHO, emergency braking must be mandatory for every new car with top speed greater than 60 km/h.
It's the offence an engineer feels when something is marketed as something it's not.
Tesla is fooling the public. The opinion of the general public who don't drive Tesla's cars is that automated driving is already here and Tesla is leading the way.
 You can say they tell you to keep your hands on the wheel and all that, but they themselves manufactured/fanned a ton of hype to the contrary. It's like arguing that you should have paid more attention to the EULA.
he's definitely anti "disguising level 2 as autonomy" though.
A likely future is one where automation is only enabled for consumers as an option on a minority of roads (starting with the Interstate Highway System) that have been heavily mapped and managed. We work from there, developing the algorithms at high sample size, then slowly extending out into the state highways and arterials. As the tech progresses, roads and maintenance practices will likely also be modified to increase reliability.
These cars are going to need a large quantity of sensors; the Uber self-driving car has "something like 20 cameras, a 360 degree radar, and a bunch of laser [rangefinders]", and this is a decent start. A Tesla and even a Google car is simply not equipped for enough edge cases to let a consumer near without making them hands-on-wheel liable to take over.
"As far as you're concerned" doesn't mean a whole lot.
People can work around any system for this, but stuff like this makes them have to do it consciously. Seems pretty reasonable on Tesla's part.
There was an accident with a Volvo where someone was showing off the pedestrian safety system, and hit a pedestrian. It turned out they hadn't purchased the pedestrian safety system. Something is needed to prevent problems like that.
Then there's mode switching trouble. Classic problem with aircraft control systems. Tesla disengages the "autopilot" if the driver touches the brake. The trouble is that this also disables automatic braking, as the driver is assumed to now be in control. So tapping the brake without applying it fully in a hazardous situation causes a crash.
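The mode-switching hazard described above can be sketched as a tiny state machine. This is an illustration of the coupling problem only, not Tesla's actual control logic:

```python
# Illustration: coupling automatic emergency braking to the "autopilot engaged"
# state creates a trap when the driver taps the brake without fully applying it.
class DriverAssist:
    def __init__(self):
        self.autopilot_engaged = True
        self.auto_braking_enabled = True  # coupled to autopilot in this sketch

    def driver_touches_brake(self):
        # A brake touch is interpreted as "driver has taken over": both lane
        # keeping AND automatic braking disengage at the same moment.
        self.autopilot_engaged = False
        self.auto_braking_enabled = False

car = DriverAssist()
car.driver_touches_brake()
# The driver tapped the brake but did not brake hard; nothing is braking now.
assert not car.autopilot_engaged and not car.auto_braking_enabled
```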
All these driver behavior problems with shared control authority are hard. Maybe harder than going for level 3 and letting the automation do it.
"Autopilots do not replace a human operator, but assist them in controlling the vehicle, allowing them to focus on broader aspects of operation, such as monitoring the trajectory, weather and systems."
I've used an autopilot on a yacht and didn't expect it to dock or avoid ships. A plane autopilot doesn't freak out when the pilot takes their hands off the controls. So there seems to be room to allow the name but tighten how it's used.
That is all that most autopilots in planes do, however. I don't get where "autopilot" somehow came to mean full autonomy in cars but not in planes (and other vehicles like boats) where the term was used previously.
Autolanding is not autopilot. In fact, autolanding is not even certified if multiple autopilots are not available to provide redundancy.
Yes, they do. Then the problem is when the two autopilots of the two planes take the same evasive manoeuvre...
The aviation world actually solved this problem a long time ago. Everyone turns to their right. FAR 91.113:
> (e) Approaching head-on. When aircraft are approaching each other head-on, or nearly so, each pilot of each aircraft shall alter course to the right
(among other collision-avoidance regs in that section)
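The reason a shared convention fixes the symmetric-evasion problem above can be shown in a few lines (the 30-degree turn amount is arbitrary, chosen just for the sketch):

```python
# Sketch: if both head-on vehicles apply the same rule ("alter course to the
# right"), each turns toward its own right, which are opposite global
# directions, so they offset laterally instead of mirroring into each other.
def evade(heading_deg: float) -> float:
    return (heading_deg + 30) % 360   # turn right by an arbitrary 30 degrees

a, b = 0.0, 180.0                     # reciprocal headings: head-on
a2, b2 = evade(a), evade(b)
# Headings are still reciprocal, but each craft has sidestepped to its right.
assert (b2 - a2) % 360 == 180
```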
I think there may be a fundamental flaw with lane keeping. It removes the driver from doing anything but still requires constant vigilance. That might be asking too much. My ADD is too strong to watch the road without having to do any part of the driving. I suspect a lot of people are the same way.
If most drivers are just keeping their hand on the wheel while daydreaming, Tesla should be forced to just disable the feature until the tech is ready for Level 3. Or use the Level 2 tech as a backup only.
I.e. a loud beeping noise that annoyed pedestrians and other drivers until you took the wheel. Kind of like how accidentally triggering your car alarm in the parking lot will lead to a very hasty correction on your part.
An eye-tracking system might work.
You get positive points for avoiding situations that are noticed (but not within the audible warning threshold) or correctly reacting to input (warned but not yet in automated 'fail safe').
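One way to picture the scheme in the two comments above is a running attention score that decays over time and is topped up by observed driver input. All the thresholds and names here are invented for illustration:

```python
# Hypothetical driver-attention score: decays each tick, replenished when the
# eye tracker or steering input confirms engagement; escalating responses at
# invented thresholds.
def next_score(score: float, driver_engaged: bool) -> float:
    score -= 1.0                         # passive decay each tick
    if driver_engaged:
        score = min(score + 5.0, 100.0)  # positive points for correct reactions
    return max(score, 0.0)

def response(score: float) -> str:
    if score > 60:
        return "normal"
    if score > 30:
        return "audible warning"         # loud beeping until driver takes over
    return "fail safe"                   # slow down and pull over

score = 100.0
for _ in range(80):                      # 80 ticks with no driver engagement
    score = next_score(score, driver_engaged=False)
print(response(score))  # "fail safe"
```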
Now if only they could automatically make cars exiting a rolling slowdown on the freeway actually get back up to the indicated speed of travel in an expeditious manner.
Not true. There are other ways of doing this incrementally: for example, slow speeds, closed roads (no pedestrians or other cars), only in good weather, ideal conditions, etc.
We can't have an autonomous car that expects a driver to take over in a dangerous situation if that driver hasn't had to maintain control the entire time. For instance, there are youtube videos of drivers moving to the passenger seat in a Tesla with autopilot on.
Yes, and they're all undesirable, unworkable, or useless, as your own post points out.
Resorts are irrelevant, smart cities are an oxymoron and would come way past the time when level 5 would be worked out.
Downtown cores are pretty much the worst possible place for level 2 systems at this time.
Nobody is above level 2.
Google's self driving system only basically works with the route preplanned and premapped ahead of time, specifically for that car. Even small changes in the environment are potentially devastating. And even mundane weather changes it isn't prepared to handle.
It should be well understood that if the only people who can safely handle the vehicle are professional test drivers on a preplanned route, the car isn't ready to say it's at the level it claims it is.
"These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway."
"But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps are too inefficient to work in the country as a whole."
"To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey"
"The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again."
"Chris Urmson, director of the Google car team, told me that if the car came across a traffic signal not on its map, it could potentially run a red light, simply because it wouldn't know to look for the signal."
Google's entire business advantage is based on cloud. I welcome anyone to prove that this has changed.
Nobody here is making that claim.
If Google operated a taxi service within Austin, we would say the car operates at level 3. SAE levels say nothing about where the car is operating:
"At SAE Level 3, an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests"
• Data Recording and Sharing
• System Safety
• Vehicle Cybersecurity
• Human Machine Interface
• Consumer Education and Training
• Registration and Certification
• Post-Crash Behavior
• Federal, State and Local Laws
• Ethical Considerations
• Operational Design Domain (operating in rain, etc)
• Object and Event Detection and Response
• Fall Back (Minimal Risk Condition)
• Validation Methods
has this ever happened before?
 And I don't mean wrong as in "NSA spying" because you disagree with the policy. I mean like, "regulations mandated everyone use Beta tapes and laser disk even though they quickly became obsolete."
You have things like companies in Aviation Week (a big aerospace industry mag/site) running full-page ads for sensors and other aerospace items proudly claiming they're ITAR-free (meaning not made/designed in the US). A company I worked for bought a high-power (2.5 kW) laser from Germany. It failed and cannot be sent back to Germany for repair due to ITAR (the tooling needed to fix it cannot be easily moved and would probably fall under ITAR). High-end CNC machine tools will brick themselves if they are moved without the manufacturer specifically blessing the move due to ITAR regulations (earthquakes can trigger the "I've been moved without permission" response).
There is a long list of other harms it has caused that I have no direct experience with. ITAR is fairly easy for the "bad guys" to get around, because they can just not buy US goods.
Clearly more work needs to be done to rigorously investigate and evolve the space, but recent Federal regulations have essentially put small innovators out of business.
In sum: a harm-reducing item is being regulated into the ground.
What HIPAA regulations are you talking about? Other than HITECH guidance (which can sort-of be seen as a "HIPAA regulation"), HIPAA regulations don't generally specify technologies at all, and I can't think of any that I would describe as outdated or troublesome due to the rise of shared virtual servers and "the cloud", whether they predate it or not.
This is a feature, not a bug. It also is neither HITECH nor HIPAA; it is instead AWS's requirement in order to sign your BAA.
> we can't use ELBs in the standard (easy) way
Also neither HITECH nor HIPAA. ELBs are used in a PHI-related scenario identically to any other scenario. Unless you are referring to using it as an SSL terminator, in which case I would say "the standard (easy) way is always wrong".
There is, AFAICT, no regulation under HIPAA or related law that requires this. Certain service providers may have determined that they cannot provide guarantees of privacy/security without this technical restriction.
I don't think this meets OP's definition of "wrong".
1) determine what physical hardware in aws the target is running code on
2) somehow get the aws virtual machine manager to let the attacker run their malicious code on the same hardware
3) somehow pierce the protections of the virtual machine to read memory being used by the target application
4) figure out how the data is stored in memory in order to make sense of anything that was read
In AWS's case, this is an AWS rule about when they will sign a HIPAA BAA, even though there is no HIPAA regulation that specifically prohibits the arrangement at issue. AWS clearly thinks it is worth worrying about.
When you run your own public cloud, you can determine what risks are worth accepting potential liability for.
I'm not commenting on Amazon's rationality (I haven't actually evaluated the security concerns that would determine that.)
> My point is that the legal environment has been designed in an un-optimal way from a technical perspective.
And you haven't pointed to anything in the legal environment that is suboptimal from a technical perspective. You haven't even pointed to anything in the legal environment at all.
Amazon (as a BAA) has certain administrative responsibilities for putting administrative and technical safeguards in place to prevent breaches, and certain obligations and liabilities in the case of breaches. HIPAA and related laws and regulations do not specify the specific administrative or technical safeguards, though they do specify areas that must be addressed.
Amazon has decided that the particular technical arrangement you prefer is too high of a risk, but you haven't pointed out anything that indicates that this is the result of an outdated regulation that results in poor technical choices rather than technology-neutral regulation and a reasonable evaluation of the security concerns of the particular technical arrangement you would prefer.
HIPAA is a very easy compliance standard to meet. If it seems difficult to meet those requirements with your standard tool configurations, you should think about what that means with respect to the integrity of your data.
Google around with terms like forensics and "Volatility" or "Volatility toolkit" and you should find some presentations and other references.
You know what, I don't need a case. Please find me a jurisdiction in which cold boot attacks have passed forensic certification; e.g., a link to the process from a body equivalent to the ASTM https://www.astm.org/Standards/forensic-science-standards.ht... would suffice.
Stem cell research.
The embryonic stem-cell research ban didn't have anything to do with the underlying science and technology--it was based on a moral objection to the practice of destroying embryos for research. If the government had, for example, mandated the use of some testing methodology that soon became obsolete, that might be more on point.
This is of course once almost all cars are self-driving, so it'll be interesting to see what happens in the mid-term.
There will still be black people in cars
Do self-driving cars have a button labeled "fuck your rules and DRIVE"?
The data collection "black box" side of it is in a different section.
I pity the mechanics for that one. You just know the car manufacturers are not going to want some unwashed shade tree mechanic, or even a legitimate independent garage, to have access to do that.
By the way, in which area do the following requests fall:
- Yielding to an emergency vehicle with sirens on.
- Moving backwards to a safe and large enough spot when the route is too narrow to fit self-driving car and oncoming huge lorry (and there is no line marking the limit between road and ravine).
- Upon instructions from authority, recognize that the highway is closed due to an accident and, no matter what the driving code says, you actually have to make a U-turn on the highway and follow the crowd. Alternatively, just take that route (yes, the one with the large no-entry sign at the beginning) or that narrow path in the wood (yes, it exists, even if Google Maps isn't aware of it). At the bare minimum, park yourself off the road and let the others move on.
- Verify whether a queue is forming behind you. Listen to the honkers; they may be right. When you are an obstacle to most of the traffic, moving to the side and letting others pass from time to time is sincerely appreciated.
Was that hyperbole? I would say the majority of regulations (at least in OECD countries) are sensible, and many that are not are intended to be, are outdated, or are politicized.
> I would say the majority of regulations (at least in
> OECD countries) are sensible
American friends have found it incredible -- for example -- that something like NICE can exist and people don't assume it's trying to kill them all; cf "death panels".
I also wonder in what other developed countries Jade Helm 15 would have been controversial...
https://en.wikipedia.org/wiki/National_Institute_for_Health_... -- especially their guidance on how much a year of life is "worth"; see the "Cost Effectiveness" section
On a related note, there is a truly hilarious story from a guy over on Reddit who served in the Marines, was stationed in North Carolina, and had never been in snow, and who came to this exercise and of course got his ass truly handed to him in a snowball fight by a bunch of Norwegian schoolkids. Highly recommended reading; first comment after the OP here:
The EU is often criticized (e.g., Brexit) as being something that promulgates useless regulations (e.g., curvature of a banana).
> The EU is often criticized (e.g., Brexit) as being
> something that promulgates useless regulations
Yes but those almost always turn out to be made up by the Daily Mail.
You know, the type of thing where standard gradings and classification of produce and manufactured goods would be fairly important? (and you know, in no way different to any other modern nation or industrial group).
>>...Britain had achieved cost-effective treatment for everyone, at the cost of some people missing very expensive treatments that might help them. I was rather congratulating myself on this answer, because NICE is beloved of health wonks everywhere; Obamacare’s Independent Payment Advisory Board (IPAB) is an attempt to sort of replicate it. Pointing out something the British health system can do that the American system can’t, and doing so in dryly factual tones, seemed like a good way to endear myself to the British audience.
>>The other guest, a British health official, interrupted to basically accuse me of lying; the British health system, he said, did no such thing.
>>Now I reiterate: I had not called NICE a death panel, or said that it was bad; I had simply described what NICE does, which is keep the NHS from blowing its budget on very expensive treatments that deliver relatively little value per pound spent. You can read NICE describing what NICE does on its website; the description is not significantly different from the one I gave. Being told that this was flat out wrong was surreal. Things got even more surreal when I began again to explain what NICE does, thinking that perhaps I had been unclear, and the host interrupted me and said something like “As you know, that’s false.”
This is a social science result, not a meme.
It'd be easy enough to show that a future testing regime increases the market share of domestic self-driving car manufacturers and pushes the market price up; less easy to show that it wasn't also in the public interest to have that testing regime in place.
Regulatory capture is a form of government failure that occurs when a regulatory agency, created to act in the public interest, instead advances the commercial or political concerns of special interest groups that dominate the industry or sector it is charged with regulating.
a) Previous government experience is highly valuable in private sector employees
b) Government pay is less than this value
b) affects regulatory capture in two ways: it allows civil servants the opportunity to get massive raises by going private (and incentivises them to be nice to future employers) and cripples the recruitment of highly talented individuals who are less dependent on industry advice. I don't think attacking a) is feasible in a modern regulatory state, but b) is readily doable if a government is willing to significantly deviate from standard salary scales for high-value industries. For example, SEC salaries would have to be much, much higher than Department of the Interior salaries. AFAIK, Singapore already does this and has very high talent retention rates. Even within the US government, it isn't entirely unprecedented, since an E-3 Navy special forces operator probably makes 8x the salary of an E-3 Army public relations specialist.
>Materialist capture, also called financial capture, in which the captured regulator's motive is based on its material self-interest. This can result from bribery, revolving doors, political donations, or the regulator's desire to maintain its government funding. These forms of capture often amount to political corruption.
Non-materialist capture, also called cognitive capture or cultural capture, in which the regulator begins to think like the regulated industry. This can result from interest-group lobbying by the industry.
Another distinction can be made between capture retained by big firms and by small firms. While Stigler mainly referred, in his work, to large firms capturing regulators by bartering their vast resources (materialist capture) - small firms are more prone to retain non-materialist capture via a special underdog rhetoric.
This isn't a political statement as it cuts across both parties, which renders it all the more insidious.
I have worked with engineers that write technical regulations. They are generally focused on doing a good job at the task at hand. To think some mid level person that is hired into a normal job and never meets a politician in their career cares about campaign contributions is asinine.
What do you think the people at NASA and NAVSEA and NIST do all day?
The real byline is in your proposed commitment to trying to improve government process: you don't have any. You think it's hopeless. You're apathetic. Which is what everyone, pushing any agenda, wants from you.
I lived near a small historic town with many buildings standing since the 1850s. Tourism is a HUGE industry that brings in dollars to local businesses. It's in the town's interest to preserve that income, so they mandate color and style codes for new buildings as well as restoring older buildings. This is complemented with many folksy festivals and re-enactments as part of drawing in tourist dollars.
When people, industry, or contractors balk at these codes, and they do all the time, there are several larger, modern cities a few miles away with access to the Interstate and train yards. These cities don't share the "historic preservation" codes of this little town and aren't restricted in any design sense.
If the town allowed a free-for-all on design it would wreck its main source of income and likely cause decay over the years as tourism dropped off.
Why don't "tourism people" just pay people constructing buildings to use the colour "tourism people" want? That should be fair to both parties.
> Why don't "tourism people" just pay people constructing buildings to use the colour "tourism people" want?
That implies they could ignore that rule at any time. Reimbursement programs are an increase in paperwork, which many would simply ignore for convenience. This would give the town a "Tragedy of the Commons" problem, erasing its historic character (and primary revenue source), and it would become another run-down town like many others in the region.
Is that fair to those who invested heavily in keeping their businesses and homes in that area? Their answer is a resounding "No."
If somebody balks, just like this, there are other more modern and relaxed cities within a few miles that can accommodate their building ideas. These cities even have more access to freeways and trains, so economically it makes sense to put their businesses inside those cities.
Instead, the primary draw of this town IS its historical authenticity, and thus the people living there keep it maintained through its building codes. There is no other reason why a business or homeowner would live in that area, so it makes sense to keep with its character. If that's too onerous, then perhaps your motivations for building should be reexamined.
GOV: We know this law is overreaching, but we promise we'll only use it the "right" way.
... 2 years goes by ...
GOV: If you don't <plead guilty | accept this plea bargain>, we'll tack on a charge of breaking <this law that is overreaching>, even though you didn't violate what it's supposed to be about, and add 20 years to your sentence.
It's seen over and over. The US citizen's distrust of government getting more power than it absolutely needs isn't paranoid, it's based on the actions of the government.
GROUP: The regulation is anti-business, anti-freedom and massively outdated. We should sweep it all away and deregulate this sector as much as possible. The market will take care of the bad companies.
...2 years goes by...
GROUP: Do you know how important it is that this industry survives? Please give us some money to fix it. And some of the behaviour of some companies in our industry is unethical and dangerous and really should be stopped. Why didn't you step in earlier?
libertarians != anarchists
I say this as someone who has failed to even get a non-form letter answer from any of my elected officials state level or higher. I'm convinced that money is the only way to affect policy.
Gun control is a great example that seems to confuse a lot of non-Americans. To your average San Franciscan, who has never used a gun and has no particular reason to use one, restrictions on e.g. magazine size probably seem quite reasonable. But go to an agrarian Texan rancher, and the situation is entirely different. Good luck thinning out a stampeding herd of wild hogs with a ten round fixed magazine. Similar situation with pot; the average SF resident is probably fairly familiar with it, whereas the rancher probably isn't. In either case, ignorance breeds irrational fear, which is a bad (but unfortunately likely) foundation for laws.
So yes, many regulations are not sensible, and it's harder to get away with in the US because the US isn't a monoculture. Even those regulations that are sensible (by whatever metric you like) are likely to anger some non-negligible group.
I think lately far too many people already have the answer before there is any discussion.
I think democracy only functions when people are open-minded and willing to put themselves in others' shoes.
Well-written regulation (and I would argue that the majority of regulations in the US are well-written) serves the public interest. Two immediate examples that come to mind are the Glass-Steagall Act, which separated commercial banking from speculative trading until it was repealed by GLB in 1999 (opening the door for the financial crisis), and the FDA. I would prefer to live in a country where Glass-Steagall was still in place and the FDA was even stronger than it is today.
But regulations that are computer-focused? Less so.
Hopefully, car companies will deal with reduced demand by going upscale with more fancy cars for a smaller market.
Of course, someone needs to build all of those auto-taxis. They are going to do very, very well for themselves.
Why do you say that? I have no opinion either way, just curious
Really, it should be international.
Now that I'm thinking about it, it's strange that vehicles are regulated at the state not federal level. They're a big component in interstate commerce, and therefore ought to be within the jurisdiction of Congress to regulate, even under a relatively strict reading of the Constitution.
For example vehicle window tinting laws vary wildly from state to state (and arguably they're more liberal in states that get hotter, and more restrictive in states with gang issues) so you can own a vehicle that is legally tinted in your home state, but gets ticketed when it crosses a state border.
Daylight running lights are another example, some states require them, while others do not. So you can buy a brand new vehicle which could get ticketed since it lacks DRLs.
Similarly, most people don't care about tint. Those that do but are agonized about being able to travel to other states can simply figure out the maximum allowed in the region they plan on traveling in. I guess that reaches the level of irritating, but what are the massive consequences for Joe Driver if he can't darken his windows?
Looks like they're strictest in Alaska, California, D.C., Delaware, Iowa, New York, Pennsylvania, and Rhode Island.
"Several states on the Eastern seaboard, the Southeast, and Gulf Coast (except Texas) have enforced vehicular laws since the early 1990s that require headlights to be switched on when windshield wipers are in use. This prompted the phasing in of DRLs in the affected states (from Maine to Florida including Louisiana, Mississippi, and Alabama)."
So it appears that DRL aren't required, but frequently standard equipment in states that require headlamps on if windshield wipers are on... Wikipedia does not list any states requiring use of headlamps all the time, though.
single choice monopolies impede progress, whether governmental or corporate. It's better to have states naturally group together than to force it with some top down measure.
So, for example, NY requires yearly safety inspections and you'll get a ticket if your inspection lapsed. But you don't have to get a safety inspection to drive in NY if your car is registered in a state that doesn't require safety inspections.
I could be mostly off base on this one.
Though some laws are so local that sometimes it's impossible for an out-of-towner to know them, like how turning right on red is, as far as I know, illegal in NYC but legal... everywhere else? How is someone from Texas supposed to know that?
You've pretty much picked an outlier. And I might be inclined to argue that someone from Texas trying to drive in Manhattan for the first time has other problems :-)
There are a few other things, like whether you can pass on the right on an interstate, and the aforementioned rules about when headlights need to be on (though I often see this last point signed). But these are corner cases and don't really affect how the average person has to approach driving.
Places with divergent laws make some effort to inform visitors of the divergence--you'll sometimes see electronic noticeboards saying that using your cell phone is illegal here, and sometimes permanent signs too (e.g., on entry into Virginia on interstates, you are immediately informed that radar detectors are illegal).
The report recommends that "Manufacturers and other entities should develop tests and verification methods...". Does anyone know whether verification here means software verification, or does it mean something else in this context?
Edit: Just noticed that I got to the PDF via elicash's comment and not via the linked article. Here's a link to the PDF: https://www.transportation.gov/sites/dot.gov/files/docs/AV%2...
In this context, they mean verification and validation in the systems engineering sense. Software would be included in that it is a part of the whole system.
On one hand, at the low level (sensors, motor control, etc.) you likely have traditional hard-real-time / MISRA C code, but at the higher layers you probably have things like DNNs and image recognition, which are much less deterministic.
So I am not sure how to reconcile these two worlds and prove the system is safe and always works in a timely manner.
It seems the only sound approach would be to validate the whole system on a real road.
First, as etendue says, it is not easy. The problem of mixing “Boolean” verification with probabilistic, less-deterministic verification is especially hard. I discussed this a bit in , if you care to take a look.
Also, I think most current AVs are not driven by DNNs at the top level (comma.ai  is one exception). See  for some discussion of that, and of verifying machine-learning-based systems.
Finally, one possible way to check that AV manufacturers “do the right thing” in correctly verifying the combination of DNNs, Misra C, digital HW, sensors and so on is perhaps to create a big, extensible catalog of AV-related scenarios, which ideally should be shared between the manufacturers and the certifying bodies – see . I think there is some hint of that in the DOT pdf – still working my way through it.
There's a surprising amount of work in the literature that serves as a guide for using neural networks in safety-critical contexts, e.g., http://dl.acm.org/citation.cfm?id=2156661 and http://dl.acm.org/citation.cfm?id=582141.
Verify components, validate the entire system is the typical approach.
Think of it as a failure cascade - if Tensorflow breaks, the car can safely stop. If the low level stuff breaks, the car may not be able to stop (or go).
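That layering can be sketched in a few lines: a non-deterministic learned planner wrapped by a small, separately verifiable watchdog that always has a safe-stop fallback. All names and thresholds here are hypothetical illustrations, not any vendor's actual architecture.

```python
# Minimal sketch of "verify components, validate the system":
# a stand-in for a learned planner sits behind a deterministic
# envelope check that can always command a safe stop.

def dnn_planner(sensor_frame):
    """Stand-in for a learned planner; may crash or return junk."""
    if sensor_frame is None:
        raise RuntimeError("planner failure")
    return {"steer": 0.02, "accel": 1.0}

def plausible(cmd):
    """Deterministic envelope check -- the kind of small, hard-real-time
    component that can be verified in the traditional MISRA-C sense."""
    return (isinstance(cmd, dict)
            and -0.5 <= cmd.get("steer", 99.0) <= 0.5
            and -5.0 <= cmd.get("accel", 99.0) <= 3.0)

SAFE_STOP = {"steer": 0.0, "accel": -2.0}  # gentle controlled stop

def control_step(sensor_frame):
    """If the high-level layer breaks, degrade to the verified fallback."""
    try:
        cmd = dnn_planner(sensor_frame)
    except Exception:
        return SAFE_STOP
    return cmd if plausible(cmd) else SAFE_STOP
```

The watchdog and fallback are simple enough to verify exhaustively; the planner above it only needs to be validated statistically, at the whole-system level.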
edit: as to SAE Level 2, it has this (and more) to say:
> Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997).
> Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
Two examples are:
1) If the vehicle is talking to the cars in front of it, it can know they are braking before it senses that visually. Also, the vehicles can speed up in a gridlock scenario more in unison, like a train.
2) On the interstate, markers in the pavement can be specifically designed for computer sensors rather than human eyeballs. Also, cars can draft together to save fuel.
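The value of example 1 is easy to put numbers on: a radio message about braking arrives well before a camera can confirm the lead car is decelerating, and at highway speed every fraction of a second is meters of stopping distance. The delays below are illustrative assumptions, not measured figures.

```python
# Back-of-envelope: how much braking distance a V2V warning buys.
# All numbers are illustrative assumptions.

speed = 30.0          # m/s (~108 km/h)
visual_delay = 0.7    # s: time to visually detect the lead car braking
radio_delay = 0.1     # s: time for a V2V brake message to arrive

# Distance travelled before braking begins in each case:
d_visual = speed * visual_delay   # 21.0 m
d_radio = speed * radio_delay     #  3.0 m

margin = d_visual - d_radio
print(f"V2V warning buys about {margin:.0f} m of braking distance")  # ~18 m
```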
Hackers will easily figure out a way to spoof the communication, and could play with traffic.
There are mitigations for most issues, but it's a complex topic.
Just imagine some scenarios:
-) Spoof an emergency brake advisory that causes trailing cars to also brake hard. (Could be mitigated by first observing that the cars in front are actually slowing down before braking.)
-) Spoof a command from a smart traffic light at an intersection to stop immediately for police / other emergency traffic. (need to check if traffic light is actually red)
-) Spoof speed restrictions issued by a smart highway traffic jam prevention system.
-) A system for police to force a car to stop immediately and pull over, eliminating car chases. Just spoof this signal and stop anyone you want. (mitigate by checking if there is a police car trailing you, and ignore otherwise).
And so on...
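The mitigation that recurs in each scenario above is the same: treat the V2V message as a hint and only act once your own sensors corroborate it. A minimal sketch, with a hypothetical deceleration threshold:

```python
# Sketch of the recurring spoof mitigation: a V2V advisory alone never
# triggers a hard brake; the car's own radar/lidar must corroborate it.
# The threshold is a hypothetical illustration.

CORROBORATION_DECEL = -1.5  # m/s^2: lead car measurably slowing

def should_emergency_brake(v2v_brake_advisory, measured_lead_decel):
    """Hard-brake only if the advisory is confirmed by own sensing;
    an unconfirmed advisory might merely heighten readiness."""
    if not v2v_brake_advisory:
        return False
    return measured_lead_decel <= CORROBORATION_DECEL

# Spoofed advisory, lead car actually cruising: no emergency brake.
assert should_emergency_brake(True, 0.0) is False
# Genuine advisory confirmed by sensors: brake.
assert should_emergency_brake(True, -4.0) is True
```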
A way around would be to maintain a national database of public keys for each registered vehicle, and make cars accept only messages signed under those keys. But that would be hard to maintain, and hackers could still just get hold of some registered vehicle's private key.
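The key-registry idea can be sketched as follows. For a self-contained stdlib example this uses symmetric HMAC as a stand-in for the asymmetric signatures (e.g. ECDSA) a real V2V PKI would use; the registry contents and vehicle IDs are hypothetical. Note that it also demonstrates the weakness: any attacker holding a registered key can still sign.

```python
# Sketch of "only accept messages from registered vehicles".
# HMAC stands in for real public-key signatures; names are hypothetical.
import hashlib
import hmac

registry = {"NY-1234": b"secret-key-for-NY-1234"}  # hypothetical registry

def sign(vehicle_id, msg, key):
    return hmac.new(key, vehicle_id.encode() + msg, hashlib.sha256).hexdigest()

def accept(vehicle_id, msg, tag):
    key = registry.get(vehicle_id)
    if key is None:
        return False  # unregistered sender: reject outright
    expected = hmac.new(key, vehicle_id.encode() + msg,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

good = sign("NY-1234", b"BRAKE", registry["NY-1234"])
assert accept("NY-1234", b"BRAKE", good)            # registered, valid tag
assert not accept("NY-1234", b"BRAKE", "00" * 32)   # forged tag rejected
assert not accept("TX-9999", b"BRAKE", good)        # unregistered car
# But: extract any registered key and you can still sign -- the
# weakness noted above remains.
```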
In the end, the driving system will always have to correlate such car-to-car communication with observations it makes itself.
And an autonomous system can react almost immediately anyway.
So coordination doesn't give you all that much.
There are some useful ideas though, like:
-) Traffic lights can announce an ideal speed for a route, taking into account traffic and traffic light timings, so you can optimize throughput and minimize fuel consumption
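That idea reduces to simple arithmetic: if the light broadcasts the distance-independent timing of its next green window, the car can pick a speed that arrives during green instead of stopping. A minimal sketch with illustrative numbers and hypothetical speed limits:

```python
# Sketch of a traffic light announcing an "ideal speed": choose a speed
# (within limits) that reaches the light while it is green.
# All numbers are illustrative assumptions.

def advisory_speed(dist_m, green_start_s, green_end_s,
                   v_min=5.0, v_max=27.0):
    """Return a speed (m/s) that arrives during the green window
    [green_start_s, green_end_s], or None if no legal speed works
    and the car should plan to stop."""
    # Arriving at time t requires speed dist/t, so the green window
    # maps to the speed band [dist/green_end, dist/green_start].
    lo = dist_m / green_end_s if green_end_s > 0 else float("inf")
    hi = dist_m / green_start_s if green_start_s > 0 else v_max
    lo, hi = max(lo, v_min), min(hi, v_max)
    if lo > hi:
        return None
    return hi  # fastest feasible speed maximizes throughput

# 400 m from the light, green from t=20s to t=40s -> 20 m/s works:
v = advisory_speed(400.0, 20.0, 40.0)
```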
It's far, far easier and quicker to throw a brick off a highway bridge, yet that surprisingly happens very infrequently.
We were working on diagnostic and emissions checking standards but there was the expectation that we would be able to make use of secure network links to cars at some point in the future.
The question at the time was which would come first: would a requirement to do emissions testing under real-world conditions push the introduction of radio networks that could also be used for cars to talk to each other, or would road-train-type applications be the initial use case?