
How well do cars do in crash tests they're not optimized for? - weinzierl
https://danluu.com/car-safety/
======
wscott
The article reminds me of a story from a friend that was an engineer at a
major water heater company. Modern electric water heaters are already fairly
efficient, but every model gets tested by the government to be given an
efficiency rating.

The test consists of replacing the anode with a rod that has a series of
temperature probes to measure the water at different levels in the water
heater. Since the water forms layers of different temperatures, you need to
measure at multiple locations to know the overall energy in the water. Then a
series of water draws are performed with pauses between them and the total
energy used is compared to the energy delivered.

Our engineer realized that if he could put the temperature layers at just the
right locations, the calculations with the limited number of probes could be
skewed. And this could be done by tweaking the height of the two heating
elements, just moving them a few inches.

The first time he tried this, an internal test came back at 101% efficiency. A
little more tweaking and the test returned a consistent 99.x%, which was
respectable.
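
To make the mechanism concrete, here's a toy sketch (purely illustrative: the probe count, temperatures, and geometry are made up, and this is not the actual test procedure) of how a stratified tank plus a handful of probes can over-report stored energy:

```python
def true_profile(depth, thermocline, hot=60.0, cold=20.0):
    """Step temperature profile: hot water sits above the thermocline."""
    return hot if depth < thermocline else cold

def probe_estimate(probe_depths, thermocline):
    """Average temperature as the test sees it: each probe stands in
    for an equal-height band of water."""
    return sum(true_profile(d, thermocline) for d in probe_depths) / len(probe_depths)

def true_average(thermocline, n=10_000):
    """Finely sampled 'real' average temperature of the whole tank."""
    return sum(true_profile((i + 0.5) / n, thermocline) for i in range(n)) / n

probes = [0.125, 0.375, 0.625, 0.875]  # 4 evenly spaced probes (depth as a fraction)

# Thermocline halfway between probes: the estimate matches reality (40 vs 40).
print(probe_estimate(probes, 0.5), true_average(0.5))

# Thermocline nudged just past the third probe (e.g. by moving a heating
# element): the probes now report a hotter tank than actually exists
# (50 vs ~45.2), inflating the apparent stored energy.
print(probe_estimate(probes, 0.63), true_average(0.63))
```

The real tank and the real test are far more complicated, but the shape of the exploit is the same: position the layer boundaries so the sparse probes see the flattering side of them.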

~~~
cybrjoe
Serious question, how is this different from what VW did? There was a lot of
talk about complicit engineers vs. shady management in that discussion. Did
this ever make it to production?

~~~
henryfjordan
VW actively detected if the car was on a treadmill testing setup and
electronically changed the engine parameters to make it run cleaner.

The water heater was designed with the test in mind but functions the same
regardless.

VW was clearly cheating whereas the water heater is taking advantage of known
deficiencies in the test (is that cheating?).
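
The kind of detection being described can be sketched in a few lines (hypothetical signals and thresholds; this is not VW's actual code, just the general shape of a "defeat device"):

```python
def looks_like_dyno(wheel_speed_kph, steering_angle_deg, undriven_wheel_speed_kph):
    """Hypothetical dyno heuristic: driven wheels spinning fast while the
    steering wheel is dead straight and the undriven wheels are stationary
    looks nothing like real road driving."""
    return (wheel_speed_kph > 20
            and abs(steering_angle_deg) < 1
            and undriven_wheel_speed_kph < 1)

def emissions_mode(on_dyno):
    """Run the full emissions controls only when it looks like a test."""
    return "full emissions controls" if on_dyno else "controls reduced"

print(emissions_mode(looks_like_dyno(50, 0.2, 0.0)))   # test stand: runs clean
print(emissions_mode(looks_like_dyno(50, 8.0, 49.5)))  # real road: doesn't
```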

~~~
alistairSH
I'm not sure that's an accurate description of VW's cheat.

VW knew the parameters of the test (stationary car, no steering input,
prescribed throttle inputs, etc). And they configured the car to pass the
test.

Water-heater engineer knew the parameters of the test (number and location of
probes). And he configured the water-heater to pass the test.

Same-same. In both cases, an engineering team willfully committed fraud to
improve their sales figures.

The only differences are a bit pedantic. VW's "configuration" was more
elaborate. But, both groups gamed the test.

~~~
nwallin
The differences are not pedantic at all. The differences are _extremely_
significant.

The VW engineers designed the car to do a different thing while being tested
vs. used by consumers. While being tested, the emissions system was on. While
being used by consumers, the emissions system was _entirely disabled_.

The water heater engineers designed the water heater to do the same thing
while being tested vs used by consumers.

If the water heater engineers designed the water heater to _never turn on_ and
keep the water at room temperature while being tested, ("works good boss,
water heater efficiency is infinity percent") then yes, it would be reasonable
to claim they did essentially the same thing.

~~~
esrauch
> If the water heater engineers designed the water heater to never turn on and
> keep the water at room temperature while being tested

I think the difference between what they did and this is only a difference in
degree and not kind though, right?

~~~
titzer
It _is_ a difference in kind: VW dynamically detected and _adapted behavior_
to the test. It would never operate in that way under normal conditions. The
water heater example was completely static: it always behaved the same way,
under test or not.

~~~
esrauch
People obviously disagree since I was downvoted but I don't really see the
black-and-white difference between the two.

In both cases there was an intentional design change made deliberately to
mislead a measurement, and the final product does not match the measured
thing. Both adversarially cause the consumer's purchase not to be the measured
item; no one gets a car whose emissions were actually measured, and no one
gets a heater whose efficiency was actually measured.

------
laegooose
The whole car safety topic is very confusing. Car accidents are the first or
second leading cause of death in the US for people in the 20-40 age group [1].
Probably less so in Europe, but still one of the leading ones. I was thinking
about how I can make myself safer.

Which car brand/model is better? Which seat is better for passenger? How much
riskier it is to drive during the night? In the rain? How dangerous is it to
overtake on road with 1 lane in each direction? What is most dangerous:
crossroad with no traffic lights? Turns? City? Highways or smaller roads?

I was not able to find any meaningful research. There are ratings by different
government agencies (US, UK, Australia etc), but basically all cars have
highest rating which I find hard to believe.

~~~
JoeAltmaier
Drive the largest vehicle you can. In a collision, the momentum is converted
to crushing force; you want to be in the less-crushable vehicle.

Hey this may be an unpopular choice, but it is the one with the biggest safety
impact! And that was the question.

~~~
yboris
Rather immoral. Not just offloading the burden onto others, but exacerbating
it.

Vehicles and Crashes: Why is this Moral Issue Overlooked? by Douglas Husak

[https://www.jstor.org/stable/23562447?seq=1](https://www.jstor.org/stable/23562447?seq=1)

~~~
dionidium
It's the arms race itself (and the fact that we allow it to continue) that's
immoral. Acting rationally within the arms race is not immoral. Our laws
should address the core causes of this arms race.

People are starting to wake up to the notion of "systemic issues," but are
still too slow to apply what they've learned to other matters. It's
a mistake to focus on the behavior of individuals in a system that's
sanctioning and encouraging bad behavior.

------
hinkley
One of the more morbid yet profound aphorisms that I know is:

Safety regulations are written in blood.

FAA regulations are often based on finding a new way to break an airplane that
they hadn't diagnosed before. Safety ratchets up as new failure modes are
discovered. If cars keep showing up pancaked in a particular way, then
guidelines change.

Until the failure mode is regulatory capture, at least.

In the 90's the collective wisdom said you could shock people into doing the
right thing, and on several occasions they preserved a badly t-boned vehicle
to illustrate why drunk driving was such a bad thing.

These days the pillars are huge and well-connected. I'm not sure you can crush
a car that badly without a commercial vehicle. They test for that stuff.

~~~
w-ll
The "pancaking" is designed, modern cars are made to "crush" and absorb the
energy of an impact.

~~~
hinkley
You might want to talk to someone about your pancake making skills.

A pancake is flat. There is no room in a pancake for a viable human. The car
is all but obliterated.

~~~
pests
The front pancakes. The cabin does not.

~~~
hinkley
If I were talking about crumple zones I would have said crumple zones. I'm
talking about the car as a unit. Is-a, not has-a.

------
DavidPeiffer
Does anyone know of a way to objectively compare safety ratings between
different eras of cars?

The standards for safety ratings change periodically, and great strides have
been made in safety. I'm curious how much risk I'd remove by upgrading from a
2010 to a 2015 or 2018.

Driver death data [1] mentioned below may be the best source, just curious if
there are any others.

Also, [2] is a really interesting crash test between a 1998 Corolla and a 2015
Auris (rebranded Corolla).

[1] [https://www.iihs.org/ratings/driver-death-rates-by-make-and-model](https://www.iihs.org/ratings/driver-death-rates-by-make-and-model)

[2] [https://youtu.be/IVEjsvip8kc](https://youtu.be/IVEjsvip8kc)

~~~
nxc18
Also worth considering is that anything after 2018 likely has intelligent
crash prevention features. In addition to modern cars being much safer in a
crash, the benefits of avoiding collisions altogether have to be worth it.

I’ve been driving a car with the pre collision warnings, lane departure
warnings, etc since 2017 and it has saved me at least once.

There’s a concern about moral hazard with the new technologies, but my rule is
to drive as if the car has no safety features, and if they activate, then that
is a bonus.

It’s hard to compare the collision prevention systems over time since they’re
so new, but just going off the narrative reporting from IIHS, and comparing
the test standard in 2017 vs 2020, leads me to believe they’ve made great
strides in the last few years.

~~~
Marsymars
> Also worth considering is that anything after 2018 likely has intelligent
> crash prevention features. In addition to modern cars being much safer in a
> crash, the benefits of avoiding collisions altogether have to be worth it.

I expect they're a net gain for the public, but that for an individual who's
conscientious and never drives under the influence, tired, or distracted, that
they're only marginally useful at best.

~~~
throwaway0a5e
>I expect they're a net gain for the public, but that for an individual who's
conscientious and never drives under the influence, tired, or distracted, that
they're only marginally useful at best.

Possibly even negatively useful if you consider that the car they come in has
significantly reduced visibility compared to its '00s counterpart.

------
m12k
In my university physics course, we did a project where we made our own
physics models of car crashes and compared with crash tests and statistical
data about real-life crashes. Our conclusion at the time was that crash test
results matched our models of collisions with identical vehicles of opposite
velocity (i.e. a head-on collision - the most complex case we could solve
analytically), which, if you work it out, should come out the same as the
normal force when hitting something stationary like a concrete pillar. In this
case crumple zones are the most important factor, as they allow you to
decelerate over a longer distance. The real-life crash reports, though,
matched the results we got from simulations (which we resorted to when the
models got too complex to solve analytically), in which case by far the most
important factor was the weight of the cars involved in a given crash.
Whoever is in the heaviest car will tend to win the game of conservation of
momentum. Kind of a sobering conclusion tbh.
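
The conclusion about mass falls out of the same momentum bookkeeping (a toy perfectly inelastic model; real crashes are messier, and the numbers below are illustrative):

```python
def delta_v(m1, m2, v):
    """Two cars approach head-on, each at speed v, and stick together
    (perfectly inelastic). Momentum conservation gives the common final
    velocity; returns each car's |delta-v|, a rough proxy for injury risk."""
    v_final = (m1 * v + m2 * (-v)) / (m1 + m2)
    return abs(v_final - v), abs(v_final + v)

# Equal masses: each car sees a delta-v of v, the same as hitting a rigid
# wall, which is why the wall test models the identical-vehicle head-on case.
print(delta_v(1500, 1500, 15))  # (15.0, 15.0)

# 1500 kg car vs. 3000 kg SUV at the same closing speed: the lighter car
# absorbs twice the delta-v of the heavier one.
print(delta_v(1500, 3000, 15))  # (20.0, 10.0)
```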

------
natch
This is the most confusing article I've seen from danluu. What data was used?

The glaring incongruity for me arose from the details on Tesla.

"For example, on the driver-side small overlap test, Tesla had one model with
a relevant score and it scored Acceptable (below Good, but above Poor and
Marginal) "

Note the words "above poor." But in the bullet points danluu uses a different
scale that doesn't match this prose, and slots Tesla into "poor."

I'm sensing an agenda. But I'd rather keep an open mind, and look at the raw
data myself.

The other odd thing is that the article mentions both 2012 and 2018 data, but
doesn't provide a specific link to the data that was used. What raw data was
used?

A clue lies in the other prose about Tesla in the article, which refers to a
2017 article about Tesla's pushing back. (The points about this being
predictable as a PR move seem valid. However what's more interesting is the
timeline.) 2017 is prior to 2018. Which points to the data being relied on
here being the 2012 data, not the 2018 data.

The problem with using very old data is that the cars have evolved
significantly since 2012. I mean, 2012 was the very first production year for
the Model S. We're talking an almost hand-crafted (though semi-robotic) assembly
line back at that point. And yet the article talks about this as though the
data represents Tesla in general. We are eight years along into the future now
— almost a decade — with that many years of improvements and two (correction,
_three_ ) new models, if you don't even count the vast updates to the Model S.
This should be disclosed in a way that is glaringly clear, not glossed over or
minimized.

In short, I would take everything here with a grain of salt. The research
question, though, seems like a very good one. I would look forward to hearing
an update where the data is provided, the sources are examined, the funding
for the lab tests is disclosed, and current car models are covered.

~~~
43920
Most of this is based on the IIHS data; 2012 is the last time a new type of
test was added (small overlap), so it's what's being used to make this
comparison.

Almost all the cars mentioned in the article have likely improved since then,
but the issue isn't so much how safe specific models are, but rather what
approaches the manufacturers are using to improve safety. What the test
results indicate is that rather than actually designing for safety,
manufacturers are designing to meet a very specific set of benchmarks; there
isn't much reason to believe that has changed since then.

The most obvious example of this (which the article briefly mentions) is the
difference between driver-side and passenger-side performance:
[https://www.iihs.org/news/detail/small-overlap-gap-vehicles-with-good-driver-protection-may-leave-passengers-at-risk](https://www.iihs.org/news/detail/small-overlap-gap-vehicles-with-good-driver-protection-may-leave-passengers-at-risk)

~~~
natch
>the issue isn't so much how safe specific models are, but rather what
approaches the manufacturers are using to improve safety.

How would anybody possibly be able to characterize Tesla’s approach to improve
safety today based on looking only at a test of one car eight years ago? You
said it yourself: “improve” — the word implies a measurable delta in the data.
But if the data is a static single value, there is no delta to work with.

And there’s no rule stating that this is the issue that a blog post must
cover.

It’s a blog. The author is free to choose and cover any issue they please.

It’s just not a worthy issue to make claims about, if your data is so limited
that you are (he is) trying to derive manufacturer behavior by looking only at
a single car model year as a snapshot in time. It’s simply ridiculous to try
to conclude anything from such data. And then to cover that up by presenting
the results on twitter as a table that highlights the false conclusion... just
dishonest imho.

~~~
archi42
Selective reading?

He acknowledges Tesla to be difficult to fit, right under the "Tier Table":
"[...] Tesla not really properly fitting into any category (with their
category being the closest fit)".

And in the "Bonus: Reputation" section, he supplies an argument/reasoning what
he doesn't like about Tesla:

[...]I find the Tesla thing interesting since their responses are basically
the opposite of what you'd expect from a company that was serious about
safety. [...], they often have a very quick response that's basically
"everything is fine". I would expect an organization that's serious about
safety or improvement to respond with "we're investigating", [...]

For example, on the driver-side small overlap test, Tesla had one model with a
relevant score and it scored Acceptable (below Good, but above Poor and
Marginal) even after modifications were made to improve the score. Tesla
disputed the results, saying they make "the safest cars in history" and
implying that IIHS should be ignored in favor of NHSTA test scores[...]

(end quote).

In that context I think it's important not to skip Tesla, and to mention the
above. By giving his thoughts, he allows readers to make up their own
opinion (and you're free to come to your own conclusion).

Full disclosure: Safety was one of the main reasons to get a Volvo. However, I
am not someone to hate on Tesla.

------
gorgoiler
Volvo was always marketed as a safety product years ago. Seatbelts, seatbelt
warning lights, airbags, and daytime lights — a requirement in Sweden — all
spring to mind.

I had assumed that Volvo lost that edge this century so it’s good to see them
at the top of the list.

~~~
blakesterz
I just took a quick look at Consumer Reports for Volvo's reliability and
they're not great. A couple were even bad. If my memory is right, this has
been the case for a while. It looks like maybe Volvos are safe, but not super
reliable I guess?

~~~
dangus
On the other hand, all car makes have become much more reliable than they were
in the past.

Mostly, what you're looking out for is cost of parts and difficulty of labor.
For example, a lot of Audi/Volkswagen products suffer from simple repairs
being made more complicated by the need to pull things apart for the mechanic
to reach them.

~~~
frosted-flakes
In some cases, absolutely. Having to completely disassemble the door to repair
a failing lock module is seriously annoying, as is making it impossible to
remove the engine mount while the engine is in the car (seriously complicates
timing belt changes on VW mk4 cars, because the belt loops around the engine
mount).

But on the GTI (2009 in my case), they did something rather ingenious: instead
of welding the outer door skin on, they bolted it on with about 25 short
bolts. Taking those off takes about one minute with an impact driver, and you
have full access to the door lock module and window regulator, as well as
making rust or dent repairs almost trivial. Why can't everything be like that?

~~~
AgloeDreams
Speaking of timing... you got that timing chain tensioner replaced, right?
Mine blew on my 2009 Eos and lunched the motor.

~~~
frosted-flakes
Five years ago! The GTI was my late brother's (now my parents'), and he was on
top of that kind of stuff. Of course, we got a recall notice last year for it,
way too late. And since we did it ourselves, I don't see that VW would
reimburse us for the work.

------
netman21
When I designed car seats we optimized them for the federal tests. The biggest
load on a seat back was represented by a large 95% man leaning back to get at
his wallet. We were ready for production when the car had its first sled
test, which basically mimics what happens to the occupants when hitting a solid
barrier at 30 MPH. The front seat occupants flew forward, bounced back, and
completely broke the seat backs, crushing the legs of the rear occupants.

~~~
bruckie
What's a 95% man?

~~~
kube-system
95th percentile in size

~~~
jacquesm
And more importantly: mass.

------
WickedFlick
This was an excellent article. Car safety is something I think most people
don't take very seriously when buying a new car.

However, the author mentions the 1959 Bel-Air vs 2009 Chevy Malibu crash test
as a dramatic example of the progress of car safety. It is indeed a
spectacular video, but there's a good case to be made that the 1959 Bel-Air
was likely specifically chosen to exaggerate the progression of car safety, as
it had a _particularly_ terrible X-frame design that crumples like tissue
paper in a crash. Other, more traditionally designed cars from that era would
almost certainly fare significantly better in that test.

Take note of the comment below a Jalopnik article on this crash test:

>"Fast_Nel in the last story about this crash-off posted a video showing the
deficiency of the X-Frame design of this car and it was an excellent video. If
the IIHS would have chosen a car with a ladder frame, like say a 65 Impala or
another car of the same year with a standard ladder frame, the results would
probably been much better for the older car. I am sure IIHS purposefully used
this particular style vehicle with the knowledge that it would fail
spectacularly in order to toot its own horn. So yes there was something
seriously wrong with the video.

One might ask, "Why GM would use a frame like this?" My guess is that GM
attempted to be innovative with the X-Frame concept and they failed, because
all the ramifications had not been considered."

Source: [https://jalopnik.com/yes-the-iihs-crashed-59-chevy-had-an-engine-5364071](https://jalopnik.com/yes-the-iihs-crashed-59-chevy-had-an-engine-5364071)

That's not to say that newer cars aren't safer in general, they very likely
are. But that test in particular is not a good example.

------
Merrill
> For a long time, adult NHSTA and IIHS tests used a 1970s 50%-ile male
> dummy, which is 5'9" and 171lbs. ... For reference, in 2019, the average
> weight of a U.S. adult male was 198 lbs and the average weight of a U.S.
> adult female was 171 lbs.

So the average mass of the American male has increased almost 16% in 50 years! No
data on whether he is taller, but I'd bet the BMI has gone up.

~~~
itpwang
Referencing the same line, I remember seeing another article on HN somewhat
recently about the downside to taking averages of a human to fit specs since
the "average"(mathematical mean of measurements) human never existed in
reality. Something about fitting pilots to a cockpit?

~~~
nitrogen
These "average" cars have pretty terrible seat backs if you happen to be
taller than the average. Outright destructive to the spine.

------
ummonk
Is passenger side small overlap really a common form of collision? I would
imagine since everyone drives on the right side of the road, most higher speed
small overlap collisions would be on the driver side. Perhaps car
manufacturers were right not to optimize for passenger side small overlap
collisions?

Frankly though, this is an area where I don’t think it is all that harmful for
the metric to become the target, since the metric itself can be modified to
reflect common crash conditions, and signal to car manufacturers what kinds of
collisions should be optimized for to improve real world safety.

One big feature missing from the current suite of crash testing, though, is
small vehicles colliding with larger vehicles. A wall only simulates colliding
with an identical vehicle, but small vehicles are especially vulnerable in
collisions with larger ones.

~~~
hinkley
Passenger side small overlap seems like something that would happen when
someone tries to pull into your lane. As you say, the average delta-V for that
collision is going to be much lower than for driver's-side collisions (passing
on the right, or crossing the center line).

~~~
throwaway0a5e
Overlap crashes tend to happen when the driver attempts to avoid an obstacle
but there is not sufficient space to fully do so.

------
gok
One thing I appreciate in supervised machine learning is that it's widely
acknowledged that one should not test on training/tuning data. For both
computer benchmarks and car crash tests, this is not the case. Both fields
routinely work to optimize for a test, which leads to dramatic overfitting.
This would be like passing out a standardized test years before students take
it.

This seems solvable. For benchmarks, there should be held out tests that don't
get distributed to (say) compiler authors until after the results are
published. For crash tests, parameters like overlap and speed should be
randomized each year so that manufacturers are forced to optimize for a range
of possible crash scenarios.
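
The overfitting in question is easy to demonstrate: if you pick the best of many tunings by their scores on a fixed, public test, the winning score is inflated relative to a fresh held-out measurement (a toy example with made-up numbers, not any real benchmark):

```python
import random

rng = random.Random(0)

def benchmark_run(true_quality):
    """One benchmark run: the true quality plus measurement noise."""
    return true_quality + rng.gauss(0, 1.0)

# 50 candidate 'tunings' that are all, in truth, equally good (quality 0).
candidates = [0.0] * 50
public_scores = [benchmark_run(q) for q in candidates]

# Pick the winner by its score on the fixed, public test...
best = max(range(len(candidates)), key=lambda i: public_scores[i])

# ...then measure it once on a test it was never tuned against.
held_out_score = benchmark_run(candidates[best])

print(f"winning public score: {public_scores[best]:.2f}")  # typically well above 0
print(f"held-out score:       {held_out_score:.2f}")       # an unbiased draw around 0
```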

~~~
ghaff
>For benchmarks, there should be held out tests that don't get distributed to
(say) compiler authors until after the results are published.

The way you really "solve" the problem is to have a third-party test
organization run the benchmark and only provide the final result--and you only
get one shot for a given generation of hardware/software. (This is sort of how
standardized tests work.)

The problem is that you're now benchmarking with a totally opaque test that
you have to trust is reasonably representative of the type of workload it
claims to represent as well as trust that the benchmark isn't favoring
specific vendors or design decisions that are irrelevant to real world
workloads.

I assume there have been benchmarks at various times that were similar to
this. Certainly, I'm pretty sure source code for benchmark suites hasn't
always been available.

------
mumblemumble
Somewhat related anecdote, told to me by an acquaintance who works at a
major US crash testing facility:

I don't know about now, but, once upon a time, not too long ago, US safety
standards included crash tests where the dummies were not wearing seat belts.
This naturally leads to cars sold in the US having air bags that are tuned
slightly differently, so that they are less likely to cause injuries to people
not wearing seat belts. Unfortunately, doing this _decreases_ their ability to
protect the vast majority (nowadays) of people who do use their seat belts.

~~~
csours
As far as I know, this is still the case. A significant minority of people in
the US do not wear seat belts; thus the timing on airbags has to accommodate
this, or the airbags themselves would cause injury.

~~~
Sohcahtoa82
I feel naked without my seat belt.

Occasionally, I'll get into my wife's car to simply move it into the street so
I have room in my driveway to wash mine, and even though I'm driving less than
50 feet, I still feel awkward not putting on the seat belt.

------
himaraya
The "Using real life crash data" section is just wrong. The IIHS confidence
interval describes an empirical death rate, not an "expected fatality rate."
Therefore, an interval including 0 deaths per million registered vehicle years
arises due to very few deaths over a very large dataset, not from some
"informative prior" bounding the interval at 0. As an empirical description,
the IIHS figures should have high statistical power, after adjusting for
confounding variables. Time and location could be interesting factors to
consider.
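
The point about intervals touching 0 is just small-count statistics. A rough sketch (not IIHS's actual methodology; the counts and exposures below are invented) of an approximate Poisson interval on a death rate:

```python
import math

def rate_ci(deaths, million_vehicle_years, z=1.96):
    """Approximate 95% interval on deaths per million registered vehicle
    years, treating the observed death count as Poisson. With only a
    handful of deaths over a large exposure, the normal-approximation
    lower bound dips below zero and gets clipped -- no 'informative
    prior' required."""
    half_width = z * math.sqrt(deaths)
    lower = max(0.0, (deaths - half_width) / million_vehicle_years)
    upper = (deaths + half_width) / million_vehicle_years
    return lower, upper

print(rate_ci(2, 1.0))    # (0.0, ~4.77): interval touches 0
print(rate_ci(100, 1.0))  # (~80.4, ~119.6): comfortably above 0
```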

For Dan's suggestions, the IIHS already breaks down death rates per 10 billion
miles by vehicle class and individual vehicles, and the latter supports the
original ordering by registered vehicle years. Also, incorporating data from
models under the same make doesn't make sense. Since the IIHS already accounts
for different vehicle platforms, any believed correlation in death rates
between a manufacturer's models must derive from the original dataset,
creating a meaningless statistic.

Updated IIHS doc here: [https://www.iihs.org/api/datastoredocument/status-report/pdf/55/2](https://www.iihs.org/api/datastoredocument/status-report/pdf/55/2)

------
jariel
On page 3 of [1] from the article, which shows the models with the most
deaths, it seems like there's an almost perfect correlation between 'size' and
'deaths'.

I remember thinking about getting a Mini Cooper, literally while driving in my
family's quite large SUV, with my father pointing out how unsafe they are due
to their size. I couldn't help thinking of the odd 'race to size-safety'
severely limiting people's ability to drive smaller, more efficient cars.

I want to drive a small car for a variety of reasons. But the moment I have
children ... 'safety' might be my #1 concern (along with the convenience of a
large utility vehicle). It could very well be that this 'safety arms race' is
harmful.

[1] [https://www.iihs.org/api/datastoredocument/status-report/pdf/52/3](https://www.iihs.org/api/datastoredocument/status-report/pdf/52/3)

------
wolfgke
"I think it would be surprising if other car makers had a large suite of crash
tests they ran that aren't being run by testing agencies, but it's
theoretically possible that they do and just didn't include a passenger side
small overlap test)."

The fact that Mercedes and BMW are in the category "poor without
modifications; good with modifications" (i.e. with some modifications, the
cars get from "bad" to "good") rather suggests to me that these two car
manufacturers _do_ have such test facilities, where they run such tests to be
prepared for when such tests become included.

------
arnaudsm
Reminds me of the modern web that is more optimized for SEO than for the
users.

------
strict9
Whenever metrics are instrumented to measure productivity, safety, quality,
or efficiency, the metrics and the metrics alone are what matter, not the
overarching goal.

Whether it's car crashes, fuel efficiency, lines of code, completed tickets,
OKRs, or anything else, players in the system adjust to optimize output for
the evaluated metric.

A tale as old as time.

~~~
spott
Unfortunately, there isn't really a solution to this problem. Metrics are the
only way to measure some things reliably and consistently across users,
testers and subjects.

The goal is to have performance on the metric be as close as possible to the
desired characteristic. This isn't always possible (and gets really hard when
people are involved), but there isn't really any other solution.

~~~
nitrogen
Hidden metrics help, but it is pretty frustrating as an individual to be
evaluated based on hidden criteria.

~~~
spott
Yea, and no metric stays hidden forever if there's enough money in getting it
unhidden.

Statistical metrics have some of the benefits of hidden metrics while staying
"hidden" forever, but they also can be much more time consuming to do right
and are even more frustrating to be evaluated by.

------
kinkrtyavimoodh
For those who find this website's edge to edge text maddening to read on large
displays, I highly recommend Sakura
([https://github.com/oxalorg/sakura](https://github.com/oxalorg/sakura)) for a
one-click fix for badly formatted websites.

~~~
mixmastamyk
Reader mode is another option on most browsers.

However, it can also be said that a full-screen browser on a landscape
widescreen monitor is "doing it wrong."

------
bigtasty
This was a very interesting article. I’m curious if auto makers knew about the
new tests being added, and if so, how far in advance? It certainly seems that
if they did know about the new tests, they were already too far into vehicle
development to make changes.

------
chrisbigelow
"When a measure becomes a target, it ceases to be a good measure."

------
ohples
Also, don't forget: crash tests (in the US at least) don't measure safety for
non-occupants, i.e. pedestrians and other road users.

------
x87678r
There must be stats for injuries or deaths by car. Are they public?

~~~
kube-system
[https://www.iihs.org/ratings/driver-death-rates-by-make-and-model](https://www.iihs.org/ratings/driver-death-rates-by-make-and-model)

~~~
ohples
Those are just driver deaths.

~~~
kube-system
The request was vague, but insurance industry figures are going to be the best
breakdown by vehicle you can get in the US. NHTSA stats are usually broken
down by category of vehicle.

------
amelius
How about F1 cars?

~~~
andrewzah
Has there ever been a full or partial head-on collision with an F1 car?

~~~
crottypeter
Yes, in Abu Dhabi in 2010, Michael Schumacher spun 180 and Vitantonio Liuzzi
then hit him head on.

It was at relatively low speed (for F1) but Schumacher's head was very nearly
hit as the other car rode over his.

------
czzr
Optimizing for a test is an issue if the test is not well correlated with the
desired outcome. By and large that doesn’t seem to be a big issue for crash
tests.

~~~
michaelt
In this case, it's an argument that crash testing should be more expansive and
thorough.

For example, if engineers are going to position the driver and passenger
airbags for optimal crash test performance, we probably shouldn't just be
testing with a male dummy in the driver's seat and a female dummy in the
passenger's seat, as in the modern age women often drive.

If engineers are strictly designing to the test, there should be a suite of
repeats, putting dummies that are male, female, fat, thin, tall and short in
every seat.

