"Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"
Yes, Joel's about to say, but Jarvis is back in spew mode. "No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."
"These things teach themselves from experience, right?," Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."
"Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."
To summarise: if a system is not understood, there is always the possibility of sudden, unexpected harm. The system is unsafe. That is (probably) ok if the system is putting icing on donuts (you might get a bad batch), but it is definitely not ok if the system is deciding dosing levels for drugs or controlling machines that could suddenly smash into queues of school children.
Moreover, if (as is customary in all technology) you build systems upon these systems, even low malfunction probabilities multiply into nearly assured failures. With a system you understand you can find the cause and fix the issue for all: this is what allows us to build ever more complex systems over decades of engineering R&D.
But for a system you don't understand, you are at the whim of cascading patterns of errors in the underlying behaviour.
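To put rough numbers on that multiplication, here is a minimal sketch; the per-component failure probability and the component counts are illustrative assumptions, not figures from this thread:

```python
# Rough sketch of how small per-component failure probabilities compound.
# p and the component counts below are illustrative assumptions.

def p_any_failure(p_component: float, n_components: int) -> float:
    """Chance that at least one of n independent components misbehaves."""
    return 1 - (1 - p_component) ** n_components

p = 1e-4  # each component fails 0.01% of the time
for n in (10, 1_000, 10_000):
    print(f"{n:>6} stacked components -> {p_any_failure(p, n):.1%} chance of some failure")
```

On those made-up numbers, 10 components give roughly a 0.1% chance of some failure, 1,000 give about 9.5%, and 10,000 give about 63%: small probabilities become nearly assured failures once you stack enough layers you cannot debug.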
1. A CPU you rely on is a well-understood system, yet unexpected failures still happen (e.g. the Pentium FDIV bug, the Spectre exploits, etc.).
2. A human you rely on might fail unexpectedly (tired, drunk, heart attack, going crazy, embracing terrorism, etc.).
After testing a reasonable number of things, if NNs perform more reliably than those other systems we currently rely on, it will become increasingly hard to justify using the other systems, especially when that means more people accidentally dying every year.
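A hedged aside on what "a reasonable number of things" has to mean here: with zero failures observed in N independent trials, the standard rule-of-three approximation puts the 95% upper bound on the true failure rate at roughly 3/N (the trial counts below are illustrative assumptions):

```python
# Rule-of-three sketch: after N failure-free trials, the 95% upper bound
# on the per-trial failure probability is roughly 3 / N.
for n_trials in (100, 10_000, 1_000_000):
    print(f"{n_trials:>9,} clean trials -> true failure rate likely below {3 / n_trials:.4%}")
```

So demonstrating that an NN beats, say, a one-in-a-million human failure rate takes on the order of millions of representative trials, not a handful.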
For example 2, we hold the human accountable, and we have a system of social and administrative controls that attempts to mitigate these risks. For instance, if you are a pilot and you show up drunk, the flight crew will report you... because they don't want to be killed in the crash you may precipitate.
For 1 - well, actually CPUs are not as well understood as they might be. There are verified CPUs and stacks, but the pace of Moore's law and the cost of verification have meant that the performance gap between them and commodity chips has been huge. In 20-30 years' time I expect that gap will be small and we will see investment in fully verified stacks for many applications.
If the argument about reducing accidental deaths were true, then wouldn't we have abandoned the use of personal cars and instead be insisting on public transit only? I believe that deaths per mile travelled on rail are far lower than for automobiles: in the US (which appears to have the worst record for rail) it is roughly one death per 3.4 billion passenger-km on rail versus one death per 222 million passenger-km on roads, more than an order of magnitude difference.
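For what it's worth, the arithmetic behind "an order of magnitude", using the per-passenger-km figures quoted above:

```python
# Fatality rates quoted above (US figures, per passenger-km).
rail_km_per_death = 3.4e9   # one death per 3.4 billion passenger-km (rail)
road_km_per_death = 222e6   # one death per 222 million passenger-km (road)

ratio = rail_km_per_death / road_km_per_death
print(f"Road travel is roughly {ratio:.0f}x deadlier per passenger-km than rail")
```

That works out to roughly 15x, i.e. a bit more than an order of magnitude.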
I believe "With a system you understand you can find the cause and fix the issue for all" covers the first case. The second case is applicable to both NN and traditional control systems.
So which system would you personally prefer to rely on in a life-or-death situation: the one that is well understood (accident rate 0.0001%), or the one that is poorly understood (accident rate 0.000001%)?
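Taking those two hypothetical rates at face value, a quick expected-value comparison (the exposure figure is an illustrative assumption):

```python
# Expected accidents for the two hypothetical systems above, over the same exposure.
understood_rate = 0.0001 / 100    # 0.0001%   per use -> 1e-6
opaque_rate     = 0.000001 / 100  # 0.000001% per use -> 1e-8
uses = 100_000_000                # illustrative exposure, e.g. uses per year

print(f"well understood  : ~{understood_rate * uses:.0f} expected accidents")
print(f"poorly understood: ~{opaque_rate * uses:.0f} expected accidents")
```

On these numbers the opaque system causes about 100 times fewer accidents, which is exactly the trade-off the question asks you to weigh against the comfort of understanding the failure.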