
Images that fool computer vision raise security concerns - lm60
http://news.cornell.edu/stories/2015/03/images-fool-computer-vision-raise-security-concerns
======
pavel_lishin
Peter Watts mentioned this potential problem in his Rifters series; one
explicit example was a neural net that ran a train, trained on a series of
inputs, one of which was a clock in a train station. One day the clock broke,
and the neural net took some action that ended up killing all the passengers.
(I forget the details.)

Which is not to say that we should all fear computers more than humans as a
consequence; we do inexplicable things, too.

------
masterfres
Very interesting. First of all, here's a youtube video associated with the
paper:
[https://www.youtube.com/watch?v=M2IebCN9Ht4](https://www.youtube.com/watch?v=M2IebCN9Ht4).
Second, some here have posted about the Szegedy, Goodfellow, and Shlens
paper [http://arxiv.org/abs/1412.6572](http://arxiv.org/abs/1412.6572), which
discusses the opposite effect. The Szegedy research is mentioned in the Nguyen
paper and shows that, given an image that is correctly classified by a DNN,
you can alter that image in a way imperceptible to a human to create a new
image that will be INCORRECTLY classified. The Nguyen, Yosinski, et al. work
that's the subject of this post shows that, given a DNN that correctly
classifies a particular image, you can construct a gibberish image that the
DNN will confidently assign to the same class.

Both results are interesting from the standpoint of DNN construction, and
there have been some papers suggesting ways to counter the effects described
in the Szegedy research. In practice (as others have mentioned), in order to
construct an exploit similar to the one described in this post, you'd need a
lot of knowledge about the DNN (e.g. its weights) that an external attacker
wouldn't have.

What this does leave open, though, is a disturbing way for someone with
internal access to a DNN doing important work (e.g. object recognition in a
self-driving car) to cause significant damage.

------
commentereleven
This reminds me of this tool-assisted speedrun:
[https://www.youtube.com/watch?v=GOfcvPf-22k](https://www.youtube.com/watch?v=GOfcvPf-22k)

The image recognition in Brain Age is obviously much simpler, but it's still
basically the same idea.

------
randcraw
This is not so different from recognizing images in their Fourier frequency
domain. The frequency features and their origins in the spatial domain can be
made very unintuitive.

But I'm not clear how important this phenomenon really is to the practice of
CV, since 1) 'spoofed' images are highly specific to each DNN being used, and
2) a trivial reality check of the image can always 'out' examples like these.

~~~
acadien
Your 2nd point is critical: you can filter these images easily before even
running them through the DNN. However, researchers are also interested in why
it is possible to spoof NNs in general. The typical explanation of
'overfitting' is being questioned.

The question is also raised as to whether new methods of spoofing are
possible that aren't so easily detectable.

------
woodchuck64
"We realized that the neural nets did not encode knowledge necessary to
produce an image of a fire truck, only the knowledge necessary to tell fire
trucks apart from other classes," [Yosinski] explained.

This seems markedly different from biological neural networks. Is the
difference one of network structure/algorithm or rather the fact that
biological neural networks (in human image processing) actually have time and
space to learn a lot about each individual image class?

~~~
compbio
Deep nets are only loosely inspired by neurobiology. That's why LeCun calls
them "convolutional nets" and not "convolutional neural nets" and prefers
"nodes" over "neurons".

It is, however, possible to have a deep net produce 3D models/images:
[https://www.youtube.com/watch?v=QCSW4isBDL0](https://www.youtube.com/watch?v=QCSW4isBDL0)
"Learning to Generate Chairs with Convolutional Neural Networks".

I also suspect a different part of cognition is used when humans are asked to
recreate a "fire truck" than when humans are asked to classify a "fire truck"
from a "car". The former seems closer to using memory ("what did the last five
fire trucks I saw look like?"). A fairly recent addition to deep nets is
making use of memory:
[http://arxiv.org/pdf/1410.5401.pdf](http://arxiv.org/pdf/1410.5401.pdf)
"Neural Turing Machines". So the difference may quickly become less
significant.

------
a-dub
Cool! Reminds me of "Shazam Decoys" where barely audible or inaudible energy
can be added to a signal to fool Shazam into identifying it as the wrong
track.

I've often thought there would be an awesome opportunity in there to make a
hilarious app that catches cheaters during the music round of Pub Quiz.

~~~
EdwardDiego
> I've often thought there would be an awesome opportunity in there to make a
> hilarious app that catches cheaters during the music round of Pub Quiz.

The best pub quizzes dim the lights so that smartphone cheats beam forth their
sneakiness.

------
adrusi
The pattern-based illusions are actually quite interesting, almost artistic.
Half of them are recognizable to humans; the other half at least make sense
when identified. I wonder if we can automate the production of postmodern art
:)

~~~
jeffclune
In fact, we submitted them to an art competition and they were accepted and
put up in a museum!

[http://www.evolvingai.org/share/20150213_184537.jpg](http://www.evolvingai.org/share/20150213_184537.jpg)

------
valine
Here's a link to Yosinski's paper.

[http://yosinski.com/media/papers/Nguyen__2014__arXiv__Deep_N...](http://yosinski.com/media/papers/Nguyen__2014__arXiv__Deep_Neural_Networks_are_Easily_Fooled.pdf)

------
aclinnovator
I think that the major problem with CV is that it only recognizes images in
isolation from each other. Humans understand what they are looking at by
finding the concept that lies at the intersection of all the small ideas in
the image. For example, a human would recognize a keyboard because it
contains a "means of input" on which there are "symbols", specifically the
"alphabet", arranged in a "logical format" ("QWERTYUIOP"), which they know is
the sign of a keyboard. If a human were to see a keyboard that looks
different from most, they can still make the inference that it is a keyboard
by understanding the underlying concepts of what they see.

On the other hand, a computer mechanically relates the specific format of a
keyboard to the word "keyboard". It fuzzy-matches the pixels of images to
extract the object in the image, not the individual ideas implicit in the
image.

Computer Vision needs more depth to actually be considered vision.

------
zk00006
If you ever work with classifier training, these results are not surprising.
You can take all the false positives generated by a classifier and average
them, and you will come up with an image that resembles the object to be
recognized.
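
In numpy terms (with a random stand-in for the stack of false positives):

    import numpy as np

    # Stand-in for a stack of images the classifier wrongly accepted.
    false_positives = np.random.rand(500, 32, 32)

    # Averaging them yields a blurry prototype of what the classifier
    # "thinks" the object looks like.
    prototype = false_positives.mean(axis=0)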

------
deeviant
I know many here know this already, but I'll say it anyway, since I've run
into people with a misconception related to this: the particular
misidentifications are specific to the algorithm and training set used; it's
not as if all computers recognize those blocks as a cheetah or whatnot.

We use ML-based computer vision at my work, so I have a bit of experience
here. I think the biggest practical takeaway from the observation that ML can
give some wonky results is that ML systems can be a real PITA to debug.

------
zk00006
The whole idea is to tear down the algorithm, synthesize an unnatural image
that triggers the required responses, and pass it to the classifier. There is
no direct link to security risks here.

------
ChuckMcM
One of the interesting things was the 'white noise' that was identified as
various animals. It reminded me of people looking at noise and "seeing" data.
Which for me suggests that at some level this isn't completely an artifact. If
algorithms modeled so closely on human perception are susceptible to this
sort of thing, humans probably are too. Perhaps that explains reports of
people seeing things in the electronic 'snow' pattern of a disconnected TV?

~~~
ars
The difference is that humans know they are seeing noise, they don't claim 99%
confidence in their image, but rather a very low confidence.

~~~
Udik
It's probably very naive, but that makes me wonder whether these neural nets
are trained to recognize noise or meaningless images as such. If we train a
system to tell us what an image represents, the system will do its best to
classify it into one of the existing categories. But having low confidence in
what an image represents is not the same as having high confidence that it
doesn't represent anything. So maybe we should train the networks to give
negative answers, like "I'm totally confident that this image is just noise".
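
A minimal sketch of that idea, assuming scikit-learn and toy feature vectors
rather than real images: add an explicit "noise" class to the training set,
so the classifier can produce a confident negative answer.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)

    # Toy "images": two real classes around distinct means, plus pure noise.
    class_a = rng.normal(+2.0, 1.0, size=(200, 64))
    class_b = rng.normal(-2.0, 1.0, size=(200, 64))
    noise = rng.normal(0.0, 4.0, size=(200, 64))

    X = np.vstack([class_a, class_b, noise])
    y = np.array([0] * 200 + [1] * 200 + [2] * 200)  # 2 = "just noise"
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # A fresh noise sample can now be rejected outright instead of being
    # shoehorned into class 0 or 1 with spuriously high confidence.
    probe = rng.normal(0.0, 4.0, size=(1, 64))
    print(clf.predict(probe), clf.predict_proba(probe).round(3))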

------
c-slice
This isn't very groundbreaking; they're simply demonstrating the inherent
weaknesses in neural net learning. We've known for years that neural nets
have inherent gaps in their training, and this is just taking advantage of
that. This isn't really an issue in a real-life scenario, as you wouldn't be
able to determine, without thousands of iterations, which images produce
faulty results.

~~~
jpeterson
Well, obviously they should've checked with you before publishing their paper.

------
appleflaxen
This makes me wonder if we might be flirting with the computational equivalent
of autism.

Could autistic children have learning impairment due to their inability to
correctly sort/segregate stimuli, in the same way that these neural networks
generate high-confidence false-positives?

------
bwross
If you look really closely at the noisy images, you can see little blotches
that vaguely resemble the things the computer recognized them as.

~~~
Coincoin
Yeah, they say the algorithm erroneously sees an armadillo. But wait, I
actually see an armadillo, and I actually see a centipede.

------
uncoder0
Not directly related, but: I was at a security-related convention and
overheard some people talking about an image that, when occupying <3/4 of a
frame, will crash any digital camera (phone, DSLR, IP camera). Does anyone
know any more information about this image and effect? I imagine it's a bug
in some low-level firmware of a common IC for digital photography DSP, but
I'm very unfamiliar with digital cameras. It also could have been complete
bunk, because I've not heard of it since and it wasn't being showcased at the
convention.

~~~
Houshalter
Could be related to this?
[http://en.wikipedia.org/wiki/EURion_constellation](http://en.wikipedia.org/wiki/EURion_constellation)

~~~
TazeTSchnitzel
It's most likely not the constellation itself, but the other mechanism:

[https://en.wikipedia.org/wiki/EURion_constellation#Other_ban...](https://en.wikipedia.org/wiki/EURion_constellation#Other_banknote_detection_mechanisms)

In particular, have a look at this:

[http://www.cl.cam.ac.uk/~sjm217/projects/currency/](http://www.cl.cam.ac.uk/~sjm217/projects/currency/)

------
justintbassett
I'm not sure I really understand the implications of this. It doesn't seem
like this is an inherent weakness in computer recognition of images, but
instead a weakness of a particular DNN? Or am I way off base?

------
higherpurpose
How effective will this be against Intel's "RealSense 3D cameras"? Is it, as
I've already assumed, just a matter of time before that technology can be
fooled too?

------
jahnu
> But computers don’t process images the way humans do, Yosinski said.

This means that come the singularity AIs will have to use AI specific CAPTCHAs
in order to distinguish between humans (aided by dumb computers) and other
AIs.

~~~
pavel_lishin
_Which of the following would you most prefer? A: a puppy, B: a pretty flower
from your sweetie, or C: a large properly formatted data file?_

~~~
msandford
Welp, it's official. I'm a computer.

~~~
anon4
No, you're the maintenance guy. Computers don't care about formatting. The
correct answer is A, because "A" is drawn with just three straight lines and
computers like straight lines.

~~~
pavel_lishin
But it arguably takes more data to encode an A - three lines, which means six
endpoints, plus whatever signifies the command "draw a straight line." At
minimum, that's seven pieces of data.

A C, however, can be drawn as half of a circle - one command to draw an arc, a
center point, a radius, and the start and stop angles. Five pieces of data.

(This is assuming, of course, that computers prefer minimal amounts of data.
If that's wrong, then the computer would obviously prefer B. You need more
data to describe it.)

~~~
Gifford
Two of the A endpoints are the same, so they don't need to be encoded twice.

------
lawlessone
Machine pareidolia?

On a serious note, couldn't we keep training the same DNNs using these white
noise images as negative examples?

~~~
Houshalter
This paper tried that with positive results:
[http://arxiv.org/abs/1412.6572](http://arxiv.org/abs/1412.6572)
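
Roughly, the paper's fix amounts to generating adversarial examples on the
fly and training on them too. A numpy sketch of that idea on a logistic
regression (the data, epsilon, and learning rate here are made up; the paper
works with deep nets):

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.normal(size=(400, 32))
    y = (X.sum(axis=1) > 0).astype(float)  # toy labels
    w = np.zeros(32)
    lr, eps = 0.1, 0.25

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(200):
        p = sigmoid(X @ w)
        grad_w = X.T @ (p - y) / len(y)    # loss gradient in the weights
        grad_x = np.outer(p - y, w)        # loss gradient in the inputs
        X_adv = X + eps * np.sign(grad_x)  # "fast gradient sign" examples
        p_adv = sigmoid(X_adv @ w)
        grad_w_adv = X_adv.T @ (p_adv - y) / len(y)
        w -= lr * (grad_w + grad_w_adv) / 2  # train on clean + adversarial

    print(((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean())  # clean accuracy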

------
compbio
It is good to know that they need access to a lot of predictions from a net
before they can create an image that will "fool" the net but look alien to
humans. Secondly, this doesn't account for ensembling: "fool me once, shame on
you. Fool me twice...". Since the images are crafted for a single net, a
majority vote should not be fooled by them. I suspect this effect rapidly
goes away when adding more nets (which is basically industry-standard
practice to increase accuracy).

Furthermore, I see the security concerns, but I figure this is far from a
practical attack. Deep learning classifiers do not act as gatekeepers: you
don't have much to gain from a single faulty classification. You won't be
granted access to secret information just because you happen to look like the
CEO.
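
A majority vote is cheap to implement, too. A hypothetical sketch, where
`models` stands in for several independently trained nets, each exposing a
`predict` method:

    import numpy as np

    def ensemble_predict(models, x):
        # An image crafted against one member must fool most of the
        # ensemble to swing the vote.
        votes = [m.predict(x) for m in models]
        labels, counts = np.unique(votes, return_counts=True)
        return labels[np.argmax(counts)]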

~~~
userbinator
_Furthermore, I am seeing the security concerns, but I figure this is far from
a practical attack_

Perhaps it's a sign of the times that almost every discovery that could
possibly be related to security in some way gets framed that way. I have a
feeling that if this were a decade or two ago, the sentiment would be very
different. ("Can you figure out what a computer thinks these images are?")

Also, the image labeled "baseball" immediately reminded me of a baseball...

------
humanfromearth
They can only fool the DNN because they know its weights. In theory, you
could do the same for people's brains.

As long as you don't publish the specs of your net, you should be fine, I
guess.

~~~
dsr_
If you have an oracle, you don't need to know the net specs.

An oracle, here, would be any version of the system that you can query against
repeatedly without suffering too much of a penalty.

------
FLGMwt
Who decided to name it a "lesser panda"? And given the alternative names for a
red panda, firefox or red cat-bear, why would someone _choose_ "lesser panda"?

~~~
bediger4000
That is a question for the ages. Who decided on "cool ranch flavor" for
Doritos? And what's a "cool ranch"? Or "blue raspberry", a common flavor for
candy these days, or "TV Spokesmodel"
([https://www.google.com/?gws_rd=ssl#q=TV+spokesmodel](https://www.google.com/?gws_rd=ssl#q=TV+spokesmodel)).
There are lots of weird names that get applied to stuff, and nobody has a say
in those names.

------
valine
Does anyone have higher resolution versions of the images used in the article?

~~~
jeffclune
Here they are:
[http://www.evolvingai.org/fooling](http://www.evolvingai.org/fooling)

There is also a video summary of the paper there.

------
Houshalter
A paper came out that explains this effect and a method of minimizing it:
[http://arxiv.org/abs/1412.6572](http://arxiv.org/abs/1412.6572)

Basically neural networks and many other machine learning methods are highly
linear and continuous. So changing an input just slightly should change the
output just slightly. If you change all of the inputs slightly in just the
right directions, you can manipulate the output arbitrarily.

These images are highly optimized for this effect and unlikely to occur by
random chance. Adding random noise to images doesn't seem to cause it, because
for every pixel changed in the right direction, another is changed in the
wrong direction.

The researchers found a quick method of generating these images, and found
that training on them improved the net a lot, and not just on the adversarial
examples.
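
A quick numpy illustration of that last point (dimensions and nudge size are
made up): on a linear score w.x, a change aligned with sign(w) accumulates
across every input, while random noise of the same size mostly cancels.

    import numpy as np

    rng = np.random.RandomState(0)
    d, eps = 100000, 0.01       # number of "pixels", per-pixel nudge size
    w = rng.normal(size=d)      # weights of a linear classifier

    aligned = eps * np.sign(w)  # every pixel changed in the right direction
    random_ = eps * rng.choice([-1, 1], size=d)

    print(w @ aligned)  # ~ eps * sum(|w|): large, and grows with d
    print(w @ random_)  # ~ 0: the individual nudges cancel out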

~~~
mlmonkey
> Basically neural networks and many other machine learning methods are highly
> linear

No they're not! You introduce non-linearities like the sigmoid or tanh to make
them highly non-linear.

~~~
neuralk
I was thinking the same thing until I scanned through the paper linked above.
While neural networks are indeed non-linear, some NNs can still exhibit what
amounts to linearity and suffer from adversarial linear perturbations. Here
is an example, from the paper, of the linearity in NNs that the authors are
considering:

> The linear view of adversarial examples suggests a fast way of generating
> them. We hypothesize that neural networks are too linear to resist linear
> adversarial perturbation. LSTMs (Hochreiter & Schmidhuber, 1997), ReLUs
> (Jarrett et al., 2009; Glorot et al., 2011), and maxout networks (Goodfellow
> et al., 2013c) are all intentionally designed to behave in very linear ways,
> so that they are easier to optimize. More nonlinear models such as sigmoid
> networks are carefully tuned to spend most of their time in the non-
> saturating, more linear regime for the same reason. This linear behavior
> suggests that cheap, analytical perturbations of a linear model should also
> damage neural networks.

~~~
drostie
Right. The basic idea is something like the transition (manifolds) from doing
special relativity to general relativity. The special "linear" says that given
two inputs x and y to a function f, f is linear if f(x + y) = f(x) ⊕ f(y) for
two operations +, ⊕. The general "linear" says that f(x + δx) = f(x) ⊕ δf(x,
δx) for some small perturbations δx in the vicinity of x.

If x is a bit-vector then this can be as simple as saying "flip one bit of the
input and here's how to predict which output bits get flipped." When you're
building a hash function in cryptography, you try to push the algorithm
towards a non-answer here: about half the bits should get flipped, and you
shouldn't be able to predict which they are. But of course there's a security
vulnerability even if + and ⊕ are not XORs.

Resisting "adversarial perturbation" in this context basically means that
neural nets need to behave a bit more like hash functions, otherwise they
will confuse the heck out of us. The problem is that if you just took the
core lesson of hash functions -- create some sort of "round function" `r` so
that the result is r(r(r(...r(x, 1)..., n - 2), n - 1), n) -- it seems like
it'd be really hard to invent learning algorithms to tune it.
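
For a concrete toy version of the round-function idea (the mixing constants
below are illustrative, borrowed from common 32-bit integer-hash finalizers,
not from any real cipher): flip one input bit and roughly half the output
bits flip, in a hard-to-predict pattern.

    def round_fn(x, k):
        # One made-up 32-bit mixing round: add the round key, then
        # xor-shift and multiply to spread the difference around.
        x = (x + k) & 0xFFFFFFFF
        x ^= x >> 16
        x = (x * 0x45D9F3B) & 0xFFFFFFFF
        x ^= x >> 16
        return x

    def toy_hash(x, rounds=4):
        for k in range(1, rounds + 1):
            x = round_fn(x, k)
        return x

    a, b = toy_hash(12345), toy_hash(12345 ^ 1)  # inputs differ in one bit
    print(bin(a ^ b).count("1"), "of 32 output bits differ")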

------
kingkawn
Maybe this is the algorithm making art

~~~
rndn
I quite like this thought, though art is about combining _known_ patterns in
a novel way. Here they create unknown patterns to evoke associations with
known but unrelated patterns. It's kind of reverse art.

~~~
kingkawn
I think art is about the relationship of the viewer to the work, not about the
explanation for the work's creation. The viewer is the place where it all goes
down.

------
z5h
This raises an interesting question. Do we want computers to see "correctly",
or to see how we see?

Would a preferred computer vision system experience the Checker shadow
illusion?
[http://en.wikipedia.org/wiki/Checker_shadow_illusion](http://en.wikipedia.org/wiki/Checker_shadow_illusion)

If yes, computer vision will be as fallible as ours. If no, then there will
always be examples, like the ones presented here, where computers see
something different from what humans see.

~~~
maxerickson
A computer vision system can have multiple ways of processing an image. So at
the limit, it could interpret a scene in terms of what a human sees and also
have a separate, better understanding of the scene.

~~~
JoeAltmaier
The OP shows that computers DO NOT have a better understanding. It's evident
they have no understanding at all; they are simply doing math on pixels and
latching on to coincidental patterns of color or shading.

People recognize things by building a 3D model in their head, then comparing
that to billions of experiential models, finding a match, and then using
cognition to test that match. "Is that a bird? No, it's just a pattern of dog
droppings smeared on a bench. Ha ha!"

~~~
maxerickson
How could I have better put _So at the limit, it could_?

I meant to talk about what some hypothetical future system could do (which I
think was a reasonable context given the comment I replied to), not to
characterize current systems.

~~~
JoeAltmaier
Sorry, I re-read and see that.

To get there, computers will clearly have to utterly change their approach. A
cascaded approach of quick math followed by a more 'cognitive' pass over
possible matches could definitely improve on the current state of affairs.

------
karpathy
This work has led to some unfortunate misconceptions.

In particular, this weakness has nothing to do with Computer Vision and also
nothing to do with deep learning. They only break ConvNets on images because
images are fun to look at and ConvNets are state of the art. But at its core,
the weakness is related to the use of linear functions. In fact, you can
break a simple linear classifier (e.g. a Softmax classifier or Logistic
Regression) in just the same way. And you could similarly break speech
recognition systems, etc. I covered this in CS231n in the
"Visualizing/Understanding ConvNets" lecture, slides around #50
([http://vision.stanford.edu/teaching/cs231n/slides/lecture8.p...](http://vision.stanford.edu/teaching/cs231n/slides/lecture8.pdf)).

The way I like to think about this is that for any input (e.g. an image),
imagine there are a billion tiny noise patterns you could add to the input.
The vast majority of those billions are harmless and don't change the
classification, but given the weights of the network, backpropagation allows
us to efficiently compute (with dynamic programming, basically) exactly the
single most damaging noise pattern out of all of them.
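
To make that concrete, here is a hedged sketch (synthetic data, made-up
epsilon) of breaking a plain logistic regression: for a linear score w.x + b,
the most damaging bounded perturbation is simply eps * sign(w) pushed toward
the other class, and no backprop machinery is needed to find it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)
    d = 100
    X = np.vstack([rng.normal(+0.25, 1.0, size=(200, d)),   # class 1
                   rng.normal(-0.25, 1.0, size=(200, d))])  # class 0
    y = np.array([1] * 200 + [0] * 200)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    x = X[0]                      # a class-1 example
    w = clf.coef_[0]
    x_adv = x - 0.5 * np.sign(w)  # nudge every feature against class 1

    # Each feature moves by only 0.5 (half the noise level), yet the
    # prediction typically flips from 1 to 0.
    print(clf.predict([x]), clf.predict([x_adv]))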

All that being said, this is a concern and people are working on fixing it.

~~~
yosinski
> This work has led to some unfortunate misconceptions.

Agreed; the weaknesses reported should definitely not be taken to affect only
convnets or only deep learning. Ian's "Explaining and Harnessing Adversarial
Examples" paper (linked by @Houshalter) should be required reading :).

> backpropagation allows us to efficiently compute (with dynamic programming,
> basically) exactly the single most damaging noise pattern out of all
> billions.

True. By using backprop, one can easily compute exact patterns of pixelwise
noise to add to an image to produce arbitrary desired output changes. However,
it's an important detail that most of the images in the paper (all except the
last section) were produced _without_ knowledge of the weights of the network
and without using backpropagation at all. This means a would-be adversary need
not have access to the complete model, only a method of running many examples
through the network and checking the outputs.
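
In other words, a loop as simple as the following sketch suffices in
principle; `oracle` here is a hypothetical stand-in for "run the image
through the network and read off the confidence for the target class" (we
actually used evolutionary algorithms, with direct and CPPN encodings, rather
than this bare hill climb):

    import numpy as np

    rng = np.random.RandomState(0)

    def oracle(image):
        # Placeholder score; in the real setting this is the DNN's
        # confidence in the target class for this image.
        return -np.abs(image - 0.7).mean()

    image = rng.rand(32, 32)
    best = oracle(image)
    for _ in range(10000):
        candidate = image + rng.normal(scale=0.05, size=image.shape)
        candidate = np.clip(candidate, 0, 1)
        score = oracle(candidate)
        if score > best:  # keep any mutation the network scores higher
            image, best = candidate, score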

> ...there are billion tiny noise patterns you could add to the input.

Perhaps because the CPPN fooling images were created in a different way
(without using backprop), they seem to fool networks in a more robust way than
one might think. Far from being a brittle addition of a very precise,
pixelwise noise pattern, many fooling images are robust enough that their
classification holds up even under rather severe distortions, such as using a
cell phone camera to take a photo of the pdf displayed on a monitor and then
running it through an AlexNet trained with a different random seed (photo
cred: Dileep George):

[http://s.yosinski.com/jetpac_digitalclock.jpg](http://s.yosinski.com/jetpac_digitalclock.jpg)
[http://s.yosinski.com/jetpac_greensnake.jpg](http://s.yosinski.com/jetpac_greensnake.jpg)
[http://s.yosinski.com/jetpac_stethoscope.jpg](http://s.yosinski.com/jetpac_stethoscope.jpg)

I thought this was surprising the first time I saw it.

~~~
JoeAltmaier
Wait - they didn't use knowledge of the neural network internal state to
calculate these patterns? Does that mean they could create equivalent images
for human beings? What would those look like!

~~~
yosinski
No, but we did make use of (1) a large number of input -> network -> output
iterations, along with (2) precisely measured output values to decide which
input to try next. It may not be so easy to experiment in the same way on
natural organisms (ethically or otherwise).

Of course, if you're as clever as Tinbergen, you might be able to come up with
patterns that fool organisms even without (1) or (2):

[https://imgur.com/a/ibMUn](https://imgur.com/a/ibMUn)

~~~
JoeAltmaier
Perhaps a single experiment on millions of different people? A web experiment
of some kind? "Which image looks more like a panda?" and flash two images on
the screen.

~~~
yosinski
That's a good idea, though note that there's a difference between asking
"Which of these two images looks more like a panda?" and "Which of these two
images looks more like a panda than a dog or cat?". The latter is the
supervised learning setting used in the paper, and generally could lead to
examples that look very different than pandas, as long as they look _slightly_
more like pandas than dogs or cats. The former method is more like
unsupervised density learning and could more plausibly produce increasingly
panda-esque images over time.

A sort of related idea was explored with this site, where millions (ok,
thousands) of users evolve shapes that look like whatever they want, but
likely with a strong bias toward shapes recognizable to humans. Over time,
many common motifs arise:

[http://endlessforms.com/](http://endlessforms.com/)

~~~
tripzilch
Problem is, even if you succeed and end up with a fabricated picture that
fools human neural nets into believing it's a picture of a panda, how would
you tell it's not _really_ a picture of a panda?

You'd need another classifier to tell you "nope it's actually just random
noise and shapes" ... hm.

~~~
JoeAltmaier
Probably not hard - just engage the higher cognitive functions. "Does it look
like noise? Yeah." Or just close one eye and look again.

~~~
tripzilch
I think you missed my somewhat deeper philosophical point :)

Who gets to decide what is _really_ a picture of a panda?

If we'd managed to craft a picture that could with very high certainty trick
human neural nets (for the sake of argument, including those higher cognitive
functions) into believing something is a picture of a panda, _"except it
actually really isn't"_, what does that even mean?

Human insists it's a picture of a panda, computer classifier maintains it's
noise and shapes.

Who is right? :)

~~~
JoeAltmaier
Interesting, sure. But I started out wondering if some obviously-noise picture
could be found that fooled humans, at least at first glance. "Hey, a panda!
Wait, what was I thinking, that's just noise!" It would be weird and cool, on
the order of the dress meme, etc., but much more so.

Kind of like the memes in Snow Crash: ancient forgotten symbols that make up
the kernel of human thought.

------
msandford
This is some excellent research!

It reminds me of the CV dazzle anti-facial-recognition makeup that made the
rounds a while ago:
[http://www.theatlantic.com/features/archive/2014/07/makeup/3...](http://www.theatlantic.com/features/archive/2014/07/makeup/374929/)

This definitely reinforces my belief that having humans in the loop is not
only desirable but necessary. For the majority of human history minus a few
years, you could only be accused of a crime by another human being. I'd like
to see the trend of automated "enforcement" reversed, and to codify into law
that you MUST be accused by a human being.

If everyone is breaking so many laws that the police and courts can't keep up
it doesn't mean that humanity is broken. It means that the law has gotten so
far out of sync with humanity that the law is broken. People make the laws,
not the other way around.

~~~
toomuchtodo
> If everyone is breaking so many laws that the police and courts can't keep
> up it doesn't mean that humanity is broken. It means that the law has gotten
> so far out of sync with humanity that the law is broken. People make the
> laws, not the other way around.

The world would be a much better place if more people realized this.

~~~
mikeash
This always frustrates me when discussions of plea bargaining and the right to
trial come up, and the argument is given that plea bargaining is a necessity
because the courts would be horribly overloaded if every case went to trial.

If the system doesn't have the resources to give every accused criminal a fair
trial, then either you're making too many criminals, the system doesn't have
enough resources, or both. Bypassing trials is just a way to cover your ears
and shout "la la la" to ignore the problem.

~~~
wmil
This is a big issue in the US and it actually goes back to the Warren court.
They issued a long series of rulings making it difficult to prosecute cases,
without worrying about the consequences.

By the 70s crime had skyrocketed and it was clear that they had gone too far.
But instead of issuing a mea culpa and reexamining past rulings, the various
courts started allowing prosecutors to claim broad new powers and take
extremely aggressive tactics.

By this point there's no real way to fix it. The legal system is based
strongly on precedent. It can't just undo major rulings of the past and
replace them with something sane.

~~~
bsder
> They issued a long series of rulings making it difficult to prosecute cases,
> without worrying about the consequences.

Um, so? Then we need to allocate more resources to prosecute cases.

If I am falsely accused, I want my day in court. And I want it to be fair. The
current system has problems on both fronts.

~~~
msandford
> If I am falsely accused

The problem is that there's no way for anyone else to determine _a priori_ if
your accusation is false or true. That's what the whole presumed innocent
until proven guilty thing is about.

In reality, if you are accused AT ALL, you want your day in court and you want
it to be fair. Even if you had committed a crime, if the police did something
they're not supposed to, that needs to get sussed out in court and you should
go free.

Half the point of a trial is to make sure that nothing unfair is done by the
investigators (police, prosecutor, etc). This is to keep their power in check
so that they'll follow the rules. Otherwise it could get mighty tempting to
fudge something a little bit "because we KNOW this is the guy!" and "we need
to do the right thing."

