I'm not averse to voting machines, provided:

* The code is publicly available

* The code is publicly verifiable - e.g., every voter is given a 'voting receipt' with a non-identifying UUID and their recorded choices, which they can look up online at any time.
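To make the second point concrete, here is a minimal sketch of one way such a receipt could work - hash-based rather than a bare UUID, and every name below is hypothetical, not a real system:

    import hashlib
    import secrets

    # Public bulletin board: receipt id -> recorded ballot. In a real
    # system this would be a published, append-only dataset.
    bulletin_board = {}

    def record_vote(ballot: str):
        """Record a ballot; hand the voter a non-identifying receipt.
        The salt stays with the voter, so the receipt id alone says
        nothing about who cast the ballot."""
        salt = secrets.token_hex(16)
        receipt_id = hashlib.sha256((salt + ballot).encode()).hexdigest()
        bulletin_board[receipt_id] = ballot
        return receipt_id, salt

    def verify(receipt_id: str, salt: str, ballot: str) -> bool:
        """The voter later checks the board still shows their vote."""
        expected = hashlib.sha256((salt + ballot).encode()).hexdigest()
        return expected == receipt_id and bulletin_board.get(receipt_id) == ballot

The salt binds the receipt to the ballot without tying either one to the voter.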




This is a classic, but how does it relate to my post? Are you pointing out that a trojan could still be hidden in public code?

If so, I agree. But I think making the code public reduces the risk of it being malicious. For one, you've forced any malware to be hidden as a trojan rather than sitting in the open. For another, the code would be under massive scrutiny (hopefully without maintainers OKing changes without actually reading them, à la OpenSSL).


Read again. You cannot trust code, and making it public is almost irrelevant in this regard. You need the full resources of an "emerging" country like China to fully develop your own (e.g. Loongson).

"Moral

The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect."
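To see why source-level scrutiny doesn't save you, here is a toy version of the attack in Python - a hypothetical stand-in for the C compiler, where "compiling" just returns the source that would actually get built:

    import inspect

    def evil_compile(source: str) -> str:
        """Toy stand-in for Thompson's compromised compiler."""
        # Trigger 1: compiling the login program -> plant a backdoor
        # the login source never contained.
        if "def check_password" in source:
            return source + "\n# injected: user 'ken' always passes"
        # Trigger 2: compiling the (clean) compiler -> emit this whole
        # compromised compiler instead, so the trojan reproduces even
        # though the compiler's published source is clean.
        if "def clean_compile" in source:
            return inspect.getsource(evil_compile)
        return source

    clean_login = "def check_password(user, pw):\n    return pw == lookup(user)"
    clean_cc = "def clean_compile(source):\n    return source"

    print(evil_compile(clean_login))  # backdoor appears
    print(evil_compile(clean_cc))     # output is evil_compile again

Auditing clean_login and clean_cc turns up nothing; the trojan lives only in the binary you used to build them.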

I would be awfully surprised if people who made millions (from Verisign) by "looking" at SSL had made that kind of mistake (effectively freedom-frying the French National Guard).


Certainly, everything can be hacked. But if the code runs on standard off-the-shelf hardware under OpenBSD or an equivalently open *nix distro, and it is interpreted code executed by that (open) language's standard interpreter, then the chances of someone managing to introduce a hack are heavily reduced, thanks to the number of eyeballs that can verify the code is safe.

I agree that you have to look at every element of the chain, but my point is that you can construct such a trusted chain that goes all the way from the hardware to the vote-tallying software. There is still some potential for hackery in the hardware, but that could also be ruled out by designing and implementing some open hardware - a project which, I'm sure, would gather plenty of support from the open source community.
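As a sketch of what verifying such a chain could look like - the file names and digests below are placeholders, not a real scheme:

    import hashlib

    # Digests published out-of-band (placeholder values), one per
    # layer of the chain, from firmware up to the tallying script.
    PUBLISHED = {
        "firmware.bin": "ab12...",
        "kernel.img": "cd34...",
        "interpreter.bin": "ef56...",
        "tally_votes.py": "0789...",
    }

    def sha256_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def verify_chain() -> bool:
        """Refuse to run the election software unless every layer
        matches its independently published digest."""
        return all(sha256_of(path) == digest
                   for path, digest in PUBLISHED.items())

The hashing tool and the published digests are themselves links in the chain, of course - which is why you want many independent verifiers rather than one.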


You're still looking at it as a technological problem. It isn't. It is a social (moral) problem, and thus cannot be solved by any interpreter, BSD fork (one that deceivingly presents itself to the BIOS as Windows ;), etc.


The social problem exists whether or not you have e-voting. It has been solved to a satisfactory level already by having election observers, etc.

One of the cool things about social problems is that they don't need exact solutions. If you can solve things for 99.9% or even 99% of cases, that's considered good enough when it comes to people.


You're acting like risk is binary. It isn't.

You can't fully trust code unless you've created it entirely yourself.

That doesn't mean that code you didn't fully create yourself but can scrutinize carries the same level of risk as code you didn't create that is kept secret.

Nor does Thompson say it is in his article - read it again.


Good point. See also the "Underhanded C Contest": http://underhanded.xcott.com/?page_id=2



