Hacker News

"I don't understand how only allowing signed execution would help avoid this problem. Like you said, anyone could pay to get their company listed as a 'trusted' entity."

The purpose of signing isn't to guarantee that an entity is anything, but to allow the user to absolutely and decisively rule on what code is allowed to execute. If the Russian Mob sneaks some code from "G0ogle, Ink." onto my machine by tricking me into authorizing that cert, I can just de-authorize the cert and then it all dies in a fire.

When I say signing, I don't necessarily mean the app-store feudal model. I mean an inverted version of that -- where the user decides what runs by approving certs, signing them with some kind of master key.
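A minimal sketch of that inverted model (all names here are hypothetical, and real systems would use public-key signature verification rather than raw fingerprints): the user keeps an allowlist of approved signing-cert fingerprints, the loader refuses anything not on it, and de-authorizing a cert is just deleting its entry.

```python
import hashlib

class UserTrustStore:
    """Toy user-controlled allowlist of signing-cert fingerprints."""

    def __init__(self):
        self.approved = set()

    def fingerprint(self, cert_bytes):
        return hashlib.sha256(cert_bytes).hexdigest()

    def approve(self, cert_bytes):
        # The user, not a vendor, decides which certs may run code.
        self.approved.add(self.fingerprint(cert_bytes))

    def revoke(self, cert_bytes):
        # De-authorizing a cert immediately blocks everything signed by it.
        self.approved.discard(self.fingerprint(cert_bytes))

    def may_execute(self, cert_bytes):
        return self.fingerprint(cert_bytes) in self.approved


store = UserTrustStore()
mob_cert = b"-----FAKE CERT: G0ogle, Ink.-----"

store.approve(mob_cert)              # tricked into authorizing it
print(store.may_execute(mob_cert))   # True

store.revoke(mob_cert)               # user de-authorizes the cert
print(store.may_execute(mob_cert))   # False
```

The point of the sketch is that trust decisions live in a structure the user owns outright, so revocation is a one-step local operation rather than a request to a vendor.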

"Poof, no more program debuggers, profilers, no more device drivers, no more third party file systems, no more"

You can debug Java pretty effectively; there are great toolchains for that. I agree that direct ASM may be required for a few things like drivers, but those are going to be the exceptions, not the rule.

"How does that help my mom? She's just going to call me when the computer asks her "weird questions about keys and permissions"."

"Again, why would the user WANT to be in the drivers seat? They have no clue how to drive the car!"

Freedom and control are things you should have, but should not be forced to exercise. It should be possible to leave them alone and just trust one or more vendors. This is a UI/UX issue.

With things like iOS I don't have the option.

"That only tackles the problem of cleanup, which is a separate problem. By that time, the malware is already on the system and it's sent your credit card and documents to the bad guys."

Absolute security perfection isn't possible, but I think huge improvements can be made. Don't let the perfect be the enemy of the good.

Data leakage and social engineering are particularly thorny because they're really only half technical problems. The meat sack using the machine is always going to be a weak point in any security model. But if the machine were secure, it would help.

We're going around in circles I think.

The entire problem is that the users have no idea, prior to installing the software, whether it's legit or malware. I don't see anything in what you've proposed that solves the root problem. Yes, we can look at peripheral problems like cleanup and revoking certificates, but those only affect users AFTER they've already made the choice of installing a particular piece of software.

> We're going around in circles I think.

Just you, my friend. If you recall, here is the point you were challenging:

> this is a band-aid over the fact that OSes have terrible permission separation and application isolation. If OSes were better architected from a security point of view, it would be substantially less of a problem.

Would you now concede this point?

> If you recall, here is the point you were challenging:

Yes, and I didn't receive any information that would lead me to believe that applying his/her suggestions would substantially tackle the root problem. All process isolation does is push the problem out further into the application side of things. Now the user has to micromanage the data flow in between applications. The root problem has very little to do with OS architecture, and I'm happy to be convinced otherwise.

>Would you now concede this point?

Okay. If you insist. I have no desire to "win" the argument. It's merely idle chit chat for me. My code's compiling ;)

The idea isn't to solve that problem, but to limit the damage significantly.

I should be able to give a Russian mob hacker on crack access to my machine without worrying too much about them doing anything I don't give them permission to do.
