
I don't have a fully baked design in mind, but I could probably come up with one if I had a long time to focus on it. I do have a general framework in mind for how one might approach the question.

First and foremost, security should be a primary OS design objective, and it should go deep into the design. It should be something you think about before secondary concerns like the process model, binary format, driver framework, or memory management.

Every single executing piece of code should be signed (with self-signing by the user an option, of course). In that respect the app store model has it right, but I'd like to see something that ultimately puts the keys in the hands of the user. That means a keyring is something you lay down alongside the task manager, memory manager, etc. It's a core OS function. The ability to administer the keyring and control permissions is also a core OS function.
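To make that concrete, here's a minimal sketch of a user-controlled keyring with a load-time signature check. It assumes Ed25519 via the Python "cryptography" package, and every name in it (Keyring, may_execute, the key IDs) is invented for illustration, not any existing OS API:

    # Sketch of a user-controlled keyring: the user decides which signing
    # keys are trusted, and nothing unsigned (or signed by an untrusted key)
    # is allowed to run. Names are illustrative only.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    class Keyring:
        def __init__(self):
            self.trusted = {}  # key_id -> Ed25519PublicKey

        def trust(self, key_id, public_key):
            """The user marks a signing key as trusted."""
            self.trusted[key_id] = public_key

        def revoke(self, key_id):
            """De-authorize a key; anything signed by it stops being runnable."""
            self.trusted.pop(key_id, None)

        def may_execute(self, code_bytes, key_id, signature):
            """Load-time check: only signed code from a trusted key may run."""
            public_key = self.trusted.get(key_id)
            if public_key is None:
                return False
            try:
                public_key.verify(signature, code_bytes)
                return True
            except InvalidSignature:
                return False

    # Self-signing by the user is just another key in the same ring.
    user_key = ed25519.Ed25519PrivateKey.generate()
    ring = Keyring()
    ring.trust("user", user_key.public_key())

    code = b"\x7fELF...some-program"
    sig = user_key.sign(code)
    assert ring.may_execute(code, "user", sig)

    ring.revoke("user")
    assert not ring.may_execute(code, "user", sig)

The revoke path is the important bit: de-authorize the key and the code simply stops being allowed to run.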

Every function call that crosses a context boundary should do a permission check against the certificate of the executing code. The right way to design this would be to make it work first, then figure out how to make it fast without compromising security. I think fundamental innovation would be needed here; I'm not sure exactly how this should work.
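As a toy picture of the correctness-first version: every cross-context entry point is gated on the permissions attached to the calling context's certificate. The decorator and permission names below are made up for illustration:

    # Toy model of gating cross-context calls on the caller's certificate.
    # Permission names and the decorator are invented for illustration.
    from functools import wraps

    class PermissionDenied(Exception):
        pass

    class Context:
        """An executing context, carrying the permissions granted to its cert."""
        def __init__(self, cert_id, permissions):
            self.cert_id = cert_id
            self.permissions = frozenset(permissions)

    def requires(permission):
        """Gate an entry point on the calling context's permissions."""
        def decorator(fn):
            @wraps(fn)
            def wrapper(ctx, *args, **kwargs):
                if permission not in ctx.permissions:
                    raise PermissionDenied(f"{ctx.cert_id} lacks {permission!r}")
                return fn(ctx, *args, **kwargs)
            return wrapper
        return decorator

    @requires("net.connect")
    def open_socket(ctx, host, port):
        return f"socket to {host}:{port} on behalf of {ctx.cert_id}"

    editor = Context("com.example.editor", {"fs.read", "fs.write"})
    browser = Context("com.example.browser", {"net.connect"})

    open_socket(browser, "example.com", 443)        # allowed
    try:
        open_socket(editor, "example.com", 443)     # denied: no "net.connect"
    except PermissionDenied as e:
        print("blocked:", e)

Making it fast presumably means caching the (context, permission) decision instead of re-deciding on every call; that's where the innovation would have to happen.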

We should get away from distributing compiled code that runs straight on bare metal for most things. It might still be available as a permission, but one that would come with a warning to the effect that this could allow something to pwn your machine. Honestly I'm not sure if it's necessary. I'd look into the idea of shipping binaries as LLVM bitcode and AOT-compiling everything, and possibly including a secure implementation of OpenCL for really high-performance computing needs.
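A hedged sketch of what install time could look like under that model: the publisher ships LLVM bitcode, the OS checks it against what the user has authorized, and only then AOT-compiles it locally. llc and clang are the real LLVM/Clang tools; the approved-digest check is a simplified stand-in for the keyring verification sketched above, and the paths are made up:

    # Install-time sketch: native code never arrives pre-built from the
    # publisher; verified bitcode is AOT-compiled on the user's machine.
    # The digest check stands in for a proper signature check.
    import hashlib
    import subprocess

    def install(bitcode_path, approved_digests):
        with open(bitcode_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        # Refuse to even compile code the user hasn't authorized.
        if digest not in approved_digests:
            raise PermissionError("bitcode is not on the user's approved list")

        # AOT-compile the verified bitcode to a native object, then link it.
        subprocess.run(["llc", "-filetype=obj", bitcode_path, "-o", "app.o"],
                       check=True)
        subprocess.run(["clang", "app.o", "-o", "app"], check=True)
        return "./app"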

A concept of users should be baked in from the get-go too, and should be part of the permission set of an executing context. Each user should have a key and be able to authorize other keys by signing them, etc.
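Roughly, authorization-by-signing could look like this (again Ed25519 via the "cryptography" package; the helper names are invented): a user's key endorses another key by signing its raw public bytes, and the OS trusts whatever carries a valid endorsement from a local user.

    # Sketch: a user key authorizes another key by signing its public bytes.
    # Helper names are invented for illustration.
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.hazmat.primitives import serialization
    from cryptography.exceptions import InvalidSignature

    def raw_bytes(public_key):
        return public_key.public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        )

    def endorse(user_private_key, other_public_key):
        """The user signs another key, authorizing code signed by it."""
        return user_private_key.sign(raw_bytes(other_public_key))

    def is_authorized(user_public_key, other_public_key, endorsement):
        try:
            user_public_key.verify(endorsement, raw_bytes(other_public_key))
            return True
        except InvalidSignature:
            return False

    user = ed25519.Ed25519PrivateKey.generate()      # a user's master key
    vendor = ed25519.Ed25519PrivateKey.generate()    # a publisher's key

    blessing = endorse(user, vendor.public_key())
    assert is_authorized(user.public_key(), vendor.public_key(), blessing)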

So yeah, I think that's sort of a starting point. Crypto and permissions should be baked in from the get-go.

It'd also be important to think about usability from the get-go, since if it doesn't "just work" nobody will use it. UI/UX would be a challenging part of the project.

Storage is another challenge -- how to allow execution contexts to hand off and/or share data without compromising security and without being too inefficient. The fact that storage is getting so cheap means things like copy-on-write with versioning could be baked in from the get-go, permitting almost any operation to be rewound for a good period of time. So if a piece of bad code borks your work, just undo. I wonder if the whole OS could be built around a command model where things just fall off the end when they're too old? Log-structured everything? Again, not fully baked, but I think it's the right general direction.
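To make the rewind idea concrete, here's a toy append-only store where every write is a new version and "undo" is just dropping the tail of the log. Purely illustrative, nothing filesystem-grade about it:

    # Toy log-structured store: writes never overwrite, they append, so any
    # earlier state can be read back and bad changes can simply be rewound.
    class VersionedStore:
        def __init__(self):
            self.log = []     # append-only list of (version, key, value)
            self.version = 0

        def write(self, key, value):
            self.version += 1
            self.log.append((self.version, key, value))
            return self.version   # a point the caller can rewind to

        def read(self, key, as_of=None):
            """Latest value for key, or its value as of an older version."""
            value = None
            for ver, k, v in self.log:
                if k == key and (as_of is None or ver <= as_of):
                    value = v
            return value

        def rewind(self, as_of):
            """Drop everything written after as_of -- the "just undo" button."""
            self.log = [entry for entry in self.log if entry[0] <= as_of]

    store = VersionedStore()
    good = store.write("thesis.txt", "years of work")
    store.write("thesis.txt", "")   # bad code borks your work
    store.rewind(good)              # just undo
    assert store.read("thesis.txt") == "years of work"

"Falling off the end" would then just be trimming log entries older than whatever retention window the cheap storage allows.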

I highly doubt I am unique in thinking these things. I'm not the sharpest tack in the world and these kinds of ideas strike me as obvious results of reasoning from first principles about current OS challenges and failures.

The app store and mobile sandboxing models are steps in the right general direction but they are very, very ham-fisted compared to what I'm imagining here. They're the right ideas applied as a band-aid to fundamentally obsolete systems. They also cut the user out of the picture. I think that's because their models are ultimately too shallow and coarse-grained (and also because the vendors want control). Develop something good enough and the user can be put in the driver's seat without the machine turning into a malware cesspit. If the user authorizes a piece of bad code, just de-authorize it and it dies.




Thanks for taking the time to reply.

--

I don't understand how only allowing signed execution would help avoid this problem. Like you said, anyone could pay to get their company listed as a 'trusted' entity. The problem is you can't push this task to the user, because the non-technical layman user is not in a position to determine this.

If we only allow signed binaries to be loaded into memory, then we won't need IPC to pay the security tax on every function call -- which matters, given that there are probably going to be tens of thousands of them per second.

>We should get away from distributing compiled code that runs straight on bare metal for most things. [..] Honestly I'm not sure if it's necessary.

Poof, no more program debuggers, profilers, no more device drivers, no more third party file systems, no more .. you get the idea. Maybe that's not "most" things, but ask yourself how functional is an OS without the ability to load kernel mode stuff.

> Each user should have a key and be able to authorize other keys by signing them, etc.

How does that help my mom? She's just going to call me when the computer asks her "weird questions about keys and permissions". The entire point is that the average user is not the best judge of what is and isn't malware. Technically savvy users already have no issue with malware for the most part.

>Develop something good enough and the user can be put in the driver's seat without the machine turning into a malware cesspit.

Again, why would the user WANT to be in the driver's seat? They have no clue how to drive the car!

> If the user authorizes a piece of bad code, just de-authorize it and it dies.

That only tackles the problem of cleanup, which is a separate problem. By that time, the malware is already on the system and it's sent your credit card and documents to the bad guys.


"I don't understand how only allowing signed execution would help avoid this problem. Like you said, anyone could pay to get their company listed as a 'trusted' entity."

The purpose of signing isn't to guarantee that an entity is anything, but to allow the user to absolutely and decisively rule what code is allowed to execute. If the Russian Mob sneaks some code from "G0ogle, Ink." onto my machine by tricking me into authorizing that cert, I can just de-authorize it and then it all DIAF.

When I say signing, I don't necessarily mean the app store feudal model. I mean an inverted version of that -- where the user decides what runs by approving certs by signing them with some kind of master key.
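A rough sketch of how that inversion plus "de-authorize it and it dies" could fit together: every running context is tagged with the cert that authorized it, and revoking the cert sweeps them all. All names here are invented:

    # Sketch: revoking a cert immediately kills every context it authorized.
    # The user's keyring, not a vendor, decides what keeps running.
    class Scheduler:
        def __init__(self):
            self.running = []   # list of (pid, cert_id)

        def spawn(self, pid, cert_id):
            self.running.append((pid, cert_id))

        def revoke(self, cert_id):
            """De-authorize a cert: its processes die and it can't run again."""
            killed = [pid for pid, cid in self.running if cid == cert_id]
            self.running = [(pid, cid) for pid, cid in self.running
                            if cid != cert_id]
            return killed

    sched = Scheduler()
    sched.spawn(101, "G0ogle, Ink.")   # the cert I was tricked into trusting
    sched.spawn(102, "G0ogle, Ink.")
    sched.spawn(103, "user")

    sched.revoke("G0ogle, Ink.")       # -> [101, 102]; it all dies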

"Poof, no more program debuggers, profilers, no more device drivers, no more third party file systems, no more"

You can debug Java pretty effectively. There are great toolchains for that. I agree that direct ASM may be required for a few things like drivers, but those are going to be the exceptions, not the rule.

"How does that help my mom? She's just going to call me when the computer asks her "weird questions about keys and permissions"."

"Again, why would the user WANT to be in the drivers seat? They have no clue how to drive the car!"

Freedom and control are things you should have, but should not be forced to exercise. It should be possible to leave them alone and just trust one or more vendors. This is a UI/UX issue.

With things like iOS I don't have the option.

"That only tackles the problem of cleanup, which is a separate problem. By that time, the malware is already on the system and it's sent your credit card and documents to the bad guys."

Absolute security perfection isn't possible, but I think huge improvements can be made. Don't let the perfect be the enemy of the good.

Data leakage and social engineering are particularly thorny because they're really only half technical problems. The meat sack using the machine is always going to be a weak point in any security model. But if the machine were secure, it would help.


We're going around in circles I think.

The entire problem is that users have no idea, prior to installing the software, whether it's legit or malware. I don't see anything in what you've proposed that solves the root problem. Yes, we can look at peripheral problems like cleanup and revoking certificates, but those only affect users AFTER they've already made the choice of installing a particular piece of software.


> We're going around in circles I think.

Just you, my friend. If you recall, here is the point you were challenging:

> this is a band-aid over the fact that OSes have terrible permission separation and application isolation. If OSes were better architected from a security point of view, it would be substantially less of a problem.

Would you now concede this point?


> If you recall, here is the point you were challenging:

Yes, and I didn't receive any information that would lead me to believe that applying his/her suggestions would substantially tackle the root problem. All process isolation does is push the problem out further into the application side of things. Now the user has to micromanage the data flow in between applications. The root problem has very little to do with OS architecture, and I'm happy to be convinced otherwise.

>Would you now concede this point?

Okay. If you insist. I have no desire to "win" the argument. It's merely idle chit chat for me. My code's compiling ;)


The idea isn't to solve that problem, but to limit the damage significantly.

I should be able to give a Russian mob hacker on crack access to my machine without worrying too much about them doing anything I don't give them permission to do.



