You either have expiring capabilities, which is a hassle but probably inevitable, or you have revocation lists, which impose an additional cost on every single invocation. The latter was a bit of a killer once people had figured out clever, fast bit-operation ways to check a capability: the cost of looking up the list swamped the gains from the bit-twiddling optimisations.
Mind you, we sorta wound up with it anyhow, in the form of OAuth tokens. And here the algo-economics change again to favour it: the cost of a network round trip is gigantic compared to any comparison operation, so the list lookup doesn't seem quite so bad after all.
let [revoke, revocable] = makeRevocable(cap);
share(revocable); // share can call revocable(), which forwards to cap() unless revoked
setTimeout(revoke, 1000); // for auto-expiring, just for example
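For anyone curious, `makeRevocable` can be a few lines of closure code. This is a hypothetical sketch of the helper the snippet above assumes, not any particular library's API:

```javascript
// Hypothetical sketch: wrap a capability (here, a function) in a
// forwarder that can be cut off later. All names are made up.
function makeRevocable(cap) {
  let revoked = false;
  const revocable = (...args) => {
    if (revoked) throw new Error("capability revoked");
    return cap(...args); // forward to the real capability
  };
  const revoke = () => { revoked = true; };
  return [revoke, revocable];
}

// Usage: share `revocable`, keep `revoke` private.
const cap = (x) => x * 2;
const [revoke, revocable] = makeRevocable(cap);
console.log(revocable(21)); // 42
revoke();
// revocable(21) would now throw
```

The point is that revocation costs one boolean check per call, paid only by holders of the revocable wrapper, not by every invocation in the system.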
A capability system can help you batch up those round trips you mention, btw: https://capnproto.org/rpc.html
Personally I'm fascinated by ocap systems and am always looking for ways to employ them. There are some interesting "modern" applications (aside from a couple of research kernels) like Cap'n Proto and the Pony language. Macaroons come close but are more of a hybrid approach to authorization. (Out of the box they don't prevent confused-deputy attacks or anything.)
Macaroons on the other hand are nice because they don't need a proxy to solve this problem: the authorizer can issue a short-lived "discharge Macaroon" to authorize access, then the client can get to the target resource directly without the need to use a proxy. To revoke access: just refuse to issue subsequent discharge Macaroons when the current discharge expires.
Macaroons can probably be seen as an implementation of this.
On many hardware capability-based addressing systems it is easy to zap them by clearing the corresponding bit in the capability mask. However, the problem then becomes how to revoke a cap from some holders and not others etc (which can be done with chains and ... turns into an interesting permission-like system ;) ...)
Permission-list systems have exactly the same problem too. IIRC munmap ends up walking linked lists on Linux rather like ancient heaps. It might have changed.
A modern project already running on FreeBSD with capability security:
Also look up CapDesk and DARPAbrowser.
RBAC -- role-based access control -- is identity-based. Certain identities have certain roles. Certain roles are allowed to perform certain actions on certain resources. When you access a document in an RBAC system, your identity is determined, then the roles associated with that identity are determined, and then you get access based on whether you have the role.
Capability-based security and identity-based security are fundamentally different approaches (although you can build a system that layers both, for defense in depth).
(See also my very long top-level comment for more background on capabilities and how they are used.)
You can leave out tokens. You can leave out identities or assume them (eg. IIRC Linux's OS-level capabilities assume this on a per-process or cgroup basis). You can add role-based inheritance, caching, expiry and other features. You can make it mandatory or discretionary. The overall model remains the same, and deserves to be discussed as such.
Perhaps in lieu of merging https://en.wikipedia.org/wiki/Computer_security_model should actually be populated with some text?
Reply to below: That all sounds reasonable enough, but possibly confuses academic theoretic fields with the mathematical reality of implementations (not that different, really, and explicable using arbitrary terminological preference) and is full of appeals to authority. It won't do for a holistic explanation, which Wikipedia currently sorely lacks.
Reply to final part of below: Yes, I am aware you are talking about something different to linux caps. Perhaps you can define briefly the difference between an authentication token in any other security system and a capability in the terminology you are using? My interest is in the abstract model, not the implementation. If you are talking about the capsicum implementation, then it's too specific as an OS-level implementation, the sort of thing Laurie encourages ignoring, and will almost certainly die a death of relative obscurity.
... in summary, my view is that they're all just dynamic (feel free to read "glorified") ACLs with differing terminological and philosophical approaches to the management of such. I remain unconvinced of the supposed uniqueness, at the basic model level of abstraction, of the academic field of capabilities based security as you describe them.
It's true that secret tokens commonly appear in security system implementations, and secret tokens usually could qualify as (weak) capabilities by definition. However, usually such tokens are not intended to be used as capabilities, but are rather a practical implementation detail, and the fact that they are transferable is seen as a weakness. E.g. when you log into a web site it stores a cookie on your system which you present back to the server to authorize each request, but abstractly the security model calls for authenticating that each request is coming from you, and the token is being used as a hack to avoid making you sign every request.
Capability-based security is a field of security theory which goes far beyond describing these implementation-detail tokens. In fact, when tokens are used in capability systems, they are again an implementation compromise. A good capability system does not assign authority to a secret sequence of bits, but rather is based on communications protocols where capabilities are explicitly recognized as they pass between contexts. For example, a unix file descriptor is a sort of capability, but its numeric value is not a secret token. It's secure because descriptor numbers are scoped per-process: only the process that possesses the descriptor can use that number. You can pass file descriptors between processes, and the OS knows that the transfer is happening and assigns a new number to the object in the new process.
The researchers at the forefront of capability-based security theory, such as Mark Miller, strenuously oppose RBAC. Alan Karp, another researcher, likes to call capabilities "ZBAC" -- "authoriZation-Based Access Control" -- to make it explicitly clear that he's talking about a different thing (though I'm not a fan of this acronym personally). Trust me, I know these guys and they will fight hard against any suggestion that their research and RBAC are the same thing.
> You can leave out identities or assume them (eg. IIRC Linux's OS-level capabilities assume this on a per-process or cgroup basis).
Linux OS-level capabilities are NOT capability-based security. This is a common misconception -- Linux/POSIX capabilities were misnamed by people who didn't understand the capability-based security model which predated them. Maybe this is what's confusing you. Indeed, Linux kernel "capabilities" are much more similar to RBAC than to capability-based security.
File descriptors are much closer to being capability-based. Once you have an open file descriptor, you can pass it off to another process. That process will be able to access the file descriptor in the same ways that you could, even if that process doesn't have permission to open the underlying file directly. That's capability-based security. Though, disclaimer: file descriptors are missing a lot of features which are normally considered necessary in a high-quality capability system.
EDIT: Please don't reply by adding new text to the parent post, it's really confusing.
> full of appeals to authority
We're debating the meaning of "capability-based security" and whether it should be merged with "Role-based access control" on Wikipedia. The opinions of the people who have literally spent years or decades of their lives researching these fields are relevant, because they literally defined the terms.
If we were debating whether capability-based security or RBAC is better-suited to a particular purpose, then appeals to authority would be invalid.
> Perhaps you can define briefly the difference between an authentication token in any other security system and a capability in the terminology you are using?
A capability is both a pointer to a resource and a grant of access to that resource. A capability is the subject of an operation.
An authentication token is a thing tacked on to requests indicating that the request was made by some particular user identity. In a way, you could consider it a capability to the user account, where the user account in turn contains capabilities to everything the user can access, and the request receiver digs into that bag and pulls out the needed one automatically in order to authenticate the request -- however, this is convoluted, and not how capabilities are normally used.
A fundamental difference between capability systems and identity systems is "ambient authority". In an identity system, you are authorized to perform a request simply because of who you are. It's "ambient".
Consider how you delegate access to someone else in the two systems:
- In an ACL system, Alice adds Bob to the ACL for a resource, and then asks Bob to perform an operation on her behalf. However, if Alice is malicious she could request that Bob perform an operation on some other resource which Alice never had access to but Bob did. This is called a confused deputy attack. Bob must carefully check that Alice has the correct permissions on the resource before acting on her behalf.
- In a capability system, Alice sends Bob her capability and then asks Bob to access it. There is no risk of confused deputy because there is no way for Alice to instruct Bob to use a capability that he has but Alice doesn't.
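In code, the difference comes down to whether Bob receives a *name* that he resolves under his own authority, or the object itself. A toy sketch (Bob, Alice, and the files table are all hypothetical):

```javascript
// Toy sketch contrasting the two delegation styles.
const files = {
  "alice.txt": { owner: "alice", contents: "hers" },
  "bob.txt":   { owner: "bob",   contents: "his"  },
};

// ACL style: Alice passes a *name*; Bob resolves it with his own
// (broader) authority, so he must re-check Alice's permission or
// become a confused deputy.
function bobReadForAlice(filename) {
  const f = files[filename];
  if (f.owner !== "alice") throw new Error("Alice may not read this");
  return f.contents;
}

// Capability style: Alice passes the object itself. Bob can only
// use what Alice could already designate -- no check needed.
function bobRead(fileCap) {
  return fileCap.contents;
}

console.log(bobReadForAlice("alice.txt")); // "hers"
console.log(bobRead(files["alice.txt"]));  // "hers"
// bobReadForAlice("bob.txt") throws: the deputy refuses to be confused
```

Note how the capability-style `bobRead` has no permission logic at all: designation and authorization arrive together, so there is nothing for Bob to get wrong.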
> My interest is in the abstract model, not the implementation.
Then the use of tokens is not relevant.
> If you are talking about the capsicum implementation
I'm not. I'm talking about things like the E programming language, or Microsoft's Midori, or my own Cap'n Proto and Sandstorm.io, or Google Docs invite tokens (an example of an accidental capability system), etc.
> the sort of thing Laurie encourages ignoring
Ben Laurie, who is a friend of mine, would absolutely object to the idea that RBAC and capabilities are the same thing. He is also one of the driving forces behind Capsicum, so I think you may misunderstand his views on it.
> my view is that they're all just dynamic (feel free to read "glorified") ACLs
Mark Miller et al. explicitly address that belief here:
Of course, if you zoom out far enough, you can argue that anything is basically the same as anything else. But it takes a very liberal definition of "access control list" to include an access control mechanism that does not involve any kind of "list".
> I remain unconvinced of the supposed uniqueness
It's a valid opinion, but you really shouldn't be proposing merging two fields of research on Wikipedia based on an opinion that differs from the experts in said fields.
My goal was to clarify things on Wikipedia. I am not opposed to the idea that x and y and z want to be seen differently, but one should also recognize there are biases in academia and industry that can over-emphasize minute differences (for funding, notability, career, supposed industry USP, etc.). A rose by any other name.
The current Wikipedia content remains terrible to useless and more confusing than a high level overview, IMHO. Since you seem to know so many industry experts, perhaps you would like to write it or we could collaborate? I believe https://en.wikipedia.org/wiki/Computer_security_model or a new comparison-of page would be a good place. It would really be a great contribution.
This has been a constant problem for capability-based security researchers: finding the right way to explain the abstract concept such that people "get" it.
I don't get this part. Why not? Alice and Bob can still do what they can do, and if Alice is able to send messages to Bob then she can ask Bob to do that very thing. It wouldn't be a confused deputy attack only in that Bob would have to go out of his way to be a confused deputy.
Exactly. :) Sure, you can still get it wrong if you try, but it's a lot harder.
Access control is implemented by encapsulating private members behind a restricted public interface.
Polymorphism allows you to add wrappers around an object implementing arbitrary security policies. E.g. want to revoke access to this object after some time? Add a wrapper where the methods forward to the real object until revoked, and then throw exceptions instead.
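JavaScript even ships this wrapper pattern as a built-in: `Proxy.revocable` wraps any object so that all access through the wrapper can later be cut off, while the underlying object stays intact.

```javascript
// Proxy.revocable: a generic revocable-forwarder wrapper.
const secret = { read: () => "the goods" };
const { proxy, revoke } = Proxy.revocable(secret, {});

console.log(proxy.read()); // "the goods"
revoke();
// proxy.read() now throws a TypeError: the wrapper is dead,
// while `secret` itself is untouched.
```

This is exactly the "add a wrapper whose methods forward until revoked" policy described above, with the forwarding handled generically by the language runtime.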
What's interesting is that these are patterns we are doing anyway, in real code, not for the purpose of security but for the purpose of correctness and maintainability. Because we were doing these things anyway, our programming languages provide really good tools for making them work naturally, and we already know how to think about them.
On the other hand, the competing model -- access control lists (ACLs), and other types of externally-specified policy -- are not things we were doing anyway. They're external concepts bolted into the system awkwardly. Because of this, they tend to be painful to maintain, which in turn means people often don't bother.
One of the clearest examples of how ACLs go against the grain: An ACL is a list of entities who are allowed to access a resource. These are effectively pointers pointing in the opposite direction of all other pointers in the system: ACLs point from the callee to the caller. This creates all kinds of problems. Just try to imagine how, in a programming language, you would specify the allowed callers of your object.
This translates to UX, too: Say you are using Google Docs, and you want to "share" access to a document with Alice. Traditionally this is seen (by the implementers -- I was once on the Google Docs sharing team) as a two-step process:
1. Grant access: Add Alice to the access control list.
2. Notify: Send Alice an e-mail notifying her that she should open the document.
Part 2 is obviously necessary and natural. You intuitively know you must tell Alice about your document.
Part 1 may sound obvious to any techie, but it's not at all: Users regularly forget this step, and we build all kinds of UI to try to recover from this. And even if it were obvious, it's problematic. What if Alice doesn't have a Google account? What if she does, but I don't know the email address tied to that account, because it's not her usual address?
What we really want to do is eliminate part 1 and have only part 2: Abstractly, when you notify Alice of the document, that notification should itself contain the permission to access the document. Then I don't need to know Alice's Google account ID, I just need to know how to send her a message containing the permissions. (This is analogous to sending a pointer in code -- receiving the pointer grants the recipient the ability to access the pointed-to object.)
This is the essence of capability-based security at the UX level: Deriving access control from actions you were already doing. The user rarely needs to be confronted with a security action, yet the right people end up getting access.
Some will argue that such implicit security is inherently dangerous because it's hard to audit and control. I argue that "explicit" security is dangerous because it's hard for users to understand and use correctly -- inevitably, many will simply turn the security off.
But there's a compromise: If the underlying system deeply understands capabilities and knows when they are being passed, it can maintain an "access control list" on the side, automatically populating it based on the observed movement of capabilities. This pseudo-ACL can allow the user to audit who has access to their document and revoke people who shouldn't be there. It also offers a place to hook in policies: if the system observes a capability movement that is contrary to some explicit policy (say, "documents shall not be shared outside the organization"), then it can revoke that capability.
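A sketch of the bookkeeping half of that idea: if every capability transfer passes through the system, it can record a grant log on the side. (All names here are invented; a real system would also have to cut off the transferred capability itself on revocation, which this sketch omits.)

```javascript
// Hypothetical sketch: record who holds what as capabilities are
// passed, yielding an auditable pseudo-ACL on the side.
const auditLog = new Map(); // resource -> Set of holders

function passCapability(resource, from, to) {
  // The transfer itself populates the pseudo-ACL; the user never
  // performs a separate "grant access" step.
  if (!auditLog.has(resource)) auditLog.set(resource, new Set());
  auditLog.get(resource).add(to);
  return resource; // `to` now holds the capability
}

function holders(resource) {
  return [...(auditLog.get(resource) || [])];
}

function revokeFrom(resource, holder) {
  // Hook point for policy, e.g. "not shared outside the org".
  auditLog.get(resource)?.delete(holder);
}

const doc = { title: "Q3 plan" };
passCapability(doc, "alice", "bob");
passCapability(doc, "bob", "carol");
console.log(holders(doc)); // ["bob", "carol"]
revokeFrom(doc, "carol");
console.log(holders(doc)); // ["bob"]
```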
This is the model we're implementing in Sandstorm:
And Cap'n Proto RPC implements capabilities at the network level:
(I am the lead developer of both Sandstorm and Cap'n Proto.)
Other good reading:
http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf - Capability Myths Demolished, an academic paper.
http://joeduffyblog.com/2015/11/10/objects-as-secure-capabil... - About Microsoft's Midori research OS, which was also based on this model.
Basically we have a solution where users have accounts, and their contacts can have labels. They can assign access based on the labels, or to individual users. When the user represents an organization the labels basically represent roles for permissions. The same system is used for both people sharing with friends and organizations sharing with various members.
Now, as a user you can also invite others. Then a unique invitation link is generated, and securely delivered to an endpoint such as an email or sms. The recipient may choose to follow the link, which confirms their email address or mobile number. From there, we see if they already have an account or create a new one without any action on their part. (Password can be set up next time they try to log in with that email address or mobile number.)
It is at this point that we look at the access granted in the invite. We make sure that the inviting user has at least those permissions (so you can't eg send yourself an invite to survive getting banned from a chatroom) and then grant them to the invited user via the access system.
We have three levels of permissions: readLevel, writeLevel and adminLevel. Each ranges from 0 to 40, signifying various things (eg a writeLevel of 40 lets you delete that whole social collaborative thing).
So the invitations are kind of like capabilities. And we do a check upon the person redeeming the invitation. But once the user accepts the invitation, they have an id and we use our general system of access control, so admins can set up the access for various types of contacts.
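The inviter-has-at-least check described above might look something like this. It's a sketch of the scheme as described, with the field names invented:

```javascript
// Sketch of the check: an invite may not grant more authority than
// the inviter currently holds (preventing self-escalation via invites).
function canGrant(inviter, invite) {
  return inviter.readLevel  >= invite.readLevel &&
         inviter.writeLevel >= invite.writeLevel &&
         inviter.adminLevel >= invite.adminLevel;
}

const inviter = { readLevel: 40, writeLevel: 20, adminLevel: 0 };
console.log(canGrant(inviter, { readLevel: 40, writeLevel: 20, adminLevel: 0 })); // true
console.log(canGrant(inviter, { readLevel: 40, writeLevel: 40, adminLevel: 0 })); // false
```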
For an example of capabilities that can't leak, look at Unix file descriptors. A process can transfer a file descriptor to another process via an SCM_RIGHTS message over a unix domain socket. The receiving process is able to perform exactly the same set of operations on the file descriptor as the sending process could, even if the receiving process belongs to a different user who doesn't have permission to open() the underlying file. Thus the FD is like a capability.
File descriptors are not secret tokens -- they are easily-guessable small integers. But they cannot accidentally "leak" between processes because the OS is explicitly involved in all transfers.
(Of course, file descriptors still lack traceability and revocability, which a good capability system should support.)
And that's why you want them to be revocable.
You can make it a bit simpler by having the permission be a one time only token for requesting account specific permissions too. Works if you know how many people the link needs to be shared with (and ideally if you can give them separate links). But fails your goal if the link must be reused.
This makes capabilities a more honest form of systems security.
> Just try to imagine how, in a programming language, you would specify the allowed callers of your object.
Alternatively, you might have some hard-coded mechanism for extensibility (e.g. UNIX's open() and FS permissions). Try to imagine how, in a programming language, you would maintain some global table of callers and objects specifying who is allowed to call whom.
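To make the awkwardness concrete, here's what such a global caller table would have to look like. A strawman sketch, obviously not how anyone writes programs:

```javascript
// Strawman: an in-language ACL naming which callers may invoke which
// objects. Note how it points "backwards" -- the callee side must
// enumerate callers, and every call site needs an explicit check.
const callerAcl = new Map(); // object -> Set of allowed caller names

function allow(obj, callerName) {
  if (!callerAcl.has(obj)) callerAcl.set(obj, new Set());
  callerAcl.get(obj).add(callerName);
}

function checkedCall(obj, method, callerName, ...args) {
  if (!callerAcl.get(obj)?.has(callerName)) {
    throw new Error(callerName + " may not call this object");
  }
  return obj[method](...args);
}

const printer = { print: (s) => "printed: " + s };
allow(printer, "alice");
console.log(checkedCall(printer, "print", "alice", "hi")); // "printed: hi"
// checkedCall(printer, "print", "bob", "hi") throws --
// ...and nothing stops bob from simply claiming to be "alice".
```

The last line is the real point: inside a single program, "who is the caller" is just a string anyone can assert, whereas object references are unforgeable for free.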
I'm not sure about OAuth, as I don't have much experience with it, but I don't think it scales to fine-grained access control.
ABAC seems to be a better approach, but I see no mainstream implementations of it. There is a company called Axiomatics that apparently has a product, but it is very hard to get your hands on.
I was just thinking how emptier HN would have felt without all that over the years, so...
Best of luck! Happy holidays.