Capability Based Security (wikipedia.org)
44 points by jacquesm on Dec 6, 2015 | 35 comments



As it was explained to me, the main problem with capabilities is working out how to revoke them.

You either have expiring capabilities, which is a hassle but probably inevitable. Or you have revoked capability lists, imposing an additional cost on every single invocation. The latter was a bit of a killer once people had figured out clever, fast bit-operation ways to check a capability, because the cost of looking up the list swamped the gains from bit-twiddling optimisations.

Mind you, we sorta wound up with it anyhow, in the form of OAuth tokens. And here the algo-economics changes again to favour it: the cost of a network roundtrip is gigantic compared to any comparison operation, so it doesn't seem quite so bad after all.


To revoke them, the usual pattern is to never hand out the original capability; instead, you hand out a wrapper that you can disable. E.g. in JavaScript:

    let [revoke, revocable] = makeRevocable(cap);
    share(revocable); // share can call revocable(), which forwards to cap() unless revoked
    setTimeout(revoke, 1000); // for auto-expiring, just for example
See section 4.3 in http://www.erights.org/talks/asian03/index.html
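
For completeness, a minimal sketch of what makeRevocable itself might look like (makeRevocable is just this comment's hypothetical helper, not a standard API):

    // Wrap a capability (here just a function) in a forwarder that can be cut off.
    function makeRevocable(cap) {
      let revoked = false;
      const revocable = (...args) => {
        if (revoked) throw new Error("capability revoked");
        return cap(...args);            // forward to the underlying capability
      };
      const revoke = () => { revoked = true; };
      return [revoke, revocable];
    }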

A capability system can help you batch up those round trips you mention, btw: https://capnproto.org/rpc.html


This is exactly right. Back before Java 1.0 was released I was working on a capability-based security model for it. The class loader would hand out what was essentially a virtual method table of the instantiated class, based on the capabilities allowed. Revoking a capability consisted of updating the template, which would fail to resolve the revoked methods on the next call but leave the ones which were still on the stack.


So there is still the indirection of the function call, but it's probably the right approach.


There's a paper called "Capability Myths Demolished" that addresses this. There's an online version here[0], see "revoking access".

Personally I'm fascinated by ocap systems and am always looking for ways to employ them. There are some interesting "modern" applications (aside from a couple of research kernels) like Cap'n Proto and the Pony language. Macaroons come close but are more of a hybrid approach to authorization. (They don't prevent confused deputy attacks out of the box, for instance.)

[0]: http://www.erights.org/elib/capability/duals/myths.html


Yes, Capability Myths Demolished describes the "revocable forwarder" pattern, which uses a proxy to delegate access in a revocable manner.

Macaroons on the other hand are nice because they don't need a proxy to solve this problem: the authorizer can issue a short-lived "discharge Macaroon" to authorize access, then the client can get to the target resource directly without the need to use a proxy. To revoke access: just refuse to issue subsequent discharge Macaroons when the current discharge expires.


A common optimization in a capability system is to allow a remote client to create a new revocable membrane living on the same machine as the target object, thus avoiding a new network hop due to the proxy. The client receives a separate capability to revoke the new wrapper, which it can exercise later. Or, the client might supply the object host with some other instructions on when revocation should occur.

Macaroons can probably be seen as an implementation of this.


It's hard to generalise like that. There are lots of flavors of cap system.

On many hardware capability-based addressing systems it is easy to zap them by clearing the corresponding bit in the capability mask. However, the problem then becomes how to revoke a cap from some holders and not others etc (which can be done with chains and ... turns into an interesting permission-like system ;) ...)

Permission-list systems have exactly the same problem too. IIRC munmap ends up walking linked lists on Linux rather like ancient heaps. It might have changed.


Applied to whole systems to make them more secure from hardware up:

http://homes.cs.washington.edu/~levy/capabook/

Modern project already running FreeBSD with capability security:

http://www.cl.cam.ac.uk/research/security/ctsrd/cheri/

Also look up CapDesk and DARPAbrowser.


http://en.wikipedia.org/wiki/RBAC is far more readable than the article in question, and I have just proposed a merge on Wikipedia for 'Capability-based security' and 'Object-capability model'; support if you wish. FWIW, I wrote an application-level implementation of RBAC a few years ago for a secure financial system prototype. It was an interesting experience, to say the least.


Capability-based security and RBAC are completely different things. Some would say they are opposing philosophies.


Care to explain why? I believe you may be confusing OS-level 'capabilities' implementations (e.g. Linux's) with competing 'RBAC' implementations (of which Linux has at least two or three). In fact, if you remove the waffle, the abstract-level model is the same, and IMHO a Wikipedia article should deal with the subject as a whole using the most abstract terminology possible... referencing implementations and their terminological preferences is a secondary concern.


A "capability" is, abstractly, a thing. I can send you a message, in that message I can include a capability to access some resource. When you receive the message, you can follow that capability and you'll have access to the resource. Your identity is never checked -- your authority to access the resource is based on your possessing the capability, regardless of who you might be.

RBAC -- role-based access control -- is identity-based. Certain identities have certain roles. Certain roles are allowed to perform certain actions on certain resources. When you access a document in an RBAC system, your identity is determined, then the roles associated with that identity are determined, and then you get access based on whether you have the role.

Capability-based security and identity-based security are mutually exclusive approaches (although you can build a system that does both, for defense in depth).
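
A toy JavaScript sketch of the contrast (all names here are invented for illustration, not from any real system):

    // Identity-based (RBAC-style): the service checks who you are, then what your role allows.
    const roles = new Map([["alice", ["reader"]]]);           // illustrative role table
    function readDocRbac(callerId, doc) {
      if (!(roles.get(callerId) || []).includes("reader")) throw new Error("forbidden");
      return doc.contents;
    }

    // Capability-based: holding a reference to the document's read facet IS the authority.
    function makeReadCap(doc) {
      return { read: () => doc.contents };
    }
    const doc = { contents: "hello" };
    const readCap = makeReadCap(doc);
    // Anyone readCap is passed to can call readCap.read(); no identity lookup happens.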

(See also my very long top-level comment for more background on capabilities and how they are used.)


'Capability' in the system you describe is similar to a token issued (eg. against a session) in an RBAC system. They're just different terminological approaches to the same fundamental problem: how to make dynamic ACLs that are manageable.

You can leave out tokens. You can leave out identities or assume them (eg. IIRC Linux's OS-level capabilities assume this on a per-process or cgroup basis). You can add role-based inheritance, cache, expiry and other features. You can make it mandatory or discretionary. The overall model remains the same, and deserves to be discussed as such.

Perhaps in lieu of merging https://en.wikipedia.org/wiki/Computer_security_model should actually be populated with some text?

Reply to below: That all sounds reasonable enough, but possibly confuses academic theoretic fields with the mathematical reality of implementations (not that different, really, and explicable using arbitrary terminological preference) and is full of appeals to authority. It won't do for a holistic explanation, which Wikipedia currently sorely lacks.

Reply to final part of below: Yes, I am aware you are talking about something different to linux caps. Perhaps you can define briefly the difference between an authentication token in any other security system and a capability in the terminology you are using? My interest is in the abstract model, not the implementation. If you are talking about the capsicum implementation, then it's too specific as an OS-level implementation, the sort of thing Laurie encourages ignoring, and will almost certainly die a death of relative obscurity.

... in summary, my view is that they're all just dynamic (feel free to read "glorified") ACLs with differing terminological and philosophical approaches to the management of such. I remain unconvinced of the supposed uniqueness, at the basic model level of abstraction, of the academic field of capabilities based security as you describe them.


The RBAC page you link says nothing about tokens, because tokens are not a fundamental part of RBAC.

It's true that secret tokens commonly appear in security system implementations, and secret tokens usually could qualify as (weak) capabilities by definition. However, usually such tokens are not intended to be used as capabilities, but are rather a practical implementation detail, and the fact that they are transferable is seen as a weakness. E.g. when you log into a web site it stores a cookie on your system which you present back to the server to authorize each request, but abstractly the security model calls for authenticating that each request is coming from you, and the token is being used as a hack to avoid making you sign every request.

Capability-based security is a field of security theory which goes far beyond describing these implementation-detail tokens. In fact, when tokens are used in capability systems, they are again an implementation compromise. A good capability system does not assign authority to a secret sequence of bits, but rather is based on communications protocols where capabilities are explicitly recognized as they pass between contexts. For example, a unix file descriptor is a sort of capability, but its numeric value is not a secret token. It's secure because only the process which possesses the descriptor can access it using that number. You can pass file descriptors between processes, and the OS knows that the transfer is happening and assigns a new number to the object in the new process.

The researchers at the forefront of capability-based security theory, such as Mark Miller, strenuously oppose RBAC. Alan Karp, another researcher, likes to call capabilities "ZBAC" -- "authoriZation-Based Access Control" -- to make explicitly clear that he's talking about a different thing (though I'm not a fan of this acronym personally). Trust me, I know these guys and they will fight hard against any suggestion that their research and RBAC are the same thing.

> You can leave out identities or assume them (eg. IIRC Linux's OS-level capabilities assume this on a per-process or cgroup basis).

Linux OS-level capabilities are NOT capability-based security. This is a common misconception -- Linux/POSIX capabilities were misnamed by people who didn't understand the capability-based security model which predated them. Maybe this is what's confusing you. Indeed, Linux kernel "capabilities" are much more similar to RBAC than to capability-based security.

File descriptors are much closer to being capability-based. Once you have an open file descriptor, you can pass it off to another process. That process will be able to access the file descriptor in the same ways that you could, even if that process doesn't have permission to open the underlying file directly. That's capability-based security. Though, disclaimer: file descriptors are missing a lot of features which are normally considered necessary in a high-quality capability system.

EDIT: Please don't reply by adding new text to the parent post, it's really confusing.

> full of appeals to authority

We're debating the meaning of "capability-based security" and whether it should be merged with "Role-based access control" on Wikipedia. The opinions of the people who have literally spent years or decades of their lives researching these fields are relevant, because they literally defined the terms.

If we were debating whether capability-based security or RBAC is better-suited to a particular purpose, then appeals to authority would be invalid.

> Perhaps you can define briefly the difference between an authentication token in any other security system and a capability in the terminology you are using?

A capability is both a pointer to a resource and a grant of access to that resource. A capability is the subject of an operation.

An authentication token is a thing tacked on to requests indicating that the request was made by some particular user identity. In a way, you could consider it a capability to the user account, where the user account in turn contains capabilities to everything the user can access, and the request receiver digs into that bag and pulls out the needed one automatically in order to authenticate the request -- however, this is convoluted, and not how capabilities are normally used.

A fundamental difference between capability systems and identity systems is "ambient authority". In an identity system, you are authorized to perform a request simply because of who you are. It's "ambient".

Consider how you delegate access to someone else in the two systems:

- In an ACL system, Alice adds Bob to the ACL for a resource, and then asks Bob to perform an operation on her behalf. However, if Alice is malicious she could ask Bob to perform an operation on some other resource which Alice never had access to but Bob did. This is called a confused deputy attack. Bob must carefully check that Alice has the correct permissions on the resource before acting on her behalf.

- In a capability system, Alice sends Bob her capability and then asks Bob to access it. There is no risk of confused deputy because there is no way for Alice to instruct Bob to use a capability that he has but Alice doesn't.
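
To make the second case concrete, here is a small JavaScript sketch (the file objects and names are made up for illustration):

    // A toy "file" whose returned object is itself the write capability.
    function makeFile(name) {
      let data = "";
      return { name, write: (text) => { data = text; }, read: () => data };
    }

    const aliceLog = makeFile("alice.log");    // a file Alice holds
    const bobSecret = makeFile("bob.secret");  // a file only Bob holds

    // Capability-style delegation: Alice hands Bob the very capability she possesses.
    function bobWriteGreeting(outputCap) {
      outputCap.write("hello");                // Bob writes only where Alice could write
    }
    bobWriteGreeting(aliceLog);
    // Alice has no way to name bobSecret here: she can only pass capabilities she holds,
    // so she cannot steer Bob's authority toward a resource she lacks (no confused deputy).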

> My interest is in the abstract model, not the implementation.

Then the use of tokens is not relevant.

> If you are talking about the capsicum implementation

I'm not. I'm talking about things like the E programming language, or Microsoft's Midori, or my own Cap'n Proto and Sandstorm.io, or Google Docs invite tokens (an example of an accidental capability system), etc.

> the sort of thing Laurie encourages ignoring

Ben Laurie, who is a friend of mine, would absolutely object to the idea that RBAC and capabilities are the same thing. He is also one of the driving forces behind Capsicum, so I think you may misunderstand his views on it.

> my view is that they're all just dynamic (feel free to read "glorified") ACLs

Mark Miller et al. explicitly address that belief here:

http://www.erights.org/elib/capability/duals/myths.html#equi...

Of course, if you zoom out far enough, you can argue that anything is basically the same as anything else. But it takes a very liberal definition of "access control list" to include an access control mechanism that does not involve any kind of "list".

> I remain unconvinced of the supposed uniqueness

It's a valid opinion, but you really shouldn't be proposing merging two fields of research on Wikipedia based on an opinion that differs from the experts in said fields.


Apologies, I had to reply in parent as it wouldn't let me reply here.

My goal was to clarify things on Wikipedia. I am not opposed to the idea that x and y and z want to be seen differently, but one should also recognize there are biases in academia and industry that can over-emphasize minute differences (for funding, notability, career, supposed industry USP, etc.). A rose by any other name.

The current Wikipedia content remains terrible to useless and more confusing than a high level overview, IMHO. Since you seem to know so many industry experts, perhaps you would like to write it or we could collaborate? I believe https://en.wikipedia.org/wiki/Computer_security_model or a new comparison-of page would be a good place. It would really be a great contribution.


I would love to help improve descriptions of capability-based security, but I'm unsure if I can do much better than is there now. To me, the existing text seems clear and concise, but perhaps that's because I'm already deeply familiar with the topic. :/

This has been a constant problem for capability-based security researchers: finding the right way to explain the abstract concept such that people "get" it.


- In a capability system, Alice sends Bob her capability and then asks Bob to access it. There is no risk of confused deputy because there is no way for Alice to instruct Bob to use a capability that he has but Alice doesn't.

I don't get this part. Why not? Alice and Bob can still do what they can do, and if Alice is able to send messages to Bob then she can ask Bob to do that very thing. It wouldn't be a confused deputy attack only in that Bob would have to go out of his way to be a confused deputy.


> Bob would have to go out of his way to be a confused deputy.

Exactly. :) Sure, you can still get it wrong if you try, but it's a lot harder.


Please do not merge those two pages.


The brilliant thing about capability-based security is that it turns classic object-oriented programming patterns into security patterns.

Access control is implemented by encapsulating private members behind a restricted public interface.

Polymorphism allows you to add wrappers around an object implementing arbitrary security policies. E.g. want to revoke access to this object after some time? Add a wrapper where the methods forward to the real object until revoked, and then throw exceptions instead.
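
Two tiny sketches of such wrappers, assuming a hypothetical file object with read() and write() methods:

    // Attenuation: a facet exposing only part of the underlying object's interface.
    function readOnlyFacet(file) {
      return { read: () => file.read() };      // file.write() simply isn't reachable here
    }

    // Another policy: audit every use before forwarding to the real object.
    function auditedFacet(file, log) {
      return { read: () => { log.push({ op: "read", at: Date.now() }); return file.read(); } };
    }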

What's interesting is that these are patterns we are doing anyway, in real code, not for the purpose of security but for the purpose of correctness and maintainability. Because we were doing these things anyway, our programming languages provide really good tools for making them work naturally, and we already know how to think about them.

On the other hand, the competing models -- access control lists (ACLs) and other types of externally-specified policy -- are not things we were doing anyway. They're external concepts bolted onto the system awkwardly. Because of this, they tend to be painful to maintain, which in turn means people often don't bother.

One of the clearest examples of how ACLs go against the grain: an ACL is a list of entities who are allowed to access a resource. These are effectively pointers pointing in the opposite direction of all other pointers in the system: ACLs point from the callee to the caller. This creates all kinds of problems. Just try to imagine how, in a programming language, you would specify the allowed callers of your object.

This translates to UX, too: Say you are using Google Docs, and you want to "share" access to a document with Alice. Traditionally this is seen (by the implementers -- I was once on the Google Docs sharing team) as a two-step process:

1. Grant access: Add Alice to the access control list.

2. Notify: Send Alice an e-mail notifying her that she should open the document.

Part 2 is obviously necessary and natural. You intuitively know you must tell Alice about your document.

Part 1 may sound obvious to any techie, but it's not at all: Users regularly forget this step, and we build all kinds of UI to try to recover from this. And even if it were obvious, it's problematic. What if Alice doesn't have a Google account? What if she does, but I don't know the email address tied to that account, because it's not her usual address?

What we really want to do is eliminate part 1 and have only part 2: Abstractly, when you notify Alice of the document, that notification should itself contain the permission to access the document. Then I don't need to know Alice's Google account ID, I just need to know how to send her a message containing the permissions. (This is analogous to sending a pointer in code -- receiving the pointer grants the recipient the ability to access the pointed-to object.)

This is the essence of capability-based security at the UX level: Deriving access control from actions you were already doing. The user rarely needs to be confronted with a security action, yet the right people end up getting access.

Some will argue that such implicit security is inherently dangerous because it's hard to audit and control. I argue that "explicit" security is dangerous because it's hard for users to understand and use correctly -- inevitably, many will simply turn the security off.

But there's a compromise: If the underlying system deeply understands capabilities and knows when they are being passed, it can maintain an "access control list" on the side, automatically populating it based on the observed movement of capabilities. This pseudo-ACL can allow the user to audit who has access to their document and revoke people who shouldn't be there. It also offers a place to hook in policies: if the system observes a capability movement that is contrary to some explicit policy (say, "documents shall not be shared outside the organization"), then it can revoke that capability.
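
A very rough sketch of that bookkeeping (purely illustrative names; a real system would do this in the platform layer, not in application code):

    const grants = [];                          // the derived pseudo-ACL "audit view"
    function shareWith(recipient, cap) {
      let revoked = false;
      const wrapped = (...args) => {
        if (revoked) throw new Error("capability revoked");
        return cap(...args);
      };
      grants.push({ recipient, revoke: () => { revoked = true; } });
      return wrapped;                           // hand the wrapper out, never the raw cap
    }
    // Audit who has access:   grants.map(g => g.recipient)
    // Revoke a single party:  grants.find(g => g.recipient === "eve").revoke()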

This is the model we're implementing in Sandstorm:

https://sandstorm.io

https://docs.sandstorm.io/en/latest/developing/security-prac...

And Cap'n Proto RPC implements capabilities at the network level:

https://capnproto.org/rpc.html

(I am the lead developer of both Sandstorm and Cap'n Proto.)

Other good reading:

http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf - Capability Myths Demolished, an academic paper.

http://joeduffyblog.com/2015/11/10/objects-as-secure-capabil... - About Microsoft's Midori research OS, which was also based on this model.


We spent a long time adjusting the design of our social app platform, http://qbix.com/platform

Basically we have a solution where users have accounts, and their contacts can have labels. They can assign access based on the labels, or to individual users. When the user represents an organization the labels basically represent roles for permissions. The same system is used for both people sharing with friends and organizations sharing with various members.

Now, as a user you can also invite others. Then a unique invitation link is generated, and securely delivered to an endpoint such as an email or sms. The recipient may choose to follow the link, which confirms their email address or mobile number. From there, we see if they already have an account or create a new one without any action on their part. (Password can be set up next time they try to log in with that email address or mobile number.)

It is at this point that we look at the access granted in the invite. We make sure that the inviting user has at least those permissions (so you can't eg send yourself an invite to survive getting banned from a chatroom) and then grant them to the invited user via the access system.

We have three levels of permissions: readLevel, writeLevel and adminLevel. Each ranges from 0 to 40, signifying various things (eg a writeLevel of 40 lets you delete that whole social collaborative thing).

So the invitations are kind of like capabilities, and we do a check when the person redeems the invitation. But once the user accepts the invitation, they have an id and we use our general system of access control, so admins can set up the access for various types of contacts.
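
Roughly, the redemption check described above might look like this (readLevel/writeLevel/adminLevel are from the description; the function and field names are just illustrative):

    // On redemption: grant at most what the inviter themselves still holds.
    function redeemInvite(invite, inviterAccess, grantAccess) {
      for (const level of ["readLevel", "writeLevel", "adminLevel"]) {
        if (invite[level] > inviterAccess[level]) {
          throw new Error("inviter lacks the " + level + " they are trying to grant");
        }
      }
      grantAccess(invite.invitedUserId, invite); // apply via the normal access system
    }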


Isn't the second approach a bit like sending a special link to access the document? Which leads to the obvious "anyone with the link can access the document" problem with that solution? How do you avoid that situation - that if the link spreads, you effectively lose control until you realize it and remove the link-based access to the document?


Secret URLs ("anyone with the link can access") are a weak form of capabilities. They are problematic since they can't be traced nor revoked. A good capability system allows you to audit where the capability has gone and revoke it from specific parties without revoking it from everybody. A good capability system also makes capabilities harder to "leak" accidentally, while still allowing them to be transferred intentionally.

For an example of capabilities that can't leak, look at Unix file descriptors. A process can transfer a file descriptor to another process via an SCM_RIGHTS message over a unix domain socket. The receiving process is able to perform exactly the same set of operations on the file descriptor as the sending process could, even if the receiving process belongs to a different user who doesn't have permission to open() the underlying file. Thus the FD is like a capability.

File descriptors are not secret tokens -- they are easily-guessable small integers. But they cannot accidentally "leak" between processes because the OS is explicitly involved in all transfers.

(Of course, file descriptors still lack traceability and revocability, which a good capability system should support.)


The link might as well be the document itself, a proxy into the computer of the person you shared it with. That's the point of capabilities.

And that's why you want them to be revocable.

You can make it a bit simpler by having the permission be a one-time token for requesting account-specific permissions. That works if you know how many people the link needs to be shared with (and ideally if you can give them separate links), but it fails your goal if the link must be reused.


One of the maxims of capability-based security is: “don't prohibit what you can't prevent”. If I give Alice a capability, I can no more prevent her from sharing it with Eve than I can prevent her from being a proxy for Eve.

This makes capabilities a more honest form of systems security.


In my experience the most important benefit of capabilities is that they allow for a modular implementation of policies. As you mention:

> Just try to imagine how, in a programming language, you would specify the allowed callers of your object.

Alternatively, you might have some hard-coded mechanism for extensibility (e.g. UNIX's open() and FS permissions). Try to imagine how, in a programming language, you would maintain some global table of callers and objects specifying who is allowed to call whom.


And for those interested in a FP-oriented (rather than OO) approach to cap-based security, here's a post on using it as a tool to enforce good design: http://fsharpforfunandprofit.com/posts/capability-based-secu...


FWIW, I've never been able to figure out why people consider FP and OOP to be opposing. They seem orthogonal to me, and I've written a lot of code I'd consider to be OOP in pure-FP languages... so I think there's a subtle mismatch between what I consider OOP and what everyone else thinks, but I haven't quite been able to identify what that difference is. :)


Have you seen http://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf? It includes the great line "the untyped lambda calculus was the first OO language", something I'd noticed myself -- I won't say independently, since both the author and I were on the cap-talk list (or maybe it was e-lang) back in the day.


If you want a crash course on capabilities, I'd suggest Ben Laurie's paper "Access Control (v0.1)":

http://www.links.org/files/capabilities.pdf


Interestingly, with respect to some of the comments here, Laurie concludes "For those interested in further exploration and experimentation, I would skip the operating system approach".


Slightly OT, but is there a similar or better standard for managing authorization that is more focused on web applications? I see many applications still using RBAC, often mixed with application logic.

I'm not sure about OAuth as I don't have much experience with it, but I think it doesn't scale to fine-grained access control.

ABAC seems to be a better approach, but I see no mainstream implementations of it. There is a company called Axiomatics that apparently has a product, but it is very hard to get your hands on.


Hey, Jacques. I wanted to say that I'm happy you're still around HN. Thanks for sharing a lot of interesting stuff, and experiences regarding business and life in general.

I was just thinking how emptier HN would have felt without all that over the years, so...

Best of luck! Happy holidays.


Thanks! That's very gratif-

oh



