
BeyondCorp: How Google Ditched VPNs for Remote Employee Access - dsr12
https://thenewstack.io/beyondcorp-google-ditched-virtual-private-networking-internal-applications/?_lrsc=736ef0f1-9e4b-4970-81fc-82957c3e6c0f
======
tyler_larson
Most companies need to be able to answer the question, "is this client one of
ours," when protecting sensitive resources.

Most companies will instead answer the question, "is the client on our
network," and pretend that it was the same question. The fact that it clearly
is not has some very obvious security implications and attack vectors that
we've been living with for decades.

BeyondCorp tries to more directly answer the original question about device
identity rather than subbing in the network question in its place.

The fact that this approach is novel says a lot about the maturity of our
industry.

~~~
colmmacc
Even the question "Is this client one of ours?" is a bad question to ask. A
much better question is "Is this specific action authorized and
authenticated?".

When you only authenticate a client, with a mechanism such as TLS Mutual-Auth,
or ALTS, you still aren't really authenticating the actions, just the channel.
That leaves the system open to request smuggling attacks, hijacking attacks,
context-mismatch attacks (TLS is particularly cumbersome here, because
authentication contexts can change mid-request), layering violations like
credential lengthening problems (do you tear down a previously opened
connection when the credential used to establish it expires or is revoked?),
and vulnerabilities in the channel authentication mechanism (e.g. X509 and
ASN1 are both notorious problem areas).

I work at AWS, so I'm biased, but it seems much stronger to me to use a system
that AAA's each action, like a request signing protocol (ours is
[https://docs.aws.amazon.com/general/latest/gr/signature-
vers...](https://docs.aws.amazon.com/general/latest/gr/signature-
version-4.html) ).

Request signing systems like that authenticate each action, which is very
granular; they permit strong offline signing (just like my iPhone banking app
does), they're agnostic to the details of networks and clients, and attacks
like hijacking and smuggling just can't work.

Of course, today's browsers aren't really set up for this; they don't support
built-in request signing, but I still find it a little weird to see
VPNs/networks traded for a model whose expiry date also went by years ago.
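
The per-request idea can be sketched in a few lines. This is an illustrative
HMAC example of the general technique, not AWS's actual SigV4 canonicalization;
the key and field choices are made up:

```python
import hashlib
import hmac

# Hypothetical shared credential; a real deployment would derive scoped keys.
SECRET = b"example-secret-key"

def sign_request(method: str, path: str, body: bytes) -> str:
    # Canonicalize the action itself (method, path, body hash) so that a
    # proxy rewriting or smuggling a different request invalidates the MAC.
    canonical = "\n".join([method, path, hashlib.sha256(body).hexdigest()])
    return hmac.new(SECRET, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(method: str, path: str, body: bytes, signature: str) -> bool:
    # Recompute and compare in constant time; no connection state needed.
    expected = sign_request(method, path, body)
    return hmac.compare_digest(expected, signature)
```

Note that verification depends only on the request content, not on the channel,
which is why smuggled or replayed-on-another-path requests simply fail to
validate.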

~~~
Shoothe
Request signing can be easily implemented in JavaScript for API requests with
WebCrypto, but it's not clear to me in what threat model it would be
beneficial: even if the keys are not exportable, users don't generally see
what actions they authorized, and as such they don't know what exactly they
signed.

~~~
colmmacc
The main threats are protocol and network level. For example, request-
smuggling and hijacking attacks can take the form of bugs in proxies and
servers that allow requests to be smuggled because they don't escape newlines
in headers and so on. With signing, these requests don't validate.

It takes a much smaller TCB, and no connection state, to AAA a signed request,
but with mutual-auth you need a state machine and the TCBs for X509/ASN.1
validation tend to be huge. That's not what you want in a security critical
control. Honestly, enabling normal TLS mutual-auth likely _degrades_ security
in most cases, as it opens the server to whatever attacks the X509 processing
is vulnerable to. ALTS mitigates this somewhat by using Protobufs, but that's
still a very very big TCB. Compare that to say the TCB for validating HMAC or
ed25519.

Then there's the basic stuff I already pointed out, like sessions lasting
longer than their credentials are valid for. The layering violation invites
these kinds of issues.

~~~
Shoothe
Hmm... But you'll still use TLS for transport, so adding mutual auth doesn't
really increase the attack surface that much. (It still requires ugly ASN.1
and X.509.) Or maybe you suggest using signed requests without TLS (plain
HTTP)?

~~~
colmmacc
A TLS server without mutual auth doesn't need to do any online X509
processing, and only a tiny amount of ASN.1 (parsing the DH share, which is
easy). It just serves certs, without parsing them.

Mutual auth increases the TCB by a lot, the Kolmogorov complexity increases by
several orders of magnitude.

~~~
Shoothe
Good point. Although I think the design was chosen because of the complexity
of infrastructure behind your TLS server. Adding more things for the client to
do so that you don't need to trust any intermediaries inside AWS data centers.
(I'm not complaining, just an observation from my POV).

(For the record, in BeyondCorp all backend components are mutually
authenticated, but they still use sessions and U2F tokens, so there are no
trusted points).

------
wyc
The short and long of BeyondCorp:

\- Instead of a single VPN that will expose your entire squishy corporate LAN
to anyone who gets VPN access, each application gets its own protected proxy.

\- The protected proxies query a centrally-aggregated auth/authz database,
which can work with client-side software to ensure qualities such as private
key possession, full disk encryption, software updates, etc. In Google's case,
this is combined with a host-rewriting browser extension for usability.

\- Access proxies can easily funnel HTTP traffic, but some more clever
solutions involving tunnels exist for plain old TCP and UDP.

By giving every application its own authentication and access control proxy,
each application is secured on its own, hence "zero-trust."
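
The proxy's decision can be sketched roughly like this. This is a toy model of
the tiered-trust idea; the field names and tier logic are invented for
illustration, not Google's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Device:
    # A record from the centrally-aggregated device inventory.
    owner: str
    disk_encrypted: bool
    patched: bool

def trust_tier(device: Device) -> int:
    # Each posture check the device passes raises its trust tier.
    return int(device.disk_encrypted) + int(device.patched)

def authorize(user: str, device: Device, app_required_tier: int) -> bool:
    # The proxy authenticates the user separately (e.g. SSO + U2F); here we
    # only gate on device state, independent of which network it is on.
    return device.owner == user and trust_tier(device) >= app_required_tier
```

The point is that the decision consumes user identity plus live device state,
never the source network.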

~~~
darksim905
Do you have more information on these 'protected proxies' that you mention? My
understanding was more that each client was given a health check of sorts and
was either allowed or not allowed after meeting a certain number of criteria.

~~~
gbil
For commercial solutions, take a look at Zscaler's offerings.

The biggest problem with such solutions is that you have to identify the
applications you have, and sure, Google is a relatively new tech company. Try
to identify all apps/services at big old companies. Until you do that or
decommission them, you either have to keep your old VPN solution up or proxy
ALL traffic and use the analysis tools they have to identify apps/services;
again, no walk in the park.

Of course as mentioned Google has most of its apps/services anyhow on the
Internet so you mostly just use the host checking and client identification
functionalities of such tools.

~~~
e12e
You could probably keep the vpn and incrementally move services "out",
starting with low-hanging fruit (few dependencies) and/or popular ones (eg:
(Web)mail). Basically low cost/high benefit trade-off.

At some point you'll have a few dinosaurs on the vpn, and can take those
services quietly out back and retire them permanently.

------
discreditable
> All the corporate resources are behind this uber-reverse proxy. Based on a
> decision provided by its “trust engine,” the proxy makes the decision of
> whether or not to provide access to the desired application. If permissions
> are in place, according to the tiered trust model, it forwards the requests
> to the application, along with the security credentials.

To me this sounds like a firewall/vpn moved into the application layer.

~~~
pyre
Sounds like the difference between "authenticate, then access this
application" and "authenticate, then access our network which has this
application accessible on it."

~~~
Florin_Andrei
I mean, what the OP said implies what you're saying.

------
gervase
I know the approach described in the article is not particularly new [0], but
I think it deserves to get more traction than it does (AFAIK).

I do have some other questions, though:

1) Does this infrastructure support BYOD, and if so, what does the
provisioning process look like?

2) What permissions do employees have on their devices?

3) How are device compromises handled?

[0]
[https://research.google.com/pubs/pub43231.html](https://research.google.com/pubs/pub43231.html)

~~~
tyler_larson
The answer to both the byod and permissions questions is the "tiered" device
trust part from the article. You, the policy-maker, decide how certain you are
that a device hasn't been pwned given its provenance and user access story,
and you assign a "trust tier" accordingly, which determines what resources it
can access.

I don't think beyondcorp necessarily changes your incident response story,
assuming you already have one.

A lot of this discussion glosses over the fact that U2F really makes this a
viable system. U2F solves the MITM problem and ensures that anyone who
logs in does so with a company-issued hardware authenticator in physical
communication (usually USB, but maybe also NFC or Bluetooth) with the client
device. This means that even in a byod story, there's a piece of corp-issued
hardware always attached. This in turn means that impersonation requires
physical device theft in addition to credential theft.

~~~
e12e
> A lot of this discussion glosses over the fact that U2F really makes this a
> viable system. U2F solves the MITM problem and ensures that anyone who
> logs in (…)

Makes viable: certainly; solves: not so sure. Session hijacking doesn't
magically cease to be a problem.

~~~
nine_k
It becomes much less of an issue if the connection is re-negotiated
periodically, and a new key may require a physical action (touch) from the key
generator.

------
peterwwillis
Reasons your company will never adopt BeyondCorp:

\- Your company does not do real inventory management, or if they do, they do
it partially, and poorly.

\- Your company does not manage all the infrastructure in a single place.
Adopting this kind of authentication+authorization requires putting more trust
in your local network in order to allow your internet gateway access to all
your internal resources.

\- Your company's network traffic is made up of a bunch of random protocols,
authentication+authorization realms, and edge cases, half of which won't work
with this system.

\- You don't need it.

\- If you have a couple hundred thousand employees and millions to burn on
building universal integration systems, you're going to write your own stuff
anyway.

~~~
dingaling
> Your company's network traffic is made up of a bunch of random protocols

Basically anything other than HTTPS

------
zdw
Lots of things become simpler when everything is a layer 4 or 7 service, and
protected by SSL, and you're enabled to mandate hardware and software upgrades
across your entire device fleet.

Another reuse of this philosophy can be seen in the Istio project, which
combines Kubernetes and the Envoy proxy to authenticate and secure
communication within a microservice architecture.

------
nirv
Um, I'm not Google, but I think for a typical small/medium-scale tech business
we're not there yet.

> "The problem with the “castle” approach is that once the perimeter is
> breached, the entire internal network, and all the associated applications,
> are at risk. “Do not trust your network. It is probably already owned”"

Considering the example of a common software development company, we may
assume they use a VPN to get into a private network with their project
management, git, devel, staging, backups, documentation and other
servers/applications. Each of them requires user authentication, and each user
has their own privileges. The VPN here adds an extra layer of security. But
either way, behind the VPN or not, the services potentially carry the same
level of risk. Implementing perimeter security doesn't imply a lack of
security in the services within.

> "Google’s approach involves comprehensive inventory management, one that
> keeps track of who owns which machine in the network. A Device Inventory
> Service collects a variety of live information about each device […]
> Employees get the appropriate level of access regardless of what device they
> are using or where in the world they are logging in from. Lower levels of
> access require less stringent checks on the device itself. […] The
> applications themselves are routinely checked for breaches by vulnerability
> scanners."

> "VPN was cumbersome to use, and slowed performance, especially for overseas
> workers. And it is no walk in the park for admins either. To set up a new
> user, the admin would typically have to configure the cloud network, along
> with setting up the IPSec rules and firewall rules, the VPN. This is
> followed by a lot of testing"

Again, I'm glad that it works for Google and that they're able to routinely
check all hardware credentials and servers "for breaches by vulnerability
scanners", but this whole passage and complexity scheme behind it causes me a
headache. I think I'll continue to rely on the traditional VPN, but based on
the modern lean WireGuard[1].

[1] [https://www.wireguard.com/](https://www.wireguard.com/)
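
For comparison, the lean setup being described is roughly this small. A
minimal sketch of a WireGuard server config (`wg0.conf`); the keys and
addresses are placeholders, not working values:

```ini
[Interface]
# Server-side identity and tunnel address.
PrivateKey = <server-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# One block per authorized client device.
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Each new user is just another `[Peer]` block plus a key pair on the client,
which is the simplicity being traded against BeyondCorp's per-request model.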

~~~
oarsinsync
You may wish to reconsider your position of relying on WireGuard. From their
website:

> WireGuard is not yet complete. You should not rely on this code. It has not
> undergone proper degrees of security auditing

~~~
Hello71
I would trust my WireGuard significantly more than any installation of IPsec,
if nothing else because I will almost certainly configure IPsec in a manner
that is completely insecure, but not obviously so.

------
1024core
Last time this created a lot of discussion in HN:
[https://news.ycombinator.com/item?id=14596613](https://news.ycombinator.com/item?id=14596613)

------
nealmueller
Slides from our presentation: [https://conferences.oreilly.com/security/sec-
ny/public/sched...](https://conferences.oreilly.com/security/sec-
ny/public/schedule/detail/61327)

~~~
nealmueller
Video
[https://www.youtube.com/watch?v=oAvDASLehpY](https://www.youtube.com/watch?v=oAvDASLehpY)

------
code4tee
I think there are two points of view here. On the one hand, yes, if someone
gets into your castle then they may have a lot of access you don’t want to
give them. However, that assumes that things inside these walls are also not
secured, which is often not the case.

The other point of view is that the castle wall gives you added protection
against unknown unknowns that could mean there are security issues with your
now-public-facing infrastructure. By just exposing everything publicly you
create a potentially big risk. Google’s preaching here glosses over this
fact.

~~~
nkassis
It's not quite removing the walls, think of it more as a ton of tiny castles
dotting the landscape with equally good walls instead of one giant wall around
the kingdom and some unprotected wooden huts inside.

By doing this they want to force those managing internal apps to put up
protection on the level they would for any other external service. Overall
it's leading to better security (at least that's what they argue).

------
kdv
BeyondCorp sounds great in theory, but deployment sounds like a nightmare
without going to one of the several companies that are offering it as a
service. It's certainly not as accessible as a decent VPN w/ 2FA, and I doubt
we'll see mass deployment for smaller groups until then.

~~~
datguacdoh
We wrote a lot more about what we did to ensure the end user experience was
good in
[https://research.google.com/pubs/pub46366.html](https://research.google.com/pubs/pub46366.html)

You're right that it's still early for companies that don't have the same
resources of a company like Google, but products are slowly starting to emerge
to make it more turnkey, so I have high hopes that this will be the norm for
new companies in a few years.

------
arekkas
It's really nice that we see more and more awareness for Zero Trust and
specifically Google's BeyondCorp whitepaper. If you're looking to experiment
with this model yourself, check out the following open source projects. While
they might not implement everything in Google's BeyondCorp paper yet, they are
pretty close to the full thing, and address many issues raised in the
comments.

-> OAuth2 Authorization Server [https://github.com/ory/hydra](https://github.com/ory/hydra)

-> Identity & Access Proxy (early access): [https://github.com/ory/oathkeeper](https://github.com/ory/oathkeeper)

If you have questions don't hesitate to ask.

~~~
theptip
These look great, a couple questions:

1) Are these deployed at scale anywhere?

2) Any known security audits?

Thanks!

~~~
arekkas
1) Hydra is deployed at scale, Oathkeeper is our new kid on the block

2) We have an OpenID Connect certification coming in, but no security audits
so far

------
nwah1
Are there any decent open source implementations that a small company could
deploy?

~~~
marcc
Check out scaleft.com. They have tooling to help implement this. It looks to
be largely focused on ssh access but there is some stuff about controlling app
level access also.

~~~
fortyfivan
I'm from ScaleFT, thanks for the mention. True that our original focus was on
SSH/RDP access; however, we've recently introduced Web access as well.

[https://www.scaleft.com/blog/how-to-deploy-a-beyondcorp-
styl...](https://www.scaleft.com/blog/how-to-deploy-a-beyondcorp-style-web-
app-behind-the-scaleft-access-fabric/)

I agree with many commenters that it appears transformative, but that's only
through the lens of Google. Centralized access controls at Layer 7 through a
proxy service that can authenticate and authorize requests, while brokering
encrypted sessions, aren't that out of reach. Our goal at ScaleFT is to offer
as much of this as a service as we can.

Where things do get tricky, though, is with the access policies and device
attestation in a BYOD environment. Admittedly, we have work to do in this
regard, but it may not require a full MDM layer. Really, you only need to
query device state at a given time to make an authZ decision.

Love to see BeyondCorp get more coverage, and I hope to see further adoption
outside of Google.

------
maxsaltonstall
Hi, I'm Max, one of the two people who gave the talk this post references. I
work in Google Cloud and help publish papers about how we have done
BeyondCorp. Ask me questions!

------
wsetchell
Can someone explain what makes "corporate" applications so different?

Why does my corporate bug tracker need different security from Gmail? Does
that different security do anything?

~~~
theptip
Typically the corporate environment includes many "line of business" apps,
which are often a simple web interface onto a custom application. It's not
common for these to have Gmail-level hardening, and so security is provided by
restricting access to the underlying network with a VPN.

~~~
zokier
> It's not common for these to have Gmail-level hardening

Translation for those not in the business: They are custom J2EE applications
made by the combined forces of over-expensive consultants and the CEO's
nephew, running on unpatched Windows 2003 servers with a plain HTTP login form
(obviously vulnerable to SQL injection) and a DB listening on 0.0.0.0 with
default admin credentials. But it's on the internal network, so don't worry,
no hacker can get there.

------
marmaduke
Maybe naive, but can this sort of thing be accomplished with the likes of
FreeIPA and NFSv4 (with Kerberos auth), or is this entirely another ballpark?

------
cowmix
This is great for "people" accessing server resources but not so much for
automation / API access.

~~~
pquerna
Why can't the "Uberproxy" (in Google terms) consume Authentication mechanisms
like spiffe[1] certificates and allow access to protected resources via those?

There's no reason a BeyondCorp architecture needs to make automation or API
access harder.

[1] - [https://spiffe.io/](https://spiffe.io/)

~~~
Shoothe
Because for Google it's not enough that you have a certificate if your client
is using an old unpatched OS and is potentially vulnerable.

------
samcat116
This seems very similar to what Microsoft is trying to do with Azure AD and
Intune.

------
en4bz
Why not just use Citrix?

