How to use BeyondCorp to ditch VPN, improve security and go to the cloud (blog.google)
441 points by fhoffa on June 20, 2017 | 155 comments

Great to see them continue this series, and glad that this one touches on what it takes for other companies to achieve something similar. I talk about BeyondCorp a lot as evidence that the Zero Trust model works, and that employees will love it.

The most common feedback I get is that it seems like too much of a stretch for companies that don’t operate at Google scale. That may be true if looking at the system as a whole, but the principles behind the architecture should attract anyone’s attention - remove trust from the network by authenticating and authorizing every request based on what’s known about the user and connecting device at the time of the request.
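The per-request principle above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Google's implementation; every name here (Device, Request, POLICY, the trust tiers) is made up:

```python
from dataclasses import dataclass

@dataclass
class Device:
    managed: bool
    os_patched: bool
    disk_encrypted: bool

@dataclass
class Request:
    user: str
    user_groups: set
    device: Device
    resource: str

# Resource -> (required group, minimum device trust tier). Illustrative only.
POLICY = {
    "wiki": ("employees", "basic"),
    "payroll": ("finance", "high"),
}

TIER_RANK = {"untrusted": 0, "basic": 1, "high": 2}

def device_tier(d: Device) -> str:
    # Device posture is evaluated at request time, not assumed from the network.
    if d.managed and d.os_patched and d.disk_encrypted:
        return "high"
    if d.managed:
        return "basic"
    return "untrusted"

def authorize(req: Request) -> bool:
    # Every request is checked against what's known about user AND device now.
    group, min_tier = POLICY[req.resource]
    return (group in req.user_groups
            and TIER_RANK[device_tier(req.device)] >= TIER_RANK[min_tier])
```

The point is that nothing in the decision references a source IP or network segment.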

Disclaimer: I work for ScaleFT, a provider of Zero Trust access management solutions.

Edit: If folks are interested in hearing more about how other companies can achieve something similar, here's video of a talk I gave at Heavybit a few months ago on the subject: https://www.heavybit.com/library/blog/beyondcorp-meetup-goog...

The major barrier is really for companies that lack a lot of internal IT expertise. It's really dangerous for people who don't understand security and networking to just open up like this, since most enterprise software is grotesquely insecure out of the box. Everyone assumes LAN = safe = no need to worry about security. This is always false, but it's especially false if you're devolving away from LAN.

The illusion that it's okay to run cleartext, unauthenticated services on an internal network is also pretty dangerous. Making it clear that the network is out in public might actually yield a better security posture overall.

If an organization is doing 802.1x, competently manages its endpoints (this is a tiny, tiny fraction of "managed" Windows sites), etc then maybe a BeyondCorp-style architecture is a net loss of security.

If an attacker can waltz into a conference room or exploit some salesperson's IE6 and start making requests from the "secure" network, probably best to make it obvious that there is no secure network.

I've believed this for a long time, but try re-educating two generations of IT people who think firewall equals security.

It's hard enough to get them to adopt IPv6 since most think NAT is essential for security. "But my address is world reachable!" Face palm...

Very true... the "ditch your VPN" sure is a nice soundbite, but in reality it's the last thing you should be doing. I mean that literally... as in it's the last step. Better know what you're doing before getting there.

The first couple BeyondCorp papers talk a lot about how Google deployed this architecture side-by-side their traditional LAN, and slowly migrated applications over, only after closely inspecting and understanding the traffic.

But the real point they make is the inverse: Internet != safe, so you very much do need to worry about security.

I'm not sure I understand the argument you're making here. A VPN offers you direct access to all the servers within your internal network. The BeyondCorp model offers you proxied access to only particular applications that have been opened up based on a wide variety of checks on the user and device accessing the application.

How is the latter going to be less secure than opening up your entire LAN to everyone who needs to access a single resource?

There's a good video on the topic from this year's RSA conference here: https://www.rsaconference.com/events/us17/agenda/sessions/66...

The point was that fundamentally the Internet is not safe, so companies will do the right things to secure their resources. So yes, in BeyondCorp this means running a proxy service that centralizes the auth workflow through policies that check the user and connecting device against the resource at the time of the request.
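To make "proxy service that centralizes the auth workflow" concrete, here's a minimal sketch of an identity-aware proxy. Everything is assumed for illustration (the SESSIONS table, the Authorization header scheme); a real deployment would get this from an SSO flow and a device inventory service:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Valid session token -> what we know about the user and device right now.
SESSIONS = {"token-abc": {"user": "alice", "device_trusted": True}}

class AccessProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        session = SESSIONS.get(self.headers.get("Authorization", ""))
        if session is None or not session["device_trusted"]:
            self.send_response(403)   # denied before any backend is touched
            self.end_headers()
            return
        # A real proxy would forward the request to the internal app here.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"proxied response")

    def log_message(self, *args):     # keep the sketch quiet
        pass
```

The enforcement point is the proxy, not the network: an unauthorized caller never reaches the application at all, which is the key difference from "VPN in, then hit anything".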

Exactly. And because you can't be sure that the intervening network is safe, you need to encrypt all the traffic, even after checking authorization and authentication. That's the BeyondCorp mission at Google. [Disclaimer: I work for Google, and worked on these papers and blog post]

off topic: do bastion servers in scaleFTs architecture provide any interactive-session auditing capability (e.g. gravitational teleport), or do they simply act as a bastion access tunneling tier?

If you have interactive session auditing.....you will be hearing from me.

Great question, and not off-topic at all ;)

Our first priority in developing our bastion product was to guarantee end-to-end privacy and verifiability, so the cleartext is not available on any bastion. We do have a roadmap item to support customers' desire for visibility into team activity, but we engineered for privacy first. Our current auditing is event-based - device enrolled, credential issued, ssh/rdp login, etc.

Happy to discuss our roadmap further - ivan.dwyer@scaleft.com

In case anyone is unfamiliar with them, ScaleFT is a leader in this space and a team of solid folks. They took the BeyondCorp paper and model and really ran with it. Worth listening.

I commend the Google team for not only deploying an effective and innovative security solution, but also for contributing to security community through this series of informative articles.

Enterprises need to know that while BeyondCorp is Google-specific, there are similar types of open architectures that they can deploy today, most notably the Software-Defined Perimeter (SDP).

SDP is an open architecture from the Cloud Security Alliance, and with it security teams can ensure that:

- All users are authenticated and authorized BEFORE they can access network resources

- Network resources are inaccessible to unauthorized users, dramatically reducing the attack surface

- Fine-grained policies control access for all users – remote and on-premises – to all resources, whether physical, virtual, or cloud

- All network traffic is encrypted, even if the underlying protocol is insecure

Here’s a video of me presenting on Software-Defined Perimeter at the CSA Summit at the 2017 RSA Conference https://www.youtube.com/watch?v=ysi_9c5fmBg and a brief overview from our corporate site https://www.cryptzone.com/products/appgate/why-a-software-de...

Disclaimer: I led the CSA’s Software-Defined Perimeter working group publication of SDP-for-IaaS, and am leading the current effort to create an SDP Architecture Guide. I also work at Cryptzone, an SDP platform vendor.

My ex-manager, who left Google for another well-established company, once said the thing he missed most from Google was the ability to start working remotely right away on a corp laptop, thanks to BeyondCorp.

Disclaimer: I work for Google, but not on BeyondCorp.

Is logging into a VPN in 3 seconds and starting to work not "right away" enough?

Maintaining a stateful VPN connection is much harder than making a stateless HTTP request.

Modern VPN solutions allow for full IP roaming. Nothing to maintain really.

I see what these guys are trying to get at - it's essentially how I run distributed services for my small business, but having a VPN in front of those is still a more secure option. VPN should not mean the keys to the kingdom; it should indeed be restricted to a subset of explicitly exposed services.

I thought Google doesn't allow remote work?

This is incorrect! I began as a full-time remote employee and stayed so for 16 months until it made more sense for me to move to HQ.

There are hundreds of remote workers, but being local has definitely allowed me to not need to rely on email and video chats so heavily.

(Disclosure: Google employee)

I concur: I'm typing this from my home office in Palm Springs, and I'm a Director at Google.

Directors might get to work remotely. Good for you. I hope you enjoy Palm Springs while your reports are trapped on 101.

Mere Developers are essentially never permitted to work remotely long-term. Google would rather lose someone valuable like Tim Bray to a major competitor than allow him to do so.

If you're a global subject-matter expert like Professor Hinton, maybe you'll be accommodated, but don't you dare mislead people into believing it's remotely common. That would be a lie.

Probably more dependent on one's immediate manager and/or chain than on company-wide policy.

Personally, I would not want to report to someone who spends the majority of time remote. But maybe this person is a really great boss.

> I would not want to report to someone who spends the majority of time remote

Why? There is really little to no difference whether the person is in the next cubicle or a video chat away.

I work on a team with another remote team, and I assure you this is not always the case.

For me it is the glance over to the next cubicle or next aisle to see if the person is on phone, heads down working, or is easily interruptable. I don't have remote technology to do that. The best I have is an IM which is a cognitive load for me and an interruption for them.

Same here - not at the same level though :-) Undisclosed location in the European Alps - I've always worked remotely, even when my company's HQ was in Sunnyvale and I was living a few blocks from the office.

how long did it take for you to obtain clearance on that?

i tried to create a plan to go remote within a year of starting, and i was told that people WAY more senior than me have tried and failed to go remote

How does one request full-time remote work in the interview? Is it normal to do it during the onboarding process?

Full-time remote work? If you're going to reach an agreement on where you will live and work it's better to do it as early in the process as possible. I'd say WELL before onboarding and interview. Like conversation #1 with the recruiter/internal contact. It's about mutual understanding and respect, and making sure your physical position would provide value.

The smaller the team, the better, but it's 100% on you to make the case when there are 70,000 counter-examples in play. The same goes for discussing why a certain regional office (like Seattle) might work vs. Mountain View. You have to be where you will do your best work for yourself and your team. Flying back and forth, and being the one remote person in meetings of ~12-20, got really ridiculous, so a move was inevitable for me.

When I'm applying for jobs I'll open with an email to their recruiter saying that I'm interested and intend to apply, but only if they can confirm they're open to me working remotely the majority of the time; I also mention my expected salary range. Doing it that way saves us both the time and hassle of going through the motions only to find down the line that it would never have worked out because of either work arrangements or salary.

For a big company, Google is surprisingly willing to make the right thing happen in individual cases. My sense is that Google's ban on working from home is a strong default, a rebuttable presumption that working from home would be a bad idea in a given situation. Rebuttable presumptions can be rebutted.

On my team (and most other teams) it is cool to work from home or a different office occasionally.

Maybe you're confused with Yahoo?


They're very, very different companies/work culture :)

One is in the technology industry.

The other is managing its shares in Alibaba.

Very different needs.

Remote work can also mean "you are at a customer facility and need to access the corporate intranet to get a document or access the SW repository".

It's quite rare to have a full time WFH situation. But WFH once or twice a week is quite common. At least in my office.

Well, even if you don't work remotely, you routinely need to access corp resources from outside the office.

This is true for an SRE role, where you would need to work while on call, or whenever you decide to WFH.

I think pure remote role is still allowed but that's not what I mean.

Working remotely part of the time is common. Full time seems rare, and I've known people who had to find another job when their office closed.

I think the only restriction is working on Google-approved hardware, so no BYOD there.

(Disclosure: not a Google employee).

Bring Your Own Device is fine for ChromeOS and Mobile. You might not get the same amount of trust as a Google-issued device (for mobile/tablet).

To achieve the highest levels of access in the BeyondCorp model you need a machine with Google's management agents, so we can evaluate device state accurately and pull information from our inventory management system.

But if you don't provision the device yourself, how can you be sure it hasn't been tampered with in a way that just displays "bootloader OK, everything good" while in the meantime it was rooted? Or is that a risk calculated into the "no full amount of trust"?

That protects against newbies, but we're talking here about Google employees – modifying and cloning the ICs on the board to fake a verified-boot status should be trivial for people who design their own chips and boards for Google's own servers, right?

That would be covered by policy controls, not technical ones—it's the same issue as someone taking pictures of the screen with their personal phone. You'd need to address the actual issue that's causing people to do that (ill-thought-out policies, employee actually working for $INTELLIGENCE_AGENCY, employee enjoys espionage,…).

A recent example would have been the data that was stolen from Google and given to Uber – the employees who were qualified enough to design their own LIDAR chips and boards would equally be qualified to circumvent any such protections.

And that's how/why the legal system gets involved, and why it cost them their jobs. Security isn't just preventing something from ever being possible.

On a Google-approved device, you can still use that device, and copy content to another, non-Google-approved device. Nothing is perfect, but at some point you trust your employees.

You can't clone code from a non-Google-approved device, so if you write code you might as well have a Google-issued machine.

they do; many teams discourage it though

Remote meaning on the Google Bus on the way to the office.

Or working on the boat, as we at Google Zürich do: http://goo.gl/P9CA69

Or working on the train: http://goo.gl/mPZKcc

I don't know why people are negatively marking your post, because this is a thing a lot of people do and it does feel like there is a bit of a stigma to do so.

I work for Duo Security, which this year launched the first major commercial implementation of BeyondCorp as a part of our product offering. Using it to jump on to the wiki, for diff reviews, and other internal resources has been excellent.

In addition to simple primary and second factors, you can design policies for MDM-controlled devices only (i.e. designating endpoints that are trusted for remote access), geolocation, and software versions, on a per-application basis, for example.

I think save for a few use cases (SSH into your datacenter, e.g.), VPNs will be dead before we know it.

VPNs will remain because of SSH, eh?


I think you misunderstood. My point is that you will still need direct access into the network in order to work on the BeyondCorp servers themselves, for example -- not that SSH shouldn't, or couldn't, be covered under the zero trust model as well.


Are you saying that "the bootstraps" (panic access) is a VPN? Why isn't the first level just an open SSH port?

I'm not sure why "when all else fails" is better left a VPN port than an SSH port.

I'd say SSH infrastructure (a server with only pubkey login, maybe behind TCP-MD5 and/or heavily filtered source addresses) is probably more reliable and safer than a VPN.

With a VPN you have two security layers -- one into the network, and a second one into each individual server.

They aren't mutually exclusive. Sure, you can leave SSH open publicly on WAN. I wouldn't for anything mission-critical.

I did mention two other security layers, so I take issue with you counting only one.

Especially since the SSH access would obviously be on a dedicated jumpgate, so that IS two just right there. Maybe run a different OS and architecture to lower the risk of one zero-day piercing both.
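As a sketch, the sshd_config for such a dedicated jump host might look like this (the directives are real OpenSSH options; the values and account names are illustrative, and source filtering can equally live in a firewall in front of sshd):

```
# /etc/ssh/sshd_config on a dedicated jump host
PasswordAuthentication no    # public-key login only
PermitRootLogin no
AllowUsers alice bob         # explicit allow-list (example accounts)
X11Forwarding no
AllowTcpForwarding yes       # forwarding is the jump host's whole job
```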

Also huproxy, or squid, or anything else that provides network-level access.

But I also think that if you consider network-level access as fundamentally different from other access then that's kinda missing the point of BeyondCorp.

> But I also think that if you consider network-level access as fundamentally different from other access then that's kinda missing the point of BeyondCorp.

I understand what you're saying here. I think to me, the point is, don't trust LAN access more than WAN access. But that doesn't mean that restricting LAN access is a bad idea. One of the benefits to BeyondCorp is that you don't (generally) require LAN access in order to access resources. But if your BeyondCorp server goes down, then what? How will you access it?

I probably still wouldn't want to expose my mission critical services over WAN (though I understand your point that VPN is a service exposed over WAN -- and why not SSH then?) Maybe that's wrong of me (I likely haven't given this as much thought as you), especially if you're using TCP-MD5 (which I actually haven't heard of until now, sorry for missing that), or filtering source IP addresses.

This is an interesting discussion, and I really appreciate your thoughtfulness. I'd love to hear more about huproxy and how it's working for you, if you'd care to discuss it more. My email is jmaguire@duo.com.

I've gotten to integrate duo security in a few clients sites and I think you guys are fantastic.

This is really awesome. My own venture ZeroTier (www.zerotier.com) was strongly influenced by the original BeyondCorp paper. Our vision is a little different in that we do network virtualization that treats the whole world like one data center. Instead of eliminating the LAN you make it fully virtual and mobile and replace the physical perimeter with a cryptographic one.

Here's a somewhat over-simplified TL;DR on Google's approach:

Make everything in your company a SaaS app that lives on the Internet via cloud hosting or a proxy.

Nice, but not always readily doable.

Thank you for creating ZeroTier. It is really awesome. It's so much simpler to setup than e.g. OpenVPN and the peer-to-peer architecture also makes a lot more sense to me.

ZeroTier is amazing. I tell everyone about it.

Yesterday, I saw an article[1] about Amazon's plans to block websites in their stores (a very bad thing) and was wondering when a company like Google was going to launch a VPN service. I wonder if these things will meet in the long term. If companies that control the network try to limit access to information about their competitors, then their competitors might try to liberate that information.

[1] http://gizmodo.com/just-in-time-amazon-patents-method-to-pre...

Google has a VPN service, FYI. When you connect to a non-secure network there's an option to proxy all traffic through a free Google VPN.


To a Google-approved Wi-Fi network.

One of the more interesting insights from the comments (which I agree with) was that the Amazon patent was for defensive purposes in order to prevent other companies from trying to implement such an idea in their stores.

I have never given much thought to the idea of defensive patents, but if this is truly the intent of Amazon's patent then it's brilliant.

blush that was the intent.

Look up Macrovision (VHS copy protection from the 20th century). They came up with the scheme, then patented every way they could think of to break it.

Not a lawyer, but wouldn't that be challengeable in court? I see the ethics of it, but maybe legally it falls in the same bucket as patent trolls that hold on a patent with no intention of ever commercializing it.

I thought the whole problem with patent trolls was that holding a patent with no intent of commercializing it was perfectly legal.

Interesting. "Let's block those who would block comparison shopping" certainly makes more free-market sense than "let's block comparison shopping."

Maybe a combination of defensive and opportunistic.

This should be seen as a defensive move so they can sue anyone who comes to market with a product that blocks the shopper's ability to search Amazon while in a given store.

Showrooming benefits Amazon and will continue to until they have a majority of retail space (never).

Back in the day, the one-click patent was claimed to be defensive, too.

These decisions tend to be opportunistic. Or maybe a holder honestly convinces themselves that this particular offensive use is really a "defensive" move.

I give these sorts of declarations the same value I give crime-law proposals where someone pushing it declares that it would never be used in that way.

I'm the one who wrote that patent (sorry).

Amazon or Google? One hour ago you posted that you're "a Director at Google". https://news.ycombinator.com/item?id=14597662

It's plausible that they changed jobs ;).


I see -- I thought the patent was new. I'd ask you why you wrote a patent that allows large companies to block the open flow of online information (considering that it might prompt other companies to block information in different, but similar, ways), but I'm guessing that you won't be able to talk about it.

If you are a good guy, getting a patent for X may help you prevent bad guys from using X.

Also, if you don't patent X, somebody else might, and then figure out a way to use it against you.

News sites too often write patent articles in the form "company A plans to do Z" when the only fact available is that company A has applied for a patent on Z. There's an incentive for a company to patent pretty much everything it can, since besides the patenting costs there's no downside I'm aware of to having extra patents. The costs are probably negligible at Google/Amazon scale when you have good processes.

Amazon uses them offensively, so that argument probably doesn't apply in this case. https://en.wikipedia.org/wiki/1-Click#Barnes_.26_Noble

That's a patent, there is no indication Amazon has any plans to do such a thing.

I know -- that's why I used the phrase "long-term". :)

I probably shouldn't have used the word "plans" though, since I don't know for sure if they will implement it.

They very well could have patented it to prevent anyone else from doing it - which seems more likely.

Amazon employees are encouraged to patent basically anything, at least in AWS. Validity of content or relevance to future business plans isn't really a factor.

It's probably a combination of defensive and opportunistic. Companies stake out positions and then watch how the winds blow.

This seems so completely obvious that it's surprising how common intranets and internal services locked only by network rules are.

Also highly recommend https://www.scaleft.com/ for anyone who wants beyondcorp-style access to infrastructure.

How is this different or more secure than let's say TLS client authentication with the private key on a smart card / Yubikey?

They also take into account the state of the machine you're working on. So a locked bootloader and probably a client cert in a TPM-like component, plus "device health". Client certs alone are good for authentication (they don't work in HTTP/2 though), but they want to reach an even better target - no malicious software running on your computer.

That's from reading old papers, I don't know if anything changed now.

That's correct. Previous papers touch on the inventory data pipeline and machine health, though without as much detail as I might like in your shoes. Our agents track a wide variety of things on client machines, and we use that inventory data to determine how trustworthy a machine could be. [I work at Google, and helped make these papers, and blog post, happen]

Also FYI- there's visible template code here: https://cloud.google.com/iap/docs/quickstart

Interesting design. As far as I understood from old papers client certificates are used only to identify the device while user authentication is handled differently.

Could you elaborate on the technical details on user authentication? (If that's not top-super-secret) I guess it's just like accounts.google.com for Enterprise with mandatory 2FA (username+password+U2F key?). Does it work the same on mobile/Android (U2F via NFC or codes)?

Android supports U2F via NFC and Bluetooth now, which is used for user authentication on Android devices. We've also released an (experimental?) iOS app to support U2F over Bluetooth.


How is trust anchored?

There's TPM and secure boot - does the (presumably signed, in the trusted boot->OS->user binary/service path) agent access signing services from the TPM, backed by a key in the TPM, and use that to identify itself as an authentic agent?

Otherwise I can't see how an (admin) user couldn't extract the key from RAM and run the OS and agent in a VM.

> Client certs alone are good for authentication (don't work in HTTP/2 though)

Um, what? Did client certs get removed from HTTP 2?

I have been collecting conflicting reports on HTTP/2 client certificate support:

14 months ago: "No. But TLS!" https://news.ycombinator.com/item?id=11556762

7 months ago: "421" https://news.ycombinator.com/item?id=13022596

Maybe it would be worth the time to test out the various implementations if someone has not already done so?


Doing some preliminary research, support is still a working draft https://datatracker.ietf.org/doc/draft-bishop-httpbis-http2-... source: https://daniel.haxx.se/blog/2016/08/18/http2-connection-coal...

Yes. As far as I understand, the problem was that the requirement for a certificate is a per-request thing, but HTTP 2 can have multiple requests in flight over the same TLS connection at the same time and thus can't just renegotiate the connection when it comes up. There have been proposals to fix this, but nothing has gained the necessary interest and traction.

Servers can ask the client to fall back to HTTP 1.1 instead, and then use client-certificates there.
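One practical consequence: if you terminate TLS yourself, require the client certificate for the whole connection rather than per-location. A hedged nginx sketch (hostnames and file paths are placeholders; the directives themselves are standard nginx ssl-module options):

```
# Require a client certificate for the whole server block. Asking for certs
# per-location would need TLS renegotiation, which HTTP/2 forbids; verifying
# at connection setup avoids that.
server {
    listen 443 ssl http2;
    server_name internal-app.example.com;
    ssl_certificate        /etc/nginx/server.pem;
    ssl_certificate_key    /etc/nginx/server.key;
    ssl_client_certificate /etc/nginx/trusted-ca.pem;
    ssl_verify_client      on;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```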

Ah that's a good point, thanks!

Dumb question - is the 4th article in the series only available via ;login:[1]?

The other articles in the series have PDF links, but not the latest one. I'm assuming it will eventually...

[1] https://www.usenix.org/publications/login/summer2017/peck

I think that was my mistake, the PDF is in the pipeline, expect it live within a week.

Blog post now links to downloadable PDF

With productivity apps being cloud-hosted (Office 365, Google Docs, Tableau, Power BI, etc.) and with source code and team-management services being hosted (GitHub, Visual Studio Online, GitLab, etc.), a huge percentage of people's day-to-day work can seemingly happen without a VPN.

The largest notable exceptions seem to be internal file shares, and remote connections to machines that need to be behind a firewall.

I guess the overall point I have is that with the data files for both productivity and source code being stored cloud-side, VPNs become less and less necessary for a large percentage of workers.

"The largest notable exceptions seem to be internal file shares, and remote connections to machines that need to be behind a firewall."

Office 365 / OneDrive and Google Drive are even doing away with the requirement for internal fileshares. We used the former heavily at my previous job and I use the latter in my current role. Both have been pretty good alternatives.

I am looking for ideas to control hardware connected to a PC. We use VNC Viewer now. Can BeyondCorp help here?

GH/Gitlab are usually behind VPN for non-startups

Part 3 [0] discusses "Wrapping SSH traffic in HTTP over TLS." Can one comfortably do coding over a good cellular (LTE) connection over this?

I ask because, I find it relatively comfortable to do coding on a chromebook over a 'mosh' session over LTE.

[0] https://static.googleusercontent.com/media/research.google.c...

The actual framing is WebSockets over TLS once the session is established, and the latency is no worse than SSH over VPN, practically speaking.

The protocol also supports session resumption in case your connection to the relays is briefly interrupted, but client support is buggy so it's been disabled for years (with few complaints).

To get good performance, one would need a BeyondCorp enabled mosh proxy.

With plain SSH over HTTP over TLS, performance is satisfactory but not great. 4G is just about usable for vim, but you'd probably be best off using sshfs over http over tls and running vim locally, then compiling and running remotely.
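For the curious, the client side of such a setup usually ends up as an ssh_config ProxyCommand. A sketch, where relay.example.com and corp-relay-client are hypothetical stand-ins for whatever relay client a deployment ships (per the comment above, Google's relays actually speak WebSockets over TLS):

```
# ~/.ssh/config
Host corp-dev
    HostName devbox.internal
    ProxyCommand corp-relay-client --relay relay.example.com:443 %h %p
```

Then `sshfs corp-dev:/home/me/src ~/src` mounts the remote tree, and a local editor works at local latency; only file operations cross the link.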

Git/Mercurial over ssh, and perform actions on push?

Yes, although I tend to use SSHFS and a local editor. Even a cellular quality connection is not required. It works fine from an airplane.

It almost seems like this could be described as dynamically building a per-user VPN, via inbound proxies for admission control and traffic src/dst filtering, and services hosted behind multiprotocol terminating proxies. Some extra client analysis (practically effective, even if no theoretically valid remote attestation), tedious but necessary work to understand the access patterns for all the internal services, etc.

It seems there can still be lateral re-infection via difficult-to-patch shared services (finance/procurement/obscure wikis). The example in one of the papers (delivery people not needing access to financial systems) is completely bogus -- sometimes the worst-engineered, most XSS-y, mission-critical apps have to be accessed by everyone, have insanely hand-coded 'business logic', and no docs. Content-aware behavioral profiling would seem to have a role in managing that risk.

Sorry this will come off as a super dumb question. I use ssh. I can login, edit, develop, run, basically anything. What am I missing? I thought VPNs are for 'admin' types that need access to a MS Excel file.

Google's model allows SSH to "internal machines" over a set of relays that apply the same machine authentication and trust tier logic that's laid out in the papers.

So your workflow would still be supported, and it would likely be more secure than exposing SSH traffic to the internet at large.

"I use ssh. I can login, edit, develop, run, basically anything. What am I missing?"

You're not missing anything and you have an extremely efficient and secure workflow that runs laps around any of this.

The tradeoff is you work in a terminal and understand SSH, etc., which is too much to ask of many non-technical users.

If you wanted to obfuscate your traffic or the direct path to your remote host was blocked for some reason, a VPN might get you there, but you'd still run SSH over that VPN and your workflow should remain unaltered (albeit, higher latency).

To address a sibling post's comment, you can enjoy this very same workflow without exposing your sshd to the global Internet by placing it behind a "knock" with knockd. Highly recommended.
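For reference, here's a knockd setup close to the stock example from its man page (the sequences and paths are illustrative):

```
# /etc/knockd.conf
[options]
    logfile = /var/log/knockd.log

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn

[closeSSH]
    sequence    = 9000,8000,7000
    seq_timeout = 5
    command     = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn
```

The client knocks with `knock server 7000 8000 9000` and then sshes in; port 22 is opened only for that source IP.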

The VPN changes your network route. This can get you around geographic locks (services that only work in certain areas). It can also get you around traffic issues, if your ISP has technical/political routing issues. Like with Comcast/Verizon refusing to add additional peering because they wanted to double-bill netflix traffic.

Some VPN services also advertise additional privacy or anonymity, but trusting a stranger to not sell you out to their local government isn't usually a good idea.

From a business standpoint, you may want web and network services without exposing them to the wider internet. So they're only accessible on IPs in local subnets. VPNs will get you inside the wall.

ssh can be used as a VPN - you can proxy ports and tunnel all sorts of things through it. You can easily drill a connection through to say an "internal" Windows or NFS file server and grab docs off it. There is file transfer built in as well eg sftp and scp, with easy rsync integration.
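A few concrete examples of that tunneling (all hostnames are placeholders):

```
# Forward local port 8445 to an internal SMB file server via the ssh host:
ssh -L 8445:fileserver.internal:445 user@gateway.example.com

# Or open a local SOCKS proxy for arbitrary internal traffic:
ssh -D 1080 user@gateway.example.com

# Built-in file transfer, with rsync riding on top of ssh:
rsync -e ssh -av user@gateway.example.com:/srv/docs/ ./docs/
```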

It doesn't really matter whether you use ssh, RDP or whatever for remote system access but you should be aware of the capabilities of your methods and the strengths and weaknesses of them.

If your username and password are reasonably hard to guess (and ideally you use key-based, passwordless logins), you keep your system regularly patched, you definitely don't allow remote root logins, and you rotate your passwords every 90 days or so, then you should be fine. Don't bother changing port 22 to, say, 2222 or requiring 20+ char passwords. You may want to disable some of sshd's functionality if you don't use it, but that might be a step too far.
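As a sketch, that advice maps onto a handful of sshd_config directives (an excerpt, not a complete config; adjust to taste):

```ini
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no            # no remote root logins
PasswordAuthentication no     # key-based ("passwordless") logins only
PubkeyAuthentication yes
MaxAuthTries 3
# the "step too far" options: disable features you don't use
X11Forwarding no
AllowTcpForwarding no
```

Remember to reload sshd after editing (e.g. `systemctl reload sshd`), ideally while keeping an existing session open in case you lock yourself out.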

Also, rotate your sshd's host keys occasionally, get them into your local ~/.ssh/known_hosts as soon as possible, and read up on why ssh warns you when the keys and names look odd; that could save you from a MitM attack by a bored techie in a hotel with wifi, or whatever.
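One way to pre-seed known_hosts as suggested (hostname is a placeholder, and ssh-keyscan is itself only trustworthy when run from a network you trust):

```shell
# record the server's host key before you ever connect from hostile wifi
ssh-keyscan -t ed25519 myserver.example.com >> ~/.ssh/known_hosts

# later: check which key you have on file for that host
ssh-keygen -F myserver.example.com
```

If the key ssh presents on connect doesn't match what's on file, you get the loud "REMOTE HOST IDENTIFICATION HAS CHANGED" warning rather than a silent MitM.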

To sum up: a well-handled sshd and client can make a fairly decent VPN and remote access solution. However, a separate VPN, e.g. OpenVPN, with ssh run over that is better, and it need not be inconvenient.

You pays your money .... 8)

Duo Security seems to be offering a BeyondCorp-like third-party solution for client companies:


"Over the course of the migration we’ve discovered [Google] services that we thought were long dead..."

Maybe some Google employees were still using Google Reader?

Stumbled across beyondcorp.com a few months ago. Great to see google, scaleft, and others pushing the envelope here.

This sounds a lot like Microsoft's DirectAccess which has been in the Enterprise version of Windows since Windows 8. Please correct me if I'm wrong though.

Kind of. Microsoft sold it more as an always-on VPN; they weren't selling a radically different philosophy for securing your network with it. But regardless of the differences, Microsoft really hamstrung themselves by making it so Windows-centric.

At the moment Cisco is getting our money for our VPN rather than MS, specifically because our Linux users couldn't leverage DA.

Interesting. This will never happen at my big company, though. Seems hard to imagine most companies being able to manage the complexity.

You never know. SaaS is kind of a gateway drug to BeyondCorp since SaaS isn't inside the firewall to begin with. The next step is to start applying a SaaS mindset to your own internal apps and then you're mostly there.

Trying to secure a traditional corporate "intranet" while enabling productive work is much harder. This is just branded common sense end to end security.

The link in the blog post now points to a downloadable PDF, thanks to Google Drive.

Still not seeing that.

Anywhere we can read the publication without being a subscriber to LOGIN?

Is there a link to the actual (fourth) paper? I only see the abstract.

Working on that now, I think I messed up on my end with our internal tool, hope to have the full PDF download from research.google.com in a day or two, maybe next week if I epic failed. [I work at Google, and helped make these papers, and blog post, happen]

& add cross-link to things like google/huproxy ?

"We discovered services we thought were long dead..."

What's wrong with VPN?

Just doing VPN reinforces the wrong notion that LAN == Safe.

Wait, that's the massive problem? Jesus.

I'll stick with a VPN.

off topic: I never noticed that there's a .google TLD...

Google wants my traffic for themselves and calls it “more secure”. Ha ha, nice try.

I n k


So google bought the .google TLD!

Oh well, no way to delete my comment now.

Yes, turn your keys over to Google. If you are an American Fortune 500 company, maybe you have no problem with this; not so if you are a non-American company. Still, a lot of people will jump on board despite the huge security implications of handing all of your security over to Google. Meanwhile, nation states are exploring quantum encryption to prevent eavesdropping, while others are being coerced into simply handing their security over to a third party they hardly trust with any sense of privacy.

It seems from the article this is only being offered as a product to people already using Google Cloud services, specifically for accessing those services? Otherwise it's just a series of papers describing the system.

You're right, the initial version of Identity-Aware Proxy (IAP) is for Cloud applications, but that's not the end of the story, and we're learning from BeyondCorp's 7 year journey to inform the direction of IAP going forward. [I work at Google, and helped make these papers, and blog post, happen]

Thanks for sharing through the papers and posts, they've been incredibly informative. Keep up the good work!

When are we going to hear about further contextual auth capabilities coming to IAP? It's awesome.

How do you use IAP with GKE?

Haven't tried yet myself, but since the ingress resource is just an https load balancer, enable IAP on that. Like so: https://medium.com/@DazWilkin/google-cloud-iap-and-gke-c773d...

Edit more direct: https://cloud.google.com/iap/docs/container-engine-quickstar...
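Per those links, GKE lets you switch IAP on per-backend via a BackendConfig resource attached to the Service behind the ingress; roughly like this (names and the secret are placeholders, and the apiVersion has changed across GKE releases):

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: iap-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: iap-oauth-client-secret   # OAuth client ID/secret stored as a k8s Secret
```

You then reference it from the Service with the `cloud.google.com/backend-config` annotation so the HTTPS load balancer created by the ingress enforces IAP.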

Much like 'bigtable' was a Google-internal product with only a set of papers describing the system published, and now we have HBase. Or how 'mapreduce' was a Google-internal product, and now we have Hadoop, etc.

Calling it "Google BeyondCorp" makes it sound like a product; maybe if they called it something like "BeyondCorp architecture" it would be clearer what they're talking about.

It's a way of doing things, nothing specific to Google.

Use any authentication/identity service and publish all internal services as public apps, consolidating access, increasing security, and simplifying maintenance.
