The most common feedback I get is that it seems like too much of a stretch for companies that don't operate at Google scale. That may be true of the system as a whole, but the principle behind the architecture should attract anyone's attention: remove trust from the network by authenticating and authorizing every request based on what is known about the user and the connecting device at the time of the request.
Disclaimer: I work for ScaleFT, a provider of Zero Trust access management solutions.
Edit: If folks are interested in hearing more about how other companies can achieve something similar, here's video of a talk I gave at Heavybit a few months ago on the subject: https://www.heavybit.com/library/blog/beyondcorp-meetup-goog...
If an organization is doing 802.1x, competently manages its endpoints (a tiny, tiny fraction of "managed" Windows sites), etc., then maybe a BeyondCorp-style architecture is a net loss of security.
If an attacker can waltz into a conference room or exploit some salesperson's IE6 and start making requests from the "secure" network, probably best to make it obvious that there is no secure network.
It's hard enough to get them to adopt IPv6 since most think NAT is essential for security. "But my address is world reachable!" Face palm...
The first couple BeyondCorp papers talk a lot about how Google deployed this architecture side-by-side their traditional LAN, and slowly migrated applications over, only after closely inspecting and understanding the traffic.
But the real point they make is that the Internet != safe, so you should very much worry about security.
How is the latter going to be less secure than opening up your entire LAN to everyone who needs to access a single resource?
If you have interactive session auditing... you will be hearing from me.
Our first priority in developing our bastion product was to guarantee end-to-end privacy and verifiability, so the cleartext is not available on any bastion. We do have a roadmap item to support customers' desire for visibility into team activity, but we engineered for privacy first. Our current auditing is event-based - device enrolled, credential issued, ssh/rdp login, etc.
Happy to discuss our roadmap further - firstname.lastname@example.org
Enterprises need to know that while BeyondCorp is Google-specific, there are similar types of open architectures that they can deploy today, most notably the Software-Defined Perimeter (SDP).
SDP is an open architecture from the Cloud Security Alliance, and with it security teams can ensure that:
- All users are authenticated and authorized BEFORE they can access network resources
- Network resources are inaccessible to unauthorized users, dramatically reducing the attack surface
- Fine-grained policies control access for all users – remote and on-premises – to all resources, whether physical, virtual, or cloud
- All network traffic is encrypted, even if the underlying protocol is insecure
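The "authenticated BEFORE they can access network resources" property is typically achieved with single-packet authorization (SPA): the gateway drops all traffic except a cryptographically signed "knock," so unauthorized users never even see an open port. A minimal sketch in Python (the packet layout and key provisioning here are illustrative, not taken from the CSA specification):

```python
import hashlib
import hmac
import os
import struct
import time

# Hypothetical per-device key, provisioned out of band during enrollment.
SHARED_KEY = os.urandom(32)

def make_spa_packet(device_id: str, key: bytes) -> bytes:
    """Client side: a single datagram proving identity before any port opens."""
    payload = struct.pack(">Q", int(time.time())) + device_id.encode()
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_spa_packet(packet: bytes, key: bytes, max_skew: int = 30) -> bool:
    """Controller side: only fresh packets with a valid HMAC earn a firewall hole."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # unauthenticated senders get silence, not an error page
    (ts,) = struct.unpack(">Q", payload[:8])
    return abs(time.time() - ts) <= max_skew  # reject replays outside the window
```

In a real SDP deployment the controller would react to a valid packet by programming the firewall to admit that source IP, rather than returning a boolean.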
Here's a video of me presenting on the Software-Defined Perimeter at the CSA Summit at the 2017 RSA Conference: https://www.youtube.com/watch?v=ysi_9c5fmBg, and there's a brief overview on our corporate site.
Disclaimer: I led the CSA’s Software-Defined Perimeter working group publication of SDP-for-IaaS, and am leading the current effort to create an SDP Architecture Guide. I also work at Cryptzone, an SDP platform vendor.
Disclaimer: I work for Google, but not on anything related to BeyondCorp.
I see what these guys are trying to get at - it's essentially how I run distributed services for my small business - but having a VPN in front of those is still a more secure option. VPN should not mean the keys to the kingdom; access should indeed be restricted to a subset of explicitly exposed services.
There are hundreds of remote workers, but being local has definitely allowed me to not need to rely on email and video chats so heavily.
(Disclosure: Google employee)
Mere Developers are essentially never permitted to work remotely long-term. Google would rather lose someone valuable like Tim Bray to a major competitor than allow him to do so.
If you're a global subject-matter expert like Professor Hinton, maybe you'll be accommodated, but don't you dare mislead people into believing it's remotely common. That would be a lie.
Personally, I would not want to report to someone who spends the majority of time remote. But maybe this person is a really great boss.
Why? There is really little to no difference whether the person is in the next cubicle or a video chat away.
I tried to create a plan to go remote within a year of starting, and I was told that people WAY more senior than me have tried and failed to go remote.
The smaller the team, the better, but it's 100% on you to explain why, when there are 70,000 counter-examples in play. The same would go for discussing why a certain regional office (like Seattle) would maybe work vs. Mountain View. You have to be where you can give your best work to yourself and your team. I got really tired of flying back and forth, and holding meetings as the one remote person out of ~12-20 got really ridiculous, so a move was inevitable for me.
They're very, very different companies/work culture :)
The other is managing its shares in Alibaba.
Very different needs.
I think a pure remote role is still allowed, but that's not what I mean.
(Disclosure: not a Google employee).
To achieve the highest levels of access in the BeyondCorp model you need a machine with Google's management agents, so we can evaluate device state accurately and pull information from our inventory management system.
Or working on the train: http://goo.gl/mPZKcc
In addition to a simple primary and second factor, you can design policies around MDM-controlled devices only (i.e. designating which endpoints are trusted for remote access), geolocation, and software versions, on a per-application basis, for example.
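The per-application policy idea is easy to sketch: each app declares which device attributes it requires, and the proxy evaluates them on every request. All names and thresholds below are hypothetical, not Google's actual policy schema:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    """Attributes an inventory/MDM system might report (illustrative names)."""
    mdm_managed: bool
    os_version: tuple  # e.g. (10, 13, 2)
    country: str

# Hypothetical per-application policies, in the spirit of the comment above:
# low-value apps are lenient, sensitive apps demand managed, patched devices.
POLICIES = {
    "wiki":    {"require_mdm": False, "min_os": (10, 0, 0),  "countries": None},
    "payroll": {"require_mdm": True,  "min_os": (10, 13, 0), "countries": {"US", "CA"}},
}

def allowed(app: str, device: DeviceState) -> bool:
    p = POLICIES[app]
    if p["require_mdm"] and not device.mdm_managed:
        return False
    if device.os_version < p["min_os"]:
        return False  # tuples compare component-wise, like version numbers
    if p["countries"] is not None and device.country not in p["countries"]:
        return False
    return True
```

The point is that trust becomes a per-request computation over device state, not a property of the network segment the request came from.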
I think save for a few use cases (SSH into your datacenter, e.g.), VPNs will be dead before we know it.
Are you saying that "the bootstraps" (panic access) is a VPN? Why isn't the first level just an open SSH port?
I'm not sure why "when all else fails" is better left to a VPN port than an SSH port.
I'd say SSH infrastructure (a server with only pubkey login, maybe behind TCP-MD5 and/or heavily filtered source addresses) is probably more reliable and safer than a VPN.
They aren't mutually exclusive. Sure, you can leave SSH open publicly on WAN. I wouldn't for anything mission-critical.
Especially since the SSH access would obviously be on a dedicated jumpgate, so that IS two layers right there. Maybe run a different OS and architecture to lower the risk of one zero-day piercing both.
Also huproxy, or squid, or anything else that provides network-level access.
But I also think that if you consider network-level access as fundamentally different from other access then that's kinda missing the point of BeyondCorp.
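For what it's worth, huproxy's usual wiring is an SSH ProxyCommand that tunnels the TCP stream over WebSockets/TLS through an authenticating HTTPS proxy; something like this in ~/.ssh/config (hostnames and paths are placeholders):

```
Host internal-box
    # Tunnel the SSH connection through the identity-aware HTTPS proxy,
    # so sshd itself is never exposed directly to the Internet.
    ProxyCommand /usr/local/bin/huproxyclient wss://proxy.example.com/proxy/%h/%p
```

From ssh's point of view nothing changes - keys, agent forwarding, and mosh-style workflows keep working - but the network-level reachability decision moves to the proxy.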
I understand what you're saying here. I think to me, the point is, don't trust LAN access more than WAN access. But that doesn't mean that restricting LAN access is a bad idea. One of the benefits to BeyondCorp is that you don't (generally) require LAN access in order to access resources. But if your BeyondCorp server goes down, then what? How will you access it?
I probably still wouldn't want to expose my mission-critical services over the WAN (though I understand your point that a VPN is itself a service exposed over the WAN - and why not SSH then?). Maybe that's wrong of me (I likely haven't given this as much thought as you), especially if you're using TCP-MD5 (which I actually hadn't heard of until now, sorry for missing that), or filtering source IP addresses.
This is an interesting discussion, and I really appreciate your thoughtfulness. I'd love to hear more about huproxy and how it's working for you, if you'd care to discuss it more. My email is email@example.com.
Here's a somewhat over-simplified TL;DR on Google's approach:
Make everything in your company a SaaS app that lives on the Internet via cloud hosting or a proxy.
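At its core, "everything behind a proxy" means each request arrives with an identity assertion that the proxy verifies before forwarding to the backend. A toy sketch of the check (the header name, token format, and key handling are made up for illustration; real deployments use signed JWTs or mutual TLS):

```python
import hashlib
import hmac
from typing import Optional

# Hypothetical key shared between the identity provider and the proxy.
PROXY_KEY = b"rotate-me-regularly"

def sign(user: str) -> str:
    """Identity provider side: mint a token binding the username to an HMAC."""
    mac = hmac.new(PROXY_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{mac}"

def authorize(headers: dict) -> Optional[str]:
    """Proxy side: return the verified username, or None to reject (401/403)."""
    token = headers.get("X-Asserted-Identity", "")
    user, _, mac = token.partition(":")
    expected = hmac.new(PROXY_KEY, user.encode(), hashlib.sha256).hexdigest()
    return user if user and hmac.compare_digest(mac, expected) else None
```

Once every backend trusts only requests relayed by the proxy, "on the corporate LAN" stops being a meaningful access-control signal.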
Nice but not always readily do-able.
I have never given much thought to the idea of defensive patents, but if this is truly the intent of Amazon's patent then it's brilliant.
Showrooming benefits Amazon and will continue to until they have a majority of retail space (never).
These decisions tend to be opportunistic. Or maybe a holder honestly convinces themselves that this particular offensive use is really a "defensive" move.
I give these sorts of declarations the same value I give crime-law proposals where someone pushing it declares that it would never be used in that way.
Also, if you don't patent X, somebody else might, and then figure out a way to use it against you.
News sites too often write patent articles in the form "company A plans to do Z" when the only fact available is that company A has applied for a patent on Z. There's an incentive for a company to patent pretty much everything it can, since besides the patenting costs, there's no downside I'm aware of to having extra patents. The costs are probably negligible at Google/Amazon scale when you have good processes.
I probably shouldn't have used the word "plans" though, since I don't know for sure if they will implement it.
Also highly recommend https://www.scaleft.com/ for anyone who wants BeyondCorp-style access to infrastructure.
That's from reading old papers, I don't know if anything changed now.
Could you elaborate on the technical details of user authentication? (If that's not top-super-secret.) I guess it's just like accounts.google.com for Enterprise with mandatory 2FA (username+password+U2F key?). Does it work the same on mobile/Android (U2F via NFC, or codes)?
There's TPM and secure boot - does the (presumably signed, in the trusted boot->OS->user binary/service path) agent access signing services from the TPM, backed by a key in the TPM, and use that to identify itself as an authentic agent?
Otherwise I can't see how an (admin) user couldn't extract the key from RAM and run the OS and agent in a VM.
Um, what? Did client certs get removed from HTTP 2?
14 months ago: "No. But TLS!" https://news.ycombinator.com/item?id=11556762
7 months ago: "421" https://news.ycombinator.com/item?id=13022596
Maybe it would be worth the time to test out the various implementations if someone has not already done so?
Doing some preliminary research, support is still a working draft https://datatracker.ietf.org/doc/draft-bishop-httpbis-http2-... source: https://daniel.haxx.se/blog/2016/08/18/http2-connection-coal...
Servers can ask the client to fall back to HTTP/1.1 instead, and then use client certificates there.
The other articles in the series have PDF links, but not the latest one. I'm assuming it will get one eventually...
The largest notable exceptions seem to be internal file shares, and remote connections to machines that need to be behind a firewall.
I guess the overall point I have is that with the data files for both productivity and source code being stored cloud-side, VPNs become less and less necessary for a large % of workers.
Office 365 / OneDrive and Google Drive are even doing away with the requirement for internal fileshares. We used the former heavily at my previous job and I use the latter in my current role. Both have been pretty good alternatives.
I ask because, I find it relatively comfortable to do coding on a chromebook over a 'mosh' session over LTE.
The protocol also supports session resumption in case your connection to the relays is briefly interrupted, but client support is buggy, so it's been disabled for years (with few complaints).
With plain SSH over HTTP over TLS, performance is satisfactory but not great. 4G is just about usable for vim, but you'd probably be best off using sshfs over http over tls and running vim locally, then compiling and running remotely.
It seems there can still be lateral re-infection via difficult-to-patch shared services (finance/procurement/obscure wikis). The example in one of the papers (delivery people not needing access to financial systems) is completely bogus - sometimes the worst-engineered, most XSS-y, mission-critical apps have to be accessed by everyone, have insanely hand-coded "business logic", and no docs. Content-aware behavioral profiling would seem to have a role in managing that risk.
So your workflow would still be supported, and it would likely be more secure than exposing SSH traffic to the internet at large.
You're not missing anything and you have an extremely efficient and secure workflow that runs laps around any of this.
The tradeoff is you work in a terminal and understand SSH, etc., which is too much to ask of many non-technical users.
If you wanted to obfuscate your traffic or the direct path to your remote host was blocked for some reason, a VPN might get you there, but you'd still run SSH over that VPN and your workflow should remain unaltered (albeit, higher latency).
To address a sibling post's comment, you can enjoy this very same workflow without exposing your sshd to the global Internet by placing it behind a "knock" with knockd. Highly recommended.
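For the curious, knockd's configuration is a short stanza in /etc/knockd.conf; a typical setup along the lines of the man-page example (the port sequence is yours to choose):

```
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    # Open port 22 only to the IP that completed the knock sequence.
    command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn
```

A paired [closeSSH] stanza (or iptables rule expiry) usually removes the hole again; without one, each successful knock leaves port 22 open to that IP indefinitely.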
Some VPN services also advertise additional privacy or anonymity, but trusting a stranger to not sell you out to their local government isn't usually a good idea.
From a business standpoint, you may want web and network services without exposing them to the wider internet. So they're only accessible on IPs in local subnets. VPNs will get you inside the wall.
It doesn't really matter whether you use ssh, RDP or whatever for remote system access but you should be aware of the capabilities of your methods and the strengths and weaknesses of them.
If your username and password are reasonably hard to guess - and ideally you use key-based, passwordless logins - and you keep your system regularly patched, definitely don't allow remote root logins, and cycle your passwords every 90 days or so, then you should be fine. Don't bother changing port 22 to, say, 2222, or requiring 20+ character passwords. You may want to disable some of sshd's functionality if you don't use it, but that might be a step too far.
Also, rotate your sshd's keys occasionally, get them into your local ~/.ssh/known_hosts as soon as possible, and read up on and understand why ssh warns you when the keys and names look odd - that could save you from a MitM attack by a bored techie in a hotel with wifi or whatever.
To sum up: a well-handled sshd and client can be a fairly decent VPN and remote-access solution. However, a separate VPN, e.g. OpenVPN, with ssh over that is better and need not be inconvenient.
You pays your money .... 8)
Maybe some Google employees were still using Google Reader?
I'll stick with a VPN.
Edit more direct: https://cloud.google.com/iap/docs/container-engine-quickstar...
Use any authentication/identity service and publish all internal services as public apps, consolidating access, increasing security, and simplifying maintenance.