Google Moves Its Corporate Applications to the Internet (wsj.com)
283 points by kjhughes on May 13, 2015 | 148 comments

I'm so happy to see this. As Bruce Schneier (who runs an open WiFi network at home) explains, "if my computer isn't secure on a public network, securing my own network isn't going to reduce my risk very much."[1]

The same is true for corporate applications (and devices like printers). If they're not secure on a public network, securing the corporate network won't reduce their risk that much: they're still exposed to potential breaches elsewhere in the corporate network.


[1] https://www.schneier.com/blog/archives/2008/01/my_open_wirel...

There are other, valid reasons not to run a public access point: not wanting neighbors to steal your bandwidth, run a Tor node off it, or host illegal content, for example. Any of these activities could get you dropped by your ISP, and even taken to court. While you could probably prove your innocence in court, I can't imagine why taking that risk for absolutely no personal benefit would be worthwhile. I don't really see how running an open wifi network shows anything other than ignorance of the risks.

This law needs to change. An Internet connection isn't a gun. If somebody uses my Internet connection to do something illegal, that doesn't mean I did anything illegal.

Pretty sure that's not how guns work either

If you leave your gun lying around and someone commits a crime with it, I'm pretty sure that's criminal negligence.

If you leave your gun loaded and your kid shoots someone - yes.

If you leave your gun and someone 'steals' it to kill others - I don't think so.

It's all about the intentions of the third party, which is one of many paradoxes of the legal system.

Right, but if someone steals your registered gun and uses it in a crime that gets traced to your gun, the police will be visiting you, if not arresting and prosecuting you. For this reason, I'd assume it's smart to do what you can to avoid having your gun used by strangers. For the same reason, you could argue it's smart to not let strangers use your internet connection, even if theoretically you can make a case that you're not legally responsible.

What is a registered gun? I have plenty of guns, none of them are registered.

This varies by country. In Australia, if I want a gun I need to have a firearms license, I have to register every weapon I own and store them in a safe that meets certain minimum requirements (separate ammunition storage, requires both key and combination to enter, minimum thickness, rules about how it is secured to the wall/floor, etc.), and at any time the police can show up and ask me to show them I have the weapons correctly secured.

Or I could rent appropriate storage space at a gun club and store the weapons that way.

It's a completely different gun culture to what is seen in the US, and personally I like it a lot more.

You also need a valid reason to have a gun: you can be a member of a gun club and be a recreational shooter, or use it for animal control (my license), but for that you have to have a property to shoot on.

Self-defense is not a valid reason, that should be noted.

The Australian government would rather you get beaten, raped, or murdered in your home than enable you to safely and effectively defend yourself.

I know. There's not a single other implement in the home that could be used for defence. The government must want you beaten, raped, and/or murdered.


What implement would you use that would be safer (for yourself) than a firearm? What would be more effective?

Better yet, what would you do as a weaker individual when confronted with two or more aggressors?

I don't know about you, but I don't pretend to be tough enough to "take em!" Nor would I want my wife to try and fend off those who have already committed felonies against my property.

Baseball bats, batons, mace, knives, etc. are heinous, laughable, or otherwise ineffective tools for self-defense.

Simply put: the firearm is an effective equalizer. It enables a mother to defend her family against multiple aggressors at otherwise completely unfavorable odds.

I value the lives of myself and loved ones far more than someone who has violated the rights to my property. I refuse to cede any advantage that might make the difference in my survival.

It's also easier to accidentally kill people using a gun. And it's a lot easier to kill or injure the wrong person. A gun is effectively a killswitch. It's a thing for making people not be alive anymore.

The question you should be asking yourself is not "Do I need a gun?" (the answer to which seems to be "yes", which should worry you more than that you might not be allowed to have one) but "Why do I need a gun?".

Incidentally, in Germany you can get a gun license for self-defense reasons. But only in very well-defined, very rare and very extreme circumstances.

If the average person needs a gun, society is extremely broken. In some parts of the world -- mostly the so-called third world, but apparently also many places in the US -- society really is that broken.

The long-term answer to a broken society isn't arming the populace. It's fixing the society so it's no longer broken.

Guns turn many non-violent crimes into violent crimes already, just by being widely available. If a criminal has to expect to face a frightened, potentially very lethally armed individual in the course of their crime, they are more likely to plan for that eventuality and bring a gun or lethal weapon when they wouldn't have had to otherwise.

The only thing more worrying than a frightened normal person with a gun is someone who is already in such a mess that they knowingly engage in an activity where they will likely have to face such a person (or some theoretical hard-boiled vigilante, though that's the far less likely encounter in practice).

Instead we need to prevent those crimes in the first place. I know this is a very un-American concept, but every crime is a failure of society. Not just because the powers that be should have interfered in the process of committing that crime but because the criminal shouldn't have been allowed to become a criminal in the first place.

An overwhelming amount of crime is the direct or indirect result of social problems and drug politics. Society being broken isn't normal, even when it seems pathological. Prevent crimes and you save lives, give people guns and your best chance is that you add more "bad people" to the bodycount.

There are exactly two groups of people we want to have the legal permission to use lethal weapons in the course of their duty: soldiers and the police.

Soldiers sacrifice their humanity to protect society from external harm. They're trained to overcome their natural inhibition against killing people in cold blood because that's what they are up against. That's why we should respect veterans, but we shouldn't aspire to become like them. They're essentially trained to act like sociopathic murderers if necessary. It's a failure of mankind that we still need them at all.

The police on the other hand shouldn't kill people. The act of shooting to kill in hostage situations is called "Finaler Rettungsschuss" in German -- literally, a "final rescue shot" or more figuratively a "last resort". During normal duty a police officer shouldn't even unholster their weapon -- which seems obvious if you consider that you should only aim a gun at something you plan to destroy.

In the UK, normal police officers don't even carry guns in the first place -- although the terrorism scare has led to heavily armed policemen (in effect, untrained soldiers) showing up in various places. This should seem obvious: the chance of facing an armed criminal willing to seriously injure or kill police officers is extremely low, and if someone specifically sets out to do so, they'll likely not give enough advance warning for an officer to shoot them first.

But of course this isn't very helpful in the US where you can't even expect the police not to harm or abuse you.

Yes, it's probably legitimate to want a gun (and want to be trained in its use and learn to use it against other humans) in places like some parts of the US. But I think it's solving the wrong problem. You're trying to fix a broken pipe with band-aids. Just think: if your society is so broken you need a tool for murdering humans just so you're reasonably safe in the course of your normal life today, what will it be like tomorrow? And what are you doing to change the course -- at least for yourself and your loved ones?

The Australian people don't fear home invasions as much as citizens of some other countries. It's not a rational fear in the scheme of things. My risk of getting killed because of high gun ownership in the community is much higher than my risk of a home invasion.

My front door doesn't even lock, and I'm happy that way.

Guns in Europe are registered and you are obliged to have a dedicated safe.

That said, it's better not to own a gun.

They all have serial numbers, and you had to show ID when buying them. So yes, they are registered; you just might not be aware of this fact.

At least in the US, private sales are legal, and there is no requirement to file any paperwork whatsoever.

Moreover, there is a staggering number of firearms that predate any sort of government record-keeping. It's not like these are muskets and black-powder revolvers, either - most of the designs for hunting rifles and shotguns are nearly unchanged from models that were available a century ago.

A paper 4473 sitting in the store-room of my local gun dealer (of which there are thousands) is hardly akin to registration.

All firearms I have purchased privately, been gifted, or manufactured also do not fall under this category.

I cut a bunch of them from aluminum, for which I showed no ID, and they do not have serial numbers.

OK, nice. So, my analogy does not apply to all cases, but the point is that many guns are tied to government ID, as are many internet connections. Subverting that is of course possible, and an alternate solution to those problems.

I have seen an open router which automatically sends guest clients through Tor first, so the activity of free guests is disassociated from your home IP address.

You can easily limit the bandwidth guests can use, so it is not noticeable to you anyway.
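As a rough sketch of that bandwidth-limiting idea, a token bucket is the usual accounting scheme. This is plain Python rather than any router's firmware, so the class and the rates are illustrative, not a real router API:

```python
import time

class TokenBucket:
    """Token-bucket limiter: guests get `rate` bytes/sec, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate             # refill rate in bytes per second
        self.capacity = capacity     # maximum burst size in bytes
        self.tokens = capacity       # start with a full bucket
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Consume tokens for `nbytes` if available; return whether the send is allowed."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# A guest capped at 128 KiB/s with a 256 KiB burst allowance:
guest = TokenBucket(rate=128 * 1024, capacity=256 * 1024)
print(guest.allow(200 * 1024))  # fits in the initial burst
print(guest.allow(200 * 1024))  # bucket nearly empty now, so this is refused
```

Real routers do the same accounting in the kernel's traffic shaper; the guest just sees a slower link instead of refused sends.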

Do you think Schneier is ignorant of the risks?

Bruce Schneier is a very smart guy, but let's not deify him; he can be wrong about things too.

For example in the above column he repeats an urban myth: that running an open WiFi access point provides an affirmative defense against prosecution for things like piracy, hacking, or child porn.

I call it an urban myth because, personally, I have yet to find a court case in which such an argument was made, let alone one in which it was a determinant in the verdict. It's endlessly repeated online, with seemingly no evidence that it is true. (If someone reading this is aware of such a case, please reply with it! I would love to know.)

Also be aware that this was written when WEP was the state of the art in encrypting WiFi, and long before tools like Firesheep were widely available. You can't expect Schneier to make security arguments that will be true forever. Today it is so easy to snoop on open WiFi traffic that any given 11-year-old could do it -- and today WPA2 is sufficiently good to stop that.

At the same time, Bruce can be really right about things too, like how to properly secure a laptop. If anyone could run open WiFi and still be secure, it's a security expert. As opposed to my dad, who until recently was running Windows XP SP1 on his computer. He benefits from an encrypted WiFi signal as the first layer of the security onion.

People assert the counterexample, too, but in reality it's hard to find real cases where anything bad happened as a result of running an open wifi network.

There are many, though they all came out of people trying to get away from kiddie porn charges. http://www.huffingtonpost.com/2011/04/24/unsecured-wifi-chil...

The article mentions three cases, which I would not call "many". I have not been able to find any real numbers on this scenario; only a scattered handful of anecdotes, most of them referring to the same original news articles.

I'm not worried. It is extremely unlikely that there are any kiddie-porn fans in my neighborhood to begin with, much less any within range of my wifi signal.

https://torrentfreak.com/judge-an-ip-address-doesnt-identify... ?

Not quite what you're looking for, but effectively the right thing: an IP address does not identify you personally the way a driver's license does.

He said later (~2-3 years ago) that he may change his attitude exactly because of possible liability.

>Certainly this does concern ISPs. Running an open wireless network will often violate your terms of service. But despite the occasional cease-and-desist letter and providers getting pissy at people who exceed some secret bandwidth limit, this isn't a big risk either. The worst that will happen to you is that you'll have to find a new ISP.

^ FTA. Not "ignorant", but more flippant. Not everyone has options when it comes to ISPs.

Or unlimited bandwidth. There's no way I'd run open WiFi in Australia because I only have a finite amount of downloading I can do, and I pay for the amount I expect to use myself so can't afford other people leeching off my connection.

I am fully aware of the risks and I judge them to be negligible in practice. I've been running open wifi at home since 2002.

A private network is just another layer on top.

Google did not just throw away this layer; they replaced it with device authentication. They are essentially using two-factor authentication.

For what it's worth, Bruce Schneier also has a machine air gapped from the internet. Different scenario from a huge corporate LAN or even a small network with a wireless access point to be sure, but there's a number of ways to interpret that quote.

Does Bruce have his open-wifi clients' connections pass through a VPN (in between him and the clients' destination)?

Tor would be good; guests wouldn't appear as your own IP, but instead as a Tor exit node.

We tried this where I worked (with the exception of the evil desktop financial program)... and had to retract after a zero-day defacement in one of our web apps. In the meantime we also learned that keeping all of your web apps 100% up to date at all times is really freaking difficult. The good news is that the (failed) attempt got us off of a few client-side applications and made us much more platform-agnostic than we were before.

If you have the resources of Google it's a bit different, especially if all of the software is custom and developed internally.

You can't succeed with this model by just setting your firewall to allow everything. This approach still requires defense in depth and a holistic view of security. If someone was able to deface your web app, then your company wasn't actually using all the components that are required to make this model work (such as authenticated devices, device patch management, and user 2-factor authentication).

Device patch management only works when there are patches available. Ever heard of a 0day?

What are "authenticated devices"? The closest I can think of are client certificates being installed on the devices and used as a first-level of authentication. It could be anything from TLS client certificates to VPN certificates.

Yep, client certs installed on a device with verified boot and an account authenticated via 2FA would be a good start.

If you do it right (store the cert in a TPM) the device itself actually is a second factor so you don't need anything other than the device.

Wouldn't that require a browser plugin to log in with?

You can have an SSO server that requires a TLS client certificate signed by your own internal CA, or you could put it behind a VPN authenticated with the certificate. Either way, with no custom software, you get device and user authentication.
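A minimal sketch of that first option, using Python's standard `ssl` module. The file names passed in are placeholders for your own server keypair and internal CA bundle, not real files:

```python
import ssl

def make_mtls_server_context(server_cert, server_key, internal_ca):
    """Server-side TLS context that rejects any client lacking a certificate
    signed by the internal CA (i.e., any unenrolled device)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    ctx.load_verify_locations(cafile=internal_ca)  # trust only our own CA
    ctx.verify_mode = ssl.CERT_REQUIRED            # no client cert -> handshake fails
    return ctx

# e.g. wrap the SSO server's listening socket with
#   make_mtls_server_context("sso.crt", "sso.key", "internal-ca.pem")
```

With `CERT_REQUIRED`, the handshake itself is the device check: an unenrolled machine never even reaches the login page.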

Was this a custom app? My company puts most of our stuff on the internet, but most of it is stuff we've bought from Atlassian and the like.

The only way to succeed with this is with heavy firewalling or VPNs. There are several unknown zero days in any application, so just opening up your application makes it possible for blackhats to get in. The only question is how much your information is worth to somebody. If it is worth less than the price of a brand-new zero day you might be OK, but there are still the script kiddies and political blackhat organizations who mass-deface any site that has a "zero day" vuln (zero day meaning, in this case, unknown to the operators of the site).

There are different layers of firewall, and VPN isn't really the issue here.

You can still have location-aware servers that can talk to each other directly. This should be done over an encrypted channel as much as possible.

As to firewalls, on each server only allowing access to the ports applications run on is probably a good start. Better still would be publicly facing machines that act as reverse proxies to backing servers that run said applications.

As intimated, only allowing approved machines (likely with client certificates and pinning to MAC addresses, and probably only a limited number of accounts beyond that) can tighten things further.

Putting your exposed (internal-use) applications on the internet doesn't mean unlocking all the doors. There are ways to mitigate and reduce the effects of a 0-day vulnerability in practice. The fact is that making it all available anywhere forces you to think about the risks in a way that is actually better in practice than believing that, because you are behind a hard shell, nobody can get to the soft gooey center.

A hardened system involves more than firewalls and VPN access. A properly hardened system should be able to run over the internet. TLS channels with certificate/MAC pinning alone can go a long way in terms of communications, and offer far more protection than a typical firewall/VPN setup. This goes for everything from SSH to your internal services. For that matter, not exposing anything beyond SSH and requiring tunnels for all communications may be simpler still.

Mix in LDAP for access, with accounts, machines, and certificates all tied together, and you have a pretty good base recipe for a hardened system. That said, this isn't the only approach, just me rambling on about the ideas. There is overhead in terms of development, operations, and management to set up such a system. Not everyone can implement such a system, given what they may be starting from. A smaller company would have an easier time in many cases than a larger one. It may require the use of a Windows terminal server behind a secured channel in order to keep some critical applications (likely finance-related). Other applications may be excessively costly to migrate, and others still may not have the necessary protections.

Given that most internal applications are web based these days it is slightly easier than at any other time in computing history.
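The certificate-pinning part of the recipe fits in a few lines of Python. The DER bytes below are a stand-in to show the check; a real client would get them from `sock.getpeercert(binary_form=True)`:

```python
import hashlib

def cert_matches_pin(peer_cert_der, pinned_sha256_hex):
    """Compare the SHA-256 fingerprint of a DER-encoded peer certificate
    against a fingerprint pinned at deploy time."""
    fingerprint = hashlib.sha256(peer_cert_der).hexdigest()
    return fingerprint == pinned_sha256_hex.replace(":", "").lower()

fake_der = b"stand-in for DER certificate bytes"
pin = hashlib.sha256(fake_der).hexdigest()
print(cert_matches_pin(fake_der, pin))     # True: fingerprint matches the pin
print(cert_matches_pin(b"tampered", pin))  # False: any substitution is caught
```

The point of pinning over plain CA validation is that even a certificate signed by a trusted CA is rejected unless it is the exact certificate you deployed against.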

> There are several unknown zero days in any application

Does this include the firewall and VPN?

No, I am more specifically talking about the hosted application. There can be security bugs in the firewall or VPN as well; I was focusing purely on the difference between hosting a vulnerable application on a local IP vs. a public one.

> There are several unknown zero days in any application

I think you want all your applications to authenticate the device and the user before proceeding to anything. This looks indeed impossible with third party closed source apps (if only because you can never be sure there is no backdoor).

Then, even if you authenticate every remote peer using TLS client certificates, you have to closely follow the vulnerabilities of your TLS implementation... But that should not be less manageable than making sure your firewalls are reliable.

> This looks indeed impossible with third party closed source apps (if only because you can never be sure there is no backdoor).

Seems easy enough: don't allow the app to bind to a port on any interface besides loopback, then put an authenticating reverse proxy in front of it that can actually receive remote connections.

If it's an HTTP service, you can use nginx with client SSL certificates. For other protocols, spiped[1] might be a good choice.

[1] https://www.tarsnap.com/spiped.html
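The loopback-only half of that setup is a one-liner in most languages; a Python sketch of the idea (port 0 is just a convenience to let the OS pick a free port):

```python
import socket

# The internal app listens only on the loopback interface; the authenticating
# reverse proxy is then the sole process bound to a public address.
app_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
app_sock.bind(("127.0.0.1", 0))  # loopback only; port 0 -> OS picks a free port
app_sock.listen()
host, port = app_sock.getsockname()
print(host)  # 127.0.0.1 -- unreachable from other machines
app_sock.close()
```

Since nothing but the proxy can reach 127.0.0.1 on that box, the closed-source app's own (possibly weak) authentication never faces the network directly.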

I am not sure how open source vs. closed source plays any role in this. You can support a feature with a closed-source app as much as you can with an open-source one.

TLS implementations have serious problems, as the last few years proved. We need a more fundamental change in security protocols and implementations, using reliable crypto (for example, elliptic curve cryptography) and implementing them in safer languages (like Rust).

Finally, the zero trust network has its day. I've been following this for quite a while, especially since this kind of architecture makes even more sense for smaller businesses than large ones that can pay for sophisticated network-edge protection.

Edit: Great talk at LISA in 2013: https://www.usenix.org/conference/lisa13/enterprise-architec...

Yes, I submitted this yesterday (instead of the article submitted here, because I found it to be deeper and did not want to submit 2 links at the same time), but sadly it failed to gain traction.

Videos don't usually gain traction on HN.

OTOH, Google is way too big to be able to trust all of their users. And this just happens to be excellent advertising for them.

On what planet was that talk "great?"

It's worth reading the original "BeyondCorp" paper that discusses this:


As some other folks have pointed out (and contrary to what the headline implies), this isn't just setting your firewall to allow everything. In particular, pay attention to the Device Identity (client cert) and the Access Proxy parts.

What's left implied but unstated in this post is that a corporate intranet is often in practice as vulnerable as the internet -- from unpatched Windows to old Android versions to people plugging in random USB dongles to a million variations on XSS/XSRF. Once you've made your corporate network secure against these attackers, it's also secure against the wider internet.

I felt like that was stated clearly in the second paragraph.

> "The new model — called the BeyondCorp initiative — assumes that the internal network is as dangerous as the Internet."

You are right. I meant that they left out why the internal network is dangerous. They open by assuming it is dangerous and then discuss how to implement the model.

As a remote worker, it's delightful to see things move in this direction, as VPNs are a regular thorn in my side.

Of course, there's a certain irony that Google isn't fond of remote workers. :)

Few workplaces are fond of remote workers. The major reason a lot of people remain employed is so they have a purpose to wake up, leave their houses, and spend the day occupied by the relative comfort of an office building, surrounded by reasonably-intelligent coworkers, as a faux-family. And it's a slap in their face that you don't want to spend your time basking in their physical proximity.

>it's a slap in their face that you don't want to spend your time basking in their physical proximity

That's slightly self-centered. Nobody cares about you or their co-workers so much that they are offended if you don't want to work (or bask) in the team's presence.

People are tribal by nature, and one of the ways humans feel a sense of belonging is through overcoming common conflict (i.e. Tuckman's stages). While not impossible, it's much harder to have common conflicts if you're just not there.

> People are tribal by nature

People were tribal before communication tech.

That said, a company's success will depend on how frequently information can be mixed up. Some roles do not need as much communication as others, especially in mature businesses and niche positions.

Don't be absurd. Humans conform to communities now more than ever.

Those communities are formed around ideas, not limited to geography as previously.

So humans are still tribal, just no longer only in geography-based situations?


Communication and management overhead of remote workers is harder to do right. Like functional programming.

This is absolutely true also, but it is generally easier to address than the need for physical interaction often felt by people in management. (I'm not trying to be derogatory here, as good physical interaction is often closely tied to a manager's success, so it's natural that they would be wary of removing that aspect of their relationships.)

It's not just management, the relationship between coworkers can be harder to develop when you're not (for example) having lunch together.

With modern software, it's only hard to do right if they're not at their laptop/workstation, which means they're doing some other shit instead of working.

Our entire team is remote. It's really obvious when someone isn't at their computer. It's basically the only hard requirement of the job beyond the work product: be at your computer when you say you are going to be at your computer.

The thing is, the skill level required to properly manage remote employees, especially in a mixed remote/non-remote office, is higher. When you are a large corporation and have to start looking at employees in a statistical manner, you might choose 'no remote' because it's cheaper / more feasible to find managers who can do 'no remote' vs. 'mixed remote'. There is almost no large company that is remote-only right now.

It's the same reason why almost every company programs in Blub vs. Lisp, Haskell, Erlang, etc. Employees who can do those things can be hard to find. Companies already feel supply-constrained with engineers; now imagine you add the Haskell requirement and need to hire 1000 of them.

Startups can sometimes leverage this by not having a statistical management requirement and starting with a foundational advantage. There is also the case of Haskell killing your startup because of tooling problems, and of big companies being able to make small, special R&D-type teams that can use Haskell, so it's definitely a balance.

Really? Sometimes the crucial part of the job is done away from the computer.

I remember doing a quick fix for one of the attractions at the Millennium Dome - the core part of the work was working out on paper all the permutations and what should happen; the coding at a terminal was the trivial part.

BTW, the program ran correctly the first time and was delivered in less than a day.

Management isn't just about who's working or not. There's a lot of what makes a successful team that requires careful consideration to replicate when everyone is remote. Yes, modern technology makes much of this easier, but it still requires effort to identify and address, whereas it is somewhat automatic at physical offices.

For example, "water cooler" conversations. In a physical office these happen because people are conditioned to make small talk. That is not the case virtually, as it's easy to spend an entire day in a chat room with a co-worker and never interact with them.

You can wake up, leave your house, and go to a comfortable office building surrounded by people working in the same industry by going to a coworking space.

As a remote worker I actually want to have this routine, meet people, this is why I go to a coworking space.

The benefit of remote work is that you can have much more choice in the company you work for, without having to move to SF/NY/London/Paris/<some other crowded and overpriced area>.

> The benefit of remote work is that you can have much more choice in the company you work for

That would be true if most companies allowed remote work. In my experience, only a few do. Even fewer have a "remote first" culture, where you don't miss important discussions if you're not in the office.

Interesting way of putting that perspective. I have been trying to put my finger on the whole remote-working issue for a while. What you expressed is one aspect of the issue, but I also think it is probably just an intermediary symptom, a phantom / false cause if you will. Personally, I think we would see a lot more remote working, and even a social revolution, if at the very least the incentives for physical office space were removed from tax and accounting policy. Remote working is really a difficult nut to crack, with a significant cleft between its potential and present reality.

I guarantee that the company that can scale up effective remote working that has severed the tether to co-located organizational structure will be wildly successful. Unfortunately, I think it will also have major social repercussions similar to the impact that autonomous transportation has the potential for causing.

For me, it's a combination of things... I'm far more distracted at home, and it's much easier to communicate face-to-face when you can have, for example, a whiteboard to draw out discussions on.

Aside from that, Hangouts, Lync, and the like do make communicating remotely very nice. I've wanted for years the ability to actually share a project session with someone via Visual Studio (or something similar), so that you can work in the same visual space... if you're on the same file, you can see each other's edits/changes in real time; if you're in other files, you can just flow...

I don't think it all requires that people be in physical proximity, but there are advantages to being a closer team. I often find that 3-4 developers in the same room can dramatically outperform teams that are disconnected from each other. But then again, I think that if your project can't be distilled into units of work/services worked on by 3-4 developers you're doing something wrong.

Could you explain what you mean about the incentives from tax and accounting policy? Are you saying that tax benefits for having office space are richer than the actual expense of renting that office space?

Essentially, there seem to be significant tax, accounting, and general financial incentives associated with office space and the expenditures that come with it. For example, during the recession, special corporate tax rules were put in place that rewarded and essentially subsidized capital expenditures and allowed for accelerated write-downs.

Those are not necessarily solely financial incentives, but they function to compound and combine with already established systems of corporate organization, personnel structures, and general status quo operations and processes. In that scenario, the co-located work paradigm was not only supported and strengthened where remote working was not, but it also set an expectation that thar be government breast milk in tumultuous times.

I am no corporate tax, accounting, or financial expert; but what I can say is that it seems to be the forest that can't be seen for all the trees.

In the USA, you can deduct your business expenses (including building rental) from your profits for tax purposes. This creates a huge incentive to rent, as it's basically free - the company is essentially paying part of their tax payment to their landlord.

It's not even close to free. It's X% cheaper where X is the tax rate the company is paying.


$10,000 in gross profit - 20% in taxes = $8,000 net profit

$10,000 in gross profit - $1,000 rent = $9,000 - 20% taxes = $7,200 net profit

The net rent was $800 instead of $1,000 but not free.

I think you are mostly right here, sadly enough.

Google doesn't like workers who always work remotely, but periodically working from home (or the bus, or hotel, or coffeeshop, or the lobby at the auto mechanic, etc.) for a day is common for all employees.

Fair enough, but that still limits potential employees to a certain physical proximity to Google offices. While there are a number of those, and I aim to bring one here eventually, it is still a limitation.

But why would anyone not be willing to leave their family, friends, and life behind and move to one of the most expensive cities in the world in order to work at Google? Isn't work important to you?

According to his github profile, munificent lives in Seattle, so...

That is the same policy that my $BIG_INTERNET_CORP_WITH_FEMALE_CEO copped a lot of flak for because of the news being mis-reported by the media.

It isn't just corporations that are not fond of remote work. As a lay-employee, I am not fond of remote work either. I would much rather have all my teammates in the same room.

Tribal? Yes, unapologetically so.


Because:

- lots of information is ambient at the team place

- not all people are that concentrated at home

- working alone is difficult sometimes (especially when you're under stress)

- you're harder to contact (yes, some people are afraid to call a colleague at home)

- people don't like to have 4 simultaneous chat sessions with colleagues

So a day of remote work certainly helps, but I'd venture to say that's a maximum for many. Of course there are people who are remote 100% of the time, but not everybody can do that. You have to love your work for that, and many people don't /love/ their work...

Why are VPNs a problem for you?

It's hard to fully trust a person that is remote. Are they keeping your data safe?

Another solution is the idea behind Amazon's WorkSpaces for remote workers; this allows the company to have more control over the computer the employee is using.

My work has required daily VPN use over most of the past 7 years, and I have yet to find one that is reliable. I'm certain this has more to do with properly setting them up than with an inherent flaw in the concept, but it still means that when I can avoid them, I do.

VPNs break down, are filtered out or throttled by ISPs, etc. The office internet is usually significantly more reliable.

I wonder how far this really extends into their network and how ipv6 is related. In principle it sounds really good to me. I realize this is mostly about access to corporate applications, but how much further could this approach go?

Thinking out loud: if I suddenly removed the firewall perimeter security from my network, moved security to devices/servers directly, dropped my NAT, and switched to ipv6 with all publicly routable addresses, my network infrastructure simplifies incredibly. However, I still have to protect my network to ensure quality of service/availability and protect my devices/equipment from "public attacks". I guess the principle here is that the surface area that can be attacked is the same if you can penetrate the layered security approach - it all ends at the devices and equipment.

The fact that all devices/equipment can now have a publicly routable/addressable IP in ipv6 solves the problem of running out of address space, and would fit hand in glove with such an architecture.

Put another way, the network becomes just the network, without the need to discern between the intranet/LAN, the extranet/WAN (or DMZ) and the Internet/WAN.
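As a concrete check on what "publicly routable" means in v6, Python's stdlib ipaddress module can classify addresses (the example addresses here are illustrative):

```python
import ipaddress

# A global unicast address: routable on the public internet.
assert ipaddress.ip_address("2600::1").is_global

# ULA (fc00::/7) and link-local (fe80::/10) are the v6 analogues of private
# space -- usable inside the intranet/LAN but never publicly routable.
assert not ipaddress.ip_address("fd00::1").is_global
assert not ipaddress.ip_address("fe80::1").is_global
```

So even in an all-v6, no-NAT design, you still choose per-host whether it lives in globally routable space or stays on ULA-only addressing.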

>can now have an publicly routable/addressable IP in ipv6

Almost no one can actually route "publicly routable" IPv6. When it becomes a standard feature of DSL/cable, maybe.

Your information is out of date. Every T-Mobile subscriber with a modern Android device on LTE has a fully-working, native ipv6 address. And somewhere between 30% and 50% of Comcast subscribers already have native dual-stack ipv6.

Somewhere between 50 and 80% of Comcast subscribers, then, do not have IPv6. In a couple years, yes this should be viable, but it isn't now.

Time Warner Cable supports IPv6 as well.

This seems like the sort of thing that can work for Google because Google runs on Google software, which runs on Google hardware. They control the full stack from top to bottom, so they can decide where to put the doors and where to put the monitoring.

Most companies run on stuff that is not their own. Microsoft Exchange running on Windows running on VMWare running in some 3rd party datacenter is a fairly modern way to host an email server. In that situation, everything is out of your hands BUT the network edge. You don't audit Microsoft's code bases, you don't specify how Webmail works, you don't control the discovery, disclosure, or patching of critical vulnerabilities.

Sure maybe the firewall/IDS/VPN only keeps amateur griefers out, but there are way more of those than APTs.

And folks will only have limited insight into the internals of all this 3rd-party software. But if you have a gated network, then you can use a tool like NetWitness to characterize and alert on your traffic--and just your traffic.

"The Cloud" that they're talking about is their own datacenters (they're certainly not using EC2) and they're hosting their web apps over a WAN without VPNs or other traditional forms of closing off access.

However, this doesn't say much about their datacenters which will still be heavily firewalled. IPMI, SSH, and other access wouldn't be shared over a wide open WAN. The "Cloud" (see: datacenter) LAN will still be protected traditionally.

This article doesn't have enough information in my opinion.

I mean, of course it's their own datacenters. They're not going to be putting their corporate data in a competitor's data center. That said, they could be leveraging new things in their Google App Engine cloud, which would actually make it "the cloud" as people refer to it.

I'm glad you mentioned that. A full scale switch like this, if successful, would be great marketing ammunition and would simplify the administration of their datacenters.

No separation of their cloud hosting and internal services would greatly simplify administration at the cost of having to beef up security mentioned in the article.

The funny thing is, they expect everyone else to put their corporate data in their datacenters! Google has an extremely one-sided view of where data should go. (In all cases, directly to their servers.)

Do they, though? They expect everyone else to put their product in their datacenters. I'm sure they'd like it if others also put their company data in (more revenue, after all), but I wouldn't say that anything they have offered is really pushing for the corporate data in the cloud aspect. I'd say Amazon is more aggressive on that side with things like WorkSpaces and WorkDocs.

Google Apps for Business? Android for Work?

This kills BYOD, right, at least for now? "Employees can only access corporate applications with a device that is procured and actively managed by the company"

Android Work Profiles has this covered, if you would want to use your own android phone.

It also means you allow your device to be managed remotely by the company (i.e. purged if lost/stolen).

It seems to me that any company whose business involves providing secure web apps to external users (who aren't using devices specially trusted by the company providing the service) ought to be able to provide its corporate applications on the same basis.

One of the biggest reasons for BYOD is so you don't need to deal with the crappy supplied hardware of your employer and you can bring your own. I'm pretty sure that Google employees get their pick of hardware so this would remove at least one big reason for BYOD.

This brings up a few questions:

1) Does Google not use the same publicly hosted version of Google Apps that we all use?

2) Does this only work with privately hosted versions of applications?

3) Are they using the publicly hosted version of Google Auth for the authentication piece?

4) Is the Device Inventory Database hosted on a public machine, or is it deployed to a private network?

5) Digging into the white paper that provides a bit more information on how they're actually doing this: does anyone care to take a crack at explaining what this means? "BeyondCorp defines and deploys an unprivileged network that very closely resembles an external network, although within a private address space. The unprivileged network only connects to the Internet, limited infrastructure services (e.g., DNS, DHCP, and NTP), and configuration management systems such as Puppet." (full white paper published by Google available here: http://static.googleusercontent.com/media/research.google.co...)

It's nice to see companies moving towards taking security more seriously, but boy oh boy some of it is a real pain in the ass. Every website you log into now needs to text you, or have a companion app or whatever. Every time you lock your screen to get a cup of water or take a leak, you have to log back in, wait for your flaky vpn connection to come back before you can resume what you were doing (maybe not an issue here if they actually do away with the VPN). "Credentials" often comes down to typing very long cryptic passphrases, on a glass screen, with dots instead of being able to see what you're typing. Et cetera.

Are you on OS X and have you unchecked the option "Disconnect when switching user accounts"?

This probably falls in the general category of a Good Thing™ for employees and people developing B2B applications, since internal systems are more easily accessible. But this will be a gut check/squeaky bum time for traditional on-premises B2B vendors like PeopleSoft/SAP/IBM and the like. The corporate firewall has always been a bastion of security they have been able to hide their applications behind. As the concept of a corporate firewall begins to fade, their security risk increases and previously non-worrisome attack vectors become serious problems for them.

Insecure enterprise software and poor security practices mean the death of firewalls and VPNs will take a long, long time. People hide behind these things for a reason.

Google is right here though, this makes things easier for employees and probably saves them money (no vpn); unfortunately most orgs don't have the staff/expertise to pull something like this off.

More importantly though, I think google builds all their enterprise web apps in-house (speculating). Most orgs who do have intranet apps use 3rd party off the shelf software so pulling off Google's BeyondCorp architecture is less likely as they can't control or easily modify how they work. Ergo, VPNs are here to stay.

Even for those orgs who write their own internal applications: do you really want to expose your internal analytics dashboard to the internet?! GASP.

Apologies for being off topic..

> squeaky bum time

Never heard this expression before. Quick search defines it as "An exciting part of a sporting event, particularly the final minutes of a close game or season". Unfortunately, I still don't really get the reference. Could someone spell this out for me?

Make people nervous.


Login systems are only one small part of the surface area of most applications. Admin backdoors, an absence of SSL, SQL and other injection attacks, protocol vulnerabilities, unpatched code, homegrown/weak crypto, session hijacks, and XSS attacks are going to need fixing too. Applying patches in a timely manner will be essential as well.

What has changed? I worked as a contractor at Google in 2013. Everyone had a securely locked down company laptop. All logins anywhere required a dual auth device.

I was really sick one day and I had no problems doing my work from home. Also, one of the great joys of working at Google is the availability of code labs that are individualized instruction to learn different aspects of their infrastructure and technology in general. I spent a ton of time when at home working through code labs that were relevant to my job. No problems with remote access.

Nothing new here... I work for Microsoft; we have had most of our tools in the cloud for quite a while. One thing's for sure - every new app is cloud based. We use Azure AD and multi-factor auth to allow access from internal and external networks. It's pretty common with the small/medium companies I work with, maybe less so with large enterprises.

I think I'm missing something important here. I understand it as far as "internal networks give people a false sense of security," but it's still worth something, isn't it? Why not implement all of these security features AND keep your internal network locked up? Is it really just convenience?

The biggest problem with VPNs these days is that they connect a user to the network, not just to the applications they need to access. Any malware on the users device can ride the VPN into the network and start having fun. Sure you can micro-segment the network to limit the damage but at least the Google approach puts all traffic through an "Internet-Facing Access Proxy" limiting exposure to the individual applications in question.
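To make the contrast concrete, here's a toy model (all app names, device IDs, and grants are made up) of the per-request decision an access proxy makes. Unlike a VPN, which grants network reachability after one check, every request is evaluated per-user, per-device, per-application:

```python
# Toy "internet-facing access proxy" decision logic: traffic reaches an
# application only through the proxy, and only after user, device, and
# per-app authorization checks. Nothing here grants network-level access.

ALLOWED_APPS = {"wiki", "dashboards"}            # apps published via the proxy
DEVICE_INVENTORY = {"laptop-123", "laptop-456"}  # actively managed devices
USER_GRANTS = {"alice": {"wiki"}}                # per-app authorization

def route(user: str, device_id: str, app: str) -> str:
    if app not in ALLOWED_APPS:
        return "404"                    # unknown apps are simply invisible
    if device_id not in DEVICE_INVENTORY:
        return "403 unmanaged device"   # device must be in the inventory DB
    if app not in USER_GRANTS.get(user, set()):
        return "403 not authorized"     # user lacks a grant for this app
    return f"forward to {app} backend"  # the only path to the application
```

Malware riding a compromised device still only gets at the specific apps that user/device pair is authorized for, not at the whole network.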

However, I completely agree with the previous post that user devices need to be considered untrusted. This is a huge problem with the Google approach. Certificate distribution and management on thousands of employee-owned devices is neither practical nor scalable.



This is just client posture checking; it's been around since long before Microsoft's NAP.

It's nothing like that. This is all about server side hardening and strict client side 2 factor auth practices.

I wonder which "cloud hosting provider" they will choose.

Microsoft? Amazon?

Does it make a difference?

If my company starts selling cloud hosting and then I announce my company will be hosting its internal applications in "the cloud" (i.e., in my own data centers), what are the security implications for my company?

Are they the same if some other company asks me to host their applications in my data centers?

Is this article a PR piece (or "submarine" as PG calls it)?

What do you think?

> I wonder which "cloud hosting provider" they will choose.

Meant to be sarcastic? Google is in this market. Doesn't strike me that there is any chance they'd use MSFT or Amazon for infrastructure.

In a way, it seems similar to what Amazon did when they launched AWS.

They used Salesforce before for something (probably Adwords). Could be expanding.

The access proxy is somewhat similar to what Cisco is doing with TrustSec. http://www.cisco.com/c/en/us/solutions/enterprise-networks/t...

I'm not a security expert by any means, but this seems like an intelligent move to me. And it sounds like this process of accessing corporate applications has the potential to be less onerous for employees.

One interesting question is: how many companies have enough public IP addresses to publish their applications to the internet? If you assume there are multiple services, each hosted on its own server (or server farm), then you'd need many more public IP addresses. In a world severely lacking free IPv4 space, how feasible is this? Or is this just a matter of pushing things from the LAN onto IPv6 addresses? And how many locally developed enterprise apps have good enough security to be put on a public-facing site, and good IPv6 support?

I applaud the idea and the effect of forcing security to be dealt with. But I don't know how feasible it is for corporations without their own B or C blocks.

Any sane design is going to have exactly one ingress/egress for end-user traffic per application. Even an application with a few hundred servers is only going to be available to end-users at 1 IP address (maybe several for load balancing/redundancy).

It wouldn't be hard to add a global reverse-proxy/load-balancer HA cluster at the actual network edge forwarding traffic to individual applications' own load balancers.

If it's http, it doesn't matter much. You can put a thousand different sites on one (or probably a few) IP and have a load balancer distribute the requests to the appropriate set of backend servers.

I'd like to agree, but I think you overestimate the sanity of many Enterprise IT designs...

It certainly would be helpful if there was a single point to handle the device-level authorizations, but again, many existing systems aren't necessarily designed to play well with others.

This shows the strength of Google migrating all their employees to YubiKeys.

What if I plug a pendrive into one of these devices and install an app that logs all network activity? If I have the same sequence of packets required to authenticate, I'm in?

How is this any different than doing the same on a laptop that is accessing a VPN?

> If I have the same sequence of packets required to authenticate, I'm in?

The sequence of packets needed to authenticate is different every time.

The return of capability based security? Couldn't help but notice the phrase "fine grained access", I guess your device acts like a sort of token?

I hope this means accessing internal resources will be a lot less painful too

if it's crucial for the company, it should not be on a public internet.

so THAT'S why they bought up the .dev TLD.


TL;DR: Google no longer uses VPNs for corp access

Really? So are they using 3rd-party hosted HR systems etc. on the internet, or something on their own servers?

When you are talking about a company like Google that basically owns a large part of the internet (backbones, CDNs, hosting services), "moving stuff to the internet" means a lot less than it would for a non-IT company like a bank.
