Hacker News
Corporate networks weren't designed for working from home (arstechnica.com)
118 points by feross on Oct 13, 2020 | 103 comments



Why are we presenting bigger, badder VPNs as the solution?

With core services to employees (e.g., email, calendar, office tools, filesharing, chat, collaboration, etc.) either serviced as a SaaS solution (e.g., GSuite, Office365, Slack, etc.) or hosted as VMs in a cloud provider (Exchange Server but on an Azure VM), the security moves to the application and the whole "walled castle" corporate network goes away. Then so do your VPN bandwidth problems.


Probably because bigger, badder VPNs potentially let you sidestep tricky political and organizational problems. Bigger badder VPNs might not be the lowest-cost or most efficient solution, but they're a relatively easy sell - you're just spending more money on the same type of resource as before because you're using more of it. Anyone can understand that situation; no one is going to push back.

Switching to SaaS or cloud-hosted providers, especially if it means changing your security model, means you have to do all of those relatively hard, higher-risk things.


> Why are we presenting bigger, badder VPNs as the solution?

I'm guessing because this is basically a submarine piece touting bigger, badder VPN providers.

Though when I see the problem mentioned, I think of BeyondCorp[0] (the basic model, not the particular technical solutions) more than VPN as the solution approach.

[0] https://cloud.google.com/beyondcorp


Agreed. Split-Tunneling is also a simple solution where you end up using home internet bandwidth for things like MS Teams/Zoom/etc. and the rest goes over the VPN. For generic large corporations or governments, this is a good enough solution. Email and documents are not bandwidth hogs.

That should be the first step, and after that, you start getting into detailed analysis as each business has its specific needs.
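
The split-tunnel routing decision itself is simple; a minimal sketch in Python (the corporate prefixes are hypothetical placeholders):

```python
import ipaddress

# Hypothetical corporate prefixes that should be routed over the VPN;
# everything else goes out the home connection directly.
CORP_PREFIXES = [ipaddress.ip_network(p) for p in ("10.0.0.0/8", "172.16.0.0/12")]

def route_via_vpn(dest: str) -> bool:
    """Return True if traffic to dest should go through the tunnel."""
    addr = ipaddress.ip_address(dest)
    return any(addr in net for net in CORP_PREFIXES)

print(route_via_vpn("10.1.2.3"))    # True: internal file server -> tunnel
print(route_via_vpn("52.114.0.5"))  # False: public SaaS endpoint -> direct
```

In practice the same decision is expressed as routes pushed to the client, but the logic is just this prefix match.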


Split tunneling is only simple if you can control the IPs everything is hosted at. If your SaaS uses generic EC2/Azure IPs, do you want to split tunnel ALL traffic to those providers? Probably not.


I thought all access to EC2/Azure IPs is over HTTPS or SSH, so you don't need to tunnel to it. We use split tunnel at my work: all of my work's IPs go through the tunnel; everything else is an individual HTTPS or SSH connection. If you were within your corporate wall, you'd still need HTTPS to get to Azure from inside, right?


Aside from accessing intranet resources: Is the corporate VPN for encrypting Internet traffic, or is it for proxying and monitoring employee traffic?

Without answering that question, you can't discuss what makes sense.


our corp VPN is for accessing internal sensitive documents. I'm sure there is some level of monitoring, but it isn't the first priority. I work for a large US Corp so we have lots of design, specs, reviews and other paperwork to access.


but all the VPNed resources would be under private IPs anyways? In that case you only need your VPN to route traffic to your company's subnet.


Most companies want all general internet traffic to go through the VPN so they can filter/record/do whatever to it. The sites that are split tunneled are usually trusted internal (in that they're owned by the company, or used for a known service like email) or known high bw but safe (like youtube)


Not always, thankfully... Newer versions of Cisco AnyConnect (for example) can do name-based split tunneling. The client resolves the address, then dynamically adds a route for the split tunnel as it's needed.

It works excellently for cloud-hosted stuff.
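
A rough sketch of what such a client does under the hood (the name and the `tun0` interface are hypothetical; a real client hooks the resolver rather than polling, and refreshes routes as DNS answers change):

```python
import socket

def routes_for(name: str, resolver=socket.getaddrinfo) -> list[str]:
    """Resolve a split-tunneled name and build the host routes to install."""
    infos = resolver(name, 443, proto=socket.IPPROTO_TCP)
    addrs = sorted({info[4][0] for info in infos})
    # tun0 is a hypothetical tunnel interface name.
    return [f"ip route add {addr}/32 dev tun0" for addr in addrs]
```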


Ah, that's quite interesting. Then how do they work around the client controlling the DNS server? I suppose part of the session setup could be sending AnyConnect a known trusted DNS server address.


I believe the DNS gets assigned to the client during the VPN session setup. And this is watched, and if something tries to change it, it shuts down the tunnel.
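
The watchdog logic described is essentially this (a hypothetical sketch; a real client would read the live resolver configuration rather than take it as an argument):

```python
# Hypothetical resolver address pushed by the concentrator at session setup.
ASSIGNED_DNS = {"10.0.0.53"}

def tunnel_should_stay_up(current_dns: set) -> bool:
    """Shut the tunnel down if the client's resolvers no longer match
    what the VPN assigned at connect time."""
    return current_dns == ASSIGNED_DNS

print(tunnel_should_stay_up({"10.0.0.53"}))  # True: DNS untouched
print(tunnel_should_stay_up({"8.8.8.8"}))    # False: something changed it
```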


Split-tunnel has solved all of our bandwidth and performance issues, and it's so easy.


We recently moved _away_ from split tunneling, and it's still giving me headaches. Not just performance, but things like Docker containers not always working right.


We run our own hardware (due to security concerns, our work is in mobile infrastructure) and have both a split and full VPN; I'm usually on the split one. Our code, task management and dev/test VMs go through the VPN, the rest over the open internets.

The security argument we have is a bit hit and miss though, since our e-mail is at a random hosting provider (doesn't even have calendar support) and most of our other comms go through services like Skype, Zoom, etc.


For a large company, there are all kinds of data security and compliance rules requiring that sensitive data can only be accessed from a secure network. Your lawyers are unlikely to sign off on PCI, HIPAA, GDPR, etc. with only application level security.


PCI seems to be the poster child of checklist-based security, second only to FIPS in terms of worthlessness. I guess it is somewhat useful for convincing people to outsource the checkout path so they can avoid PCI compliance nonsense.


That's because it's designed to be impossible to implement. Note that:

> "no compromised entity has yet been found to be in compliance with PCI DSS at the time of a breach."

https://www.computerworld.com/article/2531828/post-breach-cr...


What about entities that haven't been breached?


They are irrelevant. PCI compliance protects people by prohibiting unsafe storage of sensitive information in case of a breach.

As a result, fewer companies store the data unsafely, but speaking strictly in terms of "entities that haven't been breached" then PCI compliance doesn't really matter one way or the other.


The easiest way to come up with rules where no compromised entity is found in compliance is to make the rules impossible (or wildly impractical) for anyone to comply with.

If it were a matter of "we 100% know lots of companies that haven't had breaches fully comply with the rules", it would be a relevant data point. If we don't know that, we can just assume that the rules—much like our legal code—are written in a way where a motivated auditor can always find non-compliance. "Give me six lines written by the most honorable of men, and I will find an excuse in them to hang him." and all that...


Isn't that the whole point of BeyondCorp though? That application level security can work. Long term, trusting the network is a bad idea anyway.


Can work, yes. Has the budget (in both money and engineer time), probably not. It’s not as easy as just installing a new appliance and flipping a switch, especially for companies with low risk appetite and strict SLAs around service uptime.

Plenty of enterprises have mission-critical applications running on mainframes because either no one understands the software enough to move it to more modern platforms or because the business can’t spare the time/money to have their engineers work on moving it. It’s hard to expect them to adopt zero-trust and BeyondCorp with that kind of culture.

I agree with “don’t trust the network”, and especially with the idea that those four walls don’t offer any real protection, but enterprise culture is incredibly slow, inefficient, and risk averse.


To add to this: in (old school) enterprise, security measures seem to exist more for compliance and legal protection. Top management and especially shareholders care much more about being financially protected via some insurance or government action than about being technically safe(ish). See Equifax.


That's nice, but it seems the whole Zoom and Teams hype has done a quick end-run around those.

Given that Zoom looks like the interface on the moon lander, I'm sure part of the appeal must be in ignoring all these rules.


> That's nice, but it seems the whole Zoom and Teams hype has done a quick end-run around those.

It has. And somebody's going to get the book thrown at them for a breach at some point. At which point businesses will shut down Zoom and Teams post haste.

Corporate security is solely driven by perceived legal liability.


PCI, HIPAA and GDPR would all agree that most of your users, most of the time, should not have any access to sensitive data. It's reasonable to have users VPN in only when they need access to a specific system storing sensitive data, and even then to have only that access go through the VPN. There's no need to pass their Zoom conference through the same VPN, and it would be a benefit (or even a requirement) to keep the sensitive data separate from it. E.g. a remote desktop session that prohibits copy/paste and file exchange, so that you can't accidentally get the sensitive data into your email or chat environment.


> the security moves to the application

The application access should be secured IN ADDITION to network access to it being secured!

Security in layers is a thousands-of-years-old concept. Castles, for example, were built with multiple concentric walls that defenders could fall back to if the outer perimeter was compromised.

The reasons for a corporate IT network are multiple: the application could have an exploit at some point, employee credentials could be compromised (even with two-factor), a publicly exposed application is vulnerable to denial of service even if it remains completely secure, public endpoints can be scanned for vulnerabilities, etc, etc.

Please stop repeating bullshit from "zero security" vendor marketing materials.


When the network is so secure I can’t do my job anymore, I suggest to air-gap the power supplies. Ultimate security.


I'm a bit sceptical - not rubbishing, but sceptical - of the castle walls analogy.

If an attacker gets through your walls (network access), why do you think you'd stop them at the keep?

I think castles have something of an answer to that question, because physical static defence is just more constrained than computational static defence - the ratio of the effort to build a wall to the effort to breach it is poor compared to even complex passwords.

Your REAL physical defence is(are?) your soldiers, and the wall mainly aids their co-ordination: small attacks aren't worth attention, big attacks are SLOW AND VISIBLE. Does infosec have a similar wall to that? Certainly, when attackers breached castle walls via backdoors, it was normally game over, keep or no.


> If an attacker gets through your walls (network access), why do you think you'd stop them at the keep?

This whole analogy is bad, but to work within it: each of the castle's defenses is impenetrable (Wolverine went back in time and taught them to make adamantium). The only way to get through is to discover and exploit flaws in the design of the defenses.

If they breach the walls, they still have to find and exploit a flaw in the keep _before_ you notice they made it through the first wall, rebuild it without the defect, and send your wolverine-knight automatons to clean up the survivors. In this scenario, IDS/IPS systems are your scouts watching for invaders that breach the first wall.

Importantly, the designs of these walls change all the time, so the attackers need a window during which both the keep and the walls have a design flaw. And unless the flaw is in a component shared by both, it is unlikely to affect both (OpenSSL being an example of such a shared component).

> small attacks aren't worth attention, big attacks are SLOW AND VISIBLE

This is not true of castles nor infosec. A huge enemy force against a well-defended castle likely meant a siege, to starve the defenders out. Not a significant concern so long as you have food and water, especially if your resources would outlast their supply lines.

Small forces on the other hand are exceptionally dangerous, because it's probably a sapper party that's going to try to tunnel under your wall and detonate explosives (or just collapse the tunnel if this is pre-explosives). Sappers are almost infinitely more dangerous. You can see the same pattern repeat in World War I. Trenches with machine guns are nearly impenetrable to attackers, thus No Man's Land. So we went back in time and re-created sappers to dig tunnels under their trenches. Having the other side come over their trenches at you is far less concerning than hearing sappers digging under you.

Infosec is the same way. Port scans and attempted DDoS attacks are common, easily noticed, and unlikely to lead to anything threatening. Your real risk is being targeted: someone interested in you specifically because of the data you have, or money, or whatever. People who will poke around your stuff in ways that are hard to notice, like analyzing your auth flow for vulnerabilities. Those are your digital sappers. Large DDoS attacks are bad, because they can knock you offline, but it's like a siege: all you have to do is outlast them, and at the end of the day you lost some productivity but the castle is still intact. If the sappers get in, you go offline _and_ they'll steal your data.


> "Why are we presenting bigger, badder VPNs as the solution?"... SaaS instead.

You can think of it as path dependency. Organisations are already structured around a "walled castle." It's possible to run everything differently, but the path from A to B is too rough.


I should mention that split tunneling allows wholesale collection of data about employees.

The difference is that it correlates and merges personal behaviors and data with work. They all come from the same ip address and have the same patterns.


I doubt I could convince anyone to upload our customer and supplier data or technical designs to the cloud. Developers could do that; most industries probably could not.

There are hackers, and then there is industrial espionage. Users choose their passwords themselves, and there are also service accounts. It would be a complete security nightmare.

We employ quite a few SaaS tools that are mostly marked as "not that important" in case they get compromised.


If espionage is a concern, you first have to handle the low-hanging fruit (and I'd assume you already do), like only allowing your own hardware: no employee may use their own computer to connect to work.

You should probably also require hardware keys and things like that, but really I'd think it all depends on how much an adversary wants your stuff, because at some point it presumably becomes viable to just send someone to work for you and steal it.


I am not suggesting being that diligent, but yes, for some industries it might be the way to go. But I don't think these are low-hanging fruit compared to using SaaS or cloud services on the net. There are real additional risks when using those.


Absolutely. If you go by the marketing, all of AWS's staff is now potentially working for you when you use AWS. You could say this means that if any AWS employee is compromised, so are you.

As a contractor, all my code goes through a code review by one of the employees. I only have access to the development environment (even QA is off limits, except the publicly available stuff). It is all on Azure though. I think it is fine for most work done by peons like me.

I once used an application to take a screenshot of the whole display every minute (on my own computer), and it amazes me that the application didn't need root at all on Windows.

In short, I agree that you should avoid the public cloud for sensitive work but you should probably do a lot more (even regular training?)


My company and the customer I work for moved many apps and services to be accessible outside the VPN, with MFA mandated for login. Pretty much I don’t even connect to the VPN nowadays, except for accessing some on-prem servers and stuff.


> With core services to employees (e.g., email, calendar, office tools, filesharing, chat, collaboration, etc.) either serviced as a SaaS solution (e.g., GSuite, Office365, Slack, etc.) or hosted as VMs in a cloud provider (Exchange Server but on an Azure VM), the security moves to the application and the whole "walled castle" corporate network goes away.

Office 365 is still not GDPR compliant (https://www.heise.de/news/Microsoft-Office-365-Die-Gruende-f...), and many companies are not exactly a friend of putting their other core services (=Exchange) into the cloud where the US secret services can access it. In fact a lot of German companies outright forbid transferring their data on anything "cloud" and demand storage and processing of their data in on-premises systems, for fear of trade secret theft.


A VPN wouldn't have stopped the recent twitter hack via social engineering for example. Zero trust networks will have to be the way to go instead of the walled castle model.


Network segregation is a pretty solid mechanism and authentication is still required in private networks in most cases. Wouldn't make sense to give it up for no reason.


I'm a consultant across a bunch of orgs, and some of the fun stuff I've seen recently:

Client A requires VPN Client "X" that is exactly version 1.2.3.4.

Client B requires the same VPN Client "X", but force-upgrades it to "1.2.3.5" or whatever.

I now cannot connect to Client A without uninstalling the VPN client installed by Client B and reinstalling from scratch.

A gentle suggestion to the VPN vendors that write not-backwards-and-forward compatible clients like this: Go die in a fire. Or quit IT and go lay bricks or something. Actually don't do that, I wouldn't want to live in a building you build. It'll probably collapse if the wind isn't blowing at the exact right speed.

Similarly, Client C has a VPN client that is also the kitchen sink, and has a "profile" that overwrites half my system with its junk. It refuses to connect if I edit the profile. I have several coworkers who VPN in via Virtual Machines to work around this.

Many VPN vendors believe that VPN client downloads should require a support login. My feedback: Stop that. Just stop. Publish a straight up download link to every version. Please. Just do it. I promise you won't lose any sales because of this. Really. I promise.

More fun: a large gov org client of ours had spent nearly a decade hard-coding their "LAN subnet range" in a bazillion firewall and IP whitelisting rules across dozens of products. The VPN is in the cloud and uses a different range (10.0.0.0/8), so now every firewall rule is broken for users connecting from home. Nine months into the pandemic, they're still working on it.

I have Gigabit fibre and a high performance computer. Downloads really do go at 800+ Mbps. I'm yet to see a VPN product that can crack 200 Mbps even during periods of low load and with a matching gigabit link at the other end. Most do 3-20 Mbps, and I've seen as low as 1 Mbps. This is entirely a software problem!

Microsoft deserves the IT equivalent of the Razzie awards for their implementation of IP-HTTPS in DirectAccess which is a) Slow as molasses, b) Limited to 8 cores on the server for no reason, c) Not fixed in 12 years, d) Still theoretically supported, e) Will never be fixed.

Vendors: Benchmark your products please. Once a decade maybe. Or two. No rush. It's not like there's a pandemic on or anything. Take your time.

I have to stop ranting now, I think my blood pressure is rising to dangerous levels...


> I'm yet to see a VPN product that can crack 200 Mbps even during periods of low load and with a matching gigabit link at the other end.

WireGuard and IPsec (the latter being a nightmare to set up) [1].

[1] https://camo.githubusercontent.com/907b408d818e9522afe5168c3...
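
For reference, a minimal WireGuard client config sketch (keys, addresses, and endpoint are all placeholders). The `AllowedIPs` line is also where split tunneling happens, since only those prefixes get routed into the tunnel:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
; Split tunnel: only corporate prefixes go through WireGuard.
AllowedIPs = 10.0.0.0/8, 172.16.0.0/12
PersistentKeepalive = 25
```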


The protocol is not the problem here.

Most commercial VPN products are IPsec internally, plus some proprietary authentication stuff. (So much so that, for several of them, there are open source reimplementations of just those parts, with the actual IPsec done by the normal Linux kernel IPsec code. See OpenConnect's PulseSecure support for an example.)

Also, having both run a VPN for a large network, and worked on VPN software for large networks, once you get to high user counts it's not as trivial as you think.

VPNs introduce extra latency; the corporate VPN hub is probably not at all on the shortest path to your actual destination. And corporate VPNs tend to go through (multiple) layers of other filtering. The bottleneck can be packets/s, bytes/s, or connections/s, and it is not at all obvious which. And so on.

Besides, there are reasons corporations pay for expensive VPN solutions - certifications and compliance driven requirements such as integration of endpoint management.

Not saying it can't be done, but "use IPsec or Wireguard" misses the point entirely. (Of course, if those products were using Wireguard instead of IPsec, it would probably help a bit.)


To support our customers we usually have VPN access to the servers where our software runs. So invariably our support team needs a ton of different VPN solutions.

Like you say, some just refuse to co-exist (even between vendors), require 32-bit Windows (still!), or other silly things. So support folks have 5-6 different VMs, each with several different VPN clients in them.


Situations like these are why I like the networking system of Qubes OS. With it you can plug any network VM in front of your applications - usually Tor, but there's no reason it can't be a corporate VPN - and let the rest of your software use that as its uplink.

Vendor VPN software will probably never be fixed; there is no financial incentive to play nice with other vendors, and incompatibilities across upgrades mean that their locked-in customers who need to work with other companies are only incentivized to upgrade too. Sadly, a system like Qubes' or plain VMs will probably remain the only way to combine VPNs from different companies and/or vendors for a while...


> Sadly a system like Qubes' or using VMs [...]

You can do the same with Docker, too. Or an external machine plus some routing voodoo.


That's true, although routing a browser or a generic application through Docker can sometimes be difficult. There also aren't that many of these shitty VPN clients with Linux support, which is a major pain.


No idea which shitty VPN clients these are. I'm aware of frontends only (also, I currently mostly use macOS as a client), as well as sshuttle (which provides a VPN over SSH via the CLI).

You can run a container running a browser such as Firefox or Browsh linked to a WireGuard or OpenVPN container.

But you can do it without that, too. Have a different user use a different (virtual) interface, for example. A layer-7 packet filter could achieve it as well.


Ah, my problem with these systems is that they usually rely on some proprietary Cisco protocol, or on OpenVPN with an extra authentication daemon. Open source VPNs are thankfully easy to run.


Most networks weren't designed for everyone working from home. It just wasn't seen as in the realm of the possible.

On paper I have 1 gig internet and this was certainly true before the pandemic, but now my apartment's fiber is clearly having issues with the whole complex being home and using the Internet at the same time.


Yes. As someone who has to support network infrastructure and has had many network issues (at all levels) bubble up to me since the start of covid/WFH: broadband networks are being stressed during work hours. You can run your own monitoring at home to differentiate between last-mile and peering-related issues.

The challenge is that some of the broadband networks aren't being transparent about it. They won't admit to it, etc. Luckily I've found folks in a few companies who are willing to go off the record and explain the state of the last mile, and give dates for planned upgrades (which are impacted by permitting delays, with city employees struggling under covid restrictions).

I highly recommend getting two ISPs at home (cable + DSL/fttx/etc.) if you can.

As for corp networks... they should be doing more to provide their employees tooling that indicates whether the issue is their ISP or the corp network. It's a sad state that VPN appliances cost as much as they do (surprise - you could probably build a cheaper/better one, but something something no one ever got fired for buying Cisco/etc).


No. You don't have 1 gig... you have an oversubscribed channel with a top speed of up to 1 gig.

You can totally buy guaranteed connectivity... but you will dislike the price tag.

Although my hunch is that you probably have oversaturation of Wi-Fi channels if you are in an apartment building or something like that, since everything is streaming wirelessly nowadays.


> You can totally buy guaranteed connectivity

Guaranteed bandwidth to the internet is a nonsensical concept, because there is no single internet to have bandwidth to.

Guaranteed bandwidth to some specific network or host is a valid concept, but it is a very different thing, and you usually can't get it to any arbitrary network.


> No. You don't have 1 gig ... you have oversubscribed channel with top speed up to 1 gig.

As a residential subscriber in a US apartment building I really don't think I can actually purchase guaranteed 1 gig from any ISP.


Not as a commodity service, but given enough money, someone will run a dedicated cable for sure.


I rent, so I doubt I could get a dedicated cable like that on the property by myself, let alone into the unit. The work would probably violate my lease.


> Most networks weren't designed for everyone working from home. It just wasn't seen as in the realm of the possible.

This totally seems to be the case. I've had way more problems with the connection from home being slow than I've had because of the corporate network. This also seems to be true for everyone I've worked with. At this point I don't think there would be much benefit to upgrading the corporate network; it's just not where the bottleneck is.


They were designed for some people working from home, but that movement started 15-odd years ago when broadband became ubiquitous. Since then, the amount of data we use has multiplied by, I don't know, probably a factor of ten or a hundred, and they built it for working from home / on the go sometimes, maybe some emailing. It was designed in and for a previous era.


RDP takes little to no bandwidth.


The issue is not rdp or remoting software in general - it's videoconfs.


Likely it’s the upload associated with all those video conference sessions, since pre-pandemic everyone would still be watching Netflix at the same time in the evenings.


Eh, at least some of those people would be out and about at the movies, fitness classes, bars, restaurants, etc., none of which (at least here in WA) are anywhere close to being back to normal.


This doesn't make sense to me. Unless "small business" actually means a firm that uses an internal video link for many dozens of people, I would think that small businesses are actually overprovisioned. Consider that you could easily support several people watching youtube or similar streaming media at the same time.

Suppose you have a small firm with 10 office workers. What are they doing? Downloading Excel spreadsheets or Word documents, working on them, then uploading them. For that they have a 20-50 Mbps line. It might be asymmetric, but chances are it's still fast enough to fetch most documents very quickly, and your 10 staff will barely ever be using the line at the same time.

Even remote desktop sessions are not all that bandwidth hungry, and chances are if you're doing vidconf, you are using the provider's infrastructure, eg zoom.

IMO if you're a small business your problem is more that you don't have anyone who knows how to configure the router, you might have lax security, and your infrastructure is brittle (net goes down, no fallback).

That's what I'd focus on rather than VPN capacity.


From practical experience before we moved everything to "the cloud" and emptied the office server rack: these small businesses have a Windows server in a closet somewhere with huge amounts of files on it, typically users' complete documents folders etc.

When the user is in the office, they have a 1 Gbit LAN connection to that server and everything is great. Now 20 people work remote, and all those files are behind a shared 20 Mbit uplink from the office to the internet.

So the problem isn't only people watching YouTube or the fact that they VPN through the office to get online, it's also that all the files are still in the office.
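
Back-of-the-envelope numbers for that scenario (the 50 MB file size is a hypothetical example):

```python
def transfer_seconds(size_mb: float, link_mbps: float) -> float:
    """Time to move size_mb megabytes over a link_mbps link (8 bits/byte)."""
    return size_mb * 8 / link_mbps

# A 50 MB file on the office LAN vs. the shared office uplink:
print(transfer_seconds(50, 1000))     # 1 Gbit LAN: 0.4 s
print(transfer_seconds(50, 20))       # whole 20 Mbit uplink to yourself: 20 s
print(transfer_seconds(50, 20 / 20))  # 20 remote users sharing it: 400 s
```

Going from sub-second to minutes per file is what users actually notice.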


I've been in those kinds of offices too, before the great cloud switchover. What kind of usage would result in high upload volumes? How big is your average Excel, Word or Powerpoint? In my experience, it's not enough to touch the sides.

The only time there was ever a problem was when we bought a huge dataset to run quant strategies on, but that's not your usual office workload?


Excel files can be huge (think: all orders from a long time frame), but PowerPoint is also famous for lots of pictures or embedded video.


Not to be a party pooper, but I've seen Excel files in the multiple gigabytes thrown around.

Excel is actually one of the better tools ironically; I've seen powerpoint that make me genuinely sad for the state of computing.


This type of BeyondCorp problem is what Cloudflare's new product, Cloudflare One, is supposed to solve:

https://blog.cloudflare.com/introducing-cloudflare-one/


Google is now offering BeyondCorp as a product too, it seems: https://cloud.google.com/solutions/beyondcorp-remote-access


What is the complexity of creating an SDN with a cloud-hosted VPN? I am amazed there aren't more people in the space.


There's a huge number of players in that area. Pretty much all SD-WAN suppliers have an agent you put on your computer that acts like this and is application-aware, so you can decide per application where to send the traffic. They usually offer cloud peering, as well as extra security for the clients in various ways.



Some of them are though.

e.g. the big audit firms. They've got 100,000+ people working from client sites so the tech was ready to roll. They just had to jack up their capacity a bit for mixed to full WFH.

Even with 10s of thousands of people extra on VPN nothing broke.


Tailscale is the easy solution here. Just wireguard all the individual machines together as required. Scrap the corporate VPN.

https://tailscale.com/


Tailscale is a better VPN but it's still a VPN. Also, I have a feeling that organizations that can't migrate to BeyondCorp won't be able to "wireguard all the individual machines" either; they'd have to use Tailscale gateways which start to look like VPN servers.


I guess the main issue with BeyondCorp-style architectures is that while they work well with apps that mainly use HTTP, it can be a pain to get them working for plain TCP- or UDP-based apps.


Afaik, Tailscale works very hard to establish P2P connections and uses the gateway either to hole-punch through NATs, to proxy connections in the case of hard NATs, or for authn/authz.


I'm talking about what they call "relay nodes", which could easily become a bottleneck: https://tailscale.com/kb/1019/subnets


This looks interesting, thanks for sharing. Will bookmark and read up later.

Always looking for these "zero trust" solutions and alternatives to VPN-based authorization.


That's a neat solution, but I feel like $10 / user / month is ridiculous for something I can do for free as a traditional VPN.

I know everything in Silicon Valley is $10+ per user per month, but they're out of touch with the real world. I deal with tons of small businesses and they complain about spending $5 / month on MS365 for email which is absolutely essential to their business. The idea they're going to be signing up for all these SaaS platforms is crazy to me.

On the other side there's enterprise, but if I have 1000 users, why wouldn't I deal with my own infrastructure instead of paying someone $120,000 / year to do it for me? I'm in Canada and you could get 3 decent sysadmins for that.


Open network border and SSH to servers/workstations (key access only). I can easily work from anywhere. I tunnel my web browser via SSH, too, when I need to access internal org websites. SSH is really all you need. You definitely don't need (largely useless) network firewalls that make you feel secure.
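The browser-over-SSH setup described above can be sketched with a SOCKS proxy; host and user names here are made up:

```shell
# Open a SOCKS5 proxy on localhost:1080, tunneled through the bastion.
# -N: don't run a remote command, -D: dynamic application-level forwarding.
ssh -N -D 1080 alice@bastion.example.com

# Then point the browser at the proxy, e.g. with Chromium:
chromium --proxy-server="socks5://localhost:1080"
```

Firefox can be configured to use the same proxy under its network settings, with the added option of resolving DNS through the tunnel so internal hostnames work too.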


SSH is awesome for this, at least with a low head count. With a good SSH config (key-only auth, rate limiting, etc.) and a bastion (jump box), you can go a long way. I keep meaning to try https://github.com/sshuttle/sshuttle
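For the curious, sshuttle is roughly "a poor man's VPN over SSH": it transparently routes chosen subnets through any host you can SSH into, with no server-side install. A hedged sketch, with made-up host names and an assumed 10.0.0.0/8 internal range:

```shell
# Route all traffic destined for 10.0.0.0/8 through the jump box:
sshuttle -r alice@jump.example.com 10.0.0.0/8

# With --dns, DNS queries are also forwarded through the tunnel,
# so internal hostnames resolve as if you were on the office network:
sshuttle --dns -r alice@jump.example.com 10.0.0.0/8
```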


Unless the internal applications are poorly secured. It’s not unusual.


It's exceptionally common. I've encountered what seems like a wildly unreasonable number of otherwise smart, competent, capable devs possessed of the bizarre belief that having a firewall or NACLs means they don't have to:

  * Have authentication on their services, nevermind authorization.
  * Use TLS on their microservices.
  * Patch their libraries.
Or in any way pay attention to security. The egg is a very common network security model.


This seems like a great case of boldly stating the obvious.


You must not have seen Ars Technica before.


IMHO the biggest problem with WfH is in companies being forced to trust networks/devices outside their control. This makes the attack surface available for a potential intruder way too large.


The real solution is to get rid of corporate intranets- put the LOB apps on the internet or better yet, host them in a cloud or use SaaS. Yes, you’ll still need a guest network/WiFi for when employees actually come to the office, and some businesses will need networks for machinery, but information workers should just use the internet. Side benefit: you don’t have to manage corporate desktops anymore.


For a small business the first thing you should be looking at is Office 365. Put it in the cloud, get rid of the VPN.

I'm having a bit more of a problem with AD. AD wants you to be in the office when you log in for some group policy stuff. We are finding some policies can't be removed remotely! We are investigating Azure AD as the future, but that is more like MDM than AD.


> For a small business the first thing you should be looking at is Office 365. Put it in the cloud, get rid of the VPN.

In Germany, this is forbidden by the data protection laws:

https://www.heise.de/news/Microsoft-Office-365-Die-Gruende-f...


So, from the article at least, the EU requires hard changes from MS regarding data storage, while the German regulators are complaining more about the wording of the terms and conditions. Makes sense; I would have been surprised if my valued German administration had come to a different conclusion.

But honestly, this will a) be challenged by MS to defend their Office business in the EU and b) if that challenge doesn't work, be solved by some other wording and/or changes to the data storage of Office 365 for EU-based customers.

Until then, I have a hard time believing all these companies, from small ones to public international ones, will change back to non-cloud services anytime soon.


Almost every week in my business I get told I can't do something because of German law (normally by our local staff), and on closer inspection I find it to be not exactly the case. It is a big joke in the UK!

In this article I find references to the online versions of the Office apps and terms and conditions that the many German regional bodies are arguing over. This sounds like something MS have to sort out with the regulator, not something I have to do.

You make your own judgement, but I have users on 365 in Germany and they are staying there. We do not store customer data on 365.


> In Germany, this is forbidden by the data protection laws:

Does that mean Office 365 is illegal in Germany??


fwiw it's generally frowned upon; illegal only for use in an official capacity dealing with PII


It's not so much that Office 365 is illegal as much as it is the storing of GDPR sensitive data in it (ie PII of customers and/or employees) because the German government does not trust Microsoft to not share the data with US spy agencies. (And does not trust the US spy agencies to not share sensitive data with other strategically interesting US companies. America First and all that.)


> illegal as much as it is the storing of GDPR sensitive data in it (ie PII of customers and/or employees)

A lot of data is GDPR sensitive. Also, if some of the data must not be processed by Office 365, a lot of the advantage of Office 365 disappears.


I know that, but (certainly from the viewpoint of the German government) that is a problem for Microsoft and the Office 365 team, not for Germany and its citizens.


> Almost a year into the grand work-from-home experiment, businesses are still adjusting.

Huh? More like half a year...


It started in q1, we’re now in q4, that feels like “almost a year” despite being 7 months.


They work for me! Outlook, Confluence, Teams, Slack... No panic, boy!


But they are designed to make all the employees have homicidal thoughts towards each other ...

Am I right ?



