> “They were able to get cryptographic secrets for single sign-on cookies and remote access, full source code control contents, and signing keys exfiltration,”
Maybe putting your network control plane in 'the cloud' isn't such a good idea after all...
Edit: Just re-read the article, this part stood out:
> the attacker(s) had access to privileged credentials that were previously stored in the LastPass account of a Ubiquiti IT employee, and gained root administrator access to all Ubiquiti AWS accounts, including all S3 data buckets, all application logs, all databases, all user database credentials, and secrets required to forge single sign-on (SSO) cookies.
> Adam says Ubiquiti’s security team picked up signals in late December 2020 that someone with administrative access had set up several Linux virtual machines that weren’t accounted for.
If this is true, and whoever breached them had full access to their AWS account, can we really trust them to clean up all their tokens and fully eradicate all forms of persistence the hackers may have gotten?
WTF. Does anyone have a decent WAP where I can use PoE, deploy like 5 of them and have them support roaming between APs, all managed locally? Is that too much to ask?
Generally, halfway decent wireless APs are all targeted at the enterprise market. Consumer hardware is a brutal race to the bottom, as lay consumers aren't qualified to compare options based on anything but price and UI. Ubiquiti was an outlier in trying to bring enterprise features to the consumer market.
The problem for enthusiasts and small business/home office setups like yours is that both the enterprise market (e.g. Meraki) and the premium consumer market (e.g. Google WiFi) focus heavily on ease of management - cloud controllers are table stakes these days, not a controversial feature. Part of the premium that Meraki, Aruba, and that class of enterprise supplier charge is about having a trustworthy and secured backend.
Note, however, that roaming between APs is a feature of the 802.11 standard; you just need to have all your APs on the same layer 2 (802.x) network, and using the same SSID and credentials. No fancy hardware required, and you can even mix and match vendors.
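For the curious, a minimal sketch of what that looks like per AP on OpenWRT (the SSID, key, and network names are placeholders; any vendor's equivalent settings work the same way):

    config wifi-iface 'default_radio0'
        option device 'radio0'
        # bridge into the same L2 network on every AP
        option network 'lan'
        option mode 'ap'
        # identical SSID and credentials on every AP
        option ssid 'MyNetwork'
        option encryption 'psk2'
        option key 'SharedPassphrase'

Clients then treat all the APs as one network and roam between them on their own.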
The security appliance was relatively cheap; then we saw the fine print that the total bandwidth was artificially limited and increased only adequately two product levels up. Sorry Mr BubbleTime, you need to buy a new appliance and a new license. Your old one is worth nothing and non-transferable, watch it rot.
The switches seem absurdly expensive when you consider the 5-7 year licensing costs. And the quality is poor at best considering Meraki went and pushed a firmware update that bricked every fan in every 48 port switch we had. But you have the security appliance so it “only makes sense” to pay for these switches.
We had an IPSEC incompatibility between a vendor with an ASA and our Meraki gear. The solution was to buy a Cisco device just for that one connection.
All in all, it’s passable, but because of the lock-in it’s not like I have a cost-effective choice to get away from it. I wouldn’t choose it again.
That said, it does offer a mediocre IT tech a single pane of glass that they’d have to try hard to mess up.
Of all the Meraki factors I’ve learned and considered, that it is cloud-based is the least important towards my recommendation or lack thereof. There are lots of people who would be happy to explain all the ways my experience is wrong, but whatever.
Short version, I wouldn’t do it again.
It fits well with being able to rapidly bring bodies into a project and implement change X across hundreds of stores, while having a standing IT team of 5.
If you have onsite (fulltime) IT, it's likely not the best option.
I'd be particularly interested in comparisons of Meraki/Mist/etc. for small enterprise and campus.
Last I worked at Meraki was 2015; I don't remember any artificial limiting of bandwidth at that time.
Hard in what way? As long as the control traffic has paths between all relevant devices over the management LAN, why does the cloud need to be used at all?
2. Most customers who want this have multi-site setups; in that case, you need paths across the public internet too. Again security footguns, and also reliability ones.
3. Remote work is very very common for IT people.
4. Recovery from configuration mess-ups is harder if your control plane has to run on the same network that you've messed up.
There are on-site controllers available. They've just lost out in the market because of the amount of in-house IT expertise they require. No one wants to deal with that shit, and outsourcing the security and reliability problems to a specialized third party is usually a good idea.
In the prosumer to small business segment, I would argue that there is still enormous potential value in being able to configure all of the network gear from a single GUI, not least because it doesn't then require a lot of in-house networking expertise to get something going that works and is reasonably secure.
But with a cloud-managed system you have a professional, single-purpose organization dealing with those challenges. Which you are getting for the rock-bottom price of your licensing/support plan. Building a good internal IT organization is hard and expensive, and most businesses have other things to do.
> plus you have all the usual concerns about any critical system that depends on Internet connectivity to work properly.
Generally these systems only need internet connectivity to change the configuration and for some monitoring features. In practice, customers are okay with these being unavailable during internet outages as long as both the management platform and the ISP are on a pretty strict SLA.
(Compare, for example, the usual downtime from your 1-4-person IT team not having someone with the right skills on call.)
> and nothing is more flexible for disaster recovery than having someone physically on-site.
Who has the cash for that?
> In the prosumer to small business segment, I would argue that there is still enormous potential value in being able to configure all of the network gear from a single GUI, not least because it doesn't then require a lot of in-house networking expertise to get something going that works and is reasonably secure.
That was my original point: "Generally, halfway decent wireless APs are all targeted at the enterprise market. Consumer hardware is a brutal race to the bottom, as lay consumers aren't qualified to compare options based on anything but price and UI. Ubiquiti was an outlier in trying to bring enterprise features to the consumer market."
I don't know what your standard for a 10-to-50-employee small business is, but "point your browser at this IP address" is usually beyond their in-house technical skills. Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market. No one cares.
See for example the rise of the Managed Service Provider, which was a large and growing subsegment for Meraki back in 2015 or so. Showing up, installing the hardware, setting up the wireless, and then managing it from your office a few miles away is a big business opportunity, and is a much more efficient use of limited skilled IT labor.
No one with substantial resources and a profit motive.
> But with a cloud-managed system you have a professional, single-purpose organization dealing with those challenges.
Just to be clear, are you thinking of the professional, single-purpose organization we've been discussing today in the context of a catastrophic data breach, the one we've been discussing in the context of incompatibilities with other vendors, lock-in effects and expensive licensing, or a different one?
> Generally these systems only need internet connectivity to change the configuration and for some monitoring features.
So as long as the equipment is set up exactly how we need it and never needs to change or be checked for any reason, everything is good. It's hard to imagine why these devices need a UI at all, when the engineer who installs the equipment could just set it up once and then you're done.
> In practice, customers are okay with these being unavailable during internet outages as long as both the management platform and the ISP are on a pretty strict SLA.
John: Bob, the Internet is out again. Who do I call at the ISP?
Bob: We don't have a dedicated contact, it's just the business support number on their website.
John: I'm in the queue, at number 17. What's our maximum time for someone from the ISP to contact us about an outage? That might be faster.
Bob: No-one will call, but if it's not back by next business day we do get £50 off next month's bill.
(This is roughly how that conversation probably goes when you're a 20-person organisation with two floors of an office building on a business park outside a small town.)
What's an IT team?
What cash? When we have a new starter, John or Bob sets up the WiFi on their laptop and company phone and adds those MAC addresses to the whitelist for the network. Normally John works in development and Bob works in sales, but they do know a bit about networks so this is fine. Well, as long as they can get to the GUI, anyway.
> Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market. No one cares.
And yet as someone who has worked for software development businesses for an entire career and whose customers/clients have mostly been other relatively small organisations of one type or another, I have never met one that didn't. Of course that could be because I've tended to work with other technically-inclined businesses, but the same is true even for schools or my own business's accountants. I'm not claiming this is some sort of universal truth, but I don't think the market is nearly as tiny as you're suggesting, at least not in this part of the world (the UK).
Remember, we're probably not talking about setting up encrypted WAN tunnels across continents and multiple layers of switches in a data centre here. We're more likely to be talking about getting an Internet connection with suitable firewall set up, connecting a handful of switches and APs and making sure everyone knows the WiFi password, and installing everyday software on the staff PCs and mobile devices with maybe some basic configuration and enabling updates.
They're not unheard-of here, but again, in my experience such arrangements are far less common in smaller organisations than just having a couple of people on the staff who also "set up the IT" and know enough for the kinds of everyday admin tasks you're talking about.
"Small businesses whose core competence is software/networking, or who by coincidence have that expertise in-house, are a tiny niche market."
You have that expertise in house. Having looked at sales numbers and market research for a company that sold internationally and cross-industry: yes, your experience is very unrepresentative.
> even for schools...
Tangent: schools are honestly pretty technically sophisticated! We sold to some of them at Meraki, but they were drawn to us more for labor savings than to compensate for limited expertise. Education customers typically had very few IT people (especially in perpetually underfunded US primary and secondary schools), but very competent ones. They were feature-hungry power users.
In part that's because, even with low employee headcount, they have to provide a surprising level of IT services per student as well. A school with 80 employees and 1000 students probably has the IT workload of a white-collar employer with 500+ headcount.
OK, let's assume that's true for the sake of discussion. According to your market research and sales numbers, what is the big market for these cloud-managed products among smaller organisations, and how do those organisations generally manage their IT facilities?
1. Use low-cost consumer hardware with zero centralized management, and set it up with the same expertise and judgment as your typical residential deployment.
2. Have one admin person with the wherewithal to work with web UIs, who wants a simple set-up-and-forget system. The UI is not much more complicated than a single-AP residential deployment, and the user management workflow is no more complicated than adding a G-Suite user. If they can use the default password for the admin system, they will (which e.g. Meraki and Aruba don't have in any meaningful sense).
Your original contention was that it's hard to implement a single pane UI without putting a bunch of logic in the cloud. If our hypothetical one admin person with some idea of what they're doing, together with any automatic assistance the relevant devices provide, can set up enough local networking that all of those devices can reliably access the Internet and support cloud-based configuration, then a similar process can set up those devices to support single pane configuration using the LAN only.
At that point, looking back to the four "hard problems" you enumerated a few comments ago, I still don't see a strong argument for needing the cloud dependency.
The risks around network setup and reliability don't seem any worse for LAN-based configuration than cloud-based. In fact, LAN-based clearly has an advantage by not relying on any external infrastructure. It also has the advantage that if you want to get more serious for a larger deployment, you can run independent cabling and create a dedicated management network for control signalling, while most places aren't going to have an independent second Internet connection for management traffic if you accidentally break your configuration so your main data network loses Internet access.
Managing multiple sites is probably a non-issue at this level of the market.
Remote access for IT/support people is easily provided if necessary by having safe and easy VPN setup as part of your user-friendly interface. This has the added advantage that your tech people can also reach any other parts of the network they need, and so you might have required this functionality anyway. And if it's locally configured, you can always quickly shut that VPN access off again in case of any security worries, without needing anyone else's remote systems to be working properly before you can secure your own in an emergency.
That’s a senseless statement in the context of a cloud solution that requires Internet to work.
In theory yes, but man do a lot of devices have terrible roaming heuristics.
"I can still see beacons so id better stay here even though i havent received a packet in the last minute. Wouldnt want to pay the time cost of associating with that other BSS that has 5X the signal"
It's so nearly there. The power management stuff means that even with a single physical radio one can associate with multiple BSSs on different frequencies by telling one BSS to hold packets for you while tuning in to the other frequency.
All that's needed to make it a reality is a way to tell a BSS "If I fail to ACK a link-layer packet, please forward it via the wired network to this other BSS to send to me instead".
Then a client could be connected to multiple BSSs, send packets via either, receive packets via whichever one it is currently tuned into, and not lose any packets while switching.
It should help the high-power-but-bad-signal case (some devices use fixed thresholds) and equalize beacon vs. data reception quality.
I don't think OpenWRT has data-rate config in the web UI, but it does support the setting in the config files (which I normally scp onto a device). The following seems to work:
    config wifi-device 'radio0'
        # ~1mW, more than enough for 1 room
        option txpower '1'
        option legacy_rates '0'
        list basic_rate '24000 36000 48000 54000'
        list supported_rates '24000 36000 48000 54000'
I hear this a lot but have never experienced it myself; maybe it's related to an outdated OS?
I've been running multi-AP with the same SSID/key, no special sauce, for years and it just works.
I have multiple cheap APs set up in my house using the same SSID and it's fine. As long as I'm not holding a realtime conversation and moving around between APs I never have any problems. And since I almost never hold a Skype call while walking through my house, I almost never have any issues.
Of course you could say: Does the house have to be designed that way? Do the APs have to be located where they are, is it really necessary to have that stone wall, is it necessary to put the study in the place where it is, is it necessary to have that noise insulation around the elevator? None of that is necessary, but some Mikrotik hardware was much cheaper than getting rid of a stone wall and more pleasant than having to hear it when the neighbours use the elevators.
If I'm in the living room and need to move to the other end of the house to get away from family-related noise, the device needs to roam between two APs.
Unifi handles this without any issues.
Not exactly. There are extensions to pre-authenticate with an AP (802.11r) for truly seamless roaming without packet drop or delay, and for AP-assisted roaming (802.11k) where the current AP tells you your options to roam to. This last one is important because the AP generally has better information about the network than the client, and because clients are not that great at managing this.
I am sure there are other extensions too, but afaik cheap APs don't implement these.
The base standard's behavior requires a reassociation to the new cell (i.e. AP, i.e. BSSID). This introduces a gap in coverage, but for simple setups like the 5-AP one IgorPortola is talking about - I assumed that this was using shared-password auth - the gap's length is functionally 0. 802.11r gets rid of that gap, which is important when using heavier-weight authentication protocols like 802.1x.
(Note that by 802.x in my original I meant not 802.1x, but rather the set of standards including 802.3 (ethernet) and 802.11 (wifi))
Ubiquiti had a secured backend - their screw-up was not doing MFA on their admin accounts. I would still like if there was an option for a local-only control panel.
The workflow we used was AWS Vault -> Okta -> short-lived AWS creds.
It briefly pops you out to a browser to authenticate and caches a short-lived token locally.
There are tools like aws-okta that can take advantage of that to supply short-lived credentials which require 2FA.
You could also write a service that requires whatever authentication you want and returns the results of STS AssumeRole
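A rough sketch of that last idea with the plain AWS CLI (the role ARN, MFA serial, and session name here are placeholder values):

    # Exchange long-lived credentials plus an MFA code for short-lived ones
    aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/admin \
        --role-session-name jsmith \
        --serial-number arn:aws:iam::123456789012:mfa/jsmith \
        --token-code 123456 \
        --duration-seconds 3600

The call returns temporary AccessKeyId/SecretAccessKey/SessionToken values that expire on their own, which is the property that limits the blast radius of a stolen credential.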
I am deeply saddened by Ubiquiti’s fall from grace... they were so good.
If I wanted to run it all the time, I’d try putting it in a docker container on my synology.
Instead, I have an sd card for my raspberry pi that has nothing but the controller installed. The main downsides to this are that it is easy to lose the sd card, and that the controller gathers bandwidth/usage/wifi connection reliability stats, but only when it is running. I don’t get those unless I boot up the RPi to diagnose some network issue (this has never been an issue in practice).
One advantage of the RPi setup over a Synology container is that it has both an Ethernet jack and a WiFi adaptor. This is surprisingly helpful when bootstrapping complicated mesh topologies.
I too am disappointed in UniFi’s direction.
I used to recommend them. I don’t now.
I am still looking for alternatives for when the time comes to replace mine. Which I'll be forced to do if/when they completely nerf the self-hosted-on-your-own-hardware option.
I couldn’t see an option on setup.
I might try blocking it from the internet and see what happens.
Having your local network depend on an external network makes my old school sysadmin bones tingle for some reason.
They have a good UI, good hardware but the software seems half baked.
Originally, with the switch to the "new settings", the schedules were switched between the APs and the UDM; not sure about a dedicated cloud controller.
Great product, poor QA I think.
From a couple years back, https://arstechnica.com/information-technology/2016/04/how-h... (the hackers got remote access to a sysadmin's desktop, then waited until he mounted the TrueCrypt volume and stole the entire contents)
Even with hardware tokens, if someone gets access to your machine while you're using it, they can wait until you authenticate, then use the creds, proxying requests through your machine so they look legit.
So you're saying it was both not trustworthy and not adequately secured?
MFA is not a silver bullet. You can still login with stolen cookies and 'replay' the session without signing in.
With standard 802.11 roaming, you have to reassociate and reauthenticate to the new AP. While this process is underway, you can't pass any traffic. For open networks or simple auth schemes like WPA2 single-password, this isn't very noticeable; however, for heavier-weight auth schemes like 802.1x this pause is substantial and is especially noticeable on voice/video calls. 802.11r is a scheme for caching the authentication info, letting you avoid the 802.1x round-trip to a central auth server.
For a 5-AP network, usually with shared-password WPA2, it's not necessary.
My kids have to go into settings, reconnect, and move on.
I think it did support roaming in the past and they disabled it in an OS update.
I will note that as of 2015, "L3 switching" (i.e. hardware-accelerated IP routing) hardware was expensive as hell. I believe that on the software side, dropping new hardware into the existing hardware-routing infrastructure is fairly easy, but I don't actually know because I didn't work much on MS hardware.
I've got some Ubiquiti gear I bought a couple years ago. Like you, I want good quality gear that I can manage myself. I don't need a bunch of fancy corporate garbage, like link aggregation or cloud management. Give me solid, hardware accelerated routing and switching, flexibility over my local DNS, and maybe some VLANing.
I was running Linux on a small x86 box as my last network router. Maybe it's time to get back to that. That or go back to banging rocks together. Haven't decided which, yet.
My experience as a professional "network nerd" is that most other people in the networking field run cheap/second-hand enterprise gear fetched from their employer at a major discount, and simply seem to care less about wifi in general.
EDIT: it's when you get into supply contracts in the thousands... then it gets tricky.
I picked up a pair of Aruba 3200 controllers and a bucket full of APs on a local auction site for a song years back, still does me fine. Then again, not caring about the fastest latest standards is key, if you’re chasing current gen the enterprise stuff is unaffordable. You do need the appetite for a bigger power bill, mind.
My plan: OPNsense on a PC Engines board for router + firewall, an unmanaged PoE-providing switch for switching, and something from 2-8 WAPs for indoor/outdoor Wi-Fi.
> APU2, APU3 and APU4 motherboards have four 1Ghz CPU cores, pfSense by default uses only 1 core per connection. This limitation still exists, however, a single-core performance has considerably improved.
I can saturate 1Gbit/s with no problem out of the box on Debian/OpenWRT on APU2/3/4; YMMV.
It's pretty hard to recommend Unifi based on how they handled this breach, but the hardware itself has performed very well. Hopefully the new PC Engines boards can accommodate your needs.
It's got a quad-core i5. I run Proxmox and virtualize VyOS as a router, Home assistant, and a couple of other small things like an https reverse proxy for various services that I like to access remotely.
Went this route after my old OpenWRT router couldn't keep up with gigabit WAN. This box has no problems doing so, and even does WireGuard at near wire speed.
There are a bunch of similar units available on Aliexpress, as well as 1U units with x86 CPUs and SFP ports for 10GbE, etc.
They’re small passively cooled embedded x86 machines. They haven’t made the jump to 10GBit, and their newest model (the apu2) is getting pretty old. However, they have very long production timeframes (many years) for each board config, which leads to stability over time.
I have an ER4 which works for now, but I plan to go down the custom route once the ER4 is unable to push packets quickly enough. My hope is that VyOS/DANOS is sufficiently stable by then to run as a VM on, say, an Odroid H2+ replacement (or something similar).
I know quite a few companies that use it in production.
No. They just don't want to serve the low end. I'm from SK, Canada, and the vast majority of all businesses are small businesses. This site says 98%. The problem is they only account for about 25% of the GDP, so vendors don't consider them worth serving. Everyone wants to sell to the 2% of the businesses that make up 75% of the GDP.
There's a lot of money to be made in the small business sector. It's just not *enough* money for huge tech companies.
[Hi from Regina!]
For example with pfSense going closed source we’d be willing to pay around $100 total lifetime cost to put it on PCEngines hardware. We can build that in to the upfront cost of the device. I wouldn’t be shocked if they try for $50-$100 / year which won’t be economically viable for our market, so instead of getting $100 / device and never interacting with us, we’ll end up moving to a different product. I really hope they come up with an offering that’s appealing to the small business sector, but I’m not holding my breath and I’ll be learning opnsense as a contingency.
Still, it’s nice to have a hobby, and if you’re looking for one, run your own, sure! No shame in that. But it’s no longer necessary, and that’s pretty swell to me.
^ I agree with why they don’t make that accessible to end users: because people will uselessly fiddle with settings knobs to feel empowered, knobs like “separate 2.4 and 5 networks” (which breaks roaming and makes users incorrectly blame their WiFi routers when PEBCAK is at fault) that semi-expert users feel qualified to mess with, and lazy technicians will use to create “guest” networks that don’t offer protection and perform miserably due to being locked to 5GHz.
I do have requirements beyond what the typical consumer does of their network, like PoE to run a couple of access points, PPPoE so that I can put my modem in bridge mode, the desire to configure extra DNS records, dynamic DNS since my home IP changes. Oh, and let's not forget some filtering/rewriting capabilities so that I can force modern smart TVs to respect the DNS server I provide them.
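For the last point, the usual trick on a Linux-based router is a NAT rule that rewrites any outbound port-53 traffic to the local resolver; a sketch, assuming a LAN bridge named br-lan and a resolver at 192.168.1.1:

    # Redirect all LAN DNS queries to the local resolver, regardless of what the client configured
    iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 192.168.1.1:53
    iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j DNAT --to-destination 192.168.1.1:53

(Devices that use DNS-over-HTTPS sidestep this, since that traffic is just port-443 TLS.)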
My network is much more usable having put the time into it. Yes, you could buy some off the shelf thing and get an OK experience, but that wasn't good enough for me.
All of these features are available out of the box and have a GUI intelligent enough to offer a text area for adding filtering/rewriting commands that exceed the GUI’s remit. I used to have to hand-build this. Now I can plug and play it, and end up with the same experience as someone who built their own server and OS, using the same open source components as they would.
Total time invested, 8 hours over 5 years. I’m content with that exchange, and it has come with the only drawback being “it cost money to purchase the router itself”. I could DIY for less expensive in dollars and more expensive in hours. That’s the hobby-or-not choice, as I see it.
I do not decry those who invest time instead. Good, do so! I invested thousands of hours of my life into DIY of this stuff. It was invaluable experience, but it’s no longer mandatory to DIY to get a great experience indistinguishable from DIY.
It would seem the market is ripe for them to come back into the wifi space with a mesh product.
They do sell mesh wifi products from Eero, Linksys and Netgear on their shop, but I don't think there's going to be any Apple-branded network gear anytime soon.
Generic Linux or BSD boxes are ok as routers, but they're not the best switches since they start taking up a lot of space if you need a bunch of NICs.
The latest incarnation on a Linksys EA8500 is slightly bumpy (seems like a kernel crash), but it hasn't gotten annoying enough yet to hook up the serial console and get into kernel bug hunting.
I have about a dozen VLANs that are distributed between different SSIDs, and a few L2 switches for wired; bonjour gateway/filtering for stuff like AirPrint.
All my switches are bonded to one another, and it was handy when something snapped one of the fiber runs. That side of the house kept connectivity until the weekend when I could crawl around and run a new cable. (Never did figure out why it broke, though. Guessing the house shifted in just the right way.)
It would have hardly been the end of the world if I had to wait, but if your kit can do it, why would you not?
I would not detract from your network going the extra mile. I suspect that for most people, the value-to-effort ratio of link aggregation just isn't there in a residential setting.
IMO using what we have intelligently is easier. Ubiquiti hardware has the Edge line of routers and switches that are not cloud-controlled, do not listen on any ports, and do not establish any connections on your behalf.
That's something entirely different from what happened with Ubiquiti.
Many people switch not simply for the security/security-theatre, but because they no longer want to support a company with such a poor security strategy after it is revealed that they have internal issues.
Less dopamine, though.
They've been working nicely. I have good luck with fiber SFP+ modules, but it seems picky about 1G copper SFP modules, FWIW.
I checked my order history; it looks like ipolex and 10Gtek 1000BASE-T copper modules have had troubles in my MikroTik switches. The MikroTik brand works fine, and every 10G fiber module I've tried has worked (lots of fs.com, and I think 10Gtek, and probably some other brand off Amazon).
I've got a setup similar to what you're asking for. The TP-Link APs (AC1750, AC1350 and AC1200) support PoE, they're in a wireless mesh, support roaming, and all configuration is handled with one interface, no cloud involved.
Just make sure that what you're ordering says it supports Omada. They still ship a lot of SMB gear that doesn't, but all the basics are there now.
I just started using an EAP660 HD at home a week ago, so far so good. Haven't topped out the speeds yet because nothing in my house can take advantage, but I have some AX200 cards coming. I understand there's a throughput bug at the moment that's going to be solved in a future firmware fix, but my clients don't go fast enough to hit that yet. TP-Link seems to very actively update their firmware for the pieces I've been using, FWIW.
So I've been pretty happy with it so far. Roaming has been fine, though in one case I think I had non-optimally located a couple of APs because my Linux laptop kept rapid-fire flapping between two of them. I believe that's a client-side problem, though.
I did try a Cisco 240AC and its wifi performance was rock solid. The management interface is non-cloud, and I believe covers the whole network, but it lives inside the AP itself, which I don't love. The management UI is buggy and they seem slow to push bugfixes, and when I added a 142ACM to extend my network it started going flaky -- I had to do a factory reset/reconfigure of the 240AC to resolve it, then it happened again a few weeks later -- so I'm gonna flip my Cisco stuff on eBay. :-(
Tip if you adopt one of these in Omada: You need to give Omada the EAP660's password (default "admin"/"admin") for it to successfully adopt. The other APs never required a password to adopt, so it was a little confusing until the internet came to the rescue.
The layer-3 stuff however is still early days and I can't recommend getting the secure gateway at this time. No IPv6 support. It depends strictly on an internet uplink configuration for the default route, to which all traffic is then NATted. Can't change that. No real security features, no packet inspection, etc. The routing features really feel like an alpha version. They are working on it and have a roadmap to a more workable layer-3 solution. So maybe in the future they will be as nice as the Ubiquiti solution.
Cloud is not needed but is possible. You can get an OC-200 controller for not much money that fills the role of a single-pane configuration web interface. The software for that controller can also be downloaded for Linux on PC or ARM if you want to use your own hardware. Also, the network keeps running if the controller is down.
I've been very happy with roaming/throughput/reliability generally. The EAP-225 is 2x2, which they don't readily announce. Their newer and more expensive units are available as 4x4. That being said they're so cheap, I've been happy just to throw more onto the network.
For the software to manage them it uses some kind of multicast identification scheme to find new APs. If you're on a different subnet then it won't be able to automatically see them. They have a tool to connect to the AP and give it the management server IP, but that's Windows only.
The other option (that I went for) is just to create a management VLAN (good practice anyway) that the controller and APs live on. This is specifically supported by the APs.
Without those, it takes a little longer for the device to switch APs at the borders of their coverage. Mostly imperceptible, but the longer handoff times can be enough to kill a phone call over iPhone WiFi calling
I think I'd rather take an ostensibly-offline controller from China than a cloud-enabled one from the US, though I'm not really happy with those options. :-(
Are there some good options I missed? Would like to hear about them, if there are any.
I expect their hardware is made in China, even if their controller may not be.
Please do review https://news.ycombinator.com/newsguidelines.html and stick to the rules when commenting, regardless of how wrong other commenters are or you feel they are.
The comment was something about how if you get the FBI mad they'll fabricate a drug case against you which somehow involves hacking into your home router or possibly subpoenaing your ISP.
Even Cisco was doing it:
And the NSA was known to be intercepting router shipments to international customers, injecting their backdoors, then re-shipping the modified hardware:
https://www.infoworld.com/article/2608141/snowden--the-nsa-p... (this is documented all over the place; infoworld may not be the best source but it is just one)
For every example that is exposed, it is safe to assume there are others that have not been found.
The OS, TurrisOS, is based on OpenWRT and for a while they were having trouble keeping up-to-date but that's been sorted in recent releases.
There are great features like auto-updates and BTRFS snapshots and the ability to rollback to previous known good if you screw up a config. I also run LXC containers on it for things like PiHole (not on the internal flash but the main board takes an M.2 SSD).
The Turris MOX is a modular Turris system that you can assemble from the parts that you need.
I have a small Gl.iNet router upstairs flashed with upstream OpenWRT that I use as a WiFi access point and have setup 802.11r for BSSID roaming. Have been using this setup for months and handoff has been completely transparent.
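For anyone wanting to replicate that, a sketch of the relevant per-AP options in /etc/config/wireless (the SSID and key are placeholders, the mobility domain is an arbitrary 16-bit value that just has to match on every AP; assumes WPA2-PSK):

    config wifi-iface 'default_radio0'
        option ssid 'MyNetwork'
        option encryption 'psk2'
        option key 'SharedPassphrase'
        # enable fast BSS transition (802.11r)
        option ieee80211r '1'
        # same value on all APs
        option mobility_domain '4f57'
        # derive FT keys locally; no inter-AP key distribution needed
        option ft_psk_generate_local '1'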
They can be a little nasty to users on the forum as well but in general I really like the product.
It's the right hardware, and great firmware and wonderful flexibility - but it needs an easy to use GUI controller to make the simple stuff easy to take over from Ubiquiti.
Even before now there are some limitations with UniFi that have annoyed me. Setting up more complex DNS and firewall rules requires editing the JSON config. IPv6 tunnelling isn’t well supported. The stats in the controller, whilst neat, aren’t very useful because they have to be manually reset to zero.
CLI for Port Forward:

    /ip firewall nat add chain=dstnat dst-port=1234 in-interface=ether1-gateway action=dst-nat protocol=tcp to-address=192.168.1.1 to-port=1234

VS having to document the same task in the GUI:

    Dst. Port: Port
    In. Interface: ether1-gateway
    To Address: IP address of Server
    To Port: Port # of Service
Highly worth getting one to try out.
With the CLI you either need to document it yourself, or you need to know to query if there are any port forwards. That can be a problem if there is more than one person responsible for the network, or if someone else needs to inherit your setup.
Documentation of configuration sometimes isn’t an issue on your own home system because you generally have a high level memory of what changes you made and their purpose. Conversely I still struggle sometimes with Ubuntu because I customise my configuration using command line tools, and I find keeping track of those changes or the implications of those changes is difficult.
I would start with a hAP ac², a wireless router that is approximately the equivalent of their hEX Ethernet router plus a dual-band AP (cAP/wAP ac). It's a great standalone device and less than $70, or you could get the individual devices for a bit more flexibility.
Avoid the models labeled "lite", those are low-cost versions with lower routing speeds and 2.4GHz WLAN only.
For management you can obviously configure each device separately, or you can use CAPsMAN where one device acts as the controller and handles all configuration. It's not as slick as Ubiquiti, but it works.
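A rough sketch of a CAPsMAN setup in RouterOS terms (the SSID, passphrase, interface name, and controller address are placeholders):

    # On the controller:
    /caps-man manager set enabled=yes
    /caps-man configuration add name=home ssid=MyNetwork \
        security.authentication-types=wpa2-psk security.passphrase=SharedPassphrase
    /caps-man provisioning add action=create-dynamic-enabled master-configuration=home

    # On each AP:
    /interface wireless cap set enabled=yes discovery-interfaces=ether1 \
        caps-man-addresses=192.168.88.1

After that, the wireless settings live on the controller and get pushed to every AP it provisions.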
This news (the cover-up, legal overriding good security practices) is super concerning though, and I'm definitely going to start looking around as well.
I think it's specific to Access Points, so not a general purpose centralized controller for MikroTik equipment, but... centralizing access point management seems to be the main thing under discussion here.
No, you don't? I mean you can but you don't need to.
There are cases when that is useful, true - for example, the automatic channel selection makes some curious choices sometimes.
Scummy? Sure ... especially if you don't have a Ubiquiti gateway but only APs, so the top part of the page is blocked out. But it's not exactly "pushing ads at me!" in the traditional sense - e.g. they're not targeting ads, they're not collecting data.
The pervasiveness of adtech never ceases to impress me.
I used to have remote access turned off and accessed the video streams via the iOS app when my phone was on VPN to the local network. That no longer works. Remote access (cloud) needs to be activated in order for the iOS app to work, no matter if you are on the local network or not.
My controller is only on 6.0.43 but I can access it via the iOS app on VPN.
My controller only does wireless/AP management though, nothing more.
New UI: Settings > System Settings > Administration > Enable Remote Access
"Classic" UI: Settings > Remote Access > Enable Remote Access
I'm still on version 5.14 and all of the cloud features are optional. I just ignore them. I guess now I know not to upgrade!
You're conflating "the NSA secretly rerouting shipping company deliveries to end-users, installing their firmware, then sending it on" with "Cisco willingly did that".
Cisco was unaware, and once aware (thanks to Snowden), Cisco took steps to try to prevent it, by altering shipping destinations at the last minute, en route.
So, while this whitepaper is news to me, how is this an "NSA backdoor"?
Reading up on this, it sounds like
* it was required, much as with phone tapping, by the US gov
* ergo, ISPs needed it, were mandated to have it
* therefore, Cisco implemented it
* this protocol was for lawful intercept. Police, FBI, everyone.
While beyond annoying, this is not a back door for the NSA. Nor is it even secret. Before you get all pissy, you should at least state fact as fact. Not exaggerate. Not make it about a specific actor when it isn't. And not wrap it in whataboutism.
If your goal is to let people know, I assure you, spouting unvarnished, direct truth will help a lot more.
So let’s run through it.
Cisco writes white paper supporting LE back door access.
LE/IC use hard coded back doors as revealed in the Snowden and Vault7 leaks.
You’re saying it never happened, ever.
Maybe you’re right (you’re not) but you spoke so firmly!
Do you know something I don’t?
In 2005 the FCC ruled that CALEA applies to broadband Internet providers.
So yes, it was mandated. You may disagree with the ruling, but ISPs were required to do something, and Cisco enabled this on products for ISPs. Did they have it beforehand? Yes. However, this capability only existed on certain products, and other countries required this before the 2005 FCC ruling (again, from the IBM white paper).
But of course, this still isn't "Cisco put in back doors for the NSA". This is "Cisco putting in back doors for law enforcement, including even local police".
Further to that, everyone was aware of this. You can't have a 2010 white paper by IBM, before even the Snowden leaks (2013), if it was secret. And realistically, a "back door" isn't quite that if it is well known. It's just another access point in a product.
Secondly the 'Snowden' leaks, which had everyone quite pissed, including Google (whom I hate, but...) starting the big push for SSL everywhere, were not caused by these specific back doors.
Heck, this white paper is from 2010, and this 'law enforcement' "back door" was well known, AND!, not in all Cisco products! How, then, could Google be surprised by the revelation that this back door existed?
How could anyone?
It was not a secret. It was not in all products.
No, Cisco routers were infiltrated in two ways. Undisclosed vulnerabilities, which the NSA was aware of, and used against all router vendors to install NSA malware. And again, by intercepting shipments to end-users, installing NSA backdoors and malware, then resealing and shipping the product onward.
This is what the Guardian Snowden leaks talk about!
The big difference between China (and your whataboutism) and the US is that if you don't let the Chinese government into your company, do precisely what it says, and install all the backdoor software it wants?
You don't have a company any more, and maybe not your freedom, or even your life.
Meanwhile, the NSA, has been acting illegally, and does NOT have the support of US tech vendors. In fact, US tech vendors are hostile to NSA's attempts to subvert their products, including lobbying US politicians to stop this sort of behaviour.
There is a vast difference between these two things, and in all of the above, Cisco did not willingly put "back doors" in anything for the NSA.
So in response to your question? Yes, I know something you don't.
History. Factual, actual, history. Not revisionist.
I'm happy to re-examine any of this, if you can provide links to data showing Cisco allowing NSA agents into its midst, and installing NSA spyware for its products at the factory. On purpose. Which aren't open, and were hidden from everyone.
Or something similar to this.
Because otherwise, your statement is absolutely, positively, not factual. How can I say otherwise?
And yes my original response was firm, because I've seen others say this sort of thing. We must be factual in our claims, not hyperbolic!
So, yes, I agree with you about not being hyperbolic.
However, let’s just say I have exceedingly applicable industry experience. (IC and LE)
I know beyond a shadow of a doubt that I’m right.
So now my burden is finding what I can in the public domain to share this truth with you without violating NDAs.
The one linked in the tomshardware URL, in your own post! The white paper by IBM that you even talk about in your post!
> Years later, in 2010, an IBM security researcher showed
Apparently you're discussing how IBM showed this without even having read the paper?!
So now I've done more research into IBM's whitepaper, which you summarize, than you?
That very same IBM whitepaper you cited, claims the FCC mandated it. As in, pushed an interpretation of a regulation. Are you claiming the whitepaper is wrong?
The whitepaper which you used to validate your claims?
Or, are only the parts of it which you agree with correct?
The claim of an FCC mandate in a white paper does not indicate legality of deployment in the real world is what I mean.
Don't know if this is still the case or not, but they did this for FCC compliance around the time 802.11ac was launching. That might have changed since, though; I'm not sure, I stopped considering them at that time.
Also, a good company to look at would be MikroTik; I have heard good things, but haven't looked into them directly.
With my 5 year old Mikrotik hAP AC I am able to get up to 500 Mbit/s on lan.
And my old phone now shows 250 Mbit/s on speedtest.net both directions.
How much more are we talking about? Have I missed some big hardware upgrade recently?
I remember that when I had the hAP AC, using firewall rules inside the LAN, it also did not go much faster. A good indication was CPU usage: if it used 100% CPU at ~200Mbit/s then it was the firewall slowing things down.
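On RouterOS you can check this directly rather than inferring it; the built-in profiler needs no arguments:

    /tool profile

If "firewall" or "networking" dominates while throughput is capped, the CPU rather than the link is the bottleneck.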
There are PoE devices with OpenWRT support, and it should be possible to enable 802.11r if they have the support. They can be managed locally, even with a self-signed certificate.
To somewhat eliminate the chances of adventure, I’ve profiled the setup for each of my many OpenWRT devices and created unique profiles for them in a (reasonably) simple Git repo.
All I need to do to get device-specific firmware is to update the OpenWRT version-number in a single makefile and the rest happens automatically.
I’ve even set up GitHub Actions to build the firmware for me (basically, run make), so I can even get/build new firmware from my phone.
I’ve yet to have any issues when flashing these builds. It used to be much worse when flashing the regular “official” OpenWRT image and restoring packages afterwards.
Couldn’t be simpler! (With the regular Linuxy you-have-to-build-it-yourself-first clause)
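For anyone curious what sits under a setup like this: the heavy lifting is done by the OpenWRT Image Builder, and a per-device makefile mostly wraps a call like the following (the profile and package list are made-up examples; FILES points at a directory of /etc/config overlays to bake in):

    make image PROFILE="tplink_archer-c7-v2" \
        PACKAGES="luci luci-ssl -ppp -ppp-mod-pppoe" \
        FILES="files/"

Since the Image Builder only assembles pre-built packages, a full device image takes minutes rather than the hours a from-source build needs.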
I need to get back to trying to build a custom build for my KanKun smart plugs.
I have it on very good authority that Ruckus have started rolling out a change in their pricing model to require an Unleashed license per AP to operate, a move which obviously increases costs to the end-user.
Some people might say it's a deliberate move to prevent cannibalisation of their main business model by nudging people away from Unleashed. I couldn't possibly comment.
It works so well I wouldn't mind paying some fee, but it'll depend on how much.
My earlier comment was based on a change of policy which happened around 1st March, and any Unleashed quotes as of 1st March (and the two-weeks prior) need to be re-quoted for the new "license per AP" Unleashed model.
I've been a bit busy with other work since that bombshell dropped, but if I get a moment I'll try to dig up some pricing.
The other thing to note is feature discrepancy between Unleashed and standard. Perhaps of most interest to your average HN contributor was (the last time I checked) IPv6 was not supported on Unleashed firmware, and not much sense of urgency (if any !) to rectify that.
For me, I bought my AP on eBay and just plopped the standalone Unleashed firmware on it, and that's all seemed fine. From what I see, nothing's changing? But it sounds like you're running a /much/ larger install.
As you may or may not be aware, Ruckus have an "all quoted" policy, there is no price list per-se.
At the time I was working on the project (late 2020) Ruckus did have a promotional activity going on where you could buy Unleashed kits at fixed prices without quoting.
However due to various technical questions that were coming up (e.g. IPv6 support) we missed the window and it was uncertain if Ruckus were going to extend the promotion.
Ruckus did extend the promotion, at least initially (Jan-Feb 21') but then they switched to the "license per AP for Unleashed" and the promotion was killed off.
It was at that point that my friend took the hint and dumped the idea of Ruckus, and I went back to my normal work.
If I get a chance I'll try to find out what happens about second-hand kit. My guess would be that if you stay on old firmware there's not much they can do about it. Although whether its desirable or advisable to stay on old firmware is another question, obviously.
Without going into detail because, well, you never know who's reading ....
TL;DR "WatchDog End User Support" is now mandatory for Unleashed and is sold and priced on a per AP per year basis.
The pricing is not too scary (a two-digit figure per AP per year). But I'm told the requirement is (or will be?) enforced, so it's unlikely to be a case of being sneaky and paying the first year and "forgetting" to pay the renewal.
I've clearly only just scratched the surface of Ruckus stuff.
What are you describing here? I have a Ruckus Unleashed that I bought without a credit card and it works fine.
Or if you just want Wave1 Hardware...R700/R500
You can get these as overstock on the cheap on amazon etc. The unleashed version means it can run the controller on the AP.
I do find myself rarely looking for firmware upgrades unless there’s a specific issue I can’t workaround.
Even on my ubnt equipment. I find it best to just leave it segmented/network isolated and humming.
All these cloud features just increase exposure and grant the vendor leverage to hold you hostage.
I bought an R610 AP on eBay a few months back, flashed it with the Ruckus firmware (legally available to all from their site), and it does exactly what you want. On-prem only, no cloud, one of the APs will act as a controller/manager for the others, and they can all communicate via wired or meshing off of each other. One of them can even be a NAT thing if you want.
I think I paid around $160 because someone had a bunch of off-lease ones. But if you look up anything that supports the Unleashed firmware you'll be good. 802.11ax is the hotness right now, so the slightly older (but still great) ones are a LOT cheaper.
I replaced a Ubiquiti setup with a Ruckus R610 and small fanless running OPNsense (Protectli) with a basic switch and POE injector and it's excellent. Sure, it's not single pane of glass for it all, but the AP is rock solid and OPNsense is a solid known quantity. I've got no regrets.
The other alternative is to go way up-market and buy industrial gear. Consumer gear is shit due to a race to the bottom mentality. 90% of consumers buy the cheapest. This is also what turned every TV and appliance into a feature-encrusted shitbox full of spyware.
(They also sell a campus controller that is local, no cloud ... but this route is pricey.)
Not as comprehensive as Ubiquiti’s management interface but the CAPsMAN feature on Mikrotik routers and APs does cover this use case.
Support/licensing costs are totally worth it for having trouble-free WiFi with no cloud dependencies (context: I've used and supported UniFi in various roles since the first UAP came out, which I think was free for UWC attendees, though I could be confusing that with their first camera), but I am a network nerd that's comfortable with enterprise WiFi.
I'm a network nerd that would love enterprise wifi but that seems way out of my price range.
Edit: I got upvoted by somebody, but as a UI user I'm genuinely looking for an answer: is it still possible to get inside if devices aren't connected to UI's cloud?
1. They are now pushing ads to their local controllers. That is a shady tactic. It also means the controller is phoning home. It means they might have an XSS in that code now or in the future.
2. They just deprecated a bunch of relatively new hardware. If I’m going to invest a non-trivial amount into their hardware I want to know it’ll keep working for a long time.
3. They lost trust due to this breach. How can I trust their code to secure my local network if they can't secure their own?
This is the reason I went with Ubiquiti UniFi 6 years ago. It was the only one I tried that didn't constantly drop connections or cost a fortune. But it's only G and I've been considering an upgrade, but there are no good options on the market that don't have stupid cloud-management bullshit, aren't built on garbage hardware, and don't cost an arm and a leg.
Other than Ubiquiti, I assume you mean? Not that I know of. I want the old Ubiquiti back, where customers, not stock price and ad revenue, were the focus.
Both will run from locally hosted controllers if desired.
I've been seeing more Cisco "Meraki Go" kit around as well, which looks to target the same use cases as Ubiquiti (very very similar gear, WAPs, low end switches & gateways), albeit without a local controller option, but at least without the usual steep Meraki subscription charges.
I know someone that works there and they seem pretty happy with the place and product. Just saw the Amazon link now though, so that may be a detriment depending on your view of them. (I have never used their systems or anything so it's not really an endorsement, but something to consider.)
If you are willing to go to this price range, I think FortiAPs feeding back to a FortiGate FW is a rock-solid solution. But a FortiAP-431F is $616, and a base FG60F as controller is $535 + service if you need it. And although you probably won't need repair options, support/maintenance is a yearly fee on top of that.
Ubiquiti was definitely a unique company, offering many of the enterprise features at consumer pricing.
You probably want something like , which has PoE support and an optional Cloud connection. You can roll your own automation with (e.g.) SSH access since they are just Linux machines.
Enterprise solutions with your self-contained WLAN controller and APs (not including PoE switches) are typically pretty pricey (>$5k, can spend a lot more).
I get that people with larger networks would find centralized management useful, but I'm fine just managing a couple APs, a router, and a couple switches on their own. They're pretty much set-it-and-forget-it devices anyway.
Or something like this:
Again, don't expect it to be simple. Be prepared to learn.
I used the TP-Link forums to put local management in as a feature request. Perhaps if enough people make a noise?
Setting up a UDM first thing I did was add a local super admin account, then disable remote access. That way, if their cloud auth servers are down I'm not affected as I use the local admin account.
Can't see anything on their website for a transition plan in the event of shutdown (and of course, why would they post that and potentially signal lack of confidence in their longevity).
We had Ubiquiti, but power outages usually corrupt the controller and require constant resetting.
WAPs have been absolute crap for years.
All cloud based management stuff is optional and provides TP-Link’s own DDNS support and remote access only. You don’t have to use it.
I recall some features being locked behind a UBNT account, but that was only reporting-type stuff IIRC.
Alix makes a decent router board that can host Linux, and dual PCI cards means 5 and 2.4GHz APs. The total would be ~$200 for each "AP", but they would be pretty massively powerful.
It works with their small 16 port (8 PoE switch).
Check out Ruckus. I've found their 'unleashed' stuff quite nice (no affiliation, just a customer).
I'm so sorry. I'll go now.
It Just Works.
Apple style. Plug it in. Never fuck with it. Rock solid.
Case studies, focus groups, surveys and interviews are great ways to find the unknown unknowns. Of course, you need to pay people to participate in them, and then you need to pay expensive employees to conduct, collect and analyze the results.
It's often just cheaper to spy on customers, though, and pretend that there is no other possible way to conduct business.
No they're not, because the vast majority of people simply won't be bothered, and most people probably aren't as reliable as concrete data.
Telemetry that tells you which features are popular is useful but does need filtering to avoid identifying individual users. But sending back errors and crashes is what's really important.
You can do things like have feedback forms but typically users don't like sending that in because they feel like they're doing work for free.
It needs some getting used to, but performs well.
They have their cloud versions also, but they keep putting out non-cloud devices.
Isn't one of the major selling points of cloud-everything "How can you possibly secure your service better than BigRespectableCompany?" I know any time I bring up self-hosting E-mail or a web site or whatever, someone always comes out of the woodwork to remind me that I am not an expert in securing Internet services, and that BigRespectableCompanies have full-time employees dedicated to security. Surely I should be moving to the cloud for this expertise! This is sounding more and more like FUD to me.
Ubiquiti really aren't in the same ballpark as AWS or Microsoft, which are the companies people use that argument for, and you can bet your ass their security is better than in most places.
As your manager, how can I tell the difference between someone who actually did the work right, and someone who said they did the work right (and also legitimately believes that they did)?
I posit that it doesn't take burning a zero day, or a coordinated effort by the CIA, the FSB, and Randy Waterhouse to break the typical DIY self-hosted security implementation. (And that the manager paying someone to build it has no ability to tell between a great, a good and a bad DIY job.)
It's extremely difficult to lock down an AWS account when there are a bajillion services, IAM policies, roles, etc. I've been trying for the last few days and it's so difficult that I can understand things like this. I don't think it's acceptable, but I can see how it happens.
I think the expectation for AWS, Azure, GCP, etc. needs to change. Accounts should allow nothing by default and part of the tutorial / learning process should be understanding the permissions needed for each service and how to limit access to those services. As a bonus, they should show you how to configure Budget Actions to catch anomalies and runaway services. For example, I'm trying to set up my account so SMTP access to SES gets revoked for SMTP users if the message count exceeds a certain threshold. It's really, really hard because there's not a single document / guide that shows the process from start to finish.
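For reference, here's roughly the shape of what I've been piecing together (the threshold, SNS topic, and user name are all made up, and in practice the alarm would publish to SNS with a Lambda doing the revoking):

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    iam = boto3.client("iam")

    # Alarm when the account sends more than 1,000 SES messages in an hour.
    cloudwatch.put_metric_alarm(
        AlarmName="ses-send-spike",
        Namespace="AWS/SES",
        MetricName="Send",
        Statistic="Sum",
        Period=3600,
        EvaluationPeriods=1,
        Threshold=1000,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ses-alerts"],  # hypothetical topic
    )

    # A Lambda subscribed to that topic could deactivate the SMTP user's access keys,
    # which also invalidates the SMTP credentials derived from them.
    def revoke_smtp_access(user_name: str = "ses-smtp-user") -> None:
        for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
            iam.update_access_key(
                UserName=user_name,
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )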
While your concerns are 100% valid, we also need to remember that setting up access in restricted ways, and inviting users to understand the protections and remove only the right barriers (or implement the controls needed to work within them), always runs the risk that some users will find the protections cumbersome and instead defeat them in some (totally incorrect) way, or route around them entirely, mooting any effort to secure the platform.
And every time I hear this played out in conversation, the answer is "that's on them!" But it's clearly a balancing act, a trade-off; tautologically, when you make the service less accessible, then... it is, well... made less accessible.
Besides facilitating secure access, sales conversion rates also depend on that accessibility. The crux of your argument stands: the defaults are too open, and we need to do more to ensure that naive users aren't handed a loaded gun aimed at their own feet.
Especially once you couldn't just log in as root anymore in many distros.
The hard part for me is figuring out how to disable access without breaking everything. I know it'll be useful once I understand it, and I'll take the time I need to learn it, but most people won't.
I prefer the opposite learning direction. Start closed and open the 1 or 2 things I need instead of having to understand 1000 things immediately to configure permissions reasonably.
Can you explain how IAM doesn’t work well with the “starting closed” approach? IAM authorization is “default deny” and every principal needs an explicit allow statement with the appropriate action before authorization will pass.
> Can you explain how IAM doesn’t work well with the “starting closed” approach?
It works ok once you do a lot of learning and read the best practices. I think a lot of people will skip that and use their root account for everything.
The biggest mistake I made was creating an admin user, but giving it too many permissions and using it like a normal user.
After learning more, I use the root account to make an admin account, but I think the admin account should only use IAM to create other fine-grained users.
So it works fine, but I think it would be better to force people into creating those first couple of accounts with permissions chosen by experts. It's too easy to jump right in and start using an overprivileged account.
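Something like this minimal boto3 sketch is what I wish the first-run experience pushed people toward (the user, policy, and bucket names here are hypothetical): the admin account only manages IAM, and day-to-day users get narrow, explicit allows while everything else stays denied by default.

    import json
    import boto3

    iam = boto3.client("iam")

    # Run once with root credentials: an admin user that only manages IAM.
    iam.create_user(UserName="iam-admin")
    iam.attach_user_policy(
        UserName="iam-admin",
        PolicyArn="arn:aws:iam::aws:policy/IAMFullAccess",
    )

    # Day-to-day users get narrow, explicit allows.
    iam.create_user(UserName="app-deployer")
    iam.put_user_policy(
        UserName="app-deployer",
        PolicyName="deploy-only",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::my-deploy-bucket/*",  # hypothetical bucket
            }],
        }),
    )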
This is the same for any breach. At least if you're using AWS, you know that your management tools aren't lying to you (as long as you assume AWS itself isn't hacked) and you can use those tools to cleanup. If you run your own machines, you can't assume your management tools work correctly. All your machines could have rootkits, all your tools could contain backdoors, and every attempt to cleanup might just be a fake veneer. See Reflections on Trusting Trust.
Full disclosure: I work for a cloud computing company (but not AWS).
The state of security in the tech industry is miserable. The only companies we should trust not to leak our data are those that never collected it in the first place.
Maybe connecting everything to a network and making it a high value target by collecting everyone's data is just a terrible idea in the long run.
Do you have a source for this? I follow OpenBSD quite closely and this is news to me..
My concern (and the concern of many others, I think) is that if OpenBSD suddenly got enough attention from the wider security community, including people who actively look for holes that can be exploited, there'd be plenty of important stuff found. Until then, these issues sit quietly waiting for a malicious party to discover them. There's quite some fanfare for OpenBSD, but how many of you are actively auditing the code? I'm subscribed to cvs@ and tech@ and I read them daily and I just don't see much contribution at all from outsiders. And when I do see it, it's mostly stuff like fixing typos or amending man pages. All the commits that change code with security implications tend to come from the core developers, and are reviewed by a handful of people at best. And I have seen some obviously broken stuff slip through.
This seems like a structural advantage to less popular software. If your software is less common, attackers will have put less time into exploiting it, and therefore you will be more secure. My impression is that MacOS and Linux both benefited from this relative to Windows for a long time.
In general this should be true if usage grows faster than security resources for the popular system. It might still be true even with significant, commensurate investments in security as you grow, because if a small percentage of users misconfigure the software and create vulnerabilities, that population will hit a critical mass with growth regardless of your security efforts.
Is it really cost and complexity?
Or just missing awareness?
Or the lack of consequences when you get hacked in a way that could easily have been prevented (though then they might have attacked in a different way, tbh)?
Joseph Heller predicted 2FA in Catch-22 when he wrote:
"Almost overnight the Glorious Loyalty Oath Crusade was in full
flower, and Captain Black was enraptured to discover himself
spearheading it. He had really hit on something. All the enlisted men
and officers on combat duty had to sign a loyalty oath to get their map
cases from the intelligence tent, a second loyalty oath to receive their
flak suits and parachutes from the parachute tent, a third loyalty oath
for Lieutenant Balkington, the motor vehicle officer, to be allowed to
ride from the squadron to the airfield in one of the trucks.
Every time they turned around there was another loyalty oath to be signed. They
signed a loyalty oath to get their pay from the finance officer, to
obtain their PX supplies, to have their hair cut by the Italian barbers.
To Captain Black, every officer who supported his Glorious Loyalty
Oath Crusade was a competitor, and he planned and plotted twentyfour
hours a day to keep one step ahead. He would stand second to
none in his devotion to country. When other officers had followed his
urging and introduced loyalty oaths of their own, he went them one
better by making every son of a bitch who came to his intelligence
tent sign two loyalty oaths, then three, then four;"
Notice how 2FA turns into MFA? Keep adding FA until you're as secure as the security theater demands.
"To anyone who
questioned the effectiveness of the loyalty oaths, he replied that
people who really did owe allegiance to their country would be proud
to pledge it as often as he forced them to. The more 2factor logins
a person went through in a working day, the more secure he was;
to Captain Black it was as simple as that"
"Captain Piltchard and Captain Wren
were both too timid to raise any outcry against Captain Black, who
scrupulously enforced each day the doctrine of 'Continual
Reaffirmation' that he had originated, a doctrine designed to
trap all those men who had become insecure since the last time they
passed a 2factor authentication prompt a few minutes earlier."
You log in to a physical machine with a password (the machine is trusted on the network via AD, so physical access is one factor and the password is a second).
You visit websites and they use SPNEGO to land on Kerberos or NTLM auth, which bootstraps off the fact that you're already authenticated to Windows. You never even need to see a login page.
It's achievable with macOS and Linux, but AFAIK there's some more configuration to be done. The only place I saw with a setup like that was a bank, and it was part of a new technology stack that almost nothing used yet.
With that setup there's almost nothing to phish if you can train people to only enter their password into the OS at login. You can pretty much eliminate the possibility of credential sharing by locking logins to certain machines.
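As a rough client-side illustration (assuming a domain-joined machine that already has a Kerberos ticket from the OS login; the intranet URL is made up, and requests-kerberos is just one library that speaks SPNEGO):

    import requests
    from requests_kerberos import HTTPKerberosAuth, OPTIONAL

    session = requests.Session()
    session.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)

    # The server answers 401 with "WWW-Authenticate: Negotiate"; the library responds
    # with a ticket from the existing login session, so no password is ever typed.
    resp = session.get("https://intranet.example.corp/dashboard")
    print(resp.status_code)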
Not to mention legacy code that only knows about access key ID and secret, and doesn’t have a place to even put a token.
Because it's a giant PITA unless you have a dedicated team managing it. And the service companies get this and charge accordingly (aka enterprise levels).
It's why companies like Auth0 get bought for gigabucks.
The attacker had access to the whole database, which meant he could alter the 2FA seed, so it wouldn't have mattered much.
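To illustrate the point: anyone holding the per-user seed can compute exactly the codes the user's phone does, so owning the database is owning the second factor. A small sketch with pyotp (the seed is a made-up example):

    import pyotp

    seed = "JBSWY3DPEHPK3PXP"        # the per-user TOTP secret stored server-side
    totp = pyotp.TOTP(seed)

    code = totp.now()                 # what an attacker with the seed can compute...
    print(code, totp.verify(code))    # ...and it verifies just like the user's own code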
So with 2FA they would have had a much harder time gaining access to the database in the first place.
Changing the seed only matters for customers of the hacked company, but is (as far as I can tell) unrelated to how they gained access.
Once I saw it required cloud login, I got scared.
After I saw a Ubiquiti SSH key preinstalled in a device with unfettered internet access, I shut it down, never to bring it up again.
1) You don't need to turn on cloud access
2) My UDM Pro doesn't have SSH open to the world, so not sure how that would be useful externally
About 2... I guess once you have access to all their source and infra, it's just a matter of pushing an update to enable SSH, and they don't even need to push a key. My problem with the keys is that they come bundled with the device and you don't know it. There's no reason for them to install a key in there without your consent. Imagine Microsoft presetting an Administrator account on every Windows Server without telling anyone... It's just a security problem, even more so in a firewall.
Sure it isn't. It's an extremely bad idea, and something like the Ubiquiti breach doesn't even seem strange to me; once you've worked in the "enterprise(tm)" world, nothing about this is surprising.
There is just no way I'd buy a router that communicates with third-party servers, and letting one access the LAN is a complete no-go (even though I pay for an ISP router as part of the package, it runs as a bridge just to pass the connection to my own router).
I consider the router the first line of defense for inbound traffic and the last line of defense for outbound, and there is just no way I'd trust some fishy corporation with that.
And if a corporation actively promotes cloud access, like Ubiquiti or Google, it's pretty much banned from my shopping list for all time.
“Help yourself to a free year of identity theft insurance” and all that jazz.
My colleagues and I always pushed for more secure setups and configs, but the common rebuttal was "no need, there's a Keycloak running several layers above, and you need to use a VPN and have AWS access first; go implement features instead."
I hope for their sake that no rogue employee decides to play around a bit, and that no one stores their credentials in some cloud LastPass account with a '123456qwerty' master password.
If you discover it, you have to report it. If you don't, odds are nobody will notice, or they'll blame someone else.
Yes, if they destroy all of their backups, all of their hardware and every one of their current AWS accounts. Then start entirely from scratch. Any measure falling short of that (and let's be reasonable, it definitely will) means that they're entirely untrustworthy from now on.
Of course having your home network controlled from the cloud should already have been entirely untrustworthy, so in practice it won't be an issue for their sales.
Uh. AWS? GCLOUD?
Those have network control planes, maybe not for physical networks, but a control plane nevertheless.
My TL-R605 router, OC300 controller, HD660s, and 8-port 2.5-gigabit switch are going strong, and I put the whole network together for under $1000.