> Next, we keep the DERP network costs under control… by trying to never use it. When using Tailscale, almost all of your traffic goes peer to peer, so DERP is only used as a backup. We continue to improve our core product so it can build point-to-point links in ever-more-obscure situations.
This is mentioned in passing, but it shows a very good technique: incentivizing technical excellence by tying it to a concrete cost. The free plan, with DERP, is the sacred cow that must never be removed. If they don't fix the "ever-more-obscure situations", the cost goes up. If they pay an engineer to investigate and fix them, not only does the engineer get to do interesting technical work, and not only does the system become more reliable and "good", they can also think of it as increasing the profit margin of the product (by not increasing costs).
I worked with Avery on Google Fiber, and we did the same thing. Our sacred cow was excellent US-based phone support, which is quite expensive. If there were bugs in our product, users would call in, and our call center costs would increase because we'd need more people working. So every week in our team meeting, we would look at summaries of calls and take on engineering work to address the most common classes of problems. That let us scale up the business and still provide friendly and competent phone support, because we were reducing the problems that people called in about. (These were things like having our Wi-Fi access points steer 5GHz-capable devices away from flakier 2.4GHz signals, or fixing "black screen" bugs where the TV randomly stopped playing for software or network reasons.) Because we had that "sacred cow", every obscure bug that we spent months fixing not only made the product better and was intellectually stimulating to finally figure out, but had a concrete impact on how costly it was to deliver the service.
What most companies would do here to reduce costs is simple. Don't fix DERP bugs; just charge for it. Don't fix "black screen" bugs; just hide the phone number on your website so people can't figure out how to call.
Avery has found the perfect balance between cost reduction, interesting engineering, and the somewhat nebulous "good product". Normally conflicting concerns, all living together in harmony. If everyone copied his technique here, the world would be a better place.
I would add that software infrastructure can run incredibly fast and scale incredibly well on modern hardware if you're a bit careful about resource usage.
Traditional relational DBs like PostgreSQL are very scalable on modern hardware. If you take the time to craft a normalized schema with low redundancy, being careful to keep data small, you can achieve performance, resource efficiency, and cache efficiency hundreds of times better than bloated, distributed, document-oriented NoSQL systems. You also get better transactional integrity, reduced reliance on queues (use load balancers instead), and more immediate, more atomic error propagation and edge-case resolution. You can really build something lean and mean, to a point that would be physically impossible in distributed systems or systems that involve multiple hops over potentially congested networks (as Admiral Grace Hopper liked to remind us, distance matters in computing: https://www.youtube.com/watch?v=9eyFDBPk4Yw). As far as I know, normalized relational databases are still the best at using multiple levels of physically near caches to their full potential. Most applications don't need to scale horizontally and can easily fit on a single server (which can scale to dozens of cores and terabytes of RAM).
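To make that concrete, here's a small hypothetical sketch of the kind of schema discipline I mean: a 2-byte surrogate key instead of a repeated string keeps rows narrow, so far more of the working set stays cache-resident.

```sql
-- Hypothetical, minimal example of a compact normalized schema.
CREATE TABLE countries (
  id   smallint PRIMARY KEY,      -- 2 bytes instead of a repeated name
  name text NOT NULL UNIQUE
);

CREATE TABLE users (
  id         int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  country_id smallint NOT NULL REFERENCES countries (id),
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Narrow rows mean smaller indexes, which is what keeps hot lookups
-- in the CPU caches instead of going out to RAM or disk.
CREATE INDEX ON users (country_id);
```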
I hear arguments that it's the engineers who are expensive, so it's OK to be wasteful with hardware if it saves engineering time. But in my experience, the kind of engineers who are good at optimization are often good at engineering in general, and the ones who forget to think about efficiency are sloppy with other things too. That doesn't mean you have to write your code in C: very fast code can be written in high-level languages with the proper skills (see projects like Fastify).
> Very fast code can be written in high level languages with the proper skills
I sometimes wonder why we don't have mandatory coursework that demonstrates the upper bound of what a modern x86 system is capable of in practical terms.
If developers understood just how much perf they were leaving on the table, they would likely self-correct out of shame. Latency is the ultimate devil, and we need to start burning that into the brains of developers. The moment you shard a business system across more than 1 computer, you enter into hell. We should be trying to avoid this fate, not embrace it.
We basically solved the question "what's the fastest way to synchronize work between threads?" around 2010 with the advent of the LMAX Disruptor. But for whatever reason, this work has been relegated to the dark towers of fintech wizardry rather than being perma-stickied on HN. Systems written in C# 10 on .NET 6 that leverage a port of this library can produce performance figures that are virtually impossible to meet in any other setting (at least in a safe/stable way).
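For the curious, the core trick is small enough to sketch. This isn't the Disruptor itself (which is Java, with C# ports, plus batching, multi-consumer barriers, and cache-line padding that this sketch omits); it's just a minimal single-producer/single-consumer ring buffer in Go showing the lock-free sequence-counter idea:

```go
// Minimal SPSC sketch of the Disruptor idea: a preallocated ring buffer
// coordinated by two monotonic sequence counters, no locks.
package main

import (
	"fmt"
	"sync/atomic"
)

const size = 1024 // power of two, so index = seq & (size-1)

type ring struct {
	buf      [size]int64
	writeSeq atomic.Int64 // next slot the producer will fill
	readSeq  atomic.Int64 // next slot the consumer will read
}

func (r *ring) publish(v int64) {
	seq := r.writeSeq.Load()
	for seq-r.readSeq.Load() >= size {
		// buffer full: spin until the consumer catches up
	}
	r.buf[seq&(size-1)] = v
	r.writeSeq.Store(seq + 1) // makes the slot visible to the consumer
}

func (r *ring) consume() int64 {
	seq := r.readSeq.Load()
	for r.writeSeq.Load() <= seq {
		// buffer empty: spin until the producer publishes
	}
	v := r.buf[seq&(size-1)]
	r.readSeq.Store(seq + 1)
	return v
}

func main() {
	var r ring
	go func() {
		for i := int64(1); i <= 3; i++ {
			r.publish(i * 100)
		}
	}()
	for i := 0; i < 3; i++ {
		fmt.Println(r.consume()) // 100, 200, 300
	}
}
```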
This stuff is not inaccessible at all. It is just unpopular. Which is a huge shame. There is so much damage (in a good way) developers could do with these tools and ideologies if they could find some faith in them.
> ...we would look at summaries of calls, and take on engineering work to address the most common class of problems.
I learned about engineering away support costs from Robert McNeel & Assoc (RMA). They used to sell AutoCAD. The value-added reseller racket is to slough off support costs onto the dealer channel, which Autodesk did shamelessly. (Probably learned from auto mfg.)
RMA thought about revenue in terms of ROI and velocity. Instead of increasing margins, RMA reduced transaction costs.
So RMA made AutoCAD add-ons, nominally low-priced but actually given away, to reduce support costs. Stuff like plot drivers that actually "just worked". Over time, customers learned that the TCO of working with RMA was lower, and the experience hassle-free.
--
I applied this strategy as an engineering manager. I had inherited some products (necromancy) with a lot of technical debt. There was intense pressure to add features and restart the software upgrade revenue (long before subscriptions).
I insisted on also chewing through our hottest support costs. Burned A LOT of political capital and goodwill. (One of my rationales was that our small team simply didn't have the resources to manage the high support burden.)
The intangible, unquantifiable benefit was that velocity improved for every one of our products. Each release got easier and proved more rewarding.
It was like printing money. Our customers sold our products for us (word of mouth). So we saved even more money (less need for marketing).
We somehow created a virtuous cycle.
--
Back then, I had guessed that tech debt piles up because of suboptimal bookkeeping. The costs of poor quality were externalized. Meaning lazy, aloof devs cutting corners increased costs for QA and tech support. And, worse, negatively impacted sales and marketing.
Today, I'm not so sure. Unifying dev and support (DevOps) hasn't improved quality. Maybe poor quality is something like a natural law: Sturgeon's Law applied everywhere. The only big examples of aggressive continuous improvement I'm aware of are Musk (Tesla & SpaceX) and Haier. And probably Apple too.
But it seems they attain their results by whipping their employees. Whereas back in the day, we improved quality by doing less work (taking the time to save time).
FWIW, Sandy Munro of Munro & Assoc remains an outspoken proponent of 90s era concepts like quality, continuous improvement, "muda" (remove waste), etc.
If anyone has other links or resources for 3rd-millennium-era quality obsessives, please share.
Or just bribe lawmakers at every level of government so you can keep increasing prices while worsening your product, using the funds from your captive audience to preemptively destroy every upstart challenger.
There is a reason Comcast is everywhere and Google Fiber is nowhere...
They wanted to know what would happen if people had super-fast Internet. All the big players made a show of getting on board. Experiment successful, and they got out... Actually operating as an Internet provider wasn't something they wanted as part of their core business; they have enough anti-monopoly mumbling associated with their name as it is.
As for the experiment... I guess the answer was "not much"? More high-definition streaming, a few companies attempting game streaming. It certainly hasn't unleashed a wave of innovation, but it is also still early days, I suppose.
IIRC, they bought a company that uses wireless APs to deliver the signal within municipalities/cities. The bureaucracy and cost of physical fiber were too prohibitive. At least that was my understanding of the situation last I checked.
Webpass, I believe. How do I know? I’m using it right now!
A good internet provider, though I live in a snowy city and the signal can deteriorate a bit when you get heavy snowfall. Still usable, but slow in those cases. The symmetrical gigabit is damn nice 99% of the time.
That's the weird thing with Google, isn't it?
They have the money to do so many things. But they also have a reputation for shutting down so many things a few years after launch.
It's now at the point where you can't commit to a new Google thing and would rather use a competitor instead, because you know the competitor will likely outlast Google's corporate attention span...
They had a hard time with doing whatever was necessary to get their cabling on poles and finding sites for equipment.
They also found that when you light a fire under the butts of incumbent telcos, they can deploy modern infrastructure really quickly. And the telcos already know how to get lines on poles and how to site equipment boxes.
They are alive, and they don't use opt-in for their marketing emails. I know that because my old Gmail account has been getting those mails for over a year already. They keep trying to sell me fiber, but unless they also sell me an address in the US, it’s not very useful.
(Yes, I could unsubscribe. But I prefer to report every opt-out mail like that as spam, and also to look at what kinds of opt-out spam I get. Some are even more interesting: unsubscribing requires logging into an account I don’t have access to.)
I think the revelation is that if you offer expensive support (a real person to call, on-time pizza or it's free, next-day onsite repair, etc.), improving the product by fixing bugs becomes easy to justify, because it lowers your support costs.
Of course, you could also lower support costs by having a mostly ignored user support forum.
For me it’s the opposite: I actually don’t mind paying for a great product such as Tailscale (which I really like), but have security and privacy concerns!
Mesh VPNs have substantial control over the networks they manage (they bypass firewalls by having users install agents from within). They could add hidden nodes to networks, which is a major security concern, and see who is talking to whom, for how long, what services they are running, etc., which is a privacy concern. That makes them attractive targets.
Is there a way to address these concerns and make them “really” (not just on the website) zero trust, or at least minimal trust? Would WireGuard preshared keys as an option help (a maliciously added public key would lack the secret key exchanged among peers out of band)?
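(For reference, a preshared key is one extra line in a standard wg-quick peer stanza; the key comes from `wg genpsk` and is exchanged out of band. Key material elided here:)

```ini
[Peer]
PublicKey = <peer's public key>
PresharedKey = <output of `wg genpsk`, shared out of band>
AllowedIPs = 10.0.0.2/32
```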
What are the implications of the substantial control that Tailscale has?
Or do we have no way out but to trust someone? Looking at events of the past decade, I don’t have a good feeling about this!
They're the same as the implications for using something like Okta as your source of truth for authentication, and Okta is ubiquitous in large enterprises.
It's not that it isn't a concern; it's something you can think about and work out how to mitigate. But the benefits to their product of Tailscale hosting the control plane are going to outweigh the objections.
Agreed, one way to help mitigate this is to establish Layer 7 security controls, rather than implicitly trust the network. Tailscale shouldn't be the sole security control in any environment.
I pretty much agree. Tailscale makes this pretty easy: you get role-based default-deny port-granular ACLs, so it was easy for us to establish a regime where we're only exposing HTTP-type services, on specific machines rather than whole swathes of address space. We then require SSO logins on those services (which in turn enforce things like 2FA).
Just getting access to our Tailscale networks doesn't get you anything; having your account in a group with access to an application gets you the right to attempt an SSO login to it and nothing else.
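A trimmed-down policy in Tailscale's ACL format looks roughly like this (illustrative group and tag names; anything not matched by an accept rule is simply unreachable):

```
{
  "groups": {
    "group:eng": ["alice@example.com", "bob@example.com"]
  },
  "acls": [
    // Only group:eng may reach HTTPS on machines tagged "webapp";
    // everything else is denied by default.
    {"action": "accept", "src": ["group:eng"], "dst": ["tag:webapp:443"]}
  ]
}
```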
Yes, this is a real concern. No matter how good the Tailscale folks are, their control-plane services become a super-attractive target for attackers (SolarWinds-style attack). Tailscale could provide a "GitHub Enterprise"-style on-prem deployable control plane running on an enterprise-controlled domain, with its own BYOK infra. That would go a long way toward addressing the concern.
Even with an on-prem control plane, you probably want logging set up to detect when unusual nodes get pushed to the list of nodes accessible to your clients.
There is also Cloudflare Zero Trust (Teams), which is free for 50 users and accomplishes the same thing (WireGuard = Tunnels), with a lot more years of "trust" and security behind it.
However, it's very cumbersome to set up, nowhere near as easy as Tailscale.
Cloudflare Zero Trust here: we're releasing significant improvements to the Tunnels setup in a few days that address this exact kind of feedback.
We believe that our security, performance and integration with the rest of Cloudflare are already quite awesome.
We're going to raise our usability by quite a notch with the news coming up. Stay tuned with blog.cloudflare.com
Haven't used it, but I believe lighthouses are primarily for host discovery (DNS) + hole punching. I think if you configure static hosts on all nodes you're good:
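Something like this in each node's config, if I remember the Nebula config format right (double-check against the example config in their repo):

```yaml
# Map each peer's Nebula IP to a reachable underlay address, so nodes
# can find each other without asking a lighthouse.
static_host_map:
  "192.168.100.1": ["peer1.example.com:4242"]

lighthouse:
  am_lighthouse: false
  hosts: []   # with every peer statically mapped, no lighthouse needed
```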
Yes, Nebula is amazing; I'm using it everywhere!
I made a REST API to manage Nebula lighthouses, multiple networks, users, and certs. All packaged as a Docker image. And open source, of course: https://github.com/elestio/nebula-rest-api
You could run your own encryption on top of Tailscale; for web properties, you can use Tailscale's HTTPS[0] via an ACME client (so Tailscale doesn't see your HTTPS private keys), or SSH, which is inherently encrypted and verified via host identification. For anything else I don't think you can manage it much; you've always had to trust your network operator for unencrypted/unverified traffic.
The concern is not encryption. Wireguard encrypts the traffic, and users could indeed verify this fact before traffic leaves their machines.
The concern is that if an attacker (such as a government) compromises Tailscale, or if Tailscale itself wants to, they could probe your applications. It would be like having your SSH exposed to the internet.
These products bypass firewalls, which is a good thing if they are secure, and a terrible thing if they are not.
There have been cases where the coordination servers have been (sometimes silently) compromised; see stories about encrypted phones. Users thought they were secure.
And unfortunately small companies may not have sufficient resources to secure their infrastructure against more resourceful adversaries.
That’s why it’s better to pay, so that the startups have funds to improve the product.
I think by "adding encryption", they mean using mTLS internally. Your application can request that the client authenticate the connection by presenting a certificate, your application then applies whatever validation it wants before allowing that session to do anything. If someone were to compromise Tailscale, they can open a TCP connection to your application, but your application will then reject the connection because it doesn't trust the certificate. That's "zero trust" as I understand it.
This is the direction I'd like to see networking go in general. Everything can have a public IP, but applications won't talk to anything that's unauthenticated. No more VPCs, VPNs, "kubectl port-forward", jumpboxes, etc. In practice, this is a colossal pain that nobody really knows how to do right. It requires rewriting all existing software, a secure way of issuing certificates (ideally not controlled by the cloud provider that runs your applications), and it can very easily fail open.
(I do mTLS for my personal projects, but my cloud provider can easily issue themselves a trusted cert and use that to poke around if they really wanted to. They own the machines that my CA runs on, so they are the root of trust. At some point, what you end up with is something that feels correct, but is in practice the same thing as just trusting Tailscale. The first 99% of security is making sure some rando on the Internet can't download your HR database and secret plans for world domination. The remaining 99% of security is making sure the NSA can't do that. Maybe you're OK with the NSA mucking about with your internal network, and in that case, you can save yourself a lot of trouble.)
Isn't the main alternative also an ACL, just in the form of a coarser-grained firewall? The idea of these networks, AIUI, is that some of the existing infra, such as firewalls and even application-level encryption, is replaced by something that is subjectively easier to administer and monitor. Not saying it is better, just that it's different. And if it's different, then it makes sense that the attack surface is different too.
Surely ACLs are controlled by the central authority (Tailscale), and not set on each individual device outside of the central authority's control. If so, then the whole ACL argument is moot because the threat model under consideration is that tailscale is compromised and attackers can modify the control plane.
You have to worry about attackers modifying the control plane regardless of whether it's under your control or Tailscale's. You do need to ship logs of changes to the set of nodes allowed to connect into your SIEM. That should already be happening, because the clients shove their (extremely verbose) logs into the appropriate places (Event Log on Windows, journalctl on Linux).
Obviously you have to secure your control plane. The question is who is securing it. I would rather be segregated from other users so I'm not swept up in a breach in tailscale that can compromise every user at once. It's a big single point of failure.
My bigger issue is them adding hidden nodes that can potentially access my services. If I use Tailscale to provide (otherwise unauthenticated, since I've already authenticated to Tailscale) access to, say, a file server, a hidden node can just see all my files.
Isn't this where the ideas of zero trust networking come into play?
It doesn't matter that you've authenticated to the network; you still need to authenticate to the application. SSO and the like become increasingly important in this kind of world, mind.
BastionZero is a similar product which gives you multiple roots of trust, one to BastionZero, and one to your identity provider (Google, Okta, etc.) https://www.bastionzero.com/security-model
I'm not worried too much about the client software because it's open source and can be built from source, at least on the desktop.
I mitigate the potential risk of a compromised control plane by using secure protocols on top of Tailscale. Namely SSH and HTTPS with a custom CA and proper authentication wherever possible.
OpenZiti and NetFoundry address this by enabling you to close all your inbound firewall ports (and link listeners), such that even your OpenZiti (open source) or NetFoundry (SaaS) fabric routers can't initiate sessions into your network.
Tailscale reminds me of Fly.io; fantastic tech that "just works", run by people who know what they're doing and know how to write about it. What other companies belong in this all-too-exclusive cohort?
IMHO:
Vercel/Next.js and their wonderful changelogs and documentation. Probably the "coolest" tech company in my book.
Cloudflare, who almost single-handedly pushes the CDN industry ahead. So much respect for what they do and how they explain it in easy to understand terms.
IntelliJ family of IDEs and their extensive release notes and forum discussions; it can be a bit overwhelming and disorganized at times though
My personal favorite headless CMS: DatoCMS, small company but highly involved devs and iterating very quickly
Google USED to be really good at this long ago, but since Alphabet, they've become less and less transparent and more and more evil
Airtable, for bridging that gap between Excel and a proper database, with a heavy focus on UX and great release notes
Similar to Airtable, Coda is another nice example. Their engineering team has put out some amazing work, and their release notes (via email) are typically very well done, with concise gifs of the functionality. It's so easy to skip those bells and whistles.
I will say to Google's benefit, the Data Studio team is building a pretty nice product, but there are some crazy gaps and the custom connector docs leave a lot to be desired.
It's interesting that unlike most of the other loved companies, this one has no free tier. How did people come to take a chance on them without being able to demo their product?
I found fly.io was slow and hung when trying to deploy. I might try it again in a year. Vercel definitely just works and has been a wonderful experience using NextJS for a side project
As a long-time Next.js user, I'm deeply disappointed by the way Vercel (the company, fka Zeit) has so tightly coupled vercel (the platform / hosting service, fka 'Now') with Next.js (the framework). I absolutely loved Zeit (the company, before it was rebranded Vercel), with its amazing DX-first ecosystem. But now, to the detriment of it all, only those specific patterns of Next.js usage that align perfectly with the vercel hosting platform's opinionated approach are treated as first-class citizens. Vercel the company makes its money from vercel the hosting platform, and it employs all the Next.js maintainers. As a result, simple economic incentives have shaped the framework, making it much more rigid and limited than it'd otherwise be.
Examples include:
- proper support for "12-Factor App"-style immutable build artifacts, deployable to multiple envs (eg DEV, QA, STAGE, PERF, UAT, PROD). The way .env files are handled makes this incredibly cumbersome. Vercel only supports (local development) + (preview deploys) + (production), so for anything outside that tri-fold paradigm you're on your own.
- removal of support for custom servers
- reliance on unofficial 3rd-party serverless framework to support deployment to Lambda@Edge, and problematic or missing documentation for any non-Vercel serverless deploy target
I still think Next.js per se is pretty great*, but in my direct experience working with consulting clients to leverage it in their non-Vercel, multi-environment infrastructure, the pain points are significant and caused largely by the coupling I described.
If you embrace vercel as the hosting platform, with its specific conventions and limitations, and can accept severe vendor lock-in, then sure, it's going to be a pretty smooth ride.
*For my part, when it comes to picking a React-based framework that provides SSR / BFF, Remix.run is looking better and better all the time.
Had a heart skip for a moment from the headline, it reads exactly like the kind of corporate "Here's how [thing we're taking away] is actually a good thing for you" line we hear so often. Pleasantly surprised this is actually just a deep dive into how they keep their costs low and free tier still tenable.
I like how they call out that you can do this with a SaaS offering as long as you keep a handle on scaling costs with the hybrid architecture. Their system architecture enables their business model!
I see the same thing with $CURJOB, which has a downloadable, self-hostable fully featured solution. The operational dynamics are different (it is harder to convince folks to run software themselves than to sign up for a SaaS, all other things being equal) but the overall dynamic is the same: offer a spectacular free product to allow for scaling customer discovery and word of mouth, then charge for things that people with money care about:
> At each level, the value proposition is different, so that users use your tech differently and benefit differently from it. And at each level, the buyer is different, so the messaging is different.
This is market segmentation 101, but it's nice to read about it from an infrastructure company perspective.
One thing they didn't mention which I would in their shoes is how powerful $0 is in terms of letting folks kick tires and self-select their solution. (Or not select it, which is fine too.) Especially for dev focused products, a $0.01/month charge is such a barrier compared to a free solution.
> One thing they didn't mention which I would in their shoes is how powerful $0 is in terms of letting folks kick tires and self-select their solution. (Or not select it, which is fine too.) Especially for dev focused products, a $0.01/month charge is such a barrier compared to a free solution.
I was just thinking about this, I tried a hosted solution with $25 in free credits the other day and liked it, so we're now using it. It's not that we needed the $25, but if I had to talk to finance and get authorization first, we would never have gone with it.
Same with educational material, especially if it's useful beyond the service providing it. (See Digital Ocean's playbook, including their purchase of CSS Tricks.)
> It's not that we needed the $25, but if I had to talk to finance and get authorization first, we would never have gone with it.
This is it, this is the truth!
Every single developer tooling company should have this tattooed on their collective forehead. Or something like that :) .
>Especially for dev focused products, a $0.01/month charge is such a barrier compared to a free solution.
Sometimes it can be a useful filter. There are a lot of time wasters at the bottom end of the market. There could also be people who are willing to pay but the free tier is good enough for their requirements.
Sure, that's the balance you have to strike. How much do you want to filter folks vs how much do you want to remove friction for using your product.
Since the marginal cost of delivery of software is essentially zero, it turns into a customer service and marketing question.
> There could also be people who are willing to pay but the free tier is good enough for their requirements.
Sure. Again, will you turn off more folks and lower growth by charging? Or is it better to let some folks use your software for free and not capture all of the consumer surplus as a company?
I think that market segmentation is different from personas. Personas are who is going to use, buy, or otherwise interact with your product.
Market segmentation is which features and offerings apply to which buyers and what can you charge them. I don't have a great reference, though. So here's a grab bag:
I like the folks at Reifyworks for all their developer marketing writing: https://www.reifyworks.com/writing. But they don't cover segmentation directly, as far as I can find.
A good example of market segmentation is something like this:
- People living on retirement income generally have less spending money than mid-career professionals.
- Companies recognize this so they offer "Senior discounts" for those over 65 years old.
- Older people, who have less money, save money by presenting an ID proving their age.
- Boom, you can now choose a different (profit maximizing) price for the groups of older and younger consumers.
Basically, if you can check membership in a group or get people to self select into different groups (based on features, pricing, convenience, etc), you can charge different prices to each group. This is a great way to increase profit.
Hope that helps with the basic idea. Source: I am getting a PhD in economics.
Thanks for that! I hadn't heard, before, that anyone had successfully used the "Birthday Paradox" on port numbering to successfully defeat NATs. Glad to see that Tailscale has successfully implemented and tested this -- not just at one layer, but at multiple layers of NATs!
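For intuition, here's my own back-of-envelope version of that math (assuming each side picks k ports uniformly at random out of 65536, which simplifies what Tailscale actually does):

```go
// Birthday-paradox odds for NAT port guessing: if each side picks k
// ports at random out of 65536, P(at least one collision) is roughly
// 1 - e^(-k*k/65536).
package main

import (
	"fmt"
	"math"
)

func main() {
	for _, k := range []float64{64, 128, 256} {
		p := 1 - math.Exp(-k*k/65536)
		fmt.Printf("k=%3.0f ports per side -> P(hit) ≈ %4.1f%%\n", k, 100*p)
	}
	// Output: roughly 6%, 22%, and 63% -- far better than the
	// 1-in-65536 odds naive guessing would suggest.
}
```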
Do a significant % of users realistically read stuff like this? We've had exactly the same ongoing problem for over a decade, and walls of text, FAQs, etc. just don't allay concerns. People are just too cynical to believe it, maybe rightly.
We found the best solution was to claim that pricing kicked in at certain usage tiers, whereas everything is actually free all the time.
Well, now they've got a well-written explanation they can link to any time someone asks "so.. how is this free?"
Plus, this is a great blog post on its own merits. Someone like me (who has never used Tailscale) might find this interesting just as an explanation of SaaS economics. That might lead to me actually using Tailscale, or applying for a job there, or whatever.
Even if 1% of users are satisfied by this post... that's a lot of people!
And if it gains them even one enterprise client, I'm sure that's a massive ROI.
That's funny. Are you saying you had a website or product doc somewhere that said "When you get to 20k connections, you'll be charged $X/month" and then you never bothered implementing payment logic?
Yeah, it’s the Google Drive integration on draw.io/diagrams.net. Big companies want some cozy feeling, so we tell them over 25 users per org is paid-for.
We don’t measure it at all, but there’s plenty of companies we know are over 1k users.
Thank you for providing such a great service! Especially love the fact that exporting to PNG allows embedding the diagram data into the image, so it can be still edited later. Genius idea and implementation!
Could anyone ELI5 to me why I might use tailscale? If I don't have a use case for a VPN is there any use case for this product, or if I did want a VPN, why this and not some other service like Nord?
Asking from a place of curiosity, I don't quite understand this company. I suspect it solves a lot of issues related to provisioning your own networks ... Which would explain why I don't quite get it because I've never done that.
I have some services running on my home network (e.g Kubernetes and some stuff on a Raspberry Pi) that I'd like access when I'm away from home. Tailscale made that really easy. I just setup their client on the devices that need to communicate, and that's it. I can access those devices on my home network from my Macbook when I'm out and about. What's really neat is I can even set my Raspberry Pi as a DNS server for devices in my Tailscale mesh (using their DNS features) and use Pi-hole to setup custom DNS rules for those devices. Wrote a short piece about it here: https://evanshortiss.com/crc-tailscale
Many good answers here. I'd say Tailscale is useful if you want an "actual" VPN - that is, a virtual network which is private. You can have a VM or rented server somewhere and use it for backups or a media library - and you can connect to it securely over the Tailscale VPN over 4/5G, or with your laptop on wifi, etc. Maybe a backup box at a friend's place, or maybe that server runs an instance of GitLab and/or a personal wiki, etc.
Tailscale unfortunately still uses userspace WireGuard - other than that, I think it's a strict "value-add" on top of manually configuring WireGuard.
The option to route traffic through an exit node can also be useful (making your phone behave as if it connects through your corporate VPN, with the fixed IP address that your clients whitelist for access, for example) - but it's not the primary use case.
While "VPN Services" like Nord are really anonymizing internet access providers routers - and while technically using VPN-technology, they're not really enabling Virtual Private Networks in a meaningful way - it's just your computer, and their exit-node - a tiny network of two - and they don't have anything to talk about - the exit node doesn't run a web site or a service - it's just a router.
I've wondered this as well. Everyone seems to rave about it, but I run my own wireguard and don't find it too hard to add devices to the network. I think maybe you can use it to expose certain things to the internet easily? I don't have a lot of trouble doing that either. I've scrolled around their marketing site for a few minutes before and I just don't really get what all the fuss is about. I'm sure I'm missing something.
I will say, and I think this is right, the proposition here isn't a VPN like Nord which you'd use to hide your traffic from your ISP or masquerade into a different geolocation, but rather a VPN for connecting to your own devices.
I can see that it's easier to setup for someone who doesn't know how to use WireGuard, but not how it would benefit me personally. I guess SSO is nice.
I think it's more like... "What's the value in Dropbox when I'm already running Nextcloud?"
It's very easy to run your own WireGuard, and if that's all you want, by all means, do that. A lot of work went into making WireGuard the easiest-to-configure VPN --- it's deceptively sophisticated (the best kind of sophisticated).
Tailscale is also deceptively powerful, and that's why people love it. In particular: getting WireGuard deployed across a whole team with a single source of authentication truth and role-based default-deny ACLs is not, in fact, very easy to do. The massively more common pattern in tech companies with access VPNs is something like OpenVPN, with separately-managed credential stores (that get desynced and lock people out --- or accidentally retain access for separated team members) and default-allow network policy that gives anyone with access to the VPN direct access to Redis, databases, staging instances, and stuff like that.
I don't just like Tailscale. I fucking hate Tailscale for how simple they've made one of the larger problems in corpsec. It's maddening.
That's true, the ACLs are pretty huge. I've heard about Tailscale almost exclusively in the context of /r/selfhosted, and this post is about the free plan, which guided my response. It's not hard to see why this would be useful at my job. Honestly, I wish they'd pay for it; OpenVPN is such a pain for my users.
If your ISP does CGNAT a typical WireGuard setup won’t work without a public IP address. Tailscale makes it possible to use a VPN without a public IP. I use Tailscale with Starlink which uses CGNAT.
Tailscale has three main pieces of functionality over vanilla Wireguard: Automatic peer configuration, NAT holepunching, and network ACLs.
I won't talk much about ACLs since if you're the only user on your VPN, they don't matter. E.g. I use Tailscale but I don't use ACLs because who am I going to block from connecting to what? Am I concerned about my server trying to compromise my Raspberry Pi? (Maybe I should be, but life's too short so I don't bother.)
Automatic peer configuration is a pretty killer feature, though. If you're just running plain vanilla Wireguard, then you have to manually copy keys between every pair of devices that need to be able to talk to each other. That's fine if you only have a few devices, or if you have a large number of devices but you're happy to use a hub-and-spoke model where each "client" only talks to the hub, and the hub routes all traffic. But once your number of devices starts to grow, or you decide you want direct links instead of hub-and-spoke, it can start to get unpleasant.
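(To illustrate the manual dance with the standard wg tooling:)

```sh
# Run on each device: create that device's keypair.
wg genkey | tee privatekey | wg pubkey > publickey

# Then, for every pair of devices that must talk directly, each side's
# config needs a [Peer] stanza with the *other* side's public key.
# A full mesh of n devices means n*(n-1)/2 of these exchanges.
```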
NAT holepunching may seem unnecessary if you're used to having a VPN hub and just port-forwarding to it. But it opens up a whole set of possibilities that would just be non-starters without it. Just off the top of my head, here are some things that I would consider easy with Tailscale but cumbersome-to-impossible without:
1. Not having to worry about static IP assignments on my LAN. Admittedly, this is more of a convenience than a true barrier to anything, but with vanilla WireGuard one of the devices needs to be able to initiate the connection, meaning the other has to be able to receive unsolicited traffic on some port. Normally I'd do that with port forwarding, but all the port forwarding I've ever done requires a fixed internal IP to forward the port to. Instead, with Tailscale, you can just plug in your server/RPi/whatever and forget about it.
2. Similarly, you can take advantage of this to get a window into a network that you don't control. (It sounds bad when I put it that way.) Say you've got a relative a long ways away, and they're constantly calling you for help with their network and you're constantly walking them through how to fiddle with their router settings or something - with Tailscale, you could just preconfigure a Raspberry Pi, ship it over, and not have to worry about being able to connect to it once they plug it in. Voila, you have an entrypoint into Grandma's network or whatever.
3. Self-hosting aficionados like myself tend to turn to "can I put a thing on a server somewhere" as a solution to many problems involving cross-device communication: file synchronization is an obvious example. But what if all the devices could seamlessly talk to each other, anywhere and anytime? Then you could pop, say, Syncthing on each device and not have to worry about keeping a server up.
Tailscale also has some extra goodies like being able to share a device to someone else's Tailnet, so if you run (say) a Plex server and you want to let someone else talk to it without exposing it to the greater internet that's pretty easy.
Their "Magic DNS" feature is also quite convenient - I used to pride myself on being able to remember all the IPs I had assigned to all my network-connected stuff and therefore not needing DNS, but since I've started using Tailscale I've found myself defaulting to DNS names more and more without ever even consciously deciding on it. Words are just more memorable than numbers, there's no need to fight it.
All that said, if none of those use cases seem compelling to you then maybe Tailscale just isn't for you. Different strokes for different folks.
This is all great stuff, and reasons to respect Tailscale, but honestly the killer feature for their big-money customers, and the reason I have such strong feelings about it, is much simpler: Tailscale does SSO login, and does it extremely well. If you're running a security practice for a growing tech company, one of the most important early jobs you have is getting all your services migrated to SSO. VPNs are notoriously annoying to SSO (I have seen some janky Okta integrations for OpenVPN).
It’s atrocious. We are using OpenVPN with Okta LDAP, and you have to type “password,totpcode” as your password. Alternatively, you can type just your password and wait for it to send a push to your phone while OpenVPN is completely blocked waiting. You have a YubiKey? That’s a damn shame.
Training and support for this for our entire company was a pain in the ass. I also felt embarrassed having my name on rolling out something so janky.
We are trialing Tailscale now and onboarding is two minutes and practically doesn’t need a guide (Download the app. Click login. Okta auth however you want). Our OpenVPN guide is like 8 pages.
I think the pitch here is “Semi-managed WireGuard peer provisioning and NAT punching as a service” usable by anyone who may not otherwise have a clue how WireGuard works (eg. friends sharing access to a file/media server), within 5 minutes or less from download/login to “done”
I struggled with a use case at first as an individual user, but now I'm using it in a few different places.
I have a Synology on my home network which I use for Time Machine backups among other things. My Mac has a Tailscale client and I can backup to my Synology from anywhere.
I have a number of random servers I keep for hobby stuff, a mix of hosted bare metal, VMs and VPS. None of them have SSH open to the internet. My access is all over Tailscale. It was super easy to setup, and now I never have to touch it. Occasionally I'll see that the Tailscale daemon was updated on some host.
If I were starting a company today, as soon as I had any resources that needed any kind of remote access for the team, I'd use Tailscale to provide that access.
A service like NordVPN (or other such VPN providers) sets up a connection between your device and an exit point that they manage (a server, to keep things in a client-server structure). The idea is that no one monitoring your traffic should see what websites you visit, what things you download, or what devices you connect to (I'm keeping this broad and very surface-level to reach a common point of understanding; if anyone can add to this, by all means, let's clarify, as it's quite a complex topic).
So let's say the local government blocks access to certain content: you can connect to a VPN provider's network, select an exit point (a server), and your traffic is routed through them. But this can be monitored by that provider, and I read an article recently highlighting that a lot of free VPN providers can't be traced back to any identifiable company, so you couldn't say who is running those servers. Which means you don't know whether all your traffic is actually recorded and sold on to someone.
This brings me to the first difference - you can set up your own server (at home, or more likely through an infrastructure-as-a-service provider like Hetzner, OVH, DigitalOcean, etc.) and install Tailscale on it and on your device(s). This way your connection to the server is secured, and the server is now the exit point. Your provider in this case cannot see what your server is serving you. The added control here is that the server IS YOURS, so you can clear logs, take it down and set up another one, and so on.
The second difference is that a VPN in most canonical cases has a client-server construction. This means there is a hierarchy, and all your devices use that server as a gateway of sorts. If I understand it correctly, Tailscale acts as a mesh laid on top of your existing connections: devices connected to the same mesh behave as if they were on the same LAN, but over the internet. So let's say you're on holiday; you can connect to your home computer (assuming your device and your home system have Tailscale, an internet connection, and are running, of course) as if it were on the same network. Because it is. It's on a virtual network where Tailscale creates these connections and manages the IPs. So you can view your movies, copy your pictures from your phone to your home computer, and so on.
You could also have a home server running a number of services. Enabling SSH over the internet has its risks, but Tailscale can alleviate a lot of them, because you would have a fixed IP on this virtual network and so would your server. So suddenly you can define a rule on your server's firewall that says "hey, block everyone except THIS IP".
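For example (illustrative ufw rules; 100.101.102.103 stands in for your own device's Tailscale IP, which lives in the 100.64.0.0/10 range):

```sh
# Drop everything inbound by default, then allow SSH only from one
# specific Tailscale address.
sudo ufw default deny incoming
sudo ufw allow from 100.101.102.103 to any port 22 proto tcp
sudo ufw enable
```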
Lastly, you could maybe even just share pictures, documents and whatever else with friends, family or anyone else who is running on the same Tailscale network.
I really hope I haven't completely misunderstood the service, and I'd be happy to get more clarity or some better examples. These are SOME of the use cases I can think of, but there are probably more! BTW, I don't use Tailscale; I'm considering it after having looked at other mesh networks like Yggdrasil, as that's the part I'd be interested in...
For pretty much every SaaS app out there with a freemium model users on the free plan aren't "the product" and their info isn't being sold to anyone. Rather, the free plans are considered a business expense to motivate a percentage of the user base to move to paid ones.
So what they are saying makes sense but is very far from revolutionary.
It isn't that they have a free plan, but instead how they plan to keep it free (presumably forever), if that makes sense.
To compare: logistics was Amazon's main cost driver, and yet Bezos mandated not only making it free for Prime customers but delivering orders in 2 days! This forced Amazon onto a path of optimizing the hell out of fulfillment and logistics, so much so that they now own airports and cargo planes, build electric vehicles for middle-mile and last-mile delivery themselves, have a tremendous network of distribution centers across the US and the world, etc. In essence, sustaining Prime drew a proverbial moat around Amazon (though companies like Facebook, Snap, Uber, Instacart, Klarna, Alibaba, and Wish have inflicted damage in spite of that).
I've been a fairly big ZeroTier fan for a year or more, playing around with it on my own machines. They do some really slick things with public networks and broadcast traffic and those "public network with an open firewall for port X" setups (their name for it escapes me), and I like their web interface (vs. managing files like WireGuard or Nebula).
They were on the short list for deploying an overlay network at work, but when I started thinking hard about it, I was concerned about availability if their controllers went down; I didn't want to tie our availability to theirs.
So I asked their sales a question about whether we could host a backup controller or something, to allow our network to operate if their controllers went offline. It took (IIRC) a couple of weeks to get a reply, and that reply was along the lines of "It's impossible for all our controllers to go down, but if you want to self-host you lose the web UI." I replied linking to a ZeroTier tweet saying "Hosted controllers are coming back up" and asking "What was the event referred to in this tweet?", and got only crickets in response.
So I'm planning on going with Nebula, but also keeping an eye on DefinedNetworking.
If you're looking for alternatives, you might find the free, open-source project I'm a dev on interesting too. You can run your own network if you want. Give us a peek? https://openziti.github.io/ If you like the project, give us a star on GitHub so we can spread the word :) Right now we also have "a single controller", but you don't lose any network traffic if you have to restart it, and of course we are right in the midst of going "distributed controller" to eliminate that SPOF.
Just from a look at the homepage, it does sound interesting. In our case, we have a lot of legacy apps that will get in the way, but Zero Trust is a direction we'd love to move in.
The tunnelers are your friends there. We have them for all major desktop/mobile OSes. They are "basically a VPN client", but with all the actual zero trust goodness you'd expect: strong identity, policy-based security, etc. We realize you can't just embed zero trust into all your apps right from the start. It's a journey! :)
But they aren't just Nebula: they offer a hosted key manager for Nebula and tooling that centrally manages your configs. It's still fairly early and doesn't support all Nebula features, but it looks promising.
Interesting. Last time I checked, it wasn't possible to run Tailscale on Android TV (Nvidia Shield). With ZeroTier it's at least possible to sideload the APK. Not so with Tailscale :(
I like the idea of Tailscale, because it gets us a little closer to the network where all devices are connected instead of 99% of them being behind NAT.
But I don't want to use them while they don't support email-based logins. I did read their explanation[0], but I'm not sure how it actually makes sense - if they don't want to have passwords, why not a client cert?
You can imagine them getting to some fussy custom authentication scheme like client certificates at some point, but IdP-based SSO logins --- usually email-backed --- are a practically-universal security best practice for corporate security now. The goal is to make it easy to enroll and offboard people and make it difficult to miss a step in offboarding and thus leave people with undesired access, and to have a single source of authentication truth that can be regularly audited.
I wouldn't trust an IdP-based SSO login for any critical service that I need continuous access to, unless I control the IdP.
All those stories like "Google blocked my account without recourse and they don't answer tickets anyway" have put me off. I lost editing rights to a Google My Business profile that I was the sole owner of, because they gave third-party input precedence over the owner's own entered data (opening times, of all things) and then locked the ability to update it. So I know loss of control over one's own account isn't that rare with Google.
It's not just Google. So I trust my domain provider more than I trust any third party SSO, because I believe I have legal ownership of domains in case all else fails. I don't seem to have equivalent rights over SSO accounts at any third party. So, for now until something better is available, email-based accounts are a must-have for any critical service.
You do you, but that first sentence puts you wildly out of step with most security practices at most companies, very much including most tech companies.
The most recent medium-size tech company I worked at is very security aware in a modern way, and uses Google IdP for employee access.
But they don't trust Google, and have been looking for a way to migrate everything away (IdP, docs and email) for some time.
Everything you said about the benefits of IdP, SSO and enrolling/offboarding employees is spot on.
The only problem is that some third party IdPs aren't as low-risk as they seem, if you're a small entity who cannot get corporate support in case of a problem. The risk is small but not enough, and the consequences for a small entity are severe. Loss of access to docs, email and your other service provider logins can kill a business.
For larger tech companies the support hotline will answer, so it's not a problem and it makes sense to outsource. It would still be better if the company had legal rights to their own credentials on IdP as a backup, though, similar to the way they have legal rights over their domains.
I use it for a simple use case of connecting to my home assistant server from outside my network without having to open ports.
I just installed the Tailscale app on my Home Assistant server (Ubuntu) and then installed it on my iPhone. Once they're both logged in, I can use the IP address shown in the Tailscale app to connect to the server from anywhere.
As mentioned in the article, it just works and is perfect just the way it is, for free. I don't need any extra features or improvements.
So, would it be possible to SSH to my home machine from my office laptop, if both have the Tailscale app? I imagine even if it were possible, office network security might block it.
It's fine for families with the sharing nodes feature[0]. My family and I use it for a few services hosted on different machines in different locations (Jellyfin, Home Assistant, and some others).
It's an easy way to get remote access to services when away from home, or when the family lives in different homes but wants to share services. I wrote a guide explaining how to set up remote access for Jellyfin using Tailscale[1], which may illustrate the use case.
> It's also not super great for some workplaces because tailscale kinda...gets superpowers in your network. It is definitely something smaller companies are using though.
There are banks [1] using Tailscale. If their security concerns can be addressed, I'm sure it can work for pretty much any company.
Tailscale may have given them a custom solution. If not, it'd be pretty irresponsible to run it the way Tailscale runs right now in a banking situation.
You think banking cyber-security is waaaay better than it actually is. Many banks don't offer 2FA, and enforce password length limits so short the passwords can be cracked in less than a week on decade-old hardware.
Having one user isn't really that bad of a limitation for a family, just share the login. Unless you want to prevent certain family members from accessing certain devices at a network level for some reason.
It ties to OAuth logins (Google, GitHub), which doesn't fit so well with "just share the login", although I guess you could make a dedicated GitHub account just for it.
They have a "Community on GitHub" plan¹ that's free if you're using it either for an open-source project, or "to interconnect your own friends and family". It does mean your friends & family have to use GitHub to authenticate though, but that seems like a pretty small restriction.
I use it for remote access to a file server and SSHing into a machine from my phone while I'm on the go, without having to expose a port to the internet. I tried a few alternatives and found them clunky. Tailscale just worked, even with some of the more complicated parts of networking like Bonjour/Rendezvous/Avahi/mDNS.
I don't know, man... I've been using ZeroTier quietly for the last 2 years, and I can't remember ever having problems or anything getting messed up.
If I have to add a node, I install the app on the device, open the account, copy the code, authorize it, and done. No config, ever.
Can anyone tell me the difference between the free accounts of ZeroTier and Tailscale? The configuration, management, setup, ease, limits?
Again, ZeroTier is set up once and forgotten. Oh, and no login on the clients either: they are preconfigured with a key, and that key gets verified by the client, so no login issues.
The one and only thing that keeps me from using Tailscale is the sign-on options. I do have accounts with Microsoft, GitHub, and Google, but I'm trying to move away from Google (getting any kind of account ban on any Google property means you lose ALL of your Google access with no way to recover; just search HN for stories about this), and I don't frequently use those other services. There are no options for other 'free' identity providers, and you must pay to use any of the commercial solutions. In their support documentation, they note that they absolutely do not want to become an identity provider, so they will not offer accounts directly.
I don't mind paying for a service. But to use Tailscale for personal use with a commercial identity provider would get into the "prohibitively expensive" category for me.
Using Tailscale is like using Dropbox back when it was new: it's "just X but without the setup or maintenance", which shouldn't be so gosh darn satisfying to use, but by jove, it is. It makes you feel less shackled by the constraints that defined your world before. Awesome.
Paranoid firewalls blocking NAT traversal are a pain. I am running a private DERP relay to get around public relay congestion. I also have a subnet relay running - I am watching which solution will be long-term more performant and reliable.
Their ACLs definitely take some getting used to but I think I have things about where I want them.
Surprising issue - conflicting address ranges between home users and corporate network prevent subnet relays from working seamlessly.
Centralized logging would be a cash-worthy feature.
Adoption by the team is a bit slow; most people are still using SSH tunnels despite their clumsiness.
They log to themselves centrally, and they log to the standard locations on the clients, so you can pull the logs in from there using your standard centralized logging setup.
Conflicting address ranges for subnets isn't especially surprising to me. Making sure your corpnet isn't on 192.168/16 is probably the most helpful thing you can do there.
I'll probably have to look at this GTM 3.0 ideal. I've found that, in the general case, it's fine to underprovision and just let stuff fall over if no one is paying. I'll get around to fixing it eventually, even if no one alerts me. One of my goal projects is to have a 1U racked in Los Angeles or Dallas that's hiding 8-12 Raspberry Pis or Intel Celerons/Atoms inside of it, including a pair of switches and all the redundant PSUs you can shove in there. A nice "NAS" device for cold storage would be awesome, too.
I run a pastebin server on a legacy t1.micro AWS instance - or even less, I run it on Lightsail now. Upon reboot, it sets up ~250MB of tmpfs, unzips the actual server code - Node.js in this instance - to tmpfs, and sets the data directory in tmpfs as well. The only way it could cost me a ton of money is someone maliciously requesting the same paste from thousands of remote machines, but my understanding is Amazon would reverse the charges, and I'd probably just stop running the service. I can almost as easily paste and link stuff using Mattermost - except full-frame images from one of my cellphones, which I can't figure out! There's no setting to allow larger-format images anywhere in the configs. So I'd be out a few dollars, know that someone had it out for me or one of my anonymous users, and just walk away.
I would miss being able to upload obscenely large (108MP) images and pinch-zoom them forever, which is a quirk of the pastebin software I chose.
Free until it isn't. If anyone could eat the cost of a free tier it would be Google, but they decided that the Google Apps for business free tier had to go after 12+ years.
The worst thing Google did was put the highest potential source of new revenue (GSuite) and the product losing the most money (GCP) under the same executive and tie their compensation to performance. The former gets milked in every way possible rather than run in whatever way would make the most sense for it as an individual business unit.
While I agree that this wasn't helpful to GSuite, it is simply cross-subsidization in pursuit of a higher goal and a bigger strategic threat (AWS/Azure).
Sure, but you can get great use out of it for a long time, as long as they judge that the viral growth outweighs the revenue they could otherwise be collecting.
I'd be more worried about the VC funding they've taken ($15M according to crunchbase). It may take years, but eventually, somehow, VCs will need to get their money back. That may be an IPO and then public market scrutiny, it may be acquisition, but if the company is a going concern, the VCs will want ROI.
Cloudflare, which went public in 2019, has done pretty well with their free tier, with their reasoning being[0]:
> Our free customers create scale, serve as efficient brand marketing, and help us attract developers, customers, and potential employees...
So for as long as they consider free Tailscale users worthwhile as effectively free marketing that attracts clientele, it shouldn't be a problem. Tailscale doesn't proxy traffic either, so the overhead of having free-tier customers shouldn't be huge.
If my enterprise network managers could buy a Tailscale Box, they'd readily consider it. As-is, this is a bit far-fetched relative to their current modus operandi -- "Advanced corporate VPNs like Tailscale can abolish concentrators completely: every server can run Tailscale directly, and individual clients can form point-to-point connections to each server it needs to talk to."
Anyone figured out how to bridge the gap from legacy here?
Yes - you run one or more Tailscale subnet routers instead of your existing concentrators, then slowly migrate to running Tailscale directly on new deployments at your convenience.
Running a subnet router is a matter of installing the Tailscale package on a server and authorizing it to route traffic to certain subnets over Tailscale.
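Concretely, it's something like this on the router box (example CIDR; the flags are the standard ones from Tailscale's subnet router docs):

    # Let the box forward packets:
    sysctl -w net.ipv4.ip_forward=1
    # Advertise the corporate subnet(s) into the tailnet:
    tailscale up --advertise-routes=10.20.0.0/16
    # Then approve the advertised route in the admin console;
    # Linux clients opt in with --accept-routes.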
It's an entirely different set of teams who run anything "on a server". Besides the gap in teams and legacy demarcations of responsibility, the next disqualifier is having to think about maintaining a server. At best, the network team has just barely automated their switches and routers with Ansible. The VPN concentrators are treated as a black box - and NetEng seem to prefer to stay within that box!
Maybe we're just not normal? (UK/EMEA, public company)
(I wrote the article.) You're not that unusual, we just haven’t had time to address that use case directly yet. I expect an ecosystem of MSPs may arise to offer physical boxes, or some such thing, since the tailscale client is open source. (Or you could buy a Synology with tailscale on it I suppose!)
Many companies just run tailscale in a VM to replace their physical VPN concentrator boxes.
If someone pointedly asked me this in a meeting, my off the cuff response would be "bastion hosts, probably".
If the named service completely integrates with whatever access control a company uses (RADIUS, SAML, whatever), then there shouldn't be any reason not to use this in lieu of concentrators. At least you lose that bottleneck and point of failure. For larger and more geographically disparate companies, I could see this being an even better proposition - but only because this is merely the second time I've seen Tailscale at all.
All I know is I've used WireGuard recently, and it took me a few tries to get it to do what I wanted. A decade ago I was trying to get some corporate VPN software working on Gentoo, and I managed to cobble together enough correct settings to get it working, too. I don't wish that on any user.
I loathe setting up a dialer to connect to a VPN, and even worse is the third-party "SSL VPN" app junk - most of the ones we've tried just lose settings on my computers, to the point where dark fiber seems like a better investment of my time.
> We continue to improve our core product so it can build point-to-point links in ever-more-obscure situations.
I'm not sure how high a priority this is. We've been wanting a way to force traffic over a private network, for example, but it's still not possible. It's really the only thing we've asked for for our business, and we hope they introduce it soon - it's easier keeping a lot of ACLs in one single place.
I’m on the free plan at the moment, and it’s pretty neat, but I’d actually be willing to pay for a self-hosted version :P (But I guess the existence of such an option might tempt some paying enterprise customers to attempt self-hosting instead?)
(I’m aware of headscale as an open-source control plane, but the iOS client is still closed-source and hard-coded to only use the first-party control plane :( )
While not apples-to-apples and less polished, we're slowly building up https://github.com/tonarino/innernet as a fully open-source (and self-hosted) alternative to things like Tailscale. It controls vanilla WireGuard under the hood (kernel or userspace implementations) and is lower-level (no graphical interfaces yet), but depending on your needs it might still fit :).
I have been experimenting with headscale as well. I have it set up and everything works nicely, but the Tailscale macOS client can't automatically log back in.
Headscale has the preauthkey - it's even still valid - but I need to do the tailscale up --login-server ... dance every time to get it connected.
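For reference, the dance is roughly this (placeholder server URL; the key comes out of headscale's preauthkeys command):

    tailscale up \
        --login-server=https://headscale.example.com \
        --authkey=<preauthkey>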
Interesting. I've been running Headscale since the beginning of 2022, and haven't had any issues like this (MacOS Monterey). Have about 5-6 other MacOS users, and they haven't mentioned anything like this either.
I've probably logged in a grand total of two or three times (during initial testing in Jan). Everything "just works" for us.
Thank you, sounds like this will actually take care of it. I thought the /apple instructions were only for iOS, since the --login-server option was already present on the binary.
I just tried it and found a show-stopper that you might want to solve - it slows down my TortoiseSVN Log Messages view, which connects to a local SVN server, from 0.1 milliseconds to 20+ seconds.
I love the business model, but I wonder whether it works forever. Can word of mouth be enough if you want to go mainstream and hit the mass market?
“First, the protocol should be based on UDP. You can do NAT traversal with TCP, but it adds another layer of complexity to an already quite complex problem, and may even require kernel customizations depending on how deep you want to go. We’re going to focus on UDP for the rest of this article. If you’re reaching for TCP because you want a stream-oriented connection when the NAT traversal is done, consider using QUIC instead. It builds on top of UDP, so we can focus on UDP for NAT traversal and still have a nice stream protocol at the end.”
That article is the best article I have ever read on the nitty gritty of how NAT traversal works.
I've read it before - fantastic article. I agree that QUIC is better for the purpose. Still, I'm curious whether TCP hole punching is worth supporting for direct connections in UDP-blocked environments. Or maybe TCP hole punching won't work in such environments either?
From the article, they use relays as a fallback if UDP is blocked.
“We’ll probably also still want fallback relays that use a well-like[d] protocol like HTTP, to get out of networks that block outbound UDP.”
“Having relays to handle the long tail isn’t that bad. Additionally, some networks can break our connectivity much more directly than by having a difficult NAT. For example, we’ve observed that the UC Berkeley guest WiFi blocks all outbound UDP except for DNS traffic. No amount of clever NAT tricks is going to get around the firewall eating your packets. So, we need some kind of reliable fallback no matter what.”
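Incidentally, you can see most of what the article describes on your own network with the client's built-in diagnostic:

    # Reports whether outbound UDP works at all, your NAT's mapping
    # behavior, and latency to each DERP region:
    tailscale netcheck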
As long as you don't need to access said Plex server from an Android TV device, like Nvidia Shield - AFAIK there's no APK build for tailscale that will install on Android TV (which is a slightly different device profile from phone/tablet).
> Tailscale’s free plan is free because we keep our scaling costs low relative to typical SaaS companies.
Well, no - Tailscale's free plan is free because it's a user-acquisition strategy (as described later in the article), not because of low costs. You could have higher costs and still be free, and typical SaaS companies follow the exact same reasoning as Tailscale: that's why Notion has a free plan, that's why Cloudflare has a free plan... That's the whole point of a freemium model - what VCs call product-led growth.
"If you are not paying, you are the product" is true for free products, not for freemium.
They’re solving a problem that, in the past, has caused me to be dramatically less productive. I set up Tailscale on all my devices in 20 minutes. It’s like magic.
You can bet that the next time I’m working somewhere and we face a problem Tailscale solves, I’m going to advocate to use it.
I agree with your statement, but to play devil's advocate for a moment: the details matter, do they not? I.e., if your scaling costs are high, wouldn't you need to extract much more from each user than if they were low?
I.e., high enough and you're doing sketchy shit, selling every ounce of user data you can, because you have to recoup some of the user costs. Low enough and free users could "just" be a mind-share cost.
If true, the devil is really that the user has no information here. All we can do is assume we are the product and that they're fully evil. Unfortunately.
I think it's important to keep free-plan costs low, since the conversion rate from free to paid plans is very low, as I heard in an interview with the Todoist CEO.
I wonder how companies split their features among pricing tiers. What is the right feature split between free and paid (in the context of B2C)?