Exposing a web service with Cloudflare Tunnel (erisa.dev)
398 points by geostyx on Feb 8, 2022 | 178 comments



I maintain a list[0] of solutions to this problem. Cloudflare Tunnel is what I currently recommend to most people. IMO it's the easiest way to expose services publicly on the internet, for example a website or a shared Plex server.

The main downsides to Cloudflare Tunnel are no e2ee (Cloudflare decrypts all your traffic) and that, technically, anything other than basic HTML websites (e.g. media streaming) is against their free ToS, though I haven't heard of that being enforced in practice.

If you're the only one ever using your services then I'd recommend Tailscale instead, which sets up a VPN using WireGuard along with slick auto p2p setup (NAT traversal, relays, etc).

[0]: https://github.com/anderspitman/awesome-tunneling


Hi, I'm the author of the blog post being promoted here.

I love that list! I also use Tailscale for a lot of my personal private services as well as Cloudflare Tunnel, I think they're both really great :)

The concern about Cloudflare decrypting the traffic is valid; I just personally feel that for a lot of public websites that's often fine, especially if the host was likely using Cloudflare already anyway. If an individual doesn't want to use Cloudflare for their setup then that's fine and there are lots of cool pieces of tech they can consider!


> though I haven't heard of that being enforced in practice.

It happened here[0], and the reasoning for why they allow some free tier content is in their S-1[1]. Typically, even if you run a blatant file sharing or video streaming application in violation of section 2.8, Cloudflare doesn't necessarily care as long as it's not too bandwidth-intensive (e.g. I wouldn't recommend having a dozen people streaming Plex from the outside internet).

0: https://community.cloudflare.com/t/the-way-you-handle-bandwi...

1: https://l.judge.sh/85EH


Thanks for this. The thread is confusing because the user is quite upset and hostile and didn't seem to understand Cloudflare very well, but in the end this does indeed seem like a case of the site being shut down due to non-HTML ToS violation.


Consider adding Tor onion services to that list. The idea is that you run a Tor daemon that starts an onion service which can expose any TCP-based service. Communication is facilitated via another node, which makes it possible to host onion services behind NAT.
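
For reference, a minimal sketch of the torrc side, assuming a local web service on port 8080 (the directory path and ports are just examples):

  # Expose a local HTTP service as an onion service
  HiddenServiceDir /var/lib/tor/my_service/      # Tor writes the .onion hostname and keys here
  HiddenServicePort 80 127.0.0.1:8080            # map onion port 80 to the local service

After restarting Tor, the generated .onion address can be read from /var/lib/tor/my_service/hostname.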


This isn't required for a shared Plex server; they proxy external connections via their servers automatically.


They limit the bitrate to 4 Mbps through their relay servers though, which prevents HD streaming.


The relays only come into effect if the client isn't able to form a direct connection to the server. However, most are able to do this without issue, and it happens automatically for the most part.


Good to know, thanks. I used Plex as an example since more people know what it is, but in practice I would use Jellyfin for media streaming, since it's open source and doesn't use dark patterns. But you also need to manage tunneling yourself...


What about Slack's Nebula? Tailscale is not fully open source; I believe there is also Headscale, which is attempting to replace the closed-source parts of Tailscale. But I am curious about Nebula. Has anyone used it for anything like this?


Never used it, but it seems more complex and doesn't use standard tunneling (e.g. WireGuard).

You should also check out Innernet if you're interested in this space. Wiretrustee is similar to Tailscale in that it mixes open- and closed-source components.


There's also Azure AD Application Proxy [1] and Zscaler Browser Access [2]

No affiliation, just what I'm having to use at work.

[1] https://docs.microsoft.com/en-us/azure/active-directory/app-...

[2] https://www.zscaler.com/blogs/company-news/securing-third-pa...


I found this to be good as well, maybe you can add it to your list https://github.com/ferama/rospo


This still feels too cumbersome even for a technical person.

An “easy” solution would be something that gets your local content online in one click or less.


In my biased opinion, the "easiest" solution currently is my own boringproxy, which I mention at the top of the list. Once you have the client daemon running on each of your devices (static executable with minimal CLI params and no config file), adding and removing tunnels is just a few clicks in the web UI.

It also has basic e2ee. The TLS certs never leave the client devices by default.

Even so I agree with you that this is still too much. I think a non-technical person should be able to write some content, go through a quick OAuth2 flow to point a domain name at that content, and have it just work. I'm currently working on building something more like that.


If I wanted my grandma to host a folder from her Mac so I can access it from the web, what solution feels best?


A hosting service is the only solution, since otherwise the website will be down whenever the Mac is rebooted, or moved if it's a laptop.

The service in this article is either for development purposes or for people who are running dedicated home servers (which means they have a Linux desktop that they keep on 24/7 without rebooting and are usually programmers and/or system administrators).


What's the goal? Does your grandma want to start a blog and you're talking about hosting the HTML from that folder, or do you want to be able to access the folder to read/write files, or something else?


The goal is to host an HTML page and/or share a file.


I think our thread got too deep and it won't let me reply. Feel free to contact me directly through https://apitman.com or post on https://forum.indiebits.io if you want to talk more.

But to answer your question, you'll need to run a CLI daemon on your grandma's computer. Something like ngrok static files would probably be the easiest:

https://ngrok.com/docs#http-file-urls

But since you're already setting up one daemon in that case, I'd use Cloudflare Tunnel and also run a basic webserver or WebDAV server alongside it to give you more control over how the files are hosted.
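
For illustration, roughly what those two options look like on the command line (the folder path and port are placeholders):

  # ngrok's built-in file server (per the docs linked above)
  ngrok http "file:///Users/grandma/Shared"

  # or: a basic webserver plus a Cloudflare quick tunnel
  python3 -m http.server 8080
  cloudflared tunnel --url http://localhost:8080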

Also pretty sure you have to pay for custom domains with ngrok.


I'm not aware of a good solution to this currently, but it's a space I'm very interested in. The main problem is that the devices most people use these days (phones and laptops) are constantly being connected and disconnected from networks. So even if you solve the software problem and make a nice GUI program for your grandma to use which automatically handles TLS certs and tunneling, if she closes her laptop her blog goes down.

I think the way to do this may be to ship services as Android apps. Imagine something like self-hosted Google Drive that you install as an app on an old Android phone. After install you go through a quick OAuth2 flow to connect it to a subdomain and open a tunnel, and now you have 64-128GB of e2ee cloud storage. Just plug the phone in and leave it in a corner.

This concept can be applied to Nextcloud, Jellyfin, Plex, your grandma's blog, etc.


Overlay networks could offer a good solution here. Today, if you have software on OP's grandma's laptop that starts a WireGuard tunnel to a relay host, the laptop can have a stable IPv6 address to which you can connect. ZeroTier and Tailscale enable this as well.


If persistence is not key, what is the easiest way to do this? Like if I am on a phone with grandma and want to see a local HTML page from her Mac, what do my simplest instructions for her look like?


Out of curiosity, what kind of content are you looking for in that HTML doc?


Dropbox? The steps would be:

1) Copy folder to Dropbox subfolder

OR

1a) Go to Preferences->Sync->select which folders to sync -> add the folder that you want to share

2) Right-click on it and select Share Dropbox Link

3) Copy the link and send it via email/WhatsApp


iCloud Drive has file sharing built right into the OS


> But what if you could host a web service with no ports exposed? Well, you can! Cloudflare Tunnel makes a persistent outbound connection (a tunnel!) between your server and Cloudflare's nearest datacenter. All the traffic to your domain flows through this outgoing tunnel and connects to your server through the protection of Cloudflare. This also has the benefit of being seamlessly encrypted, so you don't have to worry about a thing when it comes to the security of your web service.

Well, a port is exposed, it's just exposed on Cloudflare's reverse proxies. And I think this is probably a dramatic overstatement of the security that Cloudflare provides...


The point is that it's connected via NAT, so you don't have to worry about port scanners hitting your origin IP and seeing any info about your web server (potentially exposing it to DDOS), and it's overall easier when you don't have to touch your inbound firewall.


I understand that. That doesn't mean you don't have to worry about security.

Most stacks would crumble under a relatively small L7 ddos that Cloudflare would not likely mitigate.


Well, a decent hosting provider such as Hetzner provides that service to all their customers. https://www.hetzner.com/unternehmen/ddos-schutz

Been using them for many years; way better and cheaper than AWS.



https://www.cloudflare.com/plans/#overview

The WAF is $20/month and as far as I know you don't get it automatically for free by using Cloudflare Tunnel, though feel free to correct me. There was the case of them enabling mitigations for the log4j vulnerabilities for anyone on Cloudflare, but that was an exception.


Yes, WAF is one of the features you get if you're not on their free app service plan. I think having the option of simply upgrading and turning it on if it becomes necessary makes the free offering quite attractive.

I haven't used CF in anger, so can't vouch for it more than that.


We are die hard Cloudflare customers, I am speaking from experience. They are phenomenal, but they aren’t magic.


What do you then mean by a "relatively small L7 ddos that Cloudflare would not likely mitigate"? It seems to me that their WAF would mitigate that and I can worry even less about threats.


Could an origin server run a port scanner through the tunnel and hide the origin of the scan?


Well sure the scan would appear to come from cloudflare. But it’d be pretty easy for cloudflare to then identify the tunnel user as the source of the scans.


Well, their WAF and DoS protection are pretty nice.

An easy secure setup would be to spin up a guest VM and isolate it in its own subnet.

Disable routing between your guest and the rest of your lan and you can sleep easy at night so long as your app doesn’t serve any crazy dynamic content.


"Walking around covered in body armor and allowing the military to drive me to work in a tank" is nice protection but it's also very restrictive. I don't think the argument against this is so much that Cloudflare doesn't provide nice features as that those features are entirely unneeded for 99.99% of people hosting from home. The downsides of heavy protection are vastly increased complexity and dependence on a non-'dumb pipe' non-ISP corporation which kind of defeats the point of hosting from home.

You really can just host your webserver from your home network, forwarding the port on your consumer-grade router over a consumer home connection, and most of the time nothing bad happens. But this kind of tunneling would be great for when you have a bad ISP that blocks port 80 instead of just saying servers aren't allowed.


Lmao your response made me chuckle. You're entirely right! Probably nothing bad will happen. Especially if you partition your network like I mentioned in my OP.

I would get worried about somehow enabling access to defects in my router by opening some inbound ports. I realize that's a little paranoid...but recently I have been playing around with https://github.com/threat9/routersploit and routinely find defects in consumer routers.

Here's my other beef with cloudflare: Once I gotta pay 200+/mo for their security services or whatever, I could just rent out a private rack in a colocation and throw some old beefy lga-2011 xeon hosts. Now I don't need anything on my LAN exposed and I have dedicated IPs, physical security, and backup generators...etc.


> Here's my other beef with cloudflare: Once I gotta pay 200+/mo for their security services or whatever, I could just rent out a private rack in a colocation and throw some old beefy lga-2011 xeon hosts. Now I don't need anything on my LAN exposed and I have dedicated IPs, physical security, and backup generators...etc.

Yeah, but now you need to source the hardware for the rack, make sure it stays up and there are no hardware failures, etc. Even simpler is to grab a Linode dedicated box which comes with v4 and v6 IPs, and you get all the benefits for only $30/mo instead.


Second-hand dual LGA 2011 machines are so cheap it's amazing. Enterprise-grade servers are mega reliable; I think people overestimate the probability of hardware failure.

A $30 Linode box has like 2 vCPUs and maybe 4 GB of RAM.

Where I live I can get a 1U slot in a shared colo rack for $30-$60/mo. Buy a used dual Xeon blade for a few hundred bucks and now I have a setup with 20x the resources. But yeah I admit there’s a lot more manual effort involved.


IMO if you can get a 1U for those prices, it's silly not to take it. Where I'm at I can't though and that's where a dedicated Linode box may make more sense.


You don't have to enable port forwarding to get your router exploited. I'd argue that port forwarding has neither positive nor negative effect on your router's security.

I've been hosting from home for 20+ years and I've never been troubled. But I only run static websites.


Yeah like I said I realize I am being paranoid but there are far fetched scenarios where serving static sites from home could compromise my home network.

Take the recent log4j vulnerabilities. Serving static content and logging trivial fields like request headers would lead to RCE. If that box can route to my home router, and my router has a defect available through routersploit, my network is completely pwned.

A network isolated VM with a tunnel to a remote vps would stop that particular attack.

All that being said…if a sophisticated adversary is targeting me I have to concede there are much easier routes to take.

I’m a security engineer at my day job so I may have conditioned myself into excessive fear.


A static webserver is just the webserver in my mind. If you use something like nginx you are only going to be surprised by a remote exploit about once every two decades. Yeah, if you use some sprawling set of 'apps' that use things like Log4j on top of your server you're exposing attack surfaces.


and the fact that all your data will flow through cloudflare and they decide how to use it.


No no, it's encrypted so you can just completely ignore the security of your web service.

* Broken auth? Doesn't matter, encrypted.

* IDOR? Encryption takes care of it!

* Blind SQL or something from the 90s? EEENNNNCCCRRYYPPPTTIIOOONN!


To be fair, this feature is part of Cloudflare's ZeroTrust offering, so you're meant to put a policy in front of it and forget it. This is great for getting extremely old legacy services that previously relied on VPN network trust onto an actual SSO provider instead.


They probably use military-grade hashes too. So you know it is very secure indeed.


> ... you can just completely ignore the security of your web service

Be wary of such absolute statements, especially when it comes to security.


You are replying to a sarcastic comment that agrees with you...


If you have $3-5/month to spare on a VPS, a similar but self-hosted solution can be achieved (tunnel/VPN and reverse proxy) using WireGuard and Caddy.

Caddy in particular is extremely easy to configure, with the bonus that HTTPS/Let's Encrypt has never been freer. WireGuard configuration is also gloriously minimal but admittedly, potentially tricky to get right the first time.
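
As a rough sketch of the Caddy side, assuming the home server is reachable over the WireGuard tunnel at 10.0.0.2 and serves on port 8080 (both are placeholders):

  # Caddyfile on the VPS; Caddy obtains the Let's Encrypt certificate automatically
  example.com {
      reverse_proxy 10.0.0.2:8080   # home server's WireGuard address
  }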

It's just good to consider alternatives to Cloudflare's network dominance, if you can afford it.


I recently used the same Cloudflare Tunnel project to put an internal hosted service behind Cloudflare access.

I chose this over Wireguard because it integrates with our SSO system and users don't have to configure a firewall client. In fact, most users don't know we even did anything special to secure the service.

Secondly, I can set up WireGuard, but then I would be responsible for maintenance, keeping the instance up and patched, etc. You may save money by using WireGuard, but you pay for it in time, which is the only thing you cannot buy.


Do you have any guides at the same level of simplicity as this one? It seems that while we always bring up WireGuard, it's a big topic with few good places to get hand-holding.


I can't share the code since it's internal but here's the broad strokes.

* Start with a "gateway" managing your WireGuard "PKI". Basically a group of Wireguard servers with an API that have synced configs.

    /proxies - Your frontend servers.
    /endpoints - Your backend servers.
    /gateways - WireGuard servers that your frontend and backend can reach.
* Gateway authenticates your proxies and endpoints and they both hit a /config endpoint to pull something that can be shoved into wg-quick. AllowedIPs restricts what the proxy is allowed to reach.

* Proxies handle user-auth like any web service and then act as a reverse proxy to the endpoints using the Wireguard internal address.

Nothing at all fancy, except that in a normal deployment your frontend and backend would live in the same datacenter and so you don't need any WireGuard BS.

This provides a model where our devs can hit a public endpoint that reverse proxies to their laptops.
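
Not the parent's actual config, but a sketch of the kind of wg-quick file such a /config endpoint might hand a proxy (addresses, names and keys are placeholders):

  [Interface]
  Address = 10.100.0.2/32
  PrivateKey = <proxy-private-key>

  [Peer]
  PublicKey = <gateway-public-key>
  Endpoint = gateway.internal:51820
  AllowedIPs = 10.100.1.0/24      # only the endpoint subnet this proxy may reach
  PersistentKeepalive = 25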


The real beauty of cloudflared is that you can just throw it into a sidecar for your k8s pod / docker-compose container set and configure the entire thing in one place.
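
A minimal docker-compose sketch of that pattern, using the official cloudflare/cloudflared image and assuming a tunnel has already been created with its config/credentials in ./cloudflared (the app image and names are placeholders):

  version: "3.8"
  services:
    web:
      image: nginx:alpine                  # the app you want to expose
    cloudflared:
      image: cloudflare/cloudflared:latest
      command: tunnel --config /etc/cloudflared/config.yml run
      volumes:
        - ./cloudflared:/etc/cloudflared:ro
      depends_on:
        - web

The config.yml ingress rules would then point at http://web:80, using the Compose service name for DNS.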


That's a good point, sounds convenient.


Similar: I use a cheap $3.50 AWS Lightsail VPS (Lightsail has DDoS protection) -> WireGuard -> Apache reverse proxy (mod_proxy) -> my local services.
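
Roughly what the Apache piece of that looks like, assuming mod_proxy/mod_proxy_http are enabled and the home box sits at 10.8.0.2 over WireGuard (hostname, address and port are placeholders):

  <VirtualHost *:443>
      ServerName home.example.com
      # TLS directives omitted for brevity
      ProxyPreserveHost On
      ProxyPass        "/" "http://10.8.0.2:8080/"
      ProxyPassReverse "/" "http://10.8.0.2:8080/"
  </VirtualHost>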


Why not have clients and local services meet on a WireGuard concentrator on the VPS? Then there's no need for the Apache reverse proxy.

Problem is, $3.50/month gets you only 500 MB of RAM, which is very little to run Apache + other services.


There's lots of ways to work it. I prefer retaining control over the service plane for ultimate flexibility and so I can easily switch public access point (the VPS) if needed. This also reduces the need for more powerful cloud hardware, more cloud costs, etc. On Apache, I've run Apache for decades for static web, reverse proxy, etc., I have no plans to change that.


Yep, I've done this with nginx and SSH tunnels, it works well.


I've just done the same thing at work. I've got a little Dell Optiplex running BookStack here, and an AWS Graviton2 box running a WireGuard server and reverse proxying web traffic over it.


Upvoted you for your username :)


IPv4 costs will keep increasing, so if you want cheap VPSes, IPv6 will be the only option, and this will allow you to use Cloudflare's network to serve the v4 users.


I just set up a Cloudflare Tunnel this weekend to my homelab. I was able to connect it up with a container within minutes. I also was able to set up their zero trust offering and had route based RBAC against two domains w/ Google OAuth2 login. I have my reservations about CloudFlare with regard to centralizing the web, but this tunnel is fantastic and saved me quite a bit of trouble with messing with my RouterOs config and nginx.


> I have my reservations about CloudFlare with regard to centralizing the web, but this tunnel is fantastic

Superior UI/UX offered by centralized systems is why everything is being centralized.

People will trade everything including privacy and security for ease of use. The market has shown this time and time again.


Getting ddosed by a $5 botnet, which gets cheaper every day, tends to change people's minds about Cloudflare.

Your users don't really care about decentralized utopia when your service doesn't work.


The only decentralization that's going to work is actual decentralization where there's not really anything to DDOS, or rather the entire system is itself a botnet.


There’s always something to DDoS. It comes down to whether the attacker has more resources than you have server capacity, and these days attacks can be measured in terabits.


Ultimately big CDN is the only way to win for DDoS.


Help me understand what you mean: my service in particular wouldn't be DDoSed because nobody cares.

I guess bots are hitting CF IPs at large and therefore services might be disrupted?


Well, every service that gets ddosed was once a service that nobody cared about.

But if your service is in a category that attracts ddos (like a forum or game) and you ever get enough traction for someone to care (doesn’t take much), it might surprise you how cheap it is to take you down and how limited your options are against a simple volumetric attack.


Not to mention Cloudflare Tunnel is a loss leader. Basically any new entrant has to either get funding or justify charging money for tunnel traffic.

Cloudflare Tunnel has gotten good enough that there aren't a lot of ways left to be better. A couple would be offering e2ee and a less stringent ToS (technically anything other than normal HTML websites is not permitted, though I'm not aware of this ever being enforced, yet).


Cloudflare already has the bandwidth. I suppose Tunnel doesn't cost much (or even anything) compared to the rest, since they pay for the size of the pipe.

When someone uses the tunnel, they never have to go outside of Cloudflare, since the traffic (I suspect) would stay very local.

Perhaps it could be even cheaper in the end for them.


Good point, but they do still have to pay development costs for Cloudflare Tunnel.


That's why I explicitly mentioned the cost of the bandwidth.

I wasn't talking about the development/maintenance.


It's unfortunate the only mature open source alternative[1] went on a path to seriously expensive subscriptions, 5x the price of a Tailscale personal subscription.

[1]: https://inlets.dev/


There are lots of other open source options[0]. Whether you would consider any mature is a bit more subjective.

[0]: https://github.com/anderspitman/awesome-tunneling


I did go through this list a few months ago and found most options lacking. But Cloudflare tunnel was still bound to having an Argo subscription back then. (To be fair, their pricing page is still very confusing on this)


Ok, I'm confused... you went through a bunch of awesome solutions and you found them lacking; but the modest price of inlets is unacceptable? If Tailscale works for you, then you don't need Inlets.

I like to have several environments on my laptop, each with a different Ingress and Let's Encrypt certificate, accessible from the public Internet whether I am at home or at Starbucks. If Grandma's mac has 4G of ram, she can do it too!


That you’d think 20 dollars a month is an acceptable price for this tells me that you’re either in the valley, are Alex Ellis or both.

Either way, I’ve built my own solution in go and if that doesn’t work out I also have cloudflared now. inlets is cool, but it is not revolutionary tech that can not be replicated and 20 dollars a month is mighty much for convenience, which would be hampered again by me having to throw a license key at every instance and being unable to share my config easily and reproducibly. And that ultimately matters a lot to me.


If you wouldn't mind opening an issue (or posting on forum.indiebits.io) and sharing anything you learned that's not already in the list it would be very helpful. I don't have time to try them all in depth.


Ummm... you haven't used Inlets, have you? But seriously, folks who use Inlets have typically tried a bunch of the obvious solutions and end up there when all else has failed them.

First of all, it's not "a" tunnel. It's however many you need to access the applications on your private network... which could be your laptop. It's not for everyone, but if you're running lots of apps on, say, your laptop and you want to have TLS everywhere, none of the comparably priced options come close.


Cloudflare Tunnel is free now though.


A word of warning wrt hard-relying your service on Cloudflare. They have hidden undocumented limits. When we hit those, they dropped ~10% of our traffic without warning and they did not respond to our support requests with anything other than platitudes, despite us being on their business plan. After ghosting us for 2 weeks they tried to upsell us to the Enterprise plan for more leeway on said undocumented limits (all the while not providing any insights as to what limits we were hitting, nor how).

I don't think they were malicious, I suspect growing pains, but it very much didn't match their stellar reputation.

After that experience we made sure not to rely on them for anything that we couldn't instantly turn off or switch away from. I'd run a blog behind cloudflare without worries but not sure anymore about nontrivial high-traffic applications.


That sounds weird. Please email me (jgc@cloudflare) and tell me what happened.


I come to HN for the articles, but I stay for the customer support.


Cloudflare Tunnel will spin up a free tunnel for you even without a Cloudflare account. If you run `brew install cloudflare/cloudflare/cloudflared` and then `cloudflared tunnel --url http://localhost:8080` you will get a URL you can use to reach that local port from the Internet.

I use it to share in-progress work with co-workers, test webhooks, etc.

Edit: fixed command thanks to comment below :)


Nice alternative to ngrok! I didn't realize this was possible without a cloudflare account.

FWIW the brew install command is `brew install cloudflare/cloudflare/cloudflared` (via https://developers.cloudflare.com/cloudflare-one/connections...)


I would rather use ngrok for these things: https://ngrok.com/

The reason why is because Alan is awesome.


Thanks Kord! Founder of ngrok here, just a quick note of correction for others in this thread: ngrok is absolutely intended for production use cases. There are many customers both hobbyist and enterprise running thousands of production workloads over ngrok's service (including ourselves! we dogfood ngrok for our ingress). We're excited to be sharing more about that with the HN community really soon.


As much as it pains me to say it, Cloudflare seems well positioned to eat ngrok's lunch. AFAIK they offer everything ngrok does plus auto TLS certs, CDN, domain name registration, and tons of other features. They also have way more edge servers for terminating tunnels close to the origin devices. And they can afford to do all this for free as a loss leader product. It's the AWS bundling effect. Oh and the client source code is available.

I don't want to see Cloudflare completely take over this space, but Cloudflare Tunnel is tough to compete with.

One knob ngrok could still turn is adding auto TLS certs which are managed on the client side. Then you can offer e2ee which is something Cloudflare will probably never do.


ngrok employee here (and lead on our cert system). Re client side auto certs, it's an interesting idea. We do support auto certs, managed within the ngrok cloud. We also support passing through tls termination to the ngrok agent and/or the user's upstream server so that users can use their own certs (which could be obtained programmatically). We also support end-to-end encryption as well as authentication (via mutual tls).

We've got a lot in the works as well.. thoon, real thoon. ;-)


Out of interest, why? They seem to be targeted at different use cases: ngrok for dev work (looking at pricing and the limits on the free tier), and Argo Tunnels for permanent services.


ngrok is easy to use. Is there any advantage of using Cloudflare Tunnel over ngrok?


Cloudflare tunnels also create multiple connections to Cloudflare for increased reliability. See https://blog.cloudflare.com/argo-tunnels-that-live-forever/


ngrok is meant for temporary quick test environments; Cloudflare Tunnel is more of a long-term solution. Although there is https://try.cloudflare.com/ which is designed to be just as quick and easy as ngrok.


Mind elaborating on the service trade-offs?


why?


Easy to expose an SSH server too. Use the .ssh/config ProxyCommand on the client. Cloudflare handles the authentication, with OTP over email by default.

They explain towards the end of this tutorial https://developers.cloudflare.com/cloudflare-one/tutorials/s...
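
The client-side ~/.ssh/config entry from that tutorial looks roughly like this (the hostname and binary path are examples):

  Host ssh.example.com
    ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h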


Hi, I'm the author of the blog post being promoted here.

This is really cool too!! I use Tunnels with SSH a ton. I was considering making a follow-up post going through the SSH setup too, but I felt it was a bit redundant considering that docs page existed. My post was because of the lack of a clear guide for a simple HTTP webserver.


Your tutorial is already more thorough than others. Ideal to help anyone get their HTTP site accessible to the public.


This is great, I've always found information about how to do this kind of thing to be pretty confusing and not well described. Thanks for adding some more helpful material to the web.

I wrote up a guide [0] for using Nginx on a standard digital ocean droplet, but had I known about cloudflared at the time I think I would have tried that (tailscale was also something I thought about).

There was another recent article about cloudflared I remember seeing (maybe not on HN?); there's not very much good stuff like this about self-hosting. A lot of people online just say "use X" without explaining anything helpful.

[0]: https://zalberico.com/essay/2020/06/06/urbit-on-the-cloud.ht...


Hi, I'm the author of the blog post being promoted here.

Thank you for your kind words!

> I've always found information about how to do this kind of thing to be pretty confusing and not well described.

This is the main reason I made this post; there is a lot of documentation but most of it is quite dense and doesn't walk through a simple use case. When I've recommended Tunnel to my friends I usually have to baby them through the process because of the lack of clear information. This post was made so I have something to point to when I recommend that people use Tunnel for their use case. I didn't expect it to blow up this much!


Thanks! Yeah it's great - this kind of thing is super helpful and will be helping random people searching the web for years to come :)


Some issues (and solutions) that I ran into: https://www.maxcantor.com/blog/2021-10-15-ngrok-to-cloudflar...


It's not obvious to me from the blogpost where TLS termination happens in this scenario.

I would want it to happen on my local machine, so that (a) Cloudflare can't read my plaintext traffic, and (b) I can manage subdomain certificates more easily via Caddy.

Is that possible with the cheapo free tunnels or does Cloudflare want to handle the domain and TLS certificates, too?


All this changes is how CF connects to the server. Like the rest of CF, outside of using Spectrum Enterprise (which enables TCP 443 tunneling), CF removes TLS at their servers and inspects the traffic so all of its caching/firewall/etc features can be applied. It does add it back when talking to a tunnel, so it's not plaintext on the wire.


Thank you. Yes, I assumed that the tunnel was encrypted, but I was interested in using Cloudflare only as an untrusted reverse proxy / bastion server in front of my personal homeserver, no traffic inspection or caching or anything else.

Your comment and u/pedrogpimenta's give very different answers, I guess I'll need to verify for myself.


Cloudflare Tunnel doesn't offer an end-to-end encryption option. If this is a must for you, either my own boringproxy or remotemoe[0] both offer this. I'm sure at least a couple others on the list[1] do as well but you'd have to check them individually. If you find any that do please consider opening an issue so I can add that information to the list.

[0]: https://github.com/fasmide/remotemoe

[1]: https://github.com/anderspitman/awesome-tunneling


You can do both, or even no TLS if you want. It's easy to choose in the domain preferences (it's only per domain, AFAIK).


Quick word of warning: I found it striking that even Cloudflare's Teams product, which supports Tunnels as a feature, does not make Tunnels private (e.g., by enforcing authentication, or restricting who can reach an exposed tunnel to your organization) by default. Anyone on the Internet with the Cloudflare Warp client can reach a Tunnel configured with default settings, a quirk that is not called out in their official documentation.


You can also put authentication in front of Cloudflare Argo Tunnels, so you can securely expose internally hosted applications to the internet. A zero trust or BeyondCorp model is usually way easier than VPNs etc. It is a really nice alternative to hosting BuzzFeed's SSO or Pomerium too.


A little off topic, but does anyone know the best way to run software on an unused Android phone? For some reason this seems harder than it used to be. My goal is to run Home assistant on it, and I am struggling with issues on Termux right now. There must be a better way.


Good luck, it's a hot mess. I spent considerable time last year porting boringproxy to run on Android. There are countless hoops to jump through for running server software, including:

* You have to run it as a foreground service so the user knows it's running. Not a problem in theory but annoying to implement.

* DNS name resolution doesn't work by default (with Golang at least) because Android doesn't use resolv.conf. I solved this by setting DNS servers manually to 1.1.1.1, 8.8.8.8, etc.

* You have to do weird hacks in order to run native applications such as Golang programs.

* Android has endless optimizations for battery life that are trying to shut down/throttle your program. As one example, I would see huge performance differences as soon as I turned the screen off.

Overall I consider Android to be a very hostile environment for native applications, and networked apps in particular. iOS is even worse from what I can tell. We need a mobile OS that respects the user's control over their device. I'm fine with sane defaults, but it should be easy to switch them off. I'm hopeful for the Pinephone, but we have a long way to go.
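
For the DNS point above, a sketch of that workaround in Go, forcing the pure-Go resolver to a hard-coded server (1.1.1.1 here, as an example) instead of resolv.conf:

  import (
      "context"
      "net"
      "time"
  )

  // useFixedDNS points Go's default resolver at a fixed DNS server,
  // bypassing resolv.conf, which Android doesn't provide.
  func useFixedDNS() {
      net.DefaultResolver = &net.Resolver{
          PreferGo: true,
          Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
              d := net.Dialer{Timeout: 5 * time.Second}
              return d.DialContext(ctx, "udp", "1.1.1.1:53")
          },
      }
  }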


sigh, thanks for the response. I think I may move on to a Raspberry Pi instead. Boringproxy looks like an interesting tool.


Honestly for technical users the RPi should be preferred IMO. The reason I want to get Android working is to bring self-hosting to the masses. Turning an old Android phone into a personal cloud by installing a couple apps and putting it in a corner would be huge.

Android is such a pain we might have to settle for shipping custom SD cards for RPi's though.


> an old Android phone into a personal cloud by installing a couple apps and putting it in a corner would be huge.

That's not a bad idea. It does seem like things have to be absolutely app driven. I wonder how backups would work with that? Multiple phones?


The ideal thing would be if you have multiple phones and can store one offsite at a friend's house. But that requires more complicated software and assumes people have multiple old Android phones laying around. I think more likely you'd pay a cloud service to handle backups for you. You just need to provide them with a read-only key, and then they can access it the same way you do.


If only there was a straightforward way to manage the credentials used by cloudflared for tunnels, bind them to specific websites, and revoke them.

In principle, there is no reason at all to use TLS inside the tunnel — the tunnel itself is authenticated and encrypted. Unfortunately, cloudflare tunnels feel a bit like a cute 20% project that was never quite finished and is barely integrated with the rest of cloudflare’s offering.

Hey jgc et al., if you're reading this, maybe the cloudflare console UI could have a pane for managing tunnels. And the pane for managing website origin servers could let you choose between the traditional cloudflare-initiated connection and a tunnel, and the tunnel mode could give some controls for how the origin server is protected, whether connections load balance across multiple tunnels, etc. And maybe even open-source the tunnel client for real, because it would be quite nice to have the actual origin server connect via a plugin instead of a separate daemon.

In other words, the hard part of this offering is done. Do the boring bits so it can be even better than the primary offering.


Feel free to email me jgc@cloudflare with complaints, ideas, etc.

The team that works on Tunnel just pinged me with the internal ticket where they are working on the management UI you are looking for. So... soon!


Will do!


This looks pretty interesting to me. Self-hosting a webapp origin server on hardware in my house, fronted by CloudFlare... hmm. Food for thought.


One of the limitations that wasn't immediately obvious to me is that you're mapping a single domain with these tunnels. So you cannot easily make *.example.com available via a cloudflare tunnel. (and when I tried it it wasn't possible with ngrok either, perhaps that changed)

I ended up switching to a business connection with my ISP, so I could get an extra fixed IPv4 address at my house and not need any of these tunnels. Obviously that is not an option everywhere.


Yes, we made it easier a while back. Now you can map customname.ngrok.io to your tunnel with a command line switch. If you want to use a CNAME, it's a similar switch, a dashboard entry, and an update to your DNS entries. I did it on my own domain in a couple minutes, flushed the DNS records, and had it routable in ~15 minutes. The full docs are here: https://ngrok.com/docs#http-custom-domains

Disclosure: I work at ngrok


The ingress example with multiple subdomains and a default service seems to suggest one can host more than one subdomain. It would require setting your tunnel DNS on the Cloudflare side to point all of them to the tunnel.


As a matter of fact, I have a 4-node kubernetes cluster running at home which is exposed through a CloudFlare tunnel on the internet. Works like a charm, and you don’t have issues with firewalls, NAT, and/or dynamic IPs.


Yes, this is possible. I have exposed some tools hosted on Raspberry Pi this way.


much cheaper than EC2 or Heroku.


I use this to expose services running in Kubernetes clusters and have Cloudflare tunnel pointing at my Kube gateways.

It makes a ton of things like cluster failover much simpler than they otherwise would be.


Yup, and you can even have multiple tunnels that are load balanced, so that you don’t even have to fail over.

We have a single API service which is exposed to the internet, and put the CloudFlare tunnel as a sidecar inside the same pods. This way, it’s actually CloudFlare which handles the load balancing, which is surprisingly effective.


Could you elaborate on the setup a bit? For cluster failover, do you mean that since Cloudflare is your frontend ingress you can easily point it to another cluster, or is there more to it?


Not the person you're replying to (but I am the author of the blog post being promoted here).

I believe they _may_ be referring to the feature of being able to run a single "tunnel" on multiple hosts, using the same credentials and ID. When you do this, not only will Cloudflare automatically serve from the geographically nearest server if it can, but when one client goes offline (when the tunnel is disconnected, not on an application error, sadly) it will automatically ignore that connection and serve from the others, providing some basic degree of failover with no extra payment or much configuration.

I believe you can also easily integrate Tunnels with the paid CF Load Balancer: https://developers.cloudflare.com/cloudflare-one/connections...


We integrate the tunnels with CFs load balancer service which basically lets us route traffic to one or more kubernetes clusters. Right now it’s just for failover where we can repoint a zone from one cluster to another but we’re also looking to route traffic geographically.

One of the great things about cloudflare tunnels is that even without load balancer we can send requests to multiple clusters if we want to.

Makes it really easy to replicate stateless services like ingress gateways.


One place where this would shine is running compute-intensive tasks (especially ones that involve a GPU) that are usually queued. Instead of throwing too much money at the cloud providers, set up this tunnel on your unused (or even new) machine and throw tasks at it.


Does anyone have experience with software you can self-host to achieve the same with a dial-out tunnel? I'm looking into a similar setup (connecting from an internal site to a private cloud, rather than to the Internet) and would prefer not to write the software myself if I can avoid it: network programming is tricky; network programming with failover, doubly so.

It's a real system with various security and compliance concerns; Cloudflare and dev-focused services like Inlet or simple SSH forwarding are unfortunately not going to work.


I am keeping an eye on this offering. In a B2B setting, this is a compelling way to expose certain sensitive services to the public web without forcing our customers to make complex/problematic firewall changes. Not everyone is sitting on a fat stack of public IPv4s they can just point at their infra. Many of the businesses we work with can't even accurately describe their own technology circumstances.

Reducing the conversation to "Can that server ping google?" would make my life 1000% easier.


I've been running caddy (with the cloudflare addon) to serve local services on a https url.

I then set my local DNS (AdGuard Home) to redirect my URL to its LAN address. Additionally, I run Cloudflare Tunnel to expose these services on the internet.

This allows me to use the URL for internal services both at home and over the internet, while having proper auth through Cloudflare Access when accessed over the internet. It has been working great for me so far.
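
For reference, a sketch of the Caddyfile side, assuming a Caddy build with the Cloudflare DNS plugin and an API token in an environment variable (hostname and port are placeholders):

  service.example.com {
      tls {
          # DNS-01 challenge, so no inbound port needs to be reachable
          dns cloudflare {env.CLOUDFLARE_API_TOKEN}
      }
      reverse_proxy 127.0.0.1:8096
  }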


I've just spent a few hours trying to use Cloudflare Tunnels to connect to my machine through SSH after reading this post. Unfortunately, I then found that SSH keys are not supported: https://github.com/cloudflare/cloudflared/issues/319 so I cannot disable Password authentication.


Huh? Cloudflare Access supports SSH. My Windows ssh client prompts me for my SSH keypair's passphrase, so I assume my server is checking my keypair.


Thanks for your comment. After trying it again, it has worked!


Yes, I use SSH keys, not password authentication, as well as PAM 2FA which is my normal SSH configuration. So the traffic is e2ee from my client to my server. Perhaps that issue refers to using personal SSH keys instead of the ~/.cloudflared/cert.pem which is used to encrypt the tunnel


Thanks for your comment. After trying it again, it has worked!


> No port forward headache, no complex configuration.

That's on page 10 of 12 on the print preview... It has another service running though; I find that adds a lot of complexity to the setup, but as usual, this has pros and cons.

Don't get me wrong, it's a good tutorial, but I'm not sure I find port forwarding more complex. I would argue that the strengths of this setup are different.


There is no mention of prices on that page; does anyone know how much it costs? Is it included in their free tier, or is it a "free" added service for customers who already pay for other services? If so, I'm curious what would be the cost of the minimum package to get this working.


Available on the free plan at no extra charge https://blog.cloudflare.com/tunnel-for-everyone/


Hi, I'm the author of the blog post being promoted here.

As noted by other commenters, Cloudflare Tunnel is completely free forever and does not cost anything. This was not always the case; it was previously tied to the Argo Smart Routing product, which cost money. The announcement of it becoming free is here: https://blog.cloudflare.com/tunnel-for-everyone/

I didn't mention price in the post because it was free, however from the comments I am thinking perhaps that is an important point to make. I will keep this in mind if I make similar posts in the future :)


I used v2ray + nginx on a Linode instance to expose a NATed port. I have tried cloudflared before, but it doesn't seem able to proxy the Cockpit GUI well. And the credentials (for the whole domain) have to stay with the device, which makes me a little nervous.


Another one for the alternatives list is Kilo[1]

It's a wireguard based kubernetes network overlay. I use it to access private services in my homelab cluster from my laptop, phone, etc.

[1] https://kilo.squat.ai


This appears to be similar to Azure AD Application Proxy. If it is they're one step ahead of MS because their App Proxy Connector clobbers MSAL auth tokens and they can't be bothered to fix the issue a year later.


I do this for our services, it works great and we can easily put SSO in front of them with CF Access. I publish a Docker container that you can use as a sidecar for your Compose deployments:

https://gitlab.com/stavros/docker-cloudflared

I use this with Harbormaster (https://gitlab.com/stavros/harbormaster) so I can expose containerized stuff without ever forwarding any ports outside of Docker.


Hi, I'm the author of the blog post being promoted here.

I maintain my own Docker image too for personal use (https://github.com/Erisa/cloudflared-docker) but I've never run into a situation where having everything as an environment variable was required or even desired. I really love the idea of that though, and I love that image!


Yeah, I did it that way because Harbormaster promotes configuration being passed as env vars, so I needed the image to support that. That way, you can deploy cloudflared to a server without touching it beforehand, just by adding the vars to the repo that describes what you want deployed.


An alternative to using cloudflared is using TLS client certificates to authenticate that requests to your origin server come from Cloudflare [1]. This is not quite as airtight as Cloudflare Tunnel because you expose a port for TLS but it comes close.
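
With nginx, for example, the origin-side part is roughly the following inside the server block (the CA file is the one Cloudflare publishes for authenticated origin pulls; the path is a placeholder):

  ssl_client_certificate /etc/nginx/certs/origin-pull-ca.pem;   # Cloudflare's origin pull CA
  ssl_verify_client on;   # reject connections that don't present Cloudflare's client cert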

[1]: https://developers.cloudflare.com/ssl/origin-configuration/a... "Set up authenticate origin pulls"


How does this compare to ngrok and can we combine them to host sites from our own servers behind a dynamic IP given by our ISP? Could be great for developers showing off their sites for instance.


I spent way too much time trying to get cloudflared working for a team RBAC/MFA SSH solution. Ended up going with Teleport instead.

I really wanted to love CF Teams but it's lacking some polish IMO.


Nice little write-up. Appreciate the hints on setting up a systemd service. That said, with the service being a system service, I'd probably prefer moving the credentials file:

> credentials-file: /home/ubuntu/.cloudflared/ed5bfe1 (...)

To either /root, or (more likely) /etc/cloudflared/, making it readable to root or to a system user created especially for cloudflared.

I like to think that my services will run regardless of the state of my /home filesystem.
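
As a rough sketch of that move (the tunnel ID is a placeholder):

  sudo mkdir -p /etc/cloudflared
  sudo mv /home/ubuntu/.cloudflared/<tunnel-id>.json /etc/cloudflared/
  sudo chown root:root /etc/cloudflared/<tunnel-id>.json
  sudo chmod 600 /etc/cloudflared/<tunnel-id>.json

and then in config.yml: credentials-file: /etc/cloudflared/<tunnel-id>.json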


How does it compare with ZeroTier, Tailscale and Nebula?


With Cloudflare Tunnel you don't need a VPN on the client.


You still need to run the cloudflared executable though. Cloudflare Tunnel currently proxies everything over HTTP/2 frames, but they've also started experimenting with QUIC[0]. This means everything runs in userspace. Main advantage here is it doesn't require admin privileges on the client and it doesn't mess with your network configuration.

If you use a VPN like OpenVPN or Tailscale (based on WireGuard), it will require admin in order to configure the network devices. The main advantage of WireGuard solutions is that they run in the kernel and can potentially be much faster, or at least more efficient. For tunneling, though, often your upload throughput rather than raw performance is the bottleneck.

[0]: https://blog.cloudflare.com/getting-cloudflare-tunnels-to-co...


Does anyone know if you can use a Cloudflare tunnel on a single subdomain without using Cloudflare on everything else?

It seemed like I had to run everything on the domain through Cloudflare when I looked into this in the past. That might be fine in the end, but I just wanted to try tunnels out first without committing to anything else.

Edit: thanks, everyone! This was just going to be a tiny web site for hobby purposes at first.


(I work at Cloudflare). You can sign up just a subdomain (sub.foo.xyz) as an enterprise customer and then add NS records at your DNS provider pointing that subdomain to Cloudflare.

Tunnels also has a testing domain you can use. It should give you a subdomain like xxx-xxx-xxx.trycloudflare.com for basic "How do I get this thing working" testing.


helo


Unless you want to pay for the business plan with a CNAME Setup[0], you do need to use their DNS offering, even if the rest of your site's DNS records are 'unproxied'. If you just want to try tunnels at all, with a non-descript hostname, Tunnel gives out subdomains that end in trycloudflare.com[1].

If you're referring to the TOS issue that is often discussed here, it depends on what that subdomain is, since Cloudflare doesn't just want to be pushing binary data for free. If the subdomain is some website that is primarily used in the browser, CF will generally be fine leaving it up even if you push TBs a day, but if it's just a file host CF has been known to flag that for abuse and disable proxying for the domain[2]. As for why they bother with a free plan with such cryptic rules, their S1 explains it[3].

0: https://support.cloudflare.com/hc/en-us/articles/36002034883...

1: https://developers.cloudflare.com/cloudflare-one/connections...

2: https://community.cloudflare.com/t/the-way-you-handle-bandwi...

3: https://l.judge.sh/85EH

(I am not a CF employee nor your lawyer)


You can have Cloudflare handle your DNS, though nothing more. Each DNS record has an extra setting to Proxy. For the tunneled CNAME the proxy must be turned on. For anything else to pass through as traditional DNS, set the Proxy setting off.

*edit: Learned here in this discussion that moving NS servers to Cloudflare is not even required. I’ll need to test that.


I just started using Cloudflare Tunnel this weekend to expose a service hosted at home. I love that I don't have to open any ports up, that my home IP isn't exposed, and that I don't need to worry about maintaining my own reverse proxy to host multiple sites on the standard ports.

I know there's other ways to do this, but Tunnel made it extremely easy.


I'm using a Cloudflare tunnel to expose Home Assistant protected by Google Auth and use it anywhere from my personal devices.


Does the home assistant Android app allow you to login with your public url?


Unfortunately not. It opens Chrome to authenticate with Google but never redirects back to the native app. So I 'installed' Home Assistant as a PWA and found there is practically no need for me to use the app. On iOS it does work with the native app, though.

Note that if you are not using Cloudflare Access as an additional authentication layer and only rely on Home Assistant authentication, the Cloudflare tunnel obviously works with the Android app. It's just that I was too paranoid for this.

Maybe I'm overly cautious. Home Assistant does have two-factor as an option as well, doesn't it?


Is it possible to run a mail server behind a Cloudflare tunnel? Our ISP uses CGNAT, making it impossible to port forward.


Hi, I'm the author of the blog post being promoted here.

No, this is not possible. Cloudflare Tunnel focuses mainly on HTTP traffic but also supports SSH, VNC and generic TCP only in situations where the client also uses the cloudflared client to proxy it back to their localhost. Hosting a mail server with these restrictions is not possible I'm afraid.


That's what I thought. Thanks.


Debugging Cloudflare Tunnel is a PITA. We are using it in production, and have the most random outages that leave us guessing what triggered them. The errors are vague to say the least, and there is not much in terms of an existing community. Otherwise, it is easy to set up and works great when it does.


Great write up here, helps supplement the docs perfectly.


My go to is ngrok.


I'm a little confused about hostname routing. You set up a config file with hostname values like either of the two below:

  ingress:
    - hostname: myapp1.examples.com
      service: http://localhost:8080
    - hostname: myapp2.example.com
      service: http://localhost:8081
    - service: http_status:404

  ingress:
    - service: http://localhost:80

Then later you explicitly route to a subdomain for the simple case (the second one above):

  $ cloudflared tunnel route dns mytunnel test.example.com

Now you're on a subdomain, how would I handle this routing case for the more complex case from above?


Hi, I'm the author of the blog post being promoted here.

The `cloudflared tunnel route dns` command creates the DNS record mapping the tunnel to the domain. The tunnel's config maps the hostname to the local service, and you can have multiple of those, one for each service. So for the example above, you would create a DNS record for each domain pointing to the same one tunnel, and that tunnel will route based on the ingress rules.
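
Concretely, using the hostnames from your example and assuming the tunnel is named mytunnel:

  $ cloudflared tunnel route dns mytunnel myapp1.examples.com
  $ cloudflared tunnel route dns mytunnel myapp2.example.com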



