This guy is underestimating just how squarely he is the target market for such a free tier service. He is smart, technical, about to go to college, and in a few years will be evaluating which service to use for his startup or big tech job from among a field of competitors.
Today, I am hosting most of my stuff on GCP and I'm quite reluctant to spend time learning about other providers' offerings.
If Cloudflare can keep on bleeding money at such scale, in a possibly interest-rate-increasing world, that is.
In 2021 it had a revenue of $656M, and costs of $917M.
> The smart move is to instead pay your employees [...] and grow your customer base
> give away your services for cheap
Disagree. Giving away your services for cheap is a great way to build an unsustainable business. It's often part of an effective strategy, sure. But if you just default into "oh, we'll just sell it cheaply and increase prices when we're big," then be sure to time your exit before everything collapses. Selling $1 for 90 cents only works for a limited time.
That's the point with VC money, for some markets. By burning the entire market for everyone else, you can then own it. At smaller scales and for some products this sucks because VC money distorts the market for otherwise perfectly/better designed products but that don't have the same monetization possibility.
I believe it's one reason so much VC money is pouring into web3 now. They aren't about to let an actual decentralized vision take hold.
I think your parent comment's view is that these go hand in hand. Sure, there are tons of bad examples of this which really do only amount to selling $1 for 90c. Sometimes, however, the point of customer acquisition isn't just so that you can switch from selling $1 for 90c to selling 90c for $1 (which won't work), but to solve your scaling problems before you try making a profit. It's easier to fix issues and deploy new or experimental technologies in a period of easy money and customers flooding your doors than it is when you're skating by on thin margins.
But this describes to me the "sell $1 for $.90" strategy, not the other way around. IOW I think it's an argument for why selling cheaply is often not a good idea.
And wrt the parts of your grandparent's comment you quoted, it's not really accurate to call it "growing your customer base" when you're growing it via giving away unreasonably cheap services. It is often the case that the majority of customers acquired in such a way won't stick around when prices are later increased to a realistic level.
This is why in general I advocate for businesses to price well, not cheaply, and to try to get real growth - that is, people willing to pay the money your services are actually worth. There are times where "acquire new customers at all costs, even by giving unrealistically good bargains" is the optimal strategy, but I think it should be something done 10% of the time rather than the 95% of the time it's done in our industry.
That being said, I think there's a healthy ratio between speculative and dividend-yielding investments, and the market (especially the tech market) is nowhere near that healthy ratio.
You are hitting on a correct notion here though, which is that if the market totally went away - ie you held your shares but couldn't sell or buy - and you're an investor who owns .0001% of a megacorp and >50% of the shareholders won't vote to issue a dividend, then the security is worthless to you. Similarly, if the market does still exist but for whatever reason the company is "unfairly" valued extremely low by the market, then having dividends gives you an "anchor" to hold on to - at least your asset is giving you a 1% or a 3% or a [insert dividend yield here]% dividend while you wait for the market to value it fairly again - whereas without the dividend you've got nothing.
(Of course FU money is another thing, and something many VCs approve of. Edit: since it seems to me this is a rather uncommon term these days, it refers to giving the founder the initial investment and then a healthy amount extra in payout, so their decisions won't be clouded anymore by the fact that they want "their" money back. At least that is the theory.)
I'm thinking "toys" was a bad word. I'm thinking getting a reliable Tesla as a means of personal transportation, not a yacht.
This isn't even an entrepreneur/VC/tech thing. You can get there from first principles without intelligence.
Obviously there's a subtlety to it wrt timing etc., but the point of the money is to use the money for some thing.
A net loss of 21% of revenue is hardly breaking even.
We were profitable the last two quarters and cash flow positive the last quarter.
Tip to anyone reading this, even if you think you're going to be a software developer the rest of your life, take an accounting course.
I currently have it running on top of some ancient legacy appengine apps to clean up domains/URLs, add a caching layer and keep my costs at basically zero overall (workers doing a little bit of work as well).
I'm tempted to ditch Google's legacy free workspace product using the email forwarding as well.
It's like Twilio: there just isn't anyone close from a competitor perspective. As long as they keep improving, they are so far ahead it's their game to lose.
And just to clarify: I do everything on the free tier.
Go on then: https://blog.cloudflare.com/migrating-to-cloudflare-email-ro...
One feature I would love for Cloudflare to add is the ability to Reject emails to specific addresses. Drop works as an alternative because it doesn't deliver the email, but it still validates that the address exists. Having Reject as an option would tell the sender that the address is invalid.
Deliverability concerns have me scared to use anything that’s not a proven strong success for outgoing mail and SES seems very affordable for sent emails, especially if it’s not receiving them.
Plus we have a few extra features like blogging from your inbox (like Hey World) and a CDN to host and share images and files.
This looks awesome! Thank you so much for building it and for reaching out here. Very glad I found it.
The biggest feature I “need” is to be able to send emails from any address on my domain even though I only want one login / one inbox. I’d like any wildcard addresses to go to that inbox as well:
So my intended workflow is:
1) Sign up at www.vendor1.com using email@example.com (even if this address is not already registered on my incoming mailserver)
2) Any emails sent to *@mydomain.com should just go to my single unified inbox.
3) When I send an email I’d like to be able to specify sending from firstname.lastname@example.org or email@example.com (or anything else for that matter). Replies would ideally automatically have the sending address filled in as whatever address was used to reach me in the email I’m replying to. I’d like this to be accomplished without the typical use of reply-to or aliases, so that the headers look perfectly normal for all outgoing emails, as if each were a real account.
Curious how much of this functionality your service could support. I was planning to build it all myself so I’d still be happy to build it on top of your service, as long as wildcard sending and receiving could be supported.
Finally, purely out of curiosity (it doesn’t affect my decision), what are your plans for monetizing the free users?
Also some other notes (while reiterating that I like the idea of your service and like that you reached out to me here):
- I can see one of your testimonials is from a technical woman who runs a blog at ——-@moogle.com, which is also the domain linked in your HN profile. The testimonial came across as “I had my friends try this and this really was their genuine response, but we ‘marketing-ized’ it,” and seeing the same domain on your profile confirmed that initial reaction. That’s fine, but eventually getting a testimonial from someone with gravitas, or just stating “our customers like x, y, z,” might feel more genuine. The testimonial from “Sandhya K, Assistant Professor, Economics @ MDAE” worked super well, and could almost have been even higher on the splash page, because it was the first thing that made my brain say “ah, okay, cool, I’m starting to see exactly what this product is, and it’s potentially right for me.”
- I’d love a little animation or additional explanation about how the buckets work. Do they need to be set up ahead of time, does each ‘bucket’ count as an “account” for billing tier purposes, etc
- Pricing page isn’t immediately clear what paying $30/mo actually gets you. Having the features side by side with similar open bullets vs filled in bullets to signify “this is included, this is omitted” helps instantly figure out what the end user value prop is. This info could also be on the landing page. I get suspicious of “click here for pricing” because I worry it will make me email someone and there wont be publicly posted prices. Something about the button style for pricing on main page implies “contact us for pricing info”
- I see now on the Notion blog the following verbiage. I’m not sure if my use case above is truly your target user story, but had I seen this blurb at the beginning of the main page, I’d already be signed up before typing all this! It’s the perfect summary, at least for my user story.
> With PretzelBox, they can hand out as many email addresses as they want to different audiences, without provisioning them upfront, at no extra charge.
> E.g., their affiliates can use *firstname.lastname@example.org* while inbound sales enquiries can be handled by *email@example.com.*
> The best part is that all the emails received at these different addresses are automatically forwarded to the email account used to sign up for PretzelBox. So while vendors and customers alike think they are communicating with a company with different departments managing different facets of their business, behind the scenes there is a far smaller team running the show.
Yes, it’s possible to send emails from whichever email you received an email on. E.g., if you receive an email at firstname.lastname@example.org, you can reply from email@example.com or optionally firstname.lastname@example.org. Of course, AWS SES manages sending reputation very carefully so email sending might involve some paperwork.
Yes, one of the testimonials is from a moogle user, and it does seem very “you-scratch-my-back-I’ll-scratch-yours”-ish, but here’s the backstory: while I was ideating PretzelBox, moogle was the brand name under which I launched the blogging piece of PretzelBox.
I’ll look into your other suggestions and explain/highlight them more on the home page if possible.
I’d love to have you as a user given that we probably match 80% of your stated needs. I’m available at Sai @ PretzelBox.cc if you want to follow up.
BTW if you don't go with PretzelBox consider the following for running mail:
I personally use maddy and it's awesome
CloudFlare will forward all mail, so if you used to use your domain for catch-all (to see who passed your address along to spammers) this can let you keep those addresses working, which I realised I probably should if I wanted to be able to do password resets.
The article says:
> My issue arose when I realized that they remove your SSL certificate, then use their own. Cloudflare is a big MITM service.
All you have to do is change one setting in the DNS record (from Proxied to DNS Only) and you totally bypass the Cloudflare reverse proxy for that resource. So you can use your own SSL cert (if that's your thing). And you can still use Cloudflare access/zero trust with that resource. And solid and free DNS service. And cheap domain renewals.
Cloudflare is a great set of tools for many use cases.
How am I supposed to reach out to Cloudflare support?
If I start at cloudflare.com, and click "Support", "Contact Support", that takes me to the Cloudflare Dashboard. In the upper corner, there is "Help Center" and "Cloudflare Community". In the bottom, "Contact support". "Contact support" and "Help Center" both lead to the same page: https://support.cloudflare.com/hc/en-us ; that page has a giant "Submit a request" button on it. Clicking that takes me … back to the dashboard. I suppose there is the community, but the UI seems to imply there is a "file ticket" thing, but the UI seems broken.
Also, your RDAP implementation — is the API (not just the web UI) to it something people are free to use? Additionally, it's not quite RDAP; it doesn't conform to the RDAP RFC (RFC 9083) in the eventAction string used for expiration dates. (It calls the eventAction "Expiration Date"; the RFC calls it "expiration". Parsers are picky.)
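To illustrate why the nonstandard eventAction string trips up strict clients, here's a minimal sketch of an RFC 9083-style lookup; the JSON payloads below are illustrative stand-ins, not actual server responses:

```python
# Per RFC 9083, the registered eventAction value for expiry is "expiration".
RFC9083_EXPIRATION = "expiration"

def find_expiration(rdap_response: dict):
    """Return the expiration timestamp from an RDAP 'events' array,
    or None if no RFC-conformant expiration event is present."""
    for event in rdap_response.get("events", []):
        if event.get("eventAction") == RFC9083_EXPIRATION:
            return event.get("eventDate")
    return None

# Illustrative payloads -- placeholder dates, not real responses.
conformant = {"events": [{"eventAction": "expiration",
                          "eventDate": "2025-01-01T00:00:00Z"}]}
nonconformant = {"events": [{"eventAction": "Expiration Date",
                             "eventDate": "2025-01-01T00:00:00Z"}]}

print(find_expiration(conformant))      # -> 2025-01-01T00:00:00Z
print(find_expiration(nonconformant))   # -> None (strict parser misses it)
```

A picky parser written against the RFC's registered values silently loses the expiry date from the nonconformant response.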
For me, it ended up on this URL:
Basically, append /support to your dashboard URL.
That said, I hope he doesn't give up the search for an alternative method, even if it's stashing a box in a hidden part of the library.
"Free Tier: All Google Cloud customers can use select Google Cloud products—like Compute Engine, Cloud Storage, and BigQuery—free of charge, within specified monthly usage limits. When you stay within the Free Tier limits, these resources are not charged against your Free Trial credits or to your Cloud Billing account's payment method after your trial ends."
There is a small catch though: getting a static IP for the instance is not free, and your egress traffic starts costing money after you exceed 1GB of outgoing data in a month.
The cool thing is (afaict), getting a static IP and going over 1GB egress doesn't totally knock you out of the free tier, you just pay for the static IP and any data usage past 1GB. My setup ends up costing me under a dollar per month with a static IP and very light egress usage.
The performance of the machine is not great, but it's also not terrible considering the cost. I'm pretty sure you could run a small website and some other services on it, no problem.
That's my favorite part of Oracle's always free tier, in comparison to other offerings.
My understanding is that this would require a change to TCP/IP itself, and NAT probably makes this system unworkable in IPv4. But it would result in truly distributed DDoS protection. This also brings to mind the nightmare scenario of exponentially larger parts of the IPv6 space being hoovered up to satisfy the needs of spammers. Imagine burning one IP address per message!
EDIT: Ok, I thought about this a bit... would it actually work? Would the packets just find another path, and actually end up using more network resources as they traverse successively longer routes?
There's a trust issue, too: how far up the chain do you propagate the signal "suspected abuse from this host"? How hard would it be to abuse this system to censor people you don't like?
I don't have a mathematical proof, just an intuition about how the graph (nodes are routers, links are routes) would change in time. Yes, the packets might find another path, at least at first, but each time the recipient signals rejection every intermediate goes dark. Eventually, the sender will be surrounded, at some radius, by dark nodes. The best case is if the bad sender is rejected very quickly.
>how far up the chain
All the way, as close to 100% as possible. I would want this to be part of TLS encrypted socket such that either side of the socket can signal unhappiness with the counterparty, and have that be respected. This should be difficult to spoof, so you'd have to pick a way to signal it within the TLS stream, and do packet inspection at the middleboxes. (I never said it would be an easy solution!)
When I was in college (2000-2006) you could fill out a form to request to run a server. I'm not sure how common that is today, but it's at least worth asking about.
Some have entire IPv4 subnets. Others have a grand total of three public IP addresses to distribute across every user on the campus.
Yes, for sure, different times. I still laugh about how easy it was for him to get credentials which gave him a sort of 'super user' privilege on our own network, just by having professor credentials.
… 15+ years later, and my ISP (Verizon) still hasn't caught up…
Note: The graph url is dead.
I put an unpatched Windows 2000 machine on the network, and the system logs were littered with intrusion attempts—more than 500—after being online for a few minutes.
This was in 2014, but afaik it's been like that for the better part of the 2000s at the very least, likely the late 90s as well, though probably not gigabit at that point.
Their networking/routing is based on routing your stuff cheaply in the free tier (e.g., assets).
This is done at whatever location/datacenter is cheapest for Cloudflare, depending on usage/capacity in nearby datacenters: performance-optimized, nearby routing for paying customers; cheap routing for free tiers. This helps bring usage to their underused datacenters too.
If you bring your own certificate, you aren't letting Cloudflare decide how to route the files and are using a direct line, circumventing the cost reductions they implemented.
Because of that, I think they require you to be on the Business tier if you want to use your own certificate. And this requirement affects almost solely professional clients anyway.
Can someone from cloudflare correct me if I'm wrong? It's just my 2 cents.
Note: cheap doesn't mean slow. It mostly means that you won't be routed to a datacenter that is almost at capacity.
I'm not sure there's any technical reason for this feature being on the paid tier. Domains on the free tier have their own SSL certificates too -- they're just certificates which Cloudflare issued internally, rather than ones provided by the customer.
The route from Cloudflare to the server is without SSL. So I think that makes it possible for Cloudflare to cache things on the same domain but at different locations.
Since I don't think they can cache things when you use your own SSL certificate.
That's a user preference, but the most common configuration uses SSL on the back-end (Cloudflare to server) connection.
> Since i don't think they can cache things when you use your own SSL certificate.
I don't see any reason why that would be the case.
Cloudflare does support a "keyless SSL" mode for enterprise customers, but this only applies to the certificate's private key -- the session key for each TLS session is still under Cloudflare's control, so they still have access to the contents of the HTTPS session.
If you’re not going to do that, there is no benefit to using a CDN. Just send all the traffic to your origin and run your own reverse proxy there.
You still get DoS protection.
- Provide a free or very cheap service without lock-in antipatterns
- Capture a large share of the market
- Create lock-in
- Extract money by charging fees or doing surveillance or leveraging market power
It's amazing how the tech crowd falls for the same tricks decade after decade after decade.
It seems like the OP could SSH or FTP into a server at his parents' house from campus.
It’s not limited to 1 year like AWS; they say it will be free forever.
In fact, their free tier is designed such that you only use free services. If you want to use their paid tier, you need to explicitly upgrade. I like that because that means I’ll never run into any unexpected charges.
Let me check once again.
I'm already abusing (not really, of course; I'm within limits) my Heroku free tier, and I had something running on OpenShift until they removed the free tier.
Edit: To the downvoters, have you ever been through an Oracle audit? I can tell you it's "bend you over and thoroughly check out your insides" invasive enough that I will literally never again willingly work with such a company.
I was going to try to whitelist Cloudflare IPs, but that seems like a rather large list that can change at any time. This tunnel thing seems perfect. Is it as simple as running some daemon on my home network and then closing the ports?
Edit: but yes, the tunnel ‘thing’ is exactly that! So would be even better.
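For the record, it really is about that simple. A sketch of the tunnel config, using placeholder names and hostnames throughout (the real workflow is `cloudflared tunnel login`, `cloudflared tunnel create <name>`, then `cloudflared tunnel run <name>`):

```yaml
# ~/.cloudflared/config.yml -- sketch; tunnel name, hostname, and port
# are placeholders for your own setup
tunnel: home-tunnel
credentials-file: /home/user/.cloudflared/home-tunnel.json

ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # your local service
  - service: http_status:404        # required catch-all rule
```

The daemon only makes outbound connections to Cloudflare's edge, so every inbound port on the home network can stay closed and no IP whitelist is needed.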
I mean, did you ask? Tech people work there, too, and in my experience they love working on this sort of cool stuff.
Doesn't hurt to ask.
No need to apologize. Happy to have you. Promise, we won't sell your data or do anything sleazy. Too easy to leave us if we did. And good luck in college!
PS - When you get a good paying job after college and they ask whether they should use Cloudflare or not, hope you'll tell them how well we treated you. That's how we get most of our multi-million dollar customers.
Glitch.com might be another option - static websites are always-on and free. If you really do need hosting, $8/mo for 5 Linux containers is a nice option. Personally, that seems cheaper in the long run than running a bunch of stuff on your own hardware, network, and power. Maybe even add a "buy me a coffee" badge or hit up your mailing lists for help covering hosting costs.
> There are many other services I am running (web-based IRC bouncer TheLounge, FreshRSS, and Keycloak, just to name a few) that won't be receptive to being run on Github Pages or whatever.
Here is the thing: you can just choose not to do that. Leave it where it is and manage it remotely.
Get a web power switch. With that, you can use its web UI to power cycle equipment.
I have a web switch that can be programmed to periodically ping equipment and power cycle certain outlets if that fails. One good target for that is your router. Routers sometimes screw up and stop routing; in that case you would have no way to get into the web power switch to manually reset anything.
Home is probably the better place for the server.
You think you have solved all the problems in advance with Cloudflare and whatnot, but you have no idea. The campus network could block needed ports, or otherwise make trouble. Like, oh, accuse you of "hacking" or something. IT people in schools can be dickheads.
If you're in shared accommodations (dorm), people may have physical access to the server and tamper with it. Even just accidentally.
Plus there is the down-time of moving the server. Transportation risks.
If it's working nicely, just leave it.
I understand it's a kind of comfort blanket that your server is right there under your desk, but that's just an emotional thing you can let go.
If you're in the US, privacy.com. Used it for years, never had an issue. Create a card, use it on whatever, then limit the card to a dollar or something. Or just pause it. Worst case, Cloudflare tries to charge you $100 and emails you when the card declines.
Source: my own recent experience in registering before/after obtaining a credit score. privacy.com won't confirm if this was the cause.
You get $100 in DigitalOcean credits. That's 20 months of hosting a VPS; plenty of time to figure out how to come up with $5 per month thereafter.
> I could reduce costs and run a minimal VM that acts as a WireGuard VPN server and proxy TCP using fancy firewall rules or whatnot, but that would also cost money.
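For what it's worth, the "fancy firewall rules" part is only a few lines. A sketch with nftables on the VPS, assuming the home machine is the WireGuard peer at 10.0.0.2 (the address is a placeholder, and `net.ipv4.ip_forward=1` must also be set):

```
# /etc/nftables.conf fragment (sketch; 10.0.0.2 is the assumed WG peer)
table ip nat {
    chain prerouting {
        type nat hook prerouting priority dstnat;
        # Forward web traffic arriving at the VPS down the tunnel
        tcp dport { 80, 443 } dnat to 10.0.0.2
    }
    chain postrouting {
        type nat hook postrouting priority srcnat;
        # Rewrite source so replies come back through the VPS
        ip daddr 10.0.0.2 masquerade
    }
}
```

The masquerade rule means the home server sees the VPS as the client, losing the real source IP; proxying at layer 7 instead (e.g., with a reverse proxy setting X-Forwarded-For) avoids that trade-off.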
This whole article was a rant about how I can't get anything good for free.
On a more serious note... I'd like to tell this person that the fact they are already getting so much free stuff is the reason why they have to be, and are, so skeptical and cynical about MITM, security, and privacy topics.
I don't understand this disqualification. I've been a student, my monthly "income" at that time was probably about $800 before rent, which ate up about half. I don't see how $5 a month for a VPS is something that is unreasonable to assume you could afford, especially if you have a technical interest like this person clearly has.
Fellow student here. I love free tiers, but they can also be extremely frustrating. I actually prefer paying for reasonably priced services; that way you know what you're getting. For example, I pay 2€/month for a 1vCPU/2GB Hetzner VPS with 20GB of storage and 20TB of free egress (prices increased recently due to the IPv4 shortage, but they still charge me only 2€). That's about as much as a cup of coffee in CH; I can't recommend Hetzner enough. It could easily handle all my traffic without Cloudflare, but having started to get a lot of spam through that domain, I'll take any protection CF might be giving me. Also, setting up and maintaining Ubuntu from scratch has been extremely beneficial for me.
If you're limited by your location, there's nothing wrong with offloading to a VPS or running a reverse proxy. Considering you mentioned port 80/443, I imagine that a NGINX reverse proxy would be sufficient? It's basically your own mini-cloudflare proxy, only you're in control.
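A minimal sketch of that "mini-cloudflare" on the VPS, assuming placeholder hostname `home.example.com` and a home server reachable over the VPN at 10.0.0.2:

```nginx
# /etc/nginx/conf.d/home.conf -- sketch; hostname, cert paths, and the
# upstream address are all placeholders for your own setup
server {
    listen 443 ssl;
    server_name home.example.com;

    ssl_certificate     /etc/letsencrypt/live/home.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/home.example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.2:8080;   # home server over the VPN
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Unlike a raw port forward, this preserves the client IP in X-Forwarded-For and lets you terminate TLS with your own certificate, which addresses the MITM complaint from the article.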
> I had used Cloudflare briefly before. My issue arose when I realized that they remove your SSL certificate, then use their own. Cloudflare is a big MITM service.
Why is this an issue? Any platform that serves requests on your behalf must be able to decrypt the request payload, and I think any security expert would argue that having them use their own trusted certificate is much, much better than giving up the private key associated with your own certificate. Trust me, nobody (+/- 0.1%) checks the certificate when visiting your website. If they found a way to just forward TCP packets, what useful service would they provide? A port-forwarding alternative? It's not so much a man-in-the-middle attack; you agree to a ToS and they provide a service. Ultimately, the DNS record is the authority on domain ownership: any server the DNS record points to is authorized to represent that domain, and that is sufficient to request a certificate from Let's Encrypt, for example.
> But I still don't like you.
Why not? I've been using Cloudflare for years, if only for the great DNS management panel. Of the "cloud" companies, they're my favourite. I believe Vercel uses their infrastructure (they run their own datacentres), they're great to listen to on podcasts, and they have some interesting disruptive tech coming out, namely R2, which as a small filmmaker hobbyist is truly a gamechanger for me. If their interests diverge from mine, I simply terminate my account and reassign the nameservers with my registrar. There's no lock-in.
> I don't like the centralization of the internet. Routing everything through Cloudflare, using Amazon AWS, all of it (in my opinion at least) harms the internet by consolidating power in the hands of just a few large monoliths.
I think you can do the same with NLBs. ALBs, however, have to terminate the HTTP connection, so no choice there.
Still a bit cheaper than AWS Lightsail.
I want to store email on a "backend" server not exposed to the internet, and have a separate email server that interfaces with the rest of the internet and both sends outbound email (delivers emails sent from the backend) and receives email (forwards them to the backend), but doesn't store any at all. Like a forward+reverse http proxy, but for email.
Any email server can do it; it's a very basic email feature. Many services will do it for you (but a few will block it).
The relay proxies all inbound and outbound mail, performs spam filtering but stores no mail.
I do this because my home PC's IP changes constantly, and my ISP also blocks port 25.
Relays are easy to setup, mailcow-dockerized has built in support.
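A sketch of the relay side with Postfix, assuming the backend is reachable at placeholder address 10.0.0.2 and the relay at `relay.example.com` (mailcow's relay support wraps roughly the same settings):

```
# main.cf on the internet-facing relay (sketch; names are placeholders)
relay_domains = example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport -- route accepted inbound mail to the backend:
#   example.com   smtp:[10.0.0.2]:25

# main.cf on the home/backend box -- send all outbound via the relay:
#   relayhost = [relay.example.com]:587
```

The brackets around the address suppress MX lookups, so delivery goes straight to the backend host. Spam filtering (rspamd, SpamAssassin, etc.) hooks in on the relay before the transport hands mail off, so the backend never stores or filters anything from the open internet directly.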
Maybe it's something on your end? I've checked the site on online services, three different networks and two different devices.
> Unfortunately, I doubt the tech department at the university would appreciate or allow me to poke holes in their firewall to forward 443 and 80 to my server
A machine connected to their networks is just publicly accessible by IP address.
Sounds very sad if true!
> Loophole service requires authentication, this command allows you to log in or set up one in case you don't yet have it.
A $4/month cloud VPS running WireGuard will do the trick.