Hacker News | ji_zai's comments

Thank you! I've been itching for a product like this. I hate the strain computer screens give me, and the Kindle and reMarkable felt too restrictive. Building on Android is a good move.


I believe the underlying cause is that the pay-to-learn model itself is not feasible: the value of a good teacher's time can't be compensated without charging students a ridiculously high fee, and a student can't justify a high fee when the outcomes aren't guaranteed.

Colleges have worked in the past because the outcomes were more or less guaranteed (i.e. if you got good grades and graduated, you would very likely land a job), but even that is no longer valid (outside of regulated domains like medicine, law, etc.).

I predict we will see many, many more pay-to-learn companies and institutions fall in the coming years. ("Fall" could also mean becoming irrelevant, catering only to those who don't understand how the world has changed and still mistakenly believe such programs will prepare them well.)

BloomTech, in order to survive, had no choice but to play the games it did and mislead. That's a byproduct of not having a viable business.


That, plus there is an extremely strong negative selection effect for coding bootcamps.

Your local community college could have the best program ever, but they won't beat Ivy League grads, purely because of the inputs.


Absolutely. Higher-quality inputs actually reduce the forcing function for the education to be high quality, since you'll get good outcomes anyway.


>reduces the forcing function for the education to be high quality

Not just the education, but the selection process too.

If the higher-quality inputs are abundant, then the overly-exclusive selection process is somewhat deleveraging of overall potential.

If the higher-quality inputs are scarce, they are outnumbered by others having average-to-below-average potential, who often seek the exclusive membership more so than any actual high-quality performance.

Once again a large percentage of the highest-quality inputs can be systematically excluded in a disadvantageous way.

With good fortune at least a good number of high-quality inputs do gain entrance and it can set a good example, sometimes realistic, sometimes not.

Either way the higher-quality inputs are best identified beforehand, not the result of an overly exclusive selection process.

But it's this type of selection process that contributes so much to some institutions' perceptions of quality, when they could be doing so much more.

From the least-prestigious programs all the way up to the most-prestigious, it seems like there is always going to be some temptation to blur the distinction among peers and tiers in a way that's confusing to students, and it's just a matter of integrity whether that is taken a bit too far.


Bug: I chose French, and I had to type “quelqu'un” - but no apostrophe I used was accepted.

Neat project though!


> Using a Mistral-7B FT trained on GPT-4 outputs allowed us to parse through hundreds of thousands of Reddit threads in a simple way with just hundreds of dollars of compute.

Great idea. This sort of clever approach is needed to build products that benefit from scale. When the cost of inference goes down, it enables new experiences. And finding clever ways to reduce cost before the big providers do is a massive competitive advantage that makes it tough for those who wait to compete with you.

Anyone building AI products should take note.


The missing part of the story is when we made an early prototype using GPT-4, left it on overnight, and realized we'd spent several thousand dollars of OpenAI credits...


Aaah, I can imagine the panic I'd be in.

Yet, such pain is where the innovation comes from :). Wishing you all the best! And I plan to try this out once it covers more product categories.


I agree with the author, but they don't seem to get into what we can do to get it back to a more delightful experience.

I don't think the solution is to have more passionate builders in existing organizations that have overly short-term priorities (increase revenue this week vs. re-imagine a market).

I only see things changing through better competing products. Similar to how LLMs have enabled products that completely circumvent crap in search results (e.g. low quality listicles), we will see the same happen for most products and services.

Some products are going to be tough to displace. It might take a new airline startup that owns everything from ticket purchasing to in-flight entertainment to last-mile transportation (airport to destination) to show the world what an incredible flying experience could look like, and that will force other airlines to at least try to catch up.

I hope more people with this sort of frustration (which I share!) try to re-imagine experiences from the ground up and don't shy away from tackling problems in big ways. We need more of it.


> If you're willing to do a lot of rewriting, you don't have to guess right. You can follow a branch and see how it turns out, and if it isn't good enough, cut it and backtrack. I do this all the time.

I feel like even if you guess right, you can't help but do a lot of rewriting and exploration because otherwise you'll always feel that "something's missing", and that you're further from the truth than you could be, that there's a better explanation around the corner if you study this subtree a bit more.

In the process of writing my current essay, I've probably written a short book's worth of meandering thoughts and notes. I can't imagine it any other way, because doing so would mean imagining myself actively not exploring an interesting subtree or implication of something I wrote that could impact the crux of the essay.

And of course none of this is wasted effort, as PG also mentions. Identifying the next essays from the ideas I've left out of this one is simply an act of observation and organization based on the exploration I've already done.

And likely when I start the next essay, this process will repeat itself.


How do you organize those notes?


Right now using Obsidian (without being good about connecting notes). I'm just writing, dumping into a folder, and leaving further organization / analysis for later.


So you write when inspiration strikes, and you'll re-read and sift through later?


It's sad to see many comments here assume that Sam was at fault for something without really knowing what happened behind the scenes.

Let's try not to jump to conclusions, as hard as that may be.

And if you're actively wishing people to fail, then I ask you to try to understand where that stems from.

If you think OpenAI should be run differently, and you have the inclination, I genuinely encourage you to build a competitor. I actually mean it.

People think that OpenAI is too far ahead, but the fact is that someone with the capability to understand things from first principles will often find a unique, valuable insight to compete with. And the world is sufficiently decentralized that you can't be stopped from pursuing this, even by OpenAI.

As technology becomes more powerful, first-principles reasoning >> factual knowledge (information) about how things work. In a sense, OpenAI's work reduces their own moat.


I assume many people who want OpenAI to fail want the whole AGI program in general to fail, at least for a few decades longer. A competitor won't help; it'll plausibly make it worse.


Agreed. I was talking about folks who wish AI were being built differently (i.e. more transparently, etc.) as opposed to being against the rapid pace of AI development itself. I should have made myself clearer.


Personally, what I'm thinking of is AI cooperatives: democratic, member-controlled organisations with relatively low fees that train AI aligned with the interests of the members.

Such organisations won't change the fact that functioning AGI, or very effective LLMs that can program or solve general text tasks, reduces the power of workers. But they'd at least mean that ordinary people control their own propaganda apparatus and aren't stuck with models that are part of someone else's (the state's, Microsoft shareholders', some charity board they don't control, etc.).


Something as impactful as AI shouldn't be a technology dominated by a single entity. That can only have bad outcomes. I hope OpenAI will fail, but I hope they'll fail because people wake up and realize it's not a good thing for such a limited group of people to have so much power.

And then I hope people will invest in a truly open technology that truly benefits everyone.


If you don't remember the past, slow down relative to what?


That's exactly my question: is it memory or timespan that regulates the perceived speed of passing time?

Another thought experiment: suppose reincarnation exists and, as soon as I'm born, I remember my past lives. Would time go faster or slower?

I don't expect an answer, they are just thoughts that I have...


This is my worst nightmare as a bootstrapped founder. And the fact that there's no way to put a limit on spend is ridiculous. Someone who doesn't want me to do well can simply DDoS me into bankruptcy out of nowhere.

Just went through Vercel's docs:

---

"Vercel helps to mitigate against L3 and L4 DDoS attacks at the platform level. Usage will be incurred for requests that are successfully served prior to us automatically mitigating the event. Mitigation usually takes place within one minute.

Usage will be incurred for requests that are not recognized as a DDoS event, such as bot and crawler traffic.

You should monitor your usage and utilize Edge Middleware to protect against undesired traffic based on its IP, User-Agent header value, or other identifiers."

---

That doesn't help me sleep well.

I feel that by now, these hosting providers should simply adopt DDoS-protection best practices and take responsibility for failing to protect.

"You should monitor your usage and utilize Edge Middleware to protect against undesired traffic based on its IP" - there should be some really good defaults for this right?

Clearly it's possible - Cloudflare's DDoS protection is worded more strongly.

I'm willing to pay more for the service for peace of mind. Like, even $10/mo more to insure against getting smacked out of nowhere.
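For illustration, here's a minimal sketch of what those "good defaults" could look like. This is hypothetical filtering logic (real edge middleware would be JavaScript/TypeScript, and the deny lists here are made up), just to show the IP/User-Agent checks the docs describe:

```python
import re
from ipaddress import ip_address, ip_network

# Hypothetical deny lists -- real defaults would ship with the platform
# and be updated continuously.
DENIED_NETS = [ip_network("203.0.113.0/24")]           # known-abusive ranges
DENIED_UA = [re.compile(r"(?i)curl|python-requests")]  # unwanted client patterns

def should_block(ip: str, user_agent: str) -> bool:
    """Return True if the request should be rejected at the edge."""
    addr = ip_address(ip)
    if any(addr in net for net in DENIED_NETS):
        return True
    return any(p.search(user_agent or "") for p in DENIED_UA)
```

The point isn't the specific lists, it's that this kind of check could run before any billable usage is incurred.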


> Cloudflare's DDoS protection

Yeah, we got hammered once with over 10TB/mo and noped out of Netlify as fast as we could: https://twitter.com/rethinkdns/status/1370342245841342466 Had to pay the bill in full.

Cloudflare's free tier is ridiculous: We do over 30TB+ of genuine traffic for $0. Makes it hard to move to any other platform. As a small tech shop, this is my Hotel California I'm happy to never leave.


Cloudflare pricing is indeed positively ridiculous.

At OpenTofu[0] we're using Cloudflare R2 to host the provider and module registry[1]. Bandwidth is free; you only pay for requests.

This already would be great, but there’s more - you only pay for requests that actually hit R2. So with an almost 100% cache hit ratio, we barely register any billable requests.

Recently someone decided to load test us and generated ~1TB of traffic over 1-3 days. All but a few of these requests were cached, so the whole situation probably cost us less than a cent.

[0]: https://opentofu.org

[1]: https://github.com/opentofu/registry


Is this in line with the TOS? I thought there were restrictions on serving non-website content in the free tier, or does that not apply to the CDN if you're using R2 as an origin?


They updated the TOS to enable proxying R2 via the CDN with cache enabled: https://blog.cloudflare.com/updated-tos


> R2 as an origin

We front our distribution service with Cloudflare Workers fronting R2 fronting S3 / Lightsail Object Store (https://blog.cloudflare.com/cloudflare-r2-super-slurper/). That brought our costs down from $500 to $2 serving the same amount of traffic.


> Cloudflare's free tier is ridiculous: We do over 30TB+ of genuine traffic for $0. Makes it hard to move to any other platform. As a small tech shop, this is my Hotel California I'm happy to never leave.

Yeah that's how Cloudflare can reach total control over the Internet. With thunderous applause by people that should know better.

I know that my position is outright blasphemous in this day and age, where even self-hosting a static site has become black magic and we need a third party to do it for us.


I don't understand this take. First of all, moving off of Cloudflare is trivial if you really have an alternative. Second, self-hosting a static website is easy, but that's not what we're talking about here. We're talking about DDoS mitigation, which is not gonna be solved over a weekend hack with a load balancer. At least, not at the scale that matters.

What would the Cloudflare going evil phase even look like? Is it anything like Netlify charging me 100k because they don't provide ANY DDoS protection? I don't see any FOSS tools preventing this problem.


You mean that all the people on HN that use Cloudflare's very generous free tier are at risk of DDOS?

Of course there is a benefit to selling your soul to the devil; what's the bloody point otherwise? I do not need to hear all the good things the devil got you. I am telling you that it is silly that "go Cloudflare" is the default advice in any situation, because we have become lazy and complacent and we do not really care that we're giving the keys to the internet to one company.

The Internet gets shittier because people are lazy, and I need better arguments for being complicit in this than "I need DDoS protection for my 100-visitor-a-month blog."


> You mean that all the people on HN that use Cloudflare's very generous free tier are at risk of DDOS?

Yes? That's what this story is about. A random small website incurred a 100k charge because someone was bored enough to DDoS them today. Do you think you're not at risk?

> The Internet gets shittier because people are lazy, and I need better arguments for being complicit in this than "I need DDoS protection for my 100-visitor-a-month blog."

Gonna need you to explain the mechanism here. Because my argument is that Cloudflare is not the devil no matter how much you say it, and that using their service doesn't give them any keys.

What exactly are people lazy about and what are you doing alternatively that makes you different? Just not using Cloudflare? Because that's like not using condoms because you don't want to support a condom monopoly.


That's not Cloudflare's fault.


I dread the day they go evil


> Yeah that's how Cloudflare can reach total control over the Internet. With thunderous applause by people that should know better.

This is an emotionally-manipulative, anti-intellectual comment that certainly does not belong on HN. There's no intellectual curiosity or value in this comment - just scoffing, predictions of doom, manipulative statements like "I know that my position is outright blasphemous in this day and age", and other drivel that belongs on Reddit, not here.


That's a free tier that doesn't sound sustainable, then, which raises alarm bells for me.


That's because Amazon and big telecom convinced you that bandwidth is expensive. It isn't. Once the equipment is there, you might as well use it.


Well, they have to pay for the amortized equipment cost. Which, yes, is much less than you think. The big 3 clouds have set their prices in an age when services were much more expensive to provide, and they make a big deal out of the fact they've never raised their prices - but they rarely lower them, either. Now they have insane profit margins.

The invisible hand of the free market has come to fix that, *but you have to opt into the hand by shopping around.* If you don't, you don't get its benefits! You have to willingly take the choice to move to cheaper providers instead of overpriced ones.

Hetzner Cloud: $1/TB (20TB free)
Digital Ocean: $10/TB (a few TB free depending on server size)
AWS: $90/TB (0.1TB free, used to be 0.001TB free)
Netlify: $550/TB (0.1TB or 1TB free)

If you move up from $5/month VPSes, to real dedicated servers, you are now spending a lot more money and therefore you get more free perks. A huge number of providers exist that will give you unlimited or unlimited† bandwidth depending on how much you spend. Renting a powerful server with unlimited 1Gbps should cost a few hundred to several hundred dollars per month, and a powerful server with unlimited 10Gbps (i.e. 3000TB/month) should cost a few thousand dollars per month. You can even get some with 100Gbps (for tens of thousands).

Also consider asking your local ISPs and datacenters. If you live in a central area, you can probably get a comparable connection to a nearby datacenter if not straight to your office, for a comparable price. Data center connections are their bread and butter and they should be able to give you a quote quite rapidly; to your office will be a more custom thing.

Recently I got a quote for AMS-IX peering in Berlin, i.e. a peering in Amsterdam plus a link from Amsterdam to Berlin, about a 600km distance. That would cost 950 euros per month. If 1Gbps, it would cost 300 euros per month. Even though it's not really got anything to do with internet access (transit), I include this number to give some indication of the "true" cost of "raw" bandwidth.


> Now they have insane profit margins.

"your margin is my opportunity"


Wouldn't there be at least a handful of competitors if the economics worked out that way?


A good number of small hosts offer very cheap bandwidth compared to AWS. With Cloudflare’s economy of scale, their costs should be even lower. You only need a ~100Mbps link to serve 30TB/mo, which would cost them ~$10, maybe less.

They’ve written about it before: https://blog.cloudflare.com/aws-egregious-egress
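The back-of-the-envelope arithmetic behind that ~100Mbps figure (a sketch; it assumes the traffic is spread evenly over a 30-day month, which real traffic never is):

```python
# Sustained link rate needed to move 30 TB in a 30-day month
tb_per_month = 30
bits_per_month = tb_per_month * 1e12 * 8    # TB -> bytes -> bits
seconds_per_month = 30 * 24 * 3600
mbps = bits_per_month / seconds_per_month / 1e6
print(f"{mbps:.1f} Mbps sustained")         # roughly 92.6 Mbps
```

In practice you'd provision for peak rather than average, but even a generous peak multiple keeps this far below data-center link prices.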


There are tons; the big providers like AWS, GCS, etc. are really the only ones who charge ridiculous amounts for bandwidth and everything else.

Those big providers have pretty much normalized high fees and convinced people that's what it costs, the reality is any normal provider like Hetzner for example gives you tons of bandwidth for essentially zero cost included with your servers.


A good data center can sell you sustained 10Gbps for, and I'm guessing at the going rate, something like $4-7k a month (cheaper if you're making a commitment), and that's basically a retail pipe for someone in a colocated facility.

For larger providers, bandwidth cost drops tremendously, especially if you’re well connected as transit is much cheaper and if you are really large or a network provider you may even be routing between your own facilities or in some cases from one customer to another and every large scale isp is going to want a “direct link” to your facility (a peering relationship). Those costs are astronomically small at scale for bandwidth.

The ISP or similar then turns around and sells that sustained network throughput as GB transferred, which isn't how wholesale bandwidth is sold at all. So they get to charge for the data the pipe moves while they only pay for the connection itself - the markup added in this process is considerable.

For someone operating a global CDN, which is basically what they do: they have racks of storage and compute colocated all over the world and optimize the living crap out of their network to reduce costs and run on as many peering relationships as possible. It's an expensive and complex business to set up, but once it's set up you get a fairly good and consistent return out of it.

The reason for this article is related to the nature of that business: it’s the issue of liability.

When you have policies where you protect your clients from downsides and excessive use on the network, you suddenly have to assume the role of paying attention to what's on the network and policing its contents. That's not generally possible with a massive system like this, so they push the liability down to the customer and discount the mistakes that come up. That's why things are set up like this... this kind of stuff isn't their business at all, really. They are looking for the customers that convert and pay, which is very profitable, and the free tier is often thought of as a sustainable cost at large enough scale, since it substitutes for the rather massive expense of marketing and sales, one of the largest expenses in a bandwidth-focused business. CAC is the free tier.

There are also competitors, but the benefits of scale are tremendous in terms of cost efficiency. A large provider might be paying just a very small fraction of a penny or less (even "free") compared to what a small provider is paying. That's why you end up with fewer competitors: it truly is a business that benefits from economies of scale.

There are other smarter people on here who can correct any mistakes I’ve made or provide better pricing or whatever, but that’s the more in depth answer.



Have you not... looked? They exist - arguably too many of them. Clouds aren't a good indicator of reasonable pricing.


In the EU, yes. EU cloud providers offer bandwidth on the cheap, much cheaper than anywhere else.


I believe it's quite the opposite, cloud has normalized absurdly high traffic fees, and that is what should be raising alarm bells.


Cloudflare has a blog post that explains a bit about the cost of bandwidth: https://blog.cloudflare.com/the-relative-cost-of-bandwidth-a...

(from 2014, so it might be super outdated)


Yes, cloud services have inflated both bandwidth and amortized hardware costs to absurd levels. You pay for not having to know what to do in order to run something online. Until it breaks.


Peering.

Here's how it works:

1) I have a big network and I exchange traffic with another big network. Think of "eyeball" networks like last-mile ISPs (Comcast, mobile providers, etc) where a substantial portion of end-user traffic is going to handfuls of well known networks - Cloudflare, AWS, Netflix, etc.

2) Comcast and Cloudflare say "Hey, I send you X TB/PB/etc and you send me X TB/PB/etc. We both currently pay another provider to route that traffic between us. Let's not do that."

3) In locations where it makes sense they basically throw a cable across datacenters, POPs, internet exchanges, etc. The cost for this is typically extremely low - it's basically a port on a switch/router on each side and MAYBE a "cross connect fee" from the facility. This is usually billed in the tens of dollars/mo if at all. It takes very little time/effort to configure this but of course the details are more complex - multiple ports, multiple facilities, etc.

4) Both sides start routing traffic between their networks over their new shiny direct cables and extremely high speed ports. Faster throughput, lower latency, improved reliability, frees up bandwidth to the transit provider they were using previously, and most importantly the cost of bandwidth between the two networks goes to zero.

This is all well known and publicly available because it's visible in the global routing table(s). Cloudflare, for example[0].

All of the large providers do this, so AWS, etc. charging for bandwidth per GB (especially at their rates) is more-or-less pure profit.

I have a theory that AWS, etc. capitalize on people not really understanding this anymore. AWS is 20 years old - that's an entire generation of CTOs/CIOs on down who are completely unfamiliar with these details and think $0.10/GB or whatever is "just what bandwidth costs". It is not.

[0] - https://bgp.he.net/AS13335#_peers


People don't really, and have never, fully understood this - which is why Netflix using a lower-tier provider with bad peering caused companies to … not upgrade their links.


I have heard that they rather drastically constrain QoS instead, which does sound reasonable. So you are still not charged for abusive traffic, but your service will be much slower than what is actually possible with paid tiers.


So you'd either be slow or pay them "for protection". That reminds me of something ;)


Capitalism? Mob-style "protection" would be if Cloudflare were the ones who DDoSed you if you didn't pay.


Yeah. Instead Cloudflare hosts the websites of DDoS sellers and refuses to take them down or tell you who they are. A lot of these DDoS-for-hire services use Cloudflare to hide their real IP.


How naive, to think the mob would disclose when its affiliates trash your shop.


I think a lot of people don't understand how cheap bandwidth is and is decreasing in cost practically every day. Amazon and Google have a lot of people fooled. Go ask someone operating in China and East Asia (and Japan) how much they're paying for local solutions.


These guys know what they're doing. If and when Cloudflare dies we'll find something else.


it's 100% not sustainable. Use it while it's good, but don't get vendor locked in, because sooner or later they will increase the prices


> it's 100% not sustainable

As a business for Cloudflare?

> Cloudflare in 2014 blogged about how they work relentlessly to bring down bandwidth costs by peering aggressively where possible [2] (which apparently means $0 for unlimited bandwidth [3]). And where they can't / don't [4], egress is 5x (est) the ingress (one pays for the higher among the two), but this creates an opportunity for an arbitrage and give away DDoS protection for free.

> This is pretty similar to Amazon's free-shipping offer for Prime customers despite it being one of the biggest loss makers to their retail business. Prime basically has since forced Amazon to bring down costs through building expensive and vast distribution & logistics network that spawns the globe. Doing so was a considerable drain on the resources in the short-run, but in the long run, it has become an unbreachable moat around its largest business.

> Analysts like Ben Thompson (stratechery.com) and Matthew Eash (hhhypergrowth.com) have written in detail about Cloudflare's modus operandii over the years, with both agreeing that Cloudflare's model is so brilliantly disruptive that even Clayton Christensen would be proud of it.
https://news.ycombinator.com/item?id=33337183


This is why we still run services on VMs and open-source containers. We can move our services anywhere, including self-hosting. AWS and Google offer some amazing solutions, but lock-in ain't worth it if you can manage your own stack via serverless/VM solutions.


They've been going for at least 10 years...


Their stock performance would agree


While a funny comment, stock performance is at best loosely coupled to sustainability as a company.


By the time it isn't sustainable I will have IPO'd and become the next offensive new-money tech billionaire writing threads on Twitter telling you the secret to success is the 5am grindset and that everyone who isn't sinking 5mil into the next big thing (tm) can have fun staying poor.


> Cloudflare's free tier is ridiculous: We do over 30TB+ of genuine traffic for $0

It's not really ridiculous if you think about what you're giving them.

You are massively benefiting their platform by providing them data which they use to train their services and then sell those services to other customers.

I'd make a case that the data they collect is the most important part of their business and the free tier is a major component of this.


If you are not paying for it, you are not the customer; you're the product being sold.


I don't think it's fair to call it a free tier - it's a discretionary tier. There are numerous cases of the rug being pulled as and when it suits their business requirements. Being left homeless vs. urgently coughing up is exactly the wrong problem to be dealing with mid-attack; I can't see any way to consider it free by any practical definition.


I know that putting all my eggs in one basket and giving it all to Cloudflare is not a good idea; if they have an outage then I have one too. But when they are down, one third of the internet is down with them. At $240 a year for CDN, $60 a year for serverless, and $0.015/GB-month for S3-compatible storage with free egress, I don't think anyone could find a better alternative to CF. I'm mixing AWS, CF, and self-hosted machines, and the infra cost is less than $5k a year. Now I can spend the remaining hard-earned money on some fresh Marlboro cigarettes.


Use a token bucket on your web server to catch abusive IPs and then blackhole them using `iptables -t raw -I PREROUTING -s ip -j DROP`. I know. I run https://ipv4.games/ which invites hackers to unleash their botnets, and the service runs on a small VM with only a few cores. It's been attacked by botnets with 49,131,669 IP addresses. There's no Cloudflare frontend or anything like that, because back when I used Cloudflare, the people who attacked the service would actually bring down the Cloudflare nodes before they brought down my web server. I doubt I've ever paid more than $100/month to operate the service. Please note that your service provider needs to have free ingress in order for this strategy to be effective.


This strategy may work for a (D)DoS targeted at the application layer, but it won't work if the attack is designed to exhaust your bandwidth.

Once you're receiving more traffic than your network cards can handle, it doesn't matter whether you drop the packets with iptables or not.

I was the target of attacks that caused Hetzner to terminate my contract. I was leasing physical servers there, so I assume the attacks were overwhelming their infrastructure.


These days it seems that DDoS attacks are often not targeted at bandwidth either, but rather packets per second. It is (apparently) much easier to exhaust routing capacity with an inordinate number of tiny packets than with a still large number of large packets. Cloudflare has some fun ways to deal with this [0].

[0] https://blog.cloudflare.com/mitigating-a-754-million-pps-ddo...


What they did to me was flood the Linux Kernel with TCP connections. That's why it's so important to block IPs in the raw PREROUTING table. You need to nip it in the bud before Linux starts allocating any memory to the attacker.


I rent a GCE VM, and there aren't many people, if any, who can exhaust Google's network infrastructure. The only thing I have to worry about is making sure my server doesn't respond to abusive traffic.


Eventually you're probably going to want an ipset, at least. Otherwise processing your chain will continuously cost more, and more, and more.


I just declare firewall jubilee every now and then, where I flush the iptables and let people try again. It's also because people usually only control the IPs they use temporarily, so I don't want someone innocent later on to be blocked from using the service because someone abusive used their IPs beforehand. But even if I didn't do this, it doesn't cost much for Linux to iterate over an array of blocked int32's. It's really only allocated TCP connection resources that are problematic.


I'm glad you make a point of flushing the chain and letting things retry. I often hear about people adding drops... and then just forgetting about them.

I saw millions and started to feel my heart race a little


If you want to sleep tight just get a dedicated server or VPS from something like Hetzner and/or combine with CDN providers like BunnyCDN - set up alerts just in case though. It takes more time and resources to manage it but you could save a lot on it in this case.


This, so much. My Hetzner box (the best choice for a media server within Europe) has had zero downtime in 1.5 years. And exactly as you said, I'm using Bunny as well, which costs me a few $ per year.


> It takes more time and resources to manage it

For most new web projects, setting up your brand-new server is a pretty well-documented process and should not take more than a couple of hours.

It gets complicated when you grow and add more servers or components. But at that point, you should be able to afford a part-time consultant to handle the complicated tasks, or just move to the cloud.


I'd even say build your system so it can run on shared hosting. That way you save on the management entirely.


That is my setup after leaving AWS for some of my services (low-user-count B2B).

I put in far fewer resources and less maintenance once I had the system running. Especially if you need to manage the software running on it anyway.


This might be a good time to point out Cloudflare Pages: https://pages.cloudflare.com/

Under the free tier:

> Unlimited bandwidth


I'm moving everything to Cloudflare.


Ditto. "Migrating from Netlify to Pages" : https://developers.cloudflare.com/pages/migrations/migrating...


Just looking at pages.cloudflare.com now, and I think I'm going to be using Cloudflare from now on.


I'm trying to sign up but it keeps saying "Verification is taking longer than expected. Check your Internet connection and refresh the page if the issue persists."

Does anyone else experience this as well?


That's the Cloudflare user experience in a nutshell. Your users will see the same thing when they visit your Cloudflare-hosted site.


As a heads up, you can disable challenges almost entirely if you don't want any visitors challenged. Security -> Settings -> Security Level -> Essentially off


Might be iCloud private relay if you use that


I didn't even know Cloudflare offered a JAMstack platform. I'm going to switch as I already use Cloudflare for domains.


Yeah, I'm already using Cloudflare because Google Domains got de facto killed by Google via the transfer to Squarespace. Why not Cloudflare Pages, CDN, and R2 (S3-compatible storage) too? I'm even considering paying for the paid tier in the future if I ever go above the limits of 20,000 files per static site and the 25 MiB single-file size limit [^1] (more than enough right now or in the near future).

[^1]: https://developers.cloudflare.com/pages/platform/limits/


I was looking for a static site hosting option recently and tried out cloudflare pages. Fit my need perfectly. The generous free tier and the reasonable pricing model were the big factors.

Oh, and the ability to put some authentication in front of it was a big feature for me.


Host on a provider which bills per hour. This caps your cost. It also makes your users pissed because you will go down, but if you’re small, you can afford that. If you’re big, you already have scaling options and should have a team to handle ddos.


My experience is that customers don't really care that much about small amounts of downtime no matter what size you are; people mostly get that unexpected stuff happens, as long as you don't get hacked or misplace their data. Customers might complain a bit but seldom leave because of a few hours' downtime.

This seems to mostly hold true for developers also; GitHub manages to survive just fine, after all.


Depends on your service. A 20-second outage loading HN? Nobody cares. A 20-second outage on the last play of the Super Bowl? Big problems.

For most internet consumers, we're accustomed to poor service, so if a page doesn't load we'll assume it's a local problem and try again 20 seconds later; same with buffering, it's just something that happens occasionally. This is increasingly the case for phone calls too. Legacy live TV and radio going silent, though, is still a major issue, especially during live events.


Sure, but now you're talking about sites with completely different service level objectives and, conversely, different budgets for their hosting. The problem here, to play off your analogy, is that Netlify treats every customer, many with SLOs likely less strict than HN's, as if they were the Super Bowl. According to the most recent policy discoverable in their forum posts, this is a constraint of their platform, and something they tout as a feature, not a bug.

When users expressed concerns on their community forum about a scenario similar to the one the OP experienced, Netlify's staff responded with "how likely is this, really?" It only has to happen once to put someone in significant financial harm.


Yeah. Any host that won't infinitely scale out will solve this concern for you.


I think most people pick Netlify for its infrastructure-as-a-service offering, so it would be nice if they had a way to throttle and budget within that offering.

I would even imagine Netlify's target market is small to mid-size businesses who really don't need ridiculous burstable scaling capacity at all. Seems like a bit of a trap door for that customer base.

I agree though, I wouldn't host on them as a small business due to that risk, but I am also happy running my own server so I might be an edge case.


Vercel charges $400/TB for excess bandwidth; it's not even DDoS you should worry about, just moderate success.


That's a crazy high price for bandwidth. Bandwidth isn't free, but $400 will get you a month of 10 gig at my local peering point; that's 1 TB every 15 minutes.


> Someone that doesn't want me to do well can simply ddos me into bankruptcy out of nowhere.

An interesting story that expands on the above concept but a different vector entitled, "Illegal Life Pro Tip: Want to ruin your competitors business?" : https://news.ycombinator.com/item?id=36566634


Imagine you lost your job, so you're here enjoying creating and hosting your hobby projects on these services. Then suddenly one fine morning you get slapped with a $104K bill because someone decided to randomly DDoS your one-page dog lover website.

Now, who in the world would be thinking of DDoS protection for their hobby project? This is just absurd.


No, this is absolutely common. I remember well how shared hosters 10 years ago already put caps on cheap packages and took websites offline when traffic spiked. And today it's Amazon who bills small players into debt.

There are many providers who don't, though.


I always loved nearlyfreespeech.com for this (prepay, and if you run out of money the site goes down), but found it to be a pain for projects that really needed a VPS.


Can't hosts just make a site unavailable once it reaches its plan's bandwidth limit, DDoS or not?

I think being offline is a lesser headache than a large bill, especially for those who are inclined to a free tier to begin with.


Folks regularly show up in HN comments during these discussions stating the opposite: that it's categorically better for all sites/projects, no matter how inconsequential, to stay online. It's weird.

This includes some of the powers that be, too. Occasionally, though, someone'll say the quiet part out loud, e.g. re fly.io:

> putting work into features specifically to minimize how much people spend seems like a good way to fail a company

<https://news.ycombinator.com/item?id=24699292>


This may seem weird, but I believe ToS are the real problem here. I call it the "car rental" problem.

When I rent a car in person, I am often given a contract. And this contract is filled with tiny print, and pages of it.

There are often people behind you, waiting, and bored/annoyed people behind the counter, waiting. This is beyond unreasonable.

A point of sale contract should be short, in readable text, and understandable. For example, renting a car? Under a page, easily parseable, and if the person behind the counter cannot explain it, it is null and void.

From a legal side, you can do this. And you can explain legal terms. Of course this means you are describing intent, which limits one in court, oh boo hoo Mr Lawyer. Cry me a river.

Well the same should be true of any retail contract. Sign up for a service? One page with costs listed.

At least then, there is hope of an end user sort of understanding. And since one could claim that a DDoS was actually targeting the provider, and not the website, that should be described too.

So back to the topic at hand. I would write a demand letter insisting Netlify explain the charges, and ask them whether they and their IP ranges were the DDoS target, and if so, that the charges be reversed.

Because you should not be paying if someone attacks Netlify.

This letter should also be sent by mail, signature required, to the corporate address.


> When I rent a car in person, I am often given a contract. And this contract is filled with tiny print, and pages of it.

As someone who reads the agreements I sign, one thing that has become prevalent is that they're so used to people not paying attention to what they're signing that they're sometimes not even giving you an accurate copy to review. For example, you read the thing and think, "Okay, I can work within these parameters," then you sign, and later get an email containing your "agreement", but it turns out what's in the email is a different set of terms with a bunch of stuff that wasn't in the terms you actually agreed to when you signed. Or someone hands you a pad with an "I agree to the terms" box checked beside the signature line, and when you ask to see the terms you're agreeing to, they're caught off guard (being totally unequipped to let you do that), which turns into being flummoxed with how to proceed, which turns into getting angry with you for asking.


Yup. And that's the part that needs to end. The angry part.

I have seen people who understand, but with an "oh, you're one of those people" look on their face. That too is entirely uncouth. But people should start recording these interactions, not obtrusively (the purpose is not to intimidate), but just to make a record of what transpires.

I think legislation making any recorded retail interaction completely legal and admissible in court might be an interesting change.

Because if you are presented with a contract and "JUST SIGN THE DAMN THING!", or "It just means $x", or "People are waiting, just sign it!" and so on, that would likely go a long way toward indicating compulsion, or even (by describing intent) change the entire contract itself.

If this happens, it may be cheaper to just have sane contracts and do non-dumb things than to try to train every employee who has public contact.


Every rental and service is so optimized against scammers and abusers that even as a perfectly legit customer (you simply want to pay, use the resource, then return the item or terminate the service) you're walking along the edge of a cliff. Annexes, penalties, fees and charges, exclusions, "sign this one more form, everyone signs it". Housing rental is another extreme example: one is simply unable to just get a job in a new location and rent something long term.


This applies even offline! Have you ever tried to get hold of the exact insurance policy wording before going through their entire sales process? Impossible, in my experience, whether it's long-term insurance, vehicle insurance, pet insurance, etc.


It shouldn't be like this, but it is.

Unfortunately, in today's world, DDoS protection is the equivalent of basic hygiene, food safety, and road safety. It's just a travesty that hosting providers don't feel it's their responsibility to address it.


I always run mine through Cloudflare, at least in part for this reason.


How do these hosting providers sleep?


In enormous mansions atop large piles of money.


I guess on eiderdown pillows.


Vercel seems to exist only to promote lock-in.


“We leave your safe deposit box unlocked. You might want to forge your own lock and key. If we happen to notice someone stealing out of your box, we will let them grab as much as they can for one minute, then maybe install our own lock if our revenue is close to target.”


Can anyone share an example of edge middleware that might protect you on Vercel?


You can also turn on soft and hard spend limits on Vercel (including SMS alerts) https://vercel.com/blog/introducing-spend-management-realtim...


This is not a hard limit; it's just a webhook, and you need to handle disabling everything on your own.

1. You can make mistakes in your code.

2. A junior developer can change the code by mistake and make it ineffective.

3. The webhook is not instant, so you can get billed more than the limit.

4. There is no information about which project hit the limit, so you need to fetch all of your project IDs and then disable all of them. You basically need to disable every project assigned to your team/organization.

5. Vercel doesn't guarantee in any way that you won't get billed more; they just send a notification to you (with a delay).

A hard limit should be a deal between the user and the company that the user won't ever get billed more than $X.


Those services exist, and you have the option to use them. Netlify is not one of them. Apparently, you decided the uninsured solution was best for you.


Wait until you learn that Vercel only supports blocking IP CIDR ranges on the Enterprise plan.

