
As someone using Nix because I messed up my Arch LUKS encryption thanks to a single command from Stack Overflow...

I haven't really learned the Nix language yet and have only installed bare-minimum software like LibreWolf, Prism Launcher, and KDE. I used to be a Hyprland user and might switch back one of these days.

The biggest advantage to me is nix-shell, which is cross-platform.

I installed obs-studio through nix-shell, recorded a video, then used another nix-shell for ffmpeg to convert the video to mp4 and uploaded it to GitHub.
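Roughly what those throwaway shells look like (the file names here are just placeholders):

    # temporary shell with OBS, gone once you exit
    nix-shell -p obs-studio --run obs

    # separate temporary shell just for the conversion
    nix-shell -p ffmpeg --run "ffmpeg -i recording.mkv recording.mp4"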

This is my general use case for OBS, and I very rarely make recordings, so the idea that I can install and try out software without having to worry about anything, while still having no XDG issues (unlike Flatpak), is a godsend.

Did I mention I started using nix-shell in places where I would've used Docker, like one-off Stirling-PDF runs?

Most of the software on my computer is rarely used, and I love the sanity Nix provides: knowing it won't slow down my update times (if it affects them at all), unlike Arch Linux, where dependency hell is a problem I truly despise.

I have looked for better alternatives (Spack comes to mind), but nix-shell is still crazy good. And some day I can also use the functional language to automate it even further.


Isn't the primary benefit you're looking for the same as what containers offer? If that is true, isn't the only major benefit that Nix is a little less sluggish?

FWIW, the way we use Nix where I work is quite a bit lighter-touch than the setups being discussed here. There are roughly three tiers of Nix setups:

- I want Nix to manage my entire os (NixOS)

- I want Nix to manage my user shell and dotfiles (home manager)

- I want Nix to manage per repository shells (nix + direnv with a flake.nix in each repo, or an .envrc that pulls a shared flake and extends it)

We use the latter and find it a good balance: the Nix configuration stays simple, but we still get reproducible per-environment shells. A minimal sketch of that setup is below.
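Assuming nix-direnv (or a recent direnv) plus a flake.nix in the repo that exposes a devShell, the per-repo part is basically just this:

    # .envrc at the repo root; run "direnv allow" once and the shell
    # gets rebuilt whenever flake.nix changes
    use flake

    # or pull a shared flake and extend it (hypothetical flake reference):
    # use flake github:your-org/dev-shells#backend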


I am not sure, Deno seems really bleak: I had read the HN post about Deno actively shrinking its data centers and activity slowing down.

This seems like an attempt to show the world that's not the case, but I would say it's still not working.

Deno Deploy, their product, is offered better by Supabase or even Bunny CDN, which provides more locations than Deno itself.

Bun also exists. And did we forget about node itself?

Cloudflare Workers is a beast if you can work with non-standard APIs, though I personally feel that some code meant for Node won't work on CF as well as it would on Deno.


Hey, Andy from Deno here. Thanks for your comment. We reduced our Deno Deploy regions not because Deno is in decline (in fact, we have seen about 2x adoption since the release of Deno 2); it's just that we noticed more users using Deno Deploy for hosting applications vs. edge functions, which was our original vision for Deploy. In many scenarios, application performance is improved with fewer, more highly trafficked regions vs. many spread-out but more idle regions. We will be covering that in more detail in a dedicated blog post next week.

Wow! I didn't expect anyone from the Deno team to respond. I just thought I was shouting into the void, with fellow HN people at most reading it.

Thanks for the clarification.

I have some questions though:

Aside from the fact that Deno offers npm compatibility, how is Deno Deploy any better than Cloudflare hosting, which I currently use? Cloudflare Workers also has some compatibility layer, I suppose, and Workers can and do host both applications and edge functions.

I can understand for things like Next.js, which doesn't run as smoothly on CF Workers, if at all (I haven't tried it, but I do know it's easier to run Next.js on Vercel than on any other competitor); Next.js can run easily on Deno, so that might be a really big niche, tbh.

But as a SvelteKit user, here's my opinion: I have deployed so many SvelteKit websites to Cloudflare Workers, and the 100k-request limit never disappoints or constrains me. CF Workers also has a KV store, which is good for simple databases. I am seriously considering always using Cloudflare Workers, since I am currently just a student, it gives me a 100k-request free limit, and after that it's still really, really cheap.

I compared Deno's tiers and CF's tiers some time ago, and CF was the winner there; I don't know what the situation is now, but I am willing to hear.

The CF Workers wrangler developer experience is genuinely decent in my opinion, not as easy as Deno Deploy I suppose, but still worth it given all the previous points.

Deno is really nice compared to Node. In fact, I was the guy who watched the Fireship video and then the Deno video itself, and then I legit went knocking on my brother's door (my brother is also a coder; in fact, he knows his stuff, whereas I am just this 16-year-old student) because I wanted him to try out Deno. But I am having mixed feelings now, and I don't mean any disrespect to the Deno team, who have been really nice and kind from what I've seen.

Have a really nice day, Andy. I hope Deno succeeds.


> I am not sure, deno seems really bleak…

There are plenty of moribund open source projects. But looking at this month's 2.3 release, Deno's future strikes me as the opposite of bleak.

As for Deno Deploy, sure —¹ it may fail. But regarding the Deno folks scaling their footprint in response to changes like real-world usage patterns, adoption timelines, our rapidly-contracting economy, or whatever, that's a far better response than ostriching in the longer run.

¹ This em-dash was lovingly hand-crafted by a Mac user. Please don't em-shame.


I meant Deno's future in terms of how the company survives as competitors and the ecosystem grow and Node adopts its features. The company is basically the only one "really" working on the product, so if it can't find a decent source of revenue, that's a problem (I am not sure what product Deno sells that others can't, or aren't, already selling, e.g. Bunny CDN or Supabase hosting Deno subhosting).

I would love Deno to win, maybe even be faster than Bun, and maybe help with permissions a little more; I played around with Deno scripts and, in my personal experience, it quickly devolved into a permission nightmare. Maybe I am inexperienced, but I didn't have those issues in Bun.


I don't want to live in a world where we have to add footnotes when using em dashes

Same, my typographically-savvy friend. This "dead giveaway" too shall pass — first it was certain words, today it's em-dashes, and tomorrow we'll be forced to introduce grammatical errors.

hey now typos make you authentic, that's a plus

1337$|"33|< ph7\^/

The HN post referenced in case others missed it:

Deno's Decline - https://news.ycombinator.com/item?id=43863937 - May 2025 (157 comments)


Thanks a lot!

I might add it in the original comment attributing it to you as well. Thanks again.

Edit: wait, I can't edit my original comment. I think it might be because people have already replied to it, so this is the only way for people to find out about it.


I have been around for a bit; until now, being a laggard on alternative implementations has always paid off.

Eventually the reference implementation gets the features that are more relevant, and we move on.


Yes, I think that is why I wrote "have we forgotten about node": most people seem to forget Node given Deno/Bun (including me). Of course, Node won't just let Deno or Bun take its market share; it will add those features as well.

It's a net positive for everybody except the people working on the alternative implementations, unless they are being sponsored or doing it for fun.

As a company, I am sorry, but I just see no path to real revenue once Node improves.


oh yes. as a firefox (now librewolf) user, it deeply saddens me.

Maybe somebody could explain to me why your comment is shown in a different shade of grey?

I think somebody might have flagged your comment, but what you said is a real fact.

This is one of the reasons people say Cloudflare owns the majority of the internet, but I think I am okay with that, since Cloudflare is pretty chill and they provide the best services. Still, it just shows that the internet isn't that decentralized.

But Google's captcha is literally tracking you, IIRC. I would personally prefer hCaptcha if you want a centralized solution, or Anubis if you want to self-host (I prefer Anubis, I guess).


Cloudflare is not chill because they, either ignorantly or purposefully, block everything that's not Chromium or Firefox[1].

Or sometimes everything that's not just Chromium[2].

[1] - https://www.theregister.com/2025/03/04/cloudflare_blocking_n...

[2] - https://www.techradar.com/pro/cloudflare-admits-security-too...


Don't worry. They sometimes block Chromium too.

> Maybe somebody could explain me why your comment is in different contrast of grey?

Downvotes. Comments with negative scores are shown with lower contrast. The more negative the score, the less contrast they get.


Yeah, your goal is pretty nice if you want to open-source something at Blacksmith's level, but I think most people would be pretty happy with just a Hetzner VM running act (https://github.com/nektos/act) if they want GitHub Actions, or with Jenkins.

I think you can rent Hetzner VMs on a per-hour basis (or maybe you can't), but I do know there are services (Linode, I guess?) which bill per second.

Combine that with automatic installation of act, and you pay per second for your CI use.
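Roughly what that looks like on the VM, assuming Docker and act are already installed there (the workflow path is just an example):

    # from a checkout of the repo, replay the workflows GitHub would run on push
    act push

    # or run one specific workflow file
    act -W .github/workflows/ci.yml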

Bonus points if we could use CRIU to move a task from lower-end to higher-end machines depending on the workload, picking up from where it left off.


> CRUD app

> Slaps cryptocurrency sticker in 2019-2021 era

Gets $1 million in funding

> in 2025, slaps AI powered sticker

Gets $10 million in funding.

But it's still a CRUD app nonetheless.

I know it sounds like a really over-the-line example, but I am sure there are real cases like this where the same thing gets relabeled with new terms and gets a lot more funding; that is, there is an incentive to put on new stickers.

The goal is not to appear different; the goal is probably profit, which they can reach with better funding, I suppose, and they get better funding by slapping on stickers.


very interesting idea!

How much cheaper is this compared to GitHub Actions?

Also, why are you using gcloud? Would other competitors like AWS (or Hetzner, if we are talking about VPSes) also suit the case?

I would love it if you could write a blog post about it.


We don't really pay for gh actions; we're staying below the freemium limit.

We use gcloud for convenience. Our production environment is there, so spinning up a VM is easy. Our builds also deploy there, so we need gcloud credentials in our GH Actions anyway. It only runs for a few hours per month in total, so the cost isn't very high. A few dollars at most.

No time for blog posts but feel free to adapt my gh action: https://gist.github.com/jillesvangurp/cccf5f9d61f4b457a994dc...

It basically runs a script on the vm. Should be fairly easy to adapt. There's a bit of bash in there that waits for the machine to come up before it does the ssh command that runs the build script.
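In rough shape it boils down to something like this (a simplified sketch, not the actual gist; instance name, zone, and script are placeholders):

    # start the prebuilt build VM
    gcloud compute instances start build-vm --zone=europe-west1-b

    # wait until the machine accepts SSH, run the build script, then shut it down again
    until gcloud compute ssh build-vm --zone=europe-west1-b --command='true' 2>/dev/null; do
      sleep 5
    done
    gcloud compute ssh build-vm --zone=europe-west1-b --command='./run-build.sh'
    gcloud compute instances stop build-vm --zone=europe-west1-b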


It does appear cheaper because you can handle the highest workloads for just a fraction of what it would cost to host for peak load yourself.

Let's say my company gets a huge spike and I wanted to be prepared for it at all times on bare metal; then I would have to keep a lot of vacant metal around.

But I personally believe a dual strategy should be used: a baseline of bare metal for average traffic, with the cloud used only for huge spikes.

Except clouds seem to be a lock-in, so people preferred the cloud until it started raining and asking them for a ton of money.


This old tale always gets told, and it is still a lie. Ever since AWS has existed, every five years someone has calculated the cost of AWS for our PHP stuff. And the results were always the same: we could get 4 to 7 times more CPU, RAM, storage, and bandwidth by just renting servers somewhere. AWS is ridiculously expensive, and renting servers was always the cheaper and better option (yes, we included the cost of our system admins). No spike in usage or growth was ever a problem for our servers and software.

wait what?

I think you might've misunderstood me.

From what I understand, you calculated the cost of AWS for your PHP stuff and are saying it was 4-7x more expensive.

But let's say you are using AWS and your website suddenly gets a 1000% spike or even more: AWS can still host it, whereas on bare metal you would've had to scale manually.

Other than this nice benefit, AWS has no other advantages aside from maybe not managing servers, but with things like Coolify there isn't much server management anyway, I guess...

I am genuinely interested in how you arrive at 4-7x; I know cloud is expensive, but sheesh.

Also, one area where I have always considered the cloud to be cheap is storage; please elaborate on that as well.

And what scaling techniques are you currently using without AWS in case you get some really huge unexpected traffic? Because that is the stuff AWS was meant for, tbh.


The really huge traffic spike is the lie.

Most of the time you have a pretty good idea how much traffic you will get. Only once in 20 years did we notice that our servers were getting too much traffic, because too many people clicked on our advertising. So we stopped the ads, rented more servers, and started the ads again. It took half a day, cost us a few hundred euros, and that was that.

We aren't doing any hyperscaling, and almost nobody needs it. We start PHP workers on a few servers and put the database on a beefy server. What counts as a beefy server changes over time: in 2005 it was something like 4 cores and 16 GB of RAM; today you get something like 24 cores and 128 GB of RAM for about €200 per month. On machines like that you can serve data to millions of users. AWS charges ridiculous amounts of money for servers like that, and its bandwidth pricing is a joke and always was.


For pricing on non-AWS "clouds", look at Hetzner, IONOS, maybe Strato. There are also DigitalOcean, Vultr, OVH, and many others.

> just a fraction of the total cost if you had to host the highest workload yourself. Like lets say my company gets a very huge spike and lets say I wanted to be prepared for it for all times in baremetal, then I had to had a lot of vacant free metal

Huh? Autoscaling turned out to be a myth. Even in k8s, you can't make users wait for nodes to come online, so you always have to have spare nodes waiting. And the spare nodes must be proportional to the regular spikes you expect, plus any unexpected spike you estimate. Why would that be different from having extra free metal, which is much cheaper and simpler? Along with an easier time finding infra people who can manage that, as opposed to finding expensive talent with the bloat of AWS-specific knowledge that is required today?

Let's face it: this is the enshittification of infra brought on by the lock-in Amazon was able to lure orgs into. First, they locked everyone in. Now they are squeezing everyone dry.


I am not sure, but aren't Lambda functions literally autoscaled? Maybe you mean that not everything can or should be a Lambda function...

And even forgetting about Lambda functions, Cloudflare Workers do the same thing. I have hosted many websites on them, and the latency is literally negligible, even faster IMO than trying to self-host on bare metal.

I know it feels really shitty to move to JS just to get Cloudflare Workers, but boy, am I in love with Cloudflare Workers. I know CF has had some bad sales tactics, but that was just a one-off instance, or very rare, and I think CF was in the right on that one, albeit they miscommunicated.

Separate the art from the artist. Separate the sales team from the tech team (for Cloudflare), and you will see that Cloudflare is genuinely great.

Though I used to believe that CF Workers + R2 was best, I'm now thinking storage should probably be done with R2 + Wasabi.


Just my two cents on this topic.

What we are looking for, I suppose, is easier migration between clouds and a multi-cloud strategy.

I am pretty sure 37signals has set up servers in a lot of countries for lower latency, but a lot of companies can't.

We are then forced to use S3 or the like, and honestly I am starting to have doubts about S3...

I was watching Theo's video on everything being a wrapper, and he mentioned that in some sense he basically created UploadThing because S3 was cheaper to store in but had higher egress fees, while Cloudflare R2 was more expensive to store in but had no egress fees, so he wanted a way to optimize between them... thus UploadThing.

But this whole idea seems so bizarrely stupid to me. I had seen a website comparing such providers' pricing, and to be quite frank, Cloudflare was among the lowest, maybe only more expensive than Backblaze or Wasabi, but both of those have a sort of fair-use limit which might be vague...

In the meantime, I found another website giving some decent comparisons; though I don't like its UI as much as the other website, which had some really cool graphs, it's also well built and professional: https://www.s3compare.io/

And I have to say, somebody might see Amazon's $4 per TB compared to Cloudflare's $15 per TB and say "wow, maybe Theo was right"... until they see the $90.00 per TB egress...
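To put rough numbers on it (using the figures above, which may well be outdated): storing 1 TB and downloading it once in a month comes out to roughly $4 + $90 = $94 at those S3 rates, versus roughly $15 + $0 = $15 on R2, so the zero-egress pricing matters far more than the headline storage price once the data actually gets served.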

I mean, I get it, but if it's an archive or very rarely accessed, then why not just use Wasabi or Backblaze? (I was going to prefer Backblaze until I saw the $10 per TB egress for Backblaze vs. $0 for Wasabi... yeah.)

Wasabi/Backblaze both seem like really great options; they are only fractionally more expensive than AWS S3 Glacier ($4.99 per TB) and they don't have egregious egress fees...

For something accessed more frequently, use Cloudflare R2, and for archiving/backup, use Wasabi/Backblaze, maybe even with the 3-2-1 strategy... I am not sure whether Wasabi/Backblaze already follow that internally.


By bare metal, do we mean things like Hetzner or OVH? Or full-on renting rack space yourself, like Railway (I am not sure if it was Railway or Render) did a while back?

It's both; we can rack up hardware if there is a need. But building, racking, and financing servers is a fairly well-solved problem (especially here in the EU). So, in general, if we can avoid solving that problem ourselves, we will.
