Windows and Linux trade punches in terms of overall experience. From what I've seen, even when Linux has worse FPS, it tends to have more consistent frame pacing (it generally has better pacing - but not always; I had to abandon Once Human).

Gamers Nexus has a pretty extensive benchmark video: https://youtu.be/ovOx4_8ajZ8?si=Cx5Q1a-lMMm14H4i . They refuse to compare to Windows, and it kinda makes sense: if it's satisfactory on Linux for your demands, then who cares what Windows can do?

Here's a less professional, but direct, comparison: https://youtu.be/Giois6VtLPM?si=XFaVUMbea3u0AmP . An extremely important thing to note: AMD GPU. I personally have no idea what NVIDIA is like, but it sounds like their drivers are still all over the place.

And kernel-level anti-cheat doesn't work, though some (e.g. EAC) run in user mode if the developer allows it. Make sure to check ProtonDB for the games you care about. I have personally never had a good experience with Linux builds of games, so I just always use Proton now - but maybe I'm cursed because others have passionately disagreed with my experience. Either way, if a Linux game is broken/bad, try forcing it into Proton.

I don't want to say "switch now" because it still has rough edges in terms of gaming. Better for you to have a great experience and stick around than to hate it and leave for good. Only you can figure out whether it needs more time to cook, based on some very light (ProtonDB) research.

I last used a Windows machine about a year ago, and I can say with confidence that the average Linux desktop experience is significantly superior to the barrage of bullshit that Windows puts you through.


> if it's satisfactory on Linux for your demands then who cares what Windows can do?

Pretty much everyone? If bread and water is satisfactory for your demands then who cares about Beef Wellington?

If it was better than Windows they sure as hell would be comparing.


> they sure as hell would be comparing.

You're implying that Gamers Nexus is some form of Linux outlet/content creator. They aren't; they only started doing Linux content this year/late last year (and only plan to do it rarely).

You've taken a surprisingly hostile stance on the one (at the time of writing) pro-Linux comment - a comment that suggests it might not be ready for everyone, and to wait if it's not a good fit.

And it isn't Beef Wellington vs bread and water. It's 80% lean beef vs 82% lean beef in the majority of cases (and in either direction). And "suitable for your purposes" also means that 160FPS is really fine if your screen is 144Hz - it doesn't matter if Windows does 180FPS (unless you're doing something competitive or extremely latency-sensitive).

I think Microsoft can do fine without people tilting at windmills for them.


Yeah, I got a CC1 due to price alone (Prusa was out of my range, but obviously vastly preferred), and then they started trying to pull bullshit with firmware. They backed off after the outrage, but don't forget that this is exactly what Bambu initially tried to do - so it will be the first and last Elegoo that I purchase.

Printer is great though. I've never used a Bambu, but after a thorough round of Orca calibration this (at the time) newbie was able to get some really decent PCTG and even PA-CF prints.

Web interface can't hold a stable video stream.


> Louis Rossmann posted a video saying he'd pledge $10,000 to help the open source dev fight Bambu's legal threats. And I'd happily chip in too, but that's only useful if the dev wants to put himself back in Bambu's crosshairs.

Louis Rossmann has decided to put himself in the crosshairs instead, with a video goading Bambu: https://youtu.be/1jhRqgHxEP8?si=BwfoCKxujd0XwNJ0

Here's what I don't get: how is infra load any different between someone using their slicer build and someone using their code in another slicer (or a fork)? It's still (ultimately) the same human making the same requests. If they can't handle the load, then the solution is obviously to carefully manage the supply of printers: if your infra is incapable of handling more than 3 users (an accurate figure going by the tone of their blog post), then don't have more than 3 of your printers in the wild at any single time. Problem solved.


Is the idea here to add function calling to models that don't have it, or even to improve function calling (Qwen quirks)?

So it’s a tiny model capable of function calling that could run locally on cheap devices.

Obviously this is bad in general, but what's the threat profile here? Google already has the content of your emails and I guarantee they have pinned down your fingerprint unless you're using Snowden-level counter-int.

Protest this by using a paid email provider. My $60 yearly payment just went through today; is that honestly too much for the typical person around here?


My sentiment matches yours exactly. I'm sick and tired of CUDA - but it's really not going to change.

It could maybe be forked with some dynamic smarts; HIP is basically 1:1 with CUDA: https://github.com/amd/amd-lab-notes/blob/release/hipify%2Fs...


Does it support graphical GPU debugging for C++, Fortran, and Python JIT GPU code?

Otherwise it isn't 1:1 with CUDA - and that's without counting everything else in the CUDA ecosystem.



All those are far from the 1:1 CUDA experience.

FWIW, if you want frontier-level performance as of a few months back, DeepSeek v4 and K2.6 are there. There's almost zero chance you can run them locally, but you do have a choice in terms of providers.

Qwen-coder-next is considered SOTA for things you could actually run locally.


Associating dark matter with epicycles is unfair, but it's still a risk (CDM, WDM, SIDM, and probably more by next year). In that light, of course it has more observations (by virtue of creating branches to explain incompatible observations) - but the counterpoint is that it has made some predictions.

My stance is that anyone pointing at it in either light probably isn't taking everything into account. It's an incredibly immature theory space - are we going to get 20 more branches of it (making it modern epicycles), or are we going to see one of the current branches pay off?


You only allocate on boxed futures, which are much rarer than naked futures - they're generally only used where object safety (essentially dyn support) is required. Even then, some workarounds exist.

Edit: and tasks.
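
Edit 2: a minimal sketch of the difference (std only; the names are made up for illustration):

    use std::future::Future;
    use std::pin::Pin;

    // A plain async fn returns an anonymous state machine: no heap
    // allocation anywhere, it lives wherever you store it.
    async fn fetch_len(s: &str) -> usize {
        s.len()
    }

    // The Box only appears when you need type erasure (dyn), e.g. to
    // store differently-typed futures behind one interface.
    type BoxedFut = Pin<Box<dyn Future<Output = usize> + Send>>;

    fn erased(s: &'static str) -> BoxedFut {
        Box::pin(fetch_len(s)) // <- this is the allocation
    }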


Tokio is a general-purpose async runtime. Much the same could probably be said for async-std (except, IIRC, they do have a barebones reactor for you to build your own on). In general, a general-purpose async runtime will do worse at highly specific tasks than a purpose-built one (especially with e.g. NUMA).

I think avoiding async entirely might be a mistake, and I'm not entirely convinced that anything better than a general-purpose async runtime exists for a JS runtime (it is itself general purpose, after all).

Avoiding std::fs is fucking bizarre to me: it's completely sync and is a really lightweight abstraction over syscalls.
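
For what it's worth, the entire ceremony looks like this (a trivial sketch; reading Cargo.toml purely as an example) - read_to_string bottoms out in plain open/read/close syscalls:

    use std::fs;
    use std::io;

    // Fully synchronous: no runtime, no reactor, just syscalls.
    fn load(path: &str) -> io::Result<String> {
        fs::read_to_string(path)
    }

    fn main() -> io::Result<()> {
        let text = load("Cargo.toml")?;
        println!("{} bytes", text.len());
        Ok(())
    }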


My guess is they want to do async I/O as part of their event loop explicitly, and blocking a thread in a syscall waiting for an IOP (a la std::fs) isn't the vibe.

Ah good point, complete brain fart on my part.
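
Edit: for completeness, the usual escape hatch on a general-purpose runtime is to shunt the blocking call onto a separate thread pool. A minimal sketch with tokio (the file name is hypothetical; assumes tokio with the "full" feature set):

    use tokio::task;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Run the blocking std::fs call on tokio's blocking pool so the
        // event loop's worker threads stay free to poll other futures.
        let contents = task::spawn_blocking(|| {
            std::fs::read_to_string("config.toml")
        })
        .await??; // first ? is the JoinError, second ? is the io::Error

        println!("read {} bytes", contents.len());
        Ok(())
    }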
