Hacker News | hu3's comments

Can't agree. This name has logical meaning.

Apple might have convinced some gullible customers that this was something new.

But to the rest of the world, variable refresh rate had existed for years by then. As with most Apple "inventions".

In this case the patent goes back to 1982: https://patents.google.com/patent/US4511892A/en


Indeed but:

1) That is comparatively slow.

2) It can also be done, even more simply, with SoTA models over an API.


Right, this works with any models. To me, the most interesting part is that you can use a smaller model that you could run locally to get results comparable to SoTA models. Ultimately, I'd far prefer running local, even if slower, for the simple reason of having sovereignty over my data.

Being reliant on a service means you have to share whatever you're working on with the service, and the service provider decides what you can do and can change their terms of service on a whim.

If locally running models can get to the point where they can be used as a daily driver, that solves the problem.


Can confirm. I have a Ryzen 9800X3D with RTX 5070, 128GB of RAM and TBs of Gen 5.0 NVMes.

Not only is it screamingly fast (the fastest on earth for some workloads), but I can upgrade it easily. And it's dead silent too.

The best thing is it runs native Linux and it just works.


And a 9800X3D is not even the fastest CPU out there, nor even the fastest CPU you could use with your specific motherboard. A 9950X3D is essentially two of the 9800X3Ds combined, and would be a drop-in replacement.

Wrong. See benchmarks. Many games and single-threaded workloads run faster on 9800X3D.

There are various reasons for this, major one being that the 9800X3D has more L3 cache per thread than the 9950X3D.

And it's also wrong that a 9950X3D is two 9800X3Ds combined. A quick glance at the specs would tell you that: the 9950X3D has 128MB of L3 cache shared between more threads, while the 9800X3D has 96MB for half the threads, so more L3 per thread.
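To put numbers on the per-thread claim, here's a quick sketch using AMD's published totals. One simplification to note: the 9950X3D's 128MB of L3 is actually split across two CCDs (96MB + 32MB), so a single thread never sees the whole pool; the averages below are just a back-of-the-envelope comparison.

```python
# Back-of-the-envelope L3-per-thread comparison, using AMD's published totals.
# Simplification: treats each chip's L3 as one uniform pool, which the
# 9950X3D's split CCD layout does not actually provide.
cpus = {
    "9800X3D": {"l3_mb": 96, "threads": 16},
    "9950X3D": {"l3_mb": 128, "threads": 32},
}

for name, spec in cpus.items():
    per_thread = spec["l3_mb"] / spec["threads"]
    print(f"{name}: {per_thread:.1f} MB of L3 per thread")
# 9800X3D: 6.0 MB of L3 per thread
# 9950X3D: 4.0 MB of L3 per thread
```

So on paper the 9800X3D ends up with 50% more L3 per thread, which is consistent with it winning cache-sensitive, lightly threaded workloads like games.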

And even when a 9800X3D loses to a 9950X3D in games, it's usually within a 1-4% margin.

It's a monster for games and some workloads.

It's funny that people who blindly buy a 9950X3D for gaming+office workloads without checking benchmarks often end up with similar or slower performance.

It's much smarter to spend the price difference on other hardware to speed up other things, such as faster NVMe drives, efficient silent cooling, a faster GPU, etc.


Source? They actually just added 16-bit support, something not even Windows supports anymore.

> I don’t believe that’s true. Things are moving constantly, and in the right direction.

Hah! I'll use that argument if I ever get PIP'd.

No but seriously, constantly moving doesn't mean fast enough. Swift took too long to get cross-platform support.

And it is still uber-slow to compile, to the point of language servers giving up on analyzing it and timing out.


Not just uber-slow to compile, because as a Rust dev I could take that. But it rejects correct programs without telling you why! The compiler will just time out and ask you to refactor so it has a better shot. I understand that kind of pathological behavior is present in many compilers, but I hit it way too often in Swift on seemingly benign code.

Did that happen recently (the compiler just bailing out)?

Because they got much better at that, and it’s been a long while since that happened to me. Like “I don’t even remember when was the last time it happened” long.


The last time I used Swift was 4 months ago. It was recent enough that I'm still salty about it! :P

If cross platform support took so long, it's a major red flag.

Plus Swift is arguably too unnecessarily complex now.

And there's Rust/Zig so why use Swift for low level?



> Plus Swift is arguably too unnecessarily complex now.

I would argue the allegations of complexity against Swift are greatly exaggerated. I find the language to be very elegant and expressive in syntax, high in readability, and fairly terse. Other than that, Swift feels near identical to every other OoP language I have used.


I'll steal this for my projects' bug template! /s

"Please consider cosmic rays hitting the computer, defective ram chips, weird modifications of the system before submitting the bug. Unlesss you explicitly acknowledge that, your bug will be closed automatically in 30 days. Thank you very much"


They could just... not close the bug?

Mozilla is famous for having 20-year-old bug reports that get fixed after all that time.


> What are you building? Does the tool help or hurt?

> People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.

I'm assuming you're saying these tools hurt more than help?

In that case I disagree so much that I'm struggling to reply. It's like trying to convince someone that the Earth is not flat, to my mental model.

PHP, Ruby and VB have more successful code written in them than all current academic or disproportionately hyped languages will ever have combined.

And there's STILL software being written in them. I did Visual Basic consulting for a greenfield project last week, despite my current expertise being more in Go, Python, C# and C. And there's RoR work lined up next. So the presence gap between these helpful tools and other minor but over-indexed tools is still increasing.

It's easy to think that the languages one sees more often on HN are the prevalent ones, but they are just the tip of the iceberg.

