
Can't believe you're getting downvoted for one of Ireland's greatest cultural contributions. Behold The Rubberbandits' "Horse Outside" (helpfully timestamped to the lyric in question):

https://www.youtube.com/watch?v=ljPFZrRD3J8&t=85s


Nvidia for compatibility, and as much VRAM as you can afford. Shouldn't be hard to find a 3090 / Ti in your price range. I have had decent success with a base 3080 but the 10GB really limits the models you can run



Hoping for something in WSL land.


`vmIdleTimeout` in .wslconfig might be an option? Win 11 only though
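If I remember the docs right, it goes in the [wsl2] section of %UserProfile%\.wslconfig and takes a value in milliseconds - roughly something like:

  [wsl2]
  # shut the WSL2 VM down after ~60s of idle (value is in milliseconds)
  vmIdleTimeout=60000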


If you're able to install PowerToys it includes a utility for this

https://learn.microsoft.com/en-us/windows/powertoys/awake


If anyone is interested in using Foliate through WSL, https://opticos.github.io/openinwsl/ is great - it lets you double-click a file in Windows, then Foliate in WSL2 launches to view it


I'm really curious as to why Apple has been unable to reproduce their leap in CPUs in the GPU space.

It's not exactly surprising when Nvidia parts handily beat the M1/M2, but when both Qualcomm and MediaTek have better GPU performance _and_ efficiency [0], something is up, especially given just how far ahead Apple has been in mobile CPUs

[0] https://twitter.com/Golden_Reviewer/status/16056046174164295...


They have been designing the CPU since the A4; the CPU success didn't materialize from nothing - the M1 is the 10th gen.

They have only been designing the GPU since A11.


No, they started designing them earlier. The A11 was the first one they publicly claimed to be fully in-house, but they were substantially (though not wholly) Apple-designed as early as the A8, and they did significant tweaking to the generations before that.


I wonder how closely related their GPU is to PowerVR these days as well. With both PowerVR and the Asahi GPU driver to compare against, it would be interesting to see whether any of the design still resembles PowerVR.


> I'm really curious as to why Apple has been unable to reproduce their leap in CPUs in the GPU space.

GPUs are highly parallelized and specialized systems. The workloads are already optimized for the GPU, whereas a CPU is optimized to deal with more arbitrary workloads (with things like branch prediction, superscalar architecture, etc.).

So you could say that, without creating new instructions to represent the workload better, there is a fixed amount of digital logic needed to perform the given work, and that translates to a fixed amount of power draw on a particular fabrication process.

So Apple could throw more transistors at the problem (with a memory bus that can support the extra need), but the same amount of work would still take the same amount of power and generate the same amount of heat. It is usually far easier and more efficient to create dedicated logic for particular common problems, such as certain ML operations or hardware video encoding/decoding.

> It's not exactly surprising when Nvidia parts handily beat the M1/M2, but when both Qualcomm and Mediatek have better GPU performance _and_ efficiency [0]

Benchmarks are highly subjective, so I'd wait for more reviews (preferably by people with more established reputations, and perhaps a website) - reviewers who might try to determine _why_ one platform is doing better than another.

GPU benchmarks are even more so, because again the workloads are targeted toward the GPU, while the GPU is also optimized for handling particular workloads. This means that benchmarks can be apples-to-oranges comparisons - even before you find out that a given benchmark was optimized differently for different platforms.

There is also of course the reality that some vendors will optimize their code for the benchmarks specifically, going as far as to overclock the chip or to skip requested instructions when a particular benchmark is detected.


The thing is that mobile GPUs are hardly utilized unless they end up in something like the Oculus Quest or Nintendo Switch.


Is Apple really that far ahead in mobile CPU, or is it just a node issue?

https://www.notebookcheck.net/Qualcomm-Snapdragon-8-Gen-2-be...


Does Apple need to catch up with Qualcomm and MediaTek in terms of raw GPU performance when Apple can optimize the software and APIs given to developers to work on its hardware? Or am I really out of date, and is there public evidence of Qualcomm and MediaTek outperforming Apple's hardware in real-world workloads?

Nvidia primarily makes add-on GPUs, if I understand their business correctly. Apple integrated a GPU onto its M2 (or whichever chip is used in their Studio) that performs comparably to the 3060, and even beat the 3090 in some benchmarks/workloads. I think that's pretty impressive.


This isn’t at all true, despite Apple’s marketing. The M2 gets trounced by the 3060 in any graphical benchmark other than power draw; comparing it to a 3090 is just laughable.

https://nanoreview.net/en/laptop-compare/razer-blade-15-2022...

Like, I absolutely love my M2 Air - it’s the best laptop I’ve ever owned - but it is definitely not a competitive gaming machine.


> comparing it to a 3090 is just laughable.

The idea of trying to fit a 3090 in a laptop is amusing.


That’s the point the comment you’re replying to is making, just with more words.


The original topic of conversation was:

> Nvidia primarily makes add-on GPUs, if I understand their business correctly. Apple integrated a GPU onto its M2 (or whichever chip is used in their Studio) that performs comparably to the 3060, and even beat the 3090 in some benchmarks/workloads. I think that's pretty impressive.

The form-factor of the 3090 isn't relevant.


Kind of like how the form factor of the space shuttle doesn't matter when comparing its peak speed and cargo capacity to my pickup truck.


It's more the fact that we're talking about Apple catching up at all. Android SoCs have been generationally behind Apple for a long time (and MediaTek in particular as a "budget" option), but now in the GPU space that is reversed.

The situation on the desktop/laptop is muddied by CUDA and other Nvidia-exclusive tech - while the M1/M2s indeed trade blows with laptop parts like the 3060 in some specific tasks, once CUDA comes into play Nvidia walks it (unfortunately, IMO - even AMD can't compete there, and it's holding the industry back)


> beat the the 3090 in some benchmarks/workloads

Did it actually do that or was it in the "performance per watt" comparison?


Nah, it gets 10x fewer fps in anything, if you can even run it. Laughable comparison, really, given the disparity between the two.

This isn't an ARM vs AMD64 competition where Apple has a 40 year instruction set advantage it can exploit. The 3090 is nearly state of the art.


The official marketing comparison was to a mobile 3090, not a desktop 3090. Completely different GPU.


There isn't a 10x performance difference between the desktop and the mobile 3090, but nice try, Tim.


I think the M1 was a big boost because of RISC - compilers have gotten really good, and CPU pipelining has been well researched/developed, so there was a lot of performance to be harvested by putting everything together.

GPUs, on the other hand, are already RISC. So where is Apple going to improve? Not by integrating everything: lots of companies have done that for years and years. If you want to do more with the same transistors, you'll need an even more clever execution model...


This is not correct. The M1 is designed to take advantage of being RISC, but that doesn't mean it was fast because it went RISC.


As opposed to x86 processors, which are designed for CISC but just not to take advantage of it?


No, they do. It's just that x86 processors are currently built by people who did a worse job overall.


GPUs are more power-dense. Battery power or thermal envelopes limit what they can pull off.


In 22H2 it's fully keyboard-drivable:

  - Win+Z brings up the layout menu
  - number keys select the layout, then where in the chosen layout the current window should go
  - arrow keys then let you select the other windows to complete the tiling
So if I want Firefox (current window) and Slack side by side:

  - Win-Z inside FF
  - 1 selects side-by-side
  - 1 again snaps FF to the left (2 would snap to right, and so on for more complex layouts)
  - Slack is the first suggested window (MRU), so Space to snap right
It's not as quick as the mouse interface yet - especially with the mouse improvements MS made - but seems like it could be easily automated with e.g. AutoHotkey (rough sketch below)
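For what it's worth, here's that Firefox/Slack sequence sketched in Python with pyautogui rather than AutoHotkey - untested, and the key names and delays may need tweaking on your machine:

  import time
  import pyautogui

  # Replays the Win+Z flow described above: open the layout menu,
  # pick the side-by-side layout, snap left, accept the MRU suggestion.
  pyautogui.hotkey('winleft', 'z')
  time.sleep(0.3)
  pyautogui.press('1')      # select the side-by-side layout
  time.sleep(0.2)
  pyautogui.press('1')      # snap the current window to the left zone
  time.sleep(0.3)
  pyautogui.press('space')  # accept the first suggested window for the right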


The Linux/not-Windows instructions on https://github.com/hlky/stable-diffusion/wiki/Docker-Guide worked well for me using WSL2 with nvidia-docker


It's available in Windows 10 (Pro or Enterprise) and all Windows 11 versions; you can activate it using 'Turn Windows features on or off':

https://docs.microsoft.com/en-us/windows/security/threat-pro...

You do need hardware that can support Hyper-V


Original title - "The best part of Windows 11 is a revamped Windows Subsystem for Linux"

Why remove "The best"? Feels like unnecessary editorializing


I've noticed this too around here. Titles will be changed and their meaning altered, effectively putting words in the original author's mouth. Seems kinda unethical to me, but it happens all the time.


HN automatically edits out some patterns from titles. If it happens to you and you want to put it back, you can just edit the title again after submission and it will stick.


Check the guidelines - they encourage neutering headlines in multiple ways; I believe the idea is that people can form their own judgement on whether it's good or bad.


I believe it is intended to limit click-bait. I'm happy HN does this kind of stuff, because it increases the quality of discussion.


Because it's an opinion and doesn't give any useful information to anyone reading the thread/article.


But it's the opinion of the article author - the article that is being linked to. And removing this makes the title nonsensical.

Unless the title is too long or is obviously scummy clickbait, IMHO titles shouldn't be editorialised.


I found the headline here actually incomprehensible. Had Microsoft built Windows 11 on top of WSL? Built part of it that way?

If you wanted to remove the editorializing, what you'd mean is "Windows 11 includes a revamped Windows Subsystem for Linux". But the article is editorializing - they don't like the rest of Windows 11 and say so.


Without "the best" the title reads to me as that they somehow removed chunk of original Windows and replaced it with WSL. That's why I came to this topic. I thought to myself - really - so they now replacing windows code with Linux? Original title is clear and not clickbaity.


Same. I think the modified title is more clickbaity because it's confusing.

