
Happy to say so.... just sprinkle in a little caldav ;)


You can't civilly sue the gun maker for the actions of their users: https://en.wikipedia.org/wiki/Protection_of_Lawful_Commerce_...


Anymore. You only need a law because it was happening.


> The play/pause media key automatically starting the Music app

$ sudo chmod -x /Applications/Music.app

This is a common complaint I've heard from those coming from Windows/Android and even Linux... It's Unix.

Also, the play/pause key works with other apps. Other default apps can be set.

Many other valid points in this article and thread, but I don't know why people expect the default behavior of the play button to behave any differently when no media is playing. Music.app may have an upsell, but it's still a place to organize your own personal music library. No cloud or SaaS required.


Play/pause always starts the Music app, regardless of your preferred music player. It's annoying for those of us who don't use the built-in Music app at all.

Your command doesn't work. The music app actually lives at `/System/Applications/Music.app`. And since Monterey, I think, even sudo won't let you modify data in that folder because of macOS's built-in protections that cannot be disabled:

  chmod: Unable to change file mode on /System/Applications/Music.app: Read-only file system
Even disabling SIP and booting into safe mode doesn't let you do this.
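For the curious, you can see the seal directly. A quick check (output is what a typical Big Sur-or-later machine shows; the device name varies per machine):

  # the root filesystem is a sealed, read-only APFS snapshot
  $ mount | grep ' / '
  /dev/disk3s1s1 on / (apfs, sealed, local, read-only, journaled)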


If you haven't seen it yet, NoTunes[1] does a great job of fixing the hijacking of the play button or when bluetooth headphones are connected or disconnected.

1: https://github.com/tombonez/noTunes
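If memory serves it's also installable via Homebrew (cask name from memory, so double-check against the repo):

  $ brew install --cask notunes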


I think the complaint is that it shouldn't hijack the button in the first place.


There are solutions, but on some level this is no different from telling Windows users to just tweak some registry keys. It all leads to the same place.


Solutions aplenty, yes - just not for, say, my mother. Not providing a normal way for users to select an alternative default (like competitor Spotify) is disproportionately inconvenient relative to the benefits for Apple.

Accidental music app launches from my pinky finger brushing the play/pause touchbar control strip button... well, let's just say it happened more than once. Absolutely maddening.


Firstly, .app is a bundle, so

sudo chmod -x /Applications/Music.app/Contents/MacOS/Music

Secondly, on modern macOS you'll have to disable system integrity protection for this to have a chance of succeeding.
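For reference, a sketch of what that involves on Big Sur and later (run from macOS Recovery; relaxing these protections has real security costs):

  # from macOS Recovery, check and then relax the protections
  csrutil status
  csrutil disable
  csrutil authenticated-root disable

Even then the system volume is a sealed snapshot, so you'd still have to mount a writable copy and bless it before a chmod would stick.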


> It's Unix.

Unix has ads?


I would highly recommend any readers on here who have ADHD to read "Scattered: How Attention Deficit Disorder Originates" by Dr. Gabor Maté[1]

It provides a compelling argument as to the root cause of ADHD.

[1] https://drgabormate.com/book/scattered-minds/


I would recommend taking this book, and similar ones, with a grain of salt. Just going off the summary, it claims to "prove" that ADHD isn't heritable, which goes against a large body of well-established research showing strong heritability of ADHD.

Any book that claims to uncover the "root cause" of something as varied and complex as ADHD likely hasn't. It might still have great insights, but presenting the book this way gives it a bit of a "cure ADHD with this one simple trick!" undertone.

Guessing about the content of the book, it likely discusses healing emotional regulation by some means. If so, I'd agree it's a valuable aspect of treating ADHD, as most ADHD'ers also suffer significant emotional damage. That damage and the accompanying maladaptive coping are often far more impactful than ADHD itself.


Maté says the heritable trait is a kind of nervous sensitivity that develops into ADHD in certain early childhood environments, especially when the caretakers are stressed, depressed, or otherwise unable to provide the infant with the stable, responsive gaze that they need. I don't know how true that is, but that's the idea in "Scattered Minds."


He literally starts the book by saying that he didn't know much about ADHD until four or five years before writing it, when he and his three(?) sons were all diagnosed with it, and then (or so I gather, because I stopped reading at that point) he comes up with some novel theories contradicting the experts in the field.

There's so much evidence-based material to read from people who have spent decades studying ADHD that it's a shame to waste time on Maté instead. He may be an expert in some things, just not this.


Nice nominative determinism - maté is about the most caffeinated thing I've had in my life.


Agree with the other responses. Maté's book, like his other works, resonates intuitively - a humanistic shout in this cold and sick world.

Things are, however, more complex. He gets a lot of publicity, e.g. with recent Joe Rogan interview clips on the topic being spammed to me by YouTube's rec engine. Mental health is a topic many people take intellectually lazy stances on, in one direction or another, and this is somewhat worrying.

This book is 20 years old; I'd imagine the scientific & clinical understanding of ADHD has progressed since. Opposing views are of course important for progress, but I'd like to understand his thinking in the wider context of psychiatry. It would be interesting to see a rigorous dissection of his ADHD points.


I was really excited to see these benchmarks until I realized:

1. The Ryzen setup has twice the ram

2. The Ryzen setup has twice the cpu cores

3. The Ryzen setup has 3x the power envelope (in watts) of the M2[1]

4. They are using Asahi Linux on the M2.[2]

Too bad I missed the window to downvote.

If this said: Apple M2 (Linux/8gb/4c) vs. Ryzen 7 Pro 6850U (Linux/16gb/8c)... it would have been more honest, and potentially interesting, as it would also be an indication of the progress of Asahi Linux.

(edited to add line breaks)

[1]: Ryzen 46w (not sure if this includes the GPU) vs M2 15w (not sure if this includes the GPU)

[2]: Asahi Linux is super cool, but most M2 chips in the wild are not running it. So it's not a useful comparison.


It all makes a lot more sense if you remember that this is Phoronix we're talking about. This is the rest of today's posts:

  * Radeon ROCm 5.2.3 Released (...)
  * Mesa 22.2-rc3 Released (...)
  * [bootloader project] Updated With Improved SMBIOS Support
  * GCC 12.2 Compiler Released With 70+ Bug Fixes
They're a Linux-centric outlet through and through, and the Phoronix Test Suite (the benchmark they're running) is Linux-based. Of course they're running the benchmarks on Linux - that's the whole point!

As for the hardware, the Air is just the entry level model, and AFAICT the X13 they're using is roughly the same price as the Air. About as fair a comparison as you could make it. They also specifically call out the power thing:

> Due to the Apple M2 currently lacking any power/temperature sensor support under Linux, this is simply looking at the raw performance of the M2 and Ryzen 7 PRO 6850U with not being able to accurately compare the M2 power efficiency / performance-per-Watt at this time.

It's important to look at the context: Lenovo laptops have been a staple in the Linux world for a long time, and people in the Linux world are genuinely excited about the Apple Silicon laptops, hell even Linus is running Asahi. For the target audience, this comparison is exactly the benchmark they want.


> They also specifically call out the power thing:

saying "we can't measure the power on M2" isn't the same thing as pointing out that in broad terms the M2 is only using 1/3 of the power of the 6850U. These are really different power classes and yeah, you'd expect the processor with triple the power budget to pull ahead.

Triple isn't my number, but Ryzen is allowed to boost extremely high during short tests like the ones Phoronix is running, whereas an MBA is always 15W max, period, even in max clock states.


This is just completely false. You have no idea what the actual max power consumption of an MBA is.

15W is its TDP, and various companies release chips that run at triple their TDP - Apple included, repeatedly, in the past. Saying it's never going to exceed its TDP is just blatantly ridiculous, and Phoronix were completely right to treat it as a useless number (which it is).

The only reliable way to measure power draw is either to monitor the exact voltages and amperages on each power lane - which is not made available by Apple - or to wire into the motherboard. Anything else is not serious, least of all taking the TDP at its word.


I think you're overstating the issues here, and it's necessary to take what you can get in hard-to-do comparisons like this. There are caveats, certainly.

The Ryzen chip's TDP is configurable between 15W and 28W, which would include the GPU (but I doubt most of these tests touch the GPU anyhow). There will be power usage differences, but the TDP difference won't be huge - most differences will be due to utilization differences, which might be large.

The RAM issue usually doesn't affect most benchmarks, but it's something to consider if there are odd outliers.

The CPU core count number represents the reality of these chips (but M2 has a 4+4 config, apparently?).

Using Linux is likely far from ideal, but probably necessary for them to run such an extensive and mostly comparable set of benchmarks. It's a significant caveat, but for CPU-limited benchmarks it's hopefully of reasonably limited impact (on the order of 30% on average rather than 300%, say). OS differences tend to be larger for I/O, GPU, and some niche things like context-switching benchmarks. Of course, the power scheduler stuff matters.

It's worth pointing out that Phoronix itself (on different hardware and slightly older OS versions) looked at some Linux vs. macOS benchmarks: https://www.phoronix.com/review/apple-m1-linux-perf - and while there were a few unsurprising macOS wins, there also were some (far fewer) Linux wins.

I think the benchmark is very interesting for what it is. Despite limitations, it _is_ informative. You can't always get a shrinkwrapped answer to whatever question you really have; so extra data-points such as this are helpful.

You know what they say: there are lies, damn lies, and benchmarks. Or something like that; caveat lector anyhow.


> Using Linux is likely far from ideal,

This is Phoronix. Using Linux is the whole point, that's their audience.


Yep; I know. But for most people interested in this comparison that's nevertheless likely to make the comparison less representative of their workloads.

Hats off to Phoronix; it's not meant as a criticism - just an expectation that the benchmark results may not satisfy everyone.


Feels like it makes perfect sense to compare performance on the same system. Is someone going to switch systems for some minor performance gains? I really doubt that. Usually it's something else that motivates people to switch.

That said, people pick different manufacturers for their next device all the time (at least in the Linux/Windows world), and having a performance comparison for these devices on your favorite system helps you choose.


> You know what they say: there are lies, damn lies, and benchmarks.

Also: all benchmarks are flawed, some are useful. This one is definitely useful.


> The Ryzen chip's TDP is configurable between 15W and 28W,

TDP isn't a relevant number since the Ryzens will exceed their "configured TDP" for an unlimited period of time.

see, eg, Anandtech's Tiger Lake review where a "15W cTDP" Renoir chip pulls 23W, and it is allowed to sustain that boost for an unlimited period of time, going against Intel chips that actually obey the 15W configured sustained-power limits...

https://images.anandtech.com/doci/16084/Power%20-%2015W%20Co...

The relevant number for comparing Ryzen against Intel/Apple SKUs is what AMD calls the "PPT" which is their sustained-boost TDP. The "cTDP" number is basically pure marketing these days, nobody is running at base clocks outside of contrived scenarios.

(The Intel boost TDP is allowed to exceed the rated TDP too, but they only allow it for a limited time - it's a "sprint" feature on their chips - whereas AMD has just turned it into their new sustained TDP number, with the advertised TDP being essentially fake for marketing purposes: it's specified at base clocks that nobody ever runs.)


As far as I understand it, it's up to the device manufacturer to pick the exact TDP. It's by design that some devices might structurally exceed the base cTDP.

I don't know the figures for Lenovo's ThinkPad, which is why I quoted the full range AMD offers. And even then, TDP is just a long-term target for the purposes of cooling - workload, implementation and environmental (i.e. temperature) details might leave two chips with identical TDPs nevertheless consuming vastly different amounts of energy.

In any case, we simply don't know how much power these systems used under this load. The expectation certainly is that the AMD system used more, but it's not clear how much more.


The focus on _power_, as opposed to _energy_, always annoyed me. When I'm using a notebook away from home, what matters to me is how long I can keep working.

Phoronix should have measured how long each notebook could run the benchmark suite sustained and scaled the benchmark results accordingly.
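A quick back-of-the-envelope with made-up numbers: a 15 W chip that takes 120 s for a task uses 15 × 120 = 1800 J, while a 45 W chip that finishes in 50 s uses 2250 J. The faster machine wins the benchmark but drains more battery for the same job - and it can just as easily go the other way, which is why energy per task (and thus sustained runtime) is the number that matters away from the wall.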

updated wording.


There is no sane universe in which we lie about how fast we can do something in order to account for differences in how long we can do it. If you have two models identical save for one having twice the battery, we don't account for this by pretending the lesser-endowed unit is actually only half as fast.

While these things are interrelated, you actually want both pieces of data, and you don't want them complected by forging some number made useless and incomparable by incorporating both facets into one muddled whole.


15-20w is never the peak power draw. They will actually hit as much as 50-60w with minimal GPU usage, just from trying to hit that insane clockspeed.

M1 maxed out at 22w according to Anandtech and the max shouldn't be too much different for the M2.


Actually Phoronix has power benchmarks for the ACPI power profiles[1]. Seems like a boost to ~28W or so with a long term average of 22-23W.

(Note that this is SoC package power rather than the full laptop, but that is pretty typical - so no idea what the 22W figure was measuring, for comparison.)

[1] https://www.phoronix.com/review/ryzen7-6850u-acpi


I don't believe those numbers.

The 4800U hit north of 65w in a NUC device[0]. The 5800U hits 52w in a laptop[1]. AMD runs their chips hot.

TSMC N6 is a 7nm++ node. The even more advanced 5nm node only offers either 15% performance increase OR 30% power reduction vs N7. The 6850 clocks 10% higher, so the power shouldn't be dropping AT ALL.

The 6850H has an identical turbo clockspeed of 4.7GHz and a TDP of 45w, with real-world power consumption, according to other reviewers, being north of 65w. The base clocks for these chips are 2.7 and 3.5GHz respectively.

If they really are getting those numbers, they are certainly only at base clockspeeds (2.7GHz), which raises the impossible question of how Zen 3 suddenly got such a huge performance-per-clock advantage going from desktop to mobile.

The numbers simply do not add up.

[0] https://www.anandtech.com/show/16236/asrock-4x4-box4800u-ren...

[1] https://www.notebookcheck.net/R7-5800U-vs-E-2186M-vs-R7-PRO-...


As the sibling said, those numbers are very configurable. E.g. in the x86 handheld scene there are tinkerers running their U/P-class chips at anywhere from 5W configured to 30+W configured (and with the appropriate cooling they could totally go to 60W+ if configured to do so; silicon-wise there is very little difference between AMD's U- and H-class chips, mainly just binning and some configuration in the firmware). Typical reasons to configure this would be e.g. cooling or battery restrictions from the OEM.

Furthermore note there is a difference between SoC package power and power pulled from the wall. That depends on the rest of the device, but 5-10W isn't unreasonable. (So e.g. at 22W package power, pulling ~32W from battery is pretty common). Looking at the detailed graph in the NUC review, that difference is particularly large though, the package only pulls <20W in steady state and maybe 30W at boost[0], but this device somehow seems to have significant power draw coming from somewhere else, this is pretty atypical wrt laptops & battery draw at least.

I'm not sure what performance per clock advantage going from desktop to mobile you're referring to? The mobile chips are slower due to the lower max boost and power limits, but due to non-linear power scaling it tends to be not that* much worse than the desktop parts.

As an aside I find the Anandtech Zen3 review[1] a pretty good resource when I want to have a clue about reasonable clock/power expectations on zen3 (though it is only a single workload)

(Also not sure where the 52W claim comes from, my searching on the linked page seems to yield no results?)

*: With the notable exception that pre-6000 series AMD had some horrible delay clocking up the cores when on battery, hitting short tests like geekbench. That said even on AC the chips should be adhering to configured power limits.

[0] https://images.anandtech.com/doci/16236/wm-aida_power.png [1] https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...


If that NUC CPU is using 30w, then what else is using the other 40w? Where is that heat dissipating to when the ONLY heatsink is on the CPU itself?

What is missing in that story? Is it more likely that a mystery chip is dissipating 40w into that tiny chassis or that something isn't reported correctly?

Perhaps the answer is something to do with the chipset using a massive 40w of power. That isn't likely as even the actively-cooled desktop chipsets don't use that much power. Even if that's true, not only does the dissipation issue still go unanswered, but the SoC it is being compared to also moves everything on-chip, so adding in that power consumption is necessary for an equal comparison.

There's no suitable answer to this problem other than the reported numbers being wrong.

> I'm not sure what performance per clock advantage going from desktop to mobile you're referring to?

M2 peak performance at 3.5GHz is equal to or better than desktop Zen 3 chips running at 5+GHz.

The second link you post shows the 5950X (with the best binned chiplets with the lowest power/core numbers) requiring 7.5w per core at 3.7GHz. That means you could barely get HALF your cores running at 3.7GHz at 30w. All of them at that speed (1GHz less than turbo) would take around 60w of power (by the way, 6850H has a 45w TDP and claims base clocks of 3.7GHz, so something doesn't quite match up there either).

Even hitting 8 cores at 2.7GHz seems to be a stretch within that 28w TDP limit. EPYC with its top-binned chiplets requires around 4-4.5w per core to hit 2.45GHz base clocks.

Adding between 700MHz and 1GHz to the clockspeed plus a well-known 60% faster per clock design plus cutting cache in half for Zen 3 mobile (which increases total power usage too) makes the benchmark results super fishy as they would require massive increases in IPC (or massive crippling of the M2 chip). This is of course not true, which indicates something is wrong somewhere.


Firmware can set the power limit to whatever it wants. With the exact same processor, one computer may hit 65 W while a different one is limited to 28 W. There's also huge diminishing utility when it comes to power, so it's possible that the chip is only slightly slower at 28 W than at higher power.


That diminishing return is much more true at 150w than at 50w.

https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

When you look at per-core performance of desktop chips, things aren't rosy. Looking at the 5950 (most power efficient per core), it's taking almost 7.5w PER CORE to hit 3.7GHz. It hits 4.7GHz (peak frequency of 6850U) on just 4 cores while using 110w to do it. That's almost 23w per core.

Based on these numbers, taking 8 cores up to 3.7GHz would require 60w of power, and that's just to hit the "base" frequencies of the 6850H, which they say can be done with a TDP 25% lower than that. (Maybe dropping half the L3 cache makes that possible, but at an overall performance loss, as moving data from RAM takes more energy than keeping it in L3 - and I don't know that losing half the cache saves more energy than the addition of a GPU adds back to the chip.)

Even if we assume a 12% energy efficiency bump (two thirds of the 18% transistor reduction and roughly inline with the 30% efficiency at N5 with a 45% transistor reduction), we aren't doing anything multithreaded anywhere near peak frequency. In fact, we aren't doing anything multithreaded past those terrible 2.7GHz base clocks at best.

Meanwhile, the M2 TDP didn't really increase. They can hit 3.5GHz on 4 cores while using 60% less energy than those AMD cores at 3.7GHz despite being about twice as wide and over 60% faster per clock. Back of the envelope calculations seem to indicate that 4 M2 big cores at 3.5GHz should be more than 2x as fast as 8 Zen 3 cores at 2.7GHz without even using little cores and completely ignoring Amdahl's Law.

Of course, all this just goes back to the question of how questionable this entire article really is.


If I remember correctly, benchmarks of the ryzen 6800u (not quite this chip, but close) showed that the perf/watt sweet spot was probably _less_ than 15W. Even at 28W there were clear signs of diminishing returns, and anything north of 50W is largely pointless.

The intel competition in the form of alder lake scales for much longer if you just pump more juice into it. Scaling varies from chip to chip.

Edit - I might be misremembering, because 6900HS's sweet spot was at 20W: https://www.anandtech.com/show/17276/amd-ryzen-9-6900hs-remb... But anyhow, they also noted that "going from 50 W to 80 W is a 60% power increase for only +375 MHz and only +7.7% increased score in the benchmark".

But TL;DR: 15-28W probably really is the ideal range for a chip like this. Which won't stop ODMs from pushing well beyond that, of course.


I highly doubt those 50 or 60 watt examples are representative of the power use of those U-series CPUs in a 13" laptop. Perhaps it has been configured differently in a NUC.

Anecdotally, my work laptop has a previous-generation Ryzen 7 5850U (8 cores, 16 logical) in a 14-inch ThinkPad chassis. I just tried a 16-thread xz compression on it a few times and eyeballed the average power use with powertop. The baseline power with no load (except for what little idle load there was from whatever I had open) of the entire system was around 5 W with display brightness at a minimum. The battery discharge rate rose to between 21.5 and 22.5 watts for the couple of minutes the compression took.

This was on battery since I had no tools for monitoring power use at hand otherwise. (Turbostat reports detailed per-core and per-package wattage from the CPU, at least on Intel, but the distro I've got for work apparently doesn't have it packaged, and I'm not going to hunt down a source package for this.)

However, the compression took pretty much exactly the same amount of time on AC, so I doubt having it on battery affected CPU clock rates significantly.
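The rough recipe, for anyone who wants to eyeball the same thing (the file name here is just a placeholder; powertop needs root, and the discharge-rate readout only shows up on battery):

  # terminal 1: watch the battery discharge rate
  $ sudo powertop

  # terminal 2: load all 16 threads with compression
  $ xz -9 -T16 -c some-big-file > /dev/null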

A 13" laptop chassis would also have serious trouble getting rid of 50 or 60 watts of prolonged heat production from the CPU.

Also, where did you get the 52 watts in a laptop figure? The only reference to 52 on that page w.r.t. the 5800U that I can find is a result of 51.9 points per watt, whatever those points are, in some kind of power efficiency test. The power consumption test above that reports 42.9 watts, measured with an external monitor, so probably including more than the CPU. I also can't immediately find a reference to what kind of chassis the CPU was in, laptop or otherwise.

> raises the impossible question of how Zen 3 suddenly got such a huge performance per clock advantage going from desktop to mobile

What kind of a PPC increase did Zen 3 get going mobile? I've only read about PPC comparisons between Zen 2 and Zen 3.


That's Intel's boost behavior; AMD is much more linear. My 4750U in performance mode never exceeds 25W and settles at 23W under sustained load.


What device is that chip in and at what frequencies is it running (by number of cores in use)?


Lenovo ThinkPad Gen1 AMD, Arch Linux, acpi platform_profile = performance

When running stress-ng, measured with turbostat:

- 4.18 GHz single thread consuming 9W

- 3 GHz 16 threads consuming 25W, ramping down to 2.94 / 23W after a couple of seconds

and when compiling Linux on 16 threads it stays at 2.85 GHz / 23W

In performance mode it does run at 80 degC under sustained load, so usually when undocked I just leave it in balanced mode where it's capped at 20W short / 15W long

Also worth noting is that in low-power mode (11W / 11W) it'll still hit the max single-core frequency possible.
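For reference, the setup is easy to reproduce (a sketch; turbostat needs root and is typically packaged with the kernel tools):

  # generate load on all 16 hardware threads for a minute
  $ stress-ng --cpu 16 --timeout 60s

  # in another terminal, watch package power and effective clocks
  $ sudo turbostat --quiet --interval 5 --show Busy%,Bzy_MHz,PkgWatt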


[citation needed]


> it would also be an indication of the progress of Asahi Linux.

Other than power management support, the progress of Asahi doesn't really impact CPU performance.

> Asahi Linux is super cool, but most M2 chips in the wild are not running it. So it's not a useful comparison.

Clearly this is aimed at an audience who wants to run Asahi. It's a perfectly useful comparison for that.

Prices weren't mentioned, but I suspect the prices of the two systems being benchmarked were similar. (Edit: I take that back, the AMD system looks considerably more expensive - it would be interesting to see a more price-matched comparison)

Maybe these benchmarks aren't useful for you personally, but I think a lot of people (myself included) consider it to be an interesting and useful comparison.


> The Ryzen setup has 3x the power envelope (in watts) of the M2[1]

TDP does not represent the power envelope of the chips. It's the sustained load target at best, but mostly even that is kinda made up and the rules don't matter. But for any short (where short is less than a minute or even 5 minutes) burst, that TDP figure is completely irrelevant. Nothing respects it. Also for single threaded tasks, neither one is even going to pull that high.

You have to compare actual power used during the specific test in question to make any such comparison. Otherwise, trying to judge power consumption by looking at TDP is like trying to judge performance by only looking at clock speeds. Within the same manufacturer's same generation you can kinda infer something useful. Other than that? Not a chance.


https://www.phoronix.com/review/ryzen7-6850u-acpi/6

It's also interesting that the maximum power consumption eked out of the Ryzen 7 Pro 6850U was a touch under 32W. Where did the parent comment's 46W number come from? Or the 50-60W in the comment below? They must be thinking of a different CPU/APU.


For a lot of Mac users, x86 = Skylake, as featured in the last Intel MacBook Pro - there's a lack of realisation that x86 has moved on.


Not only that, but also the MacBooks with very bad heat management (I've owned some I couldn't keep on my lap; also see Linus's and Louis's videos).


For small differences in TDP you kind of have a point, but 3x is relevant far more quickly.


But again it's not. TDP isn't a defined thing. There's no standard for it. It's literally a made up value by the SoC manufacturer, and in the case of laptops the laptop manufacturer even gets to fiddle with it.

That said, in this case it is a small difference regardless. The 6850U's TDP per AMD's spec is 15W-28W. That's bang on the same as the M2, basically. There's no 3x difference here. Unless there is - because, again, TDP is a made-up number with no formal definition.


I had assumed it was 3x simply because you didn't object to their figure.

This laptop actually uses a 28W TDP, so it's 2x not 3x - which is still significant in less than 60 seconds.


M1/M2 use around 20-22w MAX in CPU-only tests. AMD and Intel chips usually max out at around 50-60w real-world power.

That's hardly apples to apples...


> AMD and Intel chips usually max out at around 50-60w real-world power.

That's a sweepingly broad claim that's not at all well supported. AMD & Intel both make CPUs that top out far below 50-60w real-world power and also top out far, far above 50-60W real-world power.

From a different review of the same CPU (didn't check if same laptop, could be though) "The Ryzen 7 PRO 6850U with the Linux 5.18 kernel had a 18.4 Watt average (or 16.6 Watt average with Linux 5.19 Git) while the Ryzen 7 PRO 5850U average on Linux 5.18 was up at 21.96 Watts." https://www.phoronix.com/review/amd-ryzen7-6850u/8

So no, this isn't a 50-60W real-world power CPU. That's the non-U SKUs.

But if this AMD were a 50-60W CPU, that'd exactly prove my point that TDP is a stupid number to compare, and that you always, always have to look at actual power consumption (which this M2 vs. 6850U comparison didn't do because, as noted briefly in the intro, power monitoring of the M2 doesn't work on Asahi yet).


Here's the 4800U running in a tiny NUC at over 60w and the 5800U running at almost 52w. They absolutely DO hit very high real power usage.

TSMC N5 only offers 15% more performance OR 30% lower power vs 7nm. Peak clocks on the 6850U are 10% higher than the 5xxx generation, AND it is on N6 instead of N5, which offers a significantly smaller advantage than what I stated. N6 offers an 18% reduction in area vs N7 while N5 offers a 45% reduction in area, so you can do the math from there.

https://www.anandtech.com/show/16236/asrock-4x4-box4800u-ren...

https://www.notebookcheck.net/R7-5800U-vs-E-2186M-vs-R7-PRO-...


Why are you using peak instead of average? The average power over the test is what matters, not the instantaneous spikes. And did you miss the whole "configurable TDP" thing? The OEM gets to change the power targets; they aren't fixed. Saying the 4800U in a NUC uses power X, therefore all 4800Us use power X, is flat-out wrong. That's not how it works - different usages will set different power limits.


> Saying the 4800U in a NUC uses power X, therefore all 4800Us use power X, is flat-out wrong.

If the TDP is only configurable up to 28w, but they're pulling almost 70w total system power (when there are simply no other real power consumers in the entire system), then there's a serious problem with their default "average" power usage not respecting TDP in any real way.

A lot of AMD laptops have historically dropped 30+% of their performance when you unplug them (a notable example being the MS Surface 4s) so they can look better than they really are. Reviewers tend to be lazy about this: they test performance while plugged in, then do a rundown test, to push out a "review" within the couple-day window in which they can still make ad money.

Comparing to other Zen 3 chips (I've gone into that elsewhere in this thread), top-binned EPYC uses 4-4.5w per core to do 2.45GHz base clocks (that's 32-36w for 8 cores at ~2.5GHz while they claim base clocks of 2.7GHz). If you lower that max TDP even further, those clocks go down even farther.

I disliked when Intel started going with TDP as a recommendation rather than a limit 15 years ago. I disliked when AMD followed suit so they wouldn't seem to be at a disadvantage (well, at less of a disadvantage as they were getting crushed at that time). I dislike the M2 Air which throttles back rather quickly.

The only good thing in recent times has been Intel's 12th gen adding a peak power consumption metric (no doubt because the difference between that and their normal TDP would have basically guaranteed a lawsuit).

> Why are you using peak instead of average?

Many (most) benchmarks are short-lived, but people infer long-term performance from these.

If a chip is getting N marks at 60w, long-term performance where power adjusts down to 30-40w isn't going to be anywhere close to that number.

M1/M2 when actively cooled can run at peak clocks/performance all the time. This means that performance expectations are in line with what reviews show. This is good for consumers (and they need to be more upfront about the air thermal throttling).


> 4. They are using Asahi Linux on the M2.[2]

To you and all the rest saying the same: do you really think that when trying to judge a processor's speed you should run benchmarks with different OSes? So why not benchmark Lenovo + Windows vs MacBook + macOS? Those are the most-sold configurations for both laptops, no? To me, it makes sense to compare the raw HW with the same layer of software, unless there is some big glaring missing feature in Linux inhibiting the use of some HW capability of the M2 processor.


It's a Linux website. They are not interested in the performance of the CPU with Windows or Mac OS.

As far as the mismatched CPU and whatnot... These are the same sort of CPUs that you can get in the same form factor at around the same price.

So the comparison is valid.

The only thing suspect is the 8 vs 16GB of RAM. But since these are CPU-bound benchmarks, I don't know how much of an actual difference that will make.


> They are not interested in the performance of the CPU with Windows or Mac OS.

Phoronix regularly benchmarks across OSs.

Type "Windows" into the search bar in the top right. Example: https://www.phoronix.com/review/windows-linux-mid22adl

The memory difference might be addressed with the `mem=8G` Linux kernel boot parameter.
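On a GRUB-based distro that could look something like this (the "quiet splash" defaults are just placeholders):

  # /etc/default/grub - cap usable RAM at 8GB to match the Air
  GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=8G"

  # then regenerate the config (command varies by distro)
  $ sudo update-grub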


Usually these comments come from people who are angry that their preferred choice didn't win the benchmark and who try to find a rationale to invalidate the finding.

Considering the context, they probably wanted Apple to clearly win, not just be basically on par or slightly behind.

Often these people aren't even aware of this themselves.

I found the stats to be missing the most important datapoint for laptops, however: performance per watt. If performance/wattage is irrelevant, you really don't need a laptop after all - and they're even using the low-energy/underperforming Ryzen chip...

I know they couldn't get the data from the drivers on Linux; unfortunate nonetheless.


Raw hardware comparison is interesting, but it’s also valid to want to test overall system performance, which Apple have demonstrated is a collaboration between hardware and software (particularly for battery life).


OTOH 8GB RAM is Apple's own self-inflicted limitation. Why should the other CPU be given artificially less RAM only because Apple chose a design that limits how much RAM their chip can use?


M2 is not limited to 8GB. 8GB is only the base Macbook Air M2 model. You can buy one with 16 or 24GB. Most people who buy 8GB Macs these days complain about performance, while people with 16GB praise it.


That is, however, an artifact of how Apple is pricing the models. The price difference between 8GB and 16GB is $200. This is absurd -- there's no way the added cost in RAM is more than $50 for them, and due to their market position and massive volume, it's probably quite a bit below that.


ThinkPad X13 Gen 3 starts at $1919. Macbook Air M2 with 8GB RAM is $1199. Macbook Air M2 with 16GB RAM is $1,399. So even at 16GB it's still a lot cheaper than the Lenovo laptop used. Price doesn't explain the disparity in the hardware used in the benchmarks.


The consumer edition of the same laptop is $1200 https://www.newegg.com/p/1TS-000E-10RU3

(The 6800u ~= 6850u unless you're deploying them en masse in the enterprise, in which case there are extra management features in the Ryzen Pro that the Mac doesn't have.)


So, you're comparing a lower build quality on the Lenovo side against the Apple? Yeah, of course if you start cutting costs out of other places, the price comes down, that's not a particularly interesting or insightful point, you've just loaded the comparison.


Looks like they don't have pictures of the current consumer model yet, but looking at last gen:

https://p4-ofp.static.pub/fes/cms/2022/04/20/j7rnbsqww5svxas...

https://p2-ofp.static.pub/fes/cms/2022/03/17/14ph1afu8vmtli8...

The better build quality amounts to black paint, a TrackPoint, and clicky buttons on the trackpad.


Also seems to be 16:10 instead of 16:9, and have a different set of ports. Honestly I don't see what those two laptops have in common beyond both being Lenovo.


No one is paying Lenovo's list price. In fact, there is a 40% off coupon right there on the page, bringing that $1919 down to $1151. You may even be able to get deeper discounts through your employer's corporate discount plan.


The RAM on these chips is integrating, making your statement not-even-wrong.

Furthermore, price matters when comparing price to performance, not when, say, nerfing a benchmark by running one chip with twice as much RAM.


> The RAM on these chips is integrating,

This statement is not even wrong, in the sense that it is nonsensical. If you intended to say that the RAM is integrated on chip, that is incorrect. Apple marketing made that statement early on, but teardowns very clearly show that the RAM is soldered on package, not integrated on chip. This is still a leap forward in power from other common devices, but a much smaller one than integrating the RAM on the chip.

Furthermore, the RAM on M2 is bog-standard SK Hynix LPDDR5. 2x 32gbit chips for the 8GB model, 2x 64gbit for the 16GB model, and 2x 96gbit for the 24GB model. I can buy the exact same chips that are used in 16GB Macbook Air for ~$70 each, and I am not Apple.

If the SKUs had the same level of markup, the price difference between 8GB and 16GB would be less than $50.


> I can buy the exact same chips that are used in 16GB Macbook Air for ~$70 each, and I am not Apple.

Makes me think of licensed trades. Anything a plumber or electrician installs for me will be significantly marked up from what I could buy it for myself.


It feels weird that 8GB is the entry level for a $1000+ ultrabook in 2022. I remember 4GB being standard 10 years ago. That whole "doubling every two years" thing really tapered off.


My Zenbook UX305FA from 2015 with a dual-core passively cooled Broadwell Core-M processor had 8 GB of memory, and I think it was $799.


And there I am with a Lenovo sporting 64 GB of RAM...


I didn't write my comment in the spirit of entering a competition over who can get more RAM, but in the spirit that the Phoronix benchmark compares completely different computers - one with 8GB, the other with 16GB of RAM - and gives it a title suggesting that it's a comparison between chips, when the chip with 8GB isn't limited to 8GB. They could have gone for the 16GB option and made the comparison more honest. The chip doesn't have any such limitation.


The apps they use for comparison are limited by CPU performance, not memory size, so 8 vs. 16 shouldn't matter here. Unless you can point at something that's likely losing due to swapping?


1. The RAM doesn't matter in any of those benchmarks.

2. The Ryzen is 6nm; the M2 is 5nm.

3. Ryzen specs list TDP at 15W-28W.


TDP is a pretty fake spec, usually. I'm pretty sure the processors can use twice as much power.


This is kind of true for desktop and maybe high end laptops but here we're talking about a chip for ultrabooks. Those laptop designs don't have a lot of headroom for cooling and power delivery. The listed spec is accurate.

Note that it is possible to basically overclock the chip and raise this limit up to around 50W for the combo GPU/CPU if you really want to. This requires enhancing the cooler as shown in [0] and no OEM does that out of the box.

[0] https://www.youtube.com/watch?v=-6D6NT5rGyk


Are we sure this is actually 4 cores and not a reporting error? AFAICT with some surface Googling Apple doesn't sell the M2 with cores disabled does it? Also 4 cores 8 threads is suspicious given that AFAIU the M2 doesn't have SMT. So you'd have 4 big and 4 little cores for 8 total. (at which point I think that part is fair. This is a chip to chip, not an uarch to uarch comparison)


They're just ignoring the efficiency cores. They are definitely there.


> Apple M2 (Linux/8gb/4c)

This is an error on Phoronix's part. The M2 has 8 physical cores with no SMT, which makes it 8c/8t. You can see all 8 cores in the die shots: https://semianalysis.substack.com/p/apple-m2-die-shot-and-ar...


>The M2 has 8 physical cores

4 performance cores and 4 efficiency cores.


Correct. I should have specified in my comment


While it technically has 8 cores, you can only use 4 at a time, so it's in most cases more correct to think of it as having 4 cores.


This is not the case. All 8 physical cores of the M1/2 can be utilized simultaneously, since each is a full physical core, independent of the others. The statement about only being able to use 4 at a time is true of threads, such as those found on SMT-enabled processors that have core/thread counts like 4c/8t. In those systems the threads share logic and decoding circuitry with each other, which would make your statement correct. You can see this in the M2 die shots; both banks of 4 cores are in separate areas of the die. https://semianalysis.substack.com/p/apple-m2-die-shot-and-ar...

(edit): Granted, 4 of the cores are lower-powered "efficiency" cores, but this is by no means equivalent to having just 4 cores.
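Easy enough to sanity-check on the machine itself (a sketch; the expected count assumes the Asahi kernel exposes all eight cores):

  # count the cores the kernel sees (should print 8)
  $ grep -c ^processor /proc/cpuinfo

  # pin load on all of them and watch utilization in htop/top
  $ stress-ng --cpu 8 --timeout 30s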


The hardware doesn’t have such a restriction, is there software/kernel issue you’re referring to?


No, they're thinking of how older ARM E-cores worked.

(aka big.LITTLE which is some strange branding.)


First thing I thought as well.

I don't see any value in this comparison. These processors are consuming vastly different amounts of power. Not only that, but they are using a passively cooled Macbook Air (ultraportable) and a nearly twice as thick, actively cooled Lenovo X13.

They are not even price-equivalent: the base price of the X13 is $1,800, compared to $1,200 for the M2 MacBook Air. And they aren't even comparing base model to base model. They are comparing a base model MacBook Air to a ~$2,500 X13 Lenovo that has been upgraded with additional memory and the faster processor.

So the Lenovo is twice as expensive, uses twice as much power, is actively vs passively cooled, and is twice as thick. What are we comparing?


> 1. The Ryzen setup has twice the ram

> 2. The Ryzen setup has twice the cpu cores

Why should this be held against the Ryzen? It was the competitive advantage that they had against Intel, and if Apple chooses not to compete, then it should be to their disadvantage. It's as ridiculous as this hypothetical argument:

> 1. The M2 is arm64.


The MBA M2 has been shown to throttle itself vs the MBP running M2.

So, it's not a comparison of the processors. If they selected them based on price point, form factor or something else, then that should be stated.


The M2 Air with the smaller SSD configuration (tested here) is also at a big performance disadvantage compared to all the other models with bigger disks[1]. AFAIK it is due to the low-end config coming with a single physical chip for the disk, whereas other configs have two and can do dual-channel. It also has no active cooling - apparently more performance can be squeezed out of it by disassembling it and applying nerdcore thermal paste.

- 1. https://9to5mac.com/2022/06/26/m2-macbook-pro-slower-ssd/


But it’s still a fair comparison in that this is the performance you will get.


As the owner of an M1 and M2 machine, I am really impressed with Ryzen.


It's not even running the same compiler (x86 is a GCC 12.1.1 snapshot, ARM64 is GCC 12.1.0), so even before you get to some of these tests having hand-optimised x86 codepaths, you're already potentially skewed.

No, I don't expect it accounts for the differences, but there's a ton of procedural flaws in this benchmark that make using the results for any purpose very tricky.

The other one that really concerns me is that the M2 was tested in the MBA. The MBA is thermally constrained and throttles. They have zero data on what temperatures it was running at (because the sensor support isn't implemented yet!), so they don't know when/if the MBA was throttling in each test. The M2 MBP would have been a better comparison.

I'm wary of trying to draw any useful conclusions from this article about the M2's general performance vs the new Ryzen 6xxx lineup, the methodology is just too flawed.


I'm no fanboy (but, admittedly, a Mac user), but this was a pretty rough comparison.

Linux can't make use of half of the specialized chips and functions in the M2.

I suppose, if someone needed an article to learn that M2 hardware isn't needed for linux, you have your confirmation. But was that ever in question?


What do you mean by specialised chips and functions? The tests here are running pretty generic code where there's no M2 magic that could speed them up. The unsupported hardware in Asahi currently is just extra hardware, not the basic CPU operation.

Then again, there are some features supported by Asahi but not macos https://twitter.com/marcan42/status/1556297295838453760


My understanding is that using the Xcode compiler and libraries means your code could very much wind up using co-processors that other compilers and codebases aren't aware of and aren't able to take advantage of.


Absolutely not. The M architecture, with its Firestorm and Icestorm cores, is basically a big.LITTLE architecture. Some cores are made for low-power, energy-efficient work at a lower frequency, while others are made to run at full blast. There are no magic coprocessors to run on.

The goal of the benchmark is to run the two exact same binaries, not to make one that uses all of Apple's ARM extensions and one with AVX-512 to see who can go fastest in the best conditions.


This isn't strictly true. Apple has other co-processors and some custom instructions (eg, matrix instructions). If you use their compiler and code, they can offload some calls to these while if you used another compiler and different libraries, these co-processors will be completely unused in all cases.


Using clang won't get you any of those things. Autovectorization is mostly mythical.

Calling into a system BLAS library might, but most benchmarks wouldn't do that.


>> The Ryzen setup has twice the cpu cores

That may explain why it crushes Apple on some benchmarks. I wonder what causes Apple to crush Ryzen on other ones. This was the strangest comparison I've ever seen, with large wins going to both contenders.


With cross-arch benchmarks like this, you find a lot of software that has hyper-optimised x86_64 implementations does better on AMD/Intel because of that.

It skews the numbers and makes a good comparison really annoying to do, unless you do your own comparison on your own workloads.


The exact same binary saw crushing victories in different directions, though, depending on what it was testing. Look at the GNU Radio results, for example.


It's not the exact same binary - that's the point. It is different instruction sets, with potentially different optimisations. They're even compiled with different compilers - what looks like a GCC 12.1.1 snapshot for x86, and GCC 12.1.0 for ARM64.

It might be the same C, it might have hand-coded assembly for important bits in x86 but not in ARM, or vice versa, it might be just one specific algorithm executes particularly well on one CPU rather than the other, it might be that the slightly different version of GCC did a new optimisation.

edit: they're also comparing an actively cooled laptop to a passive one - so you would expect M2 to throttle in longer benchmarks, for extra distortion.

The methodology is flawed. It lets you cherry pick some individual results, and if your particular usecase is in there, great. But you don't know what state the M2 was in when a test started (eg if it was already hot and throttling, etc).

It's basically impossible to draw any useful generalised conclusions from these benchmarks.


Picking individual use cases is exactly how you should be reviewing benchmarks. You should be picking a laptop based on what you do. If you don't do certain things, why would you care that some other CPU is faster at what you are not doing?

If you're looking for a "general" comparison, there is none. General usage of a laptop computer for who? What do you consider general usage? What exactly are you looking for? For 99% of what people do, they won't even be able to tell apart a Celeron from an M2.

Why do you think we have things like discrete GPUs? You buy certain hardware for certain tasks.

Or do you just want to say that your CPU is better than someone else's? Who gives a shit? That's really all you get from a "general" performance review, a bunch of vague crap.

This review was great. It shows what Linux users can expect under certain workloads on two laptops that cost the same amount of money. You can then decide which one is best for you as a Linux user who may be interested in an M2.


Drawing general comparisons is exactly what this article tries to do at the end, that's one of my concerns with it.

They don't know what state the M2 was in at the start of each test (because the hw monitoring support isn't there yet), and this is a system that is known to thermally throttle, so the individual results are potentially flawed too.

The laptops don't cost the same amount. This is comparing an $1100 laptop to an $1800 laptop.

FWIW, I don't actually care which is faster (I own both AMD and Apple hardware that I use for different things) - I just think the review is flawed.


> It's not the exact same binary - that's the point

I think you misunderstood. "GNU Radio" on M2 was sometimes screamingly fast, and sometimes embarrassingly slow, depending on which test it was.

I'm not saying that "GNU Radio" was the same on M2 and AMD, obviously it's not. I'm saying the performance results for GNU Radio specifically were insanely inconsistent - M2 won by a stupidly huge amount in one of the GNU radio results, and AMD by an equally absurd margin in the other GNU radio results.

Same "GNU Radio" compiled binary on the respective platforms, huge swings in performance depending on what that binary was doing.

There were a couple other similar examples, where performance for the same program swung wildly depending on the exact task.


That's not necessarily unexpected and still follows most of what I've said - if one particular algorithm is hand-optimised and one isn't, you will see wild swings - especially if one has been hand-vectorised and one is failing to auto-vectorise during compilation.


Without examining the behavior of the specific wins my guess is memory. The M1 and M2 have amazing memory bandwidth. That's something their design needs as clock for clock they would be massively pipeline starved if they didn't because of how wide the apple design is.


Seems to me the real story here is that even with those disadvantages, the M2 is very much holding its own in the comparison. It's not blowing away the Ryzen, sure, but neither is it being left in the dust.


The RAM difference probably isn't material to tasks which use less than the 8GB involved, the power envelope is part of the difference between the parts involved, and the M2 MacBook Air has 8 cores. Not sure where they would have bought hardware that never existed, like a 4c M2.

If one wanted to know the difference in performance per watt or per core, it would be useful to constrain those variables, but that isn't the most useful question for measuring performance.

Honestly I think the important takeaway is that both chips are in the same ballpark.


Sorry but this is completely fair. Despite the Ryzen system having far more resources on paper, they're going to be pretty similar on price and on silicon area.

I don't see how Asahi is going to limit performance in those benchmarks.

TDP is a complete garbage metric and doesn't mean anything.

I think this is a more than fair comparison. They are both the high end, low power chips.


What is the state of compiler optimization for Apple Silicon under Linux? I'll wager it's far less mature than amd64. Indeed, the whole point of Apple Silicon is its deep integration with macOS. These benchmarks are interesting, but of little practical value. No sane person should make a system purchase decision based solely on this.


This is a straightforward, real-world use case for two systems. While memory could perhaps have been set equal, even doing so would not make it all even; you are only focusing on capacity while ignoring memory speed/timings and channels.

If Apple can advertise their findings and compare to desktop CPUs, then the TDP debate is completely pointless. Moreover, each chip maker has their own way of calculating what they call "TDP".

To the intended reader (Linux users), this is a great guide for deciding which option works best from an actual performance standpoint.


> The Ryzen setup has twice the ram

CPU micro benchmarks aren't typically affected by memory quantity, especially at these levels.

> The Ryzen setup has twice the cpu cores

The Ryzen chip has eight homogeneous cores; the M2 has eight heterogeneous cores. [0] All else being equal, having high-performance and low-power cores in the same package _should_ result in better performance and efficiency.

I do note that the machine specs in the article show the M2 as having 4c/8t, so I wonder if there's an SMP issue in Asahi on the M2 currently? They don't make it clear if this is the case, or why all eight cores aren't being reported/used.

Doing some preliminary searching, it seems the M2 doesn't employ SMT, so perhaps this is misreported, or Asahi does something weird with scheduling.

> The Ryzen setup has 3x the power envelope (in watts) of the M2

> Ryzen 46w (not sure if this includes the GPU)

The 6850U is reported by the manufacturer as having a configurable TDP of 15-28W [1]. I believe this is total package power usage, so using either the CPU or GPU would allow better performance than maxing out both, and this probably applies to both chips. Some board/laptop manufacturers will also extend these limits, especially when plugged in, and design their cooling solutions to handle the increased power usage accordingly. Intel, for example, has published a TDP of 125W for some chips that can consume up to 230W while boosting [2]. I'm curious what both of these pull from the battery under load. I would expect the M2 to be more efficient, being manufactured on a newer, denser node.

> They are using Asahi Linux on the M2.

Any other comparison would be wholly meaningless. You (probably) can't run macOS on a Ryzen 6850U. Running benchmarks built by different compilers to run on different kernels wouldn't really be comparing the same thing.

In the end, this benchmark, while interesting, measures what it measures. The results should be taken with a grain of salt, and should not be extrapolated to predict the performance of other applications on these platforms.

[0] https://en.wikipedia.org/wiki/Apple_M2

[1] https://www.amd.com/en/products/apu/amd-ryzen-7-pro-6850u

[2] https://www.anandtech.com/show/16495/intel-rocket-lake-14nm-...


Do those four affect the results of the benchmarks? What color was the laptop case?


One quick reason not to flash an existing OTS router is RAM.

IMHO most consumer routers need to be reset because they have memory leaks that reduce the amount of memory available for the routing table. A power cycle becomes the common fix.

Meanwhile, 4GB of RAM on pfSense can probably power an office of 50 engineers with 50% of its RAM left available (and no swap)... and it will never need to be restarted.


Looks really cool, just a friendly reminder to specify a license (or no license)


Thanks, I've just added MIT licence :)


This point is very interesting. I was watching a PG interview on YC's resources for startups.

An audience member asked about how to get pricing right early. Part of PG's answer (from autogenerated CC):

> You can always change your prices later, though. If you want to lower your prices, no one's gonna complain. And if you want to raise your prices, you just grandfather your existing users - which, if you have exponential growth, will always be a tiny subset of your total users - and then no one will complain about that either.

https://www.ycombinator.com/library/85-a-conversation-with-p...


This is so obvious that I’m surprised more companies don’t do it - even Google falls into these blunders.


To experience the browser in a modern browser, check out: https://html5zombo.com/


Had to edit the title to fit... original title: > Ford surprises F-150 Lightning owners with accessory that can recharge stranded Teslas

