Snapdragon Oryon, faster than Apple's M2 chip at 30% less power (twitter.com/iancutress)
69 points by fariszr 7 months ago | 58 comments



I mean Snapdragon's chip won't ship till mid next year. Apple's chip is over a year old. Not sure this is that much of an accomplishment. We'll see how it competes against M3.

Hopefully Snapdragon laptops will have user-replaceable RAM and drives.


This is in line with their history of competing with the A-series: they’ve always promised that next year’s flagship will be as fast as last year’s Apple chip.

But even the “old” M1 is still amazing, so if this pans out it will be awesome. So far, though, Qualcomm has been in the business of quantity over quality, and they’ve always cut some corners that made their chips less than high-end. Are they serious this time?



> Not sure this is that much of an accomplishment.

Man, some of you can be ruthless


Alright, it's an accomplishment, but I think it's important to point out that by the time this ships, the product it's being compared to will have been out for two years, and Apple will probably have released the next generation six months earlier.


Even with the mandated multitude of phone PMICs?

https://www.semiaccurate.com/2023/09/26/whats-going-on-with-...


What a way to shoot a great product in the foot.

How does a "mandate" like that work? What if someone uses a different PMIC? Or, what if they include one "mandated" PMIC for boot, then offload to a sane one a few hundred ms later?


> How does a "mandate" like that work?

A couple of different levels of mandate come to mind, some of which may overlap:

1. Qualcomm won't sell you the SoC without the appropriate number of their PMICs to support it.

2. Qualcomm's reference designs only include their PMICs, and the datasheets don't tell you what electrical requirements those parts are satisfying.

3. The reference firmware from Qualcomm expects those PMICs to be present, and will panic if they aren't.

4. Same as #3, except it's in the boot ROM.

5. Part of the power supply is internal to the SoC, and the matching Qualcomm PMIC is required to complete it.

The SemiAccurate article suggests that #1 and #2 are definitely the case, and #3-5 may be true as well.


Supporting a wide variety of PMICs in the boot ROM is quite a bit of work, and can be an attack surface. Hardcoding PMIC details in the boot ROM sounds nice. If I worked on Qualcomm boot and they did 1-3 but still made the boot ROM have PMIC flexibility, I would be annoyed. Though if you work on boot you’re always a little annoyed.


The article mentions the OEM reactions.

Could the OEMs pay for the QCOM PMIC but use a different PMIC in their laptop?

QCOM says no because there are proprietary protocols between the SOC (the CPU part) and the PMIC.

The QCOM PMICs (multiple are needed to handle the workload on a laptop) would also require a more expensive PCB.

The proprietary SOC-PMIC protocol would preclude the use of a non-QCOM PMIC.

The article states that QCOM is giving the OEMs money to placate them, more than the actual revenue from the PMICs. But that won't help the power usage of the system.

So look carefully at the power usage benchmarks for the new QCOM-powered laptops.


I’ve been part of unrelated projects that involved SoC part selection and negotiation with… large chip vendors. They always pull this shit, although this case seems particularly annoying.

They’ll tie their SoC to some other random part, invent some proprietary protocol to brand it with, and then get upset when you say you don’t want the extras. I’m sure there are some technical advantages to their solutions, but they’re just as often bloated and not appropriate for the design.

We all know that chip vendors don’t exactly write the best code; that’s expected. But inviting even more proprietary drivers or even blobs(!), for a solution that sales says will just work and solve all our problems, is usually a risky move. Heavily modifying or reimplementing boot code and drivers is an inevitability, and no engineer who’s been through that is going to invite more blobs into their codebase if they can avoid it.


https://www.tomshardware.com/news/qualcomm-snapdragon-elite-...

“Over the summer, reports popped up that Qualcomm was requiring the use of its power management integrated circuits (PMICs) with the Oryon-based processors. Kondap said that Qualcomm's PMICs are "an option for OEMs to choose," and that the company has been providing them for "many years now."”


Apple Silicon proves the importance of integration and market power. Seems a bit absurd to complain about a chip that mandates a PMIC when the alternative is a chip that mandates an operating system.


I'd agree, but QC isn't in the business of selling laptops directly to the consumer. If OEMs decide the platform isn't worth it for them, there's not much Qualcomm can do.


Not yet.


I’ve not seen anything saying these new platforms won’t be hard locked to Windows as Microsoft would prefer.


I think the Ars article is a better link: https://arstechnica.com/gadgets/2023/10/qualcomm-snapdragon-...


I cringe a little every time I click a Twitter/X (identity crisis) link. Ars is a solid resource.


For single-threaded performance. I would certainly be interested to see a more thorough benchmark across categories like multi-threaded perf, GPU perf, etc.


One thing the presentation highlighted in contrast to Apple is how dependent Qualcomm is on Microsoft, Google, and the smartphone/chromebook/PC manufacturers. Looking forward to seeing this thing in the wild.


Although the performance numbers are impressive, especially for PC, yesterday's presentation was a bit of a disappointment. You have a chip that you claim smokes the competition in performance, yet you don't show any demo of it, and instead focus 90% of the presentation on AI capabilities.

For example, just partner with one of the popular game companies, let's say Riot Games, natively compile one of their games, and show side by side against a competitor's CPU how much more FPS Oryon gets, or how much longer the battery lasts.

It doesn't even have to be a game; they could have shown a video export or a compile of a big codebase...


What are they using to benchmark single-threaded performance? I don't think it's good to state that without clarifying what task is being run. You could have a benchmark that just increments a number in a register. Since add/increment runs in a single clock, a high-frequency CPU (Pentium 4 at 3.8 GHz?) would win over a wide core with great memory bandwidth (M2 at 3.7 GHz). Technically, that's greater single-thread performance, but not really applicable to the real world.

If it's over a variety of tasks and/or a task that's similar to real-world computation, kudos to them!
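
To make that concrete, here is a minimal sketch (my own illustration, not anything from the benchmarks in question) of the kind of degenerate microbenchmark being described. It measures little besides clock speed, so a narrow, high-clocked core can "win" while a wide core's caches and memory bandwidth sit idle:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Degenerate "single-thread benchmark": increment one value over and over.
       Throughput is roughly one increment per clock, so the score tracks clock
       frequency and ignores everything that matters for real workloads. */
    int main(void) {
        const uint64_t iters = 1000000000ULL;  /* 1e9 increments */
        volatile uint64_t x = 0;               /* volatile keeps the loop from being optimized away */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (uint64_t i = 0; i < iters; i++) {
            x = x + 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f million increments/s\n", iters / secs / 1e6);
        return 0;
    }

A meaningful single-thread score averages over varied workloads (compression, compilation, image processing, and so on), which is what suites like Geekbench try to approximate.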


Just Geekbench, not much else.


Sign me up for a ThinkPad X series with this puppy and NixOS.


https://news.ycombinator.com/item?id=35262156 (aka https://www.lenovo.com/us/en/p/laptops/thinkpad/thinkpadx/th... ) may interest you, although just like every "WinModem"-style repurposing of Windows-centric hardware, it's not as simple as "wipe disk, install Linux, profit" :-(


Those run the old SoC, which is something like three times slower than the one announced.


Apologies, I didn't mean to equate that machine to an alleged M2-class chip. But unless I have misunderstood the situation, no one can buy one of those M2-class chips yet either, whereas (supply chain silliness aside) one can certainly convert USD $1000 into an X13s on their doorstep.

I am very much in the jaded camp of "tomorrow, all beer is free" when it comes to product announcements that no normal person can purchase


I would love something like an M2 Steam Deck, and if what the presentation showed is true, I could see it happening. I wonder if this chip has hardware acceleration to translate x86/amd64 to ARM instructions, and what the performance cost is if that's the case.


GPU performance would likely be terrible. Combine that with the CPU translation and it’d be pretty bad. Better to just wait for the AMD chips to get a process shrink.


The M2 Macs already do great with Apple’s Game Porting Toolkit, which is basically a repackaged version of WINE’s API translation technology and lets you run Windows games.

Diablo 4, Baldur’s Gate 3, etc. run great with it.


No need for BG3 to run like that; it has a macOS-native version.


I'm not sure how well that would work since nearly every game you play on it would need an x86 translation layer that will eat up all the performance gains.

The Deck is successful as it is because it requires very little effort from developers to adequately support. As long as they implement full controller support and avoid certain problematic middleware, their plain old Windows builds will most likely run on it just fine without modification. Once you start asking them to deploy separate builds on a different architecture, the whole thing will fall apart.


If that's true, then that's great!


Very excited about the AI stats: a 7B model running at 30 tokens/second, 13B+ parameters running on device, first token in 2.2 seconds. It seems we'll have more powerful AI models running on devices soon.
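
As a rough sanity check (my own back-of-envelope, assuming 4-bit quantized weights and that decode speed is limited by reading all the weights once per generated token), the implied sustained memory bandwidth is at least plausible for a laptop-class SoC:

    7e9 params x 0.5 bytes/param (int4)   ~= 3.5 GB of weights
    3.5 GB/token x 30 tokens/s            ~= 105 GB/s of sustained reads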


I would love a ThinkPad with this in it and Windows 11 on ARM.

If it worked as well as a Mac does and had the same or better battery life.


They already sell the X13s: https://www.lenovo.com/us/en/p/laptops/thinkpad/thinkpadx/th...?

I've been sorely tempted to pick up a refurb off eBay; they're only about $500. Supposedly it can run Linux OK too.


They run a bit hot and a bit slow, but it's great that Lenovo is keeping an eye on Snapdragon. Hopefully, the new chip will address those limitations.

Right now, AMD ThinkPads are better. They are fast, run cool, and have pretty long battery life. Still far from the >18 h you can get from an M2, but they can do ~11 h easily.

Hopefully Snapdragons can become a compelling alternative. Personally, I think they need to be much better than x86_64 to be worth the switch, as some software does not run so well on ARM.


I daily drive a T14s with a Ryzen 5 CPU. It's easily the best non-ARM laptop I've used.


The CPU race on mobile will be interesting to see.

Let's see how this matches up to the M2/M3 and the AMD 7840U-series chips too, in terms of performance vs. power draw.

People will soon be spoilt for choice. No more noisy laptops.


Where does it say 30% less power?



This implies 30% less power while matching peak performance, not faster performance as the title says.


The bar chart on the same slide is what causes the confusion. It's really nice sleight of hand.


God, I miss the days when we'd have actual press releases and news stories, not just an unreadable chain of tweets with crappy pictures of powerpoints...


There are links in the comments here to actual "reporting" by e.g. Ars Technica.

If you only look at Twitter, Twitter's all you're going to see.


I don't use Twitter at all, but the Ars link wasn't there when I first came upon this thread. Thanks for pointing it out!

https://news.ycombinator.com/item?id=38005867

It is indeed much more readable than Twitter.


Soon you'll remember this time as the good old days, before all technical documentation was hallucinated on X by AI.


Thanks.

Couldn't see the thread, no account =/ Twitter is so broken these days...


Nitter works again at the moment (until Elmo breaks it again).

Add-on to redirect automatically: https://addons.mozilla.org/fr/firefox/addon/nitter-redirect/


Thanks! Nitter doesn't seem to go to the right post though.

https://twitter.com/IanCutress/status/1716902706621866487 takes you to the specific slide, but https://nitter.net/IanCutress/status/1716902706621866487 takes you to the top of the thread.

Still, a lot better than enshittified-twitter. Similar extension for Chrome users: https://chrome.google.com/webstore/detail/nitter-redirect/mo...


It does go to the right post, but adds context on top (the selected tweet with the bigger font size is the right one, but previous tweets in the thread are shown before it). I do agree that this behavior is confusing, though.


Oh wow, I never would've noticed that! Good catch.

They should've just added an anchor and made the browser scroll to it.


My thought exactly when I commented; I should open an issue on their GitHub.


I don't believe it.

If it were true, it would have been reported elsewhere.


It’s part of their keynote https://www.youtube.com/live/h_vh7_n_OPs?si=IShsKlKCKpBUlD6e

Though there’s some sleight of hand: they appear to do that while letting two of their cores boost really high, counting the single-threaded perf of the boosted cores while claiming efficiency for the whole SoC.

So it’s true and not true at the same time.


Just like every CPU benchmark. You just do certain tests and only reveal the ones where your chip is faster, and you put caveats on everything. This is how every new chip has been since the dawn of time. No single chip is faster at everything than its competition...

That's why you have to look at extensive, detailed benchmarks and see which ones are closest to your workload and pick what's right for you.



There we go...



