ARM Mac Impact on Intel (mondaynote.com)
324 points by robin_reala 11 months ago | hide | past | favorite | 525 comments

Intel’s real money is in high-end CPUs sold to prosperous Cloud operators, not in supplying lower-end chips to cost-cutting laptop makers.

I keep searching for "Graviton" in these thinkpieces. I keep getting "no results found."

Mac ARM laptops mean cloud ARM VMs.

And Amazon's Graviton2 VMs are best in class for price-performance. As AnandTech said:

If you’re an EC2 customer today, and unless you’re tied to x86 for whatever reason, you’d be stupid not to switch over to Graviton2 instances once they become available, as the cost savings will be significant.


While Graviton is impressive and probably an indication of things to come, you can't outright use "Amazon rents them cheaper to me" as an indication of the price performance of the chips themselves.

Amazon is exactly the kind of company that would take 50% margin on their x86 servers and 0% margin on their Graviton servers in order to engineer a long term shift that's in their favor - the end of x86 server duopoly (or monopoly depending on how the wind is blowing).

I don't feel as sure about this; there's very little evidence that Amazon is putting a "50% margin", or any significant percentage, on x86 servers. Sure, they're more expensive than, just for comparison's sake, Linode or DigitalOcean, but EC2 instances are also roughly the same cost per-core and per-GB as Azure compute instances, which is a far more accurate comparison.

Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of nanosecond-scale latency fiber run between its AZs and out to the wider internet. Networking costs scale non-linearly with quality; AWS doesn't overcharge for egress, they simply haven't invested in building a solution that sacrifices quality for cost.

Amazon really does not have a history of throwing huge margins on raw compute resources. What Amazon does is build valuable software and services around those raw resources, then put huge margins on those products. EC2 and S3 are likely very close to 0% margin; but DynamoDB, EFS, Lambda, etc. are much higher margin. I've found AWS Transfer for SFTP [1] to be the most egregious and actually exploitative example of this; it effectively puts an SFTP gateway in front of an S3 bucket, and they'll charge you $216/month + egress AND ingress at ~150% of the standard egress rate for that privilege (SFTP layers additional transfer charges on top of the standard AWS rates).

[1] https://aws.amazon.com/aws-transfer-family/pricing/

A 32 core EPYC with 256gb will cost you $176/mo at Hetzner, and $2,009/mo on EC2.

Obviously it's on-demand pricing and the hardware isn't quite the same, with Hetzner using individual servers with 1P chips. Amazon also has 10 Gbps networking.

But still, zero margin? Let's call it a factor of two to bridge the monthly vs. on-demand gap: does it really cost Amazon five times as much to bring you each core of Zen 2, even with all their scale?

I don't think Amazon overcharges for what they provide, but I bet their gross margins even on the vanilla offerings are pretty good, as are those of Google Cloud and Azure.
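For concreteness, the price gap quoted above works out like this (figures as quoted in this thread; a rough sketch, not a real TCO comparison):

```python
# Monthly prices quoted upthread for a 32-core EPYC with 256 GB RAM.
hetzner_monthly = 176.0    # USD, Hetzner dedicated server
ec2_monthly = 2009.0       # USD, EC2 on-demand

raw_ratio = ec2_monthly / hetzner_monthly
print(f"raw price ratio: {raw_ratio:.1f}x")        # ~11.4x

# Grant a generous 2x allowance for the monthly vs. on-demand gap
# and the hardware differences, as the comment above does.
adjusted_ratio = raw_ratio / 2
print(f"adjusted ratio: {adjusted_ratio:.1f}x")    # the "five times" figure
```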

Very few of AWS's costs are in the hardware. Nearly all of Hetzner's costs are in the hardware. That's why AWS, and Azure, and GCP are so much more expensive.

Margin is a really weird statistic to calculate in the "cloud". Sure, you could just amortize the cost of the silicon across N months and say "their margin is huge", but realistically AWS has far more complexity: the costs of the datacenter; the cost of being able to spin up one of these 32-core EPYC servers in any one of six availability zones within a region and get zero-cost terabit-scale networking between them; the cost of each of those availability zones not even being one building, but multiple near-located buildings; the cost of your instance storage not even being physically attached to the same hardware as your VM (can you imagine the complexity of this? They have dedicated EBS machines and dedicated EC2 machines, and yet EBS still exhibits near-SSD-like performance); the cost of VPC and its tremendous capability to model basically any on-prem private network at "no cost" (but there's always a cost). That's all what you're paying for when you pay for cores. It's the stuff that everyone uses, but it's hard to quantify, so it's tempting to just say "jeez, an EPYC chip should be way cheaper than this".

And, again, if all you want is a 32-core EPYC server in your basement, then buy a 32-core EPYC server and put it in your basement. But my suspicion is not that a 32-core EPYC server on AWS makes zero margin; it's that, if the only service AWS ran was EC2, priced how it is today, they'd be making far less profit than when that calculation includes all of their managed services. EC2 is not the critical component of AWS's revenue model.

Margin calculations include all that. And I suspect most of AWS's marginal cost is _still_ hardware.

The marginal cost of VPC is basically 0. Otherwise they couldn't sell tiny ec2 instances. The only cost differences between t3.micro and their giant ec2 instances are (a) hardware and (b) power.

> The marginal cost of VPC is basically 0. Otherwise they couldn't sell tiny ec2 instances.

That's not strictly true. They could recoup costs on the more expensive EC2 instances.

I have no idea what the actual split is, but the existence of cheap instances doesn't mean much when Amazon has shown itself willing to be a loss-leader.

So what you're saying is kind of the opposite of a marginal cost.

If they are recouping their costs, it's a capital expense, and works differently than a marginal cost. AWS's networking was extremely expensive to _build_ but it's not marginally more expensive to _operate_ for each new customer. Servers are relatively cheap to purchase, but as you add customers the cost increases with them.

If they're selling cheap instances at a marginal loss, that would be very surprising and go against everything I know about the costs of building out datacenters and networks.

>Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of nanosecond-scale latency fiber run between its AZs and out to the wider internet.

I'm skeptical about this claim. Most cloud providers try to justify their exorbitant bandwidth costs by saying it's "premium", but can't provide any objective metrics on why it's better than low-cost providers such as Hetzner/OVH/Scaleway. Moreover, even if it were more "premium", I suspect that most users won't notice the difference between AWS's "premium" bandwidth and a low-cost provider's cheap bandwidth. I think the real goal of AWS's high bandwidth cost is to encourage lock-in. After all, even if Azure is cheaper than AWS by 10%, if it costs many times that for you to migrate all your data over to Azure, you'll stick with AWS. Similarly, it encourages companies to go all-in on AWS, because if all of your cloud is in AWS, you don't need to pay bandwidth costs for shuffling data between your servers.

Right, but that's not what I'm saying. Whether or not the added network quality offers a tangible benefit to most customers isn't relevant to how it is priced. You, as the customer, need to make that call.

The reality is, their networks are fundamentally and significantly higher quality, which makes them far more expensive. But, maybe most people don't need higher quality networks, and should not be paying the AWS cost.

>You, as the customer, need to make that call.

But the problem is that you can't. You simply can't use aws/azure/gcp with cheap bandwidth. If you want to use them at all, you have to use their "premium" bandwidth service.

> Amazon really does not have a history of throwing huge margins on raw compute resources

What? My $2000 Titan V GPU and $10 Raspberry Pi both paid for themselves vs. EC2 inside of a month.

Many of AWS's managed services burn egregious amounts of EC2, either by mandating an excessively large central control instance or by mandating one-instance-per-(small organizational unit). The SFTP example you list is completely typical. I've long assumed AWS had an incentive structure set up to make this happen.

"We're practically selling it at cost, honest!" sounds like sales talk.

Yes. Look at the requirements for the EKS control plane for another example. It has to be HA and able to manage a massive cluster, no matter how many worker boxes you plan to use.*

*Unless things have changed in the last year or so since I looked

It is currently 10 cents an hour flat rate for the control plane. That actually saved us money. Even if you weren't going to run HA, that is still about the cost of a smallish machine to run a single master. I am not sure who, running K8s in production, would consider that too high. If you are running at a scale where $72 a month is expensive, or don't want to run HA, you might not want to be running managed Kubernetes; I'd just bootstrap a single node myself.

You said it yourself: production at scale is the only place where the current pricing makes sense. That's fine, but it means I'm not going to be using Amazon k8s for most of my workloads, both k8s and non-k8s.

If you cut out the Kubernetes hype, you could simply use AWS' own container orchestration solution (ECS), whose control plane is free of charge to use.

> Many people complain about AWS' networking costs, but I also suspect these are generally at-cost.

This seems to be demonstrably false, given that Amazon Lightsail exists. Along with some compute and storage resources:

$3.50 gets you 1TB egress ($0.0035/GB)

$5 gets you 2TB egress ($0.0025/GB)

Now, it's certainly possible that Amazon is taking a loss on this product. It's also possible that they have data showing that these types of users don't use more than a few percent of their allocated egress. But I suspect that they are actually more than capable of turning a profit at those rates.
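A quick sanity check of the per-GB rates above against standard metered EC2 egress (which I'm assuming was roughly $0.09/GB for the first tier at the time):

```python
# Lightsail bundles: monthly price -> GB of included egress (from above).
bundles = {3.50: 1000, 5.00: 2000}
ec2_egress_per_gb = 0.09   # assumed standard first-tier EC2 egress rate

for price, gb in bundles.items():
    per_gb = price / gb
    multiple = ec2_egress_per_gb / per_gb
    print(f"${price:.2f} bundle: ${per_gb:.4f}/GB, "
          f"{multiple:.0f}x cheaper than metered EC2 egress")
```

At those assumed rates, the bundled egress comes out roughly 26x and 36x cheaper per GB than metered EC2 egress, which is the point being made here.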

And I mean, if you just compare the price of Amazon's egress to that of a VPS at Hetzner or OVH, to say nothing of the cheaper ones, you can be sure that they are making margins of over 200% on it for EC2. There's a $4 VPS on OVH with unlimited egress at 100 Mbps.


That's a theoretical maximum of 1 Tb of egress every three hours. So for the cost of 3 hours of egress you can buy an entire VPS with a month of egress, for cheaper. It's insane just how much cheaper it really is.
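The bandwidth arithmetic checks out; here is a sketch of the sustained-throughput numbers for a 100 Mbps link:

```python
mbps = 100                       # OVH VPS port speed from above

# Terabits per hour at full line rate: 100 Mbps * 3600 s = 0.36 Tb/h.
tbit_per_hour = mbps * 3600 / 1e6
hours_per_tbit = 1 / tbit_per_hour
print(f"{hours_per_tbit:.1f} hours per terabit")   # ~2.8 h, the "three hours" above

# Sustained for a 30-day month, in terabytes (divide by 8 for bits -> bytes).
tb_per_month = mbps / 8 * 30 * 86400 / 1e6
print(f"{tb_per_month:.1f} TB/month at full tilt") # ~32 TB
```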

Indeed it is. But rest assured that every provider will shut your server down if it's running at full bandwidth all day.

Sure. But you just need to run your server at full bandwidth for one hour every day to use up 10 times more bandwidth than even lightsail would give you for the price of the server.

I assure you that you can run these servers for one hour a day and no one will bat an eye. I know people running seedboxes at full speed for 10 hours a day or so without an issue - that's 100 times the bandwidth of even Lightsail for the same price.

No, Hetzner charges you <$2/TB. For that price, they'll be happy to route as much traffic as you want. I've never heard of them complaining in these cases.

They have worse peering than AWS but the difference in cost to them is certainly not 100x or more.

Obviously the comment is in reply to having unlimited traffic for a flat fee and not for a price per TB.

This doesn’t add up. Amazon is profitable mostly because of AWS which in turn is profitable mostly due to EC2 and S3.

Clearly they have margins, and fat ones at that.

Yes, by many estimates some of the highest in the industry

"When factoring in heavy depreciation, AWS has EBITDA margins of around 50%."


Building managed applications is where the money is at for AWS for sure, Elasticache is another good example. The beauty is their managed services are great and worry free.

Shameless plug - partly because of the high cost of sftp in AWS, and lack of ftp (understandable), and a bunch of people wanting the same in Azure / GCS, that made us start https://docevent.io which has proved quite popular.

Long term, Amazon is also the exact kind of company that would start making "modifications" to Graviton requiring you to purchase/use/license a special compiler to run ;)

Can you point to a time Amazon did something like that? Not saying they won't; any company can, but it's more likely if they've done it in the past.

Why would they want to shift people to ARM based instances if they weren't more efficient?

Even if it were the case, it is still a saving for the end user, and it is a threat to Intel.

>Intel’s real money is in high-end CPUs sold to prosperous Cloud operators, not in supplying lower-end chips to cost-cutting laptop makers.

And to make another point: Apple isn't a lower-end, cost-cutting laptop maker.

Apple sells ~20M Macs per year. Intel ships roughly ~220M PC CPUs per year (I think recent years saw the number trending back towards 250M). That is close to 10%, not an insignificant number. Apple only uses expensive Intel CPUs. Lots of surveys show most $1000+ PCs are Apple's, while most business desktops and laptops use comparatively cheap Intel CPUs. I.e., I would not be surprised if the median price of Apple's Intel CPU purchases is at least 2x the total market median, if not more. In terms of revenue that is 20% of Intel's consumer segment.
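The back-of-envelope above, spelled out (the 2x price multiple is the commenter's assumption, not a published figure):

```python
apple_macs_per_year = 20e6      # units, as quoted above
intel_pc_cpus_per_year = 220e6  # units, as quoted above
price_multiple = 2              # assumed: Apple's median CPU price vs. market

unit_share = apple_macs_per_year / intel_pc_cpus_per_year
revenue_share = unit_share * price_multiple
print(f"unit share: {unit_share:.1%}")       # ~9% of units
print(f"revenue share: {revenue_share:.1%}") # ~18%, i.e. roughly 20%
```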

Intel used to charge a premium for having the leading-edge fab. You couldn't get silicon better than Intel's; you were basically paying those premiums for having the best. Then Intel's fab went from 2 years ahead to 2 years behind (that is a 4-year swing), all while charging the same price. And since Intel wants to keep its margin, it is not much of a surprise that customers (Apple and Amazon) look for alternatives.

Here is another piece on Amazon Graviton 2.


Maybe I should submit it to HN.

Yeah, last I looked at it, Apple's average Mac sales price was $1300; HP/Dell/et al. were under $500.

Apple owns the premium PC market; its Mac division is not only the most profitable PC company in the world, it might be more profitable than all the others combined.

It’s share of Intels most expensive desktop CPUs is much higher than its raw market share.

We recently evaluated using Graviton2 over x86 for our Node.js app on AWS. There was enough friction (some native packages that didn't work out of the box, and some key third-party dependencies missing completely) that it wasn't worth the effort in the end, even considering the savings, as we'd likely keep having these issues pop up and have to debug them remotely on Graviton.

If macOS goes ARM and there's a sizable population of developers testing and fixing these issues constantly, the math changes in favor of Graviton and it would make it a no-brainer to pick the cheaper alternative once everything "just works".

Unfortunately you might have witnessed the Achilles' heel of the ARM ecosystem, where certain software/binaries are not available yet. Most open-source code can be compiled for ARM without much hassle[1][2][3], but some might require explicit changes to port certain x86-specific instructions to ARM.

I've been shifting my work to ARM-based machines for some years now, mainly to reduce power consumption. One of my current projects, a problem validation platform[4] (Go), has been running nicely on an ARM server (Cavium ThunderX SoCs) on Scaleway; but weirdly, Scaleway decided to quit on ARM servers[5], citing hardware issues which not many of the ARM users seem to have faced. The only ARM-specific issue I faced with Scaleway was that a reboot required a power-off.

[1]cudf: https://gist.github.com/heavyinfo/da3de9b188d41570f4e988ceb5...

[2]Ray: https://gist.github.com/heavyinfo/aa0bf2feb02aedb3b38eef203b...

[3]Apache Arrow: https://gist.github.com/heavyinfo/04e1326bb9bed9cecb19c2d603...

[4]needgap: https://needgap.com

[5]Scaleway ditched ARM: https://news.ycombinator.com/item?id=22865925

Filing bugs against the broken packages would be a nice thing to do. Easy enough to test on a Raspberry Pi or whatever.

I have a hunch our industry will need to increasingly deal with ARM environments.

>Mac ARM laptops mean cloud ARM VMs.

What is the connection here? ARM servers would be fine in a separate discussion. What do they have to do with Macs? Macs aren't harbingers of anything. They have set literally no trend in the last couple of decades, other than thinness at all costs. If you mean that developers will use Gravitons to develop Mac apps, why/how would that be?

To quote Linus Torvalds:

"Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud.

That's bull*t. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment)."

So I would argue there is a strong connection.

> If you develop on x86, then you're going to want to deploy on x86

I can see this making sense to Torvalds, being a low-level guy, but is it true for, say, Java web server code?

Amazon is betting big on Graviton in EC2. It's no longer just used in their 'A1' instance types; it's also powering their M6g, C6g, and R6g instance types.


I agree about Java. I write Java on Windows but deploy on Linux, and it works. I used to deploy on Itanium and also on some little-endian IBM POWER, and I never had any issues with Java. It's very cross-platform.

Another example is Android app development. Most developers use an x86_64 CPU and run the emulator using the Intel Android image. While I don't have vast experience, I did write a few apps and never had any issue because of arch mismatch.

High level languages mostly solved that issue.

Also note that there are some ARM laptops in the wild already. You can use either Windows or Linux. But I don't see every cloud or Android developer hunting for those laptops.

It works until it doesn't. We had issues where classes were loaded in a different order on Linux, causing problems that we could not repro on Windows.

Interesting, but in that case you changed OS rather than changing CPU ISA, so not quite the same thing.

No, it's exactly the same thing. The more variables you change, the harder it will be to debug a problem.

I've deployed C++ code on ARM in production that was developed on X64 without a second thought, though I did of course test it first. If it compiles and passes unit tests, 99.9% of the time it will run without issue.

Going from ARM to X64 is even less likely to have issues as X64 is more permissive about things like unaligned access.

People are making far too big a deal out of porting between the two. Unless the code abuses undefined behaviors in ways that you can get away with on one architecture and not the other, there is usually no issue. Differences in areas like strong/weak memory ordering, etc., are hidden behind APIs like POSIX mutexes or std::atomic and don't generally have to be worried about.

The only hangup is usually vector intrinsics or ASM code, and that is not found in most software.

For higher level languages like Go or Java, interpreted languages like JavaScript or Python, or more modern languages with fewer undefined edge cases like Rust, there is almost never an issue.

This is just not a big deal unless you're a systems person (like Linus) or developing code that is really really close to the metal.

I've developed for x86 and deployed on x86. Some years later we decided to add ARM support. Fixing the bugs that only showed up on ARM made our x86 software more stable. It turns out some one-in-a-million issues on x86 happen often enough on ARM that we could isolate them and then fix them.

Thus I encourage everyone to target more than one platform, as it makes the total better, even though there are platform-specific issues that won't happen on the other (like the compiler bug we found).

Apparently Apple had macOS working for years on x86 before they switched their computers to Intel CPUs. The justification at the time was exactly this: by running their software on multiple hardware platforms, they found more bugs and wrote better code. And obviously it made the later hardware transition to Intel dramatically easier.

I would be surprised if Apple didn't have internal prototypes of macOS running on their own ARM chips for the last several years. Most of the macOS/iOS code is shared between platforms, so it's already well optimized for ARM.

They've had it deployed worldwide on ARM: every iPhone and iPad runs it on an ARM chip.

To add to your example, everyone that targets mobile devices with native code tends to follow a similar path.

Usually making the application run in the host OS is much more productive than dealing with the emulator/simulator.

Most of my work outside of the day job is developed on x86 and deployed on ARM.

Unless you're talking about native code (and even then, I've written in the past about ways this can be managed more easily), then no, it really doesn't matter.

If you're developing in NodeJS, Ruby, .NET, Python, Java or virtually any other interpreted or JIT-ed language, you were never building for an architecture, you were building for a runtime, and the architecture is as irrelevant to you as it ever was.

> Python

Well I can't speak to some of the others... but Conda doesn't work at all on ARM today (maybe that will change with the new ARM Macs, though), which is annoying if you want to use it on, say, a Raspberry Pi for hobby projects.

Additionally, many scientific Python packages use either pre-compiled binaries or compile them at install-time, for performance. They're just Python bindings for some C or Fortran code. Depending on what you're doing, that may make it tricky to find a bug that only triggers in production.

Sorry, yes this is an exception.

Also one I've come across myself so I'm a bit disappointed I didn't call this out. So... kudos!

If you're on a low enough level where the instruction set matters (ie. not Java/JavaScript), then the OS is bound to be just as important. Of course you can circumvent this by using a VM, though the same can be said for the instruction set using an emulator.

But that's the other way round. If you have an x86 PC, you can develop x86 cloud software easily. You don't develop cloud software on a Mac anyway (i.e., that's not Apple's focus). You develop Mac software on Macs for other Macs. If you have to develop cloud software, you'll do so on Linux (or WSL or whatever). What is the grand plan here? You'll run an ARM Linux VM on your Mac to develop general cloud software which will be deployed on Graviton?

> If you have to develop cloud software, you'll do so on linux (or wsl or whatever).

I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.

I can say our company standardized on Macs for developers back when Macs were much better relative to other laptops. But now most of the devs are doing it begrudgingly. The BSD userland thing is a constant source of incompatibility, and the package systems are a disaster. The main reason people are not actively asking for alternatives is that most use the Macs as dumb terminals to shell into their Linux dev servers, which takes the pressure off the poor dev environment.

The things the Mac is good at:

1) It powers my 4k monitor very well at 60Hz

2) It switches between open lid and closed lid, and monitor unplugged / plugged in states consistently well.

3) It sleeps/wakes up well.

4) The built in camera and audio work well, which is useful for meetings, especially these days.

None of these things really require either x86 or Arm. So if a x86 based non-Mac laptop appeared that handled points 1-4 and could run Linux closer to our production environment I'd be all over it.

I think you've hit the nail on the head, but you've also summarised why I think Apple should genuinely be concerned about losing marketshare amongst developers now that WSL2 is seriously picking up traction.

I started using my home Windows machine for development as a result of the lockdown and in all honesty I have fewer issues with it than I did with my work MacBook. Something is seriously wrong here.

I think Apple stopped caring about developer marketshare a long time ago and instead is focusing on the more lucrative hip and young Average Joe consumer.

Most of the teens to early 20 somethings I know are either buying or hoping to buy the latest Macs, iPads, iPhones and AirPods while most of the devs I know are on Linux or WSL but devs are a minority compared to the Average Joes who don't code but are willing to pay for nice hardware and join the ecosystem.

Looking at the architecture slide of Apple's announcement about shifting Macs to ARM, they want people to use them as dev platforms for better iPhone software. Think Siri on chip, Siri with eyes and short-term context memory.

And as a byproduct perhaps they will work better for hip young consumers too. Or anyone else who is easily distracted by bright colours and simple pictures, which is nearly all of us.

> I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.

The dominance of Macs for software development is a very US-centric thing. In Germany, there is no such Mac dominance in this domain.

To be fair in the UK Macs are absolutely dominant in this field.

Depends very much what you're doing; certainly not in my area (simulation software) at least, not for other than use as dumb terminals.

Yes, in Germany it's mostly Linux and Lenovo / Dell / HP desktops and business-type laptops. Some Macs, too.

I have no idea where in Germany you're based, or what industry you work in, but in the Berlin startup scene, there's absolutely a critical mass of development that has coalesced around macOS. It's a little bit less that way than in the US, but not much.

Berlin is very different from the rest of Germany.

This. According to my experience and validated by Germans and expats alike, Berlin is not Germany :)

In Norway where I live Macs are pretty dominating as well. Might be Germany is the outlier here ;-)

When I go to Ruby conferences, Java conferences, academic conferences, whatever, in Europe, everyone - almost literally everyone - is on a Macintosh, just as in the US.

Most people don’t go to conferences.

Ruby conference goers don't represent all the SW devs of Europe :)

Why do you think not?

And why not Java developers?

They seem pretty polar opposite in terms of culture, but all still turn up using a Macintosh.

Because every conference is its own bubble of enthusiasts and SW engineering is a lot more diverse than Ruby, from C++ kernel devs to Firmware C and ASM devs.

Even the famous FailOverflow said in one of his videos he only bought a Mac since he saw that at conferences everyone had Macs so he thought that must mean they're the best machines.

Anecdotally, I've interviewed at over 12 companies in my life and only one of those issued Macs to its employees; the rest were Windows/Linux.

True, but it is full of developers using Windows to deploy on Windows/Linux servers, with Java, .NET, Go, node, C++ and plenty of other OS agnostic runtimes.

Given the fact that the US has an overwhelming dominance in software development (including for the cloud) I think that the claim this is only a US phenomenon is somewhat moot. As a simple counter-point, the choice of development workstation in the UK seems to mirror my previous experience in the US (i.e. Macs at 50% or more.)

My experience in Germany and Austria mirrors GPs experience with windows/linux laptops being the majority and Mac being present in well funded hip startups.

Same in South Africa (50% mac, 30% windows, 20% ubuntu) and Australia.

> You don't develop cloud software on a mac anyway

I've got anecdata that says different. My backend/cloud team has been pretty evenly split between Mac and Windows (with only one Linux on the desktop user). This is at a Java shop (with legacy Grails codebases to maintain but not do any new development on).

Mac is actually way better for cloud dev than Windows is, since it's all Unix (actual Unix, not just Unix-like). And let's be honest, you'll probably be using docker anyway.

Arguably now, with WSL, Windows is closer to the cloud environment than macOS. It's a true Linux kernel running in WSL, no longer a shim over Windows APIs.

Yep. WSL 2 has been great so far. My neovim setup feels almost identical to running Ubuntu natively. I did have some issues with WSL 1, but the latest version is a pleasure to use.

Do you use VimPlug? For me :PlugInstall fails with cannot resolve host github.com

I do use VimPlug. Maybe a firewall issue on your end? I'm using coc.nvim, vim-go, and a number of other plugins that install and update just fine.

That is just utter pain though. I've tried it and I am like NO THANKS! Windows software interoperates too poorly with Unix software due to different file paths (separators, mounting) and different line endings in text files.

With a Mac, all your regular Mac software integrates well with the Unix world. Xcode is not going to screw up my line endings. I don't have to keep track of whether I am checking out a file from a Unix or Windows environment.

Your line-ending issue is very easy to fix in git:

`git config --global core.autocrlf true`

That will configure git to check out files with CRLF endings and change them to plain LF when you commit files.

Eating data is hardly a fix for anything, even if you do it intentionally.

If the cloud is mostly UNIX-like and not actual UNIX, why would using “real UNIX” be better than using, well, what’s in the cloud?

Agree, although I think this is kind of nitpicking, because "UNIX-like" is pretty much the UNIX we have today on any significant scale.

macOS as certified UNIX makes no sense in this argument. It doesn't help anything, as most servers are running Linux.

I develop on Mac, but not mainly for other Macs (or iOS devices), but instead my code is mostly platform-agnostic. Macs also seem to be quite popular in the web-frontend-dev crowd. The Mac just happens to be (or at least used to be) a hassle-free UNIX-oid with a nice UI. That quality is quickly deteriorating though, so I don't know if my next machine will actually be a Mac.

True, but then the web-frontend dev stuff is several layers away from the ISA, isn't it? As for the Unix-like experience, from reading other people's accounts, it seemed like that was not really Apple's priority. So there are ancient versions of utilities due to GPL aversion and stuff. I suppose Docker, Xcode and things like that make it a bit better, but my general point was that this didn't seem like Apple's main market.

> So there are ancient versions of utilities due to GPL aversion and stuff.

They're not ancient, but are mostly ports of recent FreeBSD (or occasionally some other BSD) utilities. Some of these have a lineage dating back to AT&T/BSD Unix, but are still the (roughly) current versions of those tools found on those platforms, perhaps with some apple-specific tweaks.

It works great though, thanks to Homebrew. I have had very few problems treating my macOS as a Linux machine.

> You don't develop cloud software on a mac anyway

You must be living in a different universe. What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?

> What do you think the tens of thousands of developers at Google ... are doing on their Macintoshes?

I can only speak of my experience at Google, but the Macs used by engineers here are glorified terminals, since the cloud based software is built using tools running on Google's internal Linux workstations and compute clusters. Downloading code directly to a laptop is a security violation (With an exception for those working on iOS, Mac, or Windows software)

If we need Linux on a laptop, there is either the laptop version of the internal Linux distro or Chromebooks with Crostini.

They individually have a lot of developers, but the long tail is people pushing to AWS/Google Cloud/Azure from boring corporate offices that run a lot of Windows and develop in C#/Java.

edit: https://insights.stackoverflow.com/survey/2020#technology-pr...

>What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?

SSH to a Linux machine? I get that cloud software is a broad term that includes pretty much everything under the sun. My definition of cloud dev was a little lower level.

This is the same Linus who's recently switched his "at home" environment to AMD...


Which is still x86...?

What point are you trying to make?

So? Linus doesn’t develop for cloud. His argument still stands.

Because when you get buggy behaviour from some library because it was compiled to a different architecture it's much easier to debug it if your local environment is similar to your production one.

Yeah, I'm able to do remote debugging in a remote VM, but the feedback loop is much longer, impacting productivity, morale, and time to solve the bug: a lot of externalised costs that all engineers with reasonable experience are aware of. If I can develop my code on the same architecture it'll be deployed on, my mind is much more at peace; when developing on x86_64 to deploy on ARM, I'm never sure some weird cross-architecture bug will pop up. No matter how good my CI/CD pipeline is, it won't ever account for real-world usage.

On the other hand, giving devs an alien workstation really puts stress on the application's configurability and adaptability in general.

It's harder in all the ways you describe, but it's much more likely the software will survive migrating to the next Debian/CentOS release unchanged.

It all boils down to the temporal scale of the project.

I'd say that in my 15-year career I've had many more tickets for bugs I needed to troubleshoot locally than issues with migrating to a new version/release of a distro. To be honest, it's been 10 years since the last time I had a major issue caused by a distro migration or update.

> "Macs aren't harbingers of anything."

I have to agree. It's not like we're all running Darwin on servers instead of Linux. Surely the kernel is a more fundamental difference than the CPU architecture.

ARM Macs mean more ARM hardware in the hands of developers. That means ARM Docker images that can be run on the hardware at hand, and easier debugging (see https://www.realworldtech.com/forum/?threadid=183440&curpost...).

> They have set literally no trend in the last couple of decades, other than thinness at all costs

Hahaha, then you have not been paying attention. Apple led the trend away from beige boxes. The style of keyboard used. Large trackpads. USB. First to remove the floppy drive. Hardware, software and web design have all been heavily inspired by Apple. Just look at the icons used, first popularized by Apple.

Ubuntu's desktop is strongly inspired by macOS. An operating system with drivers preloaded through the update mechanism was pioneered by Apple. Windows finally seems to be doing this.

Because if you're developing apps to run in the cloud, it's preferable to have the VM running the same architecture that you're developing on.

maybe what he means is, if macs are jumping on the trend, man that must be a well-established trend, they're always last to the party.

Epyc Rome is now available on EC2, and the c5a.16xlarge plan appears to be about the same price or slightly cheaper than the Graviton2 plan.

Being cheaper isn't enough here: Graviton needs to be faster, and it needs to do that over generations. It needs to _sustain_ its position to become attractive. Intel can fix pricing in a heartbeat; they've done that in the past when they were behind. Intel's current fab issues do make this a great time to strike, but what about in 2 years? 3 years? 4? Intel's been behind before, but they don't stay there. Switching to Epyc Rome at the moment is an easy migration: same ISA, same memory model, a vendor that isn't new to the game, etc. But Graviton needs a bigger jump; there's more investment there to port things over to ARM. Whether that investment pays off over time is a much harder question to answer.

> But Graviton needs a bigger jump, there's more investment there to port things over to ARM.

I agree to some extent, but don't underestimate how much this has changed in the cloud era: it's never been cheaper to run multiple ISAs. A whole lot of stuff is running in environments where switching is easy (API-managed deployments, microservices, etc.), and the toolchains support ARM already thanks to phones/tablets and previous servers. So much code will just run on the JVM, in high-level languages like JavaScript or Python, or in low-level ones like Go and Rust with great cross-compiler support. Hardware acceleration also takes away the need to pour engineer-hours into things like OpenSSL, which might have blocked tons of applications.

At many businesses that is probably over the threshold where someone can say it’s worth switching the easy n% over since they’ll save money now and if the price/performance crown shifts back so can their workloads. AWS has apparently already done this for managed services like load-balancers and I’m sure they aren’t alone in watching the prices very closely. That’s what Intel is most afraid of: the fat margins aren’t coming back even if they ship a great next generation chip.

The problem here is that ARM and x86 have very different memory models; the ISA itself isn't the big issue. Re-compiling for ARM is super easy, yes, absolutely. Making sure you don't have any latent thread-safety bugs that happened to be OK on x86 but are now bugs on ARM? That's a lot harder, and it only takes a couple of those to potentially wipe out any savings, as they're in the class of bugs that's particularly hard to track down.
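To illustrate the bug class (a minimal Python sketch, not anyone's production code): a producer writes a payload and then publishes a "ready" signal. If the signal were a plain unsynchronized flag in C/C++, x86's strong (TSO) ordering would usually keep the two stores in order anyway, while ARM's weaker model is free to reorder them, so the consumer could observe the flag before the data. The fix is an explicit synchronization primitive, here `threading.Event`:

```python
import threading

payload = None
ready = threading.Event()  # explicit synchronization; a bare boolean flag is the hazard

def producer():
    global payload
    payload = 42   # store 1: the data
    ready.set()    # store 2: publish; Event guarantees the ordering

def consumer(results):
    ready.wait()   # without a real barrier here, ARM may see the flag before the data
    results.append(payload)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t2.start(); t1.start()
t1.join(); t2.join()
print(results)  # [42]
```

CPython's GIL masks this particular hazard in pure Python, which is exactly why code can look fine in local testing yet hide an ordering bug in the natively compiled libraries underneath.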

If you do have hidden thread safety bugs you are only one compile away from failure, even on x86.

Some function gets vectorized and a variable shifts in or out of a register? Or one of your libraries gets rebuilt with LTO / LTCG and its functions are inlined.

If your code was wrong, any of that can ruin it and you're left trying to figure it out, and requiring your developers to use exactly one point release of Visual Studio and never, ever upgrade the SDK install.

And precisely for this reason, if I were Amazon/AWS I would buy ARM from SoftBank right now, especially at a time when the Vision Fund is performing so poorly and there might therefore be financial distress on SoftBank's side.

I'm not so sure about so many parts of this.

I love working on ARM, but with the licensing model I'm not sure how much of a positive return Amazon would really be able to squeeze from their investment.

It also potentially brings them a truckload of future headaches if any of their cloud competitors raise anti-trust concerns down the road.

Beyond that, I think Apple, ARM and AMD get a lot of credit for their recent designs - a lot of which is due, but quite a bit of which should really go to TSMC.

The TSMC 7nm fabrication node is what's really driven microprocessors forward in the last few years, and hardly anyone outside of our industry has ever heard of them.

I don't know that Amazon couldn't transition to RISC-V if they needed to in a couple of years.

I think it's still a sound investment: not for the ROI, but for a stake in controlling the future. They could mitigate the anti-trust angle a bit by getting Apple, IBM, and maybe Google on board.

Microsoft controls a lot of lucrative business-centric markets, an advantage that seems to have helped Azure close the gap with AWS in market share. One of Microsoft's weaknesses is their inability to migrate away from x86 in any meaningful way. IBM could use Red Hat to push the corporate server market away from x86 under the guise of lower operating costs, which could deal a tremendous blow to MS, leaving Amazon and Google with an opening to hit the Office and Azure markets.

Imagine if Oracle buys it, and it becomes another SUN-type outcome?

>you’d be stupid not to switch over to Graviton2

You overestimate the AWS customer base. Lots of them do silly things that cost them a lot of money.

It's because AWS is designed in a such a way that it's very easy to spend a lot, and very difficult to know why.

If you can throw money at the problem and invest engineering resources to do cost optimization later, this is often a valid strategy.

It's often easy to test if scaling the instance size/count resolves a performance issue. If it does, you know you can fix the problem by burning money.

When you have reasonable certainty of the outcome spending money is easier than engineering resources.

And later, it's easier for an engineer to justify performance optimizations if they can point to a lower cloud bill.

I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.

>If you can throw money at the problem and invest engineering resources to do cost optimization later, this is often a valid strategy.

If only AWS had thousands of engineers to create a UX that makes cost info all upfront and easy to budget. Clearly it's beyond their capability /s

>I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.

Yes, but I bet a majority of AWS users don't want that to be the default strategy.

> Mac ARM laptops mean cloud ARM VMs.

Why do you think that? Cloud means Linux; Apple doesn't even have a server OS anymore.

We are seeing some ARM devices entering the server space, but these have been in progress for years and have absolutely nothing to do with Apple's (supposed) CPU switch.

Why do you think developers don't run Linux VMs on macOS laptops?

They absolutely do.

One reason is to get the same environment they get in a cloud deployment, scaled down. Another is that Apple keeps messing with compatibility, the most recent example being the code-signing thing slowing down script execution.

Edit: this includes Docker on Mac; it's not native.

When they run Docker that's precisely what they do...

Everyone mentions Intel high margins on servers and somehow does not consider if Apple wants these margins too.

> Guys, do you really not understand why x86 took over the server market?

> It wasn’t just all price. It was literally this “develop at home” issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a “real server”. And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.

It was also because Intel was like this giant steamroller you couldn't compete with because it was also selling desktop CPUs.

Sure, they have lower margins on desktop, but it brings sh!t tons of cash: cash you then use for research and to develop your next desktop and server CPUs. If I believe this page [1], consumer CPUs brought in almost $10 billion in 2019... In comparison, server CPUs generated $7 billion in revenue... And these days, Intel has >90% of that market.

Other players (Sun, HP, IBM, Digital, ...) were playing in walled gardens and couldn't really compete on any of that, least of all pricing, because their stuff was crazy expensive.

So not only were they sharing the server market with Intel, but Intel was most likely earning more than all of them combined from its desktop CPUs... More money, more research, more progress shared between consumer and server CPUs: rinse and repeat, and eventually you catch up. And on top of that, they could sell their server CPUs for a lot less than their "boutique" competitors.

You just can't compete with a player that has almost unlimited funds and executes well...

[1] https://www.cnbc.com/2020/01/23/intel-intc-earnings-q4-2019....

Apple backed away from the server market years ago despite having a quite nice web solution.

Exactly. Yes, some people need memory and multi-core speeds, but for most, it's the cost per "good enough" instance, and that's CapEx and power, both of which could be much lower with ARM.

If you need multi-core speeds, you'd be silly to not go AMD Epyc (~50% cheaper), or ARM.

People paid the IBM tax for decades. Intel is the new IBM. Wherever you look, Intel's lineup is not competitive and it is only surviving due to inertia.

If the operating margins in the article are realistic then there is a lot of room for undercutting Intel if you can convince the customer to take the plunge. That is, if you’re not in an IBM-vs-Amdahl situation where the lower price is not enough.

It isn't quite that simple, Intel has way more engineering resources than AMD and for some complicated setups like data centers, Intel really does have good arguments that their systems are better tested than the competition, and Intel does have better ability to bundle products together than AMD.

Intel is behind in fab technology. I don't think Intel's chip designs are what's holding them back: AMD offered better multi-core performance, and Intel responded with more cores as well. However, I do believe that Intel suffers from an awful corporate environment. There was a story about ISPC [0], with one chapter [1] talking about the culture at the company.

[0] https://pharr.org/matt/blog/2018/04/30/ispc-all.html

[1] https://pharr.org/matt/blog/2018/04/28/ispc-talks-and-depart...

> their systems are better tested than the competition

Yes, Spectre and Meltdown definitely prove that...

I think I heard that argument already, in the 90's. But instead of Intel it was Sun, it was SGI, it was Digital, etc

The truth is that money wins and while there is an inertia, it's more like a Cartoon inertia where your guy won't fall from the cliff until he looks down. But then it's too late.

If by tested you mean getting eaten alive by monthly vulnerability reports and further degrading performance, sure. There isn't much rocket science otherwise to "better tested" in a server platform. Either it passes benchmarks of use-case scenarios or it doesn't.

Yes there is. Server platforms are connected to more storage, higher-bandwidth networking and more exotic sockets (more than one CPU). It is one thing to support lots of PCI Express slots on paper. It is another thing to have a working solution that doesn't suffer degraded performance when all the PCI Express slots are in use at the same time.

None of this is rocket science but it takes money and engineers time to make these things happen. Intel has more of both at the moment.

There are plenty of applications where single-threaded clock speed matters, and Intel still wins by a wide margin there. Cache size is also a factor, and high-end Xeons have more cache than any competing CPU I've seen.

The just announced Intel Xeon Cooper Lake top end processor has about 38.5MB of cache. The AMD Rome top end has 256MB of cache.



I'm not sure this is the whole story, Intel has twice the L2 cache as AMD but I'm not sure that's enough to make a huge difference.

Epyc 7H12[1]:

- L1: two 32KiB L1 cache per core

- L2: 512KiB L2 cache per core

- L3: 16MiB L3 cache per core, but shared across all cores.

The L1/L2 cache aren't yet publicly available for any Cooper Lake processors, however the previous Cascade Lake architecture provided:

All Xeon Cascade Lakes[2]:

- L1: two 32 KiB L1 cache per core

- L2: 1 MiB L2 cache per core

- L3: 1.375 MiB L3 cache per core (shared across all cores)

Normally I'd expect the upcoming Cooper Lake to surpass AMD in L1, and lead further in L2 cache. However it looks like they're keeping the 1.375MiB L3 cache per core in Cooper Lake, so maybe L1/L2 are also unchanged.

0: https://www.hardwaretimes.com/cpu-cache-difference-between-l...

1: https://en.wikichip.org/wiki/amd/epyc/7h12

2: https://en.wikichip.org/wiki/intel/xeon_platinum/9282

Edit: Previously I showed EPYC having twice the L1 as Cascade Lake, this was a typo on my part, they're the same L1 per core.

Zen 2 has 4MiB L3 per core, 16 MiB shared in one 4-core CCX.
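A quick sanity check against the totals on the WikiChip pages linked upthread (treating 256 MiB of L3 across 64 cores for the Epyc 7H12, and 77 MiB across 56 cores for the Xeon Platinum 9282, as given):

```python
# Epyc 7H12 (Zen 2): 64 cores, grouped in 4-core CCXs, 256 MiB total L3
total_l3_mib = 256
cores = 64
ccx_size = 4

per_core = total_l3_mib / cores   # MiB of L3 per core
per_ccx = per_core * ccx_size     # MiB shared within one 4-core CCX

# Xeon Platinum 9282 (Cascade Lake): 56 cores, 77 MiB L3
xeon_per_core = 77 / 56           # MiB of L3 per core

print(per_core, per_ccx, round(xeon_per_core, 3))  # 4.0 16.0 1.375
```

The arithmetic matches both the 4 MiB-per-core / 16 MiB-per-CCX figures above and the 1.375 MiB-per-core Cascade Lake figure cited earlier in the thread.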

Thanks, I wrote that while burning the midnight oil and didn't double-check the sanity of those numbers. It's too late to edit mine but I hugely appreciate the clarification.

NP. It's still a huge amount of LLC compared to the status quo. Says something about how expensive it really is to ship all that data between the CCXs/CCDs.

Intel L3 does not equal AMD L3 cache regarding latencies. Depending on the application this can matter a lot. https://pics.computerbase.de/7/9/1/0/2/13-1080.348625475.png

You'd need that latency to be significant enough that AMD's >2x core count doesn't still result in it winning by a landslide anyway, and you need L3 usage low enough that it still fits in Intel's relatively tiny L3 size.

There's been very few cloud benchmarks where 1P Epyc Rome hasn't beaten any offering from Intel, including 2P configurations. The L3 cache latency hasn't been a significant enough difference to make up the raw CPU count difference, and where L3 does matter the massive amount of it in Rome tends to still be more significant.

Which is kinda why Intel is just desperately pointing at a latency measurement slide instead of an application benchmark.

Cache per tier matters a lot; total cache does not tell you much. L1 is always per-core and small, L2 is larger and slower, and L3 is shared across many cores with access that's really slow compared to L1 and L2. In the end, performance per watt for a specific app is what matters; that is the end goal.
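The tiering point can be made concrete with a tiny average-memory-access-time model. The latencies and hit rates below are illustrative made-up numbers, not measurements of any real part:

```python
def amat(levels):
    """Expected access latency in cycles.

    levels: list of (latency_cycles, hit_rate) tried in order;
    the final entry is DRAM with hit_rate 1.0.
    """
    expected = 0.0
    reach_prob = 1.0  # probability the access misses all earlier levels
    for latency, hit_rate in levels:
        expected += reach_prob * hit_rate * latency
        reach_prob *= (1.0 - hit_rate)
    return expected

# Illustrative numbers: L1 4 cycles @ 95%, L2 12 @ 80%, L3 40 @ 80%, DRAM 200
print(round(amat([(4, 0.95), (12, 0.80), (40, 0.80), (200, 1.0)]), 1))  # 5.0
```

With these (hypothetical) numbers the expected latency is dominated by the L1 hit rate, which is why a modest L3-latency difference between vendors matters far less than how much of the working set fits in the nearer tiers.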

Interesting, that's news to me. Guess Intel just has clock speed then. That's why I still pay a premium to run certain jobs on z1d or c5 instances.

As another commenter pointed out, though, not all caches are equal. Unfortunately, I was not able to easily find access speeds for specific processors, so single-threaded benchmarks are the primary quantitative differentiator.

Given the IPC gains of Zen 2, the single-threaded gap is closing, and even reversed in some workloads.

And I think Xeon L3 cache tops out at about 40MB, whereas Threadripper & Epyc go up to 256MB.

Really? 'Entry-level' EPYCs (the 7F52) have 256MB of L3 cache for 16 cores.

I don't think there are any Intel CPUs with more than 36MB of L3?

77MB for 56 cores. That's the ploy where they basically glued two sockets together so they could claim a per-socket performance advantage, even though it draws 400W and that "socket" doesn't even exist (the package has to be soldered to the motherboard).

IIRC the only people who buy those vs. the equivalent dual socket system are people with expensive software which is licensed per socket.

Those applications exist, but not enough to justify Intel’s market cap.

Do you have a source for any of the things you said?

Anecdotal: back when SETI@home was a thing, I was running it on some servers; a 700 MHz Xeon was a lot faster (>50%, IIRC) than a 933 MHz Pentium 3. The Xeon had a much lower frequency and a slower bus (100 vs 133 MHz), but its cache was 4 times larger, and probably the dataset, or most of it, fit in cache.

The same happened with the mobile Pentium M with 1MB (Banias) and 2MB (Dothan) of cache: you could fit the whole working set in cache and it just flew, despite the (relatively) low clock speed. There were people building farms of machines with bare boards on Ikea shelving.

Even worse for Intel, there are lots of important server workloads that aren't CPU intensive, but rely on the CPU coordinating DMA transfers between specialized chips (SSD/HDD controller, network controller, TPU/GPU) and not using much power or CapEx to do so.

> If you need multi-core speeds, you'd be silly to not go AMD Epyc (~50% cheaper), or ARM.

But Amdahl's Law shows us this doesn't make sense for most people.
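For intuition, Amdahl's Law gives the maximum speedup as 1 / ((1 - p) + p / n) for parallel fraction p on n cores. Even code that is 90% parallel tops out far short of the core count, which is the point about multi-core speeds not mattering for most people:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Maximum speedup per Amdahl's Law for a given parallel fraction on n cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

# 90% parallel code: 64 cores give nowhere near 64x
for n in (4, 16, 64):
    print(n, round(amdahl_speedup(0.9, n), 2))  # 3.08, 6.4, 8.77
```

Going from 16 to 64 cores only raises the speedup from 6.4x to about 8.8x here, so for such workloads per-core speed and cost per "good enough" instance dominate.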

Graviton is only cheaper because Amazon gouges you slightly less.

Graviton is still almost an order of magnitude slower than a VPS of the same cost, which is around what the hardware and infra costs Amazon.

And since you can already run Windows 10 on ARM (https://docs.microsoft.com/en-us/windows/arm/), it is only a matter of time before we get Windows Server on ARM. I think you may already be able to run SQL Server on Linux on ARM, though I'm not entirely sure about that.

I would suspect that Apple is likely to enforce exclusive hardware for ARM; just like it does now, for Intel (which is a lot more common than ARM).

The limitation is not technical (as hackintoshes demonstrate); it’s legal.

That said, it would be great to be able to run Mac software (and iOS) on server-based VMs.

I just don’t think it will happen at scale, because lawyers.

Pedantic: actually, cloud ARM VMs and ARM laptops mean eventual Mac ARM laptops. The former two already exist: the Graviton2 you mentioned, the Surface Pro X, and various third-party machines.

There is no mainstream ARM laptop as of this writing.

What are my options if I want an ARM laptop with, say, mobile processor performance close to the i7-8850H found in a 2018 MBP 15, 16GB of RAM and a 512GB NVMe SSD, to set up my day-to-day dev environment (Linux + Golang + C++ etc.)?

The Surface Pro X is the only ARM laptop you can easily purchase, but it is Windows-only and the processor is way too slow. There has also been close to zero response from app vendors to bring their apps to the native ARM Windows environment.

How is the X slow? The reviewers have only claimed it was slow when emulating x86, but not in native apps.

Not sure how slow its ARM processor is in actual use, but we know it's far slower than Apple's ARM CPUs.

AFAIK, you can game on the Surface Pro X without much issue. There are numerous YouTube videos of popular games doing 60 FPS on decent settings.

Apple fans just seem to be in denial about being late to the game.

I have to admit, Apple's ARM processors will likely be significantly faster per core. But they are not the driver of the switch to ARM; if anything, Chromebooks were.

Does anyone know if AWS Graviton supports TrustZone?

> Mac ARM laptops mean cloud ARM VMs.

If you develop for Mac, chances are you want your CI to use the same target hardware, which means cloud ARM hardware to run Mac VMs.

Chances indeed, but given how much software is written in cross-platform languages (Java, JS, Ruby, etc.) or has the underlying CPU hardware abstracted away (most compiled languages), I like to think it doesn't really matter except in edge cases and/or libraries.

Wishful thinking though, probably.

I can abstract away the OS (mostly), but I can’t abstract away the ISA without paying a pretty hefty performance cost.

Except you’re not going to be selling access to a Mac VM running on Graviton anytime soon.

> you’d be stupid

there's still a load of non scale-out services that the world depends upon.

Try Ampere:


Ever since Cavium gave up its Arm server products and pivoted to HPC, there hasn't been a real Arm competitor.

Ampere is almost all ex-Intel.

Intel has a couple of issues.

1. Back in the 1990s, the big UNIX workstation vendors were sitting where Intel is now at the high end, being eaten from the bottom by derivatives of what was essentially a little embedded processor for dumb terminals and scientific calculators. Taken in isolation, Apple's chips aren't an example of low-margin high-volume product eating its way up the food chain, but the whole ARM ecosystem is.

2. For a lot of the datacenter roles being played by Intel Xeons, FLOPS/watt or IOPS/watt isn't the important metric. For many important workloads, the processor is mostly there to orchestrate DMA from the SSD/HDD controller to main memory and from main memory to the network controller. The purchaser of the systems is looking to maximize bytes per second divided by the amortized cost of the system plus the cost of the electricity. My understanding is that even now, some of the ARM designs are better than the Atoms in terms of TDP, even setting aside the cost advantages.

And we should not forget that ARM is already there: it is used by almost all the I/O controllers, including the SSD/HDD controllers.

Curious: with that being the case, why haven't non-Intel systems taken more market share in these use cases?

Momentum. I think one component of momentum is just the time it takes to develop mature tooling for the ecosystem and port existing software over.

I invested in ARM (ARMH) back in 2006, partly because I realized that whether Apple or Android won more marketshare, nearly everyone was going to have an ARM-based smartphone in a few years. Part of it was also realizing the above and hoping ARM would take a good share of the server market. SoftBank took ARM private before we saw much inroads in the server market, but it was still one of the best investments I've made.

Of course, maybe I'm just lucky and my analysis was way off.

Clearly this analysis was accurate! Prescient and well done. Was ARM running most non-iPhone smartphones in 2006? The iPhone launched in 2007, so this analysis must have been based on other smartphones (unless you had inside intel on the iPhone).

1) which 2006 analyses proved wrong? it would be interesting to see which assumptions made sense in 2006 but were ultimately proven wrong.

2) which companies are you evaluating in 2020, and why?

thanks for sharing.

Oops. Must have been 2007 that I bought ARMH. I was working at Google at the time, and I had an iPhone, and Google had announced they were working on a smartphone. Some of my colleagues were internally beta-testing the G1 at the time, but it was before we all got G1s for end-of-year bonuses. I think December 2007 was the first year we got Android phones instead of (non-performance-based) cash for the holidays.

1) I thought the case for high-density/low-power ARM in the datacenter was pretty clear-cut, but it was an obvious enough market that within a few multi-year design cycles, Intel would close the gap with Atoms and close that window of opportunity forever, especially considering Intel's long-running fabrication-process advantage. In late 2012, one of my friends in the finance industry wanted to be "a fly on the wall" in an email conversation between me and a hedge fund manager he really respected, who had posted on SeekingAlpha that his fund was massively short ARMH. A few email back-and-forths shook my confidence enough to convince me that the server-market window had closed, and that it was possible (though unlikely) that Atom was about to take over the smartphone market. I reduced my position at that time by just enough to guarantee I'd break even if ARMH dropped to zero. In hindsight, I'm still not sure that was the wrong thing to do given what I knew at the time. I had two initial theses: the smartphone thesis had played out and was more or less believed by everyone in the market, and the server thesis was reaching the end of that design window; I was worried that Intel was not far from releasing an ARM-killer Atom, backed up by the arguments of this hedge fund manager. I'm really glad the hedge fund manager didn't spook me enough to close out my position entirely.

2) I'm not a great stock picker. I've had some great calls and had about as many pretty bad calls. I'm now doing software development, but my degree is in Mechanical Engineering, and I took a CPU design course (MIT's 6.004) back in college. I think my edge over the market is realizing when the market is under-appreciating a really good piece of engineering, though I punch well below my weight in the actual business side analysis.

Jim Keller is a superstar engineer, who attracted other superstars, and I don't think the market ever really figured that out. Jim Keller retired now, so there goes about half my investment edge. Though, maybe I'm just deluding myself on the impact really good engineering really has on the business side.

thanks for the thoughtful reply. mistakes are expected in any field; you are too humble and should give yourself a little more credit. :)

could you share what arguments the hedge fund manager made?

do you find any companies interesting now?

I am a programmer. All the software that I run is cross platform, so I expect a smooth transition.

Elixir, my main programming language, will use all the cores in the machine, e.g. parallel compilation. Even if an ARM-based mac has worse per-core performance than Intel, I am ahead.

Apple can easily give me more cores to smooth the transition. Whatever the profit margin was for Intel on the CPU, they can give it to me instead. And they can optimize the total system for better performance, battery life and security.

>Whatever the profit margin was for Intel on the CPU, they can give it to me instead.

HAHAHA you sweet summer child.

Next thing you know, you'll be demanding Apple put back the heatpipe in the MacBook Air for cooling the CPU, instead of hoping there's enough airflow over the improperly seated heatsink (which only has to last until the warranty expires ;) )

Not an unreasonable expectation, if you’ve been paying attention to Apple’s pricing lately

They now sell an iPad, arguably better than any of the non-iPadOS competition, for $350.

The iPhone SE, one of the fastest smartphones on the market (only other iPhones are faster) starts at just $399.

They’ve aggressively been cutting prices, I believe, so that they can expand the reach of the services business. They’re cutting costs to expand their userbase.

I wouldn’t be surprised to see the return of the 12-inch MacBook at the $799 to $899 price point, now that they no longer have to pay Intel a premium for their chips.

> The iPhone SE, one of the fastest smartphones on the market (only other iPhones are faster)

I just replaced my Google Pixel (4 years old) with an SE and it's like... They're not even comparable. I'm sure it makes clear sense as to how so much progress could be made in 4 years, and how it could cost so much less (I paid $649 USD for the Pixel), but it feels a bit magical as a consumer and infrequent phone user. It's a fantastic little device.

I'm still 900% pissed about the MacBook Air they sold me in 2018, and I resent the awful support they gave me for the keyboard (it's virtually unusable already), but as a phone business, they seem hard to beat right now.

I can't do iOS still. Too many restrictions, compromise and missing functionality.

Like, give me a custom launcher option (custom everything options), emulation of classic consoles and handhelds, and easy plug and play access via a PC. If I can't even get one of those, I'm on Android regardless of how shiny I think Apple hardware is.

But if you don't want any of that, it probably works great. Just not for me.

I hear you, I used to feel the same. I was a heavy phone user once and that's why I got the Pixel. A lot of things mattered to me then that just don't now. I could almost get by with a flip phone, but there are still a few things I like about smart phones:

- I like to use my phone to check bathy charts while I'm out free diving. In a water proof case, a phone is a huge asset for finding interesting spots to dive. I don't really want to buy a dedicated device for this. I can just hook it to my float and it's there for exploration/emergencies/location beacon for family/etc.

- When I forget to charge my watch, it's nice to have GPS handy for tracking a run or ride

- It's really nice to be able to do a decent code review from a phone if I'm out and about. I wouldn't do this with critical code or large changes, but it's nice to give someone some extra eyes without committing to sitting at the desk or bringing my computer places

- I have ADHD and having a full-fledged reminder/task box is a god send. I'd be lost without todoist

I could do this all with any modern smart phone, but I went with the best 'bang for the buck' model I could find. I don't think I'll miss anything from Android.

I’m torn between an iPhone 11 and the SE to replace my Pixel 2. I have an XR for work and if it wasn’t so locked down for security I think I’d be using this one more than my Pixel.

I do miss a home screen widget and some apps, but I think overall the experience is really nice, and being able to talk to the iMessage people would be nice.

I built a Hackintosh on a Thinkpad and Handoff, AirDrop and all those connections are way more useful than I thought. It’s been 3 years and Android/Windows seems to be pretty much in the same state minus some minimal improvements that really don’t add that much value.

Integration and unified workflows is where it’s at for me at least.

Likewise, the degree of integration is kind of absurd and luxurious for me. I expected it to be a perk but it's quite a bit better than I thought it would be.

When I picked up my SE and signed in with my Apple ID it gave me access to all of my keychain and tons of other stuff I use frequently like photos and documents. Things I didn't realize I was missing on Android. I installed multiple apps and was automatically signed in! I didn't expect that. Wifi just works everywhere I've been with my MacBook. People near me on iPhones can have all kinds of things exchanged with minimum effort. Contacts, files, wifi passwords.

I know all of this works on Android too (kind of), but since I have a MacBook and an Apple ID I use a lot, my experience with iOS is faaaar better than it was with Android.

I used to go days without touching my Pixel, but this iPhone is so much nicer to use and so much more useful that I carry it regularly and use it quite a bit. I'm really happy with it.

There should be minimal to no difference in BOM cost from the iPad Pro to a 12" MacBook. As a matter of fact, it is probably cheaper for the MacBook without the touch sensors, ProMotion, or camera modules. A 12" MacBook at $799 is very much plausible.

Hey now, that heat sink is going to be the main difference between an Air and iPad before too long.

"they can give it to me instead" yes that is exactly what Apple will do

Apple is no stranger to aggressive pricing when it suits them.

Sure as long as they have that 30% profit rate

got any post-Jobs examples?

The iPhone SE and entry level iPad.

According to the usually accurate Ming Chi Kuo, the entry level iPad will be moving to using a top of the line chip as well, which is a needed change.

>Kuo first says that Apple will launch a new 10.8-inch iPad during the second half of 2020, followed by a new iPad mini that’s between 8.5-inches and 9-inches during the first half of 2021. According to the analyst, the selling points of these iPads will be “the affordable price tag and the adoption of fast chips,” much like the new iPhone SE.


not only rumors, but unpriced rumors?

and the iPhone SE is not a low-margin example https://www.phonearena.com/news/apple-iphone-se-2020-teardow...

so the question still stands: got any post-Jobs example?

I'll let the guys at Android Central judge how much value for the money the iPhone SE delivers:

>I didn't think we would get to a point where an iPhone offers more value than Android phones, particularly in the mid-range segment. But that's exactly what's going on with the iPhone SE.


Feel free to ignore Ming Chi Kuo, but given his track record for accuracy and known good sources inside Apple's supply chain, people pay attention to what he has to say.

More interestingly, will Apple use the same strategy of selling at entry level prices with a custom ARM SOC that outperforms the competition for an ARM Mac as well?

The iPhone SE is absolutely not cheap when you can buy a powerful Android for $200. It's not absurdly priced, but that's about it.

>powerful android for $200

Even the $1500 Android flagships are completely outclassed by that SE.


>Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC.


and GPU:

>On the GPU side of things, Apple has also been hitting it out of the park; the last two GPU generations have brought tremendous efficiency upgrades which also allow for larger performance gains. I really had not expected Apple to make as large strides with the A13’s GPU this year, and the efficiency improvements really surprised me. The differences to Qualcomm’s Adreno architecture are now so big that even the newest Snapdragon 865 peak performance isn’t able to match Apple’s sustained performance figures. It’s no longer that Apple just leads in CPU, they are now also massively leading in GPU.



No, a cheap Porsche is a much better driver's car than an expensive Corvette. Porsche is about the driving experience as much as speed.

that's exactly the point: not everyone purchases based on driving experience, and value is subjective. I don't know why you're being contrarian about the statement.

the point is that nothing of value can come from arguing subjective points.

“Better driver” /= “faster”

what does that have to do with my statement?

It’s exactly your statement.

None of this has anything to do whatsoever with margins for apple.

They offer the same chip in their $1k smartphone as in their $399 phone. People who like iOS but don't want to spend a lot on a phone no longer need to spend $600+ to get it, and that's a lot of people.

You don't need the best CPU and GPU to chat in WhatsApp and take some selfies. Also, you conveniently did not mention RAM, which is more important for a good user experience in typical smartphone apps.

AFAICT, given the lack of Java on iOS, applications tend to use less memory than their Android counterparts, which is what has enabled Apple to put less RAM in their phones for years.

IOS has always done a lot more with less memory than Android, by design. No JVM, lots of memory maximizing OS features.

that's nice and all, but this is a thread about aggressive pricing, and I just linked you a source showing the iPhone SE has big fat margins. So no, you're talking about something else entirely: the iPhone SE is not priced aggressively. At most it's built for the low end to compete strategically, and that has nothing to do with the point being made.

So Apple makes a great product at a great price, and you still complain about their margins? What’s it matter what the margins are if it’s competitive with alternatives?

It’s always kind of strange to me to see all the people who criticize Apple’s profits. Look at the profit margins of other major tech companies, and you’ll find they make similarly large margins.

you're mistaken, I'm not complaining.

this thread is about Apple's margins and people arguing that Apple will trade those margins for lower prices. I really don't know why you're arguing about product value. This is the thread starter: https://news.ycombinator.com/item?id=23598122

what does your point about prices and product quality have to do with this?

The current $329 iPad. The $499 iPhone SE.

Are you sure it does not carry same margin +/- as everything else ?

I'm getting downvoted to oblivion but it's a hill I'm happy to die on - so far everyone has brought up lines envisioned by Jobs or items that actually enjoy high margins.

The point was about aggressive pricing and you keep referring to high margins. You can still be aggressive on price (based on what the market commands) and have high margins. You're making a separate point and failing to notice it.

all envisioned and positioned in the Jobs era, got any post-Jobs example?

and 2020 se is not a low margin example https://www.phonearena.com/news/apple-iphone-se-2020-teardow...

Mac mini, low-end MacBook Air, iPod touch…

The Mac mini in no way can be considered "aggressive pricing". It is well over the price of anything in its range. I wouldn't doubt that Apple's margins on the Mac mini are higher than on some of the portables.

Yeah, but what if there was one that was aggressively priced due to using an ARM processor?

I'm really not sure how the market would respond to it. So long as it played in the Apple ecosystem, I think it's interesting for some things though.

Yeah, but what if there was one that was aggressively priced due to using an ARM processor?

What makes people think that Apple will lower the price because they are now using their ARM processor?

I mean the original Mac mini price point, rather than the current one. It may have been a bit overpriced even then, but it was a convincing entry-level product for those switching from the PC.

and it was Jobs who positioned it at the lower end of pricing anyway, so I don't see how it could ever be thought of as a post-Jobs example

all envisioned and positioned in the Jobs era, got any post-Jobs example?

Slightly off-topic, but does anyone have any details on how the new ARM-powered MacBooks will perform compared to the Intel-powered MacBooks? According to this[1] article, "the new ARM machines (is expected) to outperform their Intel predecessors by 50 to 100 percent". Can anyone shed some insight into how this is possible?

[1] https://www.theverge.com/2020/6/21/21298607/first-arm-mac-ma...

The Macbooks will have a CPU that's comparable or better than the iPad Pro. The iPad Pro already beats the Macbook Pro in some well-known benchmarks, such as Geekbench.


SPEC is a better cross platform benchmark, since it's an industry standard and was designed just for cross platform testing.

>Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we’re not expecting any proper competition for at least another 2-3 years, and Apple isn’t standing still either.

Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.


I think performance per watt is going to be just as important as overall performance. Apple's "little" cores were a new design in the A13, and compare very well against stock ARM cores on a performance per watt basis.

>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.


It will be interesting to see how much performance they can get when they are on a level playing field when it comes to power and cooling constraints.

Geekbench is a horrid benchmark. The amount of info on how they run the tests is extremely limited (mostly which lib they opted to use). The tests are very short, and neither power- nor thermally-limited. I'll link to their methodology[0]:

for instance, from Integer Workloads, LZMA Compression:

>LZMA (Lempel-Ziv-Markov chain algorithm) is a lossless compression algorithm. The algorithm uses a dictionary compression scheme (the dictionary size is variable and can be as large as 4GB). LZMA features a high compression ratio (higher than bzip2). The LZMA workload compresses and decompresses a 450KB HTML ebook using the LZMA compression algorithm. The workload uses the LZMA SDK for the implementation of the core LZMA algorithm.

They compress just 450KB of text as a benchmark - even the dictionary size is greater than the input. If both fit in the CPU caches the results are supreme; if not, they're horrible compared to the former. Results also vastly depend on how fast the CPU can switch to full turbo mode (for servers that doesn't matter at all).

[0]: https://www.geekbench.com/doc/geekbench4-cpu-workloads.pdf

FWIW you've linked Geekbench 4; Geekbench 5 is https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf

I agree these benchmarks have issues, but there's a consistent trend (eg. AnandTech's SPEC results are similar—they use active cooling to avoid the question about thermals) and the most reliable benchmarks, like compilers, aren't outliers.

True that, they did increase the book size, yet it's still in the same ballpark as the dict size: they compress and decompress a 2399KB HTML ebook using the LZMA compression algorithm with a dictionary size of 2048KB.

Running the test with less than 4MB of L3 would hurt; doing it with less than 2MB would decimate the results.
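To make the dictionary-size point concrete, here's a sketch using Python's stdlib `lzma` (the payload and sizes are illustrative stand-ins, not Geekbench's actual ebook): the LZMA2 dictionary bounds the match-search window, so it largely sets the working set the benchmark touches, and whether that fits in L3 dominates the result on real hardware.

```python
import lzma

# Illustrative stand-in for Geekbench's HTML ebook payload:
# ~3 MB of repetitive "ebook-like" text.
data = b"<p>The quick brown fox jumps over the lazy dog.</p>\n" * 60_000

# Compress the same input with a small vs a large LZMA2 dictionary.
# On real hardware you would time these and watch throughput change
# as the dictionary crosses the L3 capacity of the chip under test.
for dict_size in (256 * 1024, 2 * 1024 * 1024):  # 256 KB vs 2 MB
    filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
    out = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
    assert lzma.decompress(out) == data  # round-trip sanity check
    print(f"dict={dict_size}: {len(data)} -> {len(out)} bytes")
```

Timing this loop on machines with different L3 sizes is a cheap way to see how cache-resident the workload really is.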

The Geekbench results are questionable because they do things like give significant weight to SHA2 hashing, which is hardware-accelerated on the iPad Pro but not on the older Intel processors in current Macbooks:


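A toy illustration of the weighting complaint, with made-up numbers (these are not Geekbench's actual subtests or weights, and a Geekbench-style geometric-mean aggregate is assumed): a single hardware-accelerated subtest at a large multiple can noticeably lift the headline score even when every other subtest is at parity.

```python
from math import prod

def geomean_ratio(ratios):
    """Equal-weight geometric mean of per-subtest performance ratios
    (chip A relative to chip B)."""
    w = 1 / len(ratios)
    return prod(r ** w for r in ratios.values())

# Hypothetical ratios: parity everywhere except SHA2, where hardware
# acceleration gives chip A a 4x advantage on that one subtest.
ratios = {"lzma": 1.0, "jpeg": 1.0, "sqlite": 1.0, "sha2": 4.0}
print(geomean_ratio(ratios))  # 4**0.25 ~= 1.41, a 41% headline "lead"
```

The aggregate moves 41% while three of the four subtests show no difference at all, which is why a single accelerated primitive deserves scrutiny in cross-architecture comparisons.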
> Geekbench is also browser-based

That's not true.

> The iPad Pro already beats the Macbook Pro in some well-known benchmarks, such as Geekbench.

We keep hearing this, especially from the Apple-bubble blogs. Yet if these iPads are supposedly so powerful, why doesn't that translate into higher-end work being done on them? All we ever see is digital painting work that isn't pushing them at all, and extremely basic video editing.

iPad-using musicians, synth-nerds and audio hackers are having a BLAST with the high-end iPads, I can tell that much ..

You’re essentially downplaying a huge percentage of people doing real work.

it’s because of iOS and the App Store. No serious dev will port their desktop apps to that.

Also good luck to Apple if they lock down macOS the same way.

>Also good luck to Apple if they lock down macOS the same way.

They are locking down macOS gradually. Their hardware revenue is in long-term decline and they know it. Their push is to lock down the software and take as big a cut as they can.

The rumor is a chip with twice the core count of the A12Z/X, built on TSMC 5nm plus, with active cooling. I'm guessing somewhere around 3x iPad performance.

Probably 3 chips for different models? I could imagine the iPad Pro chip in the Macbook Air, a mid-range chip and a high performance (the one with 3x performance) for the Macbook Pro.

Could make the Macbook Pro the fastest laptop in some benchmarks while also offering more affordable options.

Wow..incredible if true.

This might be a dumb question, but what on earth is the point of x86 if ARM performs better and is also more power efficient?

x86 vs. ARM is irrelevant, that's just the instruction set. There's (almost) no performance or efficiency differences from the instruction set by itself.

Apple's particular ARM CPU core happens to look like a beast vs. Intel's particular x86 CPU core. But those are far from the only implementations. AMD's x86 is, at the moment, also very strong, and incredibly efficient. Lookup Zephyrus G14 reviews for example - the AMD 4800HS in that is crazy fast & efficient vs. comparable Intel laptops.

Similarly, lookup Apple A12/A13 vs. Qualcomm benchmarks - Snapdragon is also ARM, but it kinda sucks vs. Apple's A12/A13.

In terms of power realize that there's quite a bit of overlap here. Apple's big cores on the A13 pull about 5W. That's not really that different from X86 CPU cores at similar frequencies. The quad-core i5 in a macbook air, for example, pulls around 10w sustained. It's just a product design question on what the power draw stops at. More power = more performance, so usually if you can afford the power you spend it. So far there's been relatively little product overlap between x86 & ARM. Where there has been, like the recent ARM server push, power consumption differences tend to immediately go away. Laptops will be a _very_ interesting battle ground here, though, as differences get amplified.

I thought RISC chips were easier to make power-efficient, since the chip design is way simpler than for CISC (like x86)? If Apple had been able to create a competitive x86 chip, they would've done that. But the instruction set is an extremely important part of that.

Please no - that debate is an anachronism.

Apple simply can't create any kind of x86 chip whatsoever (at least if they plan to use it :) ) - they don't have an ISA license and likely won't ever be able buy one, because Intel doesn't sell.

Well, apart from outright buying AMD. Or Intel. Or both. :)

So you're basically saying it's not hard to design a processor (even an x86 one), just that patents prohibit it? I had always thought that at least some part of the Intel/AMD dominance was the complexity of the designs and manufacturing, which even a company like Apple couldn't build up within a decade.

It's worth noting that this benchmark uses the 13" MBP; the 16" MBP has a more powerful CPU.

Apple is the most valuable public company in the world. They have been making their own SoCs for the iPhone for over a decade. Their internal-consumption focus along with the bankroll of the world's most valuable corporation means that they can make the best chips. Intel's market cap is just 1/8th of Apple's.

Intel's chips have basically stagnated for about 5 years; they abused their monopoly and market-leader position by offering marginal improvements each year to protect margins, on top of fab-shrink problems that insiders describe as coming from a culture not dissimilar to Boeing's, just with fewer lives at stake.

In the meantime, competitor foundries have caught up and now exceed Intel's ability to ship in volume. ARM is obviously eyeing the desktop, but more critically the server market, so the latest ARM IP is geared towards higher-performance computing, and quite well, I might add.

State of the art fab + state of the art processor IP = state of the art processor. Not a huge surprise :)

Yet Apple can't even make a working keyboard.

They can, they just compromised on an overengineered but stupid design so that they could shave off half a mm on their laptop thickness.

Butterfly keyboard works fine for me. It's actually my favorite MacBook keyboard so far.

Agreed. I have both the 2019 16" and 2018 15" side by side, I greatly prefer the butterfly keyboard.

They made working keyboards for years and have gone back to doing so. The bit in the middle was a blip.

nah. the "gone back" wasn't far enough. I liked, say, the first-generation MacBook Pro, with concave keys and longer throw (although the creaky case was not as nice as the sculpted aluminum ones)

I had a friend with an old old powerbook and the key shape reminded me of a thinkpad.

They can do it, but the flat short throw keys are too form over function.

I still use a 2014 era Apple wireless keyboard and prefer it over any of the mechanical variants I’ve tried. Key travel distance is a personal metric.

the first macbook pro didn't have THAT long a throw.


Right, I had that Mac and the KB was fine, as is the 2014 wireless model I have, despite neither being long throw. I'm fine with short throw, it's a preference.

The 2016 keyboard fiasco was too far for me though, and even now with my 16" 2019 mac with the "fixed" keyboard, I still prefer the wireless 2014 model, or the KB from my 2014 MBP.

I, for one, prefer a keyboard that is comfortable to type on even if doesn’t have the greatest travel over a significantly thicker laptop.

I’ve had many macs and all the keyboards have been best in class. I just skipped buying a model with the bad keyboard. That was made possible by the fact that their laptops have serious longevity so I could just hold onto machine until they fixed it.

I think when a threat comes along, intel steps up.

they WILL compete, and they've been playing leapfrog with AMD for years.

I'm waiting to see the intel response for the new 32 and 64 core chips from amd.

Maybe. The problem is that I'm guessing Apple wants to move to their own ARM not for more compute, but more power efficiency. Intel has never been able to deliver on the low power side.

> Intel has never been able to deliver on the low power side.

Yes they have. It's why we have 10+ hour thin & light laptops. The quad-core i5 in a macbook air runs at 10W, that's comparable per-core power draw to an A13.

Controlling the CPU will give Apple more controls to tune total-package power, particularly on the GPU / display side of things, but CPU compute efficiency isn't going to be radically different, and that's not going to translate into significant battery life differences, either, since the display, radios, etc... all vastly drown out low-utilization CPU power draw numbers.

> The problem is that I'm guessing Apple wants to move to their own ARM not for more compute, but more power efficiency.

I'm guessing the switch is for those reasons, plus a more important one (my opinion only), control.

If they use their own chips, they're not stuck waiting on Intel (or any other x86 CPU manufacturer) to build CPUs that have the attributes that matter to them the most.

What I really don't understand is why can Apple out-compete others by that much? Sure they're bigger than Intel but for them, Chip design is only one small part. Have they outspent Intel, Qualcomm, AMD & Co so massively?

Why should we expect anything significantly different than a year iteration on what they already use in the iPad Pro?

A laptop (or desktop!) computer has a significantly higher energy budget than the iPad. The SoC Apple uses in those devices could conceivably be clocked higher and/or have more cores than the A12 parts they're currently using in the iPad Pro.

You can't exactly just shove a turbo on an under-powered commuter I4 engine (fuel efficient) and expect it to turn out well.

I wouldn't imagine just upping the vcore and the clock frequency will magically make it faster and compete at the same level as an Intel chip.

I'm deeply skeptical as a developer, you -will- get compiler errors, and it's going to suck for a while.

Well, it entirely depends on how they designed the chip. If they planned ahead, they could have designed the chip for 4GHz in advance and then taken advantage of the higher frequency once they decided to actually release a desktop or laptop with an ARM chip. I don't believe Apple did this, so they will either create a new chip exclusively for desktop/laptop or just keep the low frequency.

> You can't exactly just shove a turbo on an under-powerd commuter I4 engine (fuel efficient) and expect it to turn out well.

Isn’t this the exact concept of the VAG EA888 motor used in the Audi A4 and extended all the way up to the S3?

That said, the rumored chip is an actively cooled 12 core 5nm part which is more like an RS3 motor with extra cores/ cylinders

Way different thermal envelope? iPads are passively cooled.

Isn't that a selling point of ARM? Would you prefer a passively cooled but only slightly slower laptop or a faster laptop with a noisy fan?

I don't think there can be a universally accepted answer to this, as there are so many different user priorities for laptops.

Having said that, I think the top requirement for most laptops is that "it's gotta run what I need it to run on day one".

For a huge amount of light users, ARM Macs are going to satisfy that requirement easily.

Once you move up the food chain to users with more complicated requirements, however, it's less of a slam dunk... for the moment.

This is like saying Qualcomm and Samsung should be able to make top tier desktop chips but the reality is that it doesn't translate so easily.

I guess its all trust that Apple can pull it off and there's no detailed rumors yet?

That's because neither builds even a top-tier mobile ARM chip. Each of their best offerings is about half as fast as Apple's A12X/Z.

Because with the A series they show that they know how to make a good CPU that fits into the thermal restrictions of the chassis.

>fab shrink problems that insiders describe as coming from culture not dissimilar to Boeing; just with less lives at stake.

Since normal processors calculate a lot of important things, I'm wondering if there really are fewer lives at stake. Of course it would be more indirect, but I could imagine that there are many areas where lives are tied to PCs doing the right thing.

E.g. what if a processor bug leads to security issues in hospital infrastructure.

Like Meltdown and Spectre?

Maybe we should wait for it to actually be announced?

Otherwise it's just fantasy sports.

The whole fun of product roadmap speculation is it being a fantasy sport.

(Fun and potentially useful, if you want to plan ahead, play the market, and other causes...)

> Maybe we should wait for it to actually be announced?

This[1] claims ARM Macs will be unveiled today (Monday) at WWDC. I guess we'll see soon enough.


1. https://www.theverge.com/2020/6/21/21298607/first-arm-mac-ma...

That's my whole point. All the discussion up til now has been based on rumours and speculation. There's only so much we can discuss around that.

Until we actually have details of Apple's implementations, there's no realistic way we can discuss

> how the new ARM-powered MacBooks will perform compared to the Intel-powered MacBooks

Apple and the Kremlin share a few things, one of which is that, due to the nature of how official announcements are made, a class of outside analysts has emerged who specialize in forecasting those announcements, based both on outside trends and on close observation of the company and any clues it might forget to filter.

I'll have to dig around and find the relevant Phoronix benchmarks for you later. But I think it's somewhat of a hive-mind assumption that Intel has generally higher performance all round. A majority of benchmarks show that Intel is generally better in single-threaded benchmarks, whereas ARM CPUs are better in multicore, multi-threaded applications, with a lower TDP and leagues better efficiency. Heck, the Surface Pro X that Microsoft brought out has performance on par with an Intel i5 with at least 1.5x the battery life of its Intel counterpart, without needing an internal fan at all. I wouldn't be surprised if Apple makes the switch to ARM for all but their highest-spec MacBook Pros.

I will immediately believe them if "outperforming" is interpreted as "better performance per watt".

However, I can also see their chips outperforming Intel in sustained workloads. Apple has a history of crippling Intel performance by allowing the CPU to only run at full speed for a short while, quickly letting it ramp up the temperature before clocking down aggressively. Some of that is likely because of their power delivery system, some of it is because they choose to limit their cooling capacity to make their laptops make less noise or with fewer "unattractive" airholes. Either way, put the same chip in a Macbook and in a Windows laptop and the Windows laptop will run hotter but with better performance.

However, with a more efficient chip, Apple could allow their CPU to run at a higher frequency than Intel for longer, benefiting sustained workloads despite the IPC and frequency being lower on paper. This is especially important for devices like the Macbook Air, where they didn't even bother to connect a heatpipe to the CPU and aggressively limit the performance of their Intel chip.

I'd even consider the conspiracy theory that Apple has been limiting their mobile CPU performance for the last few years to allow for a smoother transition to ARM. If they can outperform their current lineup even slightly, they can claim to have beaten Intel and then make their cooling solution more efficient again.

Everything you said is echoed in this LTT video from yesterday[0].

It does appear to be a very poor thermal design, but you can see that most of the issues appear to be from trying to make the user not feel the heat.

It feels a bit icky to leave performance on the table like that, since you paid for a CPU and are getting only marginal performance from it. But I remember Jobs giving a talk about how "Computers are tools, and people don't care if their tools give them all it can, they care if they can get their work done with them well"

That's not a defense of a shitty cooling solution, but it is a defense of why they power-limit the chips. When I got my first (and only) Macbook in 2011, it had more than twice the real battery life of any machine on the market. That meant that I was _actually_ untethered. That's what I cared about at the time, much more than how much CPU I was getting.

[0]: https://www.youtube.com/watch?v=MlOPPuNv4Ec

> Either way, put the same chip in a Macbook and in a Windows laptop and the Windows laptop will run hotter but with better performance.

The Windows laptop can't possibly run hotter than a Macbook, all of which reach the throttle temperature (~100 °C) almost instantly.

I actually agree to the artificially limited performance, artificially worsened thermals theory. The Macbook Air design is extremely fishy.

The laptop can be hotter if it dissipates more heat to the case. The reason Macbooks reach 100C fast is because they don't give off a lot of heat so that the case doesn't burn your skin.

I think all of that is because of "the triangle".

the vertices would be power, heat and perf.

Apple just hasn't shipped an ARM chip with high power and "excessive" cooling.

The article doesn't really say in what respect they would outperform. The sentence following that claim mentions "50% more efficient", which seems likely, because ARM chips usually use <10W of power while Intel U-series chips use 15-25W.

With geekbench scores, the A13 is outperforming some mobile i7 processors: https://gadgetversus.com/processor/intel-core-i7-8565u-vs-ap....

With a new A14 chip combined with intel stagnation, it wouldn't be too crazy to see a 50-100% increase, especially if you looked at multi-core scores.

While I certainly agree A13 is a fast chip, the more I've compared Geekbench scores the less convinced I am that the results can be compared between platforms.

Also, the first i7 chip was released in 2008. (The chip you refer to appears to be modern though, from 2018.)

To prevent any misunderstandings, I agree A13 has performance comparable to a modern laptop. I just don't think Geekbench is a right tool to measure that difference.

> Geekbench 4 - Multi-core & single core score - iOS & Android 64-bit

What does that even mean? The Intel platform is running Android?

It's entirely possible they installed Android on the Intel platform. Unlikely, but possible.

I had an Intel x86 Android phone from Asus a while ago, complete with Intel Inside logo on the back case. Not sure if they still sell x86 phones these days though.


I assume the tests are same regardless of platform.

They aren't, though. Compare Windows vs. Linux Geekbench scores on identical hardware; they are extremely different:

Linux: https://browser.geekbench.com/v5/cpu/2638097 Windows: https://browser.geekbench.com/v5/cpu/2635048

Geekbench is a terrible benchmark.

You cherry picked a terrible example. Here are some more Windows examples with the same chip: https://browser.geekbench.com/v5/cpu/2628613 https://browser.geekbench.com/v5/cpu/2628613

In my own personal experience, running Geekbench on many machines, Geekbench scores are pretty close running between Windows and Linux on the same machine. At least for compiling code (my main use for powerful machines), their compilation benchmark numbers also fairly closely match compilation times I observe.

Your links are identical.

50-100% seems a little high. I think Apple has picked the low hanging fruit. This kind of performance increase is only possible in multicore workloads. Single core would be more like 20-30%. But maybe they can increase clock speeds with the new 5nm node.

They’ll be working with a more forgiving thermal envelope than in iOS devices too, though.

Real world? AFAIK, no, not yet. The issue is that traditional benchmarks don't really tell the whole story. IPC isn't directly comparable because ARM is ostensibly RISC (this is debatable, I'm aware) and Intel is CISC (also debatable). So until things like Adobe Premiere or Media Encoder are ported over (if they are), or similar real-world workloads like Blender, it's going to be really hard to compare. Even then, things like design power matter. A 15W Intel chip is likely to outperform a 5W ARM chip just because it has more thermal headroom. It doesn't mean that either is faster (IPC per watt goes up the more you downclock, for a variety of reasons).

If this sounds like I'm saying it's apples (no pun intended) to oranges... that's pretty much the case. Apple is likely not going to produce an ARM chip directly comparable to an Intel chip, e.g. equal cache, memory bandwidth, TDP (using Intel's calculation method), PCIe lanes, etc. Mostly because it doesn't make sense to do so: Intel's chips are designed to be a component of a system, while ARM chips are largely designed to be an SoC.

They need to prove they can do a few things before I'm worried.

* They need to ramp the clockspeeds -- much easier to say than do.

* They need to include 256 or 512-bit SIMD operations without breaking the power budget.

* They need to design a consistent and fast RAM controller (and likely need to add extra channels)

* They need to integrate power-hungry USB4/thunderbolt into their system.

* They need to add a dozen extra PCIe 3 or 4 lanes without ballooning the power consumption.

* They need an interconnect (like QuickPath or HyperTransport/Infinity Fabric) so all these things can communicate.

These are the hard issues that Intel, AMD, IBM, etc all must deal with and form a much bigger piece of the puzzle than the CPU cores themselves when dealing with high-performance systems.

I'd be very surprised by that figure unless it's from multicore benchmarks. Many MacBooks are still around the 4-core/8-thread mark, so there's a lot of room to improve there with more cores.

Why would you be surprised? Apple's chip design team is the best in the entire industry, funded by a virtually infinite bankroll and with a razor focus on developing chips only for Apple's products, so everything from microcode to drivers can be intimately optimized, like a gaming console.

The access to iOS and macOS means that they can profile every single app everyone runs and improve real-world performance, and they do.

If you are a company that supplies anything (software or hardware) to Apple or Amazon, and you are operating on >20% gross margins, Apple and Amazon will destroy you, and destroy your margins. You have to continuously innovate and can't just relax and chill and collect your margins, like Intel has been doing for the past 5 years, milking their advantage that has evaporated.

Other than the modem, I don't think there's a component on the latest iPhones that has more than 20% margins for the supplier. Apple is ruthless. So is Amazon, when it comes to fulfilment.

Just because you have lots of money doesn't mean the laws of physics are suddenly out to lunch.

Apple is using the same fabrication process as everyone else in a mature industry. There simply isn't a 2x improvement possible unless everyone else was staggeringly incompetent. Imagine Apple decided to build their own car, and made their own electric motors. Would you believe claims that their motors are 2x as efficient?

Agreed. A lot of people seem to think x86 has some inherent bottleneck that switching to ARM will magically bypass.

Single core performance is currently doubling every five years or so. We’re not about to see it double in the next 24 hours.

I've never understood this argument. For some reason there is this RISC vs CISC war debate, but it's not actually based on the reality of modern chip design. The idea is that the decoder is consuming too much power and this design flaw cannot be fixed; simplifying the decoder reduces the power consumption and therefore allows higher clock frequencies. But when we look at the clock frequencies of ARM and x86 chips we consistently see that x86 chips run at higher frequencies (roughly 4 GHz for x86 and 2.7 GHz for ARM). If there is a difference between ARM and x86 chips then it's not in the decoder. The difference must lie in the microarchitecture of the chips, and therefore RISC vs CISC is no longer relevant because the ISA is by definition not part of the microarchitecture.

Apple's ARM chips probably perform much better than reference ARM chips because they are much closer to the micro architecture that Intel uses. But Apple's chips still consume less power than Intel chips used in laptops. The secret sauce is probably the fact that the SoC has cores with different performance and power profiles. Big cores are used for peak performance bursts and lots of small cores for energy efficiency while the device is waiting for user input.

I think Geekbench is measuring peak performance and not sustained performance, but the numbers are probably correct. Making a representative comparison between mobile devices and desktops isn't possible because of their different thermal profiles, but it's highly likely that Apple will go for a passively cooled design if they can.

x64 chips can run at 4-5 GHz because these are desktop parts that can drop 10+ W sustained on a single core if need be. Power-frequency scaling tells us that of course this is very inefficient (perf/W), but efficiency isn't all, sometimes you just gotta go fast.

> double in the next 24 hours

there is a time-honored way to do exactly that.

Constrain the comparison.

"Double the performance" (of $899 to $901 laptops with 2560x1440 resolution)

"Twice as fast" (as all laptops with thunderbolt 3 and no sd card reader)

That's a fair comparison though, if you're a Mac user and this machine is twice as fast and gets 5 more hours on a charge, you'll probably be persuaded to upgrade. If you aren't a Mac user I don't think they need to convince you now, they need to convince the developers and artists to stay on the platform.

sooo. should i NOT invest in a 2K MBP 16 in to do mostly heavy multiple web apps ?

I'd say just buy it if you need it now. Web developers sometimes need to use Docker and VMs for various development tasks and testing. Sure you can run Docker on ARM, but not all Docker images have an ARM version available. Running Linux and Windows VMs will probably be a challenge as well, at least until we know how well x86 emulation runs on the new ARM laptops.

That's one use-case where they absolutely shine, in my experience.

For one, because we had evidence of this for a few years now - that Apple's Arm chips can beat Intel's chips. It was mostly ignored because "but that's just peak performance - not sustainable in a phone/tablet envelope."

Yeah, until they put the same chip, or better in a laptop envelope. Then what for Intel?

This was from 2016. People being surprised by this haven't been paying attention to where this was going:


Also, as I've said many times before, Intel's attempts to be as misleading as possible about the performance of its chips, such as renaming Intel Atom chips to "Intel Celeron/Pentium" and rebranding the much weaker Core-M/Y-series (the same chips used in the MacBook Air for the past few years) as "Core i5", are going to bite them hard in the ass when Apple's chips show that they can beat an "Intel Core i5". And Intel kept making this worse with each generation.


I'm suspicious but for the pain it's going to cause it better have some real benefits

Out of curiosity, what pain are you expecting?

Maybe I'm spoiled because my development is mostly java, python, Go, docker, etc, all of which should have no runtime problem.

In terms of IDE, It looks like vscode is ported which is great.

I see two programs in my workflow: intellij and iterm2. I can live with a vanilla terminal but I'll need intellij. It looks like there's some[1] discussion around support.

1. https://github.com/JetBrains/intellij-community/commit/db531...

How do you expect docker to run fine when it doesn't even run well on x86 OSX? There are constant performance issues (https://github.com/docker/for-mac/issues/3499) because docker-on-mac is basically a VM running Linux

Docker is a linux tech tied to features in Linux kernel. You don't have cgroups and the like in XNU (OSX kernel) to be able to run docker natively on OSX.

It's OSX's fault then. Windows can run containers natively now as long as the base image is windows too. No reason why OSX couldn't extend support either.

I would suggest that the VAST majority of Docker usage on Windows is not “windows containers” but Linux-based containers in a VM, be it WSL or Docker for Windows. The promise of Docker is that a container image built on machine A will run on machine B. That relies on them running the same operating system in some capacity.

I don't expect it to be too painful. Most common software has already been built for ARM on Linux. A lot of things have even been built for Apple's ARM chips running iOS, though the App Store restrictions have limited the usefulness of these builds. Unmaintained closed-source software will stop working, but most of these applications were already killed by the removal of 32-bit x86 support in Catalina.

The biggest potential issue will be how people get hardware for doing the builds. Hopefully Apple will provide a good way of cross-compiling or a new machine that is suitable for datacenter deployment (ARM based mac mini?). We'll have to wait for the WWDC announcement to find out.

You mention Docker, which most definitely will be impacted by this change. Docker on Mac runs off a virtualized x86 Linux instance. Docker only announced ARM support last year I think; I can't imagine Docker for ARM would be anywhere near as fully supported as it is for x86, and you certainly won't be able to build x86 images to push to x86 servers without using some sort of emulation layer, which will be horribly slow compared to current macOS x86 docker.

Currently running docker using WSL2 on an ARM laptop. So it's pretty much native Linux. As an anecdote, I can tell you that it's just that much more stable than with my Mac or with a traditional Windows laptop. The only downside is that there are a few docker containers that you have to hunt around for to get the ARM64 version. The only way I see docker performance being better or more stable than this is to switch to a Linux machine entirely.

Yes - that's what I was alluding to in my original reply, re: lack of support, lack of images. Docker running on ARM might be great, but one of the original purposes of docker was to build and run the exact same images that would run on your servers, which will no longer be the case unless those servers are ARM based as well.

What laptop ?

Surface Pro X

I use ARM64 Docker on my Raspberry Pi 4s, it’s excellent and loads of images are already available.

But for instance emulating an ARM Raspberry Pi on a fairly powerful i7 through qemu is... an exercise in patience, to say the least. From my experience compiling the same codebase on the host system and the emulated Pi, it is almost 10x slower. So I'm not holding out hope for ARM having fast x86 emulation.

Use qemu user mode with a chroot, it's much faster.

I'm using https://github.com/multiarch/qemu-user-static - that's what it is, right ?

Yes, indeed.

But instead of docker, just debootstrap the amd64 release under a directory, copy qemu-x86_64-static to $DIR/usr/bin and chroot into that directory. Docker is a bit of a mess.

Before chrooting, bind-mount:

/dev to $DIR/dev

/proc to $DIR/proc

/dev/pts to $DIR/dev/pts

/sys to $DIR/sys

/home to $DIR/home

copy /etc/resolv.conf to $DIR/etc/resolv.conf

Then chroot and login (su -l yourusername). That way you could try running a lot of software.
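
Putting those steps together, here's a sketch of the whole setup as a script (the target directory, suite, and the arm64 direction are placeholders; adjust for whichever foreign architecture you're emulating, and note it assumes the debootstrap and qemu-user-static packages are installed):

```shell
#!/usr/bin/env bash
# Sketch of the debootstrap + qemu-user-static chroot setup described above.
# Nothing runs until you call the function yourself (as root).
setup_foreign_chroot() {
  local dir="$1"                            # e.g. /srv/arm64-chroot
  debootstrap --arch=arm64 --foreign stable "$dir"
  # Static qemu binary so the host kernel can run the foreign-arch binaries.
  cp /usr/bin/qemu-aarch64-static "$dir/usr/bin/"
  chroot "$dir" /debootstrap/debootstrap --second-stage
  # Bind mounts so the chroot sees the host's kernel interfaces.
  for fs in dev dev/pts proc sys home; do
    mount --bind "/$fs" "$dir/$fs"
  done
  cp /etc/resolv.conf "$dir/etc/resolv.conf"
}
# Usage (as root): setup_foreign_chroot /srv/arm64-chroot
#                  chroot /srv/arm64-chroot su -l yourusername
```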

okay, will try to see what that gives. I'm curious as to why it would be slower (when taking into account that an equivalent docker container, say running debian stable, but with the same architecture as the host, runs in more-or-less the same time as building directly with my host's GCC)

> but with the same architecture as the host

debootstrap an ARM rootfs, please.

Yeah but that's Linux. Docker works excellent on Linux distros

Yep - I make no claims as to how well Docker works on an ARM Mac.

You are probably going to have a great deal of pain with Docker. If you use Docker for Mac, it basically uses Hypervisor.framework to run a slim x86_64 Linux virtual machine. Rumors suggest that Hypervisor.framework won't emulate x86_64 in the next version of macOS. We'll find out tomorrow. So you'll either set up docker-machine with a x86_64 based remote server to run your docker commands, or you'll just use ARM versions of the various docker images. I don't know where you get your images from, but many of them won't be available for ARM.

There's a surprisingly large number of ARM docker images. You can partially thank the Raspberry Pi for that.

Is that ARM v6/v7 32-bit or ARM v8 64-bit? Most Raspberry Pis run a 32-bit userland (armhf).

Plenty of ARM64-architected images available. Lots of us using 64-bit Ubuntu.
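
If you're unsure whether a particular image covers your platform, the registry manifest tells you. A sketch (assumes a docker CLI with the `docker manifest` subcommand available; the image name in the usage line is just an example):

```shell
#!/usr/bin/env bash
# Sketch: list the architectures an image on a registry is published for,
# by grepping the platform entries out of its manifest list.
image_archs() {
  docker manifest inspect "$1" | grep '"architecture"' | sort -u
}
# Usage: image_archs ubuntu:22.04
# (multi-arch images list entries such as amd64, arm64, arm/v7, ...)
```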

Yeah, but then you have the consistency issue which Docker aimed to get rid of. Your local setup won't replicate what runs in production. Enterprise infra isn't going to switch to ARM in the next 1-2 years as Apple hopes to.

I bet that all popular docker images will be available for ARM very soon. It's a matter of recompiling.
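
For maintainers it's often close to a single buildx invocation. A sketch (the image name is a placeholder, and it assumes buildx plus registered QEMU binfmt handlers; nothing runs until you call the function):

```shell
#!/usr/bin/env bash
# Sketch: build and push one tag whose manifest covers both x86 and ARM,
# so `docker pull` resolves the right architecture automatically.
buildx_multiarch() {
  local image="$1"                          # e.g. myorg/myapp:latest
  docker buildx create --name multiarch --use 2>/dev/null || true
  docker buildx build --platform linux/amd64,linux/arm64 \
    --tag "$image" --push .
}
# Usage (from a directory with a Dockerfile): buildx_multiarch myorg/myapp:latest
```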

What are the largest benefit for iterm2? I use it but honestly I don't see much improvement over Terminal.

Probably tmux integration, but the additional features don't justify the speed decrease versus Terminal, for me at least.

A trick I recently learned is to add iTerm 2 to the “Developer Tools” section of the Security prefpane - it makes it fly. Similarly if you’ve ported your config through several versions of iTerm, ensure that GPU-accelerated rendering is on.

It’s no stretch to say that iTerm 2 is a major one of the reasons I cannot stand desktop Linux (or Windows, even with the new terminal there).

For me, it's the little things such as ability to undo accidental terminal tab closing with cmd+z, restoring sessions after restart so I don't need to reopen a bunch of tabs and cd manually, ability to show timestamp for each command run, etc.

VS Code is not yet ported to ARM, there are bootleg builds but they do not work with many extensions, for example Remote SSH relies on a project that lies below the VS Code project proper, and that (apparently) is a fair bit of work to bring to ARM (though it's being worked on) [0]

[0] https://github.com/microsoft/vscode/issues/6442

You are wrong. I currently use VS Code insiders daily on ARM with all my extensions working.


I second this; of course VS Code needed Electron, which needs Node already ported to ARM https://github.com/nodejs/node/issues/25998#issuecomment-637... Also of note, Edge is available on ARM. Very happy Surface Pro X user here.

I second Edge working fantastic on ARM! I have a Galaxy Book S.

Really? That is GREAT NEWS!!!

EDIT: It works great, and it has remote for ssh and wsl!!! This made my day!

Out of what you've listed, docker is probably the biggest issue -- there is support for multiarch containers, but migration has some pain points depending on your setup. Also, docker will have to be made to work on the ARM build of macOS, assuming virtualization is supported and speedy.

What about legacy software and software provided by a vendor? Let's say I want to run Fusion 360 and Altium on my hypothetical ARM Mac. Can I do that today?

Outdated information: Windows on ARM requires you to build UWP apps which is more work than just a recompile.
