ARM Mac Impact on Intel (mondaynote.com)
324 points by robin_reala on June 22, 2020 | 525 comments



Intel’s real money is in high-end CPUs sold to prosperous Cloud operators, not in supplying lower-end chips to cost-cutting laptop makers.

I keep searching for "Graviton" in these thinkpieces. I keep getting "no results found."

Mac ARM laptops mean cloud ARM VMs.

And Amazon's Graviton2 VMs are best in class for price-performance. As Anandtech said:

If you’re an EC2 customer today, and unless you’re tied to x86 for whatever reason, you’d be stupid not to switch over to Graviton2 instances once they become available, as the cost savings will be significant.

https://www.anandtech.com/show/15578/cloud-clash-amazon-grav...


While Graviton is impressive and probably an indication of things to come, you can't outright use "Amazon rents them cheaper to me" as an indication of the price performance of the chips themselves.

Amazon is exactly the kind of company that would take 50% margin on their x86 servers and 0% margin on their Graviton servers in order to engineer a long term shift that's in their favor - the end of x86 server duopoly (or monopoly depending on how the wind is blowing).


I don't feel as sure about this; there's very little evidence that Amazon is putting a "50% margin", or any significant margin, on x86 servers. Sure, they're more expensive than, just for comparison's sake, Linode or DigitalOcean, but EC2 instances are also roughly the same cost per core and per GB as Azure compute instances, which is a far more accurate comparison.

Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of low-latency fiber run between its AZs and out to the wider internet. Networking is an exponential problem; AWS doesn't overcharge for egress, they simply haven't invested in building a solution that sacrifices quality for cost.

Amazon really does not have a history of throwing huge margins on raw compute resources. What Amazon does is build valuable software and services around those raw resources, then put huge margins on those products. EC2 and S3 are likely very close to 0% margin; but DynamoDB, EFS, Lambda, etc are much higher margin. I've found AWS Transfer for SFTP [1] to be the most egregious and actually exploitative example of this; it effectively puts an SFTP gateway in front of an S3 bucket, and they'll charge you $216/month + egress AND ingress at ~150% the standard egress rate for that benefit (SFTP layers additional egress charges on top of the standard AWS transfer rates).

[1] https://aws.amazon.com/aws-transfer-family/pricing/


A 32-core EPYC with 256 GB will cost you $176/mo at Hetzner, and $2,009/mo on EC2.

Obviously it's on demand pricing and the hardware isn't quite the same, with Hetzner using individual servers with 1P chips. Amazon also has 10gbps networking.

But still, zero margin? Let's call it a factor of two to bridge the monthly and on demand gap - does it really cost Amazon five times as much to bring you each core of Zen 2 even with all their scale?
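
Spelling out that factor with the numbers above (rough arithmetic only):

    $2,009 / $176 ≈ 11.4
    11.4 / 2 (the on-demand vs. monthly gap) ≈ 5.7, i.e. roughly five times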

I don't think Amazon overcharges for what they provide, but I bet their gross margins even on the vanilla offerings are pretty good, as are those of Google Cloud and Azure.


Very few of AWS's costs are in the hardware. Nearly all of Hetzner's costs are in the hardware. That's why AWS, and Azure, and GCP are so much more expensive.

Margin is a really weird statistic to calculate in the "cloud". Sure, you could just amortize the cost of the silicon across N months and say "their margin is huge", but realistically AWS has far more complexity: the cost of the datacenter; the cost of being able to spin up one of these 32-core EPYC servers in any one of six availability zones within a region and get zero-cost, terabit-scale networking between them; the cost of each of those availability zones not even being one building but multiple near-located buildings; the cost of your instance storage not even being physically attached to the same hardware as your VM (can you imagine the complexity of this? They have dedicated EBS machines and dedicated EC2 machines, and yet EBS still exhibits near-SSD performance); the cost of VPC and its tremendous capability to model basically any on-prem private network at "no cost" (but there's always a cost). That's all part of what you're paying for when you pay for cores. It's the stuff that everyone uses, but it's hard to quantify, so people just say "jeeze, an EPYC chip should be way cheaper than this".

And, again, if all you want is a 32 core EPYC server in your basement, then buy a 32 core EPYC server and put it in your basement. But, my suspicion is not that a 32 core EPYC server on AWS makes zero margin; it's that, if the only service AWS ran was EC2, priced as it is today, they'd be making far less profit than when that calculation includes all of their managed services. EC2 is not the critical component of AWS's revenue model.


Margin calculations include all that. And I suspect most of AWS's marginal cost is _still_ hardware.

The marginal cost of VPC is basically 0. Otherwise they couldn't sell tiny ec2 instances. The only cost differences between t3.micro and their giant ec2 instances are (a) hardware and (b) power.


> The marginal cost of VPC is basically 0. Otherwise they couldn't sell tiny ec2 instances.

That's not strictly true. They could recoup costs on the more expensive EC2 instances.

I have no idea what the actual split is, but the existence of cheap instances doesn't mean much when Amazon has shown itself willing to be a loss-leader.


So what you're saying is kind of the opposite of a marginal cost.

If they are recouping their costs, it's a capital expense, and works differently than a marginal cost. AWS's networking was extremely expensive to _build_ but it's not marginally more expensive to _operate_ for each new customer. Servers are relatively cheap to purchase, but as you add customers the cost increases with them.

If they're selling cheap instances at a marginal loss, that would be very surprising and go against everything I know about the costs of building out datacenters and networks.


>Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of low-latency fiber run between its AZs and out to the wider internet.

I'm skeptical about this claim. Most cloud providers try to justify their exorbitant bandwidth costs by saying it's "premium", but can't provide any objective metrics on why it's better than low-cost providers such as Hetzner/OVH/Scaleway. Moreover, even if it were more "premium", I suspect that most users won't notice the difference between AWS's "premium" bandwidth and a low-cost provider's cheap bandwidth. I think the real goal of AWS's high bandwidth cost is to encourage lock-in. After all, even if Azure is cheaper than AWS by 10%, if it costs many times that for you to migrate everything over to Azure, you'll stick with AWS. Similarly, it encourages companies to go all-in on AWS, because if all of your cloud is in AWS, you don't need to pay bandwidth costs for shuffling data between your servers.


Right, but that's not what I'm saying. Whether or not the added network quality offers a tangible benefit to most customers isn't relevant to how it is priced. You, as the customer, need to make that call.

The reality is, their networks are fundamentally and significantly higher quality, which makes them far more expensive. But, maybe most people don't need higher quality networks, and should not be paying the AWS cost.


>You, as the customer, need to make that call.

But the problem is that you can't. You simply can't use aws/azure/gcp with cheap bandwidth. If you want to use them at all, you have to use their "premium" bandwidth service.


> Amazon really does not have a history of throwing huge margins on raw compute resources

What? My $2000 Titan V GPU and $10 Raspberry Pi both paid for themselves vs EC2 inside of a month.

Many of AWS's managed services burn egregious amounts of EC2, either by mandating an excessively large central control instance or by mandating one-instance-per-(small organizational unit). The SFTP example you list is completely typical. I've long assumed AWS had an incentive structure set up to make this happen.

"We're practically selling it at cost, honest!" sounds like sales talk.


Yes. Look at the requirements for the EKS control plane for another example. It has to be HA and able to manage a massive cluster, no matter how many worker boxes you plan to use.*

*Unless things have changed in the last year or so since I looked


It is currently 10 cents an hour flat rate for the control plane. That actually saved us money. Even if you weren't going to run in HA, that is still the cost of a smallish machine to run a single master. I am not sure anyone running K8s in production would consider that too high. If you are running at a scale where $72 a month is expensive or don't want to run HA, you might not want to be running managed Kubernetes. I'd just bootstrap a single node then myself.


You said it yourself: production at scale is the only place where the current pricing makes sense. That's fine, but it means I'm not going to be using Amazon k8s for most of my workloads, both k8s and non-k8s.


If you cut out the Kubernetes hype, you could simply use AWS' own container orchestration solution (ECS), whose control plane is free of charge to use.


> Many people complain about AWS' networking costs, but I also suspect these are generally at-cost.

This seems to be demonstrably false, given that Amazon Lightsail exists. Along with some compute and storage resources:

$3.50 gets you 1TB egress ($0.0035/GB)

$5 gets you 2TB egress ($0.0025/GB)

Now, it's certainly possible that Amazon is taking a loss on this product. It's also possible that they have data showing that these types of users don't use more than a few percent of their allocated egress. But I suspect that they are actually more than capable of turning a profit at those rates.


And I mean, if you just compare the price of Amazon's egress to that of a VPS at Hetzner or OVH, to say nothing of the cheaper ones, you can be sure that they are making margins of over 200% on it for EC2. There's a $4 VPS on OVH with unlimited egress at 100Mbps.

$4!

That's a theoretical maximum of about 1 Tb (terabit) of egress every three hours. So for the cost of 3 hours of AWS egress you can buy an entire VPS with a month of egress, for cheaper. It's insane just how much cheaper it really is.
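
Rough numbers, assuming AWS's headline ~$0.09/GB egress rate:

    100 Mbps x 3 h = 10^8 bit/s x 10,800 s ≈ 1.08 Tb ≈ 135 GB
    135 GB x $0.09/GB ≈ $12 of AWS egress, vs. the $4/month VPS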


Indeed it is. But rest assured that every provider will shut your server down if it's running at full bandwidth all day.


Sure. But you just need to run your server at full bandwidth for one hour every day to use up 10 times more bandwidth than even lightsail would give you for the price of the server.

I assure you that you can run these servers for one hour a day and no one will bat an eye. I know people running seedboxes at full speed for 10 hours a day or so without an issue - that's 100 times the bandwidth of even Lightsail for the same price.


No, Hetzner charges you <$2/TB. For that price, they'll be happy to route as much traffic as you want. I've never heard of them complaining in these cases.

They have worse peering than AWS but the difference in cost to them is certainly not 100x or more.


Obviously the comment is in reply to having unlimited traffic for a flat fee and not for a price per TB.


This doesn’t add up. Amazon is profitable mostly because of AWS which in turn is profitable mostly due to EC2 and S3.

Clearly they have margins, and fat ones at that.


Yes, by many estimates some of the highest in the industry

"When factoring in heavy depreciation, AWS has EBITDA margins of around 50%."

https://seekingalpha.com/news/2453156-amazon-plus-15-percent...


Building managed applications is where the money is at for AWS, for sure; ElastiCache is another good example. The beauty is their managed services are great and worry-free.

Shameless plug - partly because of the high cost of SFTP in AWS, the lack of FTP (understandable), and a bunch of people wanting the same in Azure / GCS, we started https://docevent.io which has proved quite popular.


Long term, Amazon is also the exact kind of company that would start making "modifications" to Graviton requiring you to purchase/use/license a special compiler to run ;)


Can you point to a time Amazon did something like that? Not saying they won't, just that any company can do it; it's more likely if they've done it in the past.


Why would they want to shift people to ARM based instances if they weren't more efficient?


Even if that were the case, it is still a saving for the end user and it is a threat to Intel.


>Intel’s real money is in high-end CPUs sold to prosperous Cloud operators, not in supplying lower-end chips to cost-cutting laptop makers.

And to make another point: Apple isn't a lower-end, cost-cutting laptop maker.

Apple sells ~20M Macs per year. Intel ships roughly ~220M PC CPUs per year (I think recent years saw the number trending back towards 250M). That is close to 10%, not an insignificant number. Apple only uses expensive Intel CPUs. Lots of surveys show most $1000+ PCs are Apple's, while most business desktops and laptops use comparatively cheap Intel CPUs, i.e. I would not be surprised if the median price of Apple's Intel CPU purchases is at least 2x the total market median, if not more. In terms of revenue that is 20% of Intel's consumer segment.

They used to charge a premium for being the leading-edge fab. You couldn't get silicon that was better than Intel's; you were basically paying those premiums for having the best. Then Intel's fab went from 2 years ahead to 2 years behind (a 4-year swing), all while charging the same price. And since Intel wants to keep its margin, it is not much of a surprise that customers (Apple and Amazon) look for alternatives.

Here is another piece on Amazon Graviton 2.

https://threader.app/thread/1274020102573158402

Maybe I should submit it to HN.


Yeah, last I looked at it, Apple's average Mac sales price was $1,300; HP/Dell/et al. were under $500.

Apple owns the premium PC market; its Mac division is not only the most profitable PC company in the world, it might be more profitable than all the others combined.

It’s share of Intels most expensive desktop CPUs is much higher than its raw market share.


We recently evaluated using Graviton2 over x86 for our Node.js app on AWS. There was enough friction (some native packages that didn't work out of the box, some key third-party dependencies missing completely, etc.) that it wasn't worth the effort in the end, even considering the savings, as we'd likely keep having these issues pop up and have to debug them remotely on Graviton.

If macOS goes ARM and there's a sizable population of developers testing and fixing these issues constantly, the math changes in favor of Graviton and it would make it a no-brainer to pick the cheaper alternative once everything "just works".


Unfortunately you might have witnessed the Achilles' heel of the ARM ecosystem: certain software/binaries are not available yet. Most open-source code can be compiled for ARM without much hassle [1][2][3], but some of it requires explicit changes to port certain x86-specific instructions to ARM (see the sketch below the links).

I've been shifting my work to ARM-based machines for some years now, mainly to reduce power consumption. One of my current projects, a problem validation platform[4] (Go), has been running nicely on an ARM server (Cavium ThunderX SoCs) on Scaleway; but weirdly, Scaleway decided to quit ARM servers[5], citing hardware issues which not many of their ARM users seem to have faced. The only ARM-specific issue I faced with Scaleway was that a reboot required a power-off.

[1]cudf: https://gist.github.com/heavyinfo/da3de9b188d41570f4e988ceb5...

[2]Ray: https://gist.github.com/heavyinfo/aa0bf2feb02aedb3b38eef203b...

[3]Apache Arrow: https://gist.github.com/heavyinfo/04e1326bb9bed9cecb19c2d603...

[4]needgap: https://needgap.com

[5]Scaleway ditched ARM: https://news.ycombinator.com/item?id=22865925
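
For anyone curious what those explicit changes tend to look like, here's a minimal, hypothetical sketch (not taken from any of the projects above): the same loop guarded for SSE on x86 and NEON on ARM, with a plain-C++ fallback. Names are illustrative only.

    // Hypothetical example: guarding x86 intrinsics so the same file builds on
    // x86-64 (SSE) and ARM64 (NEON).
    #include <cstddef>
    #if defined(__SSE__)
      #include <xmmintrin.h>
    #elif defined(__ARM_NEON)
      #include <arm_neon.h>
    #endif

    // Adds two float arrays of length n (assumed to be a multiple of 4).
    void add_f32(const float* a, const float* b, float* out, std::size_t n) {
    #if defined(__SSE__)
        for (std::size_t i = 0; i < n; i += 4)
            _mm_storeu_ps(out + i, _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    #elif defined(__ARM_NEON)
        for (std::size_t i = 0; i < n; i += 4)
            vst1q_f32(out + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    #else
        for (std::size_t i = 0; i < n; ++i)  // portable scalar fallback
            out[i] = a[i] + b[i];
    #endif
    }

The translation itself is usually mechanical; the harder part tends to be discovering which of your dependencies contain code like this in the first place.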


Filing bugs against the broken packages would be a nice thing to do. Easy enough to test on a Raspberry Pi or whatever.

I have a hunch our industry will need to increasingly deal with ARM environments.


>Mac ARM laptops mean cloud ARM VMs.

What is the connection here? ARM servers would be fine in a separate discussion. What does it have to do with Macs? Macs aren't harbingers of anything. They have set literally no trend in the last couple of decades, other than thinness at all costs. If you mean that developers will use Gravitons to develop mac apps, why/how would that be?


To quote Linus Torvalds:

"Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud.

That's bull*t. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment)."

So I would argue there is a strong connection.


> If you develop on x86, then you're going to want to deploy on x86

I can see this making sense to Torvalds, being a low-level guy, but is it true for, say, Java web server code?

Amazon are betting big on Graviton in EC2. It's no longer just used in their 'A1' instance-types, it's also powering their M6g, C6g, and R6g instance-types.

https://aws.amazon.com/about-aws/whats-new/2019/12/announcin...


I agree about Java. I'm using Windows to write Java but deploy on Linux and it works. I used to deploy on Itanium and also on some IBM Power with little-endian and I never had any issues with Java. It's very cross-platform.

Another example is Android app development. Most developers use an x86_64 CPU and run the emulator using an Intel Android image. While I don't have vast experience, I did write a few apps and never had any issue because of an arch mismatch.

High level languages mostly solved that issue.

Also note that there are some ARM laptops in the wild already. You can either use Windows or Linux. But I don't see that every cloud or android developer hunting for that laptop.


It works until it doesn't. We had issues where classes were loaded in a different order on Linux, causing failures that we could not repro on Windows.


Interesting, but in that case you changed OS rather than changing CPU ISA, so not quite the same thing.


No, it's exactly the same thing. The more variables you change, the harder it will be to debug a problem.


I've deployed C++ code on ARM in production that was developed on X64 without a second thought, though I did of course test it first. If it compiles and passes unit tests, 99.9% of the time it will run without issue.

Going from ARM to X64 is even less likely to have issues as X64 is more permissive about things like unaligned access.
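
To illustrate the unaligned-access point with a hypothetical snippet (not from any real codebase):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Read a 32-bit value at an arbitrary byte offset in a buffer.
    std::uint32_t read_u32_risky(const unsigned char* buf, std::size_t off) {
        // Undefined behaviour (alignment/aliasing), but it usually goes
        // unnoticed on x86, which tolerates unaligned loads.
        return *reinterpret_cast<const std::uint32_t*>(buf + off);
    }

    std::uint32_t read_u32_portable(const unsigned char* buf, std::size_t off) {
        // Well-defined on both x86 and ARM; compilers typically lower this
        // memcpy to a single load where the target allows it.
        std::uint32_t v;
        std::memcpy(&v, buf + off, sizeof v);
        return v;
    }

Modern AArch64 cores handle unaligned loads of normal memory fine too; it's older ARM cores, and compilers optimizing on the assumption of alignment, that tend to bite.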

People are making far too big a deal out of porting between the two. Unless the code abuses undefined behaviors in ways that you can get away with on one architecture and not the other, there is usually no issue. Differences in areas like strong/weak memory ordering, etc., are hidden behind APIs like posix mutexes or std::atomic and don't generally have to be worried about.

The only hangup is usually vector intrinsics or ASM code, and that is not found in most software.

For higher level languages like Go or Java, interpreted languages like JavaScript or Python, or more modern languages with fewer undefined edge cases like Rust, there is almost never an issue.

This is just not a big deal unless you're a systems person (like Linus) or developing code that is really really close to the metal.


I've developed for x86 and deployed on x86. Some years later we decided to add ARM support. Fixing the ARM-only bugs made our x86 software more stable: it turns out some one-in-a-million issues on x86 happen often enough on ARM that we could isolate them and then fix them.

Thus I encourage everyone to target more than one platform, as it makes the whole better, even though there are platform-specific issues that won't happen on the other (like the compiler bug we found).


Apparently Apple had macos working for years on x86 before they switched their computers to intel CPUs. The justification at the time was exactly this - by running their software on multiple hardware platforms, they found more bugs and wrote better code. And obviously it made the later hardware transition to intel dramatically easier.

I would be surprised if Apple didn’t have internal prototypes of macos running on their own Arm chips for the last several years. Most of the macos / iOS code is shared between platforms, so it’s already well optimized for arm.


They've had it deployed worldwide on ARM -- every iPhone and iPad runs it on an ARM chip.


To add to your example, everyone that targets mobile devices with native code tends to follow a similar path.

Usually making the application run in the host OS is much more productive than dealing with the emulator/simulator.


Most of my work outside of the day job is developed on x86 and deployed on ARM.

Unless you're talking about native code (and even then, I've written in the past about ways this can be managed more easily), then no, it really doesn't matter.

If you're developing in NodeJS, Ruby, .NET, Python, Java or virtually any other interpreted or JIT-ed language, you were never building for an architecture, you were building for a runtime, and the architecture is as irrelevant to you as it ever was.


> Python

Well I can't speak to some of the others... but Conda doesn't work at all on ARM today (maybe that will change with the new ARM Macs, though), which is annoying if you want to use it on, say, a Raspberry Pi for hobby projects.

Additionally, many scientific Python packages use either pre-compiled binaries or compile them at install-time, for performance. They're just Python bindings for some C or Fortran code. Depending on what you're doing, that may make it tricky to find a bug that only triggers in production.


Sorry, yes this is an exception.

Also one I've come across myself so I'm a bit disappointed I didn't call this out. So... kudos!


If you're on a low enough level where the instruction set matters (ie. not Java/JavaScript), then the OS is bound to be just as important. Of course you can circumvent this by using a VM, though the same can be said for the instruction set using an emulator.


But that's the other way round. If you have an x86 PC, you can develop x86 cloud software easily. You don't develop cloud software on a mac anyway (i.e., that's not apple's focus). You develop mac software on macs for other macs. If you have to develop cloud software, you'll do so on linux (or wsl or whatever). What is the grand plan here? You'll run an arm linux vm on your mac to develop general cloud software which will be deployed on graviton?


> If you have to develop cloud software, you'll do so on linux (or wsl or whatever).

I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.


I can say our company standardized on Macs for developers back when Macs were much better relative to other laptops. But now most of the devs are doing it begrudgingly. The BSD userland thing is a constant source of incompatibility, and the package systems are a disaster. The main reason people are not actively asking for alternatives is that most use the Macs as dumb terminals to shell into their Linux dev servers, which takes the pressure off the poor dev environment.

The things the Mac is good at:

1) It powers my 4k monitor very well at 60Hz

2) It switches between open lid and closed lid, and monitor unplugged / plugged in states consistently well.

3) It sleeps/wakes up well.

4) The built in camera and audio work well, which is useful for meetings, especially these days.

None of these things really require either x86 or Arm. So if a x86 based non-Mac laptop appeared that handled points 1-4 and could run Linux closer to our production environment I'd be all over it.


I think you've hit the nail on the head, but you've also summarised why I think Apple should genuinely be concerned about losing marketshare amongst developers now that WSL2 is seriously picking up traction.

I started using my home Windows machine for development as a result of the lockdown and in all honesty I have fewer issues with it than I did with my work MacBook. Something is seriously wrong here.


I think Apple stopped caring about devs marketshare a long time ago and instead is focusing on the more lucrative hip and young Average Joe consumer.

Most of the teens to early 20 somethings I know are either buying or hoping to buy the latest Macs, iPads, iPhones and AirPods while most of the devs I know are on Linux or WSL but devs are a minority compared to the Average Joes who don't code but are willing to pay for nice hardware and join the ecosystem.


Looking at the architecture slide of Apple's announcement about shifting Macs to ARM, they want people to use them as dev platforms for better iPhone software. Think Siri on chip, Siri with eyes and short-term context memory.

And as a byproduct perhaps they will work better for hip young consumers too. Or anyone else who is easily distracted by bright colours and simple pictures, which is nearly all of us.


> I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.

The dominance of Macs for software development is a very US-centric thing. In Germany, there is no such Mac dominance in this domain.


To be fair in the UK Macs are absolutely dominant in this field.


Depends very much what you're doing; certainly not in my area (simulation software) at least, not for other than use as dumb terminals.


Yes, in Germany it's mostly Linux and Lenovo / Dell / HP desktops and business-type laptops. Some Macs, too.


I have no idea where in Germany you're based, or what industry you work in, but in the Berlin startup scene, there's absolutely a critical mass of development that has coalesced around macOS. It's a little bit less that way than in the US, but not much.


Berlin is very different from the rest of Germany.


This. According to my experience and validated by Germans and expats alike, Berlin is not Germany :)


In Norway where I live Macs are pretty dominating as well. Might be Germany is the outlier here ;-)


When I go to Ruby conferences, Java conferences, academic conferences, whatever, in Europe, everyone - almost literally everyone - is on a Macintosh, just as in the US.


Most people don’t go to conferences.


Ruby conference goers don't represent all the SW devs of Europe :)


Why do you think not?

And why not Java developers?

They seem pretty polar opposite in terms of culture, but all still turn up using a Macintosh.


Because every conference is its own bubble of enthusiasts and SW engineering is a lot more diverse than Ruby, from C++ kernel devs to Firmware C and ASM devs.

Even the famous FailOverflow said in one of his videos he only bought a Mac since he saw that at conferences everyone had Macs so he thought that must mean they're the best machines.

Anecdotally, I've interviewed at over 12 companies in my life and only one of them issued Macs to its employees; the rest were Windows/Linux.


True, but it is full of developers using Windows to deploy on Windows/Linux servers, with Java, .NET, Go, node, C++ and plenty of other OS agnostic runtimes.


Given the fact that the US has an overwhelming dominance in software development (including for the cloud) I think that the claim this is only a US phenomenon is somewhat moot. As a simple counter-point, the choice of development workstation in the UK seems to mirror my previous experience in the US (i.e. Macs at 50% or more.)


My experience in Germany and Austria mirrors the GP's: Windows/Linux laptops are the majority, with Macs present in well-funded hip startups.


Same in South Africa (50% mac, 30% windows, 20% ubuntu) and Australia.


> You don't develop cloud software on a mac anyway

I've got anecdata that says different. My backend/cloud team has been pretty evenly split between Mac and Windows (with only one Linux on the desktop user). This is at a Java shop (with legacy Grails codebases to maintain but not do any new development on).


Mac is actually way better for cloud dev than Windows is, since it's all Unix (actual Unix, not just Unix-like). And let's be honest, you'll probably be using docker anyway.


Arguably now, with WSL, Windows is closer to the cloud environment than macOS. It's a true Linux kernel running in WSL, no longer a shim over Windows APIs.


Yep. WSL 2 has been great so far. My neovim setup feels almost identical to running Ubuntu natively. I did have some issues with WSL 1, but the latest version is a pleasure to use.


Do you use VimPlug? For me :PlugInstall fails with cannot resolve host github.com


I do use VimPlug. Maybe a firewall issue on your end? I'm using coc.nvim, vim-go, and a number of other plugins that install and update just fine.


That is just utter pain though. I’ve tried it and I am like NO THANKS! Windows software operates too poorly with Unix software due to different file paths (separators, mounting) and different line endings in text files.

With Mac all your regular Mac software integrates well with the Unix world. XCode is not going to screw up my line endings. I don’t have to keep track of whether I am checking out a file from a Unix or Windows environment.


Your line-ending issue is very easy to fix in git:

`git config --global core.autocrlf true`

That will configure git to checkout files with CRLF endings and change them to plain LF when you commit files.


Eating data is hardly a fix for anything, even if you do it intentionally.


If the cloud is mostly UNIX-like and not actual UNIX, why would using “real UNIX” be better than using, well, what’s in the cloud?


Agree, although I think this is kind of nitpicking, because "UNIX-like" is pretty much the UNIX we have today on any significant scale.


macOS as certified UNIX makes no sense in this argument. it doesn't help anything, as most servers are running Linux.


I develop on Mac, but not mainly for other Macs (or iOS devices), but instead my code is mostly platform-agnostic. Macs also seem to be quite popular in the web-frontend-dev crowd. The Mac just happens to be (or at least used to be) a hassle-free UNIX-oid with a nice UI. That quality is quickly deteriorating though, so I don't know if my next machine will actually be a Mac.


True, but then the web-frontend dev stuff is several layers away from the ISA, isn't it? As for the unix-like experience, from reading other people's accounts, it seemed like that was not really Apple's priority. So there are ancient versions of utilities due to GPL aversion and stuff. I suppose docker, xcode and things like that make it a bit better, but my general point was that it didn't seem like Apple's main market.


> So there are ancient versions of utilities due to GPL aversion and stuff.

They're not ancient, but are mostly ports of recent FreeBSD (or occasionally some other BSD) utilities. Some of these have a lineage dating back to AT&T/BSD Unix, but are still the (roughly) current versions of those tools found on those platforms, perhaps with some apple-specific tweaks.


It works great though, thanks to Homebrew. I have had very few problems treating my macOS as a Linux machine.


> You don't develop cloud software on a mac anyway

You must be living in a different universe. What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?


> What do you think the tens of thousands of developers at Google ... are doing on their Macintoshes?

I can only speak of my experience at Google, but the Macs used by engineers here are glorified terminals, since the cloud based software is built using tools running on Google's internal Linux workstations and compute clusters. Downloading code directly to a laptop is a security violation (With an exception for those working on iOS, Mac, or Windows software)

If we need Linux on a laptop, there is either the laptop version of the internal Linux distro or Chromebooks with Crostini.


They individually have a lot of developers, but the long tail is people pushing to AWS/Google Cloud/Azure from boring corporate offices that run a lot of Windows and develop in C#/Java.

edit: https://insights.stackoverflow.com/survey/2020#technology-pr...


>What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?

SSH to a Linux machine? I get that cloud software is a broad term that includes pretty much everything under the sun. My definition of cloud dev was a little lower level.


This is the same Linus who's recently switched his "at home" environment to AMD...

https://www.theregister.com/2020/05/24/linus_torvalds_adopts...


Which is still x86...?

What point are you trying to make?


So? Linus doesn’t develop for cloud. His argument still stands.


Because when you get buggy behaviour from some library because it was compiled to a different architecture it's much easier to debug it if your local environment is similar to your production one.

Yeah, I'm able to do remote debugging in a remote VM, but the feedback loop is much longer, impacting productivity, morale and time to solve the bug: a lot of externalised costs that all engineers with reasonable experience are aware of. If I can develop my code on the same architecture it'll be deployed on, my mind is much more at peace; when developing on x86_64 to deploy on ARM, I'm never sure whether some weird cross-architecture bug will pop up. No matter how good my CI/CD pipeline is, it won't ever account for real-world usage.


On the other hand, having devs on an alien workstation really puts stress on the application's configurability and adaptability in general.

It's harder in all the ways you describe, but it's much more likely the software will survive migrating to the next Debian/CentOS release unchanged.

It all boils down to the time scale of the project.


I'd say that in my 15-year career I've had many more tickets for bugs that I needed to troubleshoot locally than issues with migrating to a new version/release of a distro. To be honest it's been 10 years since the last time I had a major issue caused by a distro migration or update.


> "Macs aren't harbingers of anything."

I have to agree. It's not like we're all running Darwin on servers instead of Linux. Surely the kernel is a more fundamental difference than the CPU architecture.


ARM Macs means more ARM hardware in hands of developers. It means ARM Docker images that can be run on hardware on hand, and easier debugging (see https://www.realworldtech.com/forum/?threadid=183440&curpost...).


> They have set literally no trend in the last couple of decades, other than thinness at all costs

Hahaha, then you have not been paying attention. Apple led the trend away from beige boxes. The style of keyboard used. Large trackpads. USB. First to remove the floppy drive. Hardware, software and web design have all been heavily inspired by Apple. Just look at the icons used, first popularized by Apple.

The Ubuntu desktop is strongly inspired by macOS. An operating system with drivers preloaded through the update mechanism was pioneered by Apple. Windows finally seems to be doing this.


Because if you're developing apps to run in the cloud, it's preferable to have the VM running the same architecture that you're developing on.


maybe what he means is, if macs are jumping on the trend, man that must be a well-established trend, they're always last to the party.


Epyc Rome is now available on EC2, and the c5a.16xlarge instance appears to be about the same price as or slightly cheaper than the comparable Graviton2 instance.

Being cheaper isn't enough here - Graviton needs to be faster and it needs to do that over generations. It needs to _sustain_ its position to become attractive. Intel can fix price in a heartbeat - they've done that in the past when they were behind. Intel's current fab issues do make this a great time to strike, but what about in 2 years? 3 years? 4? Intel's been behind before, but they don't stay there. Switching to Epyc Rome at the moment is an easy migration - same ISA, same memory model, vendor that isn't new to the game, etc... But Graviton needs a bigger jump, there's more investment there to port things over to ARM. Whether that investment will pay off over time is a much harder question to answer.


> But Graviton needs a bigger jump, there's more investment there to port things over to ARM.

I agree to some extent, but don't underestimate how much this has changed in the cloud era: it's never been cheaper to run multiple ISAs, a whole lot of stuff is running in environments where switching is easy (API-managed deployments, microservices, etc.), and the toolchains support ARM already thanks to phones/tablets and previous servers. So much code will just run on the JVM, in high-level languages like JavaScript or Python, or in low-level ones like Go and Rust with great cross-compiler support. Hardware acceleration also takes away the need to pour engineer-hours into things like OpenSSL, which might otherwise have blocked tons of applications.

At many businesses that is probably over the threshold where someone can say it’s worth switching the easy n% over since they’ll save money now and if the price/performance crown shifts back so can their workloads. AWS has apparently already done this for managed services like load-balancers and I’m sure they aren’t alone in watching the prices very closely. That’s what Intel is most afraid of: the fat margins aren’t coming back even if they ship a great next generation chip.


The problem here is that ARM & x86 have very different memory models, not that the ISA itself is a big issue. Re-compiling for ARM is super easy, yes, absolutely. Making sure you don't have any latent thread-safety bugs that happened to be OK on x86 but are now bugs on ARM? That's a lot harder, and it only takes a couple of those to wipe out any potential savings, as they are in the class of bugs that's particularly hard to track down.
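
To make that concrete, a hypothetical example (illustrative only, not from any real codebase): a plain flag-plus-data handoff is a data race in C++ on any architecture, but x86's strong store ordering usually hides it, while ARM's weaker ordering can let a reader observe the flag before the data.

    #include <atomic>

    int payload = 0;
    bool ready_plain = false;          // racy flag
    std::atomic<bool> ready{false};    // fixed flag

    void producer_racy() {
        payload = 42;
        ready_plain = true;   // on ARM, the store to payload may become visible after this
    }
    void consumer_racy() {
        while (!ready_plain) { }   // data race: may spin forever or read a stale payload
        // use(payload);           // can observe 0 on a weakly ordered CPU
    }

    void producer_fixed() {
        payload = 42;
        ready.store(true, std::memory_order_release);   // publishes payload
    }
    void consumer_fixed() {
        while (!ready.load(std::memory_order_acquire)) { }
        // payload is guaranteed to be 42 here
    }

The racy version tends to pass every test on x86 and then fail intermittently on ARM, which is exactly the class of bug that's expensive to chase down.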


If you do have hidden thread safety bugs you are only one compile away from failure, even on x86.

Some function gets vectorized and a variable shifts in or out of a register? Or one of your libraries gets rebuilt with LTO / LTCG and its functions are inlined.

If your code was wrong, any of that can ruin it and you're left trying to figure it out, and requiring your developers to use exactly one point release of Visual Studio and never, ever upgrade the SDK install.


And precisely for this reason if I were Amazon/AWS I would buy ARM from Softbank right now - especially at a time when the Vision Fund is performing so poorly, and therefore there might be financial distress on the side of Softbank.


I'm not so sure about so many parts of this.

I love working on ARM, but with the licensing model I'm not sure how much of a positive return Amazon would really be able to squeeze from their investment.

It also potentially brings them a truckload of future headaches if any of their cloud competitors raise anti-trust concerns down the road.

Beyond that, I think Apple, ARM and AMD get a lot of credit for their recent designs - a lot of which is due, but quite a bit of which should really go to TSMC.

The TSMC 7nm fabrication node is what's really driven microprocessors forward in the last few years, and hardly anyone outside of our industry has ever heard of them.

I don't know that Amazon couldn't transition to RISC-V if they needed to in a couple of years.


I think it's still a sound investment, not for the ROI, but for a stake in controlling the future. They could mitigate the anti-trust angle a bit by getting Apple, IBM, and maybe Google onboard.

Microsoft controls a lot of lucrative business-centric markets. An advantage that seems to have helped MS Azure surpass AWS in market share. One of Microsoft's weaknesses is their inability to migrate away from x86 in any meaningful way. IBM could use Red Hat to push the corporate server market away from x86 under the guise of lower operating costs, which could deal a tremendous blow to MS, leaving Amazon and Google with an opening to hit the Office and Azure market.


Imagine if Oracle buys it, and it becomes another SUN-type outcome?


>you’d be stupid not to switch over to Graviton2

You overestimate the AWS customer base. Lots of them do silly things that cost them a lot of money.


It's because AWS is designed in a such a way that it's very easy to spend a lot, and very difficult to know why.


If you can throw money at the problem and invest engineering resources to do cost optimization later, this is often a valid strategy.

It's often easy to test if scaling the instance size/count resolves a performance issue. If it does, you know you can fix the problem by burning money.

When you have reasonable certainty of the outcome spending money is easier than engineering resources.

And later it's easier for an engineer to justify performance optimizations, if the engineer can point to a lower cloud bill..

I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.


>If you can throw money at the problem and invest engineering resources to do cost optimization later, this is often a valid strategy.

If only AWS had thousands of engineers to create a UX that makes cost info all upfront and easy to budget. Clearly it's beyond their capability /s

>I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.

Yes, but I bet a majority of AWS users don't want that to be the default strategy.


> Mac ARM laptops mean cloud ARM VMs.

Why do you think that? Cloud means Linux; Apple does not even have a server OS anymore.

We are seeing some ARM devices entering the server space, but these have been in progress for years and have absolutely nothing to do with Apple's (supposed) CPU switch.


Why do you think developers don't run Linux VMs on macOS laptops?


They absolutely do.

One reason is to get the same environment they get in a cloud deployment scaled down. Another is Apple keeps messing with compatibility, most recent example being the code signing thing slowing down script executions.

Edit, this includes docker on Mac: it's not native.


When they run Docker that's precisely what they do...


Everyone mentions Intel's high margins on servers and somehow does not consider whether Apple wants those margins too.

> Guys, do you really not understand why x86 took over the server market?

> It wasn’t just all price. It was literally this “develop at home” issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a “real server”. And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.


It was also because Intel was like this giant steamroller you couldn't compete with because it was also selling desktop CPUs.

Sure, they have lower margins on desktop, but that brings sh!t tons of cash, cash you then use for research and to develop your next desktop and server CPUs. If I believe this page [1], consumer CPUs brought in almost $10 billion in 2019... In comparison, server CPUs generated $7 billion of revenue... And these days, Intel has like >90% of that market.

Other players (Sun, HP, IBM, Digital, ...) were playing in a walled garden and couldn't really compete on any of that (except pricing because their stuff was crazy expensive).

So not only were they sharing the server market with Intel, but Intel was most likely earning more than the sum of it all from its desktop CPUs... More money, more research, more progress shared between the consumer and server CPUs: rinse and repeat and eventually you will catch up. And on top of that, Intel could sell its server CPUs for a lot less than its "boutique" competitors.

You just can't compete with a player which has almost unlimited funds and executes well...

[1] https://www.cnbc.com/2020/01/23/intel-intc-earnings-q4-2019....


Apple backed away from the server market years ago despite having a quite nice web solution.


Exactly. Yes, some people need memory and multi-core speeds, but for most, it's the cost per "good enough" instance, and that's CapEx and power, both of which could be much lower with ARM.


If you need multi-core speeds, you'd be silly to not go AMD Epyc (~50% cheaper), or ARM.

People paid the IBM tax for decades. Intel is the new IBM. Wherever you look, Intel's lineup is not competitive and it is only surviving due to inertia.


If the operating margins in the article are realistic then there is a lot of room for undercutting Intel if you can convince the customer to take the plunge. That is, if you’re not in an IBM-vs-Amdahl situation where the lower price is not enough.


It isn't quite that simple. Intel has way more engineering resources than AMD, and for some complicated setups like data centers, Intel really does have good arguments that their systems are better tested than the competition, and Intel has a better ability to bundle products together than AMD.


Intel is behind in fab technology. I don't think Intel's chip designs are what's holding them back. AMD offered better multi-core performance and Intel responded with more cores as well. However, I do believe that Intel suffers from an awful corporate environment. There was a story about ISPC [0] with one chapter [1] talking about the culture at the company.

[0] https://pharr.org/matt/blog/2018/04/30/ispc-all.html

[1] https://pharr.org/matt/blog/2018/04/28/ispc-talks-and-depart...


> their systems are better tested than the competition

Yes, Spectre and Meltdown definitely prove that...

I think I heard that argument already, in the 90's. But instead of Intel it was Sun, it was SGI, it was Digital, etc

The truth is that money wins, and while there is inertia, it's more like cartoon inertia, where the guy won't fall from the cliff until he looks down. But then it's too late.


If by tested you mean getting eaten alive by vulnerability reports monthly and further degrading performance, sure. There isn't much rocket science otherwise to "better tested" in a server platform. Either it passes benchmarks of use-case scenarios or it doesn't.


Yes, there is. Server platforms are connected to more storage, higher-bandwidth networking and more exotic socket configurations (more than one CPU). It is one thing to support lots of PCI Express slots on paper. It is another thing to have a working solution that doesn't suffer degraded performance when all the PCI Express slots are in use at the same time.

None of this is rocket science, but it takes money and engineers' time to make these things happen. Intel has more of both at the moment.


There are plenty of applications where single-threaded clock speed matters, and Intel still wins by a wide margin there. Cache size is also a factor, and high-end Xeons have more cache than any competing CPU I've seen.


The just announced Intel Xeon Cooper Lake top end processor has about 38.5MB of cache. The AMD Rome top end has 256MB of cache.

https://ark.intel.com/content/www/us/en/ark/products/205684/...

https://www.amd.com/en/products/cpu/amd-epyc-7h12


I'm not sure this is the whole story, Intel has twice the L2 cache as AMD but I'm not sure that's enough to make a huge difference.

Epyc 7H12[1]:

- L1: two 32KiB L1 cache per core

- L2: 512KiB L2 cache per core

- L3: 16MiB L3 cache per core, but shared across all cores.

The L1/L2 cache aren't yet publicly available for any Cooper Lake processors, however the previous Cascade Lake architecture provided:

All Xeon Cascade Lakes[2]:

- L1: two 32 KiB L1 cache per core

- L2: 1 MiB L2 cache per core

- L3: 1.375 MiB L3 cache per core (shared across all cores)

Normally I'd expect the upcoming Cooper Lake to surpass AMD in L1, and lead further in L2 cache. However it looks like they're keeping the 1.375MiB L3 cache per core in Cooper Lake, so maybe L1/L2 are also unchanged.

0: https://www.hardwaretimes.com/cpu-cache-difference-between-l...

1: https://en.wikichip.org/wiki/amd/epyc/7h12

2: https://en.wikichip.org/wiki/intel/xeon_platinum/9282

Edit: Previously I showed EPYC having twice the L1 as Cascade Lake, this was a typo on my part, they're the same L1 per core.


Zen 2 has 4MiB L3 per core, 16 MiB shared in one 4-core CCX.


Thanks, I wrote that while burning the midnight oil and didn't double-check the sanity of those numbers. It's too late to edit mine but I hugely appreciate the clarification.


NP. It's still a huge amount of LLC compared to the status quo. Says something about how expensive it really is to ship all that data between the CCXs/CCDs.


Intel L3 does not equal AMD L3 cache regarding latencies. Depending on the application this can matter a lot. https://pics.computerbase.de/7/9/1/0/2/13-1080.348625475.png


You'd need that latency to be significant enough that AMD's >2x core count doesn't still result in it winning by a landslide anyway, and you need L3 usage low enough that it still fits in Intel's relatively tiny L3 size.

There have been very few cloud benchmarks where 1P Epyc Rome hasn't beaten every offering from Intel, including 2P configurations. The L3 cache latency hasn't been a significant enough difference to make up for the raw CPU count difference, and where L3 does matter, the massive amount of it in Rome tends to still be more significant.

Which is kinda why Intel is just desperately pointing at a latency measurement slide instead of an application benchmark.


Cache per tier matters a lot, total cache does not tell much. L1 is always per core and small, L2 is larger and slower, L3 is shared across many cores and access is really slow compared to L1 and L2. In the end performance per watt for a specific app is what matters, that is the end goal.


Interesting, that's news to me. Guess Intel just has clock speed then. That's why I still pay a premium to run certain jobs on z1d or c5 instances.

As another commenter pointed out, though, not all caches are equal. Unfortunately, I was not able to easily find access speeds for specific processors, so single-threaded benchmarks are the primary quantitative differentiator.


Given the IPC gains of Zen 2, the single-threaded gap is closing, and even reversed in some workloads.

And I think Xeon L3 cache tops out at about 40MB, whereas Threadripper & Epyc go up to 256MB.


Really? 'Entry-level' EPYCs (the 7F52) have 256MB of L3 cache for 16 cores.

I don't think there are any Intel CPUs with more than 36MB of L3?



77MB for 56 cores. That's that ploy where they basically glued two sockets together so they could claim a performance per socket advantage even though it draws 400W and that "socket" doesn't even exist (the package has to be soldered to a motherboard).

IIRC the only people who buy those vs. the equivalent dual socket system are people with expensive software which is licensed per socket.


Those applications exist, but not enough to justify Intel’s market cap.


Do you have a source for any of the things you said?


Anecdotal: back when SETI@home was a thing, I was running it on some servers; a 700 MHz Xeon was a lot faster (>50%, IIRC) than a 933 MHz Pentium 3. The Xeon had a lower frequency and a slower bus (100 vs 133 MHz), but its cache was 4 times larger, and the dataset, or most of it, probably fit in cache.


The same happened with the mobile Pentium-M with 1 (Banias) and 2MB (Dothan) cache - you could get the whole lot in cache and it just flew, despite the (relatively) low clock speed. There were people building farms of machines with bare boards on Ikea shelving.


Even worse for Intel, there are lots of important server workloads that aren't CPU intensive, but rely on the CPU coordinating DMA transfers between specialized chips (SSD/HDD controller, network controller, TPU/GPU) and not using much power or CapEx to do so.


> If you need multi-core speeds, you'd be silly to not go AMD Epyc (~50% cheaper), or ARM.

But Amdahl's Law shows us this doesn't make sense for most people.


Graviton is only cheaper because Amazon gouges you slightly less.

Graviton is still almost an order of magnitude slower than a VPS of the same cost, which is around what the hardware and infra costs Amazon.


And since you can already run Windows 10 on ARM (https://docs.microsoft.com/en-us/windows/arm/), it is only a matter of time before we get Windows Server on ARM. Though I guess you can already run SQL Server on Linux on ARM, I think, though I am not entirely sure about that.


I suspect that Apple is likely to enforce running macOS only on its own ARM hardware, just like it does now for Intel (which is a lot more common than ARM).

The limitation is not technical (as hackintoshes demonstrate); it’s legal.

That said, it would be great to be able to run Mac software (and iOS) on server-based VMs.

I just don’t think it will happen at scale, because lawyers.


Pedantic: actually, cloud ARM VMs and ARM laptops mean eventual Mac ARM laptops. The former two are already here: the Graviton2 you mentioned, the Surface X, and also 3rd-party ones.


There is no mainstream ARM laptop as of writing.

What are my options if I want an ARM laptop with, say, good mobile processor performance close to the i7-8850H found in a 2018 MBP 15, 16GB RAM and a 512GB NVMe SSD, to set up my day-to-day dev environment (Linux + Golang + C++ etc.)?

The Surface X is the only ARM laptop you can easily purchase, but it is Windows-only and the processor is way too slow. There has also been close to zero response from app vendors to bring their apps to the native ARM Windows environment.


How is the X slow? The reviewers have only claimed it was slow when emulating x86, but not in native apps.


Not sure how slow it’s ARM processor is in actual use, but we know it’s far slower than Apple ARM CPUs.


AFAIK, you can game on Surface X without much issue. There are numerous YouTube videos of popular games doing 60 FPS on decent settings.

Apple fans just seem to be in denial about being late to the game.

I have to admit, Apple's ARM processors will likely be significantly faster per core. But they are not the driver of the switch to ARM. If anything, Chromebooks were.


Does anyone know if AWS Graviton supports TrustZone?


> Mac ARM laptops mean cloud ARM VMs.

If you develop for Mac, chances are you want your CI to use the same target hardware, which means cloud ARM hardware to run Mac VMs.


Chances indeed, but given how a lot of software is written in cross-platform languages (Java, JS, Ruby, etc.) or the underlying CPU hardware is abstracted away (most compiled languages), I like to think it doesn't really matter except in edge cases and/or libraries.

Wishful thinking though, probably.


I can abstract away the OS (mostly), but I can’t abstract away the ISA without paying a pretty hefty performance cost.


Except you’re not going to be selling access to a Mac VM running on Graviton anytime soon.


> you’d be stupid

there's still a load of non scale-out services that the world depends upon.


Try Ampere:

https://amperecomputing.com/

Ever since Cavium gave up Arm server products and pivoted to HPC, there hasn't been a real Arm competitor.

Ampere is almost all ex-Intel.


Intel has a couple of issues.

1. Back in the 1990s, the big UNIX workstation vendors were sitting where Intel is now at the high end, being eaten from the bottom by derivatives of what was essentially a little embedded processor for dumb terminals and scientific calculators. Taken in isolation, Apple's chips aren't an example of low-margin high-volume product eating its way up the food chain, but the whole ARM ecosystem is.

2. For a lot of the datacenter roles being played by Intel Xeons, flops/watt or IOPS/watt isn't the important metric. For many important workloads, the processor is mostly there to orchestrate DMA from the SSD/HDD controller to main memory and DMA from main memory to the network controller. The purchaser of the systems is looking to maximize the number of bytes per second divided by the amortized cost of the system plus the cost of the electricity. My understanding is that even now, some of the ARM designs are better than the Atoms in terms of TDP, even forgetting the cost advantages.


And we should not forget that ARM is already there. It is used by almost all the IO controllers, including the SSD/HDD controller.


Curious -- with that being the case, why haven't non-Intel systems taken more market share on these use cases?


Momentum. I think one component of momentum is just the time it takes to develop mature tooling for the ecosystem and port existing software over.

I invested in ARM (ARMH) back in 2006, partly because I realized that whether Apple or Android won more marketshare, nearly everyone was going to have an ARM-based smartphone in a few years. Part of it was also realizing the above and hoping ARM would take a good share of the server market. SoftBank took ARM private before we saw many inroads into the server market, but it was still one of the best investments I've made.

Of course, maybe I'm just lucky and my analysis was way off.


clearly this analysis was accurate! prescient and well done. was ARM running most non-iphone smartphones in 2006? the iphone launched in 2007, so this analysis must have been based on other smartphones (unless you had inside intel on the iphone).

1) which 2006 analyses proved wrong? it would be interesting to see which assumptions made sense in 2006 but were ultimately proven wrong.

2) which companies are you evaluating in 2020, and why?

thanks for sharing.


Oops. Must have been 2007 that I bought ARMH. I was working at Google at the time, and I had an iPhone, and Google had announced they were working on a smartphone. Some of my colleagues were internally beta-testing the G1 at the time, but it was before we all got G1s for end-of-year bonuses. I think December 2007 was the first year we got Android phones instead of (non-performance-based) cash for the holidays.

1) I thought the case for high-density/low-power ARM in the datacenter was pretty clear-cut, but it was an obvious enough market that within a few multi-year design cycles, Intel would close the gap with Atoms and close that window of opportunity forever, especially considering Intel's long-running fabrication process competitive advantage. In late 2012, one of my friends in the finance industry wanted to be "a fly on the wall" in an email conversation between me and a hedge fund manager he really respected who had posted on SeekingAlpha that his fund was massively short ARMH. A few email back-and-forths shook my confidence enough to convince me that the server market window had closed, and that it was possible (though unlikely) that Atom was about to take over the smartphone market. I reduced my position at that time by just enough to guarantee I'd break even if ARMH dropped to zero. In hindsight, I'm still not sure that was the wrong thing to do given what I knew at the time. I had two initial theses: the smartphone thesis had played out and was more-or-less believed by everyone in the market, and the server thesis was reaching the end of that design window and I was worried that Intel was not far from releasing an ARM-killer Atom, backed up by the arguments of this hedge fund manager. I'm really glad the hedge fund manager didn't spook me enough to close out my position entirely.

2) I'm not a great stock picker. I've had some great calls and had about as many pretty bad calls. I'm now doing software development, but my degree is in Mechanical Engineering, and I took a CPU design course (MIT's 6.004) back in college. I think my edge over the market is realizing when the market is under-appreciating a really good piece of engineering, though I punch well below my weight in the actual business side analysis.

Jim Keller is a superstar engineer, who attracted other superstars, and I don't think the market ever really figured that out. Jim Keller retired now, so there goes about half my investment edge. Though, maybe I'm just deluding myself on the impact really good engineering really has on the business side.


thanks for the thoughtful reply. mistakes are expected in any field; you are too humble and should give yourself a little more credit. :)

could you share what arguments the hedge fund manager made?

do you find any companies interesting now?


I am a programmer. All the software that I run is cross platform, so I expect a smooth transition.

Elixir, my main programming language, will use all the cores in the machine, e.g. parallel compilation. Even if an ARM-based mac has worse per-core performance than Intel, I am ahead.

Apple can easily give me more cores to smooth the transition. Whatever the profit margin was for Intel on the CPU, they can give it to me instead. And they can optimize the total system for better performance, battery life and security.


>Whatever the profit margin was for Intel on the CPU, they can give it to me instead.

HAHAHA you sweet summer child.

Next thing you know, you'll be demanding Apple puts back the heatpipe in the Macbook Air for cooling the CPU instead of hoping there's enough airflow over the improperly seated heatsink (which just has to work until the warranty expires ;) )


Not an unreasonable expectation, if you’ve been paying attention to Apple’s pricing lately

They now sell an iPad, arguably better than any of the non-iPadOS competition, for $350.

The iPhone SE, one of the fastest smartphones on the market (only other iPhones are faster) starts at just $399.

They’ve aggressively been cutting prices, I believe, so that they can expand the reach of the services business. They’re cutting costs to expand their userbase.

I wouldn’t be surprised to see the return of the 12-inch MacBook at the $799 to $899 price point, now that they no longer have to pay Intel a premium for their chips.


> The iPhone SE, one of the fastest smartphones on the market (only other iPhones are faster)

I just replaced my Google Pixel (4 years old) with an SE and it's like... They're not even comparable. I'm sure there's a clear explanation for how so much progress could be made in 4 years, and how it could cost so much less (I paid $649 USD for the Pixel), but it feels a bit magical as a consumer and infrequent phone user. It's a fantastic little device.

I'm still 900% pissed about the MacBook Air they sold me in 2018 and I resent the awful support they gave me for the keyboard (It's virtually unusable already), but as a phone business, they seem hard to beat right now.


I can't do iOS still. Too many restrictions, compromise and missing functionality.

Like, give me a custom launcher option (custom everything options), emulation of classic consoles and handhelds, and easy plug and play access via a PC. If I can't even get one of those, I'm on Android regardless of how shiny I think Apple hardware is.

But if you don't want any of that, it probably works great. Just not for me.


I hear you, I used to feel the same. I was a heavy phone user once and that's why I got the Pixel. A lot of things mattered to me then that just don't now. I could almost get by with a flip phone, but there are still a few things I like about smart phones:

- I like to use my phone to check bathy charts while I'm out free diving. In a waterproof case, a phone is a huge asset for finding interesting spots to dive. I don't really want to buy a dedicated device for this. I can just hook it to my float and it's there for exploration/emergencies/location beacon for family/etc.

- When I forget to charge my watch, it's nice to have GPS handy for tracking a run or ride

- It's really nice to be able to do a decent code review from a phone if I'm out and about. I wouldn't do this with critical code or large changes, but it's nice to give someone some extra eyes without committing to sitting at the desk or bringing my computer places

- I have ADHD and having a full-fledged reminder/task box is a god send. I'd be lost without todoist

I could do this all with any modern smart phone, but I went with the best 'bang for the buck' model I could find. I don't think I'll miss anything from Android.


I’m torn between an iPhone 11 and the SE to replace my Pixel 2. I have an XR for work and if it wasn’t so locked down for security I think I’d be using this one more than my Pixel.

I do miss a home screen widget and some apps, but I think overall the experience is really nice, and being able to talk to the iMessage people would be nice.

I built a Hackintosh on a Thinkpad and Handoff, AirDrop and all those connections are way more useful than I thought. It’s been 3 years and Android/Windows seems to be pretty much in the same state minus some minimal improvements that really don’t add that much value.

Integration and unified workflows is where it’s at for me at least.


Likewise, the degree of integration is kind of absurd and luxurious for me. I expected it to be a perk but it's quite a bit better than I thought it would be.

When I picked up my SE and signed in with my Apple ID it gave me access to all of my keychain and tons of other stuff I use frequently like photos and documents. Things I didn't realize I was missing on Android. I installed multiple apps and was automatically signed in! I didn't expect that. Wifi just works everywhere I've been with my MacBook. People near me on iPhones can have all kinds of things exchanged with minimum effort. Contacts, files, wifi passwords.

I know all of this works on Android too (kind of), but since I have a MacBook and an Apple ID I use a lot, my experience with an iOS is faaaar better than it was with Android.

I used to go days without touching my Pixel, but this iPhone is so much nicer to use and so much more useful that I carry it regularly and use it quite a bit. I'm really happy with it.


There should be minimal to no difference in BOM cost from the iPad Pro to a 12" MacBook. As a matter of fact it is probably cheaper for the MacBook without the touch sensors, ProMotion, or camera modules. A 12" MacBook at $799 is very much plausible.


Hey now, that heat sink is going to be the main difference between an Air and iPad before too long.


"they can give it to me instead" yes that is exactly what Apple will do


Apple is no stranger to aggressive pricing when it suits them.


Sure as long as they have that 30% profit rate


got any post-Jobs examples?


The iPhone SE and entry level iPad.

According to the usually accurate Ming Chi Kuo, the entry level iPad will be moving to using a top of the line chip as well, which is a needed change.

>Kuo first says that Apple will launch a new 10.8-inch iPad during the second half of 2020, followed by a new iPad mini that’s between 8.5-inches and 9-inches during the first half of 2021. According to the analyst, the selling points of these iPads will be “the affordable price tag and the adoption of fast chips,” much like the new iPhone SE.

https://9to5mac.com/2020/05/14/ipad-mini-apple-glasses-kuo/


not only rumors, but unpriced rumors?

and the iPhone SE is not a low-margin example https://www.phonearena.com/news/apple-iphone-se-2020-teardow...

so the question still stands, got any post-Jobs example?


I'll let the guys at Android Central judge how much value for the money the iPhone SE delivers:

>I didn't think we would get to a point where an iPhone offers more value than Android phones, particularly in the mid-range segment. But that's exactly what's going on with the iPhone SE.

https://www.androidcentral.com/iphone-se-2020-just-killed-pi...

Feel free to ignore Ming Chi Kuo, but given his track record for accuracy and known good sources inside Apple's supply chain, people pay attention to what he has to say.

More interestingly, will Apple use the same strategy of selling at entry level prices with a custom ARM SOC that outperforms the competition for an ARM Mac as well?


iPhone SE is absolutely not cheap when you can buy a powerful android for $200. It's not absurdly priced, but that's about it.


>powerful android for $200

Even the $1500 Android flagships are completely outclassed by that SE.

On CPU:

>Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

and GPU:

>On the GPU side of things, Apple has also been hitting it out of the park; the last two GPU generations have brought tremendous efficiency upgrades which also allow for larger performance gains. I really had not expected Apple to make as large strides with the A13’s GPU this year, and the efficiency improvements really surprised me. The differences to Qualcomm’s Adreno architecture are now so big that even the newest Snapdragon 865 peak performance isn’t able to match Apple’s sustained performance figures. It’s no longer that Apple just leads in CPU, they are now also massively leading in GPU.

https://www.anandtech.com/show/15246/anandtech-year-in-revie...


[flagged]


No a cheap Porsche is a much better driver than an expensive Corvette. Porsche is about the driving experience as much as speed.


that's exactly the point, not everyone purchases by driving experience, and value is subjective, so I don't know why you're being contrarian about the statement.

the point is that nothing of value can come from arguing subjective points.


“Better driver” /= “faster”


what does that have to do with my statement?


It’s exactly your statement.


None of this has anything to do whatsoever with margins for apple.


They offer the same chip in their $1k smartphone as in their $399 phone. People who like iOS but don't want to spend a lot on a phone no longer need to spend $600+ USD to get it, and that's a lot of people.


You don't need the best CPU and GPU to chat in WhatsApp and take some selfies. Also, you conveniently did not mention RAM, which is more important for a good user experience in typical smartphone apps.


AFAICT, given the lack of Java on iOS, applications tend to use less memory than their Android counterparts, which is what has enabled Apple to put less RAM in their phones for years.


IOS has always done a lot more with less memory than Android, by design. No JVM, lots of memory maximizing OS features.


that's nice and all, but this is a thread about aggressive pricing and I just linked you a source for the iPhone SE having big fat margins. so, no, you're talking about another thing completely; the iPhone SE is not priced aggressively. it's at most built on the low end to compete strategically, and that has nothing to do with the point being made.


So Apple makes a great product at a great price, and you still complain about their margins? What’s it matter what the margins are if it’s competitive with alternatives?

It’s always kind of strange to me to see all the people who criticize Apple’s profits. Look at the profit margins of other major tech companies, and you’ll find they make similarly large margins.


you're mistaken, I'm not complaining.

this thread is about apple margins and people arguing that apple will turn those margins into lower prices. I really don't know why you're arguing about product value. this is the thread starter: https://news.ycombinator.com/item?id=23598122

what does your point about prices and product quality have to do with this?


The current $329 iPad. The $499 iPhone SE.


Are you sure it does not carry the same margin, +/-, as everything else?


I'm getting downvoted to oblivion but it's a hill I'm happy to die on - so far everyone has brought up lines envisioned by Jobs or items that actually enjoy high margins.


The point was about aggressive pricing and you keep referring to high margins. You can still be aggressive on price (based on what the market commands) and have high margins. You're making a separate point and you fail to notice it.


all envisioned and positioned in the Jobs era, got any post-Jobs example?

and the 2020 SE is not a low-margin example https://www.phonearena.com/news/apple-iphone-se-2020-teardow...


Mac mini, low-end MacBook Air, iPod touch…


The Mac mini in no way can be considered "aggressive pricing". It is well over the price of anything in its range. I wouldn't doubt that Apple's margins on the Mac mini are higher than on some of the portables.


Yeah, but what if there was one that was aggressively priced due to using an ARM processor?

I'm really not sure how the market would respond to it. So long as it played in the Apple ecosystem, I think it's interesting for some things though.


> Yeah, but what if there was one that was aggressively priced due to using an ARM processor?

What makes people think that Apple will lower the price because they are now using their ARM processor?


I mean the original Mac mini price point, rather than the current one. It may have been a bit overpriced even then, but it was a convincing entry-level product for those switching from the PC.


and it was Jobs who positioned it at the lower end of the pricing anyway, so I don't see how it could ever be thought of as a post-Jobs example


all envisioned and positioned in the Jobs era, got any post-Jobs example?


Slightly off-topic, but does anyone have any details on how the new ARM-powered MacBooks will perform compared to the Intel-powered MacBooks? According to this[1] article, "the new ARM machines (are expected) to outperform their Intel predecessors by 50 to 100 percent". Can anyone offer some insight into how this is possible?

[1] https://www.theverge.com/2020/6/21/21298607/first-arm-mac-ma...


The Macbooks will have a CPU that's comparable or better than the iPad Pro. The iPad Pro already beats the Macbook Pro in some well-known benchmarks, such as Geekbench.

https://www.macrumors.com/2020/05/12/ipad-pro-vs-macbook-air...


SPEC is a better cross-platform benchmark, since it's an industry standard and was designed specifically for cross-platform testing.

>Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we’re not expecting any proper competition for at least another 2-3 years, and Apple isn’t standing still either.

Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

I think performance per watt is going to be just as important as overall performance. Apple's "little" cores were a new design in the A13, and compare very well against stock ARM cores on a performance per watt basis.

>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

It will be interesting to see how much performance they can get when they are on a level playing field when it comes to power and cooling constraints.


geekbench is a horrid benchmark. The amount of info on how they run the tests is extremely limited (mostly which lib they opted to use). The tests are very short, and not power- or thermally-limited. I'll link to their methodology[0]:

for instance, the LZMA compression workload: LZMA (Lempel-Ziv-Markov chain algorithm) is a lossless compression algorithm. The algorithm uses a dictionary compression scheme (the dictionary size is variable and can be as large as 4GB). LZMA features a high compression ratio (higher than bzip2). The LZMA workload compresses and decompresses a 450KB HTML ebook using the LZMA compression algorithm, using the LZMA SDK for the implementation of the core algorithm.

They compress just 450KB of text as a benchmark - even the dict size is greater than the input. If it all fits in the CPU caches, the results are supreme; if not, horrible compared to the former. Also it'd vastly depend on how fast the CPU can switch to full turbo mode (for servers that doesn't matter at all)

[0]: https://www.geekbench.com/doc/geekbench4-cpu-workloads.pdf
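To make the cache argument concrete, here's a rough Python sketch of the same idea (my own toy illustration, not Geekbench's actual harness; the HTML stand-in and the 4MB dictionary are just assumptions):

    import lzma, time

    # ~450 KB of compressible HTML-ish text, roughly what Geekbench 4 used,
    # with an LZMA2 dictionary far larger than the input itself.
    data = (b"<p>The quick brown fox jumps over the lazy dog.</p>\n" * 9000)[:450 * 1024]
    filters = [{"id": lzma.FILTER_LZMA2, "dict_size": 4 * 1024 * 1024}]

    start = time.perf_counter()
    out = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
    print(f"{len(data)} -> {len(out)} bytes in {time.perf_counter() - start:.3f}s")

Input, dictionary and code can all stay cache-resident, so a score like this mostly reflects the core itself and says very little about the memory subsystem.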


FWIW you've linked Geekbench 4; Geekbench 5 is https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf

I agree these benchmarks have issues, but there's a consistent trend (eg. AnandTech's SPEC results are similar—they use active cooling to avoid the question about thermals) and the most reliable benchmarks, like compilers, aren't outliers.


True that, they did increase the book size, yet it's still comparable to the dict size: it compresses and decompresses a 2399KB HTML ebook using the LZMA compression algorithm with a dictionary size of 2048KB.

Doing the test with less than 4MB of L3 would hurt; doing it with less than 2MB would decimate the result.


The Geekbench results are questionable because they do things like give significant weight to SHA2 hashing, which is hardware-accelerated on the iPad Pro but not the older Intel processors in current Macbooks:

https://www.pcworld.com/article/3006268/tested-why-the-ipad-...


> Geekbench is also browser-based

That's not true.


> The iPad Pro already beats the Macbook Pro in some well-known benchmarks, such as Geekbench.

We keep hearing this, especially from the Apple-bubble blogs, yet why does it not translate into higher-end work being done on these iPads if they are supposedly so powerful? All we ever see is digital painting work that isn't pushing them at all and extremely basic video editing.


iPad-using musicians, synth-nerds and audio hackers are having a BLAST with the high-end iPads, I can tell that much.


You’re essentially downplaying a huge percentage of people doing real work.


it’s bc of iOS and AppStore. No serious dev will port their desktop apps to that.

Also good luck to Apple if they lock down macOS the same way.


>Also good luck to Apple if they lock down macOS the same way.

They are locking down macOS gradually. Their hardware revenue is in long-term decline and they know it. Their push is to lock down the software and take as much of a cut as they can.


The rumor is a chip with twice the core count of the A12Z/X, built on TSMC 5nm, plus active cooling. I'm guessing somewhere around 3X iPad performance.


Probably 3 chips for different models? I could imagine the iPad Pro chip in the Macbook Air, a mid-range chip and a high performance (the one with 3x performance) for the Macbook Pro.

Could make the Macbook Pro the fastest laptop in some benchmarks while also offering more affordable options.


Wow..incredible if true.


This might be a dumb question, but what on earth is the point of x86 if ARM performs better and is also more power efficient?


x86 vs. ARM is irrelevant, that's just the instruction set. There's (almost) no performance or efficiency differences from the instruction set by itself.

Apple's particular ARM CPU core happens to look like a beast vs. Intel's particular x86 CPU core. But those are far from the only implementations. AMD's x86 is, at the moment, also very strong, and incredibly efficient. Lookup Zephyrus G14 reviews for example - the AMD 4800HS in that is crazy fast & efficient vs. comparable Intel laptops.

Similarly, lookup Apple A12/A13 vs. Qualcomm benchmarks - Snapdragon is also ARM, but it kinda sucks vs. Apple's A12/A13.

In terms of power realize that there's quite a bit of overlap here. Apple's big cores on the A13 pull about 5W. That's not really that different from X86 CPU cores at similar frequencies. The quad-core i5 in a macbook air, for example, pulls around 10w sustained. It's just a product design question on what the power draw stops at. More power = more performance, so usually if you can afford the power you spend it. So far there's been relatively little product overlap between x86 & ARM. Where there has been, like the recent ARM server push, power consumption differences tend to immediately go away. Laptops will be a _very_ interesting battle ground here, though, as differences get amplified.


I thought RISC chips are easier to create in a more power efficient way as the chip design is way easier than for CISC (like x86)? If Apple had been able to create a competitive x86 chip they would've done that. But the instruction set is an extremely important part of that.


Please no - that debate is an anachronism.

Apple simply can't create any kind of x86 chip whatsoever (at least if they plan to use it :) ) - they don't have an ISA license and likely won't ever be able buy one, because Intel doesn't sell.

Well, apart from outright buying AMD. Or Intel. Or both. :)


So you're basically saying it's not hard to design a processor (even an x86 one), it's just that patents prohibit it? I had always thought that at least some part of the Intel/AMD dominance was the complexity of the designs and manufacturing, something that even a company like Apple couldn't build within a decade.


It's worth noting that this benchmark uses the 13" MBP; the 16" MBP has a more powerful CPU.


Apple is the most valuable public company in the world. They have been making their own SoCs for the iPhone for over a decade. Their internal-consumption focus along with the bankroll of the world's most valuable corporation means that they can make the best chips. Intel's market cap is just 1/8th of Apple's.

Intel chips have basically stagnated for about 5 years, and Intel abused its monopoly and market-leader position by offering marginal improvements each year in order to protect margins, on top of fab-shrink problems that insiders describe as coming from a culture not dissimilar to Boeing's, just with fewer lives at stake.

In the meantime, competitor foundries have caught up, and exceed Intel's ability to ship in volume. ARM obviously is eyeing the desktop, but more critically server market so the latest ARM IP is geared towards higher performance computing, quite well I may say.

State of the art fab + state of the art processor IP = state of the art processor. Not a huge surprise :)


Yet Apple can't even make a working keyboard.


They can, they just compromised on an overengineered but stupid design so that they could shave off half a mm on their laptop thickness.


Butterfly keyboard works fine for me. It's actually my favorite MacBook keyboard so far.


Agreed. I have both the 2019 16" and 2018 15" side by side, I greatly prefer the butterfly keyboard.


They made working keyboards for years and have gone back to doing so. The bit in the middle was a blip.


nah. the "gone back" wasn't far enough. I liked, say, the first-generation macbook pro, with concave keys and a longer throw (although the creaky case was not as nice as the sculpted aluminum ones).

I had a friend with an old old powerbook and the key shape reminded me of a thinkpad.

They can do it, but the flat, short-throw keys are too form-over-function.


I still use a 2014 era Apple wireless keyboard and prefer it over any of the mechanical variants I’ve tried. Key travel distance is a personal metric.


the first macbook pro didn't have THAT long a throw.

https://upload.wikimedia.org/wikipedia/commons/7/76/MacBook_...


Right, I had that mac and the KB was fine, as is the 2014 wireless model I have, despite neither being long throw. I'm fine with short throw, it's a preference.

The 2016 keyboard fiasco was too far for me though, and even now with my 16" 2019 mac with the "fixed" keyboard, I still prefer the wireless 2014 model, or the KB from my 2014 MBP.


I, for one, prefer a keyboard that is comfortable to type on even if doesn’t have the greatest travel over a significantly thicker laptop.


I’ve had many macs and all the keyboards have been best in class. I just skipped buying a model with the bad keyboard. That was made possible by the fact that their laptops have serious longevity so I could just hold onto machine until they fixed it.


I think when a threat comes along, Intel steps up.

they WILL compete, and they've been playing leapfrog with AMD for years.

I'm waiting to see Intel's response to the new 32 and 64 core chips from AMD.


Maybe. The problem is that I'm guessing Apple wants to move to their own ARM not for more compute, but more power efficiency. Intel has never been able to deliver on the low power side.


> Intel has never been able to deliver on the low power side.

Yes they have. It's why we have 10+ hour thin & light laptops. The quad-core i5 in a macbook air runs at 10W, that's comparable per-core power draw to an A13.

Controlling the CPU will give Apple more controls to tune total-package power, particularly on the GPU / display side of things, but CPU compute efficiency isn't going to be radically different, and that's not going to translate into significant battery life differences, either, since the display, radios, etc... all vastly drown out low-utilization CPU power draw numbers.


> The problem is that I'm guessing Apple wants to move to their own ARM not for more compute, but more power efficiency.

I'm guessing the switch is for those reasons, plus a more important one (my opinion only), control.

If they use their own chips, they're not stuck waiting on Intel (or any other x86 CPU manufacturer) to build CPUs that have the attributes that matter to them the most.


What I really don't understand is why Apple can out-compete the others by that much. Sure, they're bigger than Intel, but for them chip design is only one small part. Have they outspent Intel, Qualcomm, AMD & Co so massively?


Why should we expect anything significantly different than a year iteration on what they already use in the iPad Pro?


A laptop (or desktop!) computer has a significantly higher energy budget than the iPad. The SoC Apple uses in those devices could conceivably be clocked higher and/or have more cores than the A12 parts they're currently using in the iPad Pro.


You can't exactly just shove a turbo on an underpowered commuter I4 engine (fuel efficient) and expect it to turn out well.

I wouldn't imagine just upping the vcore and the frequency clock will just magically make it faster and compete at the same level as an intel chip.

I'm deeply skeptical as a developer, you -will- get compiler errors, and it's going to suck for a while.


Well, it entirely depends on how they designed the chip. If they planned ahead they could have designed the chip for 4GHz ahead of time and then taken advantage of the higher frequency once they decided to actually release a desktop or laptop with an ARM chip. I don't believe that Apple did this, so they will either create a new chip exclusively for desktops/laptops or they will just keep the low frequency.


> You can't exactly just shove a turbo on an under-powerd commuter I4 engine (fuel efficient) and expect it to turn out well.

Isn’t this the exact concept of the VAG EA888 motor used in the Audi A4 and extended all the way up to the S3?

That said, the rumored chip is an actively cooled 12 core 5nm part which is more like an RS3 motor with extra cores/ cylinders


Way different thermal envelope? iPads are passively cooled.


Isn't that a selling point of ARM? Would you prefer a passively cooled but only slightly slower laptop or a faster laptop with a noisy fan?


I don't think there can be a universally accepted answer to this, as there are so many different user priorities for laptops.

Having said that, I think the top requirement for most laptops is that "it's gotta run what I need it to run on day one".

For a huge amount of light users, ARM Macs are going to satisfy that requirement easily.

Once you move up the food chain to users with more complicated requirements, however, it's less of a slam dunk... for the moment.


This is like saying Qualcomm and Samsung should be able to make top-tier desktop chips, but the reality is that it doesn't translate so easily.

I guess it's all trust that Apple can pull it off, and there are no detailed rumors yet?


That's because neither of them builds even a top-tier mobile ARM chip. Each of their best offerings is about half as fast as Apple's A12X/Z.


Because with the A series they show that they know how to make a good CPU that fits into the thermal restrictions of the chassis.


>fab shrink problems that insiders describe as coming from culture not dissimilar to Boeing; just with less lives at stake.

Since normal processors calculate a lot of important things, I'm wondering if there really are fewer lives at stake. Of course it would be more indirect, but I could imagine that there are many areas where lives are tied to PCs doing the right thing.

E.g. what if a processor bug leads to security issues in hospital infrastructure.


Like Meltdown and Spectre?


Maybe we should wait for it to actually be announced?

Otherwise it's just fantasy sports.


The whole fun of product roadmap speculation is it being a fantasy sport.

(Fun and potentially useful, if you want to plan ahead, play the market, and other causes...)


> Maybe we should wait for it to actually be announced?

This[1] claims ARM Macs will be unveiled today (Monday) at WWDC. I guess we'll see soon enough.

___

1. https://www.theverge.com/2020/6/21/21298607/first-arm-mac-ma...


That's my whole point. All the discussion up til now has been based on rumours and speculation. There's only so much we can discuss around that.

Until we actually have details of Apple's implementations, there's no realistic way we can discuss

> how the new ARM-powered MacBooks will perform compared to the Intel-powered MacBooks


Apple and the Kremlin share a few things. One is that, because of how official announcements are done, a desire if not a need for outside analysts has developed: people who specialize in forecasting those official announcements from outside trends, close observation of the company, and any clues it might forget to filter.


I'll have to dig around and find relevant Phoronix benchmarks later for you. But I think it's somewhat of a hive-mind assumption that Intel has generally higher performance all round. A majority of benchmarks show that Intel is generally better in single-threaded benchmarks, whereas ARM CPUs are better in multicore, multi-threaded applications with a lower TDP and leagues better efficiency. Heck, the Surface Pro X that Microsoft brought out has performance on par with an Intel i5, at least 1.5x the battery life of its Intel counterpart, and no need for an internal fan whatsoever. I wouldn't be surprised at all if Apple makes the switch to ARM for all but their highest-spec MacBook Pros.


I will immediately believe them if you interpret "outperforming" as "better performance per watt".

However, I can also see their chips outperforming Intel in sustained workloads. Apple has a history of crippling Intel performance by allowing the CPU to only run at full speed for a short while, quickly letting it ramp up the temperature before clocking down aggressively. Some of that is likely because of their power delivery system, some of it is because they choose to limit their cooling capacity to make their laptops make less noise or with fewer "unattractive" airholes. Either way, put the same chip in a Macbook and in a Windows laptop and the Windows laptop will run hotter but with better performance.

However, with a more efficient chip, Apple could allow their CPU to run at a higher frequency than Intel for longer, benefiting sustained workloads despite the IPC and frequency being lower on paper. This is especially important for devices like the Macbook Air, where they didn't even bother to connect a heatpipe to the CPU and aggressively limit the performance of their Intel chip.
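A back-of-the-envelope way to see why that matters, with a toy model and made-up wattages (none of these are real Apple or Intel figures): dynamic power scales roughly with the cube of frequency, so a part that needs less power at a given clock can sustain a meaningfully higher clock inside the same thermal budget.

    # Toy model only: assume dynamic power ~ k * f^3 (P ~ C*V^2*f, with V
    # roughly tracking f). All wattages are invented for illustration.
    def sustained_ghz(freq_ghz, power_w, budget_w):
        k = power_w / freq_ghz ** 3          # fit the cubic to one data point
        return (budget_w / k) ** (1 / 3)     # clock the budget can sustain

    budget = 10.0                                          # hypothetical chassis limit, watts
    print(f"{sustained_ghz(4.0, 25.0, budget):.2f} GHz")   # chip A: 25 W at 4 GHz
    print(f"{sustained_ghz(4.0, 17.5, budget):.2f} GHz")   # chip B: 30% less power at 4 GHz

Swap in different assumptions and the exact numbers move, but the shape of the argument stays the same: efficiency buys sustained clocks, not just battery life.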

I'd even consider the conspiracy theory that Apple has been limiting their mobile CPU performance for the last few years to allow for a smoother transition to ARM. If they can outperform their current lineup even slightly, they can claim to have beaten Intel and then make their cooling solution more efficient again.


Everything you said is echoed in this LTT video from yesterday[0].

It does appear to be a very poor thermal design, but you can see that most of the issues appear to be from trying to make the user not feel the heat.

It feels a bit icky to leave performance on the table like that, since you paid for a CPU and are getting marginal performance from it. But I remember Jobs giving a talk before about how "Computers are tools, and people don't care if their tools give them all they can, they care if they can get their work done with them well".

That's not a defense of a shitty cooling solution, but it is a defense of why they power-limit the chips. When I got my first (and only) Macbook in 2011, it had more than twice the real battery life of any machine on the market. That meant that I was _actually_ untethered. That's what I cared about at the time, much more than how much CPU I was getting.

[0]: https://www.youtube.com/watch?v=MlOPPuNv4Ec


> Either way, put the same chip in a Macbook and in a Windows laptop and the Windows laptop will run hotter but with better performance.

The Windows laptop can't possibly run hotter than a Macbook, all of which reach the throttle temperature (~100 °C) almost instantly.

I actually agree with the artificially-limited-performance, artificially-worsened-thermals theory. The Macbook Air design is extremely fishy.


The laptop can be hotter if it dissipates more heat to the case. The reason Macbooks reach 100C fast is because they don't give off a lot of heat so that the case doesn't burn your skin.


I think all of that is because of "the triangle".

the vertices would be power, heat and perf.

Apple just hasn't shipped an ARM chip with high power and "excessive" cooling.


The article doesn't really say in what respect they'd outperform. The sentence following that claim mentions "50% more efficient", which seems likely because ARM chips usually use <10W of power while Intel U-series chips use 15-25W.


With geekbench scores, the A13 is outperforming some mobile i7 processors: https://gadgetversus.com/processor/intel-core-i7-8565u-vs-ap....

With a new A14 chip combined with intel stagnation, it wouldn't be too crazy to see a 50-100% increase, especially if you looked at multi-core scores.


While I certainly agree A13 is a fast chip, the more I've compared Geekbench scores the less convinced I am that the results can be compared between platforms.

Also, the first i7 chip was released in 2008. (The chip you refer to appears to be modern though, from 2018.)

To prevent any misunderstandings, I agree A13 has performance comparable to a modern laptop. I just don't think Geekbench is a right tool to measure that difference.


> Geekbench 4 - Multi-core & single core score - iOS & Android 64-bit

What does that even mean? The Intel platform is running Android?


It's entirely possible they installed Android on the Intel platform. Unlikely, but possible.


I had an Intel x86 Android phone from Asus a while ago, complete with Intel Inside logo on the back case. Not sure if they still sell x86 phones these days though.

https://www.gsmarena.com/asus_zenfone_2_ze551ml-6917.php


I assume the tests are the same regardless of platform.


They aren't, though. Compare windows vs. linux geekbench scores of identical hardware, they are extremely different:

Linux: https://browser.geekbench.com/v5/cpu/2638097 Windows: https://browser.geekbench.com/v5/cpu/2635048

Geekbench is a terrible benchmark.


You cherry picked a terrible example. Here are some more Windows examples with the same chip: https://browser.geekbench.com/v5/cpu/2628613 https://browser.geekbench.com/v5/cpu/2628613

In my own personal experience, running Geekbench on many machines, Geekbench scores are pretty close running between Windows and Linux on the same machine. At least for compiling code (my main use for powerful machines), their compilation benchmark numbers also fairly closely match compilation times I observe.


Your links are identical.


50-100% seems a little high. I think Apple has picked the low hanging fruit. This kind of performance increase is only possible in multicore workloads. Single core would be more like 20-30%. But maybe they can increase clock speeds with the new 5nm node.


They’ll be working with a more forgiving thermal envelope than in iOS devices too, though.


Real world? AFAIK, no, not yet. The issue is that traditional benchmarks don't really tell the whole story. IPC isn't directly comparable because ARM is ostensibly RISC (this is debatable, I'm aware) and Intel is CISC (also debatable). So until things like Adobe Premiere or Media Encoder are ported over (if they are), or similar real-world workloads like Blender, it's going to be really hard to compare. Even then, things like design power matter. A 15W Intel chip is likely to outperform a 5W ARM chip just because it has more thermal headroom. It doesn't mean that either is faster (IPC per watt goes up the more you downclock, for a variety of reasons).

If this sounds like I'm saying it's apples (no pun intended) to oranges... that's pretty much the case. Apple is likely not going to produce an ARM chip in directly comparable fashion to an Intel chip, e.g. equal cache, memory bandwidth, TDP (using Intel's calculation method), PCIe lanes, etc. Mostly because it doesn't make sense to do so, because Intel's chips are designed to be a component of the system while ARM chips are largely designed to be an SoC.


They need to prove they can do a few things before I'm worried.

* They need to ramp the clock speeds -- much easier said than done.

* They need to include 256 or 512-bit SIMD operations without breaking the power budget.

* They need to design a consistent and fast RAM controller (and likely need to add extra channels)

* They need to integrate power-hungry USB4/thunderbolt into their system.

* They need to add a dozen extra PCIe 3 or 4 lanes without ballooning the power consumption.

* They need an interconnect (like QuickPath or HyperTransport/Infinity Fabric) so all these things can communicate.

These are the hard issues that Intel, AMD, IBM, etc all must deal with and form a much bigger piece of the puzzle than the CPU cores themselves when dealing with high-performance systems.


I'd be very surprised by that figure unless it's multicore benchmarks. Many macbooks are still around the 4 CPU/8 thread marks, so there's a lot of space to improve there with multiple cores.


Why would you be surprised? Apple's chip design team is the best in the entire industry, funded by a virtually infinite bankroll and with a razor-sharp focus on developing chips only for Apple's products, so everything from microcode to drivers can be intimately optimized, like a gaming console.

The access to iOS and macOS means that they can profile every single app everyone runs and improve real-world performance, and they do.

If you are a company that supplies anything (software or hardware) to Apple or Amazon, and you are operating on >20% gross margins, Apple and Amazon will destroy you, and destroy your margins. You have to continuously innovate and can't just relax and chill and collect your margins, like Intel has been doing for the past 5 years, milking an advantage that has since evaporated.

Other than the modem, I don't think there's a component on the latest iPhones that has more than 20% margins for the supplier. Apple is ruthless. So is Amazon, when it comes to fulfilment.


Just because you have lots of money doesn't mean laws of physics suddenly are out to lunch.

Apple is using the same fabrication process as everyone else in a mature industry. There simply isn't a 2x improvement possible unless everyone else was staggeringly incompetent. Imagine Apple decided to build their own car, and made their own electric motors. Would you believe claims that their motors are 2x as efficient?


Agreed. A lot of people seem to think x86 has some inherent bottleneck that switching to ARM will magically bypass.

Single core performance is currently doubling every five years or so. We’re not about to see it double in the next 24 hours.


I've never understood this argument. For some reason there is this RISC vs CISC war debate, but it's not actually based on the reality of modern-day chip design. The idea is that the decoder consumes too much power and this design flaw cannot be fixed. Simplifying the decoder reduces power consumption and therefore allows higher clock frequencies. But when we look at the clock frequencies of ARM and x86 chips, we consistently see that x86 chips run at higher frequencies (roughly 4 GHz for x86 and 2.7 GHz for ARM). If there is a difference between ARM and x86 chips, then it's not in the decoder. The difference must lie in the microarchitecture of the chips, and therefore RISC vs CISC is no longer relevant because the ISA is by definition not part of the microarchitecture.

Apple's ARM chips probably perform much better than reference ARM chips because they are much closer to the microarchitecture that Intel uses. But Apple's chips still consume less power than the Intel chips used in laptops. The secret sauce is probably the fact that the SoC has cores with different performance and power profiles: big cores for peak-performance bursts and lots of small cores for energy efficiency while the device is waiting for user input.

I think Geekbench is measuring peak performance and not sustained performance, but the numbers are probably correct. Making a representative comparison between mobile devices and desktops isn't possible because of their different thermal profiles, but it's highly likely that Apple will go for a passively cooled design if they can.


x64 chips can run at 4-5 GHz because these are desktop parts that can drop 10+ W sustained on a single core if need be. Power-frequency scaling tells us that of course this is very inefficient (perf/W), but efficiency isn't everything; sometimes you just gotta go fast.


> double in the next 24 hours

there is a time-honored way to do exactly that.

Constrain the comparison.

"Double the performance" (of $899 to $901 laptops with 2560x1440 resolution)

"Twice as fast" (as all laptops with thunderbolt 3 and no sd card reader)


That's a fair comparison though, if you're a Mac user and this machine is twice as fast and gets 5 more hours on a charge, you'll probably be persuaded to upgrade. If you aren't a Mac user I don't think they need to convince you now, they need to convince the developers and artists to stay on the platform.


sooo, should i NOT invest in a $2K 16 in MBP to do mostly multiple heavy web apps?


I'd say just buy it if you need it now. Web developers sometimes need to use Docker and VMs for various development and testing tasks. Sure, you can run Docker on ARM, but not all Docker images have an ARM version available. Running Linux and Windows VMs will probably be a challenge as well, at least until we know how well the x86 emulator runs on the new ARM laptops.


That's one use-case where they absolutely shine, in my experience.


For one, because we've had evidence of this for a few years now - that Apple's Arm chips can beat Intel's chips. It was mostly ignored because "but that's just peak performance - not sustainable in a phone/tablet envelope."

Yeah, until they put the same chip, or better in a laptop envelope. Then what for Intel?

This was from 2016. People being surprised by this haven't been paying attention to where this was going:

https://www.extremetech.com/mobile/221881-apples-a9x-goes-he...

Also, as I've said many times before, Intel trying to be as misleading as possible about the performance of its chips, such as renaming Intel Atom chips to "Intel Celeron/Pentium" and renaming the much weaker Core-M/Y-series (the same chips used in the Macbook Air for the past few years) to "Core i5", is going to bite them hard in the ass when Apple's chips show that they can beat an "Intel Core i5". And Intel kept making this worse with each generation.

https://www.laptopmag.com/articles/intel-renames-core-m-core...


I'm suspicious, but for the pain it's going to cause it had better have some real benefits.


Out of curiosity, what pain are you expecting?

Maybe I'm spoiled because my development is mostly Java, Python, Go, Docker, etc., all of which should have no runtime problem.

In terms of IDEs, it looks like VS Code is ported, which is great.

I see two projects in my workflow: IntelliJ and iTerm2. I can live with a vanilla terminal but I'll need IntelliJ. It looks like there's some[1] discussion around support.

1. https://github.com/JetBrains/intellij-community/commit/db531...


How do you expect Docker to run fine when it doesn't even run well on x86 OSX? There are constant performance issues (https://github.com/docker/for-mac/issues/3499) because docker-on-mac is basically a VM running Linux.


Docker is a Linux tech tied to features in the Linux kernel. You don't have cgroups and the like in XNU (the OSX kernel) to be able to run Docker natively on OSX.


It's OSX's fault then. Windows can run containers natively now as long as the base image is Windows too. No reason why OSX couldn't extend support either.


I would suggest that the VAST majority of Docker usage on Windows is not “windows containers” but Linux-based containers in a VM, be it WSL or Docker for Windows. The promise of Docker is that a container image built on machine A will run on machine B. That relies on them running the same operating system in some capacity.


I don't expect it to be too painful. Most common software has already been built for ARM on Linux. A lot of things have even been built for Apple's ARM chips running iOS, though the App Store restrictions have limited the usefulness of these builds. Unmaintained closed-source software will stop working, but most of those applications were already killed by the removal of 32-bit x86 support in Catalina.

The biggest potential issue will be how people get hardware for doing the builds. Hopefully Apple will provide a good way of cross-compiling or a new machine that is suitable for datacenter deployment (ARM based mac mini?). We'll have to wait for the WWDC announcement to find out.


You mention Docker, which most definitely will be impacted by this change. Docker on Mac runs off a virtualized x86 Linux instance. Docker only announced ARM support last year, I think; I can't imagine Docker for ARM would be anywhere near as fully supported as it is for x86, and you certainly won't be able to build x86 images to push to x86 servers without using some sort of emulation layer, which will be horribly slow compared to current macOS x86 Docker.


Currently running Docker using WSL2 on an ARM laptop. So it's pretty much native Linux; as an anecdote I can tell you that it's just that much more stable than with my Mac or with a traditional Windows laptop. The only downside is there are a few Docker containers that you have to hunt around for to get the ARM64 version. The only way I see Docker performance being better or more stable than this is to switch to a Linux machine entirely.


Yes - that's what I was alluding to in my original reply, re: lack of support, lack of images. Docker running on ARM might be great, but one of the original purposes of docker was to build and run the exact same images that would run on your servers, which will no longer be the case unless those servers are ARM based as well.


What laptop?


Surface Pro X


I use ARM64 Docker on my Raspberry Pi 4s, it’s excellent and loads of images are already available.


But, for instance, emulating an ARM Raspberry Pi on a fairly powerful i7 through qemu is... an exercise in patience, to say the least. From my experience compiling the same codebase on the host system and the emulated Pi, it is almost 10x slower. So I'm not holding out hope for ARM having powerful x86 emulation.


Use qemu user mode with a chroot, it's much faster.


I'm using https://github.com/multiarch/qemu-user-static - that's what it is, right?


Yes, indeed.

But instead of docker, just debootstrap the amd64 release under a directory, copy qemu-amd64 to $DIR/usr/bin and chroot into that directory. Docker is a bit of a mess.

Before chrooting, bind-mount:

/dev to $DIR/dev

/proc to $DIR/proc

/dev/pts to $DIR/dev/pts

/sys to $DIR/sys

/home to $DIR/home

copy /etc/resolv.conf to $DIR/etc/resolv.conf

Then chroot and log in (su -l yourusername). That way you can try running a lot of software.


okay, will try to see what that gives. I'm curious as to why it would be slower (when taking into account that an equivalent docker container, say running debian stable, but with the same architecture as the host, runs in more-or-less the same time as building directly with my host's GCC)


> but with the same architecture as the host

debootstrap an ARM rootfs, please.


Yeah, but that's Linux. Docker works great on Linux distros.


Yep - I make no claims as to how well Docker works on an ARM Mac.


You are probably going to have a great deal of pain with Docker. If you use Docker for Mac, it basically uses Hypervisor.framework to run a slim x86_64 Linux virtual machine. Rumors suggest that Hypervisor.framework won't emulate x86_64 in the next version of macOS. We'll find out tomorrow. So you'll either set up docker-machine with a x86_64 based remote server to run your docker commands, or you'll just use ARM versions of the various docker images. I don't know where you get your images from, but many of them won't be available for ARM.


There's a surprisingly large number of ARM docker images. You can partially thank the Raspberry Pi for that.


Is that ARM v6/v7 32-bit or ARM v8 64-bit? Most Raspberry Pis run a 32-bit userland (armhf).


Plenty of ARM64-architected images available. Lots of us using 64-bit Ubuntu.


Yeah, but then you have the consistency issue which Docker aimed to get rid of. Your local setup won't replicate what runs in production. Enterprise infra isn't going to switch to ARM in the next 1-2 years as Apple hopes to.


I bet that all popular docker images will be available for ARM very soon. It's a matter of recompiling.


What are the biggest benefits of iTerm2? I use it but honestly I don't see much improvement over Terminal.


Probably tmux integration, but the speed decrease relative to Terminal doesn't warrant the additional features, for me at least.


A trick I recently learned is to add iTerm 2 to the “Developer Tools” section of the Security prefpane - it makes it fly. Similarly if you’ve ported your config through several versions of iTerm, ensure that GPU-accelerated rendering is on.

It’s no stretch to say that iTerm 2 is a major one of the reasons I cannot stand desktop Linux (or Windows, even with the new terminal there).


For me, it's the little things such as ability to undo accidental terminal tab closing with cmd+z, restoring sessions after restart so I don't need to reopen a bunch of tabs and cd manually, ability to show timestamp for each command run, etc.


VS Code is not yet ported to ARM. There are bootleg builds, but they do not work with many extensions; for example, Remote SSH relies on a project that sits below the VS Code project proper, and that (apparently) is a fair bit of work to bring to ARM (though it's being worked on) [0]

[0] https://github.com/microsoft/vscode/issues/6442


You are wrong. I currently use VS Code insiders daily on ARM with all my extensions working.

https://code.visualstudio.com/updates/v1_46#_windows-arm64-i...


I second this; of course, VS Code needed Electron, which needs Node, all ready for ARM https://github.com/nodejs/node/issues/25998#issuecomment-637... Also of note, Edge is available on ARM. Very happy Surface Pro X user here.


I second Edge working fantastic on ARM! I have a Galaxy Book S.


Really? That is GREAT NEWS!!!

EDIT: It works great, and it has remote for ssh and wsl!!! This made my day!


out of what you've listed, docker is probably the biggest issue -- there is support for multiarch containers but migration has some pain points depending on your setup; also, docker will have to be made to work on the arm build of macos, assuming virtualization is supported and speedy


What about legacy software and software provided by a vendor? Let's say I want to run Fusion 360 and Altium on my hypothetical ARM Mac. Can I do that today?

Outdated information: Windows on ARM requires you to build UWP apps which is more work than just a recompile.


I wonder how this will impact developers who primarily use a MacBook for development. A lot of the compile toolchain is optimized for Intel-based x86 CPUs. If buying a Mac means that I end up with a slower build every time I compile, I would buy a Windows machine instead.


I see no reason why compilation should be slower on ARM. Compilation typically doesn't use any Intel-specific instructions (intrinsics). As long as the toolchain is recompiled for ARM (which it will be; Apple makes clang) it should be very comparable. If anything, it will be faster on ARM because the ARM chips will have more cores, and compilation is very parallelizable for larger projects.
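For what it's worth, here's a minimal Python sketch of the fan-out that make -j or ninja do for you (throwaway files generated on the spot; assumes a cc binary is on your PATH):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    # generate a few trivial, independent translation units to compile
    sources = []
    for i in range(8):
        src = Path(f"unit_{i}.c")
        src.write_text(f"int value_{i}(void) {{ return {i}; }}\n")
        sources.append(src)

    def compile_one(src: Path) -> str:
        obj = src.with_suffix(".o")
        subprocess.run(["cc", "-c", str(src), "-o", str(obj)], check=True)
        return obj.name

    # worker count is derived from the CPU count, so more cores means more
    # translation units in flight at once
    with ThreadPoolExecutor() as pool:
        print(list(pool.map(compile_one, sources)))

Real build systems add dependency tracking on top, but the scaling story is the same: independent compilation units keep every core busy.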


Apple does not make LLVM on their own, they contribute to it like many other companies.


Clang (not LLVM) was made by Apple [1, 2]. They specifically hired the lead developer of LLVM to build it because they were not happy with GCC. At this point many other companies have also contributed, but Clang is first and foremost developed as Apple's compiler. It is a safe bet that they will optimize it for new Apple hardware.

[1] http://llvm.org/devmtg/2007-05/09-Naroff-CFE.pdf

[2] https://lists.llvm.org/pipermail/cfe-dev/2007-July/000000.ht...


> Clang (not LLVM) was made by Apple [1, 2]. They specifically hired the lead developer of LLVM to build it

No, one cannot claim "Apple makes Clang" (which was your original claim) just because they funded the initial effort many years ago. It is not their product and they do not control it. It is like saying Blender is made by the original company who developed the initial version.

The LLVM Foundation (a legal entity) is the actual owner.

> because they were not happy with GCC

Many companies are not happy with GCC due to the license. Which is why so many companies work on LLVM.

> Clang is first and foremost developed as Apple's compiler.

False. The Clang version that Apple includes with macOS is not even close to the latest Clang.

> It is a safe bet that they will optimize it for new Apple hardware.

False. Optimizing is mainly the job of LLVM, not Clang.

Further, most optimization passes are independent of architecture. Codegen targets Intel, AMD, ARM, etc. hardware in general, not Apple’s in particular.


It is different enough that cppreference has its own Apple-specific Clang column.


Compilation should be significantly faster on an A14-based MacBook, since an unthrottled A13 is already within 10% of a 9900K for GCC.


I'll believe it when I see it.


Geekbench has a clang benchmark. Randomly picking from their public results data, looking at the "Single-Core Performance, Clang"

• iPhone 11 Pro Max - 1413 (11.0 Klines/sec) https://browser.geekbench.com/v5/cpu/2634126

• 9900K - 1270 (9.89 Klines/sec) https://browser.geekbench.com/v5/cpu/2633843

The A13 is really impressive.


According to Geekbench a 9900K also beats a Ryzen 3700X in Clang. However when those two CPUs actually face head to head in an actual compilation benchmark (such as compiling LLVM), the 3700X easily & consistently beats the 9900K: https://openbenchmarking.org/embed.php?i=1907064-HV-CPUBENCH...

Or from another source, this time compiling Chromium (which also uses Clang): https://www.gamersnexus.net/images/media/2020/cpu-methods/1_...

And these are both 8-core / 16-thread CPUs, no multi-thread brute force shenanigans. And the 3700X is even the lower TDP & lower measured power consumption part, so no sketchy turbo games either.

So don't put too much faith in Geekbench numbers. They seem to have a very weak relationship to reality.


> Clang is a compiler front end for the programming languages C, C++, Objective-C, Objective-C++, OpenMP, OpenCL, and CUDA. It uses LLVM as its back end. The Clang workload compiles a 1,094 line C source file (of which 729 lines are code). The workload uses AArch64 as the target architecture for code generation.

729 LOC within a single file doesn't represent ANYTHING about real-world compilation. The whole file can be read in a single system call, and the entire thing fits within L2 cache (maybe even within L1 cache).

https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf


Why did you choose to ignore the 2.22x faster multi-core score of the 9900k?

Tests like these are also very questionable if they are constrained to sub-60s microbenchmarks, which ignore power-limits and cooling.

I'd rather wait for the actual product in a comparable package (e.g. laptop chassis vs. whatever unspecified environment the Geekbench test was run in) before drawing conclusions...


Nobody's drawing conclusions here. All I'm saying is that it isn't crazy to think that an A14 part will be competitive with Intel's lineup, especially from a core performance perspective.

Power limits and memory hierarchy both benefit the 9900k heavily in this comparison, which is why I went with the single-core score. I'd be surprised if an iPhone 11 Pro Max will floor all its cores even for a short microbenchmark. A desktop/laptop A14 will likely look much more competitive in these respects.


>Power limits and memory hierarchy both benefit the 9900k heavily in this comparison, which is why I went with the single-core score.

But then you're leaving out the actual interesting scenarios for the question of Apple ARM performance in laptops/desktops.

We've already known for a while that the A1x family is an IPC beast.

But it's sustained power and memory subsystem performance that are the hard part in scaling up.


I agree with you regarding the methods, but wouldn't power/thermal limits favor the ARM chip?


Because the benchmark is currently the only way to compare two devices using different chassis and give an answer to a specific workload.

Yours is nothing but theoretical.

The multi-core benchmark compares an 8-core with a dual-core, which is a bad-faith comparison.


> The Multicore benchmark compares a 8 core with a dualcore which is a bad faith comparison.

How so? Are we comparing real-world performance or artificially limited what-if-scenarios? That's my critique with these benchmarks in general.

I don't care what a single A13/A14 BIG-core does in a short burst load situation when the context is code compilation that takes a minute or more on a system with multiple cores.

Multi-core is a bad faith comparison you say? I'd say it's an Apples-to-apples comparison if anything, since I usually don't artificially restrict my machine when trying to get work done as fast as possible.

So I'd argue that it's actually your perspective that's just theoretical: oh look - this dual-core [technically hexa-core while we're on the topic of being theoretical] SoC is faster than this octa-core, provided we choose a short, bursty workload and limit ourselves to a single core. You know, as one does when compiling Chrome or any big software...

Like, contexts where compilation times actually matter, as opposed to "compiling this small utility takes only 0.89s on the A13X as opposed to 1.2s on the i9". I know, I know, the difference adds up quickly - to a whole hour after just 11,600 compilations. Theoretically, that is. sigh


> How so? Are we comparing real-world performance or artificially limited what-if-scenarios? That's my critique with these benchmarks in general.

Remember the point of this thread: speculating on how fast ARM chips will be for compilation workloads compared to Intel chips when shipped as part of Macbooks. It seems that you are mistaking the point of the parent poster - nobody is arguing that an iPhone is better at compiling code than a desktop with an i9 chip.

The comparison of an 8-core chip to a current 2-core chip is pointless because it is an artifact of comparing a $500 stand-alone chip to a chip that is sold as part of a complete $400 phone. Adding more cores is easy - it just costs more money, and Apple has opted not to do it for the iPhone because it is not necessary for standard iPhone workloads.

It is a very reasonable assumption that the chip that will ship with a Macbook will have at least as many cores (but probably more as the individual cores are cheaper) than the equivalent Intel chip.

> I don't care what a single A13/A14 BIG-core does in a short burst load situation when the context is code compilation that takes a minute or more on a system with multiple cores.

ARM chips are more energy efficient and run less hot than Intel chips. In an equivalent chassis they have the advantage in this situation. The intel chip will overheat and throttle faster.

> So I'd argue that it's actually your perspective that's just theoretical: oh look - this dual-core [technically hexa-core while we're on the topic of being theoretical] SoC is faster than this octa-core, provided we choose a short, bursty workload and limit ourselves to a single core. You know, as one does when compiling Chrome or a any big software...

Again, it's speculation. Obviously it's theoretical. The point is that current benchmarks indicate that ARM chips are very good for compilation workloads - relatively comparable to Intel chips for a much lower cost.


It's not a bad faith comparison if the vendor forces you to make it. If only 2 cores are available then that's something the vendor is to blame for.


I wonder how it will affect power-hungry software like Photoshop and After Effects.

When Apple moved to Intel the unintended effect was that Adobe et al finally optimized their software for Intel CPUs to the benefit of non-Apple users. Maybe this move will just have all those graphics people switch to Windows.


I'm pretty sure Apple wouldn't switch the Macs to ARM chips if it wouldn't allow for major performance improvements. Or, if not "absolute" performance, then performance per Watt or price/performance - but if ARM proves to be worse than Intel on "raw performance", they will probably only switch the low-end models and keep e.g. the Mac Pro on x86 for now (or switch it from Intel to AMD, after all Intel is currently struggling to keep up with them). And I'm sure Adobe et al will happily optimize their software for ARM as well...


Photoshop already runs on arm.


> If buying a Mac means that I end up with a slower build every time I compile, I would buy Windows.

Isn't this already the case, especially for docker, due to FS perf on OSX ?


I don't know this, but I suspect any pro users who want absolute performance may be left with an uninterested apple.

High-end desktops and cloud systems are getting really amazing performance, with 32 and 64-core threadrippers compiling the linux kernel befo[compile complete!]re you can finish your thought.

I think this is a by-product of amd and intel serving the cloud market, and it just happens to benefit the high-end desktops.

Would apple invest in a many many core pro machine just for the (whatever low number) of pro users they have?


There is a many-many-core pro machine: the Mac Pro. It's quite a powerful workstation. Another alternative is the iMac Pro, which is quite powerful as well.


But that is Intel! Intel does do a 24-core chip for apple. (and amd does a 64-core chip, though not for apple)

I'm saying - if apple does ARM, will they make a 24-core arm chip?


My theory is that they will switch low power devices like Macbook Airs to ARM chips while keeping high power devices on Intel for now. They will sell millions of those low power devices, but designing a competitive workstation chip takes a lot of money and they won't sell enough Mac Pros to offset that cost. Intel makes those CPUs for servers and basically rebrands them for workstations. Apple is not in the server business.

Maybe they'll just discontinue powerful Pro devices at some point and that's about it.

Maybe they'll manage to create powerful CPUs using a chiplet design, but I have absolutely no idea whether that's cheap or not.


Can you give some examples of tools optimized for x86 that would run worse on ARM?


I was once told that Reaktor, a widely used software synthesizer, works by JIT'ing x86 code. Pretty sure that any widely used media creation software will at least have some SSE/AVX routines (or use Intel MKL) which will have to be rewritten.
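To make that concrete, here's a minimal sketch (illustrative only, not taken from Reaktor or any real product) of the kind of hand-written SIMD routine that can't simply be recompiled: the same 4-wide float add expressed with x86 SSE intrinsics, with the ARM NEON rewrite shown as comments.

    #include <cstdio>
    #include <xmmintrin.h>  // SSE intrinsics - x86 only

    // Illustration only: this kind of routine has to be rewritten against
    // NEON (or routed through a shim like sse2neon) to run on ARM.
    void add4_sse(const float* a, const float* b, float* out) {
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));
    }

    // The NEON equivalent (compiles only on ARM, via <arm_neon.h>):
    // void add4_neon(const float* a, const float* b, float* out) {
    //     float32x4_t va = vld1q_f32(a);
    //     float32x4_t vb = vld1q_f32(b);
    //     vst1q_f32(out, vaddq_f32(va, vb));
    // }

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
        add4_sse(a, b, out);
        std::printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    }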


ARM SVE/SVE2? For some software it may be a matter of core libs or the compiler. https://developer.arm.com/tools-and-software/server-and-hpc/...


Any virtualization stuff using VT-x or AMD-V? Not really sure if there's an ARM equivalent for that.



I've already been pushed away from Apple after the death of 32-bit. I tried the 2020 MacBook Pro but I ran into many problems, including software support, constant beach balls, and shorter battery life than my old laptop. I assume most of those issues are Catalina not the hardware. I'm sticking to my 2013 MacBook Pro until it breaks or I find something better.


It seems as if some kind of strategic direction change to lifestyle usage was made about five years ago. Keyboard, ports, Catalina.


Apple have always favoured the individual user over the enterprise, it's what differentiated them from Microsoft.


But that's not what I was talking about. Whether you're a freelancer, a novelist or a technical writer in a megacorporation, you need a good keyboard, not an itunes subscription.


I'd argue it's the same, right now I have Apple Music and Xcode open.

Also I'm pretty sure nobody wants a crap keyboard.


Even with ARM, a Mac would still be the only way to get a laptop with a HiDPI screen and an OS that supports HiDPI well, a good touchpad, and a Unix environment as a first-class citizen.

Linux on laptop doesn't work that well, especially if you want a nice (HiDPI) screen. Microsoft is trying with the Linux emulation, but they are creepy with their snooping and telemetry, and will autoinstall Candy Crush and other shit.


I haven't really had any issues with HiDPI on Linux for a few years. Not saying that it's as smooth as macOS, but certainly not a huge issue.


MacOS does quite a bit of telemetry as well.


Really? What should I be turning off?


Anything with “analytics” in Settings, if it bothers you, I guess. Catalina bricked my iMac recently, so I can’t see what I did myself.


- HiDPI screen - Surface has this

- OS that supports HiDPI well - Windows 10 has good support for this now

- good touchpad - Surface touchpads are very close to Apple ones these days and I'd argue the key part of keyboards are superior as is the touch and pen support.

- unix environment as a first class citizen - WSL2 (while Linux not Unix) is a first class citizen.

How is Microsofts telemetry any more "snooping" than Apples telemetry? I mean they both do it...


The surface screens are nice, but mixed DPI screens on Windows are a complete mess - even Linux handles it better.

Surface keyboards are mushy and imprecise - frankly not even as good as the one on my iPad Pro, let alone the Macbook Pro which is to date the only keyboard I can type on for a whole day without significant RSI flaring.

WSL2 is absolutely not a first class citizen - right now I have to opt into a build with _required_ telemetry to use it. In fact the Microsoft telemetry and update regime is a disgrace to the company (apparently outside of enterprise SKUs which also do not support WSL2) - if you think they’re comparable to what Apple collect I don’t know what to tell you.

And Candy Crush with advertising in the start menu... JFC someone needs to be fired for that.


- I haven't had any issues with my multi monitor setup and multiple DPI values recently.

- Subjective. I love old Macbook keyboards. I hate new ones.

- WSL2 is available on mainline Win10

- Yeah


AFAIK all other touchpads still use a physical mechanism for clicks. I would not want to go back to that after getting used to the haptic trackpads.


WSL is cool, but definitely not "*nix as a first-class citizen" status.


I run Ubuntu on both a 4k laptop and a desktop. Zero issues out of the box.


I've never owned a HiDPI screen, but I know each of the displays in X11 has an affine transform that you can tweak with the xrandr command. There's also .Xdefaults of course, but you're probably going to have to put a lot of stuff in there by comparison.


Apple has a tactical advantage over Microsoft and even server-side ARM like Graviton: its LLVM toolchain integrated into Xcode for both Objective-C and Swift. Apple also has a precedent for a seamless CPU architecture transition: the iPhone's 64-bit transition. It is much harder to cross-compile seamlessly with traditional GNU C or Visual Studio toolchains.

The App Store requirements make it easier to control this transition on iOS than macOS. I wonder how many Brew apps will make the transition seamlessly?


I really don't see any "tactical" advantage. MS has similar toolchains. MS has also done a similar 32-bit to 64-bit transition. And to top it off MS has done it on a PC, and not an smartphone where people do not depend on running "random" binaries.

The fact that MS has also tried several times to switch to ARM but no advantage has ever materialized also casts a huge doubt on the tactical advantages of an ARM migration. The power savings are not apparent: neither the Surface RT nor the Surface Pro X has significantly better battery life, likely because they (very much like Apple will do) decided to reduce thickness. The performance is not there, as both devices were reviewed as slow. And the cost advantage is also not there, seeing that the Surface Pro X was more expensive than most of the Intel Surface Pros, at least at launch price (and we can certainly expect Apple not to lower prices, either).

I am guessing MS had more margin on the ARM devices, and that's about it, but it makes for a hard sell for users. Also, those margins will evaporate as they are forced to reduce the price of the SP X.

Apple's previous migrations depended on the _huge_ performance improvement of the newer platform. Up to the point where porting programs to the new platform was not even always required, as the emulator would be performant enough to still show a performance improvement overall. In this case, the new platform has likely similar or slightly lower performance, and we care much more about power consumption, so emulation cost will be very noticeable and users will penalize non-native binaries heavily.

Apple has the advantage of the huge collection of iOS apps but I also hesitate whether this advantage will transfer to the MacBook or not. Most people do not buy MacBooks to run oversized iPhone apps. They already have the iPad for that.


Most of Homebrew upstream already runs on ARM Linux: Raspberry Pis, mobile phones and so on. Reworking the build process may take some time, but packages themselves should support multiple architectures.


Also Bitcode[1] which was introduced 5 years ago for iOS, watchOS and tvOS. I have a feeling they'll be making app binaries built with `-fembed-bitcode` optional on the Mac App Store and eventually mandatory.

[1]: https://help.apple.com/xcode/mac/current/#/devbbdc5ce4f


The idea that Bitcode will allow porting from x86 to ARM was dispelled by none other than Chris Lattner himself during an interview on ATP.

https://atp.fm/205-chris-lattner-interview-transcript#bitcod...

John Siracusa: The same thing I would assume for architecture changes, especially if there was an endian difference, because endianness is visible from the C world, so you can’t target different endianness?

Chris Lattner: Yep. It’s not something that magically solves all portability problems, but it is very useful for specific problems that Apple’s faced in the past.


You seem to be assuming that the CPU on iOS devices are running in big-endian mode. ARM CPUs support both big endian and little endian modes and ARMs on Apple i-devices have been running on little endian mode for as long as they've been around. There is no endian difference gap to bridge between ARMs on iPhones and x86 CPUs.

Given this, it's quite unlikely that Apple will go big endian in their Desktop/Laptop ARM CPUs.

Of course, Bitcode won't solve all portability problems, but it does solve many of them in this specific case.


Read the transcript around the quote. It’s not just the endian differences. He said that it basically only helps with things like minor optimizations and added instructions.


What has endianness to do with porting from x86 to ARM? Typical ARM implementations/ABIs use little endian mode.


The context of the answer was:

John asked:

And the more prosaic version: it doesn’t mean it’s CPU-agnostic, all it means is that Apple has slightly more freedom to change instructions on the CPUs that they do target? [10:30] What advantages are there of compiling something to Bitcode and then uploading it somewhere versus sending someone a binary?


Ever heard of this little thing called .NET? Or PocketPC for that matter?


I have indeed heard of both. In fact, I worked on a 3rd party product that Microsoft highlighted at the introduction of PocketPC. Some products make very specific target architecture decisions. MS Surface tablets suffered due to code that was not .net or was not possible to port easily.


This is a typical corporate-type view of it, i.e. that it's the least profitable segment so it doesn't matter.

What it misses: developers use Macs to build stuff => it's easy to make ARM-compatible applications => the server (most profitable) domino falls.

I can imagine moving straight to ARM processors if it's easy enough to work on and AWS/Google has a deployment option.

The dominoes can cascade really fast, particularly in terms of where the new demand for chips goes, vs. the existing demand that will just keep running as it is now.


AWS will offer ARM based EC2 instances. If Intel doesn't come up with a drastic turnaround, they are going to be Nokia'd.

https://aws.amazon.com/de/ec2/instance-types/a1/


Indeed, as developer I use Macs to build iOS and macOS stuff.


Most developers using Macs don't touch iOS and macOS dev with a 10-foot pole.


Those "developers" shouldn't complain that Apple doesn't care about them, as they should have given their money to a Linux/BSD OEM.

Apparently only UNIX has developers, the rest of us are something else.


I'm looking forward to an ARM laptop, especially if it includes their latest GPU. That's excellent power usage and graphics performance.

I expect good compute performance, good graphics performance, better battery life, and (hopefully) better price range. We'll see what's in store for us tomorrow.


Are you thinking in comparison to Intel integrated graphics a la MB Air performance?


Whose latest GPU?


Apple's.

It's as powerful as a Radeon RX 5500M, which is incredible when you consider that the A13 is passively cooled and the RX 5500M is actively cooled.

It's also 40% as powerful as the RTX 2080 Ti, the best-performing consumer GPU on the market, which draws hundreds of watts and is very well cooled.

By some quick maths, performance/watt is an order of magnitude better on the A13 than GPUs from AMD and NVIDIA.


> 40% as powerful as the RTX 2080 Ti

Yeah... going to need to see the work on that one.


3DMark - Ice Storm Unlimited Graphics Score 1280x720 offscreen:

Apple A13 Bionic: 208697

RTX 2080 Ti: 521458

https://www.notebookcheck.net/Apple-A13-Bionic-GPU.434833.0....

https://www.notebookcheck.net/NVIDIA-GeForce-RTX-2080-Ti-Des...

Keep in mind the A12Z drives a 2732×2048 display, at 120Hz (iPad Pro). That is a beast to drive for the GPU.

A 2080 Ti can do 4K at 80fps (Rise of the Tomb Raider).

For another source, AnandTech says the A12Z is a Xbox One S class GPU: https://www.anandtech.com/show/13661/the-2018-apple-ipad-pro...


You may want to check out those other benchmarks.

Ex: GFXBench 3.0 - Manhattan Onscreen OGL: the GTX 1070 beats it by 508%.


Yeah, it's not going to be 40% as powerful as a 2080 Ti when it comes to real-world performance. It might do OK in special cases.

Going from the links he posted, it runs PUBG at 40 FPS and a mobile racing game at 30 FPS. Those numbers don't agree with the assertion that it's as powerful as a mid-range desktop GPU.


The resolution is too low to compare. You need to be GPU limited to test GPU performance.


Indeed, my 2080 Ti is about 30% faster than my old GTX 1080 at 1080p, yet almost twice as fast at 4K resolutions.

I don't have the exact numbers, YMMV


Wouldn't a fairer comparison be at a higher resolution? At this resolution it's probably not the GPU that is a bottleneck for a system with a 2080Ti in it.


Look, there are no unicorns and there is no magic. To get a certain level of performance (fps) you need a certain amount of power; even if the Apple stuff is twice as efficient (!), it'd still need to draw 50W+ to get anywhere close to 40% of a 2080 Ti.


Just driving regular UI stuff on a 4k screen is not hard - most low end notebook gpus can handle that easily. It's not comparable to running a modern 3d game at 4k.


>It's also 40% as powerful as the RTX 2080 Ti, the best-performing consumer GPU on the market, that draws hundreds of watts and very well cooled.

Keep hearing these extremely bold claims about iPhone and iPad chips. But if they're truly that powerful, why are we not filling compute farms with them? I mean, the 2080 Ti is $999 for the card alone, with much higher power and cooling requirements than the A13.

Or does none of this really hold up in real-world performance?


Are you going to really claim that the millions of iPads and iPhones out there aren’t the real world?

Comparing to esoteric highly specialized workloads is not apple’s game.


If you're going to compare it to a RTX 2080Ti then I'm going to compare it to the sort of professional work you would do with an RTX 2080Ti, not compare it to the millions of iPhones scrolling through Twitter feeds as a metric.

So much grading on a curve going on with iOS devices.


Exactly. As an analogy imagine if a car magazine compared a Lamborghini to a Toyota Corolla and said "the Lamborghini didn't take me down the road to the shops any faster than the Corolla, they're pretty much the same".


Using ARM processors combined with their own custom silicon will give Apple the potential to add features that can't readily be replicated by Intel-based laptops (as we already see in the phone market).

Does the same logic then hold in the datacentre? Will the ability to add their own IP mean that AWS, Google etc. can start to add new features (e.g. specialised accelerators) that would not be possible with Intel CPUs?


I think the big difference between what Apple does and what happens in the data centre, is that Apple builds custom silicon along with a full technology stack on top of it. They aren't throwing some ML specific stuff on their chip and then just hoping someone uses it - they've got specific applications (voice recognition, face recognition etc) that they're targeting.

It's much more difficult in the data centre where you have customers who are going to be writing their own code. You have to build something very generic, you have to build an API, you have to create a software eco-system of libraries and examples and a community of users, and even then lots of people will be worried about lock-in. It's absolutely possible, and Google are already doing custom silicon with the TPUs, but it's a very different challenge.


Fair point but there are examples where customers could be using AWS / Google API's that could hide the new hardware (voice recognition would be a good example).

I guess too that for Apple there is a huge advantage to having the custom silicon on the same SoC as the CPU whereas there is much less penalty for having it on a separate chip (as per TPUs) in the datacentre.


To a degree. But afaik Intel already will do custom extensions to Xeons for them and for many, especially larger-scale, accelerators it makes sense to not have them on the main CPU, so they have their own cooling and power delivery etc (although I don't know how open Intel is about giving them low-level access to system resources, e.g. the QPI bus, whereas a fully custom design could add the interfaces needed freely)


Won't it be funny if, after all this hype the past few weeks... Apple doesn't say a word about ARM or a transition...


What about the ARM impact on developers?

Will the Pro line of laptops continue on Intel or is it the whole line?

If it's the whole line, it's either going to be very good for developer tooling on ARM or it's going to be a nightmare.

I've been developing on Windows 10 ARM with WSL and it's pretty great, but it's not 100% there. I've had to switch back to x64 due to some tools not having ARM builds.


Ming-Chi Kuo (who has probably the best track record on Apple rumors) reports that this will actually come to the Pro line first.


That is really interesting. Most of the developers I know use Mac OS for nodejs tooled web development, and they use Visual Studio Code. VS Code hasn't got an official ARM build yet.


VS Code hasn't got an official ARM build yet

Yes it does. https://code.visualstudio.com/updates/v1_46#_windows-arm64-i...


Whoops, I missed this! Can't wait to try it on the Galaxy Book S when I get home! :)

EDIT: Vscode arm build works GREAT, and it has WSL and SSH remote extensions too! This is fantastic.

I can move back from my x64 machine for development again! I'm using Edge, Windows Terminal, WSL and now VS Code all with native arm on Windows 10, amazing!

Now... we just need Windows 10 ARM builds that run on Raspberry Pi and how much fun is that going to be!


Also note that Apple AArch64 (ie arm64) has a slightly different calling convention than Linux aarch64, which is going to influence pretty much any compiler/jit/runtime here.


Well, that depends. Deep inside a JIT it doesn't really matter what the calling convention is, because the code only needs to interact with itself and can use whatever convention it wishes there. Even outside of that, the changes Apple has made are fairly minor - a register here and there, aside from variadics.
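A minimal sketch of where the variadic difference bites, assuming what Apple documents for its arm64 ABI (variadic arguments always go on the stack, whereas standard AAPCS64 also uses registers); the sum function here is made up for illustration:

    #include <cstdarg>
    #include <cstdio>

    // A correct variadic definition: reads its arguments via va_arg.
    int sum(int count, ...) {
        va_list ap;
        va_start(ap, count);
        int total = 0;
        for (int i = 0; i < count; ++i)
            total += va_arg(ap, int);
        va_end(ap);
        return total;
    }

    int main() {
        // Fine everywhere: the call site and the definition agree.
        std::printf("%d\n", sum(3, 1, 2, 3));  // prints 6
        // The failure mode: calling this same symbol through a mismatched
        // non-variadic declaration in another translation unit, e.g.
        //   int sum(int, int, int, int);
        // Under AAPCS64 those ints land in the registers a variadic callee
        // happens to spill and read back, so it often "works" on
        // Linux/aarch64; under Apple's arm64 ABI the callee expects the
        // variadic arguments on the stack and reads garbage.
    }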


What no one appears to be talking about is that the move to ARM might lead to computers that are less power-hungry, thus freeing enough power budget for Apple to ship some of their nice LTE tech in MacBooks without compromising battery life too much. I bet many people would welcome that.

I use a Surface Pro X with LTE, having an always-connected, always-on machine is really nice.


To do that though, you’re acting like they would have to have a source that isn’t Qualcomm for LTE modems! Where would they get those from? Intel used to have a cellular business but then they sold it..oh wait...to who? Apple?!

Lol I’m kidding but exactly, I agree, plus they can make them in-house.


It's true that Apple has made two moves to different chips with success, but both times it was to a more capable and probably more powerful processor. When moving to PPC they allowed 68k apps/code to run without too much penalty. When moving to x86, there was Rosetta[0], which made the entire existing library /almost/ work seamlessly. Moving to ARM... this might not necessarily be true. Sure, you have Catalina, but iOS apps on macOS say nothing about all the existing apps for macOS. I have no idea of the number of existing apps, or the amount of work needed to move them to ARM. Very likely Apple did the math and it will be easier / there will be incentives for devs to move to the platform. They do have experience, though.

[0]: https://en.m.wikipedia.org/wiki/Rosetta_(software)


You can go back and look at PBS's Computer Chronicles from the mid 90's to see how much of a performance win the 68K to PowerPC transition was.

They mention in the introduction that a pre-press shop went from hours to run their bread and butter workflow to about 20 minutes.

https://www.youtube.com/watch?v=Ic0dkf1iFOY

The Intel transition wasn't really a win on performance, but it was a total win for laptops from a power draw and waste heat perspective.


The G5 and Core Duo were not that far apart, performance-wise. Certainly nothing like 68k vs PPC. Rosetta worked because most software isn’t CPU-bound; software that is got native versions very quickly.


Well, there you go. They just announced Rosetta 2 :)


The article mentioned the two Windows-on-ARM failures. I wonder, since Apple is actually a Windows OEM, whether they have looked at that and learnt from the experience as well.

Further, would an ARM Mac run Boot Camp? What happens to Thunderbolt, given Intel now allows others to use it (but on ARM)? Is Office compatibility still important to Apple, and if so, are the Microsoft and Apple product teams already testing an ARM version of Office on both Windows-on-ARM and Apple-on-ARM? (Given that some small subset already runs on the iPad Pro, or even the iPhone.)

But unlike Windows, Apple has been successful twice, and given that it can ignore our calls for CUDA compatibility and worries about future Premiere/Adobe... they will move. And we will move as well. Just not sure where we'll move to.

Sorry, lots of questions. And the impact on Intel... it is a side show and a side issue. Game on.


Something I don't get about Microsoft not porting its suite to ARM when releasing the Surface Pro:

What's so hard about it? Your code is supposed to use something like the C stdlib, which has obviously been ported to ARM. So what makes it so much harder than just recompiling everything?

Once the OS is ported, the system libraries are available, and the programming language has a compiler for the target architecture, I don't understand what's blocking it.


I've helped port a few open source systems to ARM. It's easy to write code which looks like C, but ends up being "x86ish C":

* Assume sizes of types, signedness, endian, that type of stuff (endian is particularly troublesome) - see the small example after this list.

* The threading model is very different, and it's very easy to write a lot of code which works on the x86 threading model and then explodes in interesting ways when put on ARM.
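A minimal example of the first kind of assumption (illustrative; plain char is unsigned under the generic AArch64 Linux ABI, while Apple's arm64 ABI, as far as I know, keeps it signed like x86):

    #include <cstdio>

    int main() {
        char c = -1;   // e.g. someone stashing EOF or a sentinel in a plain char
        if (c == -1)
            std::puts("plain char is signed here (typical of x86)");
        else
            std::puts("plain char is unsigned here (typical of ARM Linux)");
        // You can simulate the ARM Linux behaviour on x86 with the
        // -funsigned-char flag, which is a cheap way to flush out this
        // class of bug before porting.
    }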


And doesn’t ARM react quite badly to unaligned loads as well?

> endian is particularly troublesome

Arm is LE by default though.


ARM has a much weaker memory model than x86 for starters, so threaded programs which work on x86 (either accidentally or by specifically taking advantage of the memory model) might not on ARM.

I think unaligned accesses (e.g. packed structs) are also problematic: they're either super slow or they fault (which might lead to an OS routine emulating them in software, which is even slower; but if the OS does not emulate them, it'll just crash).
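A minimal sketch of the unaligned-access pattern (illustrative; on ARMv8 most plain loads are handled in hardware, but whether a given access is fine, slow, emulated, or faults depends on the instruction, the core, and the OS):

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    int main() {
        // Typical "x86ish" pattern: parse a 32-bit field out of a byte
        // buffer by casting a misaligned pointer.
        unsigned char buf[8] = {0xAA, 0x78, 0x56, 0x34, 0x12, 0, 0, 0};

        // Risky: buf + 1 is not 4-byte aligned. Always fine on x86; on ARM
        // this is where some configurations slow down or trap.
        // uint32_t v = *reinterpret_cast<const uint32_t*>(buf + 1);

        // Portable fix: memcpy. Compilers turn this into a single unaligned
        // load where the hardware allows it, and byte loads where it doesn't.
        uint32_t v;
        std::memcpy(&v, buf + 1, sizeof(v));
        std::printf("0x%08X\n", v);   // 0x12345678 on little-endian targets
    }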


There are a bunch of things which can differ between architectures which can't be reliably abstracted away by the standard library.

Memory coherency is one. Check out the table of what reorderings processors are allowed to do:

https://en.wikipedia.org/wiki/Memory_ordering#In_symmetric_m...

x86(-64) is very conservative, doing very little reordering. ARM is much more liberal, doing lots of reordering. The standard library may give you tools to write code which is correct across all architectures, by defining some portable memory model and then implementing that model on each architecture (in C++, std::atomic does that). But it's easy not to use those tools properly: you can write code which is incorrect according to the standard library, which works on x86 because it is conservative, but which fails on ARM because it is liberal. Detecting bugs like that statically is an open research problem, and cross-platform concurrency bugs are among the hardest there are to debug.
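As a minimal sketch of that trap (names made up for illustration): a writer publishes data behind a flag. With plain or relaxed accesses this is a data race that usually still "works" on x86 because of its strong ordering; on ARM the reader can observe the flag before the data unless release/acquire (or stronger) is used.

    #include <atomic>
    #include <thread>
    #include <cassert>

    int payload = 0;
    std::atomic<bool> ready{false};

    void writer() {
        payload = 42;                                  // plain store
        ready.store(true, std::memory_order_release);  // publish
        // With memory_order_relaxed here (or a plain bool flag) the program
        // is racy per the C++ model; it tends to pass on x86 because stores
        // are not reordered with stores, but ARM may make `ready` visible
        // before `payload`.
    }

    void reader() {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        assert(payload == 42);  // guaranteed only with release/acquire
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }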

There's more than just reorderings, too. A local wizard tells me "On ARM, you can have cores where there is no implicit "propagation"... i.e. if you write to some memory location from core "A", core "B" may never see those changes. You have to use synchronization primitives to make changes visible. This also means that the old dirty trick of volatile instead of proper synchronization can definitely fail here.". My guess would be that cores like that won't turn up on Apple laptops, but who knows.

Another example is vector operations. If you want to make use of explicit vector operations, rather than relying on compiler autovectorisation, you need to write code around the shapes and operations supported by the processor. Those are different on ARM and x86 - although i think the instruction set on ARM is better-designed than that on x86, so at least it might be easier to port from x86 to ARM than vice versa. Still, here's a taste of the sort of work it takes to make vector code work properly even on different ARM chips:

https://www.cnx-software.com/2017/08/07/how-arm-nerfed-neon-...


Would it be possible to somehow force the threads of an “emulated” Application to run on the same ARM core to simulate the more conservative memory model? Or would it be possible to somehow detect and guard memory thats shared across multiple threads?


You truly underestimate how many things still use raw assembly here and there. Intel provides a library called libhoudini for this very problem. Also, what would you have end users do? Somehow recompile arcane, sourceless binaries to ARM?


The Office source code has been ported to ARM multiple times. We're running that code in the iOS and Android ports of Office.


Yes, that's the only explanation that makes sense, but still, that surprised me a lot to know MS Office relies that much on assembly that recompiling to a different target arch isn't an option.


MS Office is ported, but it's a hybrid (CHPE). The executable looks to be x86 when examined in Properties, but if you look at the contents of the file on disk you'll find Arm code as well as x86.

MS did it this way so that any x86 extensions can continue to be run under emulation whilst the bulk of the application can run natively, according to: https://uk.pcmag.com/news-analysis/92340/microsoft-explains-...


Drivers on devices that get plugged in maybe.


It could be due to the fact that the Surface devices and the Office are developed by two different divisions in Microsoft. The group developing Office may not have a huge incentive to create ARM version just because another division decides to go with ARM processor on a new device.

Also think about anti-trust! If Microsoft had developed an ARM version of Office in secrecy for the Surface launch, it would not have given Office competitors sufficient opportunity to do the same.


On a side note, I wonder what this will actually mean for Macs. If the move to ARM is part of a strategy for unifying iOS and macOS, we can expect productivity to drop as macOS veers more towards the touch-and-fullscreen UI of iOS.

As far as I understand it, the OS is the main selling point of Macs and a substantial change in the user experience will probably alienate their core user base.


What it means for Macs, according to most analysts, is better performance at lower power levels.

Apple knows why the Mac is popular with technical people. They're not going to f*ck that up. I know this bugaboo of "they're going to turn the Mac into an iPad" comes up every so often, but I really see zero evidence of that.


There’s really no connection.

If Apple wants to unify iOS and MacOS a common chip architecture doesn’t really help them do that.

Anyway, I don’t think it’s going to happen. I understand the worry, but it just doesn’t make sense.

They might allow and support macOS running on iPads, though.


I think Mac OS will remain Mac OS. Apple even said it would.


I think that they'll unify internals between macOS and iOS, but interface will be different. Something similar to iOS on iPhone and iPadOS: OS is mostly the same, but UI is different.


It'll be interesting to see if they succeed. It seems developers aren't too keen on differentiating their UIs, but perhaps Apple's draconian App Store rules can counteract that.


> PS: For today, we’ll leave the impact of ARM-based servers and their greater thermal efficiency alone.

They shouldn’t have left this topic alone!

In the US 1/4 of developers surveyed on StackOverflow use macOS.

If they all switch to ARM and AWS sells them Graviton instances at a 40% discount, in what world are Intel data centers going to be necessary in the long run?

Intel should be absolutely terrified.


Everybody is focusing on the CPU architecture and the impact on CPU manufacturers. IMHO the risk for Apple is actually alienating people in the software ecosystem. It's also where the opportunities are.

If, like rumored, they are switching their pro line to ARM that will impact two groups of big spending customers (i.e. people actually spending many thousands of $ on hardware regularly):

1) Developers buying maxed out pro laptops for running IDEs, Docker, etc. I'm one of those.

2) Creatives using Adobe and other third party tool providers for 3D graphics, movies, photography, etc. This stuff is critical to their workflow and any hint of compatibility or performance issues will cause people in this segment to start considering other platforms or delaying purchasing decisions. I know people that bought the Mac Pro just before it was renewed because they needed it and there wasn't really anything else to buy for them that met their requirements even though it was 3 years out of date by then.

These segments are the ones where switching CPU architecture will hurt the most until such time that the tool ecosystem catches up. E.g. Adobe would have a lot of tools that probably will need quite a bit of work to run smoothly on ARM. It will be interesting to see how long that takes. The last few times Apple switched CPU architecture, it took Adobe a bit of time to switch and it provided an opportunity to MS, which was able to run Adobe's latest and greatest throughout the transition. And emulation is probably not going to be good enough here.

I'm a backend developer and the sole reason I'm still on a Mac is convenience. At this point it's neither the fastest nor the cheapest option. And, I can trivially get everything I use running on Linux or Windows (with the linux subsystem). Most of the stuff I use is OSS, cross platform (IDEs, command line tooling) & dockerized (databases, web servers, search engines, middleware, etc.).

All of that is x86 currently. Theoretically, ARM variants of the stuff I use could be created but in practice, this stuff does not yet exist or is kind of poorly supported/an afterthought at best.

Maybe, emulation of this stuff will be good enough. But still, I'm deploying on x86 and will be likely to want to test on that for the foreseeable future and not run different containers locally than in production. So, my workflow slowing down because of emulation is kind of a big deal for me.

So, (not so) hypothetically if I were to buy a new laptop right now, I'd be looking for something that supports my workflow going forward and that increasingly looks like either using Windows with the Linux subsystem or Linux (Ubuntu is pretty nice these days). Intel macs are still fine of course but not if there's this Apple will drop support in a hurry thing looming over it. I buy laptops with a 4-5 year useful life and Apple losing interest in anything Intel worries me when I'm going to be spending 3-4K on hardware.

The opportunities are also obvious: gaming & VR have so far not happened in the Apple ecosystem and I suspect a big part of the reason is Apple wanting to have their own hardware when they launch this stuff without dependencies on the likes of Intel, AMD, Nvidia, etc.

Also data centers eventually switching to ARM is something that is technically already a bit overdue. At this point most linux software should just compile and run on ARM. Mostly it's just market inertia. Data center supply lines just tend to be dominated by AMD & Intel and developers just happen to run x86 hardware.

So, long term this is definitely a smart move for Apple and I suspect they want to get this over with sooner rather than later. However, they do have their highend users to protect. A mac pro without Intel architecture would be a hard sell in the current market.

Unless of course they really nail high performance X86 emulation. I could see them dedicating a few extra cores to that.


I am in the same boat. We build a niche product that serves a Windows market (an add-on to Windows-only software). We use Macs for a variety of reasons and I'm terrified at the possibility of having to switch. If we don't get performant virtualization a la Parallels or VMWare Fusion for x86, my entire toolchain falls apart and I have no choice but to dump Mac. I'm sure there are a ton of other developers that depend on virtualization. I hope Apple has considered this and can talk about a solid path forward for x86 virtualization as soon as the ARM transition is announced.


> Creatives using Adobe

Adobe has been pushing hard to get 'real' versions of all their apps on iOS. LR on my iPad Pro often runs better than LR on my 2017 MBP for example. Adobe has also been pushing transitioning to cloud subscriptions so users are less likely to be stuck on old versions.

My guess is that Adobe is better positioned for a transition than they ever have been in the past.


Are they "real" though, iPad apps often feel better and smoother than desktop apps because you're not actually working with the real data they often downsample to fit within the small memory requirements.

If I drag a video file from my desktop into Premiere, I'm actually working with that video file. If I open a video file in an iPad video editing app, most of the time that file is transcoded to H.264 so that hardware acceleration can make it usable - thereby adding a layer of compression.

It's not the real file anymore.


The number of devs for iOS dwarfs the number of devs on macOS.

The ability to target iOS, ipadOS, watchOS, tvOS, and macOS seamlessly is going to be a game changer.


> Adobe would have a lot of tools that probably will need quite a bit of work to run smoothly on ARM.

Adobe is a lot more "cloud" nowadays (see what Autodesk did with Fusion 360).

I would be stunned if there isn't an ARM cloud Photoshop ready to roll.

Yes, this is going to hose over the people who stopped buying Photoshop once Adobe went subscription.


>I would be stunned if there isn't an ARM cloud Photoshop ready to roll.

I would be stunned if Adobe was developing new ARM anything and not just working on webtech versions of their suite.

Figma has proven that it's more and more possible each year and as soon as someone gets it right for Photoshop/Illustrator etc then they'll eat Adobe's lunch.


I feel like I've missed something. Why are there all these rumours of Apple moving away from Intel anyway? Do they have some sort of issue with Intel?

Otherwise I can't see the benefit in lower power CPUs for macs. If they really cared so much about reducing power usage and heat, then they wouldn't have shoehorned an i9 into their laptops


Because their iPhone chips are so obscenely powerful, MBPs are famous for running super hot, and they can make much better margins this way. Consolidating onto a single CPU and GPU platform is nice for developers as well.

We’ll have to wait until 10PDT today and see the points that are brought up in the keynote.


Because Apple, first and foremost, wants to be able to control its own destiny. Being tied to Intel has slowed them down on upgrading their higher-end laptops for years.

OTOH, they're having GREAT success with their ARM chips in mobile.

So, you know, you do the math.


If they do make the jump to arm, who else is excited to play games like "adventures in cross-compiling" and "which of my core dependencies suddenly doesn't work anymore?"

AMD chips are better than ever, throw a curve-ball and adopt them, please don't pick the weird mutant-mobile processor to seriously put in desktops Apple


Don't think Apple will be putting mobile processors into desktops - expect they will be processors designed for desktops with comparable or better performance than the Intel processors they replace.

Not sure what is weird about 64 bit ARM architecture - it certainly doesn't have the legacy (dating back to 8086 in 1978) that x86 carries with it.


Hell, you can find x86 instructions and concepts dating to the 8080 from 1974, and probably to the 8008 from 1972 (although I'm not familiar enough with it to confirm).

It's funny how no-one has been able to dethrone x86 despite decades of effort from industry rivals and even Intel itself with iAPX and then Itanium.


Weren’t ARM processors and their memory, power-performance, threading, etc designed around primarily mobile workloads though?

I’d be happy to be wrong here, that’s just my understanding.

What would be the benefits of ARM, especially for developers? (Will the likes of NumPy run more efficiently? Will Rust compile faster?) I know the iPhones and iPads have surprisingly powerful chips in them, but (as an example) the new AMD chips are amazing, and they at least won't break my dependencies wholesale.


It is not the first time for Apple to make the switch. 68k -> PowerPC -> Intel -> ARM

While the cross-compiling pipeline already exists with Xcode where you code your iOS (ARM) stuff on macOS (Intel).


I know they’ve made the switch before, they’re definitely capable of pulling it off in a reasonably straightforward manner.


I wonder, is it going to be possible to run Windows 10 ARM on ARM Macs?


This is a big question for me, I hope Apple doesn't throw away the advantage that Boot Camp is (and that Microsoft are willing to co-operate). There would be some hurdles to it like needing Windows/Direct3D drivers for Apple's GPU.


Can TSMC handle it? Apple probably won't use Samsung, because Samsung competes with Apple on smartphones and there is bad blood between them.

AMD, Nvidia, Qualcomm, NXP...etc. all use TSMC.


Pretty sure Apple sources lots of other parts from Samsung


The Mac line is extremely low volume.

The issue with A series has always been yield because they are so massive.


Apple sells 16-20+ million macs per year. Not Windows computer level, but I wouldn't call that extremely low volume.


Compared to their phone and tablet line of course it is.

Volume is always relative.


Are these numbers wrong?

https://gs.statcounter.com/os-market-share

In the comments I read some really big numbers regarding the Mac dominance, can this be caused by a bias (country + income)? In this data OSx seems to be under 9%.

Please don't downvote me for questioning Mac dominance.


What's the provided GPU solution for ARM based machines? Do Apple SoCs contain a GPU?


It does in the iPad, so we can probably expect that.


It used to be licensed PowerVR IP with some custom bits added, but they switched to their own custom GPU architecture around the iPhone 8.


iPad Pros have quite a good GPU, and since Apple owns the entire graphics stack there shouldn't be anything holding them back...


So this is what jlg is up to these days. Loved his work on BeOS.


In my news bubble Intel seems to be on a very bad roll. What are some good ways to gamble on INTC going down significantly in 12 months time?


Would you buy a MacBook Pro that you cannot install Linux or Windows on?


Yes. I used Windows via BootCamp back in 2008. It was painful, with the fans running full speed all the time and the trackpad not responding in the way it should.

After a few more tries, I gave up. I occasionally used Windows since then through a VM in order to access something specifically, but these days there's nothing left on Windows that I need to use. Office on MacOS is good enough for the light use I put it to. VS Code / JetBrains tools have replaced Visual Studio.

There's no reason for me to use Linux as a desktop vs MacOS. I build Docker containers that run Linux, but that works fine from / in MacOS.


>from / in MacOS.

Via a Linux VM


Sadly no. We are pretty much done at that point because we often need Windows specific programs to fulfill grant requirements. I might buy one for myself, but I get the feeling some Higher Ed is done. Government contractors don't really do Macs or cross platform.


They might support Linux, WSL2 works on ARM with the Surface Pro X


Of course, for that I have thinkpads.


Of course. Why would I want to make my MB worse by installing Linux or Windows over the Unix (macOS) which is optimized for hardware it runs on?


As an AMD user, I do hope Apple considers making a Mac with an AMD chip. The price/performance of AMD chips is far beyond Intel's now.


Why would they give control to AMD or Intel instead of going their own way? It's not an emotional decision.

AMD is fast now but if history repeats we'll wait another 15 years before anything good happens.

And frankly, why would you care what brand your processor is, as long as it's fast enough?


I've made a lot of money over the last few years buying AMD stock, but I might be exiting this week ahead of WWDC.

I think getting developers on ARM might just be what eats X86 on the server.


Everything hints that this will not happen.


It will be interesting if Apple does announce a move to ARM for (at least some of) their macbook range. Not surprising though, as Intel has refused to remove the vulnerable backdoor [1] (Intel Management Engine) from all their new chips, companies like Google and Apple want more security and privacy for their platforms.

While ARM is not perfect, it does allow companies like Apple more control over the secretive firmware that boots these chips.

1. https://libreboot.org/faq.html#intelme


Apple's x86 chips run a custom gutted ME.

To reduce that attack surface, Mac computers run a custom ME firmware from which the majority of components have been removed. … The primary use of the ME is audio and video copyright protection on Mac computers that have only Intel-based graphics.

https://support.apple.com/en-au/guide/security/seced055bcf6/...


Interesting, thanks for sharing that. Sounds like what puri.sm are doing with their Librem ranges, with me-cleaner [1].

1. https://github.com/corna/me_cleaner


For a 3rd party this changes nothing: the master key is still under the control of some corporation.

Both AMD and ARM have something similar to Intel's ME. Let's not pretend this is by accident.


AMD does have a similar backdoor, called the PSP [1]. Both Intel ME and PSP are second chips that run separate from your main CPU, and can turn on/control/intercept the main CPU without the owners knowledge. ARM does not have this technology. The closest thing is TrustZone [2], which is a different process zoning/ring restriction, and can be completely turned off, depending on the manufacturer.

Alternatively, if you want to write your own TrustZone (for example on a RockChip CPU), you are free to write it and use it.

You cannot run any firmware on an Intel or AMD chip that has not been signed by Intel or AMD. All signed Intel firmware contains the Intel ME backdoor.

1. https://libreboot.org/faq.html#amd-platform-security-process...

2. https://en.wikipedia.org/wiki/Trusted_execution_environment


The PSP is slightly different though, as it is not remotely accessible. It does sit in a similar position to the ME (it can interact with the computer outside of the main x86 cores) but does not listen on an external interface. Interestingly enough, the PSP actually uses the ARM TrustZone extensions to implement some of its security.


AMD's DMTF DASH is absolutely remotely accessible, that is its main purpose. See, for example, https://developer.amd.com/tools-for-dmtf-dash/ (AMD's developer site is currently down, but there's an archived copy at http://archive.is/HHEwh) which states "DASH is a web services based standard for secure out-of-band and remote management of desktops and mobile systems. Client systems that support out-of-band management help IT administrators perform tasks independent of the power state of the machine or the state of the operating system. Examples of out-of-band management tasks include: 1) Securely starting up a system remotely, even if it is currently powered off; 2) Viewing asset inventory information for a system that is powered off; 3) Retrieving health information about system components even if the OS is unavailable." All of these things require remote access.


DMTF DASH does look to be remote out of band management of AMD computers but I cannot find a connection to the PSP itself. I suspect it may run on the PSP but nothing I have seen so far has shown that the PSP is externally accessible or implements DMTF DASH functionality.


AMD's implementation of DMTF DASH is called SimFire, and Intel's is called AMT. They both implement the same standard. I don't see where else it could run, if not on the PSP, given it works even when the machine is (supposedly) switched off. And even if it were not to run on the PSP, does that matter? It's still an opaque binary that runs below your OS.

There's an older subthread on this at https://news.ycombinator.com/item?id=16081422 which also laments the lack of documentation on this.


It's actually more interesting, IMO, if it _doesn't_ run on the PSP, because that means there is another processor that runs at a lower level than the x86 cores, can interact with the host, and listens externally.

After doing a bit of reading, DMTF DASH appears to be implemented in the NIC from Broadcom or Realtek, and uses the PCIe interface between the NIC and the chipset.


I would love to tell you that you must be joking, but this sounds just crazy enough to probably be true. A cursory search confirms that the NIC seems to have a rather active role in DASH implementations:

"To support out-of-band management, a client system typically requires a Network Controller that supports out-of-band management tasks, such as the Broadcom 5761." https://www.amd.com/system/files/documents/out-of-band-clien... (out-of-band management is the "it works while the machine is supposedly off" stuff)

But it also needs to talk to the system BIOS / UEFI, so I guess something else needs to be running as well? "DASH functionality requires a communication channel between the hardware platform features, system BIOS, and the Broadcom NetXtreme™ Gigabit Ethernet Controller." (http://h10032.www1.hp.com/ctg/Manual/c01944865). I mean, the PCIe interface is useless if the other end is actually off (and not just supposedly off).


The second page of this document provides some details in diagrams.

https://docs.broadcom.com/doc/1211168563383

I presume it wakes up the chipset to do most of its work remotely.


I would be very surprised if the presence of the Intel ME had an impact on any CPU move Apple makes.


I think it's highly likely they considered it. One of Apple's main marketing points is security/privacy, which matters as they push into the health and payments markets (for things like the Apple Watch and contactless payments).

It would make sense that they want complete control over their hardware stack, end to end. You can already see this in the Secure Enclave on their mobile devices, which was made possible by their ability to customise their own ARM chips.


Remember that time Jobs flat-out said no to Flash on iOS due to security? That was when Flash was the dominant tech for delivering multimedia.


I believe he said no to Flash because it was a battery-hogging tech that would have rendered early smartphones basically useless as portable devices.


Since RISC has a much smaller instruction set than CISC, doesn't that mean RISC software ends up at least roughly twice the size? Aren't there any studies comparing average executable size between CISC and RISC?

If you look at smartphones, you always notice how much memory apps require. It's not a secret. It's odd that nobody seems to mention this.

Anyways, Wirth's law is relevant again. Nobody wants to hear it, but I really believe a new era of lightweight software will soon begin. I put a lot of hope in WASM, and I hope it will work well on smartphones.

I'm using Tinder on a three-year-old Android phone, and every day it gets slower and slower.

It's almost as if software companies and hardware vendors have opposite interests. Software wants to be faster, but hardware vendors want to increase their margins, so they want software to be slower or more feature-rich.


> Isn't there any study about average executable size when comparing CISC vs RISC?

The RISC-V people have done a few while developing their RVC (compressed instructions) extension; see for instance slide 16 of https://riscv.org/wp-content/uploads/2015/06/riscv-compresse... which shows, on 64-bit, x86-64 (CISC) being slightly bigger than ARMv8 (RISC), and on 32-bit, x86 (CISC) being much bigger than ARM Thumb2 (RISC). The main reason is that the encoding of x86 instructions is not very efficient, with rare instructions having shorter encodings than common instructions; an infamous example being the "ASCII adjust" single-byte instructions which nobody uses.
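If you want to eyeball this yourself, here is a quick sketch (assuming gcc plus the usual Debian/Ubuntu cross toolchains; package and toolchain names vary by distro): compile the same function for each ISA and compare the .text sizes.

    /* density.c -- same code, different ISAs; compare the "text" column.
     *
     * Build and measure, e.g. (toolchain names are the common Debian ones):
     *   gcc -Os -c density.c -o density-x86_64.o
     *   aarch64-linux-gnu-gcc -Os -c density.c -o density-arm64.o
     *   arm-linux-gnueabihf-gcc -Os -mthumb -c density.c -o density-thumb2.o
     *   size density-*.o
     */
    unsigned long checksum(const unsigned char *buf, unsigned long n)
    {
        /* A simple rolling hash, just to generate some typical integer code. */
        unsigned long sum = 0;
        for (unsigned long i = 0; i < n; i++)
            sum = sum * 31 + buf[i];
        return sum;
    }

On a toy function like this the numbers will bounce around a lot; the comparison in the slides above is over whole benchmark binaries, which is the measurement that actually matters.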


Interesting, thanks! Are compressed instructions always enabled by default?


Here's my (uninformed) guess about how this could go.

There's already a fairly powerful ARM chip in recent Apple computers: the T2. Assuming it's a similar spec to the iPad Pro's A12, Apple could start by moving the OS to the T2, which should improve the battery life of all recent Macs.

They'll come up with a fancy marketing term for apps that have been compiled for ARM and advertise them as having improved performance and battery life, putting pressure on developers to update. The x86 chips will initially be removed from all non-Pro devices and replaced with more powerful ARM chips, and once there's enough momentum and support they'll be removed from the Pro devices as well, in a year or two.



