I keep searching for "Graviton" in these thinkpieces. I keep getting "no results found."
Mac ARM laptops mean cloud ARM VMs.
And Amazon's Graviton2 VMs are best in class for price-performance. As Anandtech said:
If you’re an EC2 customer today, and unless you’re tied to x86 for whatever reason, you’d be stupid not to switch over to Graviton2 instances once they become available, as the cost savings will be significant.
Amazon is exactly the kind of company that would take 50% margin on their x86 servers and 0% margin on their Graviton servers in order to engineer a long term shift that's in their favor - the end of x86 server duopoly (or monopoly depending on how the wind is blowing).
Many people complain about AWS' networking costs, but I also suspect these are generally at-cost. A typical AWS region has terabits upon terabits of nanosecond-scale-latency fiber run between its AZs and out to the wider internet. Networking is an exponential problem; AWS doesn't overcharge for egress, they simply haven't invested in building a solution that sacrifices quality for cost.
Amazon really does not have a history of throwing huge margins on raw compute resources. What Amazon does is build valuable software and services around those raw resources, then put huge margins on those products. EC2 and S3 are likely very close to 0% margin; but DynamoDB, EFS, Lambda, etc are much higher margin. I've found AWS Transfer for SFTP to be the most egregious and actually exploitative example of this; it effectively puts an SFTP gateway in front of an S3 bucket, and they'll charge you $216/month + egress AND ingress at ~150% of the standard egress rate for that benefit (SFTP layers additional egress charges on top of the standard AWS transfer rates).
Obviously it's on-demand pricing and the hardware isn't quite the same, with Hetzner using individual servers with 1P chips. Amazon also has 10 Gbps networking.
But still, zero margin? Let's call it a factor of two to bridge the monthly and on demand gap - does it really cost Amazon five times as much to bring you each core of Zen 2 even with all their scale?
I don't think Amazon overcharges for what they provide, but I bet their gross margins even on the vanilla offerings are pretty good, as are those of Google Cloud and Azure.
Margin is a really weird statistic to calculate in the "cloud". Sure, you could just amortize the cost of the silicon across N months and say "their margin is huge", but realistically AWS has far more complexity: the costs of the datacenter, the cost of being able to spin up one of these 32-core EPYC servers in any one of six availability zones within a region and get zero-cost terabit-scale networking between them, the cost of each of those availability zones not even being one building but multiple near-located buildings, the cost of your instance storage not even being physically attached to the same hardware as your VM (can you imagine the complexity of this? that they have dedicated EBS machines and dedicated EC2 machines, and yet EBS still exhibits near-SSD-like performance?), the cost of VPC and its tremendous capability to model basically any on-prem private network at "no cost" (but there's always a cost); that's all what you're paying for when you pay for cores. It's the stuff that everyone uses, but it's hard to quantify, so people fall back to saying "jeez, an EPYC chip should be way cheaper than this".
And, again, if all you want is a 32-core EPYC server in your basement, then buy a 32-core EPYC server and put it in your basement. But my suspicion is not that a 32-core EPYC server on AWS makes zero margin; it's that, if the only service AWS ran was EC2, priced how it is today, they'd be making far less profit than when that calculation includes all of their managed services. EC2 is not the critical component of AWS's revenue model.
The marginal cost of VPC is basically zero. Otherwise they couldn't sell tiny EC2 instances. The only cost differences between a t3.micro and their giant EC2 instances are (a) hardware and (b) power.
That's not strictly true. They could recoup costs on the more expensive EC2 instances.
I have no idea what the actual split is, but the existence of cheap instances doesn't mean much when Amazon has shown itself willing to be a loss-leader.
If they are recouping their costs, it's a capital expense, and works differently than a marginal cost. AWS's networking was extremely expensive to _build_ but it's not marginally more expensive to _operate_ for each new customer. Servers are relatively cheap to purchase, but as you add customers the cost increases with them.
If they're selling cheap instances at a marginal loss, that would be very surprising and go against everything I know about the costs of building out datacenters and networks.
I'm skeptical about this claim. Most cloud providers try to justify their exorbitant bandwidth costs by saying it's "premium", but can't provide any objective metrics on why it's better than low-cost providers such as Hetzner/OVH/Scaleway. Moreover, even if it were more "premium", I suspect that most users won't notice the difference between AWS's "premium" bandwidth and a low-cost provider's cheap bandwidth. I think the real goal of AWS's high bandwidth cost is to encourage lock-in. After all, even if Azure is cheaper than AWS by 10%, if it costs many times that for you to migrate everything over to Azure, you'll stick with AWS. Similarly, it encourages companies to go all-in on AWS, because if all of your cloud is in AWS, you don't need to pay bandwidth costs for shuffling data between your servers.
The reality is, their networks are fundamentally and significantly higher quality, which makes them far more expensive. But, maybe most people don't need higher quality networks, and should not be paying the AWS cost.
But the problem is that you can't. You simply can't use AWS/Azure/GCP with cheap bandwidth. If you want to use them at all, you have to use their "premium" bandwidth service.
What? My $2000 Titan V GPU and $10 Raspberry Pi both paid for themselves vs. EC2 inside of a month.
Many of AWS's managed services burn egregious amounts of EC2, either by mandating an excessively large central control instance or by mandating one-instance-per-(small organizational unit). The SFTP example you list is completely typical. I've long assumed AWS had an incentive structure set up to make this happen.
"We're practically selling it at cost, honest!" sounds like sales talk.
*Unless things have changed in the last year or so since I looked
This seems to be demonstrably false, given that Amazon Lightsail exists. Along with some compute and storage resources:
$3.50 gets you 1TB egress ($0.0035/GB)
$5 gets you 2TB egress ($0.0025/GB)
Now, it's certainly possible that Amazon is taking a loss on this product. It's also possible that they have data showing that these types of users don't use more than a few percent of their allocated egress. But I suspect that they are actually more than capable of turning a profit at those rates.
That's a theoretical maximum of 1 TB of egress every three hours. So for the cost of three hours of egress you can buy an entire VPS with a month of egress, for cheaper. It's insane just how much cheaper it really is.
I assure you that you can run these servers for one hour a day and no one will bat an eye. I know people running seedboxes at full speed for 10 hours a day or so without an issue - that's 100 times the bandwidth of even Lightsail for the same price.
They have worse peering than AWS but the difference in cost to them is certainly not 100x or more.
Clearly they have margins, and fat ones at that.
"When factoring in heavy depreciation, AWS has EBITDA margins of around 50%."
Shameless plug - partly because of the high cost of SFTP in AWS, the lack of FTP (understandable), and a bunch of people wanting the same in Azure / GCS, we started https://docevent.io, which has proved quite popular.
And to make another point: Apple isn't a lower-end, cost-cutting laptop maker.
Apple sells ~20M Macs per year. Intel ships roughly ~220M PC CPUs per year (I think recent years saw the number trending back towards 250M). That is close to 10% - not an insignificant number. Apple only uses expensive Intel CPUs. Lots of surveys show most $1000+ PCs belong to Apple, while most business desktops and laptops use comparatively cheap Intel CPUs, i.e. I would not be surprised if the median price of Apple's Intel CPU purchases is at least 2x the total market median, if not more. In terms of revenue that is 20% of Intel's consumer segment.
They used to charge a premium for being the leading-edge fab. You couldn't get silicon better than Intel's; you were basically paying those premiums for having the best. Then Intel's fab went from two years ahead to two years behind (that is a four-year swing), all while charging the same price. And since Intel wants to keep its margin, it is not much of a surprise that customers (Apple and Amazon) look for alternatives.
Here is another piece on Amazon Graviton 2.
Maybe I should submit it to HN.
Apple owns the premium PC market; its Mac division is not only the most profitable PC company in the world, it might be more profitable than all the others combined.
Its share of Intel's most expensive desktop CPUs is much higher than its raw market share.
If macOS goes ARM and there's a sizable population of developers testing and fixing these issues constantly, the math changes in favor of Graviton and it would make it a no-brainer to pick the cheaper alternative once everything "just works".
I've been shifting my work to ARM-based machines for some years now, mainly to reduce power consumption. One of my current projects - a problem validation platform (Go) - has been running nicely on an ARM server (Cavium ThunderX SoCs) on Scaleway; but weirdly Scaleway decided to quit on ARM servers, citing hardware issues which not many of the ARM users seem to have faced. The only ARM-specific issue I faced with Scaleway was that a reboot required a power-off.
Scaleway ditched ARM: https://news.ycombinator.com/item?id=22865925
I have a hunch our industry will need to increasingly deal with ARM environments.
What is the connection here? ARM servers would be fine in a separate discussion. What does it have to do with Macs? Macs aren't harbingers of anything. They have set literally no trend in the last couple of decades, other than thinness at all costs. If you mean that developers will use Gravitons to develop Mac apps, why/how would that be?
"Some people think that "the cloud" means that the instruction set doesn't matter. Develop at home, deploy in the cloud.
That's bull*t. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test "at home" (and by "at home" I don't mean literally in your home, but in your work environment)."
So I would argue there is a strong connection.
I can see this making sense to Torvalds, being a low-level guy, but is it true for, say, Java web server code?
Amazon are betting big on Graviton in EC2. It's no longer just used in their 'A1' instance-types, it's also powering their M6g, C6g, and R6g instance-types.
Another example is Android app development. Most developers use an x86_64 CPU and run the emulator using an Intel Android image. While I don't have vast experience, I did write a few apps and never had any issue because of an arch mismatch.
High level languages mostly solved that issue.
Also note that there are some ARM laptops in the wild already. You can use either Windows or Linux. But I don't see every cloud or Android developer hunting for those laptops.
Going from ARM to x64 is even less likely to have issues, as x64 is more permissive about things like unaligned access.
People are making far too big a deal out of porting between the two. Unless the code abuses undefined behaviors in ways that you can get away with on one architecture and not the other, there is usually no issue. Differences in areas like strong/weak memory ordering, etc., are hidden behind APIs like posix mutexes or std::atomic and don't generally have to be worried about.
The only hangup is usually vector intrinsics or ASM code, and that is not found in most software.
This is just not a big deal unless you're a systems person (like Linus) or developing code that is really really close to the metal.
Thus I encourage everyone to target more than one platform, as it makes the code better overall - even though there are platform-specific issues that won't happen on the other (like the compiler bug we found).
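To make the memory-ordering point concrete, here is a minimal Go sketch of the same idea (sync/atomic standing in for the posix mutex / std::atomic examples above; purely illustrative, not code from any project mentioned in the thread):

```go
// Minimal sketch: publish a value from one goroutine to another.
// Because the handoff goes through sync/atomic, the Go memory model
// guarantees the same behaviour on x86 (strong ordering) and ARM
// (weak ordering) - the architecture difference never reaches the code.
package main

import (
	"fmt"
	"sync/atomic"
)

var (
	data  int
	ready atomic.Bool // requires Go 1.19+
)

func producer() {
	data = 42         // ordinary write...
	ready.Store(true) // ...published by the atomic store
}

func main() {
	go producer()
	for !ready.Load() {
		// spin until the atomic load observes the store;
		// once it does, the write to data is also visible
	}
	fmt.Println(data) // prints 42 on both amd64 and arm64

	// The version that bites during a port is the one that skips the
	// atomic and uses a plain bool flag: that's a data race, and it can
	// "happen to work" on x86 while misbehaving on ARM.
}
```

Swap the atomic for a plain bool and `go run -race` flags it immediately, which is usually how these porting bugs get caught.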
I would be surprised if Apple didn't have internal prototypes of macOS running on their own ARM chips for the last several years. Most of the macOS / iOS code is shared between platforms, so it's already well optimized for ARM.
Usually making the application run in the host OS is much more productive than dealing with the emulator/simulator.
Unless you're talking about native code (and even then, I've written in the past about ways this can be managed more easily), then no, it really doesn't matter.
If you're developing in NodeJS, Ruby, .NET, Python, Java or virtually any other interpreted or JIT-ed language, you were never building for an architecture, you were building for a runtime, and the architecture is as irrelevant to you as it ever was.
Well I can't speak to some of the others... but Conda doesn't work at all on ARM today (maybe that will change with the new ARM Macs, though), which is annoying if you want to use it on, say, a Raspberry Pi for hobby projects.
Additionally, many scientific Python packages use either pre-compiled binaries or compile them at install-time, for performance. They're just Python bindings for some C or Fortran code. Depending on what you're doing, that may make it tricky to find a bug that only triggers in production.
Also one I've come across myself so I'm a bit disappointed I didn't call this out. So... kudos!
I think you are vastly underestimating how many people use Mac (or use Windows without using WSL) to develop for the cloud.
The things the Mac is good at:
1) It powers my 4k monitor very well at 60Hz
2) It switches between open lid and closed lid, and monitor unplugged / plugged in states consistently well.
3) It sleeps/wakes up well.
4) The built in camera and audio work well, which is useful for meetings, especially these days.
None of these things really require either x86 or ARM. So if an x86-based non-Mac laptop appeared that handled points 1-4 and could run Linux closer to our production environment, I'd be all over it.
I started using my home Windows machine for development as a result of the lockdown and in all honesty I have fewer issues with it than I did with my work MacBook. Something is seriously wrong here.
Most of the teens to early 20 somethings I know are either buying or hoping to buy the latest Macs, iPads, iPhones and AirPods while most of the devs I know are on Linux or WSL but devs are a minority compared to the Average Joes who don't code but are willing to pay for nice hardware and join the ecosystem.
And as a byproduct perhaps they will work better for hip young consumers too. Or anyone else who is easily distracted by bright colours and simple pictures, which is nearly all of us.
The dominance of Macs for software development is a very US-centric thing. In Germany, there is no such Mac dominance in this domain.
And why not Java developers?
They seem pretty polar opposite in terms of culture, but all still turn up using a Macintosh.
Even the famous FailOverflow said in one of his videos he only bought a Mac since he saw that at conferences everyone had Macs so he thought that must mean they're the best machines.
Anecdotally, I've interviewed at over 12 companies in my life and only one of them issued Macs to its employees; the rest were Windows/Linux.
I've got anecdata that says different. My backend/cloud team has been pretty evenly split between Mac and Windows (with only one Linux on the desktop user). This is at a Java shop (with legacy Grails codebases to maintain but not do any new development on).
With Mac all your regular Mac software integrates well with the Unix world. XCode is not going to screw up my line endings. I don’t have to keep track of whether I am checking out a file from a Unix or Windows environment.
`git config --global core.autocrlf true`
That will configure git to checkout files with CRLF endings and change them to plain LF when you commit files.
They're not ancient, but are mostly ports of recent FreeBSD (or occasionally some other BSD) utilities. Some of these have a lineage dating back to AT&T/BSD Unix, but are still the (roughly) current versions of those tools found on those platforms, perhaps with some apple-specific tweaks.
You must be living in a different universe. What do you think the tens of thousands of developers at Google, Facebook, Amazon, etc etc etc are doing on their Macintoshes?
I can only speak of my experience at Google, but the Macs used by engineers here are glorified terminals, since the cloud based software is built using tools running on Google's internal Linux workstations and compute clusters. Downloading code directly to a laptop is a security violation (With an exception for those working on iOS, Mac, or Windows software)
If we need Linux on a laptop, there is either the laptop version of the internal Linux distro or Chromebooks with Crostini.
SSH to a Linux machine? I get that cloud software is a broad term that includes pretty much everything under the sun. My definition of cloud dev was a little lower level.
What point are you trying to make?
Yeah, I'm able to do remote debugging in a remote VM, but the feedback loop is much longer, impacting productivity, morale and time to solve the bug - a lot of externalised costs that all engineers with reasonable experience are aware of. If I can develop my code on the same architecture that it'll be deployed on, my mind is much more at peace; when developing on x86_64 to deploy on ARM I'm never sure that some weird cross-architecture bug won't pop up. No matter how good my CI/CD pipeline is, it won't ever account for real-world usage.
it's harder in all the ways you describe, but it's much more likely the software will survive migrating to the next debian/centos release unchanged.
it all boils down to the temporal scale of the project.
I have to agree. It's not like we're all running Darwin on servers instead of Linux. Surely the kernel is a more fundamental difference than the CPU architecture.
Hahaha, then you have not been paying attention. Apple led the trend away from beige boxes, set the style of keyboard used, pushed large trackpads and USB, and was first to remove the floppy drive. Hardware, software and web design have all been heavily inspired by Apple. Just look at the icons used, first popularized by Apple.
Ubuntu desktop is strongly inspired by macOS. An operating system with drivers preloaded through the update mechanism was pioneered by Apple. Windows finally seems to be doing this.
Being cheaper isn't enough here - Graviton needs to be faster and it needs to do that over generations. It needs to _sustain_ its position to become attractive. Intel can fix pricing in a heartbeat - they've done that in the past when they were behind. Intel's current fab issues do make this a great time to strike, but what about in 2 years? 3 years? 4? Intel's been behind before, but they don't stay there. Switching to Epyc Rome at the moment is an easy migration - same ISA, same memory model, vendor that isn't new to the game, etc... But Graviton needs a bigger jump; there's more investment required to port things over to ARM. Whether that investment will pay off over time is a much harder question to answer.
At many businesses that is probably over the threshold where someone can say it’s worth switching the easy n% over since they’ll save money now and if the price/performance crown shifts back so can their workloads. AWS has apparently already done this for managed services like load-balancers and I’m sure they aren’t alone in watching the prices very closely. That’s what Intel is most afraid of: the fat margins aren’t coming back even if they ship a great next generation chip.
Some function gets vectorized and a variable shifts in or out of a register? Or one of your libraries gets rebuilt with LTO / LTCG and its functions are inlined.
If your code was wrong, any of that can ruin it and you're left trying to figure it out, and requiring your developers to use exactly one point release of Visual Studio and never, ever upgrade the SDK install.
I love working on ARM, but with the licensing model I'm not sure how much of a positive return Amazon would really be able to squeeze from their investment.
It also potentially brings them a truckload of future headaches if any of their cloud competitors raise anti-trust concerns down the road.
Beyond that, I think Apple, ARM and AMD get a lot of credit for their recent designs - a lot of which is due, but quite a bit of which should really go to TSMC.
The TSMC 7nm fabrication node is what's really driven microprocessors forward in the last few years, and hardly anyone outside of our industry has ever heard of them.
I don't know that Amazon couldn't transition to RISC-V if they needed to in a couple of years.
Microsoft controls a lot of lucrative business-centric markets - an advantage that seems to have helped MS Azure surpass AWS in market share. One of Microsoft's weaknesses is their inability to migrate away from x86 in any meaningful way. IBM could use Red Hat to push the corporate server market away from x86 under the guise of lower operating costs, which could deal a tremendous blow to MS, leaving Amazon and Google with an opening to hit the Office and Azure market.
You overestimate the AWS customer base. Lots of them do silly things that cost them a lot of money.
It's often easy to test if scaling the instance size/count resolves a performance issue. If it does, you know you can fix the problem by burning money.
When you have reasonable certainty of the outcome spending money is easier than engineering resources.
And later it's easier for an engineer to justify performance optimizations, if the engineer can point to a lower cloud bill..
I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.
If only AWS had thousands of engineers to create a UX that makes cost info all upfront and easy to budget. Clearly it's beyond their capability /s
>I'm not saying it's always a well considered balance, just that burning money temporarily is a valid strategy.
Yes, but I bet a majority of AWS users don't want that to be the default strategy.
Why do you think that? Cloud means Linux; Apple does not even have a server OS anymore.
We are seeing some ARM devices entering the server space, but these have been in progress for years and have absolutely nothing to do with Apple's (supposed) CPU switch.
One reason is to get the same environment they get in a cloud deployment scaled down. Another is Apple keeps messing with compatibility, most recent example being the code signing thing slowing down script executions.
Edit, this includes docker on Mac: it's not native.
> Guys, do you really not understand why x86 took over the server market?
> It wasn’t just all price. It was literally this “develop at home” issue. Thousands of small companies ended up having random small internal workloads where it was easy to just get a random whitebox PC and run some silly small thing on it yourself. Then as the workload expanded, it became a “real server”. And then once that thing expanded, suddenly it made a whole lot of sense to let somebody else manage the hardware and hosting, and the cloud took over.
Sure, they have lower margins on desktop, but these bring sh!t tons of cash - cash you will then use for research and to develop your next desktop and server CPUs.
If I believe this page, consumer CPUs brought in almost $10 billion in 2019... In comparison, server CPUs generated $7 billion of revenue... And these days, Intel has like >90% of that market.
Other players (Sun, HP, IBM, Digital, ...) were playing in a walled garden and couldn't really compete on any of that (except pricing because their stuff was crazy expensive).
So not only were they sharing the server market with Intel, but Intel was most likely earning more than the sum of all of them from its desktop CPUs...
More money, more research, more progress shared between the consumer and server CPUs: rinse and repeat and eventually you will catch up. And also, they could sell their server CPUs for a lot less than their "boutique" competitors.
You just can't compete with a player which has almost unlimited funds and executes well...
People paid the IBM tax for decades. Intel is the new IBM. Wherever you look, Intel's lineup is not competitive and it is only surviving due to inertia.
Yes, Spectre and Meltdown definitely prove that...
I think I heard that argument already, in the 90's. But instead of Intel it was Sun, it was SGI, it was Digital, etc
The truth is that money wins and while there is an inertia, it's more like a Cartoon inertia where your guy won't fall from the cliff until he looks down. But then it's too late.
None of this is rocket science but it takes money and engineers time to make these things happen. Intel has more of both at the moment.
All EPYC Rome:
- L1: two 32 KiB L1 caches per core
- L2: 512 KiB L2 cache per core
- L3: 16 MiB L3 cache per core, but shared across all cores.
The L1/L2 cache sizes aren't yet publicly available for any Cooper Lake processors; however, the previous Cascade Lake architecture provided:
All Xeon Cascade Lakes:
- L1: two 32 KiB L1 caches per core
- L2: 1 MiB L2 cache per core
- L3: 1.375 MiB L3 cache per core (shared across all cores)
Normally I'd expect the upcoming Cooper Lake to surpass AMD in L1, and lead further in L2 cache. However it looks like they're keeping the 1.375MiB L3 cache per core in Cooper Lake, so maybe L1/L2 are also unchanged.
Edit: Previously I showed EPYC having twice the L1 as Cascade Lake, this was a typo on my part, they're the same L1 per core.
There have been very few cloud benchmarks where 1P Epyc Rome hasn't beaten every offering from Intel, including 2P configurations. The L3 cache latency hasn't been a significant enough difference to make up the raw CPU count difference, and where L3 does matter the massive amount of it in Rome tends to still be more significant.
Which is kinda why Intel is just desperately pointing at a latency measurement slide instead of an application benchmark.
As another commenter pointed out, though, not all caches are equal. Unfortunately, I was not able to easily find access speeds for specific processors, so single-threaded benchmarks are the primary quantitative differentiator.
And I think Xeon L3 cache tops out at about 40MB, whereas Threadripper & Epyc go up to 256MB.
I don't think there's any Intel CPU with more than 36MB of L3?
IIRC the only people who buy those vs. the equivalent dual socket system are people with expensive software which is licensed per socket.
But Amdahl's Law shows us this doesn't make sense for most people.
Graviton is still almost an order of magnitude slower than a VPS of the same cost, which is around what the hardware and infra costs Amazon.
The limitation is not technical (as hackintoshes demonstrate); it’s legal.
That said, it would be great to be able to run Mac software (and iOS) on server-based VMs.
I just don’t think it will happen at scale, because lawyers.
Ever since Cavium gave up on Arm server products and pivoted to HPC, there hasn't been a real Arm competitor.
Ampere is almost all ex-Intel.
what are my options if I want an ARM laptop with, say, good mobile processor performance close to the i7-8850H found in a 2018 MBP 15, 16GB RAM and a 512GB NVMe SSD to set up my day-to-day dev environment (Linux + Golang + C++ etc.)?
the Surface Pro X is the only ARM laptop you can easily purchase, but it is Windows-only and the processor is way too slow. There has also been close to zero response from app vendors to bring their apps to the native ARM Windows environment.
Apple fans just seem to be in denial about being late to the game.
I have to admit, Apple's ARM processors will likely be significantly faster per core. But they are not the driver of the switch to ARM. If anything, Chromebooks were.
If you develop for Mac, chances are you want your CI to use the same target hardware, which means cloud ARM hardware to run Mac VMs.
Wishful thinking though, probably.
there's still a load of non scale-out services that the world depends upon.
1. Back in the 1990s, the big UNIX workstation vendors were sitting where Intel is now at the high end, being eaten from the bottom by derivatives of what was essentially a little embedded processor for dumb terminals and scientific calculators. Taken in isolation, Apple's chips aren't an example of low-margin high-volume product eating its way up the food chain, but the whole ARM ecosystem is.
2. For a lot of the datacenter roles being played by Intel Xeons, flops/Watt or iops/watt isn't the important metric. For many important workloads, the processor is mostly there to orchestrate DMA from the SSD/HDD controller to main memory and DMA from main memory to the network controller. The purchaser of the systems is looking to maximize the number of bytes per second divided by the amortized cost of the system plus the cost of the electricity. My understanding is that even now, some of the ARM designs are better than the Atoms in term of TDP, even forgetting the cost advantages.
I invested in ARM (ARMH) back in 2006, partly because I realized that whether Apple or Android won more marketshare, nearly everyone was going to have an ARM-based smartphone in a few years. Part of it was also realizing the above and hoping ARM would take a good share of the server market. SoftBank took ARM private before we saw much inroads in the server market, but it was still one of the best investments I've made.
Of course, maybe I'm just lucky and my analysis was way off.
1) which 2006 analyses proved wrong? it would be interesting to see which assumptions made sense in 2006 but were ultimately proven wrong.
2) which companies are you evaluating in 2020, and why?
thanks for sharing.
1) I thought the case for high-density/low-power ARM in the datacenter was pretty clear-cut, but it was an obvious enough market that within a few multi-year design cycles, Intel would close the gap with Atoms and close that window of opportunity forever, especially considering Intel's long-running fabrication process competitive advantage. In late 2012, one of my friends in the finance industry wanted to be "a fly on the wall" in an email conversation between me and a hedge fund manager he really respected who had posted on SeekingAlpha that his fund was massively short ARMH. A few email back-and-forths shook my confidence enough to convince me that the server market window had closed, and that it was possible (though unlikely) that Atom was about to take over the smartphone market. I reduced my position at that time by just enough to guarantee I'd break even if ARMH dropped to zero. In hindsight, I'm still not sure that was the wrong thing to do given what I knew at the time. I had two initial theses: the smartphone thesis had played out and was more-or-less believed by everyone in the market, and the server thesis was reaching the end of that design window and I was worried that Intel was not far from releasing an ARM-killer Atom, backed up by the arguments of this hedge fund manager. I'm really glad the hedge fund manager didn't spook me enough to close out my position entirely.
2) I'm not a great stock picker. I've had some great calls and had about as many pretty bad calls. I'm now doing software development, but my degree is in Mechanical Engineering, and I took a CPU design course (MIT's 6.004) back in college. I think my edge over the market is realizing when the market is under-appreciating a really good piece of engineering, though I punch well below my weight in the actual business side analysis.
Jim Keller is a superstar engineer, who attracted other superstars, and I don't think the market ever really figured that out. Jim Keller retired now, so there goes about half my investment edge. Though, maybe I'm just deluding myself on the impact really good engineering really has on the business side.
could you share what arguments the hedge fund manager made?
do you find any companies interesting now?
Elixir, my main programming language, will use all the cores in the machine, e.g. parallel compilation. Even if an ARM-based mac has worse per-core performance than Intel, I am ahead.
Apple can easily give me more cores to smooth the transition. Whatever the profit margin was for Intel on the CPU, they can give it to me instead. And they can optimize the total system for better performance, battery life and security.
HAHAHA you sweet summer child.
Next thing you know, you'll be demanding Apple puts the heatpipe back in the Macbook Air for cooling the CPU instead of hoping there's enough airflow over the improperly seated heatsink (which just has to work until the warranty expires ;) )
They now sell an iPad, arguably better than any of the non-iPadOS competition, for $350.
The iPhone SE, one of the fastest smartphones on the market (only other iPhones are faster) starts at just $399.
They’ve aggressively been cutting prices, I believe, so that they can expand the reach of the services business. They’re cutting costs to expand their userbase.
I wouldn’t be surprised to see the return of the 12-inch MacBook at the $799 to $899 price point, now that they no longer have to pay Intel a premium for their chips.
I just replaced my Google Pixel (4 years old) with an SE and it's like... They're not even comparable. I'm sure it makes clear sense as to how so much progress could be made in 4 years, and how it could cost so much less (I paid $649 USD for the Pixel), but it feels a bit magical as a consumer and infrequent phone user. It's a fantastic little device.
I'm still 900% pissed about the MacBook Air they sold me in 2018 and I resent the awful support they gave me for the keyboard (It's virtually unusable already), but as a phone business, they seem hard to beat right now.
Like, give me a custom launcher option (custom everything options), emulation of classic consoles and handhelds, and easy plug and play access via a PC. If I can't even get one of those, I'm on Android regardless of how shiny I think Apple hardware is.
But if you don't want any of that, it probably works great. Just not for me.
- I like to use my phone to check bathy charts while I'm out free diving. In a waterproof case, a phone is a huge asset for finding interesting spots to dive. I don't really want to buy a dedicated device for this. I can just hook it to my float and it's there for exploration/emergencies/location beacon for family/etc.
- When I forget to charge my watch, it's nice to have GPS handy for tracking a run or ride
- It's really nice to be able to do a decent code review from a phone if I'm out and about. I wouldn't do this with critical code or large changes, but it's nice to give someone some extra eyes without committing to sitting at the desk or bringing my computer places
- I have ADHD and having a full-fledged reminder/task box is a godsend. I'd be lost without todoist
I could do this all with any modern smart phone, but I went with the best 'bang for the buck' model I could find. I don't think I'll miss anything from Android.
I do miss a home screen widget and some apps, but I think overall the experience is really nice, and being able to talk to the iMessage people would be nice.
I built a Hackintosh on a Thinkpad and Handoff, AirDrop and all those connections are way more useful than I thought. It’s been 3 years and Android/Windows seems to be pretty much in the same state minus some minimal improvements that really don’t add that much value.
Integration and unified workflows is where it’s at for me at least.
When I picked up my SE and signed in with my Apple ID it gave me access to all of my keychain and tons of other stuff I use frequently like photos and documents. Things I didn't realize I was missing on Android. I installed multiple apps and was automatically signed in! I didn't expect that. Wifi just works everywhere I've been with my MacBook. People near me on iPhones can have all kinds of things exchanged with minimum effort. Contacts, files, wifi passwords.
I know all of this works on Android too (kind of), but since I have a MacBook and an Apple ID I use a lot, my experience with an iOS is faaaar better than it was with Android.
I used to go days without touching my Pixel, but this iPhone is so much nicer to use and so much more useful that I carry it regularly and use it quite a bit. I'm really happy with it.
According to the usually accurate Ming Chi Kuo, the entry level iPad will be moving to using a top of the line chip as well, which is a needed change.
>Kuo first says that Apple will launch a new 10.8-inch iPad during the second half of 2020, followed by a new iPad mini that’s between 8.5-inches and 9-inches during the first half of 2021. According to the analyst, the selling points of these iPads will be “the affordable price tag and the adoption of fast chips,” much like the new iPhone SE.
and the iPhone SE is not a low-margin example https://www.phonearena.com/news/apple-iphone-se-2020-teardow...
so the question still stands: got any post-Jobs example?
>I didn't think we would get to a point where an iPhone offers more value than Android phones, particularly in the mid-range segment. But that's exactly what's going on with the iPhone SE.
Feel free to ignore Ming Chi Kuo, but given his track record for accuracy and known good sources inside Apple's supply chain, people pay attention to what he has to say.
More interestingly, will Apple use the same strategy of selling at entry level prices with a custom ARM SOC that outperforms the competition for an ARM Mac as well?
Even the $1500 Android flagships are completely outclassed by that SE.
>Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC.
>On the GPU side of things, Apple has also been hitting it out of the park; the last two GPU generations have brought tremendous efficiency upgrades which also allow for larger performance gains. I really had not expected Apple to make as large strides with the A13’s GPU this year, and the efficiency improvements really surprised me. The differences to Qualcomm’s Adreno architecture are now so big that even the newest Snapdragon 865 peak performance isn’t able to match Apple’s sustained performance figures. It’s no longer that Apple just leads in CPU, they are now also massively leading in GPU.
the point is that nothing of value can come from arguing subjective points.
It’s always kind of strange to me to see all the people who criticize Apple’s profits. Look at the profit margins of other major tech companies, and you’ll find they make similarly large margins.
this thread is about Apple's margins and the people arguing that Apple will pass those margins on as lower prices. I really don't know why you're arguing about product value. this is the thread starter: https://news.ycombinator.com/item?id=23598122
what does your point about prices and product quality have to do with this?
and the 2020 SE is not a low-margin example https://www.phonearena.com/news/apple-iphone-se-2020-teardow...
I'm really not sure how the market would respond to it. So long as it played in the Apple ecosystem, I think it's interesting for some things though.
What makes people think that Apple will lower the price because they are now using their ARM processor?
>Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there’s really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we’re not expecting any proper competition for at least another 2-3 years, and Apple isn’t standing still either.
Last year I’ve noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.
I think performance per watt is going to be just as important as overall performance. Apple's "little" cores were a new design in the A13, and compare very well against stock ARM cores on a performance per watt basis.
>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.
It will be interesting to see how much performance they can get when they are on a level playing field when it comes to power and cooling constraints.
Integer Workloads - LZMA Compression: "LZMA (Lempel-Ziv-Markov chain algorithm) is a lossless compression algorithm. The algorithm uses a dictionary compression scheme (the dictionary size is variable and can be as large as 4GB). LZMA features a high compression ratio (higher than bzip2). The LZMA workload compresses and decompresses a 450KB HTML ebook using the LZMA compression algorithm. The workload uses the LZMA SDK for the implementation of the core LZMA algorithm."
They compress just 450KB of text as a benchmark - even the dictionary size is greater than the input. If both fit in the CPU caches, the results look supreme; if not, they look horrible by comparison. It would also depend heavily on how fast the CPU can switch to full turbo mode (for servers that doesn't matter at all).
I agree these benchmarks have issues, but there's a consistent trend (eg. AnandTech's SPEC results are similar—they use active cooling to avoid the question about thermals) and the most reliable benchmarks, like compilers, aren't outliers.
Doing the test below 4MB of L3 would hurt; doing it with less than 2MB would decimate the test.
That's not true.
We keep hearing this, especially from the Apple-bubble blogs, yet why does it not translate into higher-end work being done on these iPads if they are supposedly so powerful? All we ever see is digital painting work that isn't pushing them at all, and extremely basic video editing.
Also good luck to Apple if they lock down macOS the same way.
They are locking down macOS gradually. Their hardware revenue is in long-term decline and they know it. Their push is to lock down the software and take as big a cut as they can.
Could make the Macbook Pro the fastest laptop in some benchmarks while also offering more affordable options.
Apple's particular ARM CPU core happens to look like a beast vs. Intel's particular x86 CPU core. But those are far from the only implementations. AMD's x86 is, at the moment, also very strong, and incredibly efficient. Lookup Zephyrus G14 reviews for example - the AMD 4800HS in that is crazy fast & efficient vs. comparable Intel laptops.
Similarly, lookup Apple A12/A13 vs. Qualcomm benchmarks - Snapdragon is also ARM, but it kinda sucks vs. Apple's A12/A13.
In terms of power realize that there's quite a bit of overlap here. Apple's big cores on the A13 pull about 5W. That's not really that different from X86 CPU cores at similar frequencies. The quad-core i5 in a macbook air, for example, pulls around 10w sustained. It's just a product design question on what the power draw stops at. More power = more performance, so usually if you can afford the power you spend it. So far there's been relatively little product overlap between x86 & ARM. Where there has been, like the recent ARM server push, power consumption differences tend to immediately go away. Laptops will be a _very_ interesting battle ground here, though, as differences get amplified.
Apple simply can't create any kind of x86 chip whatsoever (at least if they plan to use it :) ) - they don't have an ISA license and likely won't ever be able buy one, because Intel doesn't sell.
Well, apart from outright buying AMD. Or Intel. Or both. :)
Intel chips have basically stagnated for about 5 years: they abused their monopoly and market-leader position by offering marginal improvements each year in order to protect margins, and on top of that had fab shrink problems that insiders describe as coming from a culture not dissimilar to Boeing's, just with fewer lives at stake.
In the meantime, competitor foundries have caught up and now exceed Intel's ability to ship in volume. ARM is obviously eyeing the desktop, but more critically the server market, so the latest ARM IP is geared towards higher-performance computing - quite well, I might add.
State of the art fab + state of the art processor IP = state of the art processor. Not a huge surprise :)
I had a friend with an old old powerbook and the key shape reminded me of a thinkpad.
They can do it, but the flat short throw keys are too form over function.
The 2016 keyboard fiasco was too far for me though, and even now with my 16" 2019 mac with the "fixed" keyboard, I still prefer the wireless 2014 model, or the KB from my 2014 MBP.
they WILL compete, and they've been playing leapfrog with AMD for years.
I'm waiting to see Intel's response to the new 32- and 64-core chips from AMD.
Yes they have. It's why we have 10+ hour thin & light laptops. The quad-core i5 in a macbook air runs at 10W, that's comparable per-core power draw to an A13.
Controlling the CPU will give Apple more controls to tune total-package power, particularly on the GPU / display side of things, but CPU compute efficiency isn't going to be radically different, and that's not going to translate into significant battery life differences, either, since the display, radios, etc... all vastly drown out low-utilization CPU power draw numbers.
I'm guessing the switch is for those reasons, plus a more important one (my opinion only), control.
If they use their own chips, they're not stuck waiting on Intel (or any other x86 CPU manufacturer) to build CPUs that have the attributes that matter to them the most.
I wouldn't imagine just upping the vcore and the clock frequency will magically make it faster and let it compete at the same level as an Intel chip.
I'm deeply skeptical as a developer, you -will- get compiler errors, and it's going to suck for a while.
Isn’t this the exact concept of the VAG EA888 motor used in the Audi A4 and extended all the way up to the S3?
That said, the rumored chip is an actively cooled 12 core 5nm part which is more like an RS3 motor with extra cores/ cylinders
Having said that, I think the top requirement for most laptops is that "it's gotta run what I need it to run on day one".
For a huge amount of light users, ARM Macs are going to satisfy that requirement easily.
Once you move up the food chain to users with more complicated requirements, however, it's less of a slam dunk... for the moment.
I guess it's all trust that Apple can pull it off, and there are no detailed rumors yet?
Since normal processors calculate a lot of important things, I'm wondering if there really are fewer lives at stake. Of course it would be more indirect, but I could imagine that there are many areas where lives are tied to PCs doing the right thing.
E.g. what if a processor bug leads to security issues in hospital infrastructure.
Otherwise it's just fantasy sports.
(Fun and potentially useful, if you want to plan ahead, play the market, and other causes...)
This claims ARM Macs will be unveiled today (Monday) at WWDC. I guess we'll see soon enough.
Until we actually have details of Apple's implementations, there's no realistic way we can discuss
> how the new ARM-powered MacBooks will perform compared to the Intel-powered MacBooks
However, I can also see their chips outperforming Intel in sustained workloads. Apple has a history of crippling Intel performance by allowing the CPU to only run at full speed for a short while, quickly letting it ramp up the temperature before clocking down aggressively. Some of that is likely because of their power delivery system, some of it is because they choose to limit their cooling capacity to make their laptops make less noise or with fewer "unattractive" airholes. Either way, put the same chip in a Macbook and in a Windows laptop and the Windows laptop will run hotter but with better performance.
However, with a more efficient chip, Apple could allow their CPU to run at a higher frequency than Intel for longer, benefiting sustained workloads despite the IPC and frequency being lower on paper. This is especially important for devices like the Macbook Air, where they didn't even bother to connect a heatpipe to the CPU and aggressively limit the performance of their Intel chip.
I'd even consider the conspiracy theory that Apple has been limiting their mobile CPU performance for the last few years to allow for a smoother transition to ARM. If they can outperform their current lineup even slightly, they can claim to have beaten Intel and then make their cooling solution more efficient again.
It does appear to be a very poor thermal design, but you can see that most of the issues appear to be from trying to make the user not feel the heat.
It feels a bit icky to leave performance on the table like that; since you paid for a CPU and are getting a marginal performance from it. But I remember Jobs giving a talk before about how "Computers are tools, and people don't care if their tools give them all it can, they care if they can get their work done with them well"
That's not a defense of a shitty cooling solution, but it is a defence of why they power limit the chips. When I got my first (and only) Macbook in 2011, it had more than twice the real battery life of any machine on the market. That meant that I was _actually_ untethered. That's what I cared about at the time much more than how much CPU I was getting.
The Windows laptop can't possibly run hotter than a Macbook, all of which reach the throttle temperature (~100 °C) almost instantly.
I actually agree with the artificially limited performance, artificially worsened thermals theory. The MacBook Air design is extremely fishy.
the vertices would be power, heat and perf.
Apple just hasn't shipped an arm chip with high power and "excessive" cooling.
With a new A14 chip combined with intel stagnation, it wouldn't be too crazy to see a 50-100% increase, especially if you looked at multi-core scores.
Also, the first i7 chip was released in 2008. (The chip you refer to appears to be modern though, from 2018.)
To prevent any misunderstandings, I agree A13 has performance comparable to a modern laptop. I just don't think Geekbench is a right tool to measure that difference.
What does that even mean? The Intel platform is running Android?
Geekbench is a terrible benchmark.
In my own personal experience, running Geekbench on many machines, Geekbench scores are pretty close running between Windows and Linux on the same machine. At least for compiling code (my main use for powerful machines), their compilation benchmark numbers also fairly closely match compilation times I observe.
If this sounds like I'm saying it's apples (no pun intended) to oranges... that's pretty much the case. Apple is likely not going to produce an ARM chip in directly comparable fashion to an Intel chip, e.g. equal cache, memory bandwidth, TDP (using Intel's calc method), PCIe lanes etc. Mostly because it doesn't make sense to do so, because Intel's chips are designed to be a component of the system while ARM chips are largely designed to be an SoC.
* They need to ramp the clockspeeds -- much easier to say than do.
* They need to include 256 or 512-bit SIMD operations without breaking the power budget.
* They need to design a consistent and fast RAM controller (and likely need to add extra channels)
* They need to integrate power-hungry USB4/thunderbolt into their system.
* They need to add a dozen extra PCIe 3 or 4 lanes without ballooning the power consumption.
* They need an interconnect (like QuickPath or HyperTransport/Infinity Fabric) so all these things can communicate.
These are the hard issues that Intel, AMD, IBM, etc all must deal with and form a much bigger piece of the puzzle than the CPU cores themselves when dealing with high-performance systems.
The access to iOS and macOS means that they can profile every single app everyone runs and improve real-world performance, and they do.
If you are a company that supplies anything (software or hardware) to Apple or Amazon, and you are operating on >20% gross margins, Apple and Amazon will destroy you, and destroy your margins. You have to continuously innovate and can't just relax and chill and collect your margins, like Intel has been doing for the past 5 years, milking their advantage that has evaporated.
Other than the modem, I don't think there's a component on the latest iPhones that has more than 20% margins for the supplier. Apple is ruthless. So is Amazon, when it comes to fulfilment.
Apple is using the same fabrication process as everyone else in a mature industry.
There simply isn't a 2x improvement possible unless everyone else was staggeringly incompetent. Imagine Apple decided to build their own car and made their own electric motors. Would you believe claims that their motors are 2x as efficient?
Single core performance is currently doubling every five years or so. We’re not about to see it double in the next 24 hours.
Apple's ARM chips probably perform much better than reference ARM chips because they are much closer to the microarchitecture that Intel uses. But Apple's chips still consume less power than the Intel chips used in laptops. The secret sauce is probably the fact that the SoC has cores with different performance and power profiles: big cores are used for peak-performance bursts, and lots of small cores provide energy efficiency while the device is waiting for user input.
I think Geekbench is measuring peak performance and not sustained performance, but the numbers are probably correct. Making a representative comparison between mobile devices and desktops isn't possible because of their different thermal profiles, but it's highly likely that Apple will go for a passively cooled design if they can.
there is a time-honored way to do exactly that.
Constrain the comparison.
"Double the performance" (of $899 to $901 laptops with 2560x1440 resolution)
"Twice as fast" (as all laptops with thunderbolt 3 and no sd card reader)
Yeah, until they put the same chip, or better in a laptop envelope. Then what for Intel?
This was from 2016. People being surprised by this haven't been paying attention to where this was going:
Also, as I've said many times before, Intel trying to be as misleading as possible about the performance of its chips - such as renaming Intel Atom chips to "Intel Celeron/Pentium" and renaming the much weaker Core-M/Y-series (the same chips used in the MacBook Air for the past few years) to "Core i5" - is going to bite them hard in the ass when Apple's chips show that they can beat an "Intel Core i5". But Intel kept making this worse with each generation.
Maybe I'm spoiled because my development is mostly java, python, Go, docker, etc, all of which should have no runtime problem.
In terms of IDE, It looks like vscode is ported which is great.
I see two projects in my workflow: IntelliJ and iTerm2. I can live with a vanilla terminal, but I'll need IntelliJ. It looks like there's some discussion around support.
The biggest potential issue will be how people get hardware for doing the builds. Hopefully Apple will provide a good way of cross-compiling or a new machine that is suitable for datacenter deployment (ARM based mac mini?). We'll have to wait for the WWDC announcement to find out.
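For the Go part of that stack at least, the cross-compiling half is already routine today; here is a tiny illustrative sketch (the output name and build commands are just an example, not anything Apple has announced):

```go
// Sketch of an arch sanity check for CI. Build it for Graviton/ARM servers
// from any host with:   GOOS=linux GOARCH=arm64 go build -o archcheck .
// and for the local machine with just:   go build -o archcheck .
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Prints e.g. "built for linux/arm64" - handy for confirming that the
	// artifact you deployed really targets the architecture you think it does.
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```

The Xcode/Mac-app side of builds is the part that actually depends on what Apple ships.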
But instead of Docker, just debootstrap the amd64 release under a directory, copy the qemu user-mode binary (qemu-x86_64-static) into $DIR/usr/bin, and chroot into that directory. Docker is a bit of a mess.
Before chrooting, bind-mount:
/dev to $DIR/dev
/dev/pts to $DIR/dev/pts
/proc to $DIR/proc
/sys to $DIR/sys
/home to $DIR/home
copy /etc/resolv.conf to $DIR/etc/resolv.conf
Then chroot and log in (su -l yourusername). That way you can try running a lot of software.
debootstrap an ARM rootfs, please.
It's no stretch to say that iTerm 2 is one of the major reasons I cannot stand desktop Linux (or Windows, even with the new terminal there).
EDIT: It works great, and it has remote for ssh and wsl!!! This made my day!
Outdated information: Windows on ARM requires you to build UWP apps which is more work than just a recompile.