Safari developer mode is actually not that bad. The only thing that's really awful is the complete lack of (useful) plugins.
I was also able to run my web project without any issues using the ARM binary.
Bad memories aside, tracking what worked before and what is on its way to working again is a useful service. Back during the 68K -> PowerPC switch this kind of information was very helpful.
OBS recommends against Big Sur because of a tiny corner of functionality (the browser view). But OBS otherwise works on Big Sur and on Apple Silicon under Rosetta2.
Matlab works but isn't officially supported until mid-December, yet it has the red dot and not the yellow triangle.
Also, Firefox isn't natively supported outside of beta.
In the last month Edge went from 0.67% to 2.09%.
Unfortunately, they'll be removing the webRequest API, which was the only reason I would have wanted to use it. This will probably happen whenever Google deprecates Manifest v2 and Manifest v3 becomes the only interface for extensions, which will stop uBlock Origin from working.
Other than that, when I tried Edge it didn't work perfectly on every site. But, it will be nice to have another non-Google browser to choose from.
Edge is just a reskinned Chrome now.
This source (2020) also claims that global Firefox use is about 4% while Edge is under 3%, contradicting the claim (even including non-MacOS devices): https://gs.statcounter.com/
Another 2020 source gives 5.5% to Firefox and 0.5% to Edge on MacOS, also contradicting the claim: https://netmarketshare.com/?options=%7B%22filter%22%3A%7B%22...
Choose your own source:
I'm still surprised there isn't more server uptake of ARM considering the incredible power numbers. Cloudflare announced their current builds and it's all Epyc2 and no ARM. What about Azure and Google Cloud? Are ARM servers easy to launch and superior from a cost/performance perspective?
This Anandtech review has a page called "An X86 Massacre":
Are giants like Apple and AWS just making it impossible for a player like Cloudflare to buy enough of the leading-edge ARM processors? And since Epyc performs so well and is easy to buy, they wait another year?
Note that Anandtech was only comparing against chips that had already been replaced. They did that comparison just a few days before Amazon started offering Epyc Rome instances. See https://www.phoronix.com/scan.php?page=article&item=epyc-vs-... for a more like-for-like comparison, and the results are brutal for the N1:
"When taking the geometric mean of all these benchmarks, the EPYC 7742 without SMT enabled was about 46% faster than the Graviton2 bare metal performance. The EPYC 7742 with SMT (128 threads) increased the lead to about 51%, due to not all of the benchmarks being multi-thread focused."
This will catalyze giant migrations of workloads away from AWS Intel instances to the Graviton2 processors. Additionally, it seems clear that AWS will begin running all of the managed services (e.g., RDS, caches, mail, load balancers) with the superior ARM processors. The world is changing!
Or just from Intel instances to Epyc Rome instances. Which is a less significant migration with basically all of the same advantages.
But if they figured out performance with Graviton2, what makes you think ARM isn't the clear future for server performance per dollar?
Do you think Amazon will be better at design & fabrication than Intel (when they get their shit together) or AMD? Do you expect Amazon to actually throw Apple's level of R&D money & acquisitions at the problem? And when/if they do, why would you expect any meaningful difference at the end of the day on anything? People love to talk about supposed ARM power efficiency, but the 64-core Epyc Rome at 200W is around 3W per CPU core, which is half the power draw of an M1 Firestorm core. The M1's Firestorm is also faster in such a comparison, but the point is power isn't a magical ARM advantage, and x86 isn't inherently some crazy power hog.
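The per-core arithmetic behind that figure is straightforward (dividing TDP evenly across cores is a crude approximation, since real per-core draw varies with boost clocks and shared uncore power):

```python
# Rough watts-per-core estimate: package TDP spread evenly over all cores.
epyc_rome_tdp_w = 200
epyc_rome_cores = 64
print(round(epyc_rome_tdp_w / epyc_rome_cores, 2))  # 3.12 W per core
```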
Phoronix put Graviton2 to the test vs. Epyc Rome and the results ain't pretty: https://www.phoronix.com/scan.php?page=article&item=epyc-vs-...
Graviton2 being a bit cheaper doesn't mean much when you need 50% more instances.
But right now there's only a single company that makes ARM look good and that's Apple. And Apple hasn't been in the server game for a long, long time now. Everyone else's ARM CPU cores pretty much suck. Maybe Nvidia's recent acquisition will change things, who knows. But at the end of the day if AMD keeps things up, or Intel gets back on track, there really doesn't look to be a bright future for ARM outside of Apple's ecosystem and the existing ARM markets.
Just because other ARM cores suck today doesn't mean they have to forever. Apple's don't. They took it seriously. Perhaps Amazon is too and we are just at the start of their journey. It took Apple over 10 years to get to where we are with the M1.
You cited one of the significant contributors to performance - the 8-wide decode. x86 is hamstrung to 4 because of legacy. We aren't at the beginning of the story with ARM for performance, but ARM certainly isn't nearly as hamstrung out the gate by the legacy of x86 either.
Heck, is there anyone making a pure 64 bit x64 chip? There's a bunch of overhead right there.
You're right that ARM isn't magical - but ARM does have the potential for significantly more runway and headroom. The trade-off is backwards compatibility. As most code continues to move further and further away from direct hardware interaction, is that backwards compatibility as valuable overall as it was 20 years ago? 10 years ago?
I guess we will find out :)
AMD has a semi-custom division that will do exactly that for anyone capable of paying: https://www.amd.com/en/products/semi-custom-solutions
Of course you will because that happens all the time. How do you think we ended up with VT-x and friends? Intel took a use case that reached enough usage and added specialized instructions for it. This has happened a ton over the years on x86. See also AES-NI for a more application-specific addition. In addition to obviously the huge amount of SIMD experimentation.
This is not fruit Apple discovered. AMD & Intel haven't been leaving this performance on the table the last 20+ years. Hell the constant addition of instructions for certain workloads is a major reason x86 is as huge & complex as it is.
In terms of a viable contender to the Intel and ARM incumbents, I can only realistically think of the POWER architecture. Google was experimenting with POWER8 designs a few years back with a view to using POWER-based blades for GCP – similar to what AWS has done with Graviton. There has been no further news since then, so it is unknown whether (or when) we will have POWER-powered compute instances in GCP. POWER is the only other mature architecture with plenty of experience and expertise available out there (compilers, toolchains, virtual machines, etc.).
Whether RISC-V will become the next big thing is yet to be seen, with ISA fragmentation being the likely culprit. The 64-bit RISC-V ISA has yet to hit the v1.0 milestone, so we won't know for another few years. Unless a new strong force appears on stage to push the RISC-V architecture.
It's only being positioned as a lower cost mid tier offering because it's uncompetitive otherwise. It's almost certainly not even cheaper to make. The monolithic design will be more expensive than AMD's chiplets. Cheaper for Amazon maybe, as obviously AMD is taking a profitable slice, but that's a slice that can easily be adjusted should Graviton start looking stronger at some point.
At the cloud data centre scale, the difference of 110W (Graviton2) vs 180W (AMD) is substantial as bills pile up quickly.
I am not sure what your point is, anyway. As a business customer, if it is going to cost me 30% less money to run the same workload regardless of the ISA, I will take it. A lower power bill for the cloud provider that results from a more efficient ISA is a mere comforting thought. No more, no less (to an extent). Philosophically and ethically, yes, I would rather run my workload on an ISA of my choice, but we can't have that anymore, and Intel is to blame for that. Not that I personally intend to blame anyone.
Wrong CPU, that's the old Zen1 14nm Epyc. The one that Graviton2 is going up against is the 64-core Epyc 7742 (or any of the other Zen 2 7nm Epycs).
And you can't call Graviton2 110W when you need 50% more of them vs. everyone else, and you can't ignore the power from the rest of the system. You need 50% more machines. That's going to be equivalent if not more total power usage for Graviton2 than Epyc 7742 for equivalent compute performance. Baseline power usage of a server is fairly high. It's not the rounding error it is on a laptop or even desktop.
EDIT: also as far as comforting thoughts go, manufacturing 50% more machines is vastly more environmental impactful than the power difference.
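A back-of-the-envelope sketch of that fleet-level argument; the 80W per-machine platform overhead below is an assumed illustrative figure, not a measured one:

```python
def fleet_power(cpu_w, platform_w, machines):
    """Total power for a fleet delivering equal throughput."""
    return (cpu_w + platform_w) * machines

# Graviton2 at 110W, but needing 50% more machines than one Epyc 7742 box
# for equivalent compute, per the Phoronix numbers above.
print(fleet_power(110, 80, 1.5))  # 285.0 W
print(fleet_power(180, 80, 1.0))  # 260.0 W
```

Under these assumptions, the nominally lower-power chip ends up drawing more at the fleet level once the extra machines (and their baseline overhead) are counted.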
> I am not sure what your point is, anyway. As a business customer, if it is going to cost me 30% less money to run the same workload regardless of the ISA, I will take it.
I'm saying that 30% less cost is a temporary fantasy since all signs point to the Graviton2 being more expensive to manufacture & deploy vs. the competition. If you're not coupled to ISA, sure take it while it lasts. Why not be subsidized by Amazon for a bit? But if you're talking long-term trends, which we are, it's not a pretty picture. Pace of improvement isn't compelling, either. The CPU core in the N1 is basically a Cortex A76. ARM claims a 20% increase in IPC going from an A76 to an A77. Not bad. But AMD just delivered a 20% IPC increase going from Zen 2 to Zen 3, too. So... the gap ain't shrinking.
Besides, 30% less per instance when you need 50% more instances doesn't work out either. That ends up being more expensive overall. Depending on your workload it might not even be that close. In things like PostgreSQL & OpenSSL, Graviton2 gets absolutely slaughtered.
Plus every watt costs money, so the move toward performance per watt is huge for DCs!
I was wondering how other companies will compete with Apple's data center advantage--presumably Apple will replace most x86 infrastructure with cheaper, lower power, faster Apple Silicon.
Even if Graviton2 is far behind the M1, it seems like Amazon can catch up. Particularly if they are able to hire away engineers from Apple's team. Even if Amazon trails Apple by years in performance per watt, it can still likely offer a compelling change from what Intel or AMD may be able to accomplish in the same time period.
But as you can see at the end of the article, other providers are considering/using ARM more.
It seems like the first-gen of Graviton wasn't great, but then they seem to have made huge leaps with Graviton2.
I certainly am heartened by all the news about processing power improvements. I'll just join the rest of y'all young whippersnappers after you pay the early adopter tax for me.
If this was a brand new chip, yes, but Apple's chips have had years worth of proofing in iPads etc.
Great to see Office 2019 (not 365) works - was putting off picking up a license for that reason.
Also be aware that unlike Windows Office, Mac Office is supported for only 5 years, and for the 2019 release, 2 years are already gone (it will be supported till Oct 2023; Mac Office 2016 has been unsupported since last October).
> We currently have a private beta for experimental arm64 linux support (no 32bit). If you have a valid Sublime Text 3 license and are willing and able to test on your arm64 device....
some context: https://www.reddit.com/r/docker/comments/jxc1ge/docker_and_a...
Presumably ARM docker images could run in an ARM Linux VM.
Seems like JetBrains still needs to get their software ready for Apple Silicon: https://youtrack.jetbrains.com/issue/JBR-2526
Any thoughts or info on the security implications of a first generation CPU design? Is it safe to assume that a design focused on cutting edge performance may have compromised on security in some form? Does the fact that this is first gen indicate opportunity for hackers to discover low hanging fruit vulnerabilities possibly to the benefit of nation state or private actors?
I feel like the long term path for silicon will converge on extreme compartmentalization of general purpose computing hardware inside chips, designed from the ground up to achieve physical process isolation purpose built per task, with highly secure hardware IPC all on a single high perf die.
Interested to learn what Apple has done to build a "more secure" CPU design. edit: A quick web search yields relevant results on this topic already, e.g. work by Chinese based Tencent Security.
This is barely different than AMD releasing yet another line of Zen chips or Intel shipping a new Core i9.
They have. And they've been shipping the chips in iPads.
I may be making a few semantic errors regarding "first gen", et al.
The M1 is part of a technical lineage that is over 10 years old at this point. It's hardly "first gen" from what people usually intend when using the phrase "first gen".
This is also Apple's 4th rodeo in changing the instruction sets that Macs run on, and since the last transition their tooling and their expertise in frameworks, compilers, and other aspects of their ecosystem have matured significantly - as evidenced by all the reports of the incredible seamlessness of Rosetta 2 for the vast majority of applications (including terminal apps like Homebrew, which still boggles my mind).
I figured there would at least be one or two earth shattering issues - but surprisingly no, seems to be pretty transparent for the vast majority of use cases. It's pretty mind boggling when you really think about it.
"Is Apple Silicon ready?"
Ready for what? Apple Silicon is the fixed quantity in this equation, while the software is what is changing to be ready for Apple Silicon, so the construction feels backward. An alternate construction could be "Is it Apple Silicon-ready," which I suppose one could stretch the title to read as a shortening of, but it's still awkward either way.
All that said – this is the best UI I've seen yet for an Apple Silicon compatibility list, confusing title or not.
I do windows desktop work and try to picture what would happen if our customers were suddenly moving to Win10 on Arm and expecting our software to work. We have dozens and dozens of third party binary dependencies, each of which could turn out to be the one that doesn’t translate. Not all of them could realistically be replaced or updated. The situation would basically be one where Microsoft had announced the death of our software and probably business.
What I am worried about is whether the GPU is anything worthwhile. All the focus in the reviews is on the CPU, but the GPU seems to be where it mostly falls short. Not enough external screens, for example, though that can be fixed in a newer generation. But is it faster than what Apple shipped in Intel hardware with AMD graphics? Some people will feel the regression in speed and capabilities quite hard. I don't see much focus on that in media publications.
However, given the investment in both the neural engine and integrated gpu - I wouldn’t be surprised to see something interesting in 6-24 months.
Being a decisive, opinionated company means that you anger a lot of people, but it also means you’re predictable once you’ve announced something like this.
First results are impressive: there are reports of games that were unplayable on the 2020 Intel MacBook Air that now run well on the M1 MacBook Air.
PG::InternalError: ERROR: could not read block 38 in file "base/84897/85294": Bad address
Context: I'm after a flexible material-like table component for a personal project.
One minor bug - Sketch shows under all apps, but not under design apps (I thought it was missing altogether because I didn't see it under Design).
Seriously though, what’s the plan for Docker? Will all containers need to be built for ARM or will Rosetta 2 be able to run Docker in an x86 VM?
Homebrew is my other benchmark.
Rosetta doesn’t support virtualisation (which would be needed to run the Linux VM under which the containers run).
So you’d need an ARM version of Docker, and an ARM Linux VM, and since the containers rely on that Linux kernel, I assume ARM containers running ARM software.
I suppose it might be possible to develop a para-virtualisation framework that runs under Rosetta.
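For what it's worth, multi-arch tooling usually resolves this kind of thing by mapping the host architecture to an image platform string. A hypothetical sketch (the mapping and function name here are illustrative, not Docker's actual API):

```python
import platform

# Illustrative mapping from host architecture to a container image platform.
ARCH_TO_PLATFORM = {
    "arm64": "linux/arm64",    # Apple Silicon macOS
    "aarch64": "linux/arm64",  # ARM Linux
    "x86_64": "linux/amd64",
}

def default_image_platform():
    """Pick the image platform matching the host, defaulting to amd64."""
    return ARCH_TO_PLATFORM.get(platform.machine(), "linux/amd64")

print(default_image_platform())
```

On an M1 Mac this would select `linux/arm64` images, which is why ARM containers in an ARM Linux VM is the natural path rather than emulating x86 throughout.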
I haven't stress tested the ARM version, but it is the one I'm currently using and I haven't had any issues so far.
▶ ruby -v
ruby 2.6.3p62 (2019-04-16 revision 67580) [universal.arm64e-darwin20]
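On that note, a quick way to check from a script whether it's running natively: a native process on Apple Silicon reports arm64, while one running under Rosetta 2 (or on an Intel machine) reports x86_64.

```python
import platform

# "arm64" for a native Apple Silicon process; "x86_64" under Rosetta 2
# or on an Intel machine.
print(platform.machine())
```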
In any case, the domain name is incorrect. It should be "isitapplesiliconready". The chips are clearly ready; it's just that some developers haven't recompiled their apps for them yet.
What I've gathered via osmosis here about Apple Silicon is:
1. It will only be in Macs; you can't buy the chips or mobos to build your own machine.
2. They're very energy efficient.
3. Their performance is okish.
Doesn't really seem earth shattering. Like, if they were a super low energy alternative to the duopoly of AMD or Intel, that would be pretty cool.
But if you buy a Mac nowadays, it's like a console with set stuff in it that you can't change or upgrade. Now that stuff will be improved/updated in newer Macs, but does that not happen regularly anyway?
Several people are saying the performance is amazing. Can you link me to some benchmarks? The only numbers I can find are these.
"In Geekbench 5, the A14X yields a single-core score of 1,634 and rakes in 7,220 in the multi-core test."
These are very low scores; like, my desktop gets many times that. My old low/mid-level work laptop from 4 years ago had a 3000 single-core and 11600 multi-core score.
That's how most people buy and treat computers. We're the weird ones who upgrade them and keep using them for 9 years, 3 SSDs, and 4 RAM bumps.
> Their performance is okish
Nope - not even close to merely 'okish'. These first devices (which were clearly targeted as the cheapest, lowest-end devices Apple sells) outperform the vast majority of PCs (and Macs) sold today, and not just by a tiny bit. Look at some of the reviews by developers - in many cases their existing Intel apps and games are running better under Rosetta emulation/bridge/whatever you want to call it than they did on native Intel Macs. Compile times for a $999 MacBook Air beat a $6k+ iMac Pro by 30-40%. Games getting better frame rates under Rosetta than they did on native Intel boxes.
The list goes on - these chips are beasts, and this is only the beginning. The high-performance versions of these will make people who bought 8/10/12 core iMac Pros recently weep.
That seems an understatement. The first version, targeted at (for Apple) low-end devices, is much faster than most x86-64 based computers.
For example, I was excited for Ruby on Rails in production when I was using PHP/Perl at that time. Also when I learned how Kubernetes/Docker works after getting tired of VMware ESXi.
This time there's a workable, non-x86 system on the market, and it has some new features that Intel doesn't have; that's why I'm excited.
This launched strong out of the gate. I figure they will do even better in a generation or two.
Oh well, at least Rosetta2 seems to be working really well - you will be able to run a lot of the software you need rather well, if not to its fullest potential. The execution on Rosetta2 is really good and that's important. But I think it does go to show that the "Pro" in "Macbook Pro 13" does not mean all that much. At least not if they're going to ship with the majority of pro software not being native, many popular developer toolchains still months from being ready, and very limited I/O and RAM options. The Macbook Air and Mac Mini I fully get for the first releases on new hardware, but the Macbook Pro 13 really feels odd in this lineup if the word Pro is supposed to mean anything.
Let’s face it, that MacBook Pro is mainly there to make their buyers feel pro, while not really providing performance benefits over the Air.
It’s like you have the stock car (MacBook Air), the “sports” version of that car which just has some stripes sprayed on and a red colored gear shift knob (entry level MacBook Pro), and then the actual race version of that car which has a tuned engine etc. (other MacBook Pros).
The biggest advantage I see between the Pro and the Air, based on everyone I’ve talked to with both, is battery life. That might be worth the $200 or $250 depending on your storage configuration.
This is true now, but was it true before moving away from Intel?
For a while, the Air has been the form-factor Apple expected to be getting entry-level-MBP perf from, and requesting chips from Intel to satisfy that; and for a while now, the response from Intel has been a chip that thermal-throttles so hard in that form-factor that the performance has bombed it down to a lower class of computer.
The base-model MBP, then, has been Apple’s compromise: it’s the result of them taking those chips that were supposed to be just fine running in an Air, and giving them enough chassis and fans to make them perform the way Intel originally promised they would.
In other words, the base MBP is “a MacBook Air” in terms of what performance Apple targeted the Air to achieve each gen; and the Air itself is the pretty design Apple’s IxD dept put out in anticipation of that target, mated to an altogether-worse processor in order to get it out of the gate.
Now that Apple has a chip with actual thermal headroom, this duality will go away. You’ll get base-MBP perf in Air chassis, and these overlapping categories will merge into one.
Are we talking about the same computer, where the fan is not even thermally connected to the CPU heatsink? That's Apple's fsckup, not Intel's.
For a variety of reasons, it didn’t work. Not only was the price higher, the port selection (just two TB3s at a time when the industry hadn’t moved en masse to USB-C, remember, this was four years ago) was really limiting. And of course, the keyboard drama.
It is my contention, though I have no proof, that Apple didn’t want to release the redesigned Retina MacBook Air in 2018, but had to based on continued sales of the older model and the lack of love for the Touch Bar free MacBook Pro. (Recall, even after the redesign, Apple was still selling a Broadwell-based MacBook Air, technically into 2019. That was essentially the same MacBook Air that was first released in March 2015.)
Once the MacBook Air was redesigned, the two-port MBP never made any sense, even with the re-added Touch Bar. In fact, every single year, when I participate in Jason Snell's Apple Report Card, I comment on this weirdness in the lineup. I don’t think the two-port MacBook Pro needs to exist.
Not sure what you expected, but having seen previous transitions, this is smooth as butter. If you are a Pro, you know that jumping onto a platform early is fraught with potential gotchas.
> the Macbook Pro 13 really feels odd in this lineup if the word Pro is supposed to mean anything.
There has always been a bit of a blurry line between pro and non-pro Apple products. This model year it means just as much as it has on many other model years. Apple left the "higher end" Intel builds in their product line to address the needs of developers who want 32GB of RAM or many other configurations.
This Pro is exactly the machine that the developers porting Go or Rust over to MacOS will likely be using.
What I'm commenting on is that anything that is not already completely bought into Apple's stack is not there yet, and that reveals where Apple's priorities are. Anything that's cross platform or depends on cross platform components needs to play catch up now. It's fine if they have priorities though?
It reveals who prioritized updating their software for Apple's new platform. Apple can't control who ports a given piece of software to their new platform and how quickly it gets done. The surface area is too large.
All Apple can do is get the tools out there for the people who are doing the porting.
Most likely Go and Rust aren't ported simply because porting languages is hard and time consuming.
> Anything that's cross platform or depends on cross platform components needs to play catch up now.
I'm not sure why this is remotely surprising or noteworthy. Apple needed Xcode, Swift, and Objective C running in order to build MacOS. There would not be a platform to try to port to if Apple's toolchain wasn't working prior to day 1.
Rust, Go, and React Native are all by necessity going to rely on Apple's toolchain to run on Apple hardware, so by nature they take longer to build.
So, to be clear, Rust is ported. It's not a tier 1 target yet, but it does work. Time is not the only issue here, as elaborated downthread. For the gory details, see https://github.com/rust-lang/rust/issues/73908
It might not be for you, but then I'm curious why it's noteworthy enough to have two layers of comments about it. If you don't want to talk about that then by all means let's not.
Apple has priorities, and these have results. I think it would be interesting to talk about that, as Apple could have invested time and money in getting some more things going (I'm thinking virtualization and Docker related things especially - you know, the stuff they demoed at WWDC!), but they didn't. It's fine they didn't, but we can still talk about that can't we?
You noted it, I was trying to explain something which seems to me exceedingly obvious.
> Apple has priorities, and these have results.
The reason these tools were built/ run first is due to the priorities and constraints of outside individuals, not Apple's. If you build a graphics editor or a text editor using Apple's toolchain and most of your clients run Apple, porting is likely high priority and not super hard.
A lot of these things you are complaining about are just really hard problems.
Docker requires a hypervisor which wasn't part of the A12Z processor they shipped in the DTK. It also depends on Go.
Go isn't available because porting languages to a new processor is non-trivial.
Rust has the above issues, plus didn't Mozilla lay off a huge chunk of the Rust team?
> Docker requires a hypervisor which wasn't part of the A12Z processor they shipped in the DTK. It also depends on Go.
Apple did have prerelease hardware that supported virtualization, which they supplied to Parallels. Docker has not worked with this hardware based on their press release and GitHub issue, although they may have received some specs. In any case, Docker depends on Golang, which won't release until February.
If Apple did make Docker a priority (which you'd expect given the namedrop at WWDC), this seems quite strange to me.
> Go isn't available because porting languages to a new processor is non-trivial.
I'm sorry, I don't buy this at face value for Go and Rust. I believe these teams could support a new architecture very well very quickly with the right tools and support, like how their stuff also runs on weird things like S390. They show day in day out their stuff is very flexible. By no means do I want to suggest it's a trivial matter, but these toolchains are made to be portable and currently support both much more exotic and very similar systems to aarch64 macOS at the same time - like x86-64 macOS and arm64 iOS. Rust's current bottleneck may be due to CI sure, but then the question becomes why weren't DTKs used for CI?
It seems that if Rust and Go developers were approached with the right tools and support, they wouldn't have had to figure things out now. Is it that bad that they have to figure things out now? Not really - but I do think it could have been avoided, and we wouldn't have had to wait for a golang release in February and a Docker release after that.
Rust's CI is pretty demanding, both in general and because it's core to our stability story; we need it to be reliable and something we can rely on for a long time. DTKs are by their very nature not a long-term thing, but a short-term stopgap.
Mozilla laid off a huge chunk of the people they paid to work on Rust, but that was a very small (but important!) portion of the overall Rust team.
Shipping this gets M1 hardware in the hands of developers who can then use it to test their software. You mentioned Rust, which is currently blocked on getting hardware hooked up to CI (https://github.com/rust-lang/rust/issues/73908)
Also, such an impressive release of hardware shows third party devs that Apple is serious about transitioning, and doing so quickly. Before this laptop was released we didn’t know what the performance delta would be.
Removal of the audio jack in for headphones comes to mind.
For the machines these machines replace, non-native apps still run significantly faster under Rosetta 2 than they do on the current hardware.
When native ARM versions of those apps are released, you'll get another speed boost.
I fail to see what is negative or needs "catching up".
Just because M1 Macs can compete with all but the fastest laptops and the majority of desktops already doesn't mean that's their intended use. These are Apple's entry level models. And they spank the vast majority of other machines even when running code not yet optimized for them.
There is catching up, but it isn't by Apple.
Anyone could request one and I’m not aware of any developer I know, no matter how small, not being able to buy one from Apple. I was able to get one and I don’t even have anything in the App Store at the moment. As for Apple gifting them to OSS projects, I mean, I guess that would be nice, but frankly the corporate stewards of Go and Rust can buy their own, just as Electron and others did. I imagine some Debian people may have been given loaner machines or stuff gratis, given the custom Debian build proof of concept at WWDC, but I have no insight into that.
The real challenges with AS are going to be for anything that uses virtualization or lots of lower level libraries that need to be compiled for ARM64 and to be honest, that was clear to anyone who watched any of the Apple Silicon sessions at WWDC and read the accompanying documentation.
You’re exactly right that many popular developer toolchains aren’t ready right now. Most of us didn’t expect that and some of us were screaming that loudly (and getting yelled at and called haters by fanbois even though we almost exclusively use Macs and Apple hardware) to prepare people for exactly this reality. The support will come over time and it’s also clear to me at least, that the way at least some stuff works, might not be as nice as the way it was under Intel or even PPC, just because of changing priorities with macOS, and we'll need to come to terms with that too.
You'll notice the 13” MacBook Pro that was replaced in the lineup, a device I’ve always found odd period (just get a MacBook Air), is the tweener device with two ports, and originally, no Touch Bar. This isn’t the much more powerful 13” MacBook Pro that got a big update on Intel alongside the fixed keyboard this May. This is the one that got a fixed keyboard but was still running a two year old 8th-gen Intel processor, AKA, the MacBook Pro you shouldn’t buy and should really just get a MacBook Air instead (the Intel MacBook Air refresh was running a newer processor than the two-port MBP).
Honestly, this is a huge boon for non Apple Silicon ARM64 projects and libraries because a lot of developers won’t do this sort of work for Raspberry Pi or Pinebook or any number of ARM boards, but they will for Apple. And that will trickle downstream.
This is the first, and worst / “least pro” Apple silicon machine Apple will ever make. But it absolutely won’t be the last.
That’s interesting. I was shocked at how many green check marks there are for a chip that was announced in June. It takes time to write and test an application. A lot of teams must’ve really prioritized it.
Also, Apple Silicon compatibility is not enough. One needs Big Sur compatibility too.
I mean this point is largely moot by Rosetta covering all the basics well and still making the Macbook Pro 13 a serviceable Pro computer for a lot of (but not all) use cases. It just seems strange to me that popular developer things like Go, Rust, Docker, virtualization of any kind are still months away despite these being super important use cases of the Mac. Maybe Apple just doesn't feel that way, or have the numbers showing that my impression isn't true, but it feels strange that they're letting the community just figure it out by themselves now.
From all appearances, practically anybody who applied for a DTK got one so in many cases I would take lack of a green checkmark as the dev not applying for a DTK.
Not entirely true. A lot of software is simply hard to port or has a lot of dependencies which need to be ported.
A lot of developers already have a full plate and porting to a new platform is low on their list of priorities. As the user-base expands, it will move up on their list of priorities.
Complaining about Pro software not being native on Apple's entry level machines.
Probably the best praise you could heap on the M1 even though it's obvious you didn't intend to :)
If you maintain legacy apps, you now need an x86 box. And then you have to ask yourself, why buy a Mac too? They need to fix that somehow. They have enough money to sponsor something based on QEMU, for example. They're just cheap.