As far as I can tell, the M1 does have virtualization support, Docker just isn't ported yet.
Update: Also, from Apple docs it seems like you won't be able to run emulation and virtualization in the same process. So you can run x86 Mac apps, but it's likely x86 Docker images will be out-of-reach.
A12Z/DTK: HW does not support virtualization at all.
Apple M1 / New Apple Products: HW does support virtualization for ARM64 guests (both windows and linux demonstrated).
What about x86 software in the guest OS? Not with Rosetta. Instead, the guest OS will have to provide its own translation (such as Windows-on-ARM's current x86->arm64 or upcoming x86_64->arm64 feature). I'm not aware of any usable high-performance x86_64->arm64 translation for Linux.
Docker w/ arm images: needs some work to be able to work on mac/arm virtualization, but it's coming.
Docker w/ arm linux kernel + x86 userland images: Any translation solution would be found within the linux guestOS, not macOS. I don't know if any candidates exist. Maybe qemu?
Docker w/ full x86 image (incl. kernel): I don't think this is possible?
I spent more time trying to figure out ways to get the right images than I did learning k8s, so I kinda put the project down.
I do think, though, that this is exactly the sort of thing that will start to flesh that out long term.
Also, be real careful you don't accidentally build for x86 when installing libraries, or it all goes to shit. I got bitten by this using iTerm (building it from source was fine, though) and not realizing I had the x86 variant. That is to say, make sure your terminal is the system terminal or built for ARM -- otherwise, when you install things using package managers, you'll get the x86 variants.
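A quick sanity check helps here. This is a sketch: `sysctl.proc_translated` is the macOS key that reports Rosetta translation (it's absent on other systems, hence the guards):

```shell
# What architecture does this shell session report?
# arm64 = native on Apple Silicon; x86_64 = Intel, or running under Rosetta 2.
uname -m

# On macOS, prints 1 when the current process runs under Rosetta, 0 when native.
# The key doesn't exist on Linux or on Intel Macs, so fall back gracefully.
sysctl -n sysctl.proc_translated 2>/dev/null || echo "no Rosetta translation here"

# Check which architecture an installed binary was actually built for.
command -v file >/dev/null 2>&1 && file "$(command -v bash)" || echo "file(1) not available"
```

Running the same checks from inside your terminal before reaching for the package manager tells you which variant `brew install` is about to hand you.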
Like, I'm super stoked for the M1, but also just totally fucking irritated about the toolchain changes and MacOS in general... They've not been the best stewards of their software and developer communities as of late, and they're pushing a totally proprietary architecture onto them expecting them to foot the bill (hours spent fixing/debugging) to make their platform usable.
Truly, I get paid to write software that runs on Linux, not on Macs. In fact, I've never been paid to write software for Macs, because no production systems use Macs. Now it's harder for me to do primary development on a Mac, because I must either fix macOS or live with macOS-only bugs -- bugs which only exist on a platform I don't intend to deploy my code to. These aren't just some config bugs, either; they're going to be a fucking mess of irritating, show-stopping, moving targets with a negative ROI.
I still can't believe they didn't incorporate containers into the OS before switching platforms to keep developers around but that's a totally different rant.
If i can't run our own images on a future mac, i will no longer buy one.
I can't force my team to maintain two versions if there is no better business reason.
I'm quite curious how this will play out.
Our app is React/ Typescript and we don't use Docker or x86 virtualization at all. I do need MySQL and a few other tools, but I don't expect they will be long in coming (or maybe they're already running in Homebrew -- is there a Homebrew for ARM?)
If you make your living writing Mac or iOS software, this is a gift. Might be worth waiting for the next generation with more RAM and even beefier CPUs, but otherwise this is ideal.
There is, but it's still highly experimental, with 50% of formulae not working yet.
Better to run Homebrew with Rosetta for now.
For some reason I'd completely forgotten this was an option. Looking at the discussions on the Homebrew boards, it looks like this will be the way to run Homebrew for some time.
You can track progress here. Be advised that packages listed as ‘check again when XYZ is fixed’ may themselves have issues that can’t yet be discovered.
My servers run Linux, not BSD.
Having to deal with this sort of stuff is just a waste of time.
Macs don't have a monopoly on quality, and haven't for a long time.
And since it's a usual response: anyone claiming Linux is too fiddly hasn't used it in a long time. While it can be if you want to roll your own DE, that's completely optional. Using something like Fedora, you can do almost everything non-developer-related with point and click.
Having been a Windows guy all these years, I've found myself increasingly using Linux over that span. The only thing preventing a complete switch is MS Office. I've barely used macOS, except to diagnose issues on macOS and run Xcode.
MS Office is very compatible with Wine -- even the newer versions. The only problems seem to be with apps that have direct replacements, like Skype, or that are irrelevant, like OneDrive.
For setting up wine and managing my apps I use Lutris. It is meant for games but is brilliant for setting up windows apps as well. I wish they would acknowledge that fact in their UI but it doesn't bother me too much that everything is called a game.
If you could help me out by pointing out which products/technologies you used and maybe a website describing step by step, that would be great.
Apart from that, standard QEMU process. This was the guide I used:
Yes, I understand why, but still...
That could be something doing dynarec like box86, which is, mind-bogglingly, the only way to run (x86) Zoom on armv7hf Linux atop a Raspberry Pi 4 (I tried it on a Pi 3B: it works, but it's way too slow)
as mentioned in the issue comments: Docker x86 version already supports running ARM images via qemu out of the box on Windows and Mac, so x86 images via qemu on ARM host should not be a problem...
it "works", but on my overpowered i7, running a Raspberry Pi Docker image that way with qemu-user-static is only marginally faster than running the code on an actual RPi 3, which is pretty much potato-levels of power.
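For reference, the mechanism being described is roughly this (a sketch: `multiarch/qemu-user-static` is the commonly used image for registering the handlers, and everything is guarded so it skips cleanly where no Docker daemon is running):

```shell
# Register qemu user-mode emulators as binfmt_misc handlers, so the kernel can
# transparently exec foreign-architecture binaries (Docker Desktop on Mac and
# Windows ships with this already configured; plain Linux hosts need it).
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

  # Now a foreign-arch image runs on this host -- slowly, via emulation:
  docker run --rm --platform linux/amd64 alpine uname -m
else
  echo "no usable Docker daemon here; skipping"
fi
```

Every syscall-level instruction goes through translation, which is where the potato-level performance comes from.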
(It would at least be a huge problem for me. I mostly develop server software that runs on Kubernetes on AMD64 machines.)
PS C:\Users\hokaa> docker run -it --platform aarch64 ubuntu
root@aaf35d3fd9de:/# uname -a
Linux aaf35d3fd9de 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
And yes, AFAIK it uses qemu's binary translation.
(1:40:06) if the timestamp doesn't work
For the last year she has been working at Apple so I've always suspected that she was developing on M1 hardware but with Linux OS.
Many, many images are now multi-arch so I encounter failures to deploy things to the Pi much less frequently nowadays.
To me this says that Rosetta doesn’t convert VT-X commands (et al) to their ARM counterparts.
I don’t see an issue here... sure you’d be left without a dedicated VM framework from Apple, but you should be able to run a qemu process that is emulating a Linux VM. That qemu binary could be either x86 or Arm. But instead of using the Mac OS Hypervisor.Framework, you’d be back to using stock qemu that we used to use. It just isn’t packaged up as nicely at the moment. (And who knows how fast it will run).
If the early story around performance of the M1 bears out, I suspect this won't be the case. If their CPU for the base MacBook Air is trouncing higher-end MacBook Pros, what kind of beast are their higher-end CPUs going to be?
People who really need x86 compatibility will stick with x86 based Macs. Others? I'm not sure. For my current job where everything runs on node.js, the performance and battery life on the new CPU is pretty appealing.
All that performance is going to be used to run cloud based Linux x86 development environments. Repl.it and Github codespaces are the real winners with the move to Apple Silicon
I'm not entirely sure that's true. Or at least not entirely relevant/ interesting.
First, you have thousands of developers writing code for iOS/ Android & Chromebooks. That right there is a pretty big chunk of developers. And at least for the iOS developers, running on ARM instead of x86 is a significant advantage.
Second, a lot of us are writing software for the web which means primarily writing software that runs in the browser. So long as we have a good running version of node.js and a browser to test with, we are golden. Assuming for a moment here that Google Chrome and Firefox are both going to be ported to ARM, I don't really see why I care about x86 linux except...
Of course you need to serve up your site and that piece is usually running on x86. Only... many of us are already crossing platforms. Our web server is Node on Linux and my dev system is MacOS (no Docker or Linux VM) so I'm already cross platform.
When I was primarily running a Python/ Django shop, it was a similar deal—so long as I was able to get Python running I was good. There are definitely a few places where you notice the difference, but there are definitely places where I can use an extra 50% battery life and a faster CPU as well.
I don't know how many other developers really need x86 and how many don't. I have had jobs where running Docker/ Kubernetes was important, but I've had a fair number where it wasn't as well.
What gives me pause moreso than this is the absence of the touchscreen and Face ID, which are very obviously coming in the next refresh (iPad app support without touch?).
As impressive as the new MacBooks are, they’re very clearly a stopgap solution so as not to replace everything all at once.
I agree, but at the same time they pushed it pretty hard by not just doing an Air with the chip like I thought they would, but putting it in a Pro as well (with, according to the early benchmarks I've seen, barely any performance difference -- I mean, they could at least have put two of them in there)
The Pro adds
- Brighter Screen
- Better cooling for extended CPU loads
- Touch Bar (which many don't care for)
- Bigger battery.
The MacBook Air boosted to 16GB is likely the biggest bang for the buck unless you're going to be hitting that CPU hard all the time. The iPad has pretty damned good performance without fans, so I expect the Air will do quite fine.
I used to be fairly laissez-faire about it, but now I really think big tech companies selling hardware should be legally required to support at least some level of freedom for end users (technologically, rather than as in customer service).
See this WWDC session: https://developer.apple.com/videos/play/wwdc2020/10686
and this manpage: https://pastebin.ubuntu.com/p/RwcT8stYMY/
If you have code that makes use of custom instructions, even if only sprinkled in a few places, the emulator must support them.
If you have a cpu with those extra instructions that is otherwise backwards compatible, you can run code that doesn't make use of such instructions just fine (of course, you won't benefit from the functionality/performance offered by those new instructions)
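This is also why well-behaved software probes for extensions at runtime rather than assuming them. A guarded sketch of what those feature flags look like (Linux-specific path; other OSes expose this differently):

```shell
# The kernel reports which optional instruction-set extensions the CPU offers.
# x86 lists them under "flags" (e.g. avx2, sse4_2); ARM under "Features" (e.g. asimd).
if [ -r /proc/cpuinfo ]; then
  grep -m1 -E '^(flags|Features)' /proc/cpuinfo
else
  echo "/proc/cpuinfo not available on this OS"
fi
```

An emulator has to implement every flag the emulated binary might test for and use; a backwards-compatible CPU only has to keep the old ones working.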
See https://arstechnica.com/gadgets/2020/03/project-sandcastle-b... on details about it. This work, especially as there's no exploiting security bugs required here, will benefit Linux on those Macs quite a lot.
Right, it sounds like Apple was very careful to provide an out here for enthusiasts by making it possible to sign your own kernel, something you can't do for iPhone/iPad/Watch. Seems like an astute appreciation of differences in their target market.
However, as they showed at WWDC, you can run Linux and other operating systems using Apple's hypervisor. And it will run faster on an M1 Mac than it does natively on comparable Intel hardware.
That is to say, am I supposed to compare based on price, power consumption, process node equivalent, or some other factor?
I ask because it's hard for me to understand the comparison otherwise.
People aren't laughing now, are they?
Obviously they don't have access to the shipping M1 devices yet, which are supposed to have the hardware support.
It is always a bit of a gamble to be on the bleeding edge with your production machine!
What they're showing now is pretty damn impressive. I look forward to the M2 or whatever variants that would be a more direct replacement for the 2019 rMBP I have now (6 cores, 32GB RAM).
But I'm happy to wait. If you don't have virtualization needs, and your use case is pretty straightforward, these are gonna be amazing machines. They're just not the power user machines -- I mean, except for those who insist on bleeding-edge living.
If it's about the dominance of x86, I think in about 15-20 years there will be no other architectures in widespread use except for ARM and maybe RISC-V.
x86 has lock in like that in spades.
What I think is much more interesting is what the software industry will do with all these random coprocessors we find in chips these days. They seem much less stable than an instruction set but amazing speed gains can be had there, so it's enticing. If software libraries can bring the dream of the HAL to reality, that would be pretty cool.
But from threads like these, it's safe to conclude we're not there just yet.
Now a $699 M1-based Mac mini is way faster than any Intel box in that price range, and many that cost much more.
The benchmarks that are starting to come out are just nuts, favoring the M1.
Of course not all of the pieces are in place; that takes a while. But those developers that are on top of their game released universal versions of their apps that take full advantage of the M1 SoC and all of its benefits: unified memory, 8 CPU and GPU cores, Neural Engine, etc.
It's day 1 of Big Sur being available and customers haven't gotten their M1 Macs yet, though they'll have them in a few days.
In a couple of months, once the dust has settled and developers have gotten their hands on shipping hardware, people's understanding of what performance is possible on consumer-level hardware will be changed for good.
There are Hollywood studios already planning to replace their high-end, pro Macs with M1 Mac minis because they'll be faster than what they have: https://appleinsider.com/articles/20/11/12/hollywood-thinks-...
x86 and Intel are making all their money running your k8s on AWS, GCP and other server platforms so you can consume on your locked down ARM device. That's where the big money and margins are.
Apple never really could get a foothold in that market.
ARM is coming for the datacenter too; this will not end well for Intel:

"Amazon Web Services launched its Graviton2 processors, which promise up to 40% better performance than comparable x86-based instances for 20% less. Graviton2, based on the Arm architecture, may have a big impact on cloud workloads, AWS' cost structure, and Arm in the data center."
Now people are at home again, and not quite so mobile. Commensurately, we are seeing a breath of life into a laptop/desktop market segment that, with few exceptions, has been marching to a steady drumbeat for at least the past six years.
True. The Mac had its best quarter ever, which should be a good setup for the transition to M1.
Won't it always cause issues? If you build locally you are packaging ARM binaries in zfs which won't run on x86... unless you somehow signal to docker that you want a different arch when building
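You can signal exactly that. A sketch using `docker buildx` (image tags are placeholders; guarded so it skips cleanly where Docker, buildx, or a Dockerfile isn't present):

```shell
# Build explicitly for another architecture from this machine. Cross-arch
# builds lean on the qemu binfmt handlers for any RUN steps in the Dockerfile.
if docker info >/dev/null 2>&1 && docker buildx version >/dev/null 2>&1 \
    && [ -f Dockerfile ]; then
  docker buildx build --platform linux/amd64 -t myapp:amd64 .

  # Or produce one multi-arch manifest, so either kind of host pulls the
  # variant that matches it:
  docker buildx build --platform linux/amd64,linux/arm64 -t myapp:multi .
else
  echo "docker buildx or a Dockerfile not available here; skipping"
fi
```

Without an explicit `--platform`, a local build defaults to the host's architecture, which is exactly the mismatch being described.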
Now (I guess always?) it's clear that Apple Silicon is not going to be a comfortable dev environment, at least not for some and not for a while. The JDK macOS/AArch64 port is still in development, and so is VS Code. Docker support is probably months away (looking at their roadmap, work hasn't even started), and when it arrives it's almost certain to be limited to ARM Linux images.
Still, hanging on to x86 on a Mac seems like a lost cause, and I wonder if I should just change my approach. Rather than getting a beefy MBP, get the cheaper Air with M1 and a powerful mini PC (NUC or similar) with native Linux. VS Code has a Remote Development (over SSH) feature -- has anyone used it? Can it be combined with Docker (on Linux) in a seamless setup where the Air runs VS Code and all development happens on the mini PC through remote coding? Is this setup going to work when VS Code macOS/AArch64 is out, or is there something else one needs to wait for?
End of last year, I switched jobs and was forced onto a Windows laptop. I had been using VS Code, so after fighting with WSL1 and Docker on the work system for months, I installed a dedicated console-only Linux VM (under VMware Workstation) within Windows and use VS Code's SSH remote option. It's pretty much seamless: the VM runs Docker "natively" within it, and the core part of VS Code runs within the VM itself.

There used to be a bit of confusion if you opened a new VS Code window, because you then had to use that to open a second SSH-connected window; last month they changed that so you can connect the current window. When you open a folder or file, it's all like browsing the "remote" VM's filesystem. VS Code can also now auto-detect when you're starting a program on a localhost port and creates an SSH tunnel from your desktop to the remote system (you can also set these up manually if it fails to detect one).

Sidebar: now that WSL2 is available, I could see migrating from my Linux VM under VMware to a WSL2 Linux VM under Hyper-V, but that's another level of effort for about the same end result.
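For anyone wanting to replicate this, the remote end needs nothing beyond sshd; VS Code's Remote-SSH extension reads standard OpenSSH config entries (the host alias, address, and port below are placeholders):

```
# ~/.ssh/config -- the Remote-SSH extension picks up entries like this
Host devvm
    HostName 192.168.1.50            # address of the Linux VM (placeholder)
    User dev
    ForwardAgent yes                 # let git on the remote use your local keys
    LocalForward 3000 localhost:3000 # manual tunnel if auto-detect misses one
```

Then in VS Code, run "Remote-SSH: Connect to Host..." and pick `devvm`; the server component installs itself on first connect.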
Just prior to the pandemic, I had set myself up a linux desktop and made that my primary system (relegating my MBP to secondary/couch use). My work laptop was on a stand to the right and my MBP was on a stand to the left. I would use vscode to remote into either the VM on my work laptop or develop personal stuff locally. I also setup VSCode on the MBP to remote into the desktop. During the pandemic with kids home, I had to migrate from my detached garage/home office to inside the house. So I rarely touch my desktop directly and do my (personal) dev work remotely on it from the MBP.
In the future, should it come time to replace my MBP, I don't think I'll buy another Apple. Since I can't BYOD it for my current job, and I don't need Outlook and PowerPoint for personal use, getting a hefty Linux laptop seems just fine. If they would let me BYOD an MBP on the corp VPN, I'd consider it.
I actually took this route, building a $300 personal server out of second-hand parts (except for the RAM and SSD) so I wouldn't waste too much money if it didn't work, and I try to use VS Code's remote development feature exclusively. It's working perfectly: my development workflow hasn't changed at all. I was expecting some friction, but there is almost none. I can even work outside my home network seamlessly using ZeroTier. Building a Ryzen system would be totally great for this setup.
This allows me to go from development on personal projects to gaming and vice-versa which is very satisfying.
Unless you are an ios dev, why??
I stay on MacOS because it works very, very, very well for what I want to do.
There are a large number of affirmative reasons I prefer Macs. Hardware build quality has traditionally been stellar, on par with the golden age of Thinkpads (which is one reason the keyboard thing was so jarring). The OS is immensely, profoundly stable. The built-in tools for things like mail, contacts, and calendars work very very well. The overall polish of the experience is unmatched.
AND I have access to a bash prompt, and a whole host of FOSS offerings for things I might want. (As I've moved away from actually writing code, this has mostly boiled down to emacs and a few other bits, but still; it's comforting.)
I also use an iPhone, an iPad, and have an Apple Watch. The seamless integration is really, really great. I'd be hard pressed to give that up.
Add to this the fact that there are material reasons I want to AVOID the other two players:
- Windows is a chaotic disaster in terms of consistency, stability, overall design, and general behavior over time. The degree of weird enmeshment of binaries required to install software more or less guarantees system bloat that just doesn't happen on Macs.
- Linux can be many things to many people, and if Apple hadn't moved to a unixy base at the turn of the century I'm sure I'd have ended up there. But as it is, I'm just not willing to tinker around to achieve a workable environment for me, or deal with the inevitable interoperability challenges that would come from living without the COTS tools I rely on, like Office.
I also love Homebrew; it strikes a good balance between something like apt and Windows-style installers. Toss all that together with the ecosystem Apple offers with its other products (being able to send texts from my laptop via iMessage was a revelation) and you have one hell of a value proposition. There's a lot of valid criticism of Apple, but I can pretty much guarantee that 99% of the actual devs who rail against them use an MBP for work.
People have to understand that these are not your bargain-bin windows laptops, these are premium products. You wouldn't compare a Ferrari to a Toyota Camry, nor would you treat them the same or expect the same level of "performance" across all use cases.
Only today we had 2-3 _major_ macOS issues on the HN front page. For example
Maintenance is dead easy on Linux. Setting THAT up, though, isn't.
Really? You think the same Macs that maintain silence by overheating have "stellar" build quality?
Seriously, go watch Louis Rossmann's channel. You'll soon see how terrible the build quality in Macbooks is. Just because it's encased in aluminium doesn't mean it's well made.
- They work a long time
- They tolerate travel and other "heavy use" scenarios much better than most other laptops
- I've only rarely had to resort to warranty support or repair, which is 100% not the case with even the higher-end Dell (e.g.) machines I've bought for organizations.
I actually like it. I get a sane shell with all of the basic *nix tools I need and I get the wide consumer-level support of the rest of the OS.
One basic thing I really enjoy is having extra monitors just work without wrestling with display settings. A small thing, sure, but it was a surprisingly persistent pain.
Games are mostly C++, based on my experience with C++ codebases I'd say there's about zero chance that a game could be (easily) recompiled and made to work on ARM.
And games won't be ported anyway because it's too much work for too little sales, even if a developer really wanted to make it work they'd need a $2000 new mac to be able to compile/test in the first place.
Ninja edit: I guess you're suggesting that some parts are hand-rolled x86 assembly, not just pure C++ that would be compiled for the cross-compiler's target architecture?
Performance? Who knows.
Plus another 100 or 200 in adapters. It's beyond me why they would make a laptop with zero USB and zero HDMI ports.
Total is $2000 when converted back in USD.
The 10% might still be to account for the instability of the GBP.
However it still won't give you any HDMI or audio or Ethernet output. If you try to get one more adapter, say HDMI to connect a projector, you won't be able to plug it because there was only one thunderbolt port in the first place.
£75 for basic connectivity https://www.apple.com/uk/shop/product/MUF82ZM/A/usb-c-digita...
£120 for dual display with dual USB. https://www.apple.com/uk/shop/product/HMX02ZM/A/caldigit-thu...
£230 for a dock https://www.apple.com/uk/shop/product/HMX12Z/A/caldigit-ts3-...
That said, it sounds like you're really not interested in this market and I recommend you look elsewhere for a solution.
Not that the transition has been seamless, but I’ve only had to fall back to my Macbook once in the past three months. There’s something about shaking up the routine, plus the forced simplicity and single-tasking has majorly helped my focus. I just wish Cloud9 or Codespaces worked better on an iPad...
Edit: This is my personal laptop fwiw. My work one is still Intel.
Is that worth $300, for a larger form factor?
From what we know right now - same processor, different thermal envelope.
Funny how far we've gotten from "It just works"
Obviously if you just make websites it doesn't matter but if I'm buying an ARM device I want to play with the bare metal (you can't even access performance counters in MacOS IIRC)
I don't like Apple devices myself but I understand that some people like the ecosystem as a whole.
However I don't understand the point of a Mac without MacOS. The hardware is pretty much garbage compared to any laptop of the same price range, let alone high-end laptops like Thinkpads.
(this is an honest question and I am not trying to troll or make fun of anyone, in case anyone doubts the tone of my post)
To answer your question, I don't see much reason to buy a MBP and run it without MacOS, because -- although the build quality is sensational and the MBP travels well -- I'd rather just buy a Lenovo with Windows 10 Pro from the factory, get their top-tier service and repair plan, and still come out cheaper than a new MBP. However, I could see myself converting an old MBP for use as a Windows machine.
For the same use cases Linux feels (and is) way more performant and responsive even on a less powerful hardware, and I'm not even talking about Docker.
The MBP keyboard is probably the worst on the market, and any lower-end laptop keyboard is more comfortable to use. Nowadays it's not even a complete keyboard, since it lacks many standard physical keys.
The number of ports is extremely limited and you cannot connect anything without an adapter.
The computer gets hot very easily, and when it does, the keyboard gets extremely hot as well. It is also very noisy compared to any other laptop.
The glossy screen may look cool in a shop, but it is very impractical and uncomfortable.
I do not have any of those problems with my X1, I even have the best keyboard I could hope for a laptop, along with actual features like the Trackpoint, physical camera shutter or the PrivacyGuard.
Also, many people like to work with dual boot devices. And no, virtualization is not the same.
The hardware is also not garbage. It may not be competitive, but it is well made.
1.5) I like the GPL, apple doesn't
2. I like apple's hardware in a vacuum. As much as the company leaves a bad taste in my mouth (there is a deeply pretentious streak to Apple's approach to their users), they make nice cases and such.
Websites in 2020 are not that simple. At my workplace, we use Docker extensively in our build pipelines.
I've bought a lot of Apple refurbs, they are like new in every respect.
Pretty misleading to include it in the demo if Docker for Mac still doesn't launch next week right?
The M1 hardware hasn't shipped yet. Presumably this criticism is only valid if the hardware has shipped, which it has not. If Apple did indeed have a lot of involvement porting Docker, my assumption is that they'll release that with the machines next week (November 17, 2020), as opposed to before the hardware is generally available.
I would have expected Apple to be far more proactive in working with Docker, Inc., especially considering they namedropped Docker specifically at the WWDC Apple Silicon announcement. Apple has supplied virtualization capable hardware to certain developers - Parallels certainly - so it's strange to see that Docker seems to have had to sit on their hands until now? I certainly would have expected a Docker release a lot closer to hardware availability.
The fact that a stable golang for Apple Silicon likely won't be out until February is not a great look either. That blocks Docker, and a lot of other things besides.
I mean, you should expect a lot of startup issues when moving to a new ISA, but it seems Apple could have done a lot more here.
(Additionally, it's terrible spyware, and sends ridiculous amounts of data back to Docker about your system, including network pcap logs(!).)
TBH, docker-on-mac is not that good. That's why I keep a minimal config and just use it to get some 'linux' from time to time.
What I use is `docker context` command or DOCKER_HOST=ssh://user@host pattern.
The env var is more convenient for 'ephemeral' hosts (it uses your own SSH key to connect to the host), and for static hosts (like staging machines) I use a context.
Plus it uses native SSH client, so it supports ProxyJump or JumpServer or Bastion Host, forwarding, session-reuse, YubiKeys & whatnot.
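Concretely, the two patterns look like this (host names are placeholders; guarded by an environment variable so nothing runs against a host you haven't pointed it at):

```shell
# Point REMOTE_DOCKER_HOST at a real box first, e.g. ssh://user@host.example.com
if command -v docker >/dev/null 2>&1 && [ -n "${REMOTE_DOCKER_HOST:-}" ]; then
  # Ephemeral host: per-command env var, nothing saved locally.
  DOCKER_HOST="$REMOTE_DOCKER_HOST" docker ps

  # Static host: save it as a named context and switch to it.
  docker context create staging --docker "host=$REMOTE_DOCKER_HOST"
  docker context use staging
  docker ps                 # now talks to the remote daemon over SSH
  docker context use default
else
  echo "install docker and set REMOTE_DOCKER_HOST to try this; skipping"
fi
```

Because the transport is plain OpenSSH, everything in your SSH config (jump hosts, key agents, multiplexing) applies unchanged.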
So I'd rather have just the client on my laptop, with one or two medium-sized hosts on DigitalOcean, AWS, or wherever. (Some providers even give you Docker hosts with SSH access, like civo.com -- disclaimer: I just started using them last weekend...)
PS: You cannot 'mount' your local folder on the remote host, but you can 'build' your local folder on the remote host (thanks to BuildKit / DOCKER_BUILDKIT=1).
Edit: I remembered that I've installed (compiled, actually) docker-cli on my Android phone using Termux.
You can't run the daemon at all, and the client gives some warnings at startup saying some things aren't right, but it works with those methods and variables...
$ go get -u -v github.com/docker/cli/cmd/docker
The mac version is not perfect but I've used it for years without major headaches. I regularly run all sorts of middleware, build tooling, etc. using it.
It creates lots of headaches with mounted volumes (local vs. staging, prod, etc.)
Apple explicitly said that the DTK does not support hypervisors, and that final consumer hardware would. People posting garbage reports like this to HN annoys the hell out of me.
Kindly check your attitude at the door.
... but for the current state where ARM-compatible images make up roughly 1% of Docker Hub, I don't even see the point of wanting to use Docker on M1.
AWS is in the process of moving all of their managed services, e.g. RDS and ElastiCache, over to their Graviton platform.
It reserves a huge chunk of disk and RAM. Running anything non-trivial (e.g. k3d and a few pods) led to a very noticeable slowdown. My fan was noisy all day.
Instead I purchased (for a very, very modest amount of money) a decommissioned rack server with dual Xeons (2x8 = 16C/32T) and 64GB of DDR3 RAM. It's on my local gigabit network and I docker-machine into it. The hardware is obviously older, but build times are in the same ballpark.
I'm holding out for the 16" ARM MacBook Pro but I'll probably pull the trigger when it's released.
Running it in a VM is bad enough. If they also had to run the VM in the Intel emulator (Rosetta 2), it could be unusable, performance-wise.
If they developed a macOS subsystem for Linux (like the WSL on Windows) and ran that on Rosetta 2, I can imagine it being faster.
Slightly off-topic: I just realized that they're actually calling it Apple Silicon. I find that name very cringey for some reason.
Does it ARM? https://news.ycombinator.com/item?id=25075458
"..Even that DTK hardware, which is running on an existing iPad chip that we don’t intend to put in a Mac in the future – it’s just there for the transition – the Mac runs awfully nice on that system. It’s not a basis on which to judge future Macs ... but it gives you a sense of what our silicon team can do when they’re not even trying – and they’re going to be trying." - Apple SVP
"The Apple A12Z Bionic is a 64-bit ARM-based system on a chip (SoC) designed by Apple Inc. The chip was unveiled on March 18, 2020, as part of a press release for the iPad Pro (2020), the first device to use it. Apple officials touted the chip as faster than most Windows laptops of the time."
It's just that the DTK said explicitly that 4K pages were not supported under Rosetta but would be by release -- that's what broke Chrome and apps that are just glorified copies of Chrome.
x86 emulation gives access to most available Docker images anyway -- and the performance of the new CPUs would seem to indicate that emulation may not be so terrible, at least for the short term.
QEMU would allow running x86 images on Apple Silicon but I don't know if the performance would be acceptable. You can't use Rosetta 2 with Linux no matter how much you might like to.
A full VM with x86 emulation is not supported though, and from the sounds of it will never be.
Apple would need to implement all the supervisor mode parts of x86, which massively increases complexity. It's a notably different product.
And it wouldn't even be virtualization; it's not exposing the real host hardware. No real x86 CPU exists to be exposed. It would just be an emulator pretending to be virtualization.
Fair point. On an infinite time scale...
https://news.ycombinator.com/item?id=25064841 This post with 8 votes? Did it come up anywhere that got actual discussion?