I think that UTM is the Apple way to gain kernel access for other OSes.
Mach was originally designed to host multiple OSes simultaneously. I think UTM is meant to develop those capabilities.
A lot of dev tooling still expects x86_64. An example off the top of my head: Cosmopolitan will not build ARM binaries and will not compile on ARM. (But it WILL build x86_64 universal binaries that will run on Apple Silicon and macOS via Rosetta.)
There is also the issue of wanting to have your dev environment be as close to your prod environment as possible, and the vast majority of cloud-based hosting is still x86_64.
Then you're emulating everything that runs in the VM, as opposed to using an ARM VM and only emulating the x86 program you want to run. This makes things a lot slower.
I don't know how Rosetta 2 for Linux is implemented in Virtualization.framework, but it exposes it via a mountpoint in the (ARM) Linux VM. The binfmt executable that is exposed essentially does what Rosetta 2 does, but for a Linux binary.
I think the binfmt executable can be used outside of Virtualization.framework, and could even be used in an ARM VM under, say, Asahi. People don't, partly because it's not easy to do, and partly because it's an honour system: it's not licensed for use outside of macOS.
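As a sketch of how that wiring looks from inside the ARM Linux guest (the share tag `rosetta` and the mountpoint are the conventional names from Apple's Linux-VM setup; treat the exact magic/mask string as an approximation of the documented one):

```shell
# Inside the ARM Linux guest: mount the Rosetta directory share that
# Virtualization.framework exposes over virtiofs.
mkdir -p /media/rosetta
mount -t virtiofs rosetta /media/rosetta

# Register the Rosetta binary as the binfmt_misc interpreter for x86-64
# ELF executables. From then on, only x86-64 programs get translated;
# the rest of the VM runs natively on ARM.
echo ':rosetta:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/media/rosetta/rosetta:F' \
  > /proc/sys/fs/binfmt_misc/register
```

The `M` type means the kernel matches on the ELF magic bytes (with the mask applied), and the `F` flag makes the kernel open the interpreter at registration time so it still works inside containers.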
Fair enough, but (speaking as a web developer), running a stack locally on Apple Silicon (especially if it has any "interesting" dependencies such as compiled Rust/C libs and whatnot) and expecting the same stack to run identically in the cloud on x64 can still expose differential behavior.
At the very least, I could test mock deploys from the Apple Silicon side to the x64 VM running locally (over loopback) as an extra data point
I don't actually use it for this use case, but now that I'm thinking about it, I might try, because this seems useful.
There is definitely value to be had in using systems that force you to write high-performing code. In the web space, all too often we sit down at high-powered Macs to write code that will be run by low-end Android devices, and fail to appreciate the consequences.
I'm always wondering about thoughts like this. Although there is likely a humongous number of low-end devices, how much potential return would you get from this group as a whole? This is a value judgment, and I know many people have low-end devices for reasons other than monetary, so completely ignoring quantity may not be viable for certain investments; it's just an interesting question. It might be similar to the debate about the cost of handling credit cards vs cash: cash has its own set of costs that are usually neglected. You could easily get more return by adding features than by spending any time optimizing.
> how much potential return would you get from this group?
This reads a bit like "these poor sods can't afford to pay for my app, so why bother", at least to me.
However even if a certain device owner subgroup doesn't represent a potential revenue stream, you can as a developer still profit from also (or even primarily) targeting their devices.
Apps that will run on lower-powered devices will almost necessarily be leaner, and as a side benefit will have less complexity, fewer dependencies, and, ultimately, less technical debt for you as the developer to manage.
Or even just devices in high-latency environments. This gets missed in development so often. My experience is that 75% of apps cannot handle it and fail to work in all kinds of ~~interesting~~ frustrating ways.
I think they mean them, and the obviously-high percentage of developers that use Macs compared to the general population of users. Which, ironically, skews the percentage of Windows users lower than most other demographics.
In fact, how dare you forget the Linux users when writing your comment?!
Faced with the option to pay higher costs for incredibly good hardware, or to run any of the many *nix distros, you chose to have meh hardware, and a restrictive OS that is built as a GUI first, terminal a distant second.
Bash is bash, zsh is zsh; how are they different? coreutils differs, sure, and I dislike BSD’s implementation of them, but that’s why gnu-coreutils exists.
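A concrete example of that coreutils divergence is in-place `sed` (hypothetical file path; the GNU form is shown executed, the BSD form only in a comment since it fails under GNU sed):

```shell
# GNU and BSD sed disagree on the in-place editing flag.
printf 'foo\n' > /tmp/sed_demo.txt

# GNU sed (Linux, or Homebrew's gnu-sed as gsed): -i takes an optional
# suffix glued to the flag, so a bare -i works.
sed -i 's/foo/bar/' /tmp/sed_demo.txt

# BSD sed (stock macOS) requires the backup suffix as a separate
# argument, even when empty:
#   sed -i '' 's/foo/bar/' /tmp/sed_demo.txt

cat /tmp/sed_demo.txt    # bar
```

(Homebrew's `coreutils` formula installs the GNU tools with a `g` prefix, e.g. `gls`, `gdate`; `sed` itself lives in the separate `gnu-sed` formula.)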
Now that I see it in real life I don't know how I feel about it. It doesn't feel safe when I see a Twizy, but when I see these cars in my mind I see them on Swedish bicycle roads.
The whole thing would probably require a total transformation of city travel.
The regulatory regime will take a minute to figure out, but with tiny vehicles like this + good transit + closing streets to regular big cars, we'll figure it out.
The basic ones (and things like the Citroën Ami) are more A-traktor than bicycle – there’s an A-traktor registered Ami in the next village along from me – but that’s typically a software limit. The Twizy could be bought in an 80km/h variant, and there are “remaps” that will take that version up to 110km/h.[1] I’ve seen them doing near that on riksvägar here.
The best way to steer clear of this insanity is to divest from the US as much as possible / reasonable. It won't help with second-order effects, but it reduces your exposure.
I wouldn't go that far (there's still the possibility that things change for the better for whatever reason), but those 60%-70% of the US in global stock indices really do look like single country risk now.
Not the same thing as this article, but I was impressed with the Chinese trend of “fixing” 2015-era MacBook Pros with broken screens by deleting the screen and installing a blanking plate, leaving the machine as a sort of C64/Amiga/ST/Acorn style keyboard-and-CPU single unit that could be plugged into an HDMI screen. https://ioshacker.com/news/people-in-china-are-using-macbook...
I’ve personally always liked firebreak sprints. Every so often (3-4 times a year), in between other large pieces of work, take a week and give the developers free rein to individually fix the things they think are most important but never seem to get prioritised.
Yes, it speaks to a disconnect between product and engineering, but working on that relationship at the same time doesn’t mean that both aren’t worth doing.
The Shape Up model (originally from Basecamp) builds in a 2-week "cool down" period after every 6-week "build new stuff" period. We further designate one of those weeks as a "bug blitz." That 1-2 weeks in every 8 as a cadence really helps encourage fixes, not just new features.