She was literally reverse-engineering Mali for Panfrost starting in her sophomore year.
> Where do people get this knowledge?
Time, hands-on experimentation, and focus.
Anecdotally, reverse engineering and low-level hacking felt more popular back in the 90s and early 2000s. Back then, there were fewer distractions to soak up the free time of young tech enthusiasts. Old IRC chatrooms feel like a trickle of distraction relative to the firehose of Twitter, Reddit, and the 24/7 news cycle.
A common complaint among the younger engineers I mentor is that they feel like they never have enough time. It's strange to watch a young, single person without kids and with a cushy 9-5 job (or often 11am to 5pm at modern flex/remote companies) complain about a lack of free time. Digging deeper, it's usually because they're so tapped into social media, news, video games, and other digital distractions that their free time evaporates before they even think about planning their days out.
It's amazing what one can accomplish by simply sitting down and focusing on a problem. You may not reach the levels of someone like Alyssa, but you'll get much farther than you might expect. And most importantly, you probably won't miss the media firehose.
In high school, I was lucky enough to not have to do laundry, feed myself, or handle a variety of other tasks and chores that come along with adulthood. I could also sleep like crap and make it up during school the next day. Not to mention that the time I spent at school wasn't spent programming, so when I got home it wasn't more of the same; it was exciting. I could sit at the computer programming from 3pm to 2am.
Funny how this works. In high school I had a ton of free time. In college, not a lot of free time and very stressed. Entering the workforce, again a lot of free time and pretty stress-free. I expect free time will disappear if children ever come into play.
That would very much depend on how you choose to live.
As for me, living in Thailand and being a father of one daughter, I need to earn about 1,500 USD per month for our monthly needs, which is easily achievable for me with less than a week of work. All extra money I earn above basic needs, I invest.
My current client sometimes has work for me 40 hours a week, sometimes only 40 hours a month. As such, I have plenty of time to dedicate to my own hobbies.
But this is only achievable because I left The Netherlands and decided to work remotely. And I do understand these kinds of choices are not possible for everyone (some people would find it very hard to live in another country, far away from family and friends, for example).
In our field and by making suitable choices it should be possible to have plenty of time for your own hobbies and projects.
Wrote about it here: https://link.medium.com/BAGPsTlChdb
It doesn't surprise me that some kids capitalize on it instead of just using it to watch Twitch in class.
YouTube didn't really take off until we were in middle school or early high school, so I couldn't imagine what internet addictions might be like in the future, seeing as quite a few toddlers today grow up watching it. I suppose it's not too different from television, however.
Cooking is generally taken care of by parents, which means ancillary things like grocery shopping are too. You don't even have to dedicate mental bandwidth to "what groceries do we need to buy?" There are so many tiny interruptions and so much bureaucratic overhead that keep the brain slightly stressed as an adult, and they simply don't exist as a child. It is marvelous.
Knowing what to work on, what trends to be aware of, where to present your stuff is something that a modern elite collegiate is intimately familiar with.
Other teens and young adults, not so much. I was one myself, and had friends who were too.
In the time before I started work and after finishing school I used to do so much open source stuff in the time that I now spend working.
There is also the possibility that they might be right. Before, slow communication and the long lag inherent in accessing data on remote systems meant change propagated at a trickle. Now, change can happen and propagate across the world at breakneck speed. Which means if one is to have an effect, one must be there/be aware.
This leaves precious little time for following the white rabbit. I think everybody could do with a little somber reflection on just the impact that rapid information propagation has had on the world.
Fast forward 20 years and browsers have major releases every few weeks, while one of the most stable frameworks available - Angular - has a major release every six months.
It was a wonderful time to learn about computing from first principles without the distractions that exist today.
But I think we undersell the abilities of college-aged adults in modern times. Watch some top violinists or gymnasts at that age, or just someone who had a couple of kids as a teen and held it together.
The young have unbounded ability; it is only wisdom and experience that they may lack.
Yes, they are for the Atari, just trying to make a point here.
So any smart kid in the age of the Internet, instead of having only what is available at the local library, is quite capable of building up the skills to achieve this kind of work at the high school level.
I guess anyone doing games was a prodigy back then.
I would imagine at that time there were also many fewer kids at a high-school level doing that kind of hacking, so it might not be totally wrong to say they were all prodigies.
The Ataris had a display processor called ANTIC that you supplied with a display list: a set of tokens that defined which ANTIC mode was next as the electron beam proceeded down the screen. ANTIC would halt the CPU and take over the bus in order to transfer data from main memory to use for display purposes, be it character or bitmap data.
You could choose from a variety of modes (15 in all, IIRC) and add horizontal or vertical smooth scrolling to each mode-line individually by toggling a bit in the display-list entry.
One further bit allowed you to set a DLI (display-list interrupt), where the CPU would be interrupted and run your code at the end of that specific display line on-screen. You had the flyback interval to effect something (not a lot, but you could change colour registers and character sets, and do a few register operations, basically).
So it was minimally programmable, and certainly not comparable to today's stuff - but it was groundbreaking in its day.
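If anyone wants a feel for it, here's a minimal sketch of a display list expressed as a C byte array (the real thing lived in Atari RAM and was usually written from assembly or BASIC; the mode numbers and flag bits below are from memory, so treat them as illustrative):

```c
#include <stdint.h>

#define BLANK8   0x70  /* 8 blank scan lines                            */
#define MODE2    0x02  /* ANTIC mode 2: 40-column text                  */
#define LMS      0x40  /* flag: next two bytes = screen memory address  */
#define DLI      0x80  /* flag: fire a display-list interrupt here      */
#define HSCROLL  0x10  /* flag: enable horizontal fine scrolling        */
#define JVB      0x41  /* jump back to the top and wait for vblank      */

static const uint8_t display_list[] = {
    BLANK8, BLANK8, BLANK8,     /* skip the overscan area                */
    MODE2 | LMS, 0x00, 0x40,    /* first text line, screen data at $4000 */
    MODE2, MODE2, MODE2,        /* plain text mode lines                 */
    MODE2 | HSCROLL,            /* this line alone can fine-scroll       */
    MODE2 | DLI,                /* run your code at the end of this line */
    JVB, 0x00, 0x20,            /* loop: display list itself at $2000    */
};
```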
I'm jealous of people who started programming as a kid.
Back in the 80s, lots of kids that age wrote computer games and often found novel ways to use the hardware: even though video controllers might not be fully documented, you still knew where their control registers were in memory and could just try altering the values in them.
Now, modern graphics hardware is more complex and controlled by things like command buffers and shaders, but you can apply similar ideas, and you’ve probably got easier tools for examining the state of your process. Write a program that does a simple thing and dump its state. Do a slightly different thing and compare the states. When you think you understand things either change your program and see if you’re correct or use a debugger to alter the state at runtime and see if you can change the visible result.
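The same differential approach can be sketched in a few lines of C. Here draw_triangle() and dump_gpu_state() are hypothetical stand-ins for whatever mechanism exposes the state on your platform (ptrace, a kernel debug interface, an IOKit call, etc.):

```c
#include <stdio.h>
#include <stdint.h>

#define STATE_SIZE 4096

/* Hypothetical hooks: do a simple drawing operation, then capture
 * whatever state region you are studying (command buffers, registers). */
extern void draw_triangle(float size);
extern void dump_gpu_state(uint8_t out[STATE_SIZE]);

/* Report every byte that changed between the two captures. */
static void diff_states(const uint8_t *a, const uint8_t *b)
{
    for (size_t i = 0; i < STATE_SIZE; i++)
        if (a[i] != b[i])
            printf("offset %#zx: %02x -> %02x\n", i,
                   (unsigned)a[i], (unsigned)b[i]);
}

int main(void)
{
    uint8_t before[STATE_SIZE], after[STATE_SIZE];

    draw_triangle(0.5f);          /* do a simple thing...             */
    dump_gpu_state(before);

    draw_triangle(0.6f);          /* ...then a slightly different one */
    dump_gpu_state(after);

    diff_states(before, after);   /* the changed bytes are your leads */
    return 0;
}
```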
It's why kids are able to devote so many hours to video games and other time-wasting stuff.
Only difference is this person managed due to luck / skill / timing to end up on a more productive path.
Time commitment is the big impediment for most open-source projects. Late adolescence / young adulthood is one of the few periods of life where you both are competent enough to do useful things and have enough free time to actually commit to that work.
and intelligence and education and intellectual curiosity and non-working time and energy/stamina ...
That Venn diagram ends up with a pretty small number of humans ...
Please could you link to some of the reverse engineering writing by you guys to which you're referring?
Of course back then, people used to document the components they put in computers...
I wish there were some "Dissecting the Apple M1 GPU for Dummies".
Kudos to Alyssa!
“Afterwards, we came to refer to certain types of accomplishments as “black triangles.” These are important accomplishments that take a lot of effort to achieve, but upon completion you don’t have much to show for it, only that more work can now proceed. It takes someone who really knows the guts of what you are doing to appreciate a black triangle.”
IMO, writing some Vulkan (it being a thin layer over what a modern GPU is "actually" doing) would be good to get the fundamentals down first, not least because you'll have the validation layers to yell at you for incorrect usage instead of getting mysterious GPU behavior.
Some of the Metal APIs are also a little intertwined with Swift and Objective-C.
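(For reference, "turning on the validation layers" is just a couple of fields at instance creation time. A minimal C sketch - the layer name VK_LAYER_KHRONOS_validation is real; the rest is generic boilerplate with error handling elided:)

```c
#include <vulkan/vulkan.h>

int main(void)
{
    const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo app = {
        .sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "hello-validation",
        .apiVersion       = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo info = {
        .sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo    = &app,
        .enabledLayerCount   = 1,
        .ppEnabledLayerNames = layers,
    };

    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS)
        return 1;  /* layer not installed, or no Vulkan driver present */

    /* From here on, incorrect API usage is reported by the layer
     * (via stderr or a debug callback) instead of silently producing
     * mysterious GPU behavior. */

    vkDestroyInstance(instance, NULL);
    return 0;
}
```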
My guess would be she bought the Mac specifically to tackle this project.
I'm going to upvote pretty much any reverse engineering monologue, especially one as accessible as this.
I didn't understand everything the author said, but I was able to get the gist of it. The GPU has a list of valid addresses, and it's stored in a C-style array with a terminator instead of a count-prefixed array in the Apple API style. The mismatch caused the author some confusion even after she discovered the existence of the buffer, due to her code failing.
It's one of the many details that has to be understood in order to write an open source driver for the hardware. The open source driver will be necessary for people who want to install Linux on this Apple hardware.
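Concretely, I think the mismatch is something like this (a toy illustration; the types and field names are invented, not Apple's actual structures):

```c
#include <stdint.h>

/* C-style: the list ends at a sentinel entry, no explicit count. */
static const uint64_t allowed_c_style[] = {
    0x1000, 0x2000, 0x3000,
    0,                        /* terminator */
};

/* Apple-API-style: an explicit count followed by the entries. */
struct allowed_counted {
    uint32_t count;
    uint64_t entries[3];
};
static const struct allowed_counted allowed_apple_style = {
    .count   = 3,
    .entries = { 0x1000, 0x2000, 0x3000 },
};

/* Code written for one layout walks off the end (or stops short) when
 * handed the other - hence the confusing failures. */
```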
If you don't know enough about CPUs to even read reviews that compare the CPU to other CPUs, then you either don't need the extra IPC of Zen 3 (and you won't notice when you use your laptop day to day) or you just... don't care.
If you care, get a 5600U/5800U or H line and it will never affect you. The laptops these come in should be priced accordingly.
I just can't even imagine what the gap is going to look like when Apple really refines this down.
It's 70% the weight at 70% the volume of the 16". What is the point of comparing the weight of a 13" and a 16" laptop?
What that means is laptop OEMs will have to limit TDP on CPUs - probably 15W or less. Given current Intel chips being very power hungry, these are likely NOT going to be great CPU performers.
The only competition in CPU space to M1 will be Ryzen 5000U chips in the 15-25W thermal envelope. They should be ~19% more powerful/efficient than Ryzen 4000U chips, but I would not expect M1 levels of cool or battery life yet.
Am I right to assume we can see benchmarks within the next few weeks?
So I think there's a good chance in the next week or so we'll start seeing benchmarks.
- where was this 1/2/10 years ago?
- how would this address the fundamental CPU performance gap?
- Intel has no competitive GPU offering, yet another glaring failure on their part
- why would OEMs go along with this when Ryzen is a better CPU and GPU, aside from Intel bribes and the usual marketplace branding momentum?
- will this actually get ports migrated to laptops faster? It was criminal how long it took for HDMI 2.0 to hit laptops.
I get Intel doesn't own the stack/vertical integration, but Intel could have devoted 1% of its revenue to a kickass Linux OS to keep Microsoft honest a long time ago and demonstrate its full hardware.
Even if only coders/techies used it like Macbooks are standard issue in the Bay, it would have been good insurance, leverage, or demoware.
Microsoft being a true monopoly might have struck fear into the timid souls of Intel executives that it would go headlong for AMD.
Or Google had this opportunity for years, and half-assed ChromeOS. Or AMD. Or Dell/HP/IBM who sold enough x86 to have money on the side.
I don't buy that it would have been hard. Look at what Apple did with OSX with such a paltry market share and way before the iPhone money train came. First consumer OSX release was 2001.
Sure, Apple had a massive advantage by buying NeXT's remnants and Jobs's familiarity with it and the people behind it, but remember that Apple's first choice was BeOS.
So anyone looking to push things could have got BeOS, or an army of Sun people as Sun killed off Solaris. The talent was out there.
Instead here we sit with Windows in a perpetual two-desktop tiled/old Frankenstein state, Linux DE balkanization and perpetual reinvention/rewriting from scratch, and OSX locked to Apple.
Yeah, of course it was possible for Intel to do more, but they're clearly the largest contributor to the Linux graphics stack anyway.
For me it's Linux with AMD both for CPU and GPU.
The M1 is great because it's low power with optimized performance, but on a desktop you can have well over 500W, and that's normal.
I don't see anyone else making mobile-only ARM chips for laptops, other than those trying to be like a Windows Chromebook. The software compatibility will be a nightmare.
At my last job, they issued us 2012-era Macbooks. I eventually got so frustrated with the performance that I went out and bought everyone on my team an 8th generation NUC. It was night and day. I couldn't believe how much faster everything was. The M1 is a similar revelation for people that have stayed inside the Mac ecosystem all these years.
Given the benchmarks I've seen I imagine the M1 would be a somewhat comparable experience, but using a desktop machine for software development for the first time since...2003(!) has really turned me off the laptop-as-default model that I'd been used to, and the slow but steady iOSification of MacOS has turned me off Macs generally. Once people are back to working in offices I'd just pair it with an iPad or Surface or something for meetings.
Yeah, the M1 Air cannot beat a triple 1440p 164Hz monitor setup with high-end desktop hardware, but it's still damn impressive. It has a slightly better 16:10 screen and more performance than any of the current X1 ThinkPads, and the Air that I bought is absolutely silent too.
Also, maybe in the US you have comparable prices on a ThinkPad or some Linux-friendly laptops, but where I live I bought the MacBook for 2/3 of the comparable ThinkPad price. I used to buy used ones, but they've become much more expensive due to high demand and much less air travel (less smuggling, I guess).
Right now, I am torn between a *FHD*, AMD X13, 16GB RAM/500GB SSD for about *1000€* and the Air 16/500 at 1400€. I hate my life for that decision right now...
If you wanted a fair comparison you'd wait to see what processor Apple puts in their Mac Pro in 2-3 years, and compare that to whatever is the Threadripper equivalent then.
Heck, if you're not squeezing the absolute maximum performance from a chip by overclocking (I am :P) you can run a 5950X on a regular tower cooler with the fan curve never coming close to 100% and it will still be incredibly fast.
Apple took the power reduction.
I'd expect Microsoft to make a run at this with their Surface line.
They've been trying to make ARM tablets for years (see Windows RT), and they just recently added the x86-64 compatibility layer so that they could be actually useful instead of "it's an iPad but worse".
Will it see any success with 3rd party developers probably not bothering to support ARM? Maybe for some people who spend most of their time in Edge, Mail, or Office. I have a hard time seeing it being as successful as Apple's change, since the messaging from Apple is "This is happening in 2 years, get on board or you'll be left behind" and the messaging from Microsoft is "Look, we made an ARM tablet, and will probably sell at least 10 of them."
I don't see an obvious player currently working on a "broad market" high performance ARM chip for the commodity desktop.
You only see that on gaming rigs and high end workstations. The typical office machine or ma & pa PC is either a laptop or a mini-PC with mobile parts.
1. A huge catalog of apps and games, old and new, that are mostly Windows and Linux, but all exclusively x86.
2. I don't want to get locked into an ecosystem with no variety. For example, if Dell's laptop offering has a keyboard I hate or a poor selection of ports, then I can switch to Lenovo or HP, or go for a gaming laptop with an Nvidia GPU and use it to train NNs, etc. The variety is amazing. Whereas if Apple's next machine has a flaw I dislike, then too frikin bad, I'll be stuck with whatever Apple bestows upon me.
The fact of the matter is GPU support on macOS is just not there in the same way it is on Windows. It is really hard to justify the price of Apple when you compare directly to Windows offerings spec for spec. And when you compare the sheer volume of software available for Windows to what's available for macOS, especially if you need a discrete GPU, there's really no comparison.
- Dutiful citizens of The Ecosystem
What they are no closer to biting off is gamers and hardware enthusiasts. Anyone who actually needs to open their PC for any purpose whatsoever.
I know Apple wants to be the glossy Eve to all the PC market's WALL-Es, but I will continue to shun them forcefully as long as they remain hell-bent on stifling the nerdy half of their users.
I assume the developer popularity is a result of the iPhone gold rush, which Apple exploited using exclusivity tactics. Therefore I consider it an abomination to see developers embrace the platform so thoroughly. iPhone development should feel shameful, and when there's no way to avoid it, it should be done on a dusty Mac Mini pulled from a cardboard box that was sitting in the closet ;)
That's not why most developers I know use macs. We use them because we got tired of spending a couple days a year (on average) unfucking our Windows machine after something goes wrong. When you're a developer, you're often installing, uninstalling and changing things... much more than the average personal or business user. That means there are a lot of opportunities for things to go wrong, and they do. Even most Google employees were using macs until they themselves started making high end Chromebooks; tens of thousands of google employees still use macs. Some users have moved on to linux, but most stay on mac because they want to spend time in the OS and not on the OS. I can appreciate both perspectives, there's no right answer for everyone.
I share your wish that the hardware was more serviceable but everything is a compromise at the end of the day and that's the compromise I'm willing to take in exchange for the other benefits.
Some complain about the price, but the high end macbook pros aren't even more expensive than windows workstation laptops. Our company actually saved several hundred dollars per machine when we switched from ThinkPads with the same specs. Not to mention, our IT support costs were cut almost in half with macs.
So, aside from gaming or some specific edge-case requirements, it's hard for me to justify owning a PC. That said, I have one of those edge-case requirements with one of my clients, so I have a ThinkPad just for them. But it stays in a drawer when I'm not working on that specific thing.
They definitely won the "general public use laptop" market (if you conveniently ignore the rest of the laptop and OSX, which are utter crap IMO), but it's important to understand that they really didn't invent anything; they just optimized. And optimizing something like this means you make the stuff that is most commonly used better, while reducing the functionality of the rest.
Compare the Asus ROG Zephyrus G14 laptop with the MacBook Pro. The G14 has the older Ryzen 9 4900HS chip, and while the single-core performance of the MBP is better, the multi-core is the same despite the 4900HS being a last-gen chip. The G14 gets about 11 hours of battery for regular use and the MBP gets about 16, but the G14 also has a discrete GPU that is superior for gaming to the integrated GPU of the Mac. Different configurations for different things.
Then, even ignoring companies' decisions to mix and match components, the reason you can buy any Windows-based laptop and install Linux on it with 99% of the functionality working right out of the box is the standardization of the CPU architecture. With the Apple M1, even though Rosetta is very well built, its support is not universal across all applications; some run poorly or not at all.
And while modern compilers are smart, they have a long way to go before true cross-architecture operability with all the performance enhancements. Just look at any graphical Linux distro, and the fact that all Android devices out there run Linux, yet there isn't a way to natively run a standard distro on them, short of chroot methods that don't take advantage of the hardware.
So in the end, if you are someone who just wants a laptop with good performance and great battery life, and you don't care about any particular software since most of your time will be spent on the web or in Apple ecosystem software, the M1 machines are definitely the right choice. However, if you want something like a dev machine with Linux where you can just git clone software and have it work without any issues, you are most likely going to be better off with the "Windows" laptops on traditional AMD/Intel chips for quite some time.
Apple probably has the best laptop out there as of today, but I don't think Apple's sales are actually impacted that much by their hardware performance: around 2012-2015 or so they had several years with a subpar mobile phone, on both the hardware and the software side, and it still sold very well. A few years later, they had the best phone on the market and… it didn't change their dynamic much: it still sells very well, as before. On the laptop market, they have been selling subpar laptops for a few years without much issue, and I guess it won't change much now that they have the best one.
Apple customers will get a much better deal for their bucks, which is good for Apple's long-term business, but I don't think it will drive that many people out of the Windows world just for that reason (especially with the migration/compatibility issues, which are even worse now than they were when running on Intel).
Also, many people outside of HN just don't listen to Apple's “revolutionary” announcements anymore; they have played that card too much, for no good reason most of the time, so people just stopped listening (even my friends who are actually Apple customers).
Which is where most people are, tbh, and I don't think that many Linux people would switch either.
People who are multi-platform in daily life, are much more likely to switch - and that's a rather small percentage of computer users (and of course very much over-represented here at HN).
> they have used that card too much
you can never have enough "magic" :-)
Intel's CEO change may save them, but they're definitely facing an existential threat they've failed to adapt to for years.
Amazon will move to ARM on servers and probably some decent design there, but that won't really reach the consumer hardware market (probably - though I suppose they could do something interesting in this space if they wanted to).
Windows faces issues with third party integration, OEMs, chip manufacturers and coordinating all of that. Nadella is smart and is mostly moving to Azure services and O365 on strategy - I think windows and the consumer market matter less.
Apple owns their entire stack and is well positioned to continue to expand the delta between their design/integration and everyone else continuing to flounder.
AMD isn't that much better positioned than Intel and doesn't have a solution for the coordination problem either. Nvidia may buy ARM, but that's only one piece of getting things to work well.
I'm long on Apple here, short on Intel and AMD.
We'll see what happens.
Oh, and the chassis of this fanless system literally remains cool to the touch while doing all this.
The improvements in raw compute power alone do not account for the incredible fluidity of this thing. macOS on the M1 now feels every bit as snappy and responsive as iPadOS on the iPad. I've never used a PC (or Mac) that has ever felt anywhere near this responsive. I can only chalk that up to software and hardware integration.
Unless Apple's competitors can integrate software and hardware to the same degree, I don't know how they'll get the same fluidity we see out of the M1. Microsoft really oughta take a look at developing their own PC CPUs, because they're probably the only player in the Windows space suited to integrate software and hardware to such a degree. Indeed, Microsoft is rumoured to be developing their own ARM-based CPUs for the Surface, so it just might happen.
I’m going to be the first person in the queue to grab their iMac offering.
Apple may own their stack, but there are a TON of use cases where that stack doesn't even form a blip on the radar of the people who purchase computer gear.
External GPUs will remain and I think Nvidia has an advantage in that niche currently.
The reason stack ownership matters is because it allows tight integration which leads to better chip design (and better performance/efficiency).
Windows has run on ARM for a while for example, but it sucks. The reason it sucks is complicated but largely has to do with bad incentives and coordination problems between multiple groups. Apple doesn't have this problem.
As Apple's RISC design performance improvements (paired with extremely low power requirements) become more and more obvious, x86 manufacturers will be left unable to compete. Cloud providers will move to ARM chipsets of their own design (see: https://aws.amazon.com/ec2/graviton/) and AMD/Intel will be on the path to extinction.
I'd argue Apple's M1 machines are already at this level and they're version 0 (if you haven't played with one you should).
This is an existential risk for Intel and AMD; they should have been preparing for it for the last decade. Instead, Intel doubled down on their old designs to maximize profit in the short term at the cost of extinction in the long term.
It's not an argument about individual consumer choice (though that will shift too), the entire market will move.
I don't see that. At least in corporate environment with bazillion legacy apps, x86 will be the king for the foreseeable future.
And frankly, I don't really see the pull of ARM/M1 anyway. I mean, I can get a laptop with an extremely competitive Ryzen for way cheaper than a MacBook with the M1. The only big advantage I see is the battery, but that's not very relevant for many use cases - most people buying laptops don't actually spend that much time on the go needing battery power. It's also questionable how transferable this is to the rest of the market without Apple's tight vertical integration.
> I'd argue Apple's M1 machines are already at this level and they're version 0
Where is this myth coming from? Apple's chips are now on version 15 or so.
> "And frankly I don't really see the pull of ARM/M1 anyway. I mean, I can get a laptop with extremely competitive Ryzen for way cheaper than MacBook with M1..."
Respectfully, I strongly disagree with this - to me it's equivalent to someone defending the keyboards on a Palm Treo. This is a major shift in capability, and we're just seeing the start of that curve where x86 is nearing the end.
“No wireless. Less space than a Nomad. Lame.”
Fair enough, it's just important to keep in mind that M1 is a result of decade(s) long progressive enhancement. M2 is going to be another incremental step in the series.
> to me it's equivalent to someone defending the keyboards on a palm treo. This is a major shift in capability ...
That's a completely unjustified comparison. iPhone brought a new way to interact with your phone. M1 brings ... better performance per watt? (something which is happening every year anyway)
What new capabilities does M1 bring? I'm trying to see them, but don't ...
People don't really remember, but a lot of people were really dismissive of the iPhone (and iPod) on launch. For the iPhone, the complaints were about cost, about lack of hardware keyboard, about fingerprints on the screen. People complained that it was less usable than existing phones for email, etc.
The M1 brings much better performance at much less power.
I think that's a big deal and is a massive lift for what applications can do. I also think x86 cannot compete now and things will only get a lot worse as Apple's chips get even better.
I do remember that. iPhone had its growing pains in the first year, and there was a fair criticism back then. But it was also clear that iPhone brings a completely new vision to the concept of a mobile phone.
M1 brings a nice performance at fairly low power, but that's just a quantitative difference. No new vision. Perf/watt improvements have been happening every single year since the first chips were manufactured.
> I also think x86 cannot compete now and things will only get a lot worse as Apple's chips get even better.
Why? Somehow Apple's chips will get better, but the competition will stand still? AMD is currently making great progress, and it finally looks like Intel is waking up from its lethargy as well.
I'd say the M1's improvements are a lot more than performance per watt. It has enabled a level of UI fluidity and general "snappiness" that I just haven't seen out of any Mac or PC before. The Mac Pro is clearly faster than any M1 Mac, but browsing the UI on the Mac Pro just feels slow and clunky in comparison to the M1.
I can only chalk that up to optimization between the silicon and the software, and I'm not sure that Apple's competitors will be able to replicate that.
Arguably this has been the case for the last ten years (comparing chips on iPhones to others).
I think x86 can't compete, CISC can't compete with RISC because of problems inherent to CISC (https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...)
It won't be for lack of trying - x86 will hold them back.
I suppose in theory they could recognize this e-risk, and throw themselves at coming up with a competitive RISC chip design while also somehow overcoming the integration disadvantages they face.
If they were smart enough to do this, they would have done it already.
I'd bet against them (and I am).
ARM64 is a good ISA, but not because it's RISC; some of the good parts are actually moving away from RISC-ness, like complex address operands.
OTOH, apple is doing some interesting things optimizing their software stack to the store [without release] reordering that ARM does. These sorts of things are where long term advantage lies. Nobody is ever ahead in the CPU wars by some insurmountable margin in strict hardware terms.
System performance is what counts. Apple has weakish support for games, for example, so any hardware advantage they have in a vacuum is moot in that domain.
Integrated system performance and total cost of ownership are what matters.
From that linked article:
> Why can’t Intel and AMD add more instruction decoders?
> This is where we finally see the revenge of RISC, and where the fact that the M1 Firestorm core has an ARM RISC architecture begins to matter.
> You see, an x86 instruction can be anywhere from 1–15 bytes long. RISC instructions have fixed length. Every ARM instruction is 4 bytes long. Why is that relevant in this case?
> Because splitting up a stream of bytes into instructions to feed into eight different decoders in parallel becomes trivial if every instruction has the same length.
> However, on an x86 CPU, the decoders have no clue where the next instruction starts. It has to actually analyze each instruction in order to see how long it is.
> The brute force way Intel and AMD deal with this is by simply attempting to decode instructions at every possible starting point. That means x86 chips have to deal with lots of wrong guesses and mistakes which has to be discarded. This creates such a convoluted and complicated decoder stage that it is really hard to add more decoders. But for Apple, it is trivial in comparison to keep adding more.
Maybe you and astrange don't consider fixed length instruction guarantees to be necessarily tied to 'RISC' vs. 'CISC', but that's just disputing definitions. It seems to be an important difference that they can't easily address.
Variable-length instructions are not a significant impediment in high-wattage CPUs (>5W?). The first byte of an instruction is enough to indicate how long the instruction is, and hardware can look at the stream in parallel. It's a minor penalty, with arguably a couple of benefits. The larger issue for CISC is that more instructions access memory in more ways, so decoding requires breaking those down into micro-ops that are more RISC-like, in order that the dependencies can get worked out.
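A toy model of the boundary-finding difference being argued about here, with insn_len() as a hypothetical stand-in for real x86 length decoding:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: return the length in bytes (1..15) of the x86
 * instruction starting at p. */
extern size_t insn_len(const uint8_t *p);

/* Fixed-length ISA: instruction i starts at base + 4*i, so all eight
 * decoders can be pointed at their instructions immediately. */
void boundaries_fixed(size_t starts[8], size_t base)
{
    for (int i = 0; i < 8; i++)
        starts[i] = base + 4 * (size_t)i;
}

/* Variable-length ISA: each start depends on the previous length, so
 * boundaries are found serially (or guessed speculatively, as the
 * quoted article describes). */
void boundaries_variable(size_t starts[8], const uint8_t *code, size_t base)
{
    size_t off = base;
    for (int i = 0; i < 8; i++) {
        starts[i] = off;
        off += insn_len(code + off);  /* serial dependency */
    }
}
```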
RISC already won where ISA matters -- like AVR and ARM thumb. You have a handful of them in a typical laptop plus like a hundred throughout your house and car, with some PIC thrown in for good measure. So it won. CISC is inferior. Where ISA matters it loses. Nobody actually advocates for CISC design because you're going to have to decode it into smaller ops anyway.
Also, variable-length instructions are not really a RISC vs CISC thing as much as a pre- vs post-1980 thing. Memory was so scarce in the 70s that wasting a few bits for simplicity's sake was anathema and would not be allowed.
System performance is a lot more than ISA as computers have become very complicated with many many I/Os. Think about why American automakers lost market share at the end of last century. Was it because their engineering was that bad? Maybe a bit. But really it was total system performance and cost of ownership that they got killed on, not any particular commitment to a solely inferior technical framework.
x86 has complicated variable instructions but the advantage is they're compressed - they fit in less memory. I would've expected this to still be good because cache size is so important, but ARM64 got rid of theirs and they know better than me, so apparently not. (They have other problems, like they're a security risk because attackers can jump into the middle of an instruction and create new programs…)
One thing you can do is have a cache after the decoder so you can issue recent instructions over again and let it do something else. That helps with loops at least.
Apple software is also important here. They do some things very much right. It will be interesting to run real benchmarks with x64 on the same node.
Having said all that, I love fanless quiet computers. In that segment Apple has been winning all along.
They could just run them as virtual desktop apps. Citrix, despite its warts, is quite popular for running old incompatible software in corporate environments.
To start... How would a city with tens of thousands of computers transition to ARM in the near future?
The apps that run 911 dispatch systems and critical infrastructure all over the world are all on x86 hardware. Millions if not billions of dollars in investment, training, and configuration. These are bespoke systems. The military-industrial complex runs on basically custom chips and x86. The federal government runs on x86. You think they are just going to say, "Whelp, looks like Apple won, let's quadruple the cost to integrate Apple silicon for our water systems and missile systems! They own the stack!"
Professional-grade engineering apps and manufacturing apps are just going to suddenly be rewritten for Apple hardware, because the M2 or M3 is sooooo fast? Price matters!!!! Choice matters!!!
This is solely about consumer choice right now. The cost is prohibitive for most consumers as well, as evidenced by the low market penetration of Apple computers to this day.
The growth markets will drive the price of ARM parts down and performance up. Meanwhile x86 will stagnate and become more and more expensive due to declining volumes. Eventually, yes, this will apply enough pressure even on niche applications like engineering apps to port to ARM. The military will likely be the last holdout.
"How would a city with tens of thousands of HDDs transition to SSDs in the near future?"
It happens over time as products move to compete.
Client-side machines matter less; the server side will transition to ARM because performance and power are better on RISC. The military-industrial complex relies on government cloud contracts with providers that will probably move to ARM on the server side.
It's not necessarily rewriting for Apple hardware, but people that care about future performance will have to move to similar RISC hardware to remain competitive.
The only way for Intel and AMD to thrive in this new world is to throw away their key asset: expertise in x86 arcana. They will not do this (see Innovator's Dilemma for reasons why). As a result they will face a slow decline and eventual death.
Apple has no Linux strategy. Nobody is working for Apple for free. People are working on their own time (or supporting the project with their own money) because they want to see this happen.
There is no skin in this for Apple either way.
What I don't understand is why people insist on criticizing a project which won't fundamentally affect them at all. Having Linux on Mac isn't going to hurt you and stands to benefit the community as a whole.
While the GPU port is unlikely to benefit others, it's very likely some of the other work will. Any improvements to the Broadcom drivers will be useful for the entire community. Improvements and optimizations to ARM64 support will likewise benefit the whole community.
Really tired of the mis-directed zealots who think they have the right to tell other people where to direct their time and energy.
macOS comes with a free built-in hypervisor for running things in VMs. That's how Linux is supported. (Although I guess not if you want to use the GPU.)
I was thinking about bare metal.
Apple could save some money by selling bare metal M1's without an OS installed, but then it might get into the hands of people who want a "cheaper Mac but your hacker friend can get it working" and it would damage their brand, so I see why they don't do it.
The idea about a "cheaper Mac" is weak, because you can already buy a cheap Mac. It's not going to be the latest gen, but take into account that Intel has not made big progress in the last couple of years, and the M1 is actually on the affordable side anyway.
Isn't it actually more damaging to their brand that they don't support uses of their products that would benefit professional users, and that they rely on people doing work for free, with Apple thus avoiding paying its fair share?
M1 is an ARM chip that's up there with Intel desktop PCs. That's awesome. It's possibly the real beginning of the end of the effective Wintel monopoly on personal computing and if we are going to continue to have Linux on hardware that's not locked-down phones it needs to happen. I'd certainly put my effort there if I had the skill.
> Isn't actually more damaging to their brand that they don't support their products that will benefit professional users and that they rely on people doing work for free and thus Apple is avoiding paying fair share?
Apple has $100 billion in cash. Whatever they are doing now, is working.
> Whatever they are doing now, is working.
If you use child labour, avoid taxes, use anti-competitive measures, make stuff deliberately difficult to repair and easy to break, and then have the money to shut up any politicians willing to look into your shady business, then yes, it is definitely working.
The M1 is slower in single threaded performance than any AMD Zen 3 CPU at 4.8 GHz or higher clock frequency and also slower than any Intel Tiger Lake or Rocket Lake CPU at 5.0 GHz or higher clock frequency.
For example, the maximum recorded single-thread Geekbench 5 score for the M1 is 1753, while Ryzen 9 59xx parts score above 1800, up to 1876. The first Rocket Lake result at 5.3 GHz is 1896, and it is expected to score better than that, above 1900.
In benchmarks that unlike GB5 focus more on computational performance in optimized programs, Intel/AMD have an even greater advantage in ST performance than above. (For example, in gmpbench even the old Ryzen 7 3700X beats Apple M1, even when using on Apple a libgmp that was tuned specifically for M1.)
What is true is that Apple M1 is much faster (on average) in ST than any non-overclocked CPU launched before mid 2020.
I hope the M1 will bring a change to computing in general.