Maursault's comments

Why does Intel need Arm for collaboration, and what does Arm get out of optimizing "Arm’s IP for Intel’s upcoming 18A process technology?" Why doesn't Intel license Arm designs like anyone else, and optimize it themselves? I also don't quite get why Intel doesn't just design their own competing high efficiency architecture, after abandoning x86 and backwards compatibility, of course, something they should have done two decades ago at least.


This is IFS, Intel Foundry Services. They are a fab that will make parts for anyone who pays them. This isn't Intel the x86 vendor; that is a different business unit.


> I also don't quite get why Intel doesn't just design their own competing high efficiency architecture, after abandoning x86 and backwards compatibility, of course, something they should have done two decades ago at least.

They've tried, at least twice. Itanium and Atom come to mind. It turns out, it's not as easy as it sounds, even back when Intel was near the top of its game.


Wasn't Atom still x86 though? Itanium was definitely its own bizarre beast, but Atom was closer to what would be an efficiency core today.


Atom cores are x86, and aren't just close; they're nearly identical. The Efficiency cores use the Gracemont microarchitecture, which is the fourth-generation Atom, built on Intel 7.


My understanding was that Atom was x86 instruction set (or whatever Intel calls amd64?), but its own arch. I very easily could be wrong about that though.


Itanium was supposed to be powerful, not efficient, and it originated at HP. Atom was x86. If Intel designed something new from the ground up with high efficiency specifications, I don't think it could be too terrible, and I think it would advance SotA to have real competition with ARM designs. The i860 may be Intel's last innovative chip design solely designed in house. Every advance in x86 is just another ugly monstrosity.


Intel bought the StrongARM line of ARM CPUs from DEC (used in some PDAs of the late 90s) and rebranded it XScale. It had some minor successes, but no huge design wins. Intel sold it to Marvell in 2006, right before the iPhone was released and smartphones exploded the market for ARM CPUs.


Intel is already a long-term Arm licensee[1], meaning they can build and optimise Arm designs for their own use. I'd expect the terms of an Arm IP license restrict them from sublicensing any Intel-optimised Arm designs to foundry customers. If I understand correctly, this deal is about Intel Foundry in future being able to offer implementations of optimised Arm designs to its customers.

https://wccftech.com/intel-manufacture-arm-chips/


Even if Intel designs something similar to ARM, legacy wins. It's not replacing the mobile OS ecosystem any time soon, just like ARM isn't replacing the Windows ecosystem soon.

As others have said, Intel failed. There were Intel phones back in the day. Maybe they should have kept going even if it lost money. Maybe not. Who knows.

Not to mention Apple isn't moving regardless, since they were one of the founders of ARM.


> legacy wins

20 years ago, probably. I don't understand why it is still the case today that legacy is important. Who is still running very old, 25yo software, and just how are they wagging the dog?


SF's subway system is still using floppy disks. Being on HN might make it feel like everyone is on Rust or Rails or something else, but sadly, in the real world, under the hood there's lots and lots of legacy - yes, important ones too.


> I also don't quite get why Intel doesn't just design their own competing high efficiency architecture, after abandoning x86 and backwards compatibility, of course, something they should have done two decades ago at least.

Intel structurally sucks. That’s the simple answer.


It's just such a waste of time, resources and talent, when it would be vastly better, and presumably easier, to stop the extinction of the vaquita, the black rhinoceros and other subspecies, the saola, tiger species, leopard species, elephants and their sub-species, gorillas and their subspecies, the orangutan, and hundreds of species of fish, frogs and insects. We don't need fake mammoths. Save the vaquita. And save the Amazon Rainforest, and whatever is left of every other forest.


> KDE is so great.

Funny, back in 2006ish, KDE development was stalled, and it was a janky, inconsistent mess. Everyone hated it and used Gnome, which had a simpler design and was far more stable and smooth.


KDE3 languished and lost a lot of users because they were so focused on the KDE4 migration. Then KDE4 came out and it was resource heavy and "overly aesthetic" so even more people jumped ship.

Meanwhile, GNOME did the exact same thing with GNOME 3, even harder, but people had it in their minds that the community would fix it or that KDE just had to be worse.

KDE kept trucking along: optimizing things, cleaning up and addressing complaints, not ballooning resource usage, etc. So now they're the refined and lighter option, while GNOME is overly opinionated and resource hungry. The only complaint I hear from non-users today is that it feels old/last gen. But that's the appeal to me: I don't want "new gen" if it means touch-oriented interfaces and dictated themes with overly simplistic applications.


Yes it was. It was in fact the reason I initially tried Gnome and didn't even consider KDE when I moved away from Mac.

After spending ages finding extensions to make everything work the way I wanted to, an upgrade appeared and some of them broke. I then tried KDE Neon as a live image and I was like wow, this is nice right away. And what's better, I can really make it my own, configure it the way I want to, without having to install any extensions. That feels so welcoming after using Mac for more than a decade.

However, yes, in the early '00s it was an inconsistent mess in terms of UX. Very similar to early Android, in fact. It was why I moved to Mac back then.


A lot can change in that amount of time, and apparently has. Also, as you're very likely aware, KDE runs on macOS, though I don't know why anyone would want to do that. It certainly wouldn't draw anyone back from FreeBSD. The best and worst parts of macOS are BSD — best because there is a BSD userland, and worst because it's inexplicably outdated.


> Just yesterday I started using a new maxed out Mac mini and everything about it is snappy.

Really?! I didn't think anyone here would fall for that.

      Mac Mini 12-core M2, 19-core GPU, 32GB, 10Gbit, 8TB storage? $4500

      Mac Studio 20-core M1, 48-core GPU, 64GB, 10Gbit, 1TB storage is $4000. 128GB of RAM is $800 more
but either Studio RAM configuration obviously spanks the M2 mini. It's sacrificing Apple's expensive storage, but with Thunderbolt it's pretty academic to find 8TB or more of NVMe storage, probably 32TB of NVMe RAID[1], for less than Apple's $2200 charge above the cost of the 1TB configuration.

[1] https://eshop.macsales.com/shop/express-4m2


I specced the smallest SSD. I use network homes. The mini is a stopgap waiting for the Pro. Drive size I don't really consider a performance item anymore.

I spent just over $2,000.

Mac mini with the following configuration: Apple M2 Pro with 12‑core CPU, 19-core GPU, 16‑core Neural Engine; 32GB unified memory; 512GB SSD storage; four Thunderbolt 4 ports, HDMI port, two USB‑A ports, headphone jack; 10 Gigabit Ethernet.

I'm satisfied.


Not awful, but for $2K you could have had a 16-core CPU, 20-core GPU, 32-core Neural Engine, 48GB unified memory, 512GB SSD storage, four Thunderbolt 4 ports, two HDMI ports, four USB-A ports, two headphone jacks, and two Gigabit Ethernet ports: that is, two base M2 minis with 24GB RAM each.


Yes. I wanted the 10Gb Ethernet. My purchasing question is: when is the right time to buy a great monitor? In the CRT days the monitor lasted the longest, and buying the best one I could afford worked for me.


I just went back to compare the Mini with the Studio again. Despite your advice I would buy the Mini again for these reasons:

I'm on a newer generation chip that has a lower power draw. Meets my network speed minimum. All for the price of the entry level Studio. This box is basically an experiment to see how much processing power I need. I have a very specific project that will require the benchmarking of Apple's machine learning frameworks. I want to see how much of a machine learning load this Mini can handle. Once I have benchmarks maybe the Pro will exist and I will be in good shape to shop and understand what I'm buying.

I think a Mini of any spec is a great value. The studio has a place but I'm hoping the Pro ends up being like an old Sun E450.

This Mini experiment is to help me frame the hardware power vs. the software loads.


> I think a Mini of any spec is a great value.

My second suggestion, for the 16-core option, was also M2. It's $100 less with 1Gb Ethernet, and with 10Gb it would be $100 more than you paid. I.e., two of the 8-core M2 Minis with 24GB RAM each would do about twice as much work as the high-end M2 Pro Mini alone, sometimes less than twice the work, sometimes more. The same is true of two M1 Max Studios vs one M1 Ultra Studio for the same price. Two less powerful machines spank one more powerful machine every single time, and one M1 Ultra Studio is definitely NOT worth two M1 Max Studios, same as one 12-core M2 Pro Mini is definitely NOT worth two 8-core M2 Minis.
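If anyone wants to sanity-check that value math, here is a minimal back-of-the-envelope sketch in Python. The prices, and the assumption that aggregate throughput scales roughly with core count, are taken from the figures quoted in this thread and are hypothetical, not checked against Apple's configurator:

    # Back-of-the-envelope comparison: one 12-core M2 Pro mini vs. two base
    # 8-core M2 minis with 24GB RAM each. Prices are assumptions from this
    # thread (~$2,000 for the Pro config, ~$950 per upgraded base mini).
    configs = {
        "1x mini M2 Pro (12-core, 32GB)": {"price": 2000, "cores": 12, "ram_gb": 32},
        "2x mini M2 (8-core, 24GB each)": {"price": 2 * 950, "cores": 2 * 8, "ram_gb": 2 * 24},
    }
    for name, c in configs.items():
        print(f"{name}: ${c['price']}, {c['cores']} cores, {c['ram_gb']}GB RAM, "
              f"${c['price'] / c['cores']:.0f}/core")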

Everyone is drawn to "the best," and that's where Apple fleeces its customers and makes its money. Pretty consistently, forever, the best buys from Apple have never been the high-end configurations. We may feel secure in what our choices were, doubling down on affirming them, but we definitely pay for it.


I don't see a 16-core M2 or any Studios with an M2. I was drawn to the latest chip Apple has produced. They put that chip in a small headless form factor. I shopped for a Macintosh computer and judged whether I wanted the motherboard bandwidth of the Mac Studio or the latest chip in the Mac mini. I'm sorry I disappointed you. I have retroactively looked over everything you have said and doubt I would do it differently. If this machine turns out to be such a dog, I can get another one to pair it with, as you have suggested I do with the 8-core. Finally, are you speaking from first-hand experience or from benchmarks?

I think the disconnect is that you are trying to get as much processing power as possible and I'm trying to understand how much processing power currently exists.


> I remember some PC vendors selling Alpha based workstations running NT in the mid- to late 1990s.

AGFA sold Alphas (DEC-branded) running NT4.1 in the mid-1990s along with their Apogee film imaging equipment, to run their Apogee RIP software (which may have been G4-TIFF based at the time). I worked at a commercial printer as a prepress operator in 1996, and the Alpha/NT AGFA RIP just stayed on until 1998, or at least that's when I moved on. The Mac workstations saw the RIP as an ordinary printer. There was never any reason to touch the RIP; it just did its thing and shot to the imagesetter. What I saw later was RAMpage equipment, from an AGFA competitor, which was 100% PDF and separated the RIP from the shooter into two machines, both RAMpage-branded PCs.


There was never an "NT 4.1".

Do you mean 3.1, the 1st release?


SP1


That is not what anyone ever calls NT releases.

Multiple point releases of NT have received multiple service packs. SPs are absolutely not equivalent to or interchangeable with point releases.

For reference, these are the version numbers of NT:

3.1 -- 3 SPs

3.5 -- 3 SPs

3.51 -- 5 SPs

4 -- 6 SPs

5 (Branded "Windows 2000") -- 4 SPs

5.1 (Branded "Windows XP", "Server 2003") -- 4 SPs

6 (Branded "Windows Vista", "Server 2008") -- 2 SPs

6.1 (Branded "Windows 7", "Server 2008 R2") -- 1 SP

6.2 (Branded "Windows 8", "Server 2012")

6.3 (Branded "Windows 8.1" "Server 2012 R2")

10 (Branded "Windows 10", "Server 2016", "Server 2019", "Server 2022"); 10 numbered build releases issued, then 4 datecoded ones; 14 in all.

11 (Branded "Windows 11"); 2 datecoded releases so far.

In other words, 7 versions of NT with a minor-version point-release so far, and in total 16 service packs for those point-releases which did not change the major or minor version number.


> 5.1 (Branded "Windows XP", "Server 2003") -- 4 SPs

5.1 was XP only, and it had 3 SPs. Windows Server 2003 was 5.2, and it had 2 SPs.


I sit corrected. :-)


Thanks!


> Life began in that alien environment, and at some point between 3.2 and 2.8 billion years ago, cyanobacteria began to use sunlight to split hydrogen from water, discarding oxygen as waste.

David Attenborough brilliantly explains the beginning of life and the role of cyanobacteria in the first episode of Life on Earth (1979), "The Infinite Variety,"[1] (in less than 20 minutes; doesn't need to be that long, but I wanted to catch him riding the mule, sooooo... strap in and enjoy).

[1] https://www.dailymotion.com/video/x2i43qc?start=673


I don't think the race is over yet, but it is really entirely up to developers, not Apple. Apple Silicon isn't quite 3 years old yet, and it is hard to say right now what the gaming landscape will look like in another five years. Technically, however, though it is somewhat of a cheat, as soon as Apple released the first M1 Mac, the new platform instantly had nearly an order of magnitude more games than PC. While there are something like 50K games available through Steam, and x86 Macs only had like 7K, as soon as the M1 was released, early Apple Silicon Mac users had access to something like 450K App Store games from iOS and iPadOS. Plus, Steam was ported to macOS, and Rosetta 2 allows many x86 PC games to run adequately under emulation (virtualization plus emulation). This may not be satisfactory for hardcore gamers that need massive fps, but I don't think it can be honestly claimed anymore that Macs suck for gaming. At the very least, they're competitive, and that's only going to get better, though how much better is, again, up to developers.



> Local LLMs do best on big GPUs, which aren’t an option on Macs. What Apple has done with the M-series silicon is really impressive, but it’s not as fast as big, dedicated GPU silicon.

It'd be pretty silly for Apple to cram in a big, dedicated GPU when 99.9% of their customers don't care and don't want to pay for it, especially considering that anyone who does want a big, dedicated GPU can outboard as much GPU as they want.[1] And many seem to think that the onboard GPU along with the Neural Engine should be adequate for local LLMs.

[1] https://support.apple.com/en-us/HT208544


Only on Intel Macs, which are going away soon (only the 2019 Mac Pro remains, outside of refurbished units).

Apple Silicon doesn't currently have any provision for external graphics cards.


There are two issues.

     1) Apple Silicon won't support PCIe GPUs. 

     2) Apple doesn't want it to.
#2 means Apple is taking nVidia and AMD head-on in the GPU space. Apple wants to control everything, and allowing these competitors on their platform is giving away too much. Because Apple Silicon scales better than competitors' hardware, the desire for third-party GPUs is probably going to evaporate within a few generations of Apple Silicon. I mean, we'll see, but that is my best guess, because it seems like that was an intentional decision rather than an oversight.


AS supports Thunderbolt just fine. Isn't the lack of eGPU support due to lack of ARM drivers for them?


To suggest it is merely a lack of drivers is an oversimplification. There is a chip erratum that prevents PCIe GPUs from working properly on Apple Silicon: the architectures are not compatible. Apple Silicon GPU drivers are deeply integrated into the system. Due to this integration, only graphics cards that use the same GPU architecture as Apple Silicon could be supported, and there just aren't any, and I don't see how there could be unless Apple developed one and released it.


> To suggest it is merely lack of drivers is an oversimplification.

Not really. When you hook up an external PCIe chassis with a graphics card inserted, it sees the PCIe expansion slot and the GPU just fine; it just doesn't have a driver for the GPU.


Not really. Software drivers alone will not get it done, at least not adequately (Asahi-related developers may come up with a software solution for Asahi, but it will necessarily degrade performance, so the effort is likely to be abandoned).

Thunderbolt supports PCIe, and for most devices Apple Silicon does also, but a GPU is different enough from audio interfaces and NVMe that it isn't just a "load the driver and plug it in" situation. A GPU is vastly more complex than other PCIe devices. Apple Silicon and x86 architectures are not compatible, so a GPU designed for x86 is not going to work on Apple Silicon with merely a software driver.

It's going to take hardware translation and other technology that is not yet available for Apple Silicon, thus Apple's recent patent applications,[1] which show that Apple is either exploring support for outboard GPUs or locking anyone else out of their method of doing so. Either way, there is no guarantee they'll complete development or release anything, because it seems just as likely, if not more likely, that the roadmap for Apple Silicon GPU performance will outpace nVidia and AMD GPUs.

But, again, claiming, "it just doesn't have a driver for the GPU" is a staggering oversimplification.

[1] https://image-ppubs.uspto.gov/dirsearch-public/print/downloa...


It'll be interesting to see what happens with the Mac Pro.

It could be that Apple does away with it entirely and the Mac Studio is the new Pro.

Or they might make a machine with PCIe support, but make it so expensive that only people with a serious need get access to it.

Or something else.


> It could be that Apple does away with it entirely and the Mac Studio is the new Pro.

Or the Mac Pro will be released without PCIe GPU support, and Apple will be able to leverage increases in Apple Silicon GPU performance to eliminate any need or desire for PCIe GPU, drawing away high end GPU customers from nVidia and AMD and locking them into Apple Silicon and the Apple ecosystem.


> Or the Mac Pro will be released without PCIe GPU support, and Apple will be able to leverage increases in Apple Silicon GPU performance to eliminate any need or desire for PCIe GPU, drawing away high end GPU customers from nVidia and AMD and locking them into Apple Silicon and the Apple ecosystem.

Maybe. But graphics cards aren't the only thing people put into PCIe slots.


> Maybe. But graphics cards aren't the only thing people put into PCIe slots.

Right, so there will be PCIe slots for expansion; they just won't support PCIe GPUs, just like Thunderbolt PCIe expansion chassis now. It isn't the PCIe slots that break compatibility, it's the difference in architecture between x86 and Apple Silicon that makes them incompatible.


> My guess has been that they are an item a journeyman blacksmith had to create to be promoted to master.

Seems pretty specific for an item to span two centuries of known construction. Why would they be saved and not recycled? Nearly every medieval village had a blacksmith, with larger population centers supporting more, so across the whole of medieval Europe there would have been thousands of blacksmiths at any one time. So why have only 116 of these turned up when there should be hundreds of thousands of them?

I think it's very interesting that their construction appears to cease just in time for Christianity to take over Europe in the late 4th century. Previously, Rome had 12 principal deities.

