If I were to bet, I'd say it's actually a mutual understanding of 'taking A-chips' to their full potential for these use-cases. It's also smart to make it a startup, because why spend the full weight of a corporation when a small, tight band is actually better? Sort of a separation of concerns: friendly relationship, possibly deals between the two entities; I'm sure Apple's datacenters could use some A-chip mojo in their blades.
It sounds like they’re focusing on cloud computing which isn’t something Apple is working on.
And I think this also shows that Apple has no intention of building an ARM chip for its desktop and Mac Pro products. Had that been the case, I think the three would not have jumped at the chance to start their own company.
How will this affect Apple’s ability to innovate going forward? Were these executives a crucial part of Apple’s competency?
Even Reddit has one
As a side note, a couple of jobs on their careers page seem to want ARM assembly experience...so it's likely to be an ARM core?
"Knowledge of library cells and optimizations from ARM, TSMC, and other high performance library vendors"
"Familiarity with coherent bus protocols like ARM AMBA and CHI bus protocols"
Plenty of people seem to have been caught up in the hype of RISC-V taking over the world and doing everything, but that's never going to happen. The ISA is heavily optimized towards making very low-end devices very cheap. Like, don't think cellphone chips, think appliances. This is not a bad call, as it's the area where greenfield designs with a cost advantage have the best chance to get market share. However, there is no path of extending the ISA that will make it competitive with ARM or x86 on high-end devices. The only way to do that is to design a RISC-VI that abandons most of the things that make RISC-V what it is.
I have to dispute this. RISC-V was specifically intended to make high-end designs (i.e. out-of-order architectures, multicore, SMT, etc.) not just feasible but relatively easy. It's also designed for extensibility from the ground up, which few other architectures are.
By default, code density on RISC-V is pretty bad. You can try to solve that with variable-length instructions, which many high-end RISC-V projects intend to use, but variable-length instructions mean your front end has to be more complicated to reach the same level of performance that a fixed-width instruction machine can achieve.
More instructions for a task also means your back end has to execute more instructions to reach the same level of performance. One way to do better is to fuse ISA-level instructions together into a smaller number of more complex instructions that get executed in your core. Basically every high-end design does this, but RISC-V would have to do it far more extensively than other architectures to achieve a similar level of density on the back end, which makes designing a high-end core more complex and possibly adds extra pipeline stages, making mispredicts more costly.
And more criticisms here: https://gist.github.com/erincandescent/8a10eeeea1918ee4f9d99...
EDIT: But in fairness it looks like conditional move might be getting added to the bit manipulation RISC-V extension which would fix one big pain point.
This isn't to say that RISC-V is bad. Its simplicity makes it wonderful for low-end designs. Its extensibility makes it great for higher-level embedded uses where you might want to add some instruction that makes your life easier for your hard drive controller or whatever, in a way that would require a very expensive architecture license if you were using ARM. It's open, which would be great if you were making a high-end open-source core for other people to use, except the POWER ISA just opened up, so if I were to start a project like that I'd use that instead.
Code density with the C extension is competitive with x86, which is very good for a RISC. And the C extension is not even that hard on the decoder, since most or even all 'compressed' instructions have a single uncompressed counterpart. It's easier to implement than what ARM32 does with Thumb.
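To make that 1:1 mapping concrete, here's a minimal sketch (mine, not from the thread) of expanding one RVC instruction, C.ADDI, into its 32-bit ADDI counterpart. The bit positions follow my reading of the RISC-V spec and should be double-checked; the point is only that a decoder can expand compressed forms early and hand the rest of the pipeline nothing but ordinary 32-bit instructions.

    #include <stdint.h>

    /* Hypothetical expander: C.ADDI rd, imm  ->  ADDI rd, rd, imm.
       Field layout is my reading of the RVC encoding (funct3=000, op=01). */
    static uint32_t expand_c_addi(uint16_t c)
    {
        uint32_t rd  = (c >> 7) & 0x1f;                 /* rd/rs1 field */
        int32_t  imm = ((c >> 2) & 0x1f)                /* imm[4:0]     */
                     | (((c >> 12) & 0x1) << 5);        /* imm[5]       */
        if (imm & 0x20)
            imm -= 64;              /* sign-extend the 6-bit immediate  */

        /* ADDI: imm[11:0] | rs1 | funct3=000 | rd | opcode=0x13 */
        return ((uint32_t)(imm & 0xfff) << 20) | (rd << 15) | (rd << 7) | 0x13u;
    }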
> One way to do better is to fuse together ISA-level instructions into a smaller number of more complex instructions that get executed in your core.
Insn fusion has in fact been endorsed by the designers of RISC-V as an expected approach in high-end designs. The RISC-V spec even gives "suggested" sequences for things like overflow checking, that can be transparently insn-fused if an implementation supports it.
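For what it's worth, here's a rough sketch of the kind of check that's meant (mine, not from the spec text itself): an overflow-checked unsigned add. My recollection is that the suggested sequence is the adjacent pair add/bltu, which a high-end implementation is free to recognise and fuse into a single internal op.

    #include <stdint.h>
    #include <stdbool.h>

    /* Overflow-checked unsigned add. My recollection of the spec's suggested
       RISC-V sequence for this is:
           add  t0, t1, t2
           bltu t0, t1, overflow
       i.e. exactly the two operations below, back to back, which a core can
       transparently fuse. Treat the mapping as an assumption, not gospel.  */
    static bool add_u64_checked(uint64_t a, uint64_t b, uint64_t *out)
    {
        uint64_t sum = a + b;   /* add  t0, t1, t2       */
        if (sum < a)            /* bltu t0, t1, overflow */
            return false;
        *out = sum;
        return true;
    }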
- Like somebody already pointed out, thumb is a different mode, whereas RISC-V C (RVC) can be freely mixed with normal 32-bit instructions.
- And speaking of higher end processors, x86 does very well in that space despite requiring much more complex decode than RVC.
- Aarch64 has more complex addressing modes (base + index<<shift in particular), whereas RISC-V needs both RVC and fusion to do the same with similar code size and execution-slot occupation (rough sketch below). Personally, I'm leaning towards thinking that it was a mistake for RISC-V not to support such addressing modes. Unless you're aiming for something really super-constrained in terms of gate count, having an adder and a small shifter as part of your memory pipeline(s) seems like an obvious choice. And thus, having single instructions to use those pipelines isn't really committing any sins against the RISC philosophy.
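A hedged illustration of that addressing-mode point (mine, not from the thread): indexing an int array from C. On AArch64 this is typically a single load using the base + (index << shift) mode, while base RV64I needs a shift, an add, and then the load (or Zba's sh2add, or RVC plus fusion) to do the same work.

    #include <stdint.h>

    /* Typical codegen, as I understand it (double-check with a compiler):
       AArch64:  ldr  w0, [x0, x1, lsl #2]
       RV64I:    slli a1, a1, 2
                 add  a0, a0, a1
                 lw   a0, 0(a0)
       RV64+Zba: sh2add a0, a1, a0
                 lw     a0, 0(a0)                                        */
    int32_t load_elem(const int32_t *base, uint64_t idx)
    {
        return base[idx];
    }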
What you should take seriously is that three of the four big RISC ISAs - MIPS, ARM, and POWER - all have extensions for variable length instructions. They all use them in embedded systems where program memory size is a big factor in your bill of materials and reducing that can save you money. But none of them use it on their high end processors. So I think we can conclude that the power/transistor cost of fetching a number of variable length instructions per clock is higher than the cost of expanding the L1 instruction cache to deal with full size instructions.
RISC-V on the other hand has an encoding scheme that already from the beginning allowed instructions with longer and shorter encodings. You can just mix compressed instructions with normal ones.
The real cost you're looking at is the complexity for instructions to straddle cache lines and MMU pages, but even then it's less of a deal than you might think.
It must be a pain to design its instruction decoder, but I don’t think that’s insurmountable for high end processors.
Or do I misunderstand your argument?
A72 is a mobile chip, so still not particularly relevant to tuna-fish's claim about high end servers. The raspberry pi 4 is not a high end server.
> half the die area
Much of this is free when comparing a 16nm design to a 7nm design. See footnote 1 in your link.
EDIT: I was curious, so I looked up the transistor density of tsmc 16nm vs 7nm.
16nm: 28.8 MTr/mm^2
7nm: 96.5 MTr/mm^2
So I think that sifive core with "half the area" is actually not quite as good as a die shrink of the A72.
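Back-of-the-envelope: 96.5 / 28.8 ≈ 3.35x the density, so a straight shrink of the same design would land at roughly 1/3.35 ≈ 30% of the 16nm area, assuming everything scaled at the headline rate (SRAM and analog don't, so the real number is somewhat worse, but likely still better than "half").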
So is Apple A13. And yet it's been reported to outperform all Intel desktop CPUs in single core performance (I find the benchmark dubious, but I do believe it's close, and faster than it has any right to be given the super low power consumption). The only reason it doesn't do so in multicore perf is the limited TDP - a concern which does not exist in high end chips.
Not all servers need to be high end. There’s a big market for small websites.
It seems symptomatic that he couldn't do this inside Apple.
Sad that it’s so hard to do some things inside humongous companies. Even when you’re obviously incredibly talented and accomplished.
It's all about mission orientation vs. functional orientation. Apple is heavily leveraged in a functional orientation to build chips for iPhones. NUVIA is heavily mission oriented in building server chips. Mission oriented companies win on speed. An acquihire of NUVIA shortens the total time this fella would need to ascend to the top tier of semiconductor executives just by virtue of the fact that they are a company that moves faster. He either becomes a top executive in a successful chip startup, or he gets acquihired at a high level into a big semiconductor company. Either way - he wins.
Andy Grove has some great insights on mission orientation vs functional orientation if you're interested. Chapter 8 of High Output Management - cliffs notes available here:
There’s no reason why huge companies can’t foster innovation, besides bad management. See Bell Labs and Xerox Parc.
You would think a place with almost unlimited money for R&D would be perfect for nurturing an ambitious project, but instead most go for VC.
What’s sad to me is that it’s cheaper and easier to buy a startup than to fix broken culture, given a large enough company.
I don't understand why this is sad? This is just a negotiating tactic.
So producing TV content is important enough that it should be done in-house but servers… not so much?
Seeing that Netflix is AWS’s largest customer, why is it surprising that Apple outsources servers? They also outsource all manufacturing except for Mac Pros, outsource camera design and displays.
They don’t “produce” content in house. They pay the same production companies that everyone else pays.
Tektronix in its prime, run by its original founders, did exactly this. The benefits still play out today.
In any case, best wishes and godspeed. I believe an increase in competition in the CPU space is good for humanity and only improves our chance of surviving as a species.
Like it or not, when a team of people knows exactly what their multinational company would find valuable, and observes that the company isn't choosing to build that thing itself (often for internal political reasons the insiders are privy to), it's an extremely viable strategy to leave, build the thing yourself, and sell it back to the company 2-3 years later, and make yourself a big chunk of change in the process.
In fact, I'd say it's a much more guaranteed route to success than betting on an IPO. Because unless there's a drastic change in the political winds at the company, as long as you can execute on the product development, it's a reasonably sure bet.
This is considered a very viable exit strategy. Sure you want to have a plan B in case nobody will acquire you, but I've known several companies that plan A was to get acquired. It didn't work out for every one but it does work sometimes. It really depends on your industry and your ability to make yourself attractive to the buyers.
Intel, Cisco, nVidia, AMD, Qualcomm and a dozen other companies are always happy to scoop up new IP for their castles. Just Intel and Cisco together have $37 billion in annual operating income burning a hole in their pockets.
Teams that can flip new semi or network IP, are like medtech inventors that routinely do exactly the same thing in that space. It's very difficult, requires particularly specialized industry insider knowledge and the industries are filled with very large, rich corporations happy to overpay to ensure their positions.
How did that work out for Uber, Lyft, and closer to home DropBox? They all went public and still haven’t shown that they have any idea how to become profitable.
What's wrong with DropBox and why in the world would you include them with Uber? They had a modest $18m operating loss on $428m in revenue last quarter. They have essentially zero red ink concerns right now and their business is still expanding (over the prior three years sales will have increased almost 100% through this fiscal year). Further they have a billion dollars in cash to go with their small trickle of red ink.
DropBox could roll into profitable status in any given quarter, and stay there going forward, if they wanted to, just by very gently nudging operating expenses. They don't need to; there's basically no benefit to their market valuation in doing that (not until or unless they start throwing off a lot of profit). They're in a position where they're clearly going to let sales growth gradually lift their operating condition to profitable, I'd guess within four quarters or so at their rate of improvement.
A lot of companies follow the DropBox approach of allowing sales growth and margin improvement to overtake the operating losses gradually. That is especially desirable when pulling back on expenses might harm sales growth and there is no good reason to do it (ie no cash crunch).
See: Salesforce, ServiceNow, Palo Alto Networks, Twitter and Square for recent prominent examples of how this works. It's not uncommon.
The company’s prospectus warns it has “a history of net losses”; anticipates increasing expenses and slowing revenue growth; and notes that it “may not be able to achieve or maintain profitability.” To compete, Dropbox is pouring money into research and development to convert non-paying users to paid subscribers, and to enhance collaborative tools.
Also, as Jobs famously said about DropBox: “They are a feature, not a product.” For the same price that you pay for DropBox and 1TB of storage, you can get the full Office 365 suite for 6 users and a total of 6TB of storage.
For personal users rather than businesses, Google and Apple offer good-enough alternatives that are cheaper and integrate better with their ecosystems.
Every company you named has a much higher switching cost than Dropbox. You can’t just go to a Twitter competitor because of the network effects.
Not until iCloud folder sharing is available.
I worked as a dev lead for one company where we were our vendor's largest customer (over 60% of their revenue). I recommended that, when contract renewals came, we either not depend on that vendor for our new implementation, or insist on them putting their code in escrow in case they were sold and the software was abandoned.
For tech, I would suggest VMware, Arista, and Red Hat are far more rational guides.
What about building a company with the idea of making money?
Look no further than YC - only two YC based companies have ever gone public.
Ampere is 3 years old, though PE and not VC backed.
Private equity vs VC is a distinction without a difference as far as the investors motivations.
Profitable, yes, but all companies together = less than bitcoin
Spawn companies, yes, but total number of public companies that IPOed = 2
But if you ask a random person in the street, they will know what crypto is, and they may know Dropbox if they are computer literate, but they won't know YC unless they are a geek.
And for those who want to be successful by creating their company, I have strong doubts. Profitable businesses are laughed at as "lifestyle business", and profitability itself is considered less important than network effects.
In the end, I'm not surprised companies like wework are happening, where no sane person should invest a dollar. It's like the dot com boom again.
YC funds a lot of companies who do cool things that aren't very profitable. I don't know what most of them do, and that's great. So many of them serve niches I haven't even heard of. If they break even making someone's life better, then they're a success.
Imagine a world without Stripe or something like it. The current boom in membership services might not have happened. Even VC-juiced Patreon, which I use, handles payments through Stripe. How many successful companies by your measure glue their business together with companies that got their initial funding and guidance from YC?
There's a reason I'm not a capitalist. Profit as the only measure of success is how we ended up in this hell world. You're welcome to only care about that, but I prefer to imagine and work toward better futures.
Also, please don't cross into personal attack.
Just that there are nuances between a monastery and a money-seeking-only business.
No, profit is not the cause of hell. Debt is; it's a modern form of slavery.
When you accept VC money, the reality (if you play by the rules) is that you agree to give your most precious resource (time and energy) to pay back your debt.
It may pay off, or may not, but you have to be very aware of your choices.
Theoretically you could “buy back” your equity, but with the type of returns they want it's almost impossible. Taking VC money is like taking a payday loan.
there’s nothing wrong or fake about building a company and a product to take on more risk/freedom with the intent of selling it to someone for whom it would be valuable later. it also probably pays a lot better, considering the increased risk of failure and better negotiating position if you invent and produce something truly groundbreaking.
this is how silicon valley has worked for decades.
Notice I didn’t say whether it was their desire. What they want is inconsequential once you take investor money.
Exponential Technology, for instance, was designing PowerPC chips that were faster than Motorola's or IBM's. After Apple got rid of the clone market, they closed down the business and started another one. In 2010, Apple acquired them to help design their custom Arm chips.
> expect to start having losses consistently
This is a very rosy view of Tesla. Tesla has always had losses consistently and will continue to do so for years. They are $13 billion in debt with only $4 billion cash, and a not insignificant amount of their cash is from full self-driving preorders they are still obligated to fulfill.
My money is still on Tesla going bankrupt.
(And for other kinds of chips besides general purpose CPUs, fabless startups have been common for a long time)
Your tone ("building a company"), however, suggests that something useless is occurring here, that they are just playing house as opposed to creating value. That's the part that feels like coded cynicism.
After seeing Apple design processors that are performance-competitive with Intel except at the very high end (and they will probably make better modem chips than Intel could) by acquiring PA Semi and the remnants of Exponential Technology, and seeing Amazon design custom server and embedded chips for its own use, I think we are seeing a renaissance in chip design we haven't seen since the 80s and 90s.
Microsoft and Google are also dipping their toes in custom designed chips.
It’s estimated that Google only sells 3 or 4 million Pixels a year. What’s the motivation for them to care about the watch market? Screens on watches are too small to display ads.
Considering Google paid Apple 12 billion dollars this year to be the default search engine on iOS, they have a very big incentive to build a high-quality Android phone to get iOS users to switch over.
Those 3 or 4 million Pixels could have been iOS users; I was one of those people last year when I decided to try out Android again, and the Pixel was the first Android phone I've ever used that didn't suck compared to my iPhones (looking at you, Samsung Galaxy S4). With Apple's draconian enforcement of what I can run on my iOS devices and set as defaults, I think I'll even stick with Android unless it goes to shit or Apple makes something radically better.
They’ve had over a decade to create an ecosystem that people are willing to actually spend money on. But statistically, people who are willing to spend money on a high end phone, a watch, $160+ headphones, etc veer toward Apple.
Google has been selling its own branded phones and tablets for six or seven years unsuccessfully.
The Pixel 3a seems to have been relatively successful.
And while Pixel vs. iPhone is not the same discussion as iOS vs Android, it's worth mentioning Android has grown over the past several years whereas iOS has largely stalled.
But Apple also realizes that iPhone sales are stagnating. At the same time, though, they have auxiliary products, from watches to AirPods to services, that are all making revenues and profits any of the Android makers would die for.
The high end Android market is minuscule. The whole Android hardware ecosystem is a profitless race to the bottom.
I doubt Google “cares” if you use Android or not. If you use iOS you still get Google ads and you still probably at least watch Youtube.
this looks made up & doesn't make sense.
In any case, they don't mention any secret sauce. Intel has strong channels, fabs and the x86-64 ISA with massive software compatibility on the server side. Even with better performance/power, Intel can simply undercut the competition to drive them out of business. If Qualcomm's sales channels couldn't dent the server space, I am sceptical about upstarts, unless their [power, performance, cost] is significantly better than Intel.
However, unifying the memory models sounds like a performance disaster for the ARM code. And having multiple decoders in hardware (as well as having enough of them) doesn't seem like a justifiable use of silicon real estate.
Also note that the x86 memory model is a perfectly valid implementation for an ARM chip, as it's a superset of the ARM memory model constraints. And the performance impact isn't that big really.
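A small message-passing sketch (mine, not from the thread) of why a stronger model is still a valid implementation of a weaker one: the ARM spec permits certain reorderings but never requires them, so a core that happens to order memory like x86-TSO simply never exhibits them, and correct ARM code (which has to use acquire/release or barriers anyway) keeps working unchanged.

    #include <stdatomic.h>

    static _Atomic int data  = 0;
    static _Atomic int ready = 0;

    void producer(void)
    {
        atomic_store_explicit(&data, 42, memory_order_relaxed);
        /* Typically an STLR on ARMv8; under TSO an ordinary store already
           has release semantics, so the ordering comes "for free".        */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consumer(void)
    {
        /* Typically an LDAR on ARMv8; a plain load suffices under TSO.    */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;
        return atomic_load_explicit(&data, memory_order_relaxed); /* sees 42 */
    }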
Even Windows runs on Arm now
For servers they care about price, performance, support and volume/availability
If you can have a non-x86 platform that will save power/cooling to the big datacenter users (FAANG basically) they will be willing to give you money
You only get to read this kind of stuff in HN.
ARM servers are extremely awful at all of these. Most ARM servers perform poorly compared to x86 servers, they have less software support, and they require you to cross-compile or switch your development machines to ARM as well. And actually getting an ARM server is incredibly difficult, because almost no one offers them, and even the providers that do offer some ARM instances don't give you the same variety as with x86. The worst part is that switching to ARM involves extra costs for your expensive developers, in exchange for hardware that probably was already too cheap to care about, because RAM has a much bigger impact on server costs than the CPU.
But if they can get an ARM server that checks those boxes the game may change.
> involves extra costs for your expensive developers
Which expensive developers? Web technologies run the same on ARM as on x86 (may I say it's even easier than developing for mobile; Node/Java/PHP/Python/etc. run the same on ARM, and Node might even benefit from optimizations done for ARM Chromium).
The three worked on mobile chips at Apple and Google, and are now focusing on enterprise data centres. So it’s not exactly the same.
> That said, what’s interesting is that while the troika of founders all have a background in mobile chipsets, they are indeed focused on the data center broadly conceived (i.e. cloud computing), and specifically reading between the lines, to finding more energy-efficient ways that can combat the rising climate cost of machine learning workflows and computation-intensive processing.
> The company’s CMO did tell me that the startup is building “a custom clean sheet designed from the ground up” and isn’t encumbered by legacy designs. In other words, the company is building its own custom core, but leaving its options open on whether it builds on top of ARM’s architecture (which is its intention today) or other architectures in the future.
It's rarely black or white.
We have seen it with the case of Uber and the self-driving cars.
In their own words, they said they will reuse knowledge from ARM and Apple (including Desktop CPUs).
As public investor of Apple, yes I think it's a very legitimate question.
The Uber case was quite rare and egregious. It's not as simple as working on similar stuff.
Uber directly acquired a company (well after raising hundreds of millions) made by someone who had worked at Google, to work on the exact same thing; that's far riskier than working on a completely different product category you're building from scratch. But even then, the case against Uber relied heavily on the fact that he stole IP directly from company computers and used it as the base of the product.