Three of Apple and Google’s former star chip designers launch NUVIA (techcrunch.com)
216 points by tosh 19 days ago | 169 comments



How do people start new ventures like this without falling afoul of past non-disclosure agreements?


Don't publicise more than the minimum in uarch details, and give a friendly reminder to anyone who sues that their IP also secretly depends on other people's IP, and that starting a nuclear war is probably not worth it?


Late reply: we don't know the details but it's entirely possible that part of their knowledge from their experience at Apple is allowed to be used so long as they do not compete with Apple products using that knowledge; and the server / enterprise target is clearly the one space where Apple is de facto absent in tech.

If I were to bet, I'd say it's actually a mutual understanding of 'taking A-chips' to their full potential for these use-cases. It's also smart to make it a startup, because why spend the full power of a corporation when a small tight band is actually better. Sort of a separation of concerns; friendly relationship, possibly deals between the two entities — I'm sure Apple's datacenters could use some A-chip mojo in their blades.


A reply from the future: looks like Apple is unhappy.

https://news.ycombinator.com/item?id=21752781


You get your lawyers to talk to their lawyers, I would presume.

It sounds like they’re focusing on cloud computing which isn’t something Apple is working on.


Assuming they are well connected at ARM, I wonder if they'll get to work on a uarch based on ARMv9?

And I think this also shows that Apple has no intention of building an ARM chip for its desktop and Mac Pro products. Had that been the case, I think the three would not have jumped at the chance to start their own company.


They could've possibly made this move to get acquired by Apple for a larger sum than they'd get by plainly working there.


NUVIA =~ NVIDIA. That’s a bad name.


New VIA? It looks like VIA are still producing CPUs: https://en.m.wikipedia.org/wiki/VIA_Technologies


That would make things even worse!


I have to agree. They could have tried to distinguish themselves with a name resembling something novel, yet they decided to go with one that sounds like a copy-cat version of a company already in the industry. Why not something like XCHIP, POWER2, or whatever? I hope the lack of imagination does not extend to their chip design.


Also, the name sounds like both Nivea (skin care) and Nuva (birth control).


It also sounds like Lenovo (computer company), Denuvo (DRM system), and Pepsi (beverage)


yeah, what the fuck. Maybe they are counting on the inevitable lawsuit to raise their profile before changing their name or something?


Also Jon Masters, ex-Red Hat.


my question to the group is:

How will this affect Apple’s ability to innovate going forward? Were these executives a crucial part of Apple’s competency?


TechCrunch has the foulest of cookie-control systems. Not reading that.


Someone please create a mirroring service for HN submissions.

Even Reddit has one


I wonder if they are considering RISC-V. Probably not - they don’t sound like risk-taking people. But given the timelines of chip production, I would think a couple years from now a solid RISC-V server might be very much in demand.


You're saying these are risc-averse people?


The vector extensions would need to be finished before that can happen.

As a side note, a couple of jobs on their careers page seem to want ARM assembly experience...so it's likely to be an ARM core?


Yeah, they mostly avoid being specific, but there's a few ARM callouts on their job listings, like:

"Knowledge of library cells and optimizations from ARM, TSMC, and other high performance library vendors"

https://nuviainc.com/job-listing/nuvia-4csrj

"Familiarity with coherent bus protocols like ARM AMBA and CHI bus protocols"

https://nuviainc.com/job-listing/nuvia-mlpfd


Even without ARM, I can see them using AMBA or AXI for the bus because there is so much existing IP out there built to that interface.


And they're fairly emblematic of modern SoC bus protocols. You'll be fine jumping to TileLink or whatever if you're used to AMBA/AXI.


The RISC-V ISA is pretty awful for high-end designs. No competitive servers can be built with it.

Plenty of people seem to have been caught up in the hype of RISC-V taking over the world and doing everything, but that's never going to happen. The ISA is heavily optimized towards making very low-end devices very cheap. Like, don't think cellphone chips, think appliances. This is not a bad call, as it's the area where greenfield designs with a cost advantage have the best chance to get market share. However, there is no path of extending the ISA that will make it competitive with ARM or x86 on high-end devices. The only way to do that is to design a RISC-VI that abandons most of the things that make RISC-V what it is.


> The RISC-V ISA is pretty awful for high-end designs. No competitive servers can be built with it.

I have to dispute this. RISC-V was specifically intended to make high-end designs (e.g. out-of-order architectures, multicore, SMT, etc.) not just feasible but relatively easy. It's also designed for extensibility from the ground up, which few other architectures are.


Interesting view. The SIMD, Vector, and Hypervisor RISCV extensions, as well as things like OoOE seem to contradict your claim. Are you saying they intended to target the high end, and failed? Or that they never intended to target the high end?


There are a lot of things about the RISC-V design that come from a very ideological place and hurt in a high-end design. Yes, there are extensions and designs with high-end features, that's certainly true, and I'm sure someone will be making a high-end version at some point. But the ISA isn't very well suited to it compared to Power or ARM.

By default, code density on RISC-V is pretty bad. You can try to solve that by using variable-length instructions, which many high-end RISC-V projects intend to do, but variable-length instructions mean your front end has to be more complicated to reach the same level of performance that a fixed-width instruction machine can achieve.

More instructions for a task means your back end also has to execute more instructions to reach the same level of performance. One way to do better is to fuse ISA-level instructions into a smaller number of more complex instructions that get executed in your core. Basically every high-end design does this, but RISC-V would have to do it far more extensively than other architectures to achieve a similar level of density on the back end, which makes designing a high-end core more complex and possibly adds pipeline stages, making mispredicts more costly.
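
To make the fusion idea concrete, here's a minimal C sketch (purely illustrative; the opcode enum and fields are stand-ins, not real RISC-V encodings) of recognizing the classic lui+addi pair that materializes a 32-bit constant:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { OP_LUI, OP_ADDI, OP_LI32, OP_OTHER } opcode_t;

    typedef struct {
        opcode_t op;
        int      rd, rs1;
        int32_t  imm;
    } insn_t;

    /* Try to fuse "lui rd, hi20 ; addi rd, rd, lo12" into a single
       load-immediate macro-op, the way a high-end decoder might. */
    static bool try_fuse_li(const insn_t *a, const insn_t *b, insn_t *out)
    {
        if (a->op == OP_LUI && b->op == OP_ADDI &&
            b->rd == a->rd && b->rs1 == a->rd) {
            out->op  = OP_LI32;
            out->rd  = a->rd;
            /* lui places its immediate in bits 31:12; addi then adds
               a sign-extended 12-bit value on top */
            out->imm = (a->imm << 12) + b->imm;
            return true;
        }
        return false;
    }

    int main(void)
    {
        /* lui x5, 0x12345 ; addi x5, x5, 0x678 => li x5, 0x12345678 */
        insn_t a = { OP_LUI,  5, 0, 0x12345 };
        insn_t b = { OP_ADDI, 5, 5, 0x678 };
        insn_t fused;
        if (try_fuse_li(&a, &b, &fused))
            printf("fused: li x%d, 0x%x\n", fused.rd, fused.imm);
        return 0;
    }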

And more criticisms here: https://gist.github.com/erincandescent/8a10eeeea1918ee4f9d99...

EDIT: But in fairness it looks like conditional move might be getting added to the bit manipulation RISC-V extension which would fix one big pain point.

This isn't to say that RISC-V is bad. Its simplicity makes it wonderful for low-end designs. Its extensibility makes it great for higher-level embedded uses where you might want to add some instruction that makes your life easier for your hard drive controller or whatever, in a way that would require a very expensive architecture license if you were using ARM. It's open, which would be great if you were making a high-end open-source core for other people to use, except the Power ISA just opened up, so if I were to start a project like that I'd use that instead.


> By default, code density on RISC-V is pretty bad.

Code density with the C extension is competitive with x86, which is very good for a RISC. And the C extension is not even that hard on the decoder, since most or even all 'compressed' instructions have a single uncompressed counterpart. It's easier to implement than what ARM32 does with Thumb.

> One way to do better is to fuse together ISA-level instructions into a smaller number of more complex instructions that get executed in your core.

Insn fusion has in fact been endorsed by the designers of RISC-V as an expected approach in high-end designs. The RISC-V spec even gives "suggested" sequences for things like overflow checking, that can be transparently insn-fused if an implementation supports it.
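
For example, the spec's suggested unsigned-add overflow check is exactly the kind of two-instruction idiom a decoder can spot. Here's a minimal C sketch of its semantics (the add/bltu pair in the comment is the shape the spec suggests; the fusion itself would happen in decode hardware):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Semantics of the suggested RISC-V idiom:
           add  t0, t1, t2        # t0 = t1 + t2 (wraps mod 2^64)
           bltu t0, t1, overflow  # sum < addend (unsigned) => wrapped
       An implementation is free to recognize that pair and fuse it
       into a single add-and-check micro-op. */
    static bool add_overflows_u64(uint64_t a, uint64_t b, uint64_t *sum)
    {
        *sum = a + b;
        return *sum < a;
    }

    int main(void)
    {
        uint64_t s;
        printf("%d\n", add_overflows_u64(UINT64_MAX, 1, &s)); /* 1 */
        printf("%d\n", add_overflows_u64(2, 3, &s));          /* 0 */
        return 0;
    }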


Yes, RISC-V's instruction compression is easier to use than Thumb. But there's a reason no ARM server uses compressed instructions, and ARM dropped them from A64.


I don't think it's as easy as saying that AArch64 dropped Thumb, therefore compressed instructions are dumb.

- Like somebody already pointed out, Thumb is a different mode, whereas RISC-V C (RVC) can be freely mixed with normal 32-bit instructions.

- And speaking of higher end processors, x86 does very well in that space despite requiring much more complex decode than RVC.

- AArch64 has more complex addressing modes (base + index<<shift in particular), whereas RISC-V needs both RVC and fusion to do the same with similar code size and execution-slot occupation (see the sketch below). Personally, I'm leaning towards thinking it was a mistake for RISC-V not to support such addressing modes. Unless you're aiming for something really super-constrained in terms of gate count, having an adder and small shifter as part of your memory pipeline(s) seems like an obvious choice. And thus, having single instructions to use those pipelines isn't really committing any sins against the RISC philosophy.
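
To illustrate that last point, here's roughly what a scaled-index load costs on each (instruction sequences sketched from memory, so treat them as approximate):

    #include <stdint.h>
    #include <stdio.h>

    uint64_t load_elem(const uint64_t *base, uint64_t idx)
    {
        /* AArch64: one instruction; the scaled-index addressing mode
           does the shift and add inside the load pipeline:
               ldr  x0, [x0, x1, lsl #3]
           RV64 base ISA: three instructions, which a high-end core
           would want to fuse back together:
               slli a1, a1, 3
               add  a0, a0, a1
               ld   a0, 0(a0)  */
        return base[idx];
    }

    int main(void)
    {
        uint64_t arr[4] = { 10, 20, 30, 40 };
        printf("%llu\n", (unsigned long long)load_elem(arr, 2)); /* 30 */
        return 0;
    }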


Enlighten us: what is that reason?


As far as I can tell, the reason is that with variable-length instructions you have a mux layer between fetch and decode that sends the right set of bytes to the right decode slot. Your first set of 16 bits is certainly going into the first decoder, but your fourth might be going to the second, third, or fourth depending on previous decoding. Or maybe not; I've only worked with fixed-width instructions when getting into the weeds of processors, and the only processor I've entirely designed myself was a simple five-stage classic RISC. So don't take the above too seriously.

What you should take seriously is that three of the four big RISC ISAs - MIPS, ARM, and POWER - all have extensions for variable length instructions. They all use them in embedded systems where program memory size is a big factor in your bill of materials and reducing that can save you money. But none of them use it on their high end processors. So I think we can conclude that the power/transistor cost of fetching a number of variable length instructions per clock is higher than the cost of expanding the L1 instruction cache to deal with full size instructions.
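
For the curious, here's a minimal C sketch of the steering problem (simplified: RISC-V's encoding scheme reserves even longer formats, and this ignores an instruction straddling two fetch bundles). The low two bits of each 16-bit parcel say whether it starts a 32-bit instruction (0b11) or a 16-bit compressed one, so which decode slot parcel N feeds depends on every length decision before it:

    #include <stdint.h>
    #include <stdio.h>

    /* Scan a fetch bundle of 16-bit parcels and record where each
       instruction starts. In hardware this serial-looking loop is
       what becomes the mux layer between fetch and decode. */
    static int find_insn_starts(const uint16_t *parcels, int n, int *starts)
    {
        int count = 0;
        for (int i = 0; i < n; ) {
            starts[count++] = i;
            /* low two bits == 0b11 -> 32-bit insn (two parcels);
               anything else -> 16-bit compressed insn (one parcel),
               which expands to one full-size insn in decode */
            i += ((parcels[i] & 0x3) == 0x3) ? 2 : 1;
        }
        return count;
    }

    int main(void)
    {
        /* compressed, 32-bit (two parcels), compressed */
        uint16_t bundle[4] = { 0x4501, 0x8093, 0x0000, 0x4501 };
        int starts[4];
        int n = find_insn_starts(bundle, 4, starts);
        for (int k = 0; k < n; k++)
            printf("insn %d starts at parcel %d\n", k, starts[k]);
        return 0;
    }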


Thumb works as an alternative encoding that you have to switch into, and where you only have a subset of the instructions available. Switching has at least the same cost as a branch. My guess is that this is the reason Thumb is often avoided.

RISC-V on the other hand has an encoding scheme that already from the beginning allowed instructions with longer and shorter encodings. You can just mix compressed instructions with normal ones.


Thank you for clarifying. It still seems wrong, somehow. Admittedly I don't know enough detail about Thumb and the others, but if the variable-length encoding is a simple design like RISC-V's, it's hard to imagine that extra layer being so expensive as to eliminate the benefits of an effectively larger I$. It would be nice to be able to dig deeper into the details of this.


At most, you're looking at an extra pipeline stage for all of that. The reduced I$ pressure is almost certainly worth it.

The real cost you're looking at is the complexity for instructions to straddle cache lines and MMU pages, but even then it's less of a deal than you might think.


Nope; done right, the cost is a layer of 2:1 muxes, some control logic to pass data between decoders, and 16 flops to hold a partially decoded instruction between fetch bundles.


It's more complicated than that if you want to decode multiple instructions per cycle, which you have to for high-end designs. I still agree with monocasa that it's almost certainly going to be worth it for the reduced I$ pressure.


For sure, it's just that this little bit could necessitate a pipeline stage if that was the critical timing path somehow already. Just trying to be as friendly to their argument as possible before taking it on.


x64 has that problem in the extreme (max instruction length is 15 bytes), doesn't it?

It must be a pain to design its instruction decoder, but I don’t think that’s insurmountable for high end processors.

Or do I misunderstand your argument?


Yes, it certainly does. As far as I can tell it takes more engineer hours to make an efficient x86 front end than it does a RISC front end and the RISC designs I see tend to balance their pipelines so that they're wider in front than x86 designs typically are. Everything here is a matter of "all else being equal" and the talent and hard work of the design team are more important than any of the ISA differences we're talking about here.


That’s funny, SiFive just announced a 3-way OoO RISC-V core. A73 performance, but half the die area. Pretty awful, I guess.

https://www.sifive.com/blog/incredibly-scalable-high-perform...


FYI the comparison is to the A72, not A73.

A72 is a mobile chip, so still not particularly relevant to tuna-fish's claim about high-end servers. The Raspberry Pi 4 is not a high-end server.

> half the die area

Much of this is free when comparing a 16nm design to a 7nm design. See footnote 1 in your link.

EDIT: I was curious, so I looked up the transistor density of tsmc 16nm vs 7nm.

16nm: 28.8 MTr/mm^2

7nm: 96.5 MTr/mm^2

So I think that SiFive core with "half the area" (2.63 mm² at 7nm is roughly 2.63 × 96.5/28.8 ≈ 8.8 mm² of 16nm silicon, against roughly 8 mm² for the A72) is actually not quite as good as a die shrink of the A72.


Necro-reply here, but yes, you got me. 2.63mm² on 7nm is about 8mm² on 16nm, which is the same as the A72. And it was the A72, the A73's predecessor. I think the point stands that RISC-V is suited for high-performance designs, which was the original contention. If a startup can do 3-way OoO, an org with more resources can do 7-way OoO along with more sophisticated branch prediction and stronger cache coherency to catch up to state-of-the-art cores. There isn't anything in the RISC-V ISA preventing that.


>> A72 is a mobile chip

So is Apple's A13. And yet it's been reported to outperform all Intel desktop CPUs in single-core performance (I find the benchmark dubious, but I do believe it's close, and faster than it has any right to be given the super low power consumption). The only reason it doesn't do so in multicore perf is the limited TDP, a concern which does not exist in high-end chips.

https://news.ycombinator.com/item?id=21033371


Sure. Even all of Apple's macOS-Server machines running the Apple Cloud are powered by A13 chips these days. /s


Non sequitur.


This argument would benefit from specific examples.


You've not justified this. I'd be very interested in understanding why you make this claim.


> No competitive servers can be built with it.

Not all servers need to be high end. There’s a big market for small websites.


The big names involved are all heavily involved in ARM so I expect that's what they'll use.


Specifically their focus is on energy efficiency in data center chips.


Or maybe risc-v is just good enough?


I wish him good luck. The more the merrier for us, consumers.


What’s sad is that he had to exit Apple and found a company. Maybe to sell it back to Apple in the future.

It seems symptomatic that he couldn't do it inside Apple.

Sad that it’s so hard to do some things inside humongous companies. Even when you’re obviously incredibly talented and accomplished.


It's not so sad or surprising. Apple is heavily leveraged to build chips in support of its core business, which is building iPhones. He can likely achieve his goal of becoming a top executive in the semiconductor industry (within Apple or another company) faster by getting acquihired into a larger corporation as the head of this new server development company.

It's all about mission orientation vs. functional orientation. Apple is heavily leveraged in a functional orientation to build chips for iPhones. NUVIA is heavily mission oriented in building server chips. Mission oriented companies win on speed. An acquihire of NUVIA shortens the total time this fella would need to ascend to the top tier of semiconductor executives just by virtue of the fact that they are a company that moves faster. He either becomes a top executive in a successful chip startup, or he gets acquihired at a high level into a big semiconductor company. Either way - he wins.

Andy Grove has some great insights on mission orientation vs functional orientation if you're interested. Chapter 8 of High Output Management - cliffs notes available here:

https://medium.com/@iantien/top-takeaways-from-andy-grove-s-...


I don't see it as sad, the existence of NeXT as a separate entity allowed many possibilities. I.e. it left room open for a completely different startup to roll new tech directions back into Apple and NeXT to end up as new direction at SGI, HP or whatever. Even if the end looked a lot like the status quo should have looked, a wider range of possibilities shape the industry and individual motivation.


The fact that Jobs had to found another company is unfortunate and symptomatic to me.

There's no reason why huge companies can't foster innovation, besides bad management. See Bell Labs and Xerox PARC.

You would think a place with almost unlimited money for R&D would be perfect for nurturing an ambitious project, but instead most go for VC.

What’s sad to me is that it’s cheaper and easier to buy a startup than to fix broken culture, given a large enough company.


Bell and Xerox are great examples of why. If the ideas were separate entities, Bell or Xerox could have decided which to buy back and run as a business. They didn't decide anything and each idea had to be stolen by a visitor.


>What’s sad is that he had to exit Apple and found a company. Maybe to sell it back to Apple in the future.

I don't understand why this is sad? This is just a negotiating tactic.


He wants to make server chips. Apple doesn't sell servers. It would be more of a Google thing to invest money in random products that wither and fail inside the company.


Apple doesn’t need servers? They outsource iCloud to AWS/Microsoft.

So producing TV content is important enough that it should be done in-house but servers… not so much?


I didn’t say that Apple didn’t “need” servers. I said that Apple doesn’t “sell” servers. Apple doesn’t even design chips for its desktop computers (yet).

Seeing that Netflix is AWS's largest customer, why is it surprising that Apple outsources servers? They also outsource all manufacturing except for Mac Pros, and outsource camera design and displays.

They don’t “produce” content in house. They pay the same production companies that everyone else pays.


There were companies who encouraged these things. Lifted up whole regions, and in some cases, kicked off whole new industries.

Tektronix in its prime, run by its original founders, did exactly this. The benefits still play out today.

https://www.opb.org/television/programs/oregonexperience/epi...


What would be sad is if he was unable to take the financial risk of leaving Apple. The whole point of capitalism is encouraging experimentation with limited liability companies, that falls apart if you try to experiment inside another company. The magic of our economy is in the connections between companies, inside individual companies it's just boring old dictatorships.



Alternative to RISC-V? https://archive.is/fFw5T


Is it possible or probable they'll go into mobile chips to compete with Qualcomm?

In any case, best wishes and godspeed. I believe an increase in competition in the CPU space is good for humanity and only improves our chance of surviving as a species.


Apple is already doing this (the mobile chips part), opening or expanding offices in San Diego and Austin for chip design, based on public announcements, job openings, and personnel changing companies.


They're not really competing with Qualcomm though, since they don't sell their chips.


Let’s be honest. They aren’t building a company to take on Intel and AMD. They are at most “building a company” to either be an acquisition for one of the major chip producers (maybe even Apple) or to be acquihired.


Building a company to sell is a very bad thing to do. There is no guarantee it will even happen. You should always build a company with the idea that you will go public, as doing so forces discipline in how you do things (there are a ton of internal processes, accounting, etc.). Having that work done and clean makes you more attractive to someone who would want to acquire the company, and also gives you a better negotiating position. In an ideal world, you build it to go public and are at a point where you are starting the paperwork, and someone makes an offer, which you can then shop to their biggest competitor in the space. A bidding war starts and you are sold for IPO price++++, a number that is 12 months down the road from where you think you would be after the IPO. You have just made a ton, and removed a year's worth of risk.


Every company is a risk, there's no guarantee of anything happening.

Like it or not, when a team of people knows exactly what their multinational company would find valuable, and observes that the company isn't choosing to build that thing itself (often for internal political reasons the insiders are privy to), it's an extremely viable strategy to leave, build the thing yourself, and sell it back to the company 2-3 years later, and make yourself a big chunk of change in the process.

In fact, I'd say it's a much more guaranteed route to success than betting on an IPO. Because unless there's a drastic change in the political winds at the company, as long as you can execute on the product development, it's a reasonably sure bet.


> Building a company to sell is a very bad thing to do.

This is considered a very viable exit strategy. Sure, you want to have a plan B in case nobody will acquire you, but I've known several companies whose plan A was to get acquired. It didn't work out for every one, but it does work sometimes. It really depends on your industry and your ability to make yourself attractive to buyers.


Building to be acquired has not been uncommon in the semiconductor and network gear spaces in particular over the last two decades.

Intel, Cisco, nVidia, AMD, Qualcomm and a dozen other companies are always happy to scoop up new IP for their castles. Just Intel and Cisco together have $37 billion in annual operating income burning a hole in their pockets.

Teams that can flip new semi or network IP are like medtech inventors, who routinely do exactly the same thing in their space. It's very difficult, requires particularly specialized industry insider knowledge, and the industries are filled with very large, rich corporations happy to overpay to ensure their positions.


It is a viable exit strategy as a backup, for sure. I would question anyone who came to me saying their only plan is for a FAANG et al. to acquire them, unless they came from one of those companies, left on good terms, and were trying to solve a problem that is important, but not so important that the company would want to do it themselves. It also helps if you can be sure that what you are working on is wanted by two of the major companies that would be interested in you.


> You should always build a company with the idea that you will go public as doing so forces discipline in how you do things

How did that work out for Uber, Lyft, and, closer to home, Dropbox? They all went public and still haven't shown that they have any idea how to become profitable.


> DropBox

What's wrong with DropBox and why in the world would you include them with Uber? They had a modest $18m operating loss on $428m in revenue last quarter. They have essentially zero red ink concerns right now and their business is still expanding (over the prior three years sales will have increased almost 100% through this fiscal year). Further they have a billion dollars in cash to go with their small trickle of red ink.

DropBox could roll into profitable status in any given quarter and going forward, if they wanted to, just by very gently nudging operating expenses. They don't need to, there's basically no benefit to their market valuation in doing that (not until or unless they start spitting off a lot of profit). They're in a position where they're clearly going to let sales growth gradually lift their operating condition to profitable, I'd guess within four quarters or so at their rate of improvement.

A lot of companies follow the DropBox approach of allowing sales growth and margin improvement to overtake the operating losses gradually. That is especially desirable when pulling back on expenses might harm sales growth and there is no good reason to do it (ie no cash crunch).

See: Salesforce, ServiceNow, Palo Alto Networks, Twitter and Square for recent prominent examples of how this works. It's not uncommon.


From Dropbox’s own IPO filing:

https://qz.com/1214822/dropbox-is-filing-for-a-500-million-i...

The company’s prospectus warns it has “a history of net losses”; anticipates increasing expenses and slowing revenue growth; and notes that it “may not be able to achieve or maintain profitability.” To compete, Dropbox is pouring money into research and development to convert non-paying users to paid subscribers, and to enhance collaborative tools.

Also, as Jobs famously said about Dropbox: "They're a feature, not a product." For the same price that you pay for Dropbox and 1TB of storage, you can get the full Office 365 suite for 6 users and a total of 6TB of storage.

For personal users and not businesses, Google and Apple offer good enough alternatives cheaper and that integrate better with their ecosystem.

Every company you named has a much higher switching cost than Dropbox. You can’t just go to a Twitter competitor because of the network effects.


> For personal users and not businesses, Google and Apple offer good enough alternatives

Not until iCloud folder sharing is available.


Hopefully that's not what Dropbox is hanging its hat on: one point release of iOS 13.


It worked out very well for the founders of all of those companies.


While I admit that I'd take the money and run too, that kind of approach when viewed from afar looks like a genteel pump and dump.


I don’t begrudge them their money. But I feel a lot more at ease depending on a company whose business model is I give them money and they give me stuff and that they are doing so profitably. That’s part of the reason that I’m a big fan of companies like Backblaze and JetBrains.

I worked as a dev lead for one company where we were our vendor's largest customer (over 60% of their revenue). I recommended that when contract renewal came up, we either stop depending on that vendor for our new implementation, or insist on them putting their code in escrow in case they were sold and the software was abandoned.


Yes it did. But the stated hopes of most of the posters on this submission seem to be that they will be a viable competitor to the big server chip manufacturers.


Casinos do well for their founders and current owners too.


These are terrible examples because they are not reality. In a rational world they would not have been able to go public with the numbers they have and no rational plan to fix them. You do not have to be profitable, but you have to have a plausible story for how you get there. For Uber and Lyft, I do not think they do. Time will tell. WeWork is an example of rationality returning after Uber and Lyft.

For tech, I would suggest VMware, Arista, and Red Hat as far more rational guides.


As the saying (usually attributed to Keynes) goes, "the market can stay irrational much longer than you can stay solvent". This is the world we live in.


> You should always build a company with the idea that you will go public

What about building a company with idea of making money?


Ampere seems to be in the same space and shipping product. Why couldn't this company?


It’s also only two years old and venture backed. Any venture backed company is by definition looking to have an exit strategy and not just trying to be a “lifestyle business”. Statistically, it’s via an acquisition.

Look no further than YC - only two YC based companies have ever gone public.


"It’s also only two years old and venture backed"

Ampere is 3 years old, though PE and not VC backed.


https://www.crunchbase.com/organization/ampere-computing

Private equity vs. VC is a distinction without a difference as far as the investors' motivations go.


Don't be a hater; exits can be of multiple types, including IPO, and that could very well make this company become a formidable competitor to the big 3 chip makers.


It's not about being a "hater". In what world will this company be able to design better server chips than their much better financed competitors? Even Apple has a better chance of designing high-performance desktop chips if they decide to move Macs to ARM, and neither Amazon nor Google can be discounted. They both have a desire to produce better server chips in-house for their own use. Come back in a year when this company is either acquired by one of the big tech companies or out of business.


Ok, so you're a pessimist.


I’m a realist. What is the percentage of startups that ever become public? Again, without doing too much research, just look at YC companies.


YC is not very successful by my measures of success.

Profitable, yes, but all their companies together are worth less than bitcoin.

Spawns companies, yes, but the total number of them that have IPOed = 2.


The thing is, YC is like a school for newbie entrepreneurs, it's not their role as a seed-fund, and you can't expect them to find you an exit. What they offer is to give you some money and tell you how to spend it. That's basically it, all the rest you are on your own, it's your company.


It is a bit like a social club - say the Lions club of SV. Socially it sounds important and people network and know the brand.

But if you ask a random person in the street, they will know what crypto is, and they may know Dropbox if they are computer literate, but they won't know YC unless they are a geek.

And for those who want to be successful by creating their own company, I have strong doubts. Profitable businesses are laughed at as "lifestyle businesses", and profitability itself is considered less important than network effects.

In the end, I'm not surprised companies like WeWork are happening, where no sane person should invest a dollar. It's like the dot-com boom again.


Your measure of success would make a lot of good companies look like failures.


So what if we measure success by profitability? Neither PagerDuty nor Dropbox is profitable.


I'm not a capitalist. Profitability is helpful under capitalism for survival, but it's a poor measure of anything other than whether one number is higher than another.

YC funds a lot of companies who do cool things that aren't very profitable. I don't know what most of them do, and that's great. So many of them serve niches I haven't even heard of. If they break even making someone's life better, then they're a success.

Imagine a world without Stripe or something like it. The current boom in membership services might not have happened. Even VC-juiced Patreon, which I use, handles payments through Stripe. How many successful companies by your measure glue their business together with companies that got their initial funding and guidance from YC?


So exactly how do you run a business as an ongoing concern if you continuously lose more money than you make?


The answer is get bought by an ad-funded behemoth or a VC-funded behemoth or struggle to get by. I didn't say better measures of success were viable under this system.

There's a reason I'm not a capitalist. Profit as the only measure of success is how we ended up in this hell world. You're welcome to only care about that, but I prefer to imagine and work toward better futures.


So did you give up all worldly goods to live in a monastery and live on the charity of others or do you work for a company that seeks to make money?


"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html

Also, please don't cross into personal attack.


Those are not the only choices.


Well, what "choice" did you make to put food on your table?


Many. Sometimes I manage to make interesting stuff that people are also willing to pay for. Those are the good ones.


So you’re providing stuff that people are willing to pay for - ie capitalism.


Sure, no one claimed otherwise.

Just that there are nuances between monastery and money-seeking-only business.


"Profit as the only measure of success is how we ended up in this hell world. You're welcome to only care about that, but I prefer to imagine and work toward better futures."

No, profit is not the cause of hell. Debt is; it's a modern form of slavery.

When you accept VCs' money, the reality (if you play by the rules) is that you agree to give your most precious resource (time and energy) to pay back your debt.

It may pay off, or may not, but you have to be very aware of your choices.


VC funding is worse than debt if you care about your business. When you borrow money, the lender doesn’t own a stake in your business. You just have to pay them back at their agreed upon terms.

Theoretically you could "buy back" your equity, but with the type of returns they want, it's almost impossible. Taking VC money is like taking a payday loan.


This is an incredibly insightful post, YC companies are overhyped CRUD apps mostly. Meanwhile bitcoin and crypto has a a far larger market cap and is reshaping the entire economy. /s


remember: corporations are abstractions, they aren’t real. human beings are going to do the work and be paid for it, corporations are simply a useful model for how we allocate the profits.

there’s nothing wrong or fake about building a company and a product to take on more risk/freedom with the intent of selling it to someone for whom it would be valuable later. it also probably pays a lot better, considering the increased risk of failure and better negotiating position if you invent and produce something truly groundbreaking.

this is how silicon valley has worked for decades.


I’m not saying it’s “wrong”, rather it’s naive to think that they will create a “competitor” to one of the major chip makers/designers or that it is the desire of their investors.

Notice I didn’t say whether it was their desire. What they want is inconsequential once you take investor money.


If it’s not viably competitive against either a potential acquirer’s business or that of their competitors, why pay money for it?


It doesn’t have to be viable “against” anyone. Amazon for instance is designing its own custom ARM chips for both servers and their network cards. They could potentially buy them.

Exponential Technology, for instance, was designing PowerPC chips that were faster than Motorola's or IBM's. After Apple got rid of the clone market, they closed down the business and started another one. In 2010, Apple acquired them to help design its custom ARM chips.


In support of your comment, I don't know much about this space but I would imagine the realities of manufacturing a CPU would make it prohibitively expensive for new entrants and highly amenable to big players buying up anyone with new ideas.


As much as I agree with you, people said the same thing about Tesla. In other words it may be an uphill battle, but every conglomerate was once a startup.


Tesla turns a profit every now and then, but they also said that they expect to have losses consistently going forward as they ramp up manufacturing.


Tesla has really only ever turned a profit through financial engineering, which is why it's never consistent. Tesla still has a long, long way to go before they don't need to keep raising capital to stay alive.

> expect to start having losses consistently

This is a very rosy view of Tesla. Tesla has always had losses consistently and will continue to do so for years. They are $13 billion in debt with only $4 billion cash, and a not insignificant amount of their cash is from full self-driving preorders they are still obligated to fulfill.

My money is still on Tesla going bankrupt.


PA Semi did it.

(And for other kinds of chips besides general purpose CPUs, fabless startups have been common for a long time)


They were a fabless semiconductor company and they were bought by Apple in 2008. That kind of proves my point....


Yep they were fabless, didn't mean to imply otherwise. They had a chip out before they were acquired.


Yikes, this is incredibly cynical.


It's not cynical, it's the truth. Why else do you think investors give money to startups?


It's possible that the end goal is acquisition or an aquihire-like situation. Even if that's the goal, it could be pretty cool — they may get to the point where they build a team that can execute and build chips with different design goals in mind, adding diversity of ideas to the industry. How cool is it to be part of a team that is capable of building their own chips?

Your tone ("building a company"), however, suggests that something useless is occurring here, that they are just playing house as opposed to creating value. That's the part that feels like coded cynicism.


No. I'm saying it's naive to think that it is their investors' intention to build a business that takes on the big chip makers. That seems to be the thought in many of the comments.

After seeing Apple design processors that are performance-competitive with Intel's except at the very high end (and that will probably make better modem chips than Intel could) by acquiring PA Semi and the remnants of Exponential Technology, and seeing Amazon design custom server and embedded chips for its own use, I think we are seeing a renaissance in chip design we haven't seen since the 80s and 90s.

Microsoft and Google are also dipping their toes in custom designed chips.


That's fair. I can see why you felt that way. The headline of the article did imply that the founders were going after Intel/AMD, when in fact the likely outcome is more modest in scope.


Google's watch products have really been held back by the lack of good chips for phones, so I would not be surprised to see Google following Apple's lead to make their own chips. This looks like a very tempting buyout if that is the case.


Why would Google care about how good chips for Android are? Google only cares about the Android ecosystem being good enough to deliver ads. Google doesn’t care about performance only ubiquity. Google doesn’t even care about operating system and security updates being delivered across their ecosystem.

It's estimated that Google only sells 3 or 4 million Pixels a year. What's the motivation for them to care about the watch market? Screens on watches are too small to display ads.


> Why would Google care about how good chips for Android are? Google only cares about the Android ecosystem being good enough to deliver ads.

Considering Google paid Apple 12 billion dollars this year to be the default search engine on iOS [0], they have a very big incentive to build a high-quality Android phone to get iOS users to switch over.

Those 3 or 4 million Pixels could have been iOS users, as I was one of those people last year when I decided to try Android again, and the Pixel was the first Android phone I have ever used that didn't suck compared to my iPhones (looking at you, Samsung Galaxy S4). With Apple's draconian enforcement of what I can run on my iOS devices and set as defaults, I think I'll even stick with Android unless it goes to shit or Apple makes something radically better.

[0]: https://fortune.com/2018/09/29/google-apple-safari-search-en...


Considering that's less than a week's worth of iPhone sales, do you really think that's going to help Google's negotiating leverage when it's time to renew their search deal with Apple?

They’ve had over a decade to create an ecosystem that people are willing to actually spend money on. But statistically, people who are willing to spend money on a high end phone, a watch, $160+ headphones, etc veer toward Apple.

Google has been selling its own branded phones and tablets for six or seven years unsuccessfully.


> Google has been selling its own branded phones and tablets for six or seven years unsuccessfully.

The Pixel 3a seems to have been relatively successful.

https://www.9to5google.com/2019/08/20/report-google-sees-near...


That's one of those famous "Bezos statistics", where Amazon brags about year-over-year growth and shows charts with no Y-axis and no actual numbers.


I agree - whenever hard numbers aren't shared you never really know (there are a couple of very rare instances where this rule is justified, but it's usually just to hide less-than-stellar sales numbers). But even so, it strongly indicates that Google has a winning formula with cheap hardware, vanilla Android, and good cameras - if they could just unfuck their messaging app ecosystem they might start to make a bigger dent against Apple.

And while Pixel vs. iPhone is not the same discussion as iOS vs Android, it's worth mentioning Android has grown over the past several years whereas iOS has largely stalled.

https://gs.statcounter.com/os-market-share/mobile/worldwide/...


I agree. But companies aren't in business to gain market share. They're in business to make a profit. The cumulative profit of all Android manufacturers has historically been around a fourth or a fifth of Apple's. The only company making money from Android's ubiquity is Google, and even they made less than $25 billion in profit from its inception until the time of the Oracle trial (it came out as part of discovery). They still pay Apple $8 billion to $12 billion a year to be the default search engine on iOS devices. That doesn't sound like "winning" to me.

But Apple also realizes that iPhone sales are stagnating. At the same time, they have auxiliary products (watches, AirPods, and services) that are all making revenues and profits that any of the Android makers would die for.


If they do only care about ubiquity, then having competitive products helps them achieve that goal. I hate wearing a watch, so I am not all that interested in a smartwatch, but I have heard on Android podcasts that even the hosts, who we can presume are big fans of Android, have stopped wearing Android watches, and some have started wearing Samsung's Tizen-based watches. Google has bought Fitbit and IP from Fossil, so clearly they are still interested in wearables.


One motivation could be to have a competitor to the Apple watch for people that really want a good smartwatch to go with their phone. Right now there is nothing Android compatible that even comes close to the Apple watch. Having something comparable and Android exclusive could help with locking people into the Android ecosystem.


Again, why does Google “want” anything that’s not going to generate significant profit? The average Android user isn’t willing to spend more than $300 on a phone (the average selling price of an Android). They definitely aren’t willing to spend $300-$500 on a watch.

The high end Android market is minuscule. The whole Android hardware ecosystem is a profitless race to the bottom.

I doubt Google "cares" if you use Android or not. If you use iOS, you still get Google ads and you still probably watch YouTube.


> google's watch products have really been held back by lack of good chips for phones

this looks made up & doesn't make sense.


It's a chicken-and-egg scenario for Android watches. No company wants to invest money in making processors for watches because there is no market, and there is no market partially because no Android manufacturer can make money selling watches. They can't really make money selling phones, either.


Seems like a very General Magic style story... I'm sure Apple will acquihire them back if successful.


While I like this idea, I think supporting a full x86-64 instruction set for high-performance applications, from scratch, is likely years out. This company will need a 10-year runway to see any market impact.


Where is x86-64 mentioned? I assumed they were building ARM chips.

In any case, they don't mention any secret sauce. Intel has strong channels, fabs, and the x86-64 ISA with massive software compatibility on the server side. Even with better performance/power, Intel can simply undercut the competition to drive them out of business. If Qualcomm's sales channels couldn't dent the server space, I am sceptical about upstarts, unless their [power, performance, cost] is significantly better than Intel's.


If Jon Masters is present, it is all but guaranteed that they are focusing on ARM server chips.


Are there any chips which support multiple ISAs? Could a single chip decode both AMD64 and ARM instruction streams? Or could the memory models not be unified so it's not possible?


Yes, there are chips that support multiple ISAs, although typically one ISA is more native. At least, I believe NVIDIA's Denver qualifies [1]. Also, I believe some Itanium chips had a (slow) x86 hardware decoder that translated x86 instructions into the native EPIC instruction set [2].

However, unifying the memory models sounds like a performance disaster for the ARM code. And having multiple decoders in hardware (as well as having enough of them) doesn't seem like a justifiable use of silicon real estate.

[1] https://en.wikipedia.org/wiki/Project_Denver

[2] https://www.techsupportalert.com/pdf/r1048.pdf


The x86 hardware decoder on Itanium turned out to be slower than a JIT, and was removed from hardware pretty quickly.

Also note that the x86 memory model is a perfectly valid implementation for an ARM chip, as it's a superset of the ARM memory model constraints. And the performance impact isn't that big really.
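
To make the superset point concrete, here's a small C11 example (codegen described from memory for mainstream compilers, so treat the details as approximate):

    #include <stdatomic.h>

    atomic_int ready;
    int data;

    void publish(int v)
    {
        data = v;
        /* Release store. On AArch64 this needs an ordered store
           (stlr) or a barrier; on x86-64 it compiles to a plain mov,
           because TSO already keeps earlier stores ordered before it.
           A core implementing the stronger TSO model can therefore
           treat many ARM barriers as no-ops and still be
           architecturally correct; it just pays for ordering the ARM
           code never asked for. */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }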


Yes, I agree with you. Many of the barrier instructions will end up as internal NOPs. Performance wouldn't be terrible. Power may suffer.


Transmeta did this. It emulated the x86 ISA, though the actual ISA was not x86. They did a demo of it translating JVM bytecode directly to its own VLIW ISA with no x86 running at all.


I was also immediately thinking about Qualcomm's server processor business. From this point of view, I don't see success for this startup: https://www.tomshardware.com/news/qualcomm-server-chip-exit-...


A lot of the press has been talking about how they'll be taking on Intel and AMD but that's just because those are the only server chip makers their readers are familiar with. Don't take it literally.


"Nobody" cares about x86 anymore except those married to the platform.

Even Windows runs on Arm now

For servers they care about price, performance, support and volume/availability

If you can offer a non-x86 platform that saves power/cooling for the big datacenter users (FAANG, basically), they will be willing to give you money.


>"Nobody" cares about x86 anymore except those married to the platform.

You only get to read this kind of stuff in HN.


Windows runs on ARM by a very generous definition of “run”.

https://www.techspot.com/review/1599-windows-on-arm-performa...


>For servers they care about price, performance, support and volume/availability

ARM servers are extremely awful at all of these. Most ARM servers perform poorly compared to x86 servers; they have less software support; they require you to cross-compile or switch your development machines to ARM as well; and actually getting an ARM server is incredibly difficult, because almost no one offers them, and even the providers who offer some ARM instances don't give you the same variety as with x86. The worst part is that switching to ARM involves extra costs for your expensive developers in exchange for hardware that probably was already too cheap to care about, because RAM has a much bigger impact on server costs than the CPU.


I don't disagree in essence with what you're saying.

But if they can get an ARM server that checks those boxes the game may change.

> involves extra costs for your expensive developers

Which expensive developers? Web technologies run the same on ARM as on x86 (may I say it's even easier than developing for mobile; Node/Java/PHP/Python/etc. run the same on ARM, and Node might even benefit from optimizations done for ARM Chromium).


TechCrunch has more details than Reuters does. Such as confirming ARM ISA, but custom arch. https://techcrunch.com/2019/11/15/three-of-apple-and-googles...



Nuvia, like the ring ;)


Any news from Mill computing?


I wonder what Apple will think once they see their IP exfiltrated like this.


I’m sure that would be the first thing the VCs would ask about before putting $50m into the business.

The three worked on mobile chips at Apple and Google, and are now focusing on enterprise data centres. So it’s not exactly the same.

> That said, what’s interesting is that while the troika of founders all have a background in mobile chipsets, they are indeed focused on the data center broadly conceived (i.e. cloud computing), and specifically reading between the lines, to finding more energy-efficient ways that can combat the rising climate cost of machine learning workflows and computation-intensive processing.

> The company’s CMO did tell me that the startup is building “a custom clean sheet designed from the ground up” and isn’t encumbered by legacy designs. In other words, the company is building its own custom core, but leaving its options open on whether it builds on top of ARM’s architecture (which is its intention today) or other architectures in the future.


It can simply be a form of risk management by Dell and the VCs. How likely is it that Apple will sue them? How likely is the company to succeed and have to deal with the issue?

It's rarely black or white.

We have seen it with the case of Uber and the self-driving cars.

In their own words, they said they will reuse knowledge from ARM and Apple (including Desktop CPUs).

As a public investor in Apple, yes, I think it's a very legitimate question.


Of course it's a legitimate question; it's an obvious question, which is why the CMO mentioned clean-room design and the founders having worked on a combined 100 patents. VCs do plenty of due diligence; they don't invest blindly when there's such an obvious risk.

The Uber case was quite rare and egregious. It's not as simple as working on similar stuff.

Uber directly acquired a company (well after raising hundreds of millions) made by someone who had worked at Google on the exact same thing, which is far riskier than working on a completely different product category you're building from scratch. But even then, the case against Uber heavily relied on the fact that he stole IP directly from company computers and used it as the base of the product.


What specifically makes you think they pulled a Levandowski?




