
The Economics of ASICs: At What Point Does a Custom SoC Become Viable? (2019) - amelius
https://www.electronicdesign.com/technologies/embedded-revolution/article/21808278/the-economics-of-asics-at-what-point-does-a-custom-soc-become-viable
======
Noone10101
This article is just an advert. I have worked on practically the same project
as the second example with a different manufacturer and can tell you it is a
lot more expensive than 5 million. This project was 90nm, 3 years ago, so
comparable to 55nm nowadays, and minus the MCU, so there were no licensing
costs. NRE of 20 million for design, development, and bringing it to
production. And I think it ran over budget, with only 1 silicon revision. With
a final component cost of $1.20, it ends up cheap compared to the cost of the
individual components, but the up-front costs are much larger than they are
suggesting. They only list mask sets here, not mentioning engineering time for
12+ months and design tool licenses, costing upwards of $1 million per seat.
Then on top of that there is the PDK from the manufacturer. I would love to
see the full cost breakdown and see where it becomes viable.

~~~
StormChaser_5
I work at a company that does these sorts of custom ASICs for a variety of
customers all the time. $5M for the second example is a bit low but in the
right ballpark. Even allowing for 2 mask sets (mistakes happen) and $1-2M for
an off-the-shelf Bluetooth IP, it would be difficult to see this going much
over that.

I'd estimate a team size peaking at < 10 over 12 months to go from initial
specification discussion to GDS-II. Another $1-1.5M for qual and support
through production plus silicon validation.

Put that all together and <$7M sounds to me like a good estimate and I've seen
a lot of more complex projects come back for less.
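
Back of the envelope, with made-up but plausible round numbers (mine, not the
article's), it sums to roughly that:

    # Illustrative NRE sketch for a small SoC at a mature node (all figures assumed)
    engineers        = 10            # peak team size
    months           = 12            # initial spec discussion to GDS-II
    cost_per_eng_mo  = 20_000        # fully loaded USD per engineer-month, assumed
    design_labour    = engineers * months * cost_per_eng_mo   # 2.4M
    mask_sets        = 2 * 500_000   # two mask sets at an older node, assumed 500k each
    bluetooth_ip     = 1_500_000     # off-the-shelf BT IP licence, middle of $1-2M
    qual_and_support = 1_250_000     # qual, silicon validation, production support
    total = design_labour + mask_sets + bluetooth_ip + qual_and_support
    print(f"${total:,}")             # $6,150,000 -- comfortably under $7M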

Not wanting to be a gobshite here but how did you manage to spend >$20M for
something like this? It sounds like you were being seriously ripped off if you
were paying $1M per engineer for design tools - you might want to push your
tool vendors on that next time negotiations come round.

~~~
Noone10101
I agree, there was probably some wastage on our project, but I'm only a lowly
engineer so can't really comment on high-level decisions. There were a lot of
engineers working on the project, wages tend to drive costs up quickly, and
all the IP was developed from scratch. I don't work there anymore; I moved to
a smaller, more nimble outfit. Thanks for your insight.

------
BooneJS
Sure they’re expensive to design and make, but with appropriate volume or
margin you’ll get the lowest-power device possible for your application.
General SoCs are just that: general. Moore and Dennard stopped helping, so
they’re missing what you need and have what you don’t, requiring a SW stack
that ends up working around it all.

Example from HPC 10-15 years ago: Opteron had a fast start but gen2 was really
late. Intel’s Sandy Bridge Xeon was their first part with PCIe 3.0, which came
a minimum of 6-9 months after adapter cards supported it. Both products caused
headaches for interconnect and system builders because they couldn’t ship with
parts that didn’t exist.

Apple can of course control the execution of their A series, but they can also
be first to AirPods and a smartwatch that integrates Siri. This from a company
that famously waited for a market to look like it was going to take off before
getting in, letting other companies shake out the issues with merchant silicon
before they’d integrate. Now they don’t have to wait.

------
mNovak
Does anyone know if anything ever came of the 'Bespoke Processor' [1]
approach? Essentially you run your application on a simulated MSP430, and it
prunes out all the gates you don't use. Seemed promising, but then I don't
actually know this field.

[1]
[http://people.ece.umn.edu/~luo../jsartori/papers/isca17.pdf](http://people.ece.umn.edu/~luo../jsartori/papers/isca17.pdf)
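
To caricature the idea in a few lines of Python (a toy sketch of the concept
only, not the paper's actual gate-level toolflow):

    # Toy sketch: exercise the application on a simulated netlist, record which
    # gates ever toggle across the inputs of interest, and cut away the rest.
    def prune_unused_gates(netlist, simulate, input_vectors):
        """netlist: {gate_name: gate_description}
        simulate(netlist, vec): runs the application on one input vector and
        returns the set of gate names that toggled."""
        toggled = set()
        for vec in input_vectors:
            toggled |= simulate(netlist, vec)
        return {name: gate for name, gate in netlist.items() if name in toggled}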

~~~
perlgeek
Sounds like a horrible idea if your application ever needs to receive an
update. Which is basically 100% of applications that aren't abandoned.

~~~
WrtCdEvrydy
Ship new processor with update?

~~~
perlgeek
Man, getting software updates installed in a timely manner is already a
problem for large parts of industry and government/administration.

No need to make it even harder by requiring a hardware update at the same
time.

Don't even get me started about the incentives for the vendor: way easier to
sweep a security bug under the rug than roll out an update that requires a new
processor to be manufactured and rolled out.

Or about the economic and ecological impact. What do you do with the old
processors? Just throw them out because they were optimized for an old,
insecure version of the application?

~~~
WrtCdEvrydy
> What do you do with the old processors? Just throw them out because they
> were optimized for an old, insecure version of the application?

We do the same shit with tablets and phones and no one bats an eye... instead
of updating an existing phone and just charging for an update, we'd rather
release a new $1000 phone every year and just 'recycle' the previous one.

------
rathel
Japanese companies seem to have a boner for custom ASICs. I never understood
that. For example Roland uses custom chips [1] for their synthesizers and
effects, where they could use almost any DSP on the market.

Likewise printer manufacturers go for a huge SoC where they could get an
application processor that suits their needs and couple it with a specialized
printer chip. [2]

What's the deal?

[1]
[https://www.flickr.com/photos/psychlist1972/37188679832](https://www.flickr.com/photos/psychlist1972/37188679832)

[2] [https://news.synopsys.com/2020-05-21-Fuji-Xerox-Adopts-Synop...](https://news.synopsys.com/2020-05-21-Fuji-Xerox-Adopts-Synopsys-ZeBu-Server-for-Multi-Function-Printer-SoC)

~~~
StormChaser_5
Because it's cheaper.

An off-the-shelf AP is often going to have a lot of functionality you are not
going to use but are still going to be paying for. It's going to consume power
driving signals back and forth to the specialised printer chip. It's going to
cost money to control inventory and deal with two chips when one could do the
job. It may be end-of-lifed at an awkward time, forcing you to redesign or
store a lot of inventory.

And it really isn't that much more expensive to develop one big SoC if you are
already developing a specialised chip to go with the AP.

------
henry_epic
The article makes several huge leaps of faith, and the intended audience is
definitely not the typical Y Combinator crowd. To clarify:

1. All his examples are large companies (e.g. Tesla & Amazon) with internal
demand for components where a custom ASIC could provide some cost
efficiencies. For a startup to compete for such business, they would have to
be proven, well capitalized, and/or have unique IP. -- It is hard for a
startup to land business with these giants. --

2. He ignores other SoCs as a potential alternative. The IoT segment is full
of standard parts, and it would be a challenge for a chip design house to
compete unless the company contracting the development has high enough volumes
to offset the NRE cost. -- see point #1 above --

3. An SoC is generally bad business for startups. There is a lot of IP that
needs to be aggregated, and your value-add is small unless it is something
fundamental. And there is high risk in integrating someone else's IP, as it is
not in your direct control. If you need cutting-edge IP like DDR6 memory
controllers, SERDES interfaces, or PCIe Gen4, it's usually out of reach unless
your startup has a mid-7-digit bank account. -- Sourcing IP can be expensive
and time-consuming. License terms are typically not favourable to small
startups and require "large" upfront payments --

4. The examples cited have long product life cycles. Most startups excel at
greenfield or new-market opportunities where the risk is higher and life
cycles are much shorter.

5. I would not do a design in nodes larger than 45nm today. The PPA
(performance, power, and area) differences are small compared to your
engineering and EDA tool costs.

Basically the article is an advertorial, self-serving and not appropriate
advice for startups to follow. I agree with Noone10101's comments below.

------
samstave
Stupid question: has the pandemic made creating custom/any HW in places,
specifically Shenzhen, even more economical/cheaper?

If I have a product definition, would I be able to get prototypes for, say,
$1000, assuming it's a fairly (seemingly) simple product?

Edit:

As some will probably ask “define simple”:

- I want what is in effect a body cam, but with multiple cameras providing a
360 degree view, ideally affixed to the “button” location on a baseball cap,
or a small pole on a backpack.

I actually thought of this years before GoPro existed but I couldn't convince
any of my HW friends in Silicon Valley of its interest...

There are many iterations on this; for example, one iteration I would like is
effectively what has been developed into the “Google hike view/walk view”.

Lidar on a backpack for mapping out your walk/hike in 3D...

Regardless, the most simplistic being a multi-cam 360 camera on a hat...

Could this be something done more cheaply in the pandemic climate?

~~~
LeifCarrotson
You've got no idea what simple or complex means.

If you had asked for 3 or 4 cameras with 5mm lenses on 25x25mm base boards,
connected by MIPI ribbon cables under your collar to a few Raspberry Pis and a
big battery in a backpack, you could do that today with off-the-shelf hardware
for half your budget. They won't be synchronized; you can do that after the
fact in your video editor. Getting someone to hold your hand and configure it
for you will still cost the rest of your budget, but it's job done.

Or buy an off-the-shelf 360 degree action camera with included mounting pole
and just use that.

Miniaturizing it into an ASIC and custom optics? That's man-years of effort
and hundreds of thousands of dollars.

And both DARPA and the private sector have put millions to billions into
miniaturizing LIDAR and reducing its power requirements, and it's still
impossible at the moment to fit it into the button of a baseball cap.

It's not even necessary to ask whether the pandemic makes these tasks a few
percent cheaper. If you have an idea, get some basic information about the
problem domain first.

~~~
dang
Please don't be a jerk in HN comments. If you know more than others, it's
great to provide correct information, but please don't put other people down.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
catherd
I typically agree with your calls on being a jerk vs. not, but LeifCarrotson
is being pretty factual here.

Hardware is incredibly expensive and time consuming to create from scratch in
ways programmers usually don't appreciate. If the GP is asking those questions
after thinking about the problem for years, they clearly need to be told they
have no idea what the problem is.

In the hardware business we are constantly approached by people who have
little more than a vague idea and a hobby-money budget. I've seen what happens
when you encourage them, and it usually involves fucking up their retirement
if you let them get too far.

Much more humane to see it for what it is and shoot the idea down as fast as
possible, or at least help them understand they probably need VC level funding
to get anywhere.

If someone has been simmering on an idea for years and can't even formulate a
clear explanation of the problem he wants to solve (so that people can explain
what's required beyond "it's really expensive, don't do it"), it's clearly not
sinking in and there's no product there, just a guy who wants to play with
technology and waste manufacturers' time while doing it.

~~~
dang
I certainly wouldn't argue against any of that. But it's easy to make such a
case without putting someone else down. In fairness, though, the bits I was
objecting to in LeifCarrotson's comment were pretty borderline.

There's a long long tail of weirdness and variation in any large population
(such as an internet forum like HN), so it's best to give people the benefit
of the doubt. It can easily seem like someone else is doing one thing when in
reality they're doing another. I feel like half of the moderation we have to
do in comments boils down to this.

~~~
catherd
Not disagreeing, just adding more info:

I think a lot of people on the manufacturing side react in a confrontational
way because it's disrespectful to a manufacturer when someone wants to use the
infrastructure they've sunk years into as a playground with no hope of
actually making it to production.

If a manufacturer takes on 10 projects in a year and none of them do anything
but endless prototyping they will quickly go out of business. Unlike in
software where you make your money in the design phase.

If someone has been trying to make a hardware project happen for years and
they still don't understand this dynamic, then there's not just an
understanding issue, there's a track record of being a drain on other people's
resources, or at least a willingness to be one. Establishing a
relationship with everyone needed to make hardware has a much higher drain on
partner resources than would happen if a consumer called a help line or a
programmer read some API documents. Just making a quotation can take a week or
more for a non-trivial hardware project, and prototyping is generally not that
profitable either, if at all.

An analogy: If someone came in here and said they wanted to hire CS grad
students to work on Blockchain (but no clear idea of a specific problem)
during the shutdown and was hoping they could pay them half of what they make
on their already lower than market rate research stipend but still take
advantage of the research topics they are involved in, I think expressing a
certain amount of "hey, those are real people you are hoping to screw over"
would be appropriate.

------
syedkarim
What is the largest node/process that is currently available? Does that large
size mean the lowest cost for development? I assume the trade-off is that
fewer chips are produced from a wafer, so lower development costs come with
higher unit costs.

~~~
zozbot234
> I assume the trade is that fewer chips are produced from a wafer

Not really true once you scale down to a minimum viable size for your
application, since you'll need plenty of area regardless for bonding pads,
vias, interconnects etc. Nowadays the partial failure of Dennard scaling means
that smaller chips may also be a lot harder to cool; it may be more
advantageous to try and spread out logic in a way that might be a bit
"wasteful" of area, if this makes the thermals more manageable. The real
tradeoff wrt. very coarse nodes is performance.
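
To illustrate with made-up numbers why a pad-limited die doesn't get cheaper
at a finer node, a crude dies-per-wafer sketch (ignoring edge loss, scribe
lines and yield):

    import math

    def gross_dies(wafer_diameter_mm, die_w_mm, die_h_mm):
        # naive gross die count: wafer area / die area
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        return int(wafer_area / (die_w_mm * die_h_mm))

    # If the pad ring and bonding pads fix the die at roughly 2x2mm, the count
    # is the same whether the logic underneath is drawn at 180nm or 55nm:
    print(gross_dies(200, 2.0, 2.0))   # ~7850 gross dies either way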

------
asah
Naive q: does RISC-V potentially impact any of this?

~~~
zozbot234
In a way; if you're doing a custom ASIC at an older process node as discussed
in the article, there's basically no real reason _not_ to use RISC-V (perhaps
RV32E when applicable for minimized area requirements) for your logic. (16-bit
and 8-bit logic might also be viable but is really quite constrained these
days.) But RV is only one part of what might become a thriving open-hardware-
blocks ecosystem over time, one that would be especially compelling for these
sorts of designs.

~~~
milesvp
I've been watching RISC-V for the last year, and for me it represents a coming
tipping point that I've seen in other tech areas in the past. Having an open
standard that people are rallying around is a game changer. I don't know how
it affects ASICs specifically, but it seems to be helping open up FPGA
development and tooling, making it easier for people outside big established
companies to work. I see chip design getting easier (or at least more
approachable) in the next 5 years, and I think it will only accelerate from
there.

Mind you, I may have rose-colored glasses on. I've been dreaming of JITs all
the way down to the hardware layer ever since I first heard of an FPGA that
could flash itself in a single clock cycle over a decade ago...

------
bsder
tl;dr: 1 million units

~~~
mNovak
$2M component costs, actually
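
Either way it's the same break-even arithmetic: the per-unit BOM saving has to
pay back the NRE. A toy sketch with made-up numbers (the $1.20 unit cost is
from the thread above, the rest is assumed):

    nre            = 5_000_000   # up-front design cost in USD, assumed
    discrete_bom   = 3.20        # cost of the discrete parts being replaced, assumed
    asic_unit_cost = 1.20        # finished custom SoC unit cost
    break_even_units = nre / (discrete_bom - asic_unit_cost)
    print(f"{break_even_units:,.0f} units")   # 2,500,000 units at these numbers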

