I'd estimate a team size peaking at < 10 over 12 months to go from initial specification discussion to GDS-II. Another $1-1.5M for qual and support through production plus silicon validation.
Put that all together and <$7M sounds to me like a good estimate, and I've seen a lot of more complex projects come back for less.
Not wanting to be a gobshite here, but how did you manage to spend >$20M on something like this? It sounds like you were being seriously ripped off if you were paying $1M per engineer for design tools - you might want to push your tool vendors on that next time negotiations come round.
Though, I admit, making any IC for under $1M requires really knowing what you are doing. It's not something for a team of green engineers, or engineers whose only experience is doing cookie-cutter SoCs from hard macros.
EDA CAD tools are infuriatingly expensive. You can get a chip run on an older node (180nm or 250nm) for <$100K. Good luck finding a set of EDA CAD tools for under that per year.
The problem is that most of the "interesting" circuits for old tech nodes have a significant analog or RF piece--generally either ultra-low power (nanoamps), higher voltages (15V+), or higher frequencies (2GHz+).
Both the simulation models and the tools to extract parasitics are extremely weak (or non-existent) on the open-source front for analog and RF circuitry.
Anything based around Magic has to run on an extremely simplified set of rules so that the tiling and stitching mechanisms it uses don't get upset.
DRC and extraction are hard. They require line-sweep geometry engines of fairly significant sophistication, and extraction additionally requires some notion of the third dimension for matching and/or analysis.
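To make the line-sweep point concrete, here's a toy sweep-line minimum-spacing check over axis-aligned rectangles in Python. A hedged sketch only (the function name and rectangle format are made up): a real DRC engine handles all-angle geometry, corner-to-corner rules, and hierarchy, and keeps the active set in an interval tree rather than a flat list.

    # Toy sweep-line spacing check over axis-aligned rectangles
    # (x0, y0, x1, y1) on a single layer. Illustrative only.

    def spacing_violations(rects, min_space):
        violations, active = [], []
        for r in sorted(rects, key=lambda r: r[0]):  # sweep by left edge
            x0, y0, x1, y1 = r
            # Retire shapes whose right edge is already more than
            # min_space behind the sweep line; they can't violate.
            active = [a for a in active if a[2] + min_space > x0]
            for a in active:
                # Gap along each axis (0 if the shapes overlap/abut).
                dx = max(x0 - a[2], a[0] - x1, 0)
                dy = max(y0 - a[3], a[1] - y1, 0)
                # Abutting/overlapping shapes are connectivity, not a
                # spacing issue, so require a strictly positive gap.
                if max(dx, dy) > 0 and dx * dx + dy * dy < min_space ** 2:
                    violations.append((a, r))
            active.append(r)
        return violations

    # Two rectangles with a 2-unit gap against a 5-unit rule -> one hit:
    print(spacing_violations([(0, 0, 10, 10), (12, 0, 20, 10)], 5))

Even this toy goes O(n^2) in dense regions; making it fast, hierarchical, and parallel across billions of shapes is exactly the hard part.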
The problem is that Cadence will donate tools to practically any school but will take your firstborn if you're a company--and Cadence are one of the better companies in the EDA space. This cuts off any incentive for someone in academia to create a new VLSI tool.
Note the accepted papers at DAC 2020:
Not a single mention of "DRC", "rule", or "extract." Even simulators are thin on the ground unless you include "quantum". You would think that extraction, DRC, RF simulation, etc. are all solved tasks, right? (I assure you that algorithms in these spaces that can exploit massive parallelism are quite rare and are very difficult to implement well) :(
We should be living in the time of massive GPU and cloud acceleration of these tasks, and yet ... nothing.
This tells you what research is getting funded.
(Edit: Sorry, Largo, for some reason your comment downthread isn't getting a reply button...)
Edit: Magic, in this context, refers to the VLSI layout tool. Most open source EDA systems default to Magic as the thing that you use to draw/interpret physical geometry. This is good code reuse, but bad in that you inherit all of its limitations.
Edit: I wasn't even aware of https://www.dac.com/ -> one more fractal in the Tsundoku stack
Edit: never mind :-)
Example from HPC 10-15 years ago: Opteron had a fast start, but gen 2 was really late. Intel's Sandy Bridge Xeon was their first part with PCIe 3.0, arriving a minimum of 6-9 months after adapter cards supported it. Both products caused headaches for interconnect and system builders because they couldn't ship with parts that didn't exist.
Apple can of course control the execution of their A series, but they can also be first to AirPods and a smart watch that integrates Siri. This from a company that famously waited for a market to look like it was going to take off before getting in, because they let other companies shake out the issues with merchant silicon before they'd integrate. Now they don't have to wait.
No need to make it even harder by requiring a hardware update at the same time.
Don't even get me started about the incentives for the vendor: it's way easier to sweep a security bug under the rug than to ship an update that requires a new processor to be manufactured and rolled out.
Or about the economic and ecological impact. What do you do with the old processors? Just throw them out because they were optimized for an old, insecure version of the application?
We do the same shit with tablets and phones and no one bats an eye... instead of updating an existing phone and just charging for the update, we'd rather release a new $1000 phone every year and just 'recycle' the previous one.
Likewise printer manufacturers go for a huge SoC where they could get an application processor that suits their needs and couple it with a specialized printer chip. 
What's the deal?
An off-the-shelf AP is often going to have a lot of functionality you are not going to use but are still paying for. It's going to consume power driving signals back and forth to the specialised printer chip. It's going to cost money to control inventory and deal with two chips when one could do the job. And it may be end-of-lifed at an awkward time, forcing you to redesign or to stockpile a lot of inventory.
And it really isn't that much more expensive to develop one big SoC if you are already developing a specialised chip to go with the AP.
(someone with more experience feel free to give more detail or to correct me)
1. All his examples are large companies (e.g. Tesla & Amazon) with internal demand for components where a custom ASIC could provide some cost efficiencies. For a startup to compete for such business, they would have to be proven, well capitalized, and/or have unique IP.
-- It is hard for a startup to land business with these giants. --
2. He ignores other SoCs as a potential alternative. The IoT segment is full of standard parts, and it would be a challenge for a chip design house to compete unless the company contracting the development has high enough volumes to offset the NRE cost.
-- see point #1 above --
3. SoC is generally a bad business for startups. There is a lot of IP that needs to be aggregated, and your value-add is small unless you have fundamental value-add. There is also high risk in integrating someone else's IP, as it is not under your direct control. If you need cutting-edge IP like DDR6 memory controllers, SERDES interfaces, or PCIe Gen4, it's usually out of reach unless your startup has a mid-7-digit bank account.
-- Sourcing IP can be expensive and time-consuming. License terms are typically not favourable to small startups and require "large" upfront payments --
4. The examples cited have long product life cycles. Most startups excel at greenfield or new-market opportunities, where the risk is higher and lifecycles are much shorter.
5. I would not do a design in nodes larger than 45nm today. The PPA (Performance, power and area) differences are small compared to your engineering and EDA tools cost.
Basically the article is an advertorial: self-serving and not appropriate advice for startups to follow. I agree with noone10101's comments below.
If I have a product definition, would I be able to get prototypes for, say, $1000, assuming it's a fairly (seemingly) simple product?
As one will probably ask me to “define simple”:
- I want what is in effect a body cam, but with multiple cameras providing a 360-degree view, ideally affixed to the “button” location on a baseball cap, or a small pole on a backpack.
I actually thought of this years before GoPro existed, but I couldn't convince any of my HW friends in Silicon Valley of its interest...
There are many iterations on this; for example, one iteration I would like is effectively what has been developed into the “Google hike view/walk view”.
Lidar on a backpack for mapping out your walk/hike in 3D...
Regardless, the most simplistic version being a multi-cam 360-degree camera on a hat....
Could this be something done more cheaply in the pandemic climate?
How cheap do you think engineers will work? Even if they could pull it off with 1000 man hours, do you really expect them to charge $1/hr? Your estimates are off by many orders of magnitude.
But what you’re describing (Minus LIDAR) is available off the shelf from multiple action camera vendors. We’ve been using 360 cameras in the action sport world for years now.
This takes $500K+ of investment. You need expensive tooling to develop the design and license any commercial IP. You have to pay the engineers to do the design and preproduction verification work. You have to pay the fab to make the masks, make the chips, and test them. Then you have to iterate to deal with problems.
You could take the raw video from these and run it through a SLAM algorithm (simultaneous localization and mapping) to build a 3-D map of the environment.
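As a hedged sketch of the first step of such a pipeline -- not a full SLAM system; the file names and camera intrinsics below are placeholders -- here's how you'd estimate camera motion between two frames with OpenCV:

    import cv2
    import numpy as np

    # Assumed intrinsics (focal length, principal point); in a real
    # pipeline these come from calibrating the actual camera.
    K = np.array([[700.0,   0.0, 640.0],
                  [  0.0, 700.0, 360.0],
                  [  0.0,   0.0,   1.0]])

    # Two consecutive frames extracted from the video (placeholder names).
    img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    # Detect and match ORB features between the frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Fit the essential matrix with RANSAC to reject bad matches, then
    # recover the relative rotation and (unit-scale) translation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print("rotation:\n", R, "\ntranslation direction:\n", t)

A real SLAM system chains this over thousands of frames and adds triangulation, loop closure, and global optimization, which is why you'd normally reach for an existing package (ORB-SLAM, COLMAP, etc.) rather than rolling your own.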
Those cameras may not be as small as what you envision, but you'd have trouble making anything smaller yourself.
If you want to carry around a lidar unit, I have some bad news for you regarding cost and weight...
Like I said, I wanted one of these years ago, and these things didn't exist then....
But I think I'll buy one now.
The simplest microcontroller prototypes start at $6k-$10k if you want anything beyond a Kickstarter project.
I once worked at an LED lighting company; the simplest analog IC ended up at $3M, up from a $1.2M budget, after cost overruns and a few failed tapeouts.
To make any IC under $1M requires you to really know what you are doing. It's not something a startup with green, inexperienced engineers can do.
Full custom ASICs get much higher performance, lower power, and lower per-unit cost, but carry something like a minimum of $500,000 in NRE costs.
If you had asked for 3 or 4 cameras with 5mm lenses on 25x25mm base boards, connected by MIPI ribbon cables under your collar to a few Raspberry Pis and a big battery in a backpack, you could do that today with off-the-shelf hardware for half your budget. They won't be synchronized, but you can fix that after the fact in your video editor (see the sketch below). Getting someone to hold your hand and configure it for you will still cost the rest of your budget, but it's job done.
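For the after-the-fact sync, one common trick is to cross-correlate the cameras' audio tracks. A minimal sketch, assuming each clip's audio has been exported to mono WAV at the same sample rate (the file names are placeholders):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    rate_a, a = wavfile.read("cam_a.wav")
    rate_b, b = wavfile.read("cam_b.wav")
    assert rate_a == rate_b, "resample to a common rate first"

    # Zero-mean float copies so DC offsets don't dominate the peak.
    a = a.astype(np.float64) - np.mean(a)
    b = b.astype(np.float64) - np.mean(b)

    # Cross-correlation via FFT; the peak index gives the sample lag.
    corr = fftconvolve(a, b[::-1], mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)

    # lag > 0 means cam_a started recording earlier; shift clip A
    # forward by that many samples (or clip B back) to line them up.
    print(f"offset: {lag / rate_a:.3f} seconds")

That gets you within a frame or so; if you need tighter sync for the stitch, align on a visual event like a clap or a flash.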
Or buy an off-the-shelf 360 degree action camera with included mounting pole and just use that.
Miniaturizing it into an ASIC and custom optics? That's man-years of effort and hundreds of thousands of dollars.
And both DARPA and the private sector have put millions to billions into miniaturizing LIDAR and reducing its power requirements, and it's still simply impossible at the moment to fit it into the button of a baseball cap.
It's not even necessary to ask whether the pandemic makes these tasks a few percent cheaper. If you have an idea, get some basic information about the problem domain first.
Hardware is incredibly expensive and time-consuming to create from scratch, in ways programmers usually don't appreciate. If the GP is asking those questions after thinking about the problem for years, they clearly do need to be told they have no idea what the problem is.
In the hardware business we are constantly approached by people who have little more than a vague idea and a hobby-money budget. I've seen what happens when you encourage them, and it usually involves fucking up their retirement if you let them get too far.
Much more humane to see it for what it is and shoot the idea down as fast as possible, or at least help them understand they probably need VC level funding to get anywhere.
If someone has been simmering on an idea for years and can't even formulate a clear explanation of the problem he wants to solve (so that people can explain what's required beyond "it's really expensive, don't do it"), it's clearly not sinking in and there's no product there, just a guy who wants to play with technology and waste manufacturers' time while doing it.
There's a long long tail of weirdness and variation in any large population (such as an internet forum like HN), so it's best to give people the benefit of the doubt. It can easily seem like someone else is doing one thing when in reality they're doing another. I feel like half of the moderation we have to do in comments boils down to this.
I think a lot of people on the manufacturing side react in a confrontational way because it's disrespectful to a manufacturer when someone wants to use the infrastructure they've sunk years into as a playground with no hope of actually making it to production.
If a manufacturer takes on 10 projects in a year and none of them do anything but endless prototyping, they will quickly go out of business - unlike in software, where you make your money in the design phase.
If someone has been trying to make a hardware project happen for years and they still don't understand this dynamic, then there's not just an understanding issue; there's a track record of being a drain on other people's resources, or at least a willingness to be one. Establishing a relationship with everyone needed to make hardware drains far more partner resources than a consumer calling a help line or a programmer reading some API documents. Just producing a quotation can take a week or more for a non-trivial hardware project, and prototyping is generally not that profitable either, if at all.
An analogy: If someone came in here and said they wanted to hire CS grad students to work on Blockchain (but no clear idea of a specific problem) during the shutdown and was hoping they could pay them half of what they make on their already lower than market rate research stipend but still take advantage of the research topics they are involved in, I think expressing a certain amount of "hey, those are real people you are hoping to screw over" would be appropriate.
Not really true once you scale down to a minimum viable size for your application, since you'll need plenty of area regardless for bonding pads, vias, interconnects etc. Nowadays the partial failure of Dennard scaling means that smaller chips may also be a lot harder to cool; it may be more advantageous to try and spread out logic in a way that might be a bit "wasteful" of area, if this makes the thermals more manageable. The real tradeoff wrt. very coarse nodes is performance.
The IP licensing cost goes to zero when using RISC-V, and if the company already has a design force set up for MCU dev, it could be a big deal (if RISC-V works out its architectural issues).
Comparing this to ASICs still has to take into account what all of the other top-level comments in this thread have discussed. It still doesn't answer the question of MCU vs ASIC (vs. FPGA), but it does add a significant cost reduction to the mix if an MCU is in the running.
Mind you, I may have rose colored glasses on. I've been dreaming of JITs all the way down to the hardware layer ever since I first heard of an FPGA that could flash itself in a single clock cycle over a decade ago...