Darpa invests $100M in a silicon compiler (eetimes.com)
415 points by adapteva 9 months ago | 257 comments

"Most importantly, we have to change the culture of hardware design. Today, we don’t have open sharing … "

This, to the 100th power.

The culture in the EDA industry is stuck in the 1950s when it comes to collaboration and sharing; it's very frustrating for newcomers and people who want to learn the trade.

As someone pointed out in another hardware-related HN thread, what can you expect from an industry that is still stuck calling a component "Intellectual Property"?

The un-sharing is built into the very names used to describe things.

It's not just the EDA industry, it's almost all of hardware including most of the embedded software people. Yeah, I get it - I too am sometimes stuck using some old proprietary compiler with a C standard older than some of the people I work with - but come on. On my last job I used a Qualcomm radio and they ended up giving me close to a thousand zip files of driver revisions going back a decade because their build process kept screwing up the output source code. All it took was running an open source static analysis tool and 200 man-hours of junior developer time to fix the root causes of 90% of their bugs - for a product that has made billions of dollars (and I'm not talking about the GSM/4G chips with crazy standards that require tons of real R&D).

You read that right. Their build system outputs source code, generated from another C code base, using macros to feature-gate bug fixes depending on who the customer is. The account managers would send a list of bugs that a given client had experienced and the backoffice engineers would make a build that fixed only those bugs.

Forget about collaboration and sharing. They haven't even figured out the basic business processes that many software engineers take for granted.

> the backoffice engineers would make a build that fixed only those bugs

Makes perfect business sense! Those customers didn't pay to get those bugs fixed, so why should they get the fixes??? /s

Holy moly.

This decision was likely not made to squeeze out more money, but because of bad quality. If any bugfix might just as well introduce two new issues, you don't deliver a fix that isn't needed. The whole "if it ain't broke, don't fix it" saying has some of its roots in that environment, maybe even all of them.

This approach is wrong on many levels, but fixing the real cause is often a bit more work than "a hundred junior dev hours". In the meantime, you have to deliver to customers.

I think a reasonable explanation may be that other customers possibly have come to unknowingly rely on some of the bugs. Shipping a fix of all the bugs could actually break their code!

Or their hardware: by that point, some of the impact of those bugs could be baked into the physical hardware.

Edit: fixed a typo.

Add in qualification of specific hardware inside specific operating conditions.

The "move stuff and break stuff" crowd forgets there's a market for "move slow and rarely break stuff" too.

So if you have qualified hardware w/ a certain driver version, you don't just ship a bunch of code fixes to a customer, you ship the bare minimum change set required to fix any specific issues they experience. Because you're sure as hell not recertifying every part revision with every driver version.

I am definitely not one of those "move fast and break stuff" people, especially since I often have only a few months to design something that is expected to be in service until oxidation is a legitimate concern. My personal ventures are largely web based so I've also become rather too personally acquainted with the other extreme - the madness that is node_modules/, implicit *.d.ts imports, and browser impedance mismatches.

The problem in the hardware industry is far more insidious. Unlike in software dev, there is no concept of "full stack" or "devops" in manufacturing. The entire field is so hyperspecialized that you've got people dedicated to tuning reflow temperatures to ±5°C depending on humidity and local weather. No one person can have a proper big-picture view of the situation, so the entire industry is dominated by silos, each with their own micro-incentives and employees competing with each other for recognition.

That's absolutely true. Once you've delivered hardware with certain bugs that can be worked around by firmware/software, often that means the firmware/software teams don't need or even want those bugs fixed. Fixing the bug means they'd have to go back and change their workarounds, which not only costs time/money, but adds risk that there may be unintended consequences and new issues.

Yep, there is a method to the madness. Something complex like making an electronic product is bound to have tons of pitfalls and most of these behaviors have rational (albeit twisted by economic incentive) reasons for why the things are the way they are.

We were, however, an established client of Qualcomm working on a new design, so we explicitly asked for all bug fixes, and they refused except for the ones we could name. We got "lucky" in that no other client had needed both bug fixes at the same time, and when the build system broke, the one guy responsible for it was MIA, so we got the whole source dump (they didn't even test their fixes together until a client found problems).

I've used other Qualcomm products with different engineering support teams that are much better at source control, testing, and even devops but it's a very rare sight within the industry.

It's "if it ain't broke, don't fix it."

Seems very similar to how restaurants and grocery stores throw food out instead of giving it to shelters and homeless people. It's a common corporate policy in the US. The only justification seems to be that if they gave it away, they might miss out on some potential sales.

I think there's also liability issues, like if they give away food and someone gets salmonella poisoning or has an allergic reaction or something then they could get sued. It sucks but it's not completely arbitrary.

That's a myth. Good Samaritan laws exempt donors from that liability.

In addition to possible bad press, it’s also common for people to not know the law. And as I read news stories asking if the reader knew the difference between “best before”, “sell by”, and “use by”, I would also expect some to destroy the produce out of a misguided but kind desire to avoid harming those who they could legally and effectively help.

Doesn't get rid of bad press.

Throwing away food is also a common practice in production factories, to keep prices stable. If the supply of a resource becomes too large compared to its demand, the price drops, and we aren't at the "pick an apple from a tree and sell it" level anymore: if the price drops too much, entire production or distribution companies can go bankrupt.

It stinks badly, but is a sad reality.

I designed the ABEL language back in the 80's for compiling designs targeted at programmable logic arrays and gate arrays. It was very successful, but it died after a decade or so.

It'd probably be around today and up to date if it was open source. A shame it isn't. I don't even know who owns the rights to it these days, or if whoever owns it even knows they have the rights to it, due to spinoffs and mergers.

I think you'd be surprised. I went to Caltech (graduated 2014), which is a fairly well known university for their Electrical Engineering program, and I learned ABEL in my Sophomore/Junior year. My instructor, an admittedly old school hardware engineer, was in love with the language and had it as part of our upper level digital design curriculum for a few labs. FWIW, I think it was super intuitive and a hugely valuable learning tool. I suppose that doesn't mean it isn't "dead" for professional purposes, though.

Thanks very much for the kind words!

Perhaps you can ask the professor who holds the copyright on ABEL now? Then we can ask the holder if it can be open sourced.

Miracles like this do happen - last year Symantec allowed the Symantec C++ compiler to be fully open sourced!

Walter - it appears Xilinx is the current copyright holder, and ABEL was last supported in the Xilinx 10.1 ISE toolset released circa 2008 (current release is 14.7). An introductory guide can still be found here: https://bit.ly/2NfkLWq

This is the URL behind the shortened link:


Thanks, I'll contact them and see what they have to say.

Was it with Glen George?

But academia is not the professional world, it's the opposite...

> It was very successful, but it died after a decade or so.

Perhaps not entirely. We had one lab session dedicated to it during my junior year in college. That was ten years ago but apparently they haven't changed that[0] (course description in English at the bottom of the page).

[0] http://studia.elka.pw.edu.pl/pl/18L/s/eres/eres/wwersje$.sta...

I learned basic ABEL at Purdue in 2006 in the digital intro. course. "identify operators and keywords used to create ABEL programs" remains in the course objectives in 2018 (https://engineering.purdue.edu/ece270/Docs/learning_outs_and...)

I believe there are some Lattice ispPAC parts that still use ABEL, at least on the back end of their GUI.

Xilinx supported their flavor until ISE 10 which is still available to download. They also had a code converter to the HDLs so you could still theoretically target the latest FPGAs using ABEL with some scripting to orchestrate the mixed tooling.

Holy shit, around 2005/6 I took an EE class in programmable logic that used ABEL for the labs. I couldn't find any information on it anywhere and all we had to learn the language was an old single-page photocopy of an example that had been re-copied so many times it was barely readable. And the ends of the lines were cut off.

Needless to say, the ABEL projects were frustrating...indeed a shame that it's not open source.

I have an original manual around here somewhere, I wonder if anyone would shoot me if I scanned it and made it available :-/

Some more info (I'm surprised at the response here!):


"The ABEL concept and original compiler were created by Russell de Pina of Data I/O's Applied Research Group in 1981." This is false. I don't know what de Pina did, but ABEL was developed from scratch by the 7 member team listed in Wikipedia, and the grammar and semantics were designed by myself.

> I have an original manual around here somewhere, I wonder if anyone would shoot me if I scanned it and made it available :-/

Is there some "copyright" printed on it? Before 1989 it was apparently "required" and if not printed apparently it matters if "the author made diligent attempts to correct the situation":


"you should assume that every work is protected by copyright unless you can establish that it is not. As mentioned above, you can’t rely on the presence or absence of a copyright notice (©) to make this determination, because a notice is not required for works published after March 1, 1989. And even for works published before 1989, the absence of a copyright notice may not affect the validity of the copyright — for example, if the author made diligent attempts to correct the situation.

The exception is for materials put to work under the “fair use rule.” This rule recognizes that society can often benefit from the unauthorized use of copyrighted materials when the purpose of the use serves the ends of scholarship, education or an informed public. For example, scholars must be free to quote from their research resources in order to comment on the material. To strike a balance between the needs of a public to be well-informed and the rights of copyright owners to profit from their creativity, Congress passed a law authorizing the use of copyrighted materials in certain circumstances deemed to be “fair” — even if the copyright owner doesn’t give permission."

It's not simple, but maybe an interesting starting point... Maybe, if it is not a product actually being sold or even used as such anymore, there's a reasonable chance that the copyright holder wouldn't be interested in enforcing the protection of the "historical material"?

FWIW, this appears to be de Pina's contribution:


Did you go to Caltech by any chance?



But my degree is ME, not CS nor EE.

Verilog was heavily influenced by ABEL, and is now the predominant HDL in many places.

Have you looked at Chisel?

I had a co-worker that still liked ABEL, I think. Was that the crazy HDL that assumed it could output anything it wanted in unconstrained cases?

>> "Most importantly, we have to change the culture of hardware design. Today, we don’t have open sharing … "

I'll have popcorn ready for the eventuality where IP blocks are widely available under GPL-type FOSS licenses and Intel|AMD|ARM|TI|... is eventually found to have included one or more of those open sourced blocks with an incompatible license in their chips.

The exact same arguments were rehashed ad nauseam by old-timers in the software industry in the '80s and '90s when they clamored that open source was anti-capitalist and un-American. Not sure how and why H/W is different.

I was thinking of this mostly from the standpoint of the viral nature of GPL-style FOSS licenses. E.g. somehow, somewhere, someone manages to inject a GPL-licensed IP block into a commercial CPU, after which, with my layman understanding, the chip manufacturer now owes everyone who happens to have purchased one of those chips the full VHDL, Verilog, etc. source code of the entire chip under the terms of the GPL.

In software you can always replace a library that had incompatible license with another. Not sure how this would work with a chip.

As for attempts to modernize the EDA industry, I welcome that with open arms. And if in the process we end up open sourcing a lot of current IP blocks (think of USB, HDMI, etc. designs) - all the better.

That is an absurdly wrong interpretation of both the GPL and how the courts exercise common sense.

How would you interpret the GPL in the above situation?

Intuitively I'd thought something similar, but IANAL and could be completely wrong, of course :)

Different level of liability. When GPL missteps are discovered in software, the offender can usually just re-release the software without the GPL code and move on. In hardware…it's soldered into a bunch of devices all over the place. Consider Intel taking a $475 million hit for the FDIV bug (requiring a hardware replacement) vs the invisible bugfixing through microcode (i.e., software) they do now.

They would just need to pay a fee to the copyright owner for the violations, not replace all the existing hardware.

You don't skip validation just because you use open source.

Not sure what cores will be needed for you to break out the popcorn. But there are actually quite a few cores available under open and free licenses.

First off, the RISC-V community is based on the open ISA. There are several open implementations of the ISA. And the RISC-V community is making good headway in developing open tools, peripheral cores, etc.


Secondly there are at least two attempts at collecting and tracking open cores for FPGA- and ASIC-implementations.

LibreCores is the newer project. They have collected quite a few projects:

https://www.librecores.org/ https://www.librecores.org/project/list

Related to LibreCores is the SoC builder and core package manager FuseSoC by Olof Kindgren. FuseSoC makes creating your SoC easy:


The older project is OpenCores. OpenCores has been quite tightly related to the OpenRISC CPU core and the Wishbone family of on-chip interconnects. They have been used in many FPGAs and ASICs.

https://opencores.org/ https://openrisc.io/

Then you have projects like Cryptech, which develops a complete, totally open Hardware Security Module capable of doing certificate signing, OpenDNSSEC signing, etc. The Cryptech Alpha design, from PCB to FPGA cores and SW including pkcs11 handling, is open. The project has amassed quite a few cores. The PCB design is available in KiCad. (disclaimer: I'm part of the Cryptech core team doing a lot of the FPGA design work.)

https://cryptech.is/ https://trac.cryptech.is/wiki/GitRepositories

Speaking of tools like KiCad, there are actually quite a few open tools for HW design. For simulation there are Icarus Verilog, Verilator, and cver, for example. They might not be as fast as VCS by Synopsys. But they do work. I use them daily.

http://iverilog.icarus.com/ https://www.veripool.org/projects/verilator/wiki/Intro

For synthesis and P&R the state is less good. For implementation in Altera and Xilinx devices you currently have to use the cost-free tools from the vendors. But there is ongoing work to reverse engineer Xilinx Spartan devices. I don't know the current state though.

But what has been reverse engineered are the iCE40 FPGA devices from Lattice. And for these you can use the open tool Yosys by Clifford Wolf (also mentioned below by someone else).


And if you are looking for open implementations of crypto functions etc I have quite a few on Github and try to develop more all the time:


The sha256 and aes cores have been used in quite a few FPGA and ASIC designs. Right now I'm working on completing cores for the Blake2b and Blake2s hash functions.

I agree that we in the HW community are way behind the SW community in terms of open tools and libraries (i.e. cores). But it is not totally rotten, and it is getting better. RISC-V is to me really exciting.

This will never, ever happen, so why ponder it?

There have been some attempts. E.g. the MIT project called Sirus, which used Python 2.5 as a DSL to describe high-level components you could combine and reuse, and then process to generate SystemC or Verilog.

Unfortunately, while the tool is pretty nice, it never saw major adoption (Qualcomm has some tool using it internally, and a few others do too), and we haven't seen the idea of making reusable libs and components flourish.

Somebody would need to find this project and port it to Python 3.6. With current tooling, it would make writing code in it really nice and ease the creation of reusable components.

Reusability in raw Verilog is hard.

What in the EDA industry is preventing this?

Every CAD system I know of supports ways of grouping circuits into modules and libraries for multiple instantiation. And those libraries are distributable.

I wonder how a system would turn out in which all electronics/software were forced to have both their diagrams/schematics and code published, but in exchange got copyright-style protection for ~7 years or so.

You are describing the patent system.

My advisor at Stanford is working on an open-source hardware toolchain to solve these exact problems. The Agile Hardware Center is trying to bring software methodologies of rapid prototyping and pervasive code sharing/reuse to ASICs, CGRAs, and FPGAs: https://aha.stanford.edu/

It’s a bit ironic that a decade or two ago, there was a drive to make software development look more like hardware development (as if it were somehow better), but the trend has swung all the way around.

And stuff like Cleanroom worked to get the defect rate really low. The formal tooling of things like SPARK are pretty amazing now. Few adopted it but it happened. And like Karrot_Kream says, there's a small subset actually using those kinds of methods to their benefit. It's more than "dozens." Not mainstream by far, though.



What was this drive?

Using contracts, random verification, and formal verification methods to ensure the integrity of software blocks, I believe. There's still dozens of us out there, heh.

Not just that, but there was a genuine effort to make programming more like connecting together components on a board.

How was that different from ideas found in object-oriented programming?

There were folks who believed you could instantiate objects graphically, just connect them together, and wire them up. Take a look at Java Spring Beans, which let you describe object instances through XML that either defines an object instance or wires up other instances together.
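For reference, Spring's XML wiring looked something like this (the class and bean names here are made up for illustration; the namespace URL is Spring's real beans schema):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <!-- Instantiate a component and set a scalar property on it. -->
  <bean id="dataSource" class="com.example.SimpleDataSource">
    <property name="url" value="jdbc:h2:mem:demo"/>
  </bean>

  <!-- "Wire up" another instance by reference, like connecting two parts. -->
  <bean id="orderService" class="com.example.OrderService">
    <property name="dataSource" ref="dataSource"/>
  </bean>
</beans>
```

The container reads this file, instantiates each `<bean>`, and injects the `ref`erenced instances - the "connect components on a board" idea expressed as configuration rather than code.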

What you are mentioning sounds like LabVIEW (http://www.ni.com/en-gb/shop/labview.html). Incidentally, it is widely used in the semiconductor industry.

I remember testing Java Studio as a student back in 1997; I guess that's Spring Beans' spiritual ancestor.

A review with screenshots : https://www.javaworld.com/article/2076574/developer-tools-id...

Any tips on getting started with FPGAs on custom PCBs?

What exactly do you mean by "getting started with FPGA on custom PCBs"? Have you made custom PCBs with high-density BGAs before? Most non-trivial FPGAs (to me that means you can easily fit a decent softcore processor with space left over for your FPGA logic) will be ball grid arrays and almost impossible to DIY without X-ray inspection equipment and a reflow oven. You can get what you need for a few hundred dollars on eBay if you're patient and lucky, but even then, getting to the point where you can solder the chips reliably will take time.

If you have the budget for professional assembly, then I would start with a Digilent product that has an available reference design you can copy later. First get used to the FPGA and how it works (they are largely incomparable to a CPU or GPU except for the fact that they both have transistors). Then, design a simpler board with a smaller FPGA and work your way up to the big 500-1k pin chips.

You don't need X-ray, and your reflow oven can be a toaster oven. I do recommend a cheap microscope and a good pair of tweezers, though. With this you can do 0.5mm pitch BGA, although that is pushing it. I've made hundreds of prototypes this way.

The fear of BGA parts is seriously overblown. The only really expensive part is that finer-pitch parts will require tighter tolerances on the PCB, which will take you out of the PCB batch-service price tier.

edit: toaster oven, not toaster

> With this you can do 0.5mm pitch BGA although that is pushing it.

If you have actually figured out how to consistently make boards with BGA-256/512 0.5mm pitch parts with just a toaster, I'd love to learn more about your technique. Even with professional inspection equipment and a pick and place, it's rarely worth the effort unless I have an imminent deadline.

For a beginner, I doubt the cost of trial and error would be cheaper than just having someone else do it. He could go with a larger pitch [edit: and smaller pin count] but I have never successfully introduced someone to FPGAs without a relatively huge chip capable of running a soft core closer to what they're used to with in general purpose computing.

I use a 10x jewelers microscope and tweezers to get the alignment perfect. The hardest part is actually applying the solder paste; there is no room for registration errors. I triple check everything.

As for the reflow profile, I raise the temperature gradually, with the thermocouple taped directly to the PCB with Kapton tape. I do this manually, not with an automatic controller. My toaster oven is a convection model with a fan, which helps, and I had some trial and error with the rack position.

The part in question is the TPS62180, which is a dual-phase step-down converter.

Edit: I think you are conflating the two. I've done both high ball counts and 0.5mm pitch - never both at the same time. I've soldered high ball count Xilinx FPGAs at 0.8mm, which was relatively easy. Also, I stress these are for prototypes. You can do production runs - that is how the company behind 3DRobotics got their start in the early days of quadrotors - but it's a whole other ball of wax.

My apologies - I work largely on client/personal projects in the high-speed digital/RF domain, so my assumptions can be all over the place. Client budgets/deadlines leave little room for mistakes and my dexterity isn't what it once was, so I use a PnP or make funky assembly jigs (my CV implementation/hands make it easier to align with pins).

If you can, I'd really recommend talking to dentists who are retiring in your area or buying an X-ray machine on eBay. They're pretty cheap and relatively safe and will open up a whole world of PCB fab. The problem with BGAs is the lack of feedback: if you have a chip with hundreds of pins and precise requirements for power boot-up timing or dozens of impedance matched traces, figuring out what went wrong with your boards is literally impossible without proper inspection.

Since you have an intuitive feel for soldering BGAs already, I think that with a visualization of the solder joints you would be able to solder high pin count chips at small pitches. Most issues that I've experienced with complex BGAs are caused by mass manufacturing, where you don't have the room to reflow every 10th board and you find out too late that a variable (from personal experience: El Niño) has changed and reduced your yields by double digits. If you can take the time to do it right by hand, the sky's the limit.

If your back is against the wall you can also hotfix design mistakes like swapping two LVDS MIPI2 pins by using a laser drill to strip the plating on microvias at an angle to preserve the trace above it and resoldering them with wire only slightly bigger than IC interconnects. An almost completely useless skill but nothing beats the feeling you get when you can command RoHS compliant surface tension to do your bidding. Fair warning though: here be dragons.

I've noticed a lot going to trade shows that the North American market has turned so much to the high end that most people assume you need all this fancy gear to make anything. It's really stifling innovation, in my opinion. People think they need tens of thousands to get an MVP prototype out the door, and most of the time that kills the idea. If you are a big company and already have the resources then by all means use them, but someone with a good idea shouldn't think it's required to build something new.

On the high speed digital front, we do have 1 GHz DDR3 and it's worked very well - we don't even use controlled impedance. Of course we are extremely careful to keep traces short and length matched. Our application can handle a very small error rate; however, in practice we haven't seen any corruption at all in testing.

It's not the right approach if you are pushing the envelope of technology. But if you need a bit more grunt than an ATmega, by all means design in the ARM SoC and DDR3. You don't need fancy gear to do it.

Edit: Also thanks, we're looking into expanding more manufacturing in-house and you've given me some ideas. I'll definitely look into an X-ray machine.

I agree on the trade show front. I feel like the worst offender is LPKF: I've come close to losing a significant fraction of my client base twice because several clients insisted that $100-200k capex/support expenses were worth slightly faster (theoretically) prototyping speeds and the vastly higher NRE of learning how to design for the damn thing. We ended up sending 90% of the boards to a third party assembler anyway - delaying the completion of the project two months past my initial estimate with twice as many revisions as I had predicted. Literally the worst project of my career was caused by inability to say no to flashy new tech. Twice (Yeah, still bitter).

On the other hand, I did an STM32F4-based design last year and it was glorious. SnapEDA for all symbols/footprints, Altium design vault for a specialized FPGA design, and GitHub for an STM32 reference design PCB file using the same layers/copper weights as my requirements, with imported fab design rules for good measure. The PCBs cost like $1500 and assembly was only $1600. 10 years ago I used to pay that much per single board in quantities of 30, for roughly the same complexity. This time, the whole thing (minus firmware) took a weekend.

Good luck with your mfg! If you have a chance, please blog about your experiences. There aren't enough people spreading the art of solder.

It's been a little while since I did proto boards, but I remember a vast chasm between industrial & hobbyist, that basically forced me to wrangle packages I had no desire to deal with. You could get 4GB DDR2 DIMMs, or you could get a DIP 8kB SRAM. For a microchip you could get TQFP-32, or BGA-1000. So on and so forth. I remember a useful chip that came in nothing but QFN-16.

It seemed like it wasn't that I needed fancy gear to make blinky lights, but rather that for anything better than 8-bit, kilobytes, and low MHz, the only parts available were the fancy commercial stuff.

SMT soldering is seriously easy - I teach it to kids - people are just too scared of it. I've done small BGAs - hot air reflow, a fine-tipped iron, and (if you have old eyes like mine) a stereo microscope are all you need.

I do occasionally do small (100+ pin) BGAs - for those I get a steel solder stencil made and use solder paste and a cheap Chinese reflow oven.

I'm trying really hard to communicate that you can use these parts pretty easily. Just get a stencil made with your PCB; it's another $40. Add in a toaster oven, some solder paste, and tweezers, and you're all set.

Once you see how easy it is to reflow a board you won't want to bother with hand soldering DIP packages anyway.

If you have access to a laser cutter you can make a pretty decent solder stencil by folding some clear packing tape (so all adhesive is on the inside) and having it cut out the hole pattern. Lay it down on the PCB, squeegee some solder paste over it like you would with a real stencil, and carefully peel the tape up.

Been a few years, but IIRC a labmate of mine got a few units of .5mm highish ball count to work using that process and a toaster oven, but it is a bitch to test.

It would maybe do for a hobby project, assuming you have access to the equipment (e.g. a hackerspace or something) and are willing to drop a buttload of time into trying to get it to work, but if you are working for someone it is probably more cost effective to save the man-hours and get it done by someone with the right equipment.

Things have changed a lot. You can get proper stainless steel stencils from lots of providers now, most PCB houses have it available as well - done to the same specs as your PCB.

The only time it really fails is if I'm lazy applying the solder paste or if I stop paying attention during reflow. Doing it in house saves a ton of money which is really important when you are trying to get your first prototypes up and running. Plus it also keeps you deeply in tune with what is possible to manufacture and what isn't. DFM stops being just a checklist.

We'd have a lot more hardware startups if people realized how cheap and easy it's become. You need the prototype to get funding.

Agreed, if you meant a toaster oven. It does take precise temperature control if you want any chance at reliability. And a fine touch. It's easy to line the whole thing up one row off and have to scrap the part, which is a bummer if you can't re-ball it.

If I have never reflowed a PCB before, how should I learn?

Reflow trash PCBs. Focus on packages like TQFP, and discrete components of human-visible sizes.

Your comment reads as if you equate 'BGA' with 'SMD'; a BGA is a ball grid array with up to 1,000 tiny pads that have been pre-dipped in solder. Your toaster isn't going to work.

Yes, it will. Sure, it won't do wonders with 0.5mm pitch BGAs, but 217-pin 0.8mm ones are doable even with a cheap Chinese hot air station. That's how I did these boards.


You can use a toaster, but your yield will be poor. Slapping some insulation on the oven and a PID controller will improve things significantly.

If you screw up the solder job, you can use a hot air gun to remove the BGA part, reball it (using a stencil, fairly cheap if it's a common package), then try reflowing again.

If you practice for a few hours on scrap electronics, you can get good enough at it.

BGA is actually easier than leaded SMD parts. The tiny leads tend to bridge easily. With BGA you can be up to half the pitch off and it will center itself. I actually only go for leadless and BGA now because anything else is more of a hassle.

It is a bit rude to assume I don't know what a BGA part is.

The tweezers bit is what triggered that. Handling a large BGA with tweezers is going to scratch the PCB if you're not ultra careful.

Anyway, if you do this stuff often enough then I see no reason why you wouldn't get the proper tools; a rework station and an actual reflow oven or something with a PID-controlled heating element would make your life so much easier. Working with bad tools would drive me nuts.

The reason the larger BGA center themselves is as soon as the solder goes fluid there is a lot of accumulated surface tension trying to reduce the size of the bridge and that will center the part all by itself. For that to work properly though everything has to become fluid more or less at once and stay fluid until the part has shifted to the right position.

The tweezers are for the 0805 passives and nudging the FPGA itself. It takes forever, but then so does programming the pick and place for a one-off.

As for the reflow, you can actually get more consistent results with the toaster oven - it just won't be able to handle the volume of actual production. Whatever you do, just don't try going "semi-pro" and getting one of those IR ovens from China. Stick with the $40 Walmart special. The toaster oven, when heated slowly, is much less likely to have hot and cold spots. Stenciling and placing parts take up a lot more time and are much more error prone.

Someone told me a Spartan-6 might be enough for what I need. Will QFP make my life easier?

I personally find them more annoying than just BGA. But conventional wisdom is they are easier since technically you can solder them with a soldering iron and eventually you'll get lucky and remove the last solder bridge without creating a new one.

The trick with BGA parts is to use solder paste and a stencil. Once you try it you will never go back. By the way, make sure you read the Xilinx documents extremely carefully. It's easy to forget to wire PUDC_B and prevent the thing from ever being programmed. There are lots of little gotchas.

IMO start with QFP, yes. It's not trivial and has a lot of hand work, but critically you can both see & fix your mistakes.


Have you ever considered a DIY youtube channel?

If you, @slededit, others on this thread did something like Louis Rossmann, that'd be fantastic. I've never even seen a dentist's x-ray machine, much less seen it used for DIY design. That'd be amazing to watch.

Louis Rossmann https://www.youtube.com/channel/UCl2mFZoRqjw_ELax4Yisf6w

(But maybe y'all would dial down the rhetoric from an 11 to an 8 or 9.)

The project I'm working on right now is with a FLIR Tau 2 thermal camera. Our university got it donated, but the only decent expansion board is 1400 euro, so I figured I might use that budget to build my own. I basically just need to convert LVDS to MIPI CSI-2 or Ethernet. You can find the LVDS spec on page 16 of this spec sheet https://www.flir.com/globalassets/imported-assets/document/f... I need to make a board with the 50-pin Hirose connector. Do you have any suggestions on getting started? I figured a QFP Spartan-6 would be enough.

Man I hate those connectors. I don't have a problem soldering most things down to 0.4mm pitch & 0402 components without magnification, but these connectors love to bridge. Either lay down a (thin!) bead of solder paste, or try drag soldering (I prefer this).

With LVDS, make sure to keep tracks short, have some impedance control (doesn't need to be super strict), and match trace lengths. One issue I had before was a clock (90 MHz, I think) that wasn't quite in sync. Ended up ordering a new board with an 8-channel inverter, and snaking the clock through a few of the gates to get a slight delay.

I'm curious, my uni satellite club is using the same camera for a cubesat, is that the same domain you're using it for?

We are building ours for a drone. Please feel free to contact me if you want to work together to figure this camera out.

Power requirements are annoying. You need multiple voltages and they need to ramp up in a certain sequence at a certain ramp rate(!). Study reference designs - find a development board that has the same family of FPGA and also has published schematics and board layout (such as Intel and Xilinx official boards). Then just copy their power layout.

Just had a really good experience using a free online service [0] and using a home made reflow set up. I guess it depends on your end goals of the PCB deployment?

[0] https://easyeda.com

I just got back from the Design Automation Conference in San Francisco. It is one of the major EDA conferences. Andreas Olofsson gave a talk about the silicon compiler. There was serious discussion about open source EDA. As far as I could tell it is still unclear what the role of academia will be. It seems tricky to align academic incentives with the implementation, and most importantly, maintenance of an open source EDA stack. However, there is quite some buzz and people are enthused. A first workshop, the "Workshop on Open-Source EDA Technology" (WOSET) has been organized.

I also thought I'd try to answer some questions that I've seen in the comments. Disclaimer: as a lowly PhD student I am only privy to some information. I'm answering to the best of my knowledge.

1) As mentioned by hardwarefriend, synthesis tools are standard in ASIC/FPGA design flows. However, chip design currently often still takes a lot of manual work and/or stitching together of tools. The main goal of the compiler is to create a push-button solution. Designing a new chip should be as simple as cloning a design from GitHub and calling "make" on the silicon compiler.

2) Related to (1). The focus is on automation rather than performance. We are okay with sacrificing performance as long as compiler users don't have to deal with individual build steps.

3) There should be support for digital, analog, and mixed-signal designs.

4) Rest assured that people are aware of yosys and related tools. In fact, Clifford was present at the event :-) Other (academic) open source EDA tools include ABC for logic synthesis & verification, the EPFL logic synthesis libraries (disclaimer: co-author), and Rsyn for physical design. There are many others; I'm certainly not familiar with all of them. Compiling a library of available open source tools is part of the project.

Edit: to be clear, WOSET has been planned, but will be held in November. Submissions are open until August 15.

> Compiling a library of available open source tools is part of the project.

Compiling? What does that even mean? What about funding?

I wrote arachne-pnr, the place and route tool for the icestorm stack. My situation changed, I didn't see a way to fund myself to work on it and I didn't have the time to work on it in my spare time. I assume that's one of the reasons Clifford is planning to use VPR going forward (that, and it is almost certainly more mature, has institutional support at Toronto, etc.) I would have loved to work on EDA tools. I've moved on to other things, but I wonder if these programs will fund the likes of Yosys/SymbiFlow/icestorm/arachne-pnr.

Do you have a link to WOSET? Googling doesn't turn up anything.

The website isn’t up yet, but it’s organized by Prof. Sherief Reda at Brown University. You can contact him for inquiries.

So it costs $500 million every time someone designs a SoC and (before now) nobody has spent $100 million trying to make that more efficient?

I think the primary focus / value here would be if they can somehow dramatically reduce the cost of making masks and ICs. Let's say these researchers make the actual process of converting C code to silicon super easy; then you go to make the chip and they are like, "cool, the mask / fab cost is like 500k USD for samples" - then basically the exact same people who currently make chips will keep making chips. What would be awesome would be if DARPA funded somebody converting an old 90nm fab into a low-cost foundry that was basically fully automated and subsidized it to allow people to make a chip design for $1000 USD. Then you would have a flood of people just being like, "well, it's $1000 bucks, why the hell not try to make this chip even if it's wrong..." Most businesses would happily roll the dice on stuff for that kind of price, and some individuals would as well.

Uh, for 90nm you should be in the 4-digit range for a handful of prototypes.

Oh yeah, I was totally exaggerating, since obviously the prices vary tremendously depending on what you are trying to do. I've previously worked for a semiconductor company, so there is sort of a "you can pay as much as you want" option always available if you want a super awesome mixed-signal chip. For the general public, do you have a particular place you are aware of where you would pay the 4 digits for the mask plus a few prototypes? I was not aware that there were even simple chips you could get in that range... I mean, 4 digits implies what, like say 5k... so that's pretty low cost... really?

Both CMP and MOSIS have a number of options that come in under US$10k for a handful of prototypes. Large process nodes, so they won't be competitive for digital stuff, but for analog or mostly analog mixed-signal, they might actually be able to beat anything you can buy off the shelf. Haven't tried it yet myself, though.

You can get an automated quote from the MOSIS link below:


There is one, from the European network made to provide semiconductor prototyping for universities. [0]

[0]: http://www.europractice-ic.com/

I read that as "nobody has spent $100 million trying to make that more efficient and open sourced it/made it available to smaller companies."

Someone who is NOT in the business of SoC design may have wanted to open source something like this just to commoditize their SoCs. But if you're in that business already, then why help reduce the barrier to entry?

If you are routinely making $500M SoCs, you too can benefit from the automation/process improvements (as an established actor).

They have. The problem is that efficient silicon layout is NP-complete and automated tools can't do better than humans today. At best they can assist and detect certain classes of errors. I'm sure Intel has invested more than $100 million on tooling by now.

There are many NP-complete problems where computers do vastly better than humans.

But when it comes to EDA, the utterly closed-source culture of the industry has completely prevented what essentially amounts to a horde of smart people from working on the problem: no one - except a very small number of insiders - even knows what the problems are.

It's really godawful. Software guys have no idea how good they have it.

You have layers upon layers of closed source EDA tools and home-brew scripts, all built around horribly abysmal "standards" like SystemVerilog, mostly cobbled together in TCL and Perl.

For instance, the following sounds like a horrible joke, but it is real. In a real-life, world-class company, port connections between modules are handled using a Perl script. This Perl script invokes, I shit you not, an Emacs-Lisp Verilog parser that scans your files to figure out what submodules you are instantiating, infers the ports that you want to connect, then injects a blob of connections directly into the source code file you were editing. The engineers at this company then commit the output of this Perl script into their source code repository. Predictably, the diffs are full of line noise.

And you can't escape from this, any of it, because there are so many tools involved, and they all do such useful things, that you end up truly stuck using whatever features all of the tools support. And `ifdefs. Sigh...
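(For context, what's being described sounds a lot like Emacs verilog-mode's AUTOINST feature. A rough sketch of the workflow, with made-up module and signal names:)

```verilog
// You write the instantiation with a magic comment:
fifo u_fifo (/*AUTOINST*/);

// The script then expands the comment in place, and the expanded
// text is what gets committed:
fifo u_fifo (/*AUTOINST*/
             // Outputs
             .rd_data   (rd_data[31:0]),
             .full      (full),
             // Inputs
             .clk       (clk),
             .wr_data   (wr_data[31:0]),
             .wr_en     (wr_en));
```

Every regeneration rewrites those blocks wholesale, which is where the noisy diffs come from.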

So true. I used to manage a design verification business that made very good money consulting for companies' in-house design groups. Most of what we did was write Tcl and Perl scripts to glue together tools. Our mojo was that we had seen so many examples of design flows that had terrible tool integration and had to be fixed with one-off scripts, and as a result had built up a large library of scripts that could be quickly tweaked to fix the next customer's problems. It felt a lot like working as a plumber to clean old clogged drains, a pretty dirty business.

"...a lot like working as a plumber to clean old clogged drains..."

Working IT (aka DevOps, continuous tech support), it's just cutting & pasting strings. Maybe some data quality stuff.

Gods I miss product development. Actual design & programming.

Software guys have it good thanks to software guys; especially the code-sharing, open-sourcing software guys. And mostly just in the last quarter century or so.

> It's really godawful. Software guys have no idea how good they have it.

You can also look at it from the other perspective: at least you can still make money making hardware tooling. That is not possible for software tooling, as everything is open-source already.

>That is not possible for software tooling, as everything is open-source already

You have a very distorted view of open source. People working on open-source-related projects do make money. And lots of it. They're just not selling what essentially amounts to obfuscated source code, a way of doing business that is apparently a great source of astonishment for HW folks.

The other point is the state of hardware tooling. Tool vendors may be making money, but their customers are getting shafted something fierce.

Every single EDA tool I've used feels like it was written in the 80's: slow, bulky, bloated, opaque, complicated, unpredictable behavior, need band-aids all over the place to get it to do what you actually need (Tcl anyone?)

The typical byproduct of an industry focused on secrets instead of innovation.

> People working on OpenSource related project do make money. And lots of it.

I was talking about compilers and such. Do you have any evidence to support this case?

>Do you have any evidence to support this case?

In the SW world, tooling people are busy kick-starting entire industries. You can argue they may be making less money, but as to their overall utility to the ecosystem, it's a clear win AFAIC.

Here's an example:


Where is the LLVM of the HW world?

Yes, my entire point was that it's difficult to make money writing SW tooling now, like it was possible in the past (e.g. Borland made a lot of money selling different kinds of compilers in the 80s and 90s).

Nowadays, I suppose you can still make money writing compilers, but besides being a master at compiler design, you now also have to have the skills to sell your services, which is difficult, time-consuming and boring. You can't just sell shrink-wrapped products like before.

You couldn't be any more dead-on. EDA feels like Scientology, where with every level closer to the core you first need to invest half your life savings + a firstborn. I have an FPGA lying on my desk that I can do barely anything interesting with because there are no open-source cores for even simple things like USB 3.1 controllers or Thunderbolt or pretty much any interesting bus. Protocols are strongly guarded open secrets where everyone in the industry knows how they work (like MIPI, for example) but God forbid anyone outside the industry might show interest in them. Then you first need a $100k annual membership to the foundation and need to sign an NDA with your blood and that of the next three kin in line.

There's a cargo-cult confidentiality that achieves absolutely nothing except completely frustrating anyone who tries their best to keep their interest in it alive.

USB 3.1 pushes data at 10Gbps. Do you have any idea how much black magic it takes to do that?

Try writing an FPGA SDRAM controller that reliably operates across frequency, voltage, and temperature at 150 MHz. It is doable, but a challenge with the common Xilinx boards that are sold to hobbyists for $80. If you can do that, then think about designing a circuit which operates 20 times faster and negotiates a protocol which is 100 times more complex than what an SDRAM controller has to do.

We routinely grab a highly complex operating-system kernel for free, build it repeatably from source, and deploy it on machines. It's called Linux.

Kernels are extremely complex software blobs with crazy race conditions, and tons of device-specific hacks. Somehow the world goes on with Linux going open source. Nobody is claiming that a single person will bang out a USB 3.1 block, but building up these "IP"s over time through the community would be a huge first step.

I have no doubt that solving modern state-of-the-art protocol is an extremely complex endeavor.

All the more reason for sharing your solutions with the rest of the world, or at the very least parts of it.

The problem with the EDA industry is their firmly held belief that these kinds of trade secrets are what they're competing with.

Think of Google, who had this exact same struggle internally.

For BigTable and MapReduce, they chose to sit on the "IP", to put it in H/W parlance. This gave the world Hadoop and its evil twin file system whose name I forget. Yay (Hadoop is a catastrophe).

For TensorFlow, they chose the open-source route. TF is arguably the best framework for ML. There's a lesson somewhere in there.

Google has that luxury because they make their money on ads. If their primary source of revenue was selling a database product (like say oracle) then there may have been a different decision made there.

Not to mention the "commoditized complement" approach, e.g. https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/

That's probably what's lacking here, anyone with anything to gain by opening up all the bemoaned goodies. Is that Darpa's role?

Nice article. If I understand it correctly, I would argue it's the fab's role. :)

But in contrast there are boatloads of free open source software that does highly nontrivial things and has thousands and thousands of PhD-man-hours poured into it.

Indeed. But what if every statement in those boatloads of open source software were a race condition, and varied with each machine, frequency, temperature, voltage, workload, etc.?

You're saying the problem is harder. I'm more than willing to believe that.

Wouldn't you then want ten times the number of people eyeballing the code? And boatloads of free contributors submitting change requests to fix potential problems?

And a marketplace of users that can freely decide which "IP" to choose? (I still find the term so deeply offensive, I can't help but put it in quotes.)

The key is that the solution is so application-specific. Write code once, and it runs anywhere forevermore. But IP blocks are not really like that. Especially advanced IP (USB 1.1 vs USB 3.1) needs to be heavily customized for each application. So Joe User can't just download a block from SourceForge and use it.

A lot of very complicated, distributed systems require a lot of customization and tuning to run. Kafka, a system used for distributed "logging" (sorry for the gloss) is known to have a huge operational burden, and takes a lot of customization. If you've ever deployed ElasticSearch, a document search tool, you know you have to write non-trivial blocks of code that operate on the domain of the target documents, to even have ElasticSearch produce basic results.

The point is that, while these blocks can be complicated and need lots of customization, the ability to grab the source and modify it yourself is a huge enabler. If I could grab a stock IP from a repo and then spend 40 man-hours customizing it, it would be a huge benefit over spending 1000 man-hours making the IP from scratch.

I hate to appeal to expertise, but I've written HDL, synthesized it, and done mask layouts before. I understand that it requires a lot of customizing, but the first step to all of this is sharing. Sharing infrastructural components is a huge reason of why the commodity software business is as robust as it is.

I don't disagree that it would be great & all. But my point is that there are things working against it that software doesn't really have to deal with (or at least has solved).

And let's be honest, a state-of-the-art block ported to the latest process isn't going to be 40 man-hours to customize; you're probably looking at more like a couple of man-years unless it's very well designed... which is why reusable core shops are a viable business. And how vibrant of an open source community would you have if it took a small team a year just to customize & implement the open source design for their application? You would have very few users.

Now, you could argue that because a lot of the real work is in the customization that they could open source the core. But where does the customization end, and the core, the root functionality begin?

I think a more realistic pitch would be an open source testbench or behavioral model that verifies your core is fully spec compliant.

"sourceforge". nuff said.

>USB 3.1 pushes data at 10Gbps. Do you have any idea how much black magic it takes to do that?

I doubt most projects need the full 10 Gbit/s. What if they only need 750 Mbit/s? That's unfortunately not possible without USB 3.1. The alternative to USB 3.1 (Gen 1 or 2, doesn't matter) is USB 2.0, which can only do 480 Mbit/s.

> because there are no open-source cores for even simple things like USB3.1 controllers or Thunderbolt

Those are hardly that simple. There exist entire companies whose function is designing such cores (and licensing them).

Red Hat entirely operates on a business-licensed version of a fundamentally free product. Oracle does pretty much the same. Many large, widely deployed and lucrative software packages are completely open-source. Because there are very few businesses who think that grabbing those packages without vendor support is a good idea. Especially in the EDA industry, with its very sensitive and particular cores, having the source does not guarantee it working, let alone working well. Buying vendor support is cheaper than winging it yourself.

I guarantee you, every single Chinese manufacturer who wants to dodge licensing fees has already done so with stolen IP cores. After all, EDA companies are hopelessly stuck in the past and woefully unaware of proper security practices.

Yeah I'm reading these comments and getting a good laugh.

Yup. Thanks for proving my point. This comment of yours is precisely the culture I've experienced when talking / trying to work with EDA folks. Snide comments, mockery, "you sw guys just don't get it", and a general ivory-tower attitude towards everyone who's not in the inner circle.

I'm a hw guy. FWIW, that reaction you see might owe a lot to the periodic castigation directed our way over things like (paraphrasing):

"There's a clear need for a Ruby-to-gates compiler; the only possible explanation for why there isn't one is because hw guys just don't get it."

Hard not to just throw up your hands.

The sad irony is that a lot of the tools, languages, & culture of today's software could really be valuable - for formal verification. But that doesn't get much attention.

Yeah, when I first read the headline I thought it said $100 billion; now I'm a little disappointed.

On the opposite side of the spectrum, there's Chuck Moore (Forth's creator), who, in trying to find the simplest combination of software and hardware for his projects, devoted a lot of time to a DIY VLSI CAD system. Fascinating history behind it, although the actual OKAD system is essentially a trade secret of his company.

His site has been down for a while, but someone thankfully mirrored most of the pages here: https://colorforth.github.io/vlsi.html

More history about OKAD, plus links to more about Forth both software and hardware: http://www.ultratechnology.com/okad.htm

Side note: when people complain about the military budget, projects like these should be noted. Political reality in America today is that military R&D and jobs programs are easier to fund than civilian ones, so that's where projects go to live.

We could fund this exact project through academic means. We choose to allot funds through the military.

A lot of academic funds have military roots?

and why do they have to?

In Australia we have CSIRO, which does some amazing research, and our military budget is basically nonexistent compared to the US military budget.

Australian defense budget is 36 billion [1] with a GDP of 1.2 trillion. US defense budget is 600 billion with a GDP of 18 trillion. It is just about the exact same percent of GDP.

[1] https://www.pyneonline.com.au/media-centre/media-releases/a-...
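(Sanity-checking the arithmetic on the figures as quoted, in billions of USD; see the replies about what these numbers actually represent:)

```python
# Defense spending as a share of GDP, using the figures quoted above
# (billions of USD). These are the thread's numbers, not audited data.
au_budget, au_gdp = 36, 1_200       # Australia
us_budget, us_gdp = 600, 18_000     # United States

au_pct = 100 * au_budget / au_gdp   # 3.0%
us_pct = 100 * us_budget / us_gdp   # ~3.3%
```

On those numbers the two are within a few tenths of a percentage point of each other.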

If you read the sentence just before it, 2% of GDP is the growth target: "[...] the Defence budget growing to two per cent of GDP by 2020–21". The $36b number is the target value in 2020-2021, not right now. The current US defence budget is 3.1% of GDP.

As a percentage of total government expenditure, Australia spends 6% on defence and the US spends 16%.

And it's $700bn now, which is almost as much as the entire rest of the world spends on defense.

GDP is not what that defense budget is protecting. People and property are what that budget exists to protect.

China has the second highest defense budget. We have about 1/4 the population of China, yet we spend several times more on defense.

CSIRO is funded at about $787 million AUD/year.

Australia's Defence Science and Technology Group, which does science and other research for the military, is funded at about $400 million AUD/year. It's Australia's 2nd largest research organization after CSIRO.

This $100M research project is equivalent to about one fighter plane (without the ammunition, I guess). Imagine what we could fund for the cost of an aircraft carrier.

A supercomputer project the scale of the K-computer in Kobe.

So instead of explicitly funding research we should implicitly fund it in a roundabout way and you're saying people shouldn't complain about this state of things?

I have seen many people on HN and elsewhere use the same argument in other contexts. For example, to justify spending on the Apollo program on the grounds that some of the money was used to fund research which turned out to be useful outside of manned space flight. So apparently it is indeed considered a perfectly fine state of affairs.

I'm being a bit facetious of course, I just find it funny that people's acceptance of this argument depends on how much they support the primary subject of the funding.

I sincerely doubt our enormously bloated defense budget is because of research...

Guns and bombs are cheap. Everyone has plenty of them. The power of our military is in force projection; that is, the ability to use those guns and bombs anywhere at any time.

That requires massive, sustained logistics spending. To the point it may seem absurd and wasteful. But the alternative is fighting a fair fight on equal footing. I’d rather not do that.

Guns and bombs are cheap. The Zumwalt, F-35, B-2, and more are not cheap. I would point out that “smart” bombs and cruise missiles are not cheap, and the cost really starts to add up. Nuclear weapons research, production, and maintenance isn’t cheap.

So really it’s fair to say that guns and bombs aren’t cheap, but they are also a distraction from the delivery systems, which are catastrophically expensive.

> To the point it may seem absurd and wasteful. But the alternative is fighting a fair fight on equal footing. I'd rather not do that.

Yeah? Are you sure the alternative isn’t to spend 3 or 4 or 5 times as much as any other nation instead of 7 times as much? Maybe the alternative is to avoid boondoggles, and focus on proven tech. Of course the real alternative is that doing so would get in the way of the real business of arms dealing.

>The Zumwalt, F-35, B-2, and more are not cheap.

You're right, they sure aren't. But as a result no other military force on earth besides NATO has the capability to launch air superiority fighters from amphibious assault ships, and perform multi-ton circumglobal bombing sorties. That kind of capability doesn't come cheap, and shouldn't be dismissed.

Take away the B-2 and we are fighting toe-to-toe with Russian/Chinese long-range bombers.

Without the F-35 we are on equal footing with Chinese/Russian carrier based aircraft

Without Zumwalt class ships, we are going head on against Chinese missile destroyers and subs of equal capability.

The whole point is that you don't want an even remotely fair fight. "Keeping up" with others's spending, even within an order of magnitude, is a really bad idea if you can help it.

I don't disagree with your post, but you shouldn't use the Zumwalt class as an example because the US has decided to stop production of it after 3 ships. For a citation see the bottom of the section https://en.wikipedia.org/wiki/Zumwalt-class_destroyer#Backgr...

I don't think the Zumwalt was ever meant to be a mass-production ship. I think it was built to prove the concept of lower-signature vessels. Notably, it's too small and lightly armed to accomplish most Navy objectives. The stealth technology makes it hard to modify for specific objectives.

Basically, what are you going to do with a Zumwalt that you couldn't do with a submarine? Is it better at any of those tasks than a conventional destroyer?

>>I don't think the Zumwalt was ever meant to be a mass production ship

You think wrong.

Maybe I should have clarified that the Navy never considered it to be a real ship, for the aforementioned lack of a role in the fleet. Bath Iron Works probably wanted to build hundreds of them but that was little more than a hopeful fantasy on their part.

https://www.nytimes.com/2009/04/09/business/09defense.html states that 10 "were originally planned" though the intro to the wikipedia article says that "the class was intended to take the place of battleships in meeting a congressional mandate for naval fire support," so I'm not going to dispute your clarification because I know that Congress sometimes forces the Navy to buy things the Navy does not want.

If the US military is, in any serious way, going toe-to-toe with Russian/Chinese bombers, I hope you've got your fallout shelter stocked, because as likely as not, both sides in the conflict will cease to exist ~20 minutes after that point in time.

Ukraine is still alive after 4 years of war with Russia. Relax.

They’re at war with Russia because they gave up their nuclear weapons. No one is invading a nuclear power, which I think is the point the post you’re responding to was making.

How do you know that Ukraine still doesn't have nuclear weapons?

Because I assume that the Russians, when they armed it (in Soviet times), and then disarmed it (in post-Soviet times), could count up to, and back down from, twenty.

If they still have weapons, they sure as hell haven't been using their existence as a deterrent. The whole point of having nukes is letting potential aggressors know that you have them, and that, if attacked, you may be crazy enough to use them.

> The whole point of having nukes is letting potential aggressors know that you have them, and that, if attacked, you may be crazy enough to use them.

You must know that Ukraine was attacked 4 years ago. So the "whole point" is not applicable in this case.

The point is not “Russia vs. USA” in an all out conventional war. It’s about spheres of influence. If I can project my will upon the (non-nuclearized) world more effectively than anyone else, my power increases substantially.

It’s why Russia is considered only a regional power, despite having a larger nuclear arsenal than the US. Their ability to perform intercontinental conventional bombing and troop deployment is non-existent. To cede this power to any other nation would be catastrophic to world peace.

Its about spheres of influence. If I can effectively project my will upon the (non-nuclearized) world more effectively than anyone else, my power increases substantially. Its why Russia is considered only a regional power, despite having a larger nuclear arsenal than the US. Their ability to perform intercontinental conventional bombing and troop deployment is non-existent. To cede this power to any other nation would be catastrophic to world peace.

What world peace? The US has started two wars that destabilized a whole region, and it shows no sign of restoring anything like peace there. World peace from such a belligerent nation, one that drone strikes most days of the week, is a bad joke or a real example of Orwell in action. How many pointless losing wars does the US need to start, how many governments does it need to overthrow only to see them revert to something even worse, before you get that you're not about world peace?

You are not "the good guys." The US is just in it for money, and that money only for a fraction of its population. It's just a good thing the US can't actually win a war it starts, or they'd have taken over by now. What's actually happened is that the likes of North Korea looked at what happened to Libya and Iraq and realized that they needed nuclear weapons. What a great outcome for "world peace," right? I'm sure those nuts won't sell the technology, and other countries won't follow suit so as not to be "projected" all over.

how exactly do modern day bombers go "toe to toe"? do you send one into the other to crash into it?

Surely all you want to do is shoot them down. Bombers are purely offensive weapons. No one needs them unless they plan to attack somewhere else

> avoid boondoggles

The DoD budget, as usually reported, generally doesn't include funding allocated for specific operations (e.g. the Afghan or Iraq conflicts).

> focus on proven tech

The average soldier has always wanted proven tech. But what they really need (and what the officers ask for) is training. More HUMVEEs and Arleigh Burke destroyers, fewer wheel 4.0s and Zumwalts.

We are at the top of the power curve with regard to total dollars spent, but we're actually only about 4th in terms of defense spending as a percent of GDP (I think we might even be around 5th). Soft power remains our major advantage. Not that bad when you consider our outsize role in world military affairs.


> (I think we're around 5th). Soft power remains our major advantage. Not that bad when you consider our outsize role in world military affairs.

https://www.forbes.com/sites/niallmccarthy/2015/06/25/the-bi....

Your link says the US is 4th in a list of selected countries, not in the world. On this page, the US is not at the top: https://en.wikipedia.org/wiki/List_of_countries_by_military_...

That’s as a % of GDP, and says more about the GDP than the military spending. The US is number 1 in total military spending many times over any other country in absolute terms.

Sure, as number 1 in nominal GDP.

> We are at the top of a power curve with regard to total dollars spent, but we actually about 4th in terms of defense spending as a percent GDP

Any reason why percent GDP is the right metric? If we're normalizing I would've expected per capita instead.

Then the Vatican would be topping the charts.

You sure about this? Any citations? I saw somewhere that it'd be Saudi Arabia.

I think you have a point about the nuclear weapons research, production, and maintenance. However, the parent post was saying that the "bomb" itself is cheap; it's getting it to the location it's needed that's expensive. Airplanes and smart bombs are logistics.

On the other hand, every time we fire a cruise missile, we're shooting off the equivalent of an NIH R01-scale research grant.

Serious question: how much of that cost is because we don’t have to fire them more?

WW2 is the only real example that can answer this kind of question. For all the global skirmishes, regular war games, and other exercises, since WW2 there really hasn't been sufficient conflict to prevent manufacturing an ahead-of-time stockpile of desired weaponry at the prices set by companies in the military-industrial complex. Prices did come down in WW2, but it's hard to attribute that specifically to "we used everything we built and wanted more delivered yesterday" when so many other factors were involved: wartime rationing, production regulations, patriotic decision making affecting the prices people charged the government, and the simplicity of the things being manufactured compared to the complexity of modern munitions.

At the end of the day I’ve personally settled on “yes but probably not as much as you might think”.

Multiple sourcing probably played a big role. Lots of companies were making Sherman tanks, for instance. IBM was making rifles along with many other companies.

There is no second source for the F-35.

The U.S. Navy has apparently bought 8,000 Tomahawk missiles overall and we've shot 2193 of them - but I genuinely don't know when Raytheon's volume discounts kick in.

What number would you place on the threat that Iraq posed to the United States? Would it be more or less than $1.1t?

But doesn't that mean that it would be cheaper for an adversary to just wait until the excessive military spending makes the US implode, figuratively, in debt? This same concern worried former president Eisenhower back in his days[0].

A similar argument is usually touted regarding the economic strain that the 'Space Race' caused the U.S.S.R.[1].

[0] "The Cold War: An International History", p. 42-43 https://books.google.com/books?id=Yv2FAgAAQBAJ&lpg=PA43&pg=P...

[1] "Comparison of US and Estimated Soviet Expenditures for Space Programs", p. 3-4, https://www.cia.gov/library/readingroom/docs/DOC_0000316255....

The Space Race bankrupted an already failing command economy in the USSR. However it was a massive economic boon for US capitalists, as evidenced by our aerospace and defense industries today. The great thing about defense spending is that it is almost entirely domestic. We're not running up some huge trade deficit with a foreign power. It creates millions of jobs from Florida to California.

How does that apply here?

I believe the question is “does the spending result in a net benefit to society?”

I’d add in a consideration of the opportunity cost.

That’s not the broken window fallacy though.

That only works if you can survive the overspend, because it's being used to actively destroy you at the moment.

Ah, ... I see they've cut back on the education and civility budgets to accommodate for more military spending.

Who does the US intend to fight, and over what?

This logistic spending is certainly not for national defense - it has two oceans, a navy, and two thousand nuclear warheads for that purpose.

> Guns and bombs are cheap.

No, they are not. Go look up how much the military spends on such things. It is a lot. (Per unit and in aggregate.)

Apparently a military logistics expert system saved the US enough money during the Gulf War period that the savings essentially paid for all the Pentagon spending on AI research to that point. (And probably more than was spent on AI.)

The large US defense budget is almost entirely due to human costs.

The US has one of the highest median incomes on earth. Now combine that with the large human force that the US military has. The bases, veterans, healthcare, various benefits, retirement, equipment for the humans, salaries, and on it goes.

For example missiles + munition costs are a mere 2% of the military budget. Shipbuilding and maritime systems is a mere 3%. Aircraft and related systems is 5.x%. R&D is about 9%.

The budget related to operating the bases, maintenance, salaries, benefits, vets, and all other related human costs = $500 billion.

There's a strong argument that if you adjust China's military spending to factor in the difference in income / per capita costs for human soldiers vs the US (basically PPP adjusted), that China outspends the US already (and that's with an economy 1/3 smaller):


The DoD budget exceeds Google's market cap, and their research budget exceeds Google's annual revenue.

That’s nuts, they could buy one Google per year. In five years they could own all the big tech companies.

Why is that nuts? The tech industry is peanuts.

But they don't spend it nearly as efficiently. Gov is slow and bloated. It's why government contracts are so coveted: lots of money, long-term contracts, and overages are just part of life.

Ok, but say 200 planes need to be upgraded so that they don't get shot down by ____ new weapon. It's not possible to build an ASIC for this, so you're probably looking at doing something on an FPGA. You need a few FPGA hardware engineers, electrical engineers, and software engineers. Say the project only takes a year to design, build, and test, and then you spend an extra 6 months because Xilinx's FPGA tools basically don't work. You've got a minimum of $1 million in salary, and that's not including the facility, materials, and testing costs. Even if the program costs $20 million, if it saves 40 planes, you can't tell a dead airman's family that their loved one's life wasn't worth $500,000.

America does that all the time with its poor people.

And its soldiers. The IED situation took a long time to be addressed with more suitable equipment. Not that the U.K. did any better.

Besides, DARPA funds arguably led to Infocom and their great text adventures from the 1980s.

I have questions, if anyone knows something about hardware. What would a "silicon compiler" let one do? What exactly gets easier/cheaper and what exactly could new chip designs yield?

It's difficult for me to be sure, since the article makes it sound as if they're attempting something novel, but synthesis tools are standard in ASIC/FPGA design flows.

Currently, the best synthesis tools are closed-source and extremely expensive. Imagine the benefit of having gcc/clang be free software. That is the kind of effect that is at stake here.

Usually, hardware designers will write RTL code (Verilog/VHDL) which describes the hardware slightly above the gate level. In order to turn this description into a web of logic gates (called a netlist), the design is processed by a synthesizing program. The produced netlist describes exactly how many AND, NAND, OR, etc. gates are used and how they're connected, but it doesn't actually describe where the gates are placed on the chip or the route the interconnections take to connect the gates. To generate that info, the netlist is fed into another synthesis tool (usually called place and route).

This is a simplified version, but even at this level of detail, there are important factors affecting chips:

- How many gates? (less might be better)
- How far are the gates from each other? (closer is better: less power, area, cost, timing)
- How often will the gates switch? (less is better)
- More....

More advanced synthesis tools improve area, cost, power, timing. They also allow designers to have less expertise and still obtain the same result as experienced designers by optimizing out micro-level inefficiencies in the design (though experienced designers will also lean on the synthesis tool).
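The RTL-to-netlist lowering described above can be sketched in a few lines. This is a deliberately tiny, hypothetical toy (all names invented): it flattens a boolean expression tree into a list of primitive gates with fresh internal net names. Real synthesizers also optimize, map to a cell library, and meet timing, but the shape of the transformation is similar.

```python
# Toy "synthesis" sketch (illustration only): lower a boolean expression
# tree into a gate-level netlist. A netlist entry is
# (gate_type, input_nets, output_net); internal nets are named n0, n1, ...
import itertools

def synthesize(expr):
    """expr is a net name (str) or a tuple like ("AND", lhs, rhs).
    Returns (top_output_net, netlist)."""
    netlist = []
    counter = itertools.count()

    def lower(e):
        if isinstance(e, str):              # primary input: already a net
            return e
        gate, *operands = e
        ins = [lower(op) for op in operands]
        out = f"n{next(counter)}"           # allocate a fresh internal net
        netlist.append((gate, tuple(ins), out))
        return out

    return lower(expr), netlist

# y = (a AND b) OR (NOT c)
top, nl = synthesize(("OR", ("AND", "a", "b"), ("NOT", "c")))
```

A real flow would then hand a netlist like `nl` to place and route.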

To expand: compiling silicon isn't like compiling code, even though it does use a hardware description language. Not only do all parts of your "code" run at once, but you're also laying out physical transistors and mapping their connections. This is fundamentally an NP-complete, traveling-salesman-like problem with an absurd exponential explosion of complexity: how do you route thousands or millions of connections that can't intersect on a 2D plane while still doing what the code describes? Oh, and the fun part: unless you're careful, a change in a completely unrelated bit of code can break almost anything in the system by making it impossible to route connections without screwing up the timing of all the little bits.

There is no type checker that can deduce whether your design will work or not and then output machine language. At the end of the day, with silicon you have to actually figure out whether your design can be physically manufactured by running a long compiler process and then testing it, often with rigorous simulations before moving to the fab process.
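Routers typically attack this one net at a time with a wavefront search. As a hedged illustration (the classic Lee/BFS maze router, not any commercial tool's algorithm), here is a grid router in Python; the demo at the bottom shows how a net routed earlier can make a later net unroutable, a tiny version of the fragility described above.

```python
# Toy Lee/BFS maze router (hypothetical sketch): route one net at a time
# on a 2D grid; cells used by earlier nets are blocked for later ones.
from collections import deque

def lee_route(width, height, src, dst, blocked):
    """Return a shortest src->dst path as a list of cells, or None."""
    frontier = deque([src])
    came_from = {src: None}
    while frontier:
        cell = frontier.popleft()
        if cell == dst:                      # reached: walk back to source
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                              # net is unroutable

# The first net claims the whole bottom row; the second can then no longer
# reach (2, 0) at all.
net1 = lee_route(5, 5, (0, 0), (4, 0), blocked=set())
net2 = lee_route(5, 5, (2, 4), (2, 0), blocked=set(net1))
```

Because nets are routed greedily in some order, the routability of one net depends on every net routed before it, which is why a seemingly local change can ripple through the whole layout.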

I only have a vague idea about the first two of your questions: I guess they use the term silicon compiler as a description of an ideal state in chip synthesis software where you can go from high-level logic descriptions (VHDL source) to the final chip masks without any human intervention. Right now, this requires a lot of manual work in intermediate stages of the process. Being able to do away with that would simplify and speed up the process. But it also means tackling pretty nasty NP-complete optimization problems.

The standard joke is PDF2GDS.

PDF is the document where you write the specs.

GDS (actually GDSII) is the geometry description of chip layers you send to the foundry for fabrication.

I don't quite get the open source angle in the comments here.

If I managed to get my grubby hands on a moderately modern computer, I can use all manner of open source software and I can create wonderful new software. The barrier of entry is fairly low in rich countries.

If AMD open sourced all the design aspects of their chips, I would have to get a loan to build a $100 million fab to have any practical way to enjoy it?

I can see that if Intel/AMD/NVidia/Apple shared all aspects of their chips, cross-pollination might bring great things, and academic research would be boosted and might end up giving back more to the community at large, but you are talking about very few entities across the world that can afford fabs.

I believe you have it backwards: this is exactly why there is actually very little risk for the big guys to actually share their knowledge.

The financial entry barrier for building a fab is so huge that what in heaven's name would intel lose if they published the RTL for - say - their integer division hardware to show and teach the whole world how it's done when real professionals take a stab at it?

And if they're scared AMD might copy their integer division, why not publish the Verilog code from 2 or 3 generation old h/w? (and this is probably a bad example, I believe AMD and Intel are essentially done competing on stuff like that).

But what I am talking about here is basically unthinkable given the current culture in the EDA world: a person suggesting this inside one of the big shops would be committing career suicide.

Conversely, if you navigate EDA discussion boards a little bit, there is no end to the snarky or sometimes downright insulting comments made by big shop insiders about how lame and terribly inefficient the open source hardware designs published on the net actually are.

In other words: mocking outsiders for their ignorance instead of teaching them how to do cool stuff. That's the culture of the EDA world. Time for a change.

What are the EDA discussion boards? Curious to peruse.

> If AMD open sourced all the design aspects of their chips, I would have to get a loan to build 100 million fab to have any practical manner to enjoy it?

Try $1 to 5 billion. We're talking about something that over half of all extant nation states wouldn't be able to pull off without devoting 10-50% of their annual GDP to the project.

Even the large designers (AMD, Apple, Mediatek) don't have their own fabs. You would "just" need to order a design made - probably for hundreds of thousands of chips to make any sense.

Is there information on what this step would actually require?

First, you have to choose your feature size and manufacturer based on your specs because that will lock down your available cell library. Cell libraries are an abstraction over the masks/dopants and describe how to fab the transistors and higher level logic gates, made by each manufacturer for each feature size. STM, TSMC, Global Foundries, etc. have their own cell libraries and each process node gets a different one (there could also be different libraries for consumer, medical, automotive, etc.). If your design isn't pushing the physical limits of your chosen process, you can usually use a standard cell library supported by your synthesizer so that you can stay in HDLs instead of tweaking masks.

Once you've finished your design (I'll leave this as an exercise to the reader :)), you'll start a back and forth with the foundry which will run its own database of rules against your design and work with you to make tweaks that better fit their fab. After you agree on a final mask, you and the fab will run verification simulations and eventually, after several months, you'll get your chips with that new factory smell.

Yes, you can "just" order some chips made. I don't have the URLs off the top of my head but there are a few companies here in California that I've worked with. You can literally walk into their offices unannounced and if you have the money, they will start right then and there. If I remember correctly, in 2012 it cost about $100k to get started with STM's 45nm process (might have been 28nm by then, don't remember). Today, 28nm can cost as low as $4-12k if you're a Canadian researcher [1] - although that's almost certainly subsidized.

It all seems scary only because the process is so complex that no one person has a grasp on even a minor fraction of what's going on.

[1] https://www.cmc.ca/en/WhatWeOffer/Products/CMC-00200-02843.a...

There's also S-ASIC's like eASIC provides on 45nm and 28nm. What do you think about that in today's market?

(No affiliation. I find this option interesting with the varied responses I get on the topic. Triad Semi was another interesting option in this space.)

I've only ever made ASICs for high power RF equipment and that was using large feature sizes (>180 nm) in the 90s. I don't think structured ASICs even existed at that point so I can't offer any insights.

The idea is really cool. My go to advice: know your specs, know your vendor, know your delivery guy. If you can implement your design within the restrictions of your chosen structured ASIC, you trust the vendor to deliver on time/in quantity, and you can get them soon enough (I don't know the turn around time) then go for it.

Civilitty's answer is good, but I think the $4–$12k price is not subsidized; it's just CMP’s normal price. Older process nodes are even cheaper. MOSIS is the US equivalent of CMP. I don't know what the Chinese equivalent is but I am sure there is one.

This is a process a substantial number of undergraduate students manage to get through in a semester each year. Some universities will foot the MOSIS or CMP bill if the design passes DRCs.

You don't have to build your own fab or even get a fab to dedicate a whole wafer to you.

If you are willing to share silicon space with other chips, you can prototype chips on older nodes (e.g., 14nm) through services like MOSIS for "just" tens of thousands of dollars: https://www.mosis.com/products/fab-schedule

The argument to share your source code extends beyond the argument that the average end-user can take that code, alter it and redistribute it. The bar for entry for contributing to hardware is not 'has their own multi-billion dollar fab center'.

Nvidia did release an open source deep learning core: RTL, c models, regression tests.


The source code is on github.

What even is this project? There are no details on the DARPA page either.

Is it for PCB design, ASIC design or both? Is a constant current source also considered a “small chip” or just digital designs?

Basically every EDA tool already has the ability to group sub modules which one could distribute as open source if they chose.

Do it in kicad and put your circuit into a hierarchal symbol if you must be all open source.

I get hard IP blocks from vendors all the time for inclusion in our ASICs.

It’s not the EDA tools that are preventing “openness”.

I was just joking the other day how all the PCB designs I’m reviewing lately are just conglomerations of app note circuits and it’s really boring. So to me it seems like there’s plenty of design reuse. :)

I'm surprised to see no recognition of yosys, arachne-pnr and the icestorm tools which together are a free and open source HDL tool chain which already exists and is pretty widely used.

These two projects are exactly the road the EDA industry should be taking.

Unfortunately, they make very slow progress because they have to painstakingly reverse-engineer everything (with the possible exception of Lattice stuff).

For Xilinx chips, where exactly nothing is publicly documented at the lower levels of the stack, they have to spend mountains of time re-discovering everything.

Even if I deeply admire the effort and how far they've gotten, I can't help but think: what a terrible waste of human talent and time.

Edit: I once asked a Xilinx employee why they didn't open-source their entire software stack. It struck me that they were in the chip manufacturing business, not the toolchain business (a blisteringly obvious fact when you look at the quality of such monstrosities as, e.g., Vivado), and that open-sourcing the tools could enlarge their target market by a large margin.

The culture is so broken in that space that I don't think he even actually understood the question.

Note I don't work at Xilinx. And be prepared that what I'm about to write may seem incredibly cynical, sorry.

But if I were to take a guess at why the culture is the way it is, I'd say that it's because programmable logic is fundamentally relatively small logic tiles replicated across large areas.

That means across competing companies there's a high chance for infringing upon arsenals of patents for rather mundane things like interconnect, logic families, or memory cell layout where there are only a handful of viable alternatives yet the patent offices were likely duped into accepting multiple legalese interpretations of the same underlying tech. It's a minefield.

Xilinx is not really a chip manufacturing business either. They're fabless. Imagine having a company that designs RAM memory and outsources everything beyond the cell design. If you don't own the foundry itself you're not going to last very long unless you encrypt the memory access protocols, obfuscate your (probably patent infringing) hardware architecture by layers of undocumented tooling, and dominate the industry by buying up any upcoming contenders while cross-licensing stuff to build up a complex ecosystem of interdependent tools required to get even the most basic project done.

Thanks for that perspective, I had not considered the angle that the EDA industry is scared of itself because of patents.

Once more, the problem can be traced to the root evil that is patents, sold to the world as essential for innovation, but which end up having the exact opposite effect.

> with the possible exception of Lattice stuff

> For Xilinx chips, where exactly nothing is publicly documented at the lower levels of the stack, they have to spend mountains of time re-discovering everything.

The situation with Lattice parts was the same; they reverse engineered them.

Thanks, didn't realize that; in my naiveté I thought Lattice actually helped :(

A great blog post here (https://wp.josh.com/2017/10/23/adventures-in-autorouting/) about some different auto-routing software.

Those routes look terrible. Maybe if everything you're doing is electrically short and there are no high speed routes, it would work great. Basically it's wonderful if all you make are blinkenlight projects.

But if you have to input all of the data that makes up a good route (including coupling, ground/power planes, trace length matching, PDN noise, stackup, EMI/C rules, etc, etc), and then review the whole thing anyway, what's the point of the autorouter?

Also the article says nothing about the various algorithms involved, which are interesting from a computational geometry standpoint. But the gulf between algorithm or academic example and "commercial router" is huge!

This blog post is just... not very good. It's how I imagine a software person who made some stupid blinky LED thing thinks about hardware.

One of the main points the post is missing is the distinction between a placer and a router. Most (all?) PCB design tools do not come with a connectivity-based placer. They only ship with a maze-, grid- or shape-based router.

Placement and routing together is generally the domain of the very expensive EDA tools referred to in the main article.
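For a flavor of what a connectivity-based placer does, here is a hypothetical toy (all names and numbers invented): simulated annealing that swaps cells between grid slots to minimize total half-perimeter wirelength (HPWL), the standard cost proxy. Real placers handle congestion, timing, and millions of cells; this only shows the core idea.

```python
# Toy simulated-annealing placer (illustration only): assign each cell a
# distinct (x, y) slot, minimizing the summed bounding-box size (HPWL)
# of all nets.
import math, random

def place(cells, nets, slots, iters=5000, seed=0):
    rng = random.Random(seed)
    pos = dict(zip(cells, rng.sample(slots, len(cells))))

    def hpwl():
        # Half-perimeter wirelength: bounding box summed over all nets.
        total = 0
        for net in nets:
            xs = [pos[c][0] for c in net]
            ys = [pos[c][1] for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    cost = hpwl()
    best_cost, best_pos = cost, dict(pos)
    temp = 5.0
    for _ in range(iters):
        a, b = rng.sample(cells, 2)
        pos[a], pos[b] = pos[b], pos[a]          # propose swapping two cells
        new = hpwl()
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new                            # accept (sometimes uphill)
            if cost < best_cost:
                best_cost, best_pos = cost, dict(pos)
        else:
            pos[a], pos[b] = pos[b], pos[a]       # reject: undo the swap
        temp *= 0.999                             # cool the schedule
    return best_pos, best_cost

# Example: a 4-cell chain on a 2x2 grid; the optimal HPWL is 3.
cells = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("c", "d")]
placement, cost = place(cells, nets, [(x, y) for x in range(2) for y in range(2)])
```

The "accept uphill moves with probability exp(-delta/temp)" rule is what lets annealing escape local minima early on, which matters far more on real designs than on this toy grid.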

It should be noted that Andreas Olofsson used to run Adapteva and close to singlehandedly designed the Parallella processor.

This kind of spending is so foreign to me.

At $250,000 per year per person, that supports 100 people for four years.

I… suppose that's not completely insane?

Fully loaded (inclusive of benefits, payroll tax, etc.). That’s actually not a lot.

Thought I'd see Olofsson in there.

Sounds like you're familiar with this space and this guy; any thoughts you'd like to share?

I'll bite but consider all I write poorly informed wild speculation.

The person he mentions has been behind a lot of advancements in both getting cheap FPGA based boards to the hands of users and creating libre silicon-proven IP. He recently left his company that was making Zedboards and Parallella among other things to join DARPA.

The last big project he seemed to have worked on was a 1000-core processor of which a lot was open source, but a lot of the important bits were protected, likely due to an NDA with the factory or the provider of the tools they used.

I hope what he is going to work on (EDA) will finally enable DARPA and others to fund truly open source designs and tools and work with fabs that would allow access to really cheap ASICs or FPGA-based systems, building on all the momentum around platforms like RISC-V and his experience creating the Epiphany cores.

OH! is an open-source library of hardware building blocks based on silicon proven design practices at 0.35um to 28nm. The library is being used by Adapteva in designing its next generation ASIC.


I wish FPGA design tools just worked...

They should have given 1% of it to Clifford Wolf.

I wonder if the decline of Moore's Law will eventually lead to the commoditization of ASIC fabrication?

Of course fabricating a chip will never be as cheap as writing a bit of software, but maybe it will eventually be as cheap as, say, injection molding a piece of plastic?

Heading the opposite direction - at least for the bleeding edge 7nm/5nm/3nm ASICs you want in your next computer or smartphone.

Manufacturing costs (particularly fixed costs) are going up exponentially. We're getting stuck on economics before physics. You need to be able to sell 10million+ parts to cover your costs.

There's more opportunities if you don't need the best performance or lowest power and use an older manufacturing process node like 65nm.

No matter what, you'll need to clear a certain volume floor. Masks are expensive [0] especially for nicer process nodes.

Once you have a mask set and fab time, it's off to the races. IMO $1M really isn't bad for a simple chip run.

[0] https://anysilicon.com/semiconductor-wafer-mask-costs/

I might be able to weigh in here.

Having these tools as open source and freely available is a huge deal for so many industries. I've worked with these tools at an academic level and now at a startup, and it's amazing the magnitude of this enabling technology. Just the tooling investment will be huge; making the core solvers and algorithms more accessible should spawn a whole new wave of startups/research in effectively employing them. Just these days, I've heard of my friends building theorem provers for EVM bytecode to formally check smart contracts and eliminate bugs like these [0].

These synthesis tools roughly break down like this:

1. Specify your "program"

- In EDA tools, your program is specified in Verilog/VHDL and turns into a netlist, the actual wiring of the gates together.

- In 3D printers, your "program" is the CAD model, which can be represented as a series of piecewise triple integrals

- In some robots, your program is the set of goals you'd like to accomplish

In this stage, it's representation and user friendliness that is king. CAD programs make intuitive sense, and have the expressive power to be able to describe almost anything. Industrial tools will leverage this high-level representation for a variety of uses, like in the CAD of an airplane, checking if maintenance techs can physically reach every screw, or in EDA providing enough information for simulation of the chip or high-level compilation (Chisel)

2. Restructure things until you get to an NP-complete problem, ideally in the form "Minimize cost subject to some constraints". The result of this optimization can be used to construct a valid program in a lower-level language.

- In EDA, this problem looks like "minimize the silicon die area used and layers used and power used subject to the timing requirements of the original Verilog", where the low level representation is the physical realization of the chip

- In 3D printers it's something like "minimize time spent printing subject to it being possible to print with the desired infill". Support generation and other things can be rolled in to this to make it possible to print.

Here, fun pieces of software in this field of optimization are used; Things like Clasp for Answer Set Programming, Gurobi/CPLEX for Mixed Integer programming or Linear programs, SMT/SAT solvers like Z3 or CVC4 for formal logic proving.

A lot of engineering work goes into these solvers, with domain specific extensions driving a lot of progress[1]. We owe a substantial debt to the researchers and industries that have developed solving strategies for these problems, it makes up a significant amount of why we can have nice things, from what frequencies your phone uses [2], to how the NBA decides to schedule basketball games. This is the stuff that really helps to have as public knowledge. The solvers at their base are quite good, but seeding them with the right domain-specific heuristics makes so many classes of real-world problems solvable.

3. Extract your solution and generate code

- I'm not sure what this looks like in EDA, my rough guess is a physical layout or mask set with the proper fuckyness to account for the strange effects at that small of a scale.

- For 3D printers, this is the emitted G-code

- For robots, it's a full motion plan that results in all goals being completed in an efficient manner.

[0] https://hackernoon.com/what-caused-the-latest-100-million-et...

[1] https://slideplayer.com/slide/11885400/

[2] https://www.youtube.com/watch?v=Xz-jNQnToA0&t=1s
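Step 2 above (reduce everything to "minimize cost subject to constraints") can be made concrete with a self-contained toy. All the numbers here are invented, and brute-force enumeration stands in for the ILP/SAT/SMT solvers (Gurobi, Z3, etc.) a real flow would call: pick one implementation variant per module to minimize total area subject to a delay budget.

```python
# Toy constrained optimization (illustration only, all numbers invented):
# each module has a "small" (less area, slower) and a "fast" (more area,
# quicker) variant; choose one per module to minimize total area while
# meeting a total-delay budget.
import itertools

MODULES = {
    #           (area, delay) small   (area, delay) fast
    "adder":   [(4, 9),               (7, 5)],
    "shifter": [(2, 6),               (3, 4)],
    "decoder": [(3, 8),               (6, 3)],
}

def minimize_area(delay_budget):
    """Return (area, delay, choice_dict) of the cheapest feasible solution,
    or None if no combination meets the delay budget."""
    best = None
    for combo in itertools.product(*MODULES.values()):
        area = sum(a for a, _ in combo)
        delay = sum(d for _, d in combo)
        if delay <= delay_budget and (best is None or area < best[0]):
            best = (area, delay, dict(zip(MODULES, combo)))
    return best
```

A generous budget lets everything use the small variants; a tight one forces the fast, larger ones; and an impossible one reports infeasibility, which is exactly the shape of the answers a synthesis tool's solver hands back.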

It would be great if they could make something like a Bluespec Verilog[0], but open source. This HDL is far better than traditional ones, IMHO.

[0] https://en.m.wikipedia.org/wiki/Bluespec

I don't see how this is not literally the same thing any HDL synthesis program does. How?

I see many chip vendors on the list of participating companies.

Won't this project reduce barriers to entry for their industry ? and if so, isn't it against their interests to participate?

I don't buy it. Having Cadence and Synopsys involved is like DARPA inviting MathWorks to do an open-source version of Matlab.

I don't believe keeping barriers to entry high in an industry is a good thing for the companies in that industry.

Quite the contrary, lowering barriers creates more opportunities and thereby new scope for growth.

It took MSFT 30 years to finally understand that lesson and start offering a free suite of dev tools.

The EDA industry still hasn't grokked that lesson.

Microsoft knew what they were doing: using secrecy, obfuscated formats/protocols, copyright law, and patent law to block as much competition as possible. They made billions in the process. The EDA vendors do something similar but mostly acquire competitors. There are just three of them covering the basic parts of ASIC design. They aren't competing on driving prices down, either.

So their strategy is smart until they, like Microsoft, get into a situation where fewer and fewer customers need them. I did think one could do a more open version of EDA, though. I was gonna try to talk to someone at the smaller player, Mentor, but they got bought. I'm keeping my eyes open for new players wanting a differentiator.

The participating companies make chips that require a huge amount of resources to happen: design engineering time, CAD software, tape-out cost, validation engineering time, ...

The CAD software is expensive, but it's not barrier of entry expensive compared to design engineering time, tape-out cost etc.

If the CAD software cost gets reduced, it would cut costs for all companies involved while still leaving a barrier to entry that's way too high for anyone but the best-funded companies.

From the article: "If successful, the programs “will change the economics of the industry,” enabling companies to design in relatively low-volume chips that would be prohibitive today. "

Yes. But it won’t change a thing for the big companies: they don’t make those low-volume (and low effort!) chips.
