This, to the 100th power.
The culture in the EDA industry is stuck in the 1950s when it comes to collaboration and sharing; it's very frustrating for newcomers and people who want to learn the trade.
As was pointed out by someone in another hardware related HN thread, what can you expect from an industry that is still stuck calling a component "Intellectual Property"?
The un-sharing is built into the very names used to describe things.
You read that right. Their build system outputs source code, generated from another C code base, using macros to feature-gate bug fixes depending on who the customer is. The account managers would send a list of bugs that a given client had experienced, and the back-office engineers would make a build that fixed only those bugs.
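To picture the kind of setup being described, here is a minimal sketch in Python of what such a build step might look like. Everything in it (bug IDs, macro names, the mapping) is invented for illustration; it is not Qualcomm's actual system, just the general shape of "turn a customer's reported bug list into preprocessor defines so only those fixes compile into the generated source":

    # Hypothetical illustration of per-customer bug gating.
    # Each reported bug maps to a feature-gate macro wrapped around its fix
    # in the generated C sources (all names are made up).
    KNOWN_FIXES = {
        "BUG-1234": "FIX_DMA_OVERRUN",
        "BUG-5678": "FIX_PLL_LOCK",
    }

    def cflags_for(customer_bugs):
        """Return the -D flags enabling only the fixes this customer reported."""
        return ["-D" + KNOWN_FIXES[b] for b in customer_bugs if b in KNOWN_FIXES]

    print(cflags_for(["BUG-1234"]))              # customer A gets only the DMA fix
    print(cflags_for(["BUG-1234", "BUG-5678"]))  # customer B gets both

The upshot is that every customer effectively gets a unique build, which is also why combinations of fixes can go untested until some customer happens to need both at once.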
Forget about collaboration and sharing. They haven't even figured out the basic business processes that many software engineers take for granted.
Makes perfect business sense! Those customers didn't pay to get those bugs fixed, so why should they get the fixes??? /s
This approach is wrong on many levels, but fixing the real cause is often a bit more work than "a hundred junior dev hours". In the meantime, you have to deliver to customers.
Edit: fixed a typo.
The "move stuff and break stuff" crowd forgets there's a market for "move slow and rarely break stuff" too.
So if you have qualified hardware w/ a certain driver version, you don't just ship a bunch of code fixes to a customer, you ship the bare minimum change set required to fix any specific issues they experience. Because you're sure as hell not recertifying every part revision with every driver version.
The problem in the hardware industry is far more insidious. Unlike with software dev, there is no concept of "full stack" or "devops" in manufacturing. The entire field is so hyperspecialized that you've got people dedicated to tuning reflow temperatures to ±5°C depending on humidity and local weather. No one person can have a proper big-picture view of the situation, so the entire industry is dominated by silos, each with their own micro-incentives and employees competing with each other for notoriety.
We were, however, an established client of Qualcomm working on a new design, so we explicitly asked for all bug fixes, and they refused except for the ones we could name. We got "lucky" in that no other client had needed both bug fixes at the same time, and when the build system broke, the one guy responsible for it was MIA, so we got the whole source dump (they didn't even test their fixes together until a client found problems).
I've used other Qualcomm products with different engineering support teams that are much better at source control, testing, and even devops but it's a very rare sight within the industry.
It stinks badly, but is a sad reality.
It'd probably be around today and up to date if it was open source. A shame it isn't. I don't even know who owns the rights to it these days, or if whoever owns it even knows they have the rights to it, due to spinoffs and mergers.
Perhaps you can ask the professor who holds the copyright on ABEL now? Then we can ask the holder whether it can be open sourced.
Miracles like this do happen - last year Symantec allowed the Symantec C++ compiler to be fully open sourced!
Perhaps not entirely. We had one lab session dedicated to it during my junior year in college. That was ten years ago, but apparently they haven't changed that (course description in English at the bottom of the page).
Needless to say, the ABEL projects were frustrating...indeed a shame that it's not open source.
Some more info (I'm surprised at the response here!):
"The ABEL concept and original compiler were created by Russell de Pina of Data I/O's Applied Research Group in 1981." This is false. I don't know what de Pina did, but ABEL was developed from scratch by the 7 member team listed in Wikipedia, and the grammar and semantics were designed by myself.
Is there some "copyright" printed on it? Before 1989 it was apparently "required" and if not printed apparently it matters if "the author made diligent attempts to correct the situation":
"you should assume that every work is protected by copyright unless you can establish that it is not. As mentioned above, you can’t rely on the presence or absence of a copyright notice (©) to make this determination, because a notice is not required for works published after March 1, 1989. And even for works published before 1989, the absence of a copyright notice may not affect the validity of the copyright — for example, if the author made diligent attempts to correct the situation.
The exception is for materials put to work under the “fair use rule.” This rule recognizes that society can often benefit from the unauthorized use of copyrighted materials when the purpose of the use serves the ends of scholarship, education or an informed public. For example, scholars must be free to quote from their research resources in order to comment on the material. To strike a balance between the needs of a public to be well-informed and the rights of copyright owners to profit from their creativity, Congress passed a law authorizing the use of copyrighted materials in certain circumstances deemed to be “fair” — even if the copyright owner doesn’t give permission."
It's not simple, but maybe an interesting starting point...
Maybe, if it is not a product actually being sold or even used as such anymore, there's a reasonable chance that the copyright holder wouldn't be interested in enforcing protection of the "historical material"?
But my degree is ME, not CS nor EE.
I'll have popcorn ready for the eventuality where IP blocks are widely available under GPL-type FOSS licenses and Intel|AMD|ARM|TI|... is eventually found to have included one or more of those open-sourced blocks, with an incompatible license, in their chips.
In software you can always replace a library that had incompatible license with another. Not sure how this would work with a chip.
As for attempts to modernize the EDA industry, I welcome that with open arms. And if in the process we end up open sourcing a lot of current IP blocks (think of USB, HDMI, etc. designs) - all the better.
Intuitively I'd thought something similar, but IANAL and could be completely wrong, of course :)
First off, the RISC-V community is based on the open ISA. There are several open implementations of the ISA. And the RISC-V community is making good headway in developing open tools, peripheral cores etc.
Secondly there are at least two attempts at collecting and tracking open cores for FPGA- and ASIC-implementations.
LibreCores is the newer project. They have collected quite a few projects:
Related to LibreCores is the SoC-builder and core package handler Fusesoc by Olof Kindgren. Fusesoc makes creating your SoC easy:
The older project is OpenCores. OpenCores has been quite tightly related to the OpenRISC CPU core and the Wishbone set of on-chip interconnect solutions. They have been used in many FPGAs and ASICs.
Then you have projects like Cryptech, which develops a complete, totally open Hardware Security Module capable of doing certificate signing, OpenDNSSEC signing etc. The Cryptech Alpha design, from PCB to FPGA cores and SW including pkcs11 handling, is open. The project has amassed quite a few cores. The PCB design is available in KiCad. (disclaimer: I'm part of the Cryptech core team doing a lot of the FPGA design work.)
Speaking of tools like KiCad, there are actually quite a few open tools for HW design. For simulation there are Icarus Verilog, Verilator, and cver, for example. They might not be as fast as VCS by Synopsys. But they do work. I use them daily.
For synthesis, P&R the state is less good. For implementation in Altera and Xilinx devices you currently have to use the cost free tools from the vendors. But there is work ongoing to reverse engineer Xilinx Spartan devices. I don't know the current state though.
But what has been reverse engineered are the iCE40 FPGA devices from Lattice. And for these you can use the open tool Yosys by Clifford Wolf (also mentioned below by someone else).
And if you are looking for open implementations of crypto functions etc., I have quite a few on GitHub and try to develop more all the time:
The SHA-256 and AES cores have been used in quite a few FPGA and ASIC designs. Right now I'm working on completing cores for the Blake2b and Blake2s hash functions.
I agree that we in the HW community are way behind the SW community in terms of open tools and libraries (i.e. cores). But it is not totally rotten, and it is getting better. RISC-V is to me really exciting.
Unfortunately, while the tool is pretty nice, it never resulted in major adoption (Qualcomm has some tool using it internally, and a few others), and we haven't seen the idea of making reusable libs and components flourish.
Somebody would need to find this project and bring it up to Python 3.6. With current tooling, it would make writing code in it really nice and ease the creation of reusable components.
Reusability in raw Verilog is hard.
Every CAD system I know of supports ways of grouping circuits into modules and libraries for multiple instantiation. And those libraries are distributable.
A review with screenshots: https://www.javaworld.com/article/2076574/developer-tools-id...
If you have the budget for professional assembly, then I would start with a Digilent product that has an available reference design you can copy later. First get used to the FPGA and how it works (they are largely incomparable to a CPU or GPU except for the fact that they both have transistors). Then, design a simpler board with a smaller FPGA and work your way up to the big 500-1k pin chips.
The fear of BGA parts is seriously overblown. The only really expensive part is that finer-pitch parts require tighter tolerances on the PCB, which will take you out of the batch PCB services price tier.
edit: toaster oven, not toaster
If you have actually figured out how to consistently make boards with BGA-256/512 0.5mm pitch parts with just a toaster, I'd love to learn more about your technique. Even with professional inspection equipment and a pick and place, it's rarely worth the effort unless I have an imminent deadline.
For a beginner, I doubt the cost of trial and error would be cheaper than just having someone else do it. He could go with a larger pitch [edit: and smaller pin count], but I have never successfully introduced someone to FPGAs without a relatively huge chip capable of running a soft core closer to what they're used to in general-purpose computing.
As for the reflow profile I raise the temperature gradually, with the thermocouple taped directly to the PCB with kapton tape. I do this manually, not with an automatic controller.
My toaster oven is a convection model with a fan which helps, and I had some trial and error with the rack position.
The part in question is the TPS62180, which is a dual-phase step-down converter.
Edit: I think you are conflating two things. I've done both high ball count and 0.5mm, never both at the same time. I've soldered high-ball-count Xilinx FPGAs at 0.8mm, which was relatively easy. Also, I stress these are for prototypes. You can do production runs (that is how the company behind 3DRobotics got their start in the early days of quadrotors), but it's a whole other ball of wax.
If you can, I'd really recommend talking to dentists who are retiring in your area or buying an X-ray machine on eBay. They're pretty cheap and relatively safe and will open up a whole world of PCB fab. The problem with BGAs is the lack of feedback: if you have a chip with hundreds of pins and precise requirements for power boot-up timing or dozens of impedance-matched traces, figuring out what went wrong with your boards is literally impossible without proper inspection.
Since you have an intuitive feel for soldering BGAs already, I think that with a visualization of the solder joints you would be able to solder high pin count chips at small pitches. Most issues that I've experienced with complex BGAs are caused by mass manufacturing, where you don't have the room to reflow every 10th board and you find out too late that a variable (from personal experience: El Niño) has changed and reduced your yields by double digits. If you can take the time to do it right by hand, the sky's the limit.
If your back is against the wall you can also hotfix design mistakes like swapping two LVDS MIPI2 pins by using a laser drill to strip the plating on microvias at an angle to preserve the trace above it and resoldering them with wire only slightly bigger than IC interconnects. An almost completely useless skill but nothing beats the feeling you get when you can command RoHS compliant surface tension to do your bidding. Fair warning though: here be dragons.
On the high speed digital front we do have 1 GHz DDR3 and it's worked very well - we don't even use controlled impedance. Of course we are extremely careful to keep traces short and length matched. Our application can handle a very small error rate; however, in practice we haven't seen any corruption at all in testing.
It's not the right approach if you are pushing the envelope of technology. But if you need a bit more grunt than an ATmega, by all means design in the ARM SoC and DDR3. You don't need fancy gear to do it.
Edit: Also thanks, we're looking into expanding more manufacturing in house and you've given me some ideas. I'll definitely look into an X-ray machine.
On the other hand, I designed an STM32F4-based design last year and it was glorious. SnapEDA for all symbols/footprints, Altium design vault for a specialized FPGA design, and GitHub for an STM32 reference design PCB file using the same layers/copper weights as my requirements, with imported fab design rules for good measure. The PCBs cost like $1500 and assembly was only $1600. 10 years ago I used to pay that much per single board in quantities of 30, for roughly the same complexity. This time, the whole thing (minus firmware) took a weekend.
Good luck with your mfg! If you have a chance, please blog about your experiences. There aren't enough people spreading the art of solder.
It seemed like it wasn't that I needed fancy gear to make blinky lights, but rather that for anything better than 8-bit, kilobytes, and low MHz, the only parts available were the fancy commercial stuff.
I do occasionally do small (100+ pin) BGAs - for that I get a steel solder stencil made and use solder paste and a cheap Chinese reflow oven.
Once you see how easy it is to reflow a board you won't want to bother with hand soldering DIP packages anyway.
Been a few years, but IIRC a labmate of mine got a few units of .5mm highish ball count to work using that process and a toaster oven, but it is a bitch to test.
Would maybe do for a hobby project assuming you have access to the equipment (e.g. a hackerspace or something) and are willing to drop a buttload of time into trying to get it to work, but if you are working for someone it is probably more cost effective to save the man-hours and get it done by someone with the right equipment.
The only time it really fails is if I'm lazy applying the solder paste or if I stop paying attention during reflow. Doing it in house saves a ton of money which is really important when you are trying to get your first prototypes up and running. Plus it also keeps you deeply in tune with what is possible to manufacture and what isn't. DFM stops being just a checklist.
We'd have a lot more hardware startups if people realized how cheap and easy it's become. You need the prototype to get funding.
If you screw up the solder job, you can use a hot air gun to remove the BGA part, reball it (using a stencil, fairly cheap if it's a common package), then try reflowing again.
If you practice for a few hours on scrap electronics, you can get good enough at it.
It is a bit rude to assume I don't know what a BGA part is.
Anyway, if you do this stuff often enough, I see no reason why you wouldn't get the proper tools; a rework station and an actual reflow oven, or something with a PID-controlled heating element, would make your life so much easier. Working with bad tools would drive me nuts.
The reason larger BGAs center themselves is that as soon as the solder goes fluid there is a lot of accumulated surface tension trying to reduce the size of the bridge, and that will center the part all by itself. For that to work properly, though, everything has to become fluid more or less at once and stay fluid until the part has shifted to the right position.
As for the reflow, you can actually get more consistent results with the toaster oven - it just won't be able to handle the volume of actual production. Whatever you do, just don't try going "semi-pro" and getting one of those IR ovens from China. Stick with the $40 Walmart special. The toaster oven, when heated slowly, is much less likely to have hot and cold spots. Stenciling and placing parts take up a lot more time and are much more error prone.
The trick with BGA parts is to use solder paste and a stencil. Once you try it you will never go back. By the way, make sure you read the Xilinx documents extremely carefully. It's easy to forget to wire PUDC_B and prevent the thing from ever being programmed. There are lots of little gotchas.
Have you ever considered a DIY youtube channel?
If you, @slededit, others on this thread did something like Louis Rossmann, that'd be fantastic. I've never even seen a dentist's x-ray machine, much less seen it used for DIY design. That'd be amazing to watch.
(But maybe y'all would dial down the rhetoric from an 11 to an 8 or 9.)
With LVDS, make sure to keep tracks short, have some impedance control (doesn't need to be super strict), and match trace lengths. One issue I had before was a clock (90 MHz I think) that wasn't quite in sync. Ended up ordering a new board with an 8-channel inverter, and snaking the clock through a few of the gates to get a slight delay.
I also thought I'd try to answer some questions that I've seen in the comments. Disclaimer: as a lowly PhD student I am only privy to some information. I'm answering to the best of my knowledge.
1) As mentioned by hardwarefriend, synthesis tools are standard in ASIC/FPGA design flows. However, chip design currently often still takes a lot of manual work and/or stitching together of tools. The main goal of the compiler is to create a push-button solution. Designing a new chip should be as simple as cloning a design from GitHub and calling "make" on the silicon compiler.
2) Related to (1). The focus is on automation rather than performance. We are okay with sacrificing performance as long as compiler users don't have to deal with individual build steps.
3) There should be support for digital, analog, and mixed-signal designs.
4) Rest assured that people are aware of yosys and related tools. In fact, Clifford was present at the event :-) Other (academic) open source EDA tools include ABC for logic synthesis & verification, the EPFL logic synthesis libraries (disclaimer: co-author), and Rsyn for physical design. There are many others; I'm certainly not familiar with all of them. Compiling a library of available open source tools is part of the project.
Edit: to be clear, WOSET has been planned, but will be held in November. Submissions are open until August 15.
Compiling? What's that even mean? What about funding?
I wrote arachne-pnr, the place and route tool for the icestorm stack. My situation changed, I didn't see a way to fund myself to work on it and I didn't have the time to work on it in my spare time. I assume that's one of the reasons Clifford is planning to use VPR going forward (that, and it is almost certainly more mature, has institutional support at Toronto, etc.) I would have loved to work on EDA tools. I've moved on to other things, but I wonder if these programs will fund the likes of Yosys/SymbiFlow/icestorm/arachne-pnr.
Someone who is NOT in the business of SoC design may have wanted to open source something like this just to commoditize their SoCs. But if you're in that business already, then why help reduce the barrier to entry?
But when it comes to EDA, the utterly closed-source culture of the industry has completely prevented what essentially amounts to a horde of smart people from working on the problem: no one - except a very small number of insiders - even knows what the problems are.
You have layers upon layers of closed source EDA tools and home-brew scripts, all built around horribly abysmal "standards" like SystemVerilog, mostly cobbled together in TCL and Perl.
For instance, the following sounds like a horrible joke, but it is real. In a real-life, world-class company, port connections between modules are handled using a Perl script. This Perl script invokes, I shit you not, an Emacs-Lisp Verilog parser, that scans your files to figure out what submodules you are instantiating, infers the ports that you want to connect, then injects a blob of connections directly into the source code file you were editing. The engineers at this company then commit the output of this Perl script into their source code repository. Predictably, the diffs are full of line noise.
And you can't escape from this, any of it, because there are so many tools involved, and they all do such useful things, that you end up truly stuck using whatever features all of the tools support. And `ifdefs. Sigh...
Working in IT (aka DevOps, continuous tech support), it's just cutting & pasting strings. Maybe some data quality stuff.
Gods I miss product development. Actual design & programming.
You can also look at it from the other perspective: at least you can still make money making hardware tooling. That is not possible for software tooling, as everything is open-source already.
You have a very distorted view of OpenSource. People working on OpenSource-related projects do make money. And lots of it. They're just not selling what essentially amounts to obfuscated source code, a way of doing business that is apparently a great source of astonishment for HW folks.
The other point is the state of hardware tooling. Tool vendors may be making money, but their customers are getting shafted something fierce.
Every single EDA tool I've used feels like it was written in the 80's: slow, bulky, bloated, opaque, complicated, unpredictable behavior, need band-aids all over the place to get it to do what you actually need (Tcl anyone?)
The typical byproduct of an industry focused on secrets instead of innovation.
I was talking about compilers and such. Do you have any evidence to support this case?
In the SW world tooling people are busy kick-starting entire industries. You can argue they may be making less money, but as to their overall utility to the ecosystem, clear win AFAIC.
Here's an example:
Where is the LLVM of the HW world?
Nowadays, I suppose you can still make money writing compilers, but besides being a master at compiler design, you now also have to have the skills to sell your services, which is difficult, time-consuming and boring. You can't just sell shrink-wrapped products like before.
There's a cargo-cult confidentiality that achieves absolutely nothing except completely frustrating anyone who tries their best to keep their interest in it alive.
Try writing an FPGA SDRAM controller that reliably operates across frequency, voltage, and temperature at 150 MHz. It is doable but a challenge with the common Xilinx boards that are sold to hobbyists for $80. If you can do that, then think about designing a circuit which operates 20 times faster, and negotiates a protocol which is 100 times more complex than what an SDRAM controller has to do.
Kernels are extremely complex software blobs with crazy race conditions, and tons of device-specific hacks. Somehow the world goes on with Linux going open source. Nobody is claiming that a single person will bang out a USB 3.1 block, but building up these "IP"s over time through the community would be a huge first step.
All the more reason for sharing your solutions with the rest of the world, or at the very least parts of it.
The problem with the EDA industry is their firmly held belief that this kind of trade secret is what they're competing with.
Think of Google, who had this exact same struggle internally.
For BigTable and MapReduce, they chose to sit on the "IP", to put it in H/W parlance. This gave the world Hadoop and its evil twin file-system whose name I forget. Yay (Hadoop is a catastrophe).
For TensorFlow, they chose the OpenSource route. TF is arguably the best framework for ML. There's a lesson somewhere in there.
That's probably what's lacking here, anyone with anything to gain by opening up all the bemoaned goodies. Is that DARPA's role?
Wouldn't you then want ten times the number of people eyeballing the code? And boatloads of free contributors submitting change requests to fix potential problems?
And a marketplace of users that can freely choose which "IP" to use (I still find the term so deeply offensive, I can't help it, I have to put it in quotes)?
The point is that, while these blocks can be complicated and need lots of customization, the ability to grab the source and modify it yourself is a huge enabler. If I could grab a stock IP from a repo and then spend 40 man-hours customizing it, it would be a huge benefit over spending 1000 man-hours making the IP from scratch.
I hate to appeal to expertise, but I've written HDL, synthesized it, and done mask layouts before. I understand that it requires a lot of customizing, but the first step to all of this is sharing. Sharing infrastructural components is a huge reason of why the commodity software business is as robust as it is.
And let's be honest, a state of the art block ported to the latest process isn't going to be 40 man-hours to customize, you're probably looking at more like a couple man-years unless it's very well designed... Which is why reusable core shops are a viable business. And how vibrant of an open source community would you have, if it took a small team a year just to customize & implement the open source design to their application? You would have very few users.
Now, you could argue that because a lot of the real work is in the customization that they could open source the core. But where does the customization end, and the core, the root functionality begin?
I think a more realistic pitch would be an open source testbench or behavioral model that verifies your core is fully spec compliant.
I doubt most projects need the full 10 Gbit/s. What if they only need 750 Mbit/s? That's unfortunately not possible without USB 3.1. The alternative to USB 3.1 (Gen 1 or 2, doesn't matter) is USB 2.0, which can only do 480 Mbit/s.
Those are hardly that simple. There exist entire companies whose function is designing such cores (and licensing them).
I guarantee you, every single Chinese manufacturer who wants to dodge licensing fees has already done so with stolen IP cores. After all, EDA companies are hopelessly stuck in the past and woefully unaware of proper security practices.
There's a clear need for a Ruby-to-gates compiler, the only possible explanation for why there isn't one is because hw guys just don't get it
Hard not to just throw up your hands.
The sad irony is a lot of the tools, languages, & culture of today's software could really be valuable - for formal verification. But that doesn't get much attention.
His site has been down for a while, but someone thankfully mirrored most of the pages here:
More history about OKAD, plus links to more about Forth both software and hardware:
In Australia we have CSIRO, which does some amazing research, and our military budget is basically non-existent compared to the US military budget.
As a percentage of total government expenditure, Australia spends 6% on defence and the US spends 16%.
GDP is not what that defense budget is protecting. People and property are what that budget exists to protect.
China has the second highest defense budget. We have about 1/4 the population of China, yet we spend several times more on defense.
Australia's Defence Science and Technology Group, which does science and other research for the military, is funded at about $400 million AUD/year. It's Australia's 2nd largest research organization after CSIRO.
I'm being a bit facetious of course, I just find it funny that people's acceptance of this argument depends on how much they support the primary subject of the funding.
That requires massive, sustained logistics spending. To the point it may seem absurd and wasteful. But the alternative is fighting a fair fight on equal footing. I’d rather not do that.
So really it’s fair to say that guns and bombs aren’t cheap, but they are also a distraction from the delivery systems, which are catastrophically expensive.
> To the point it may seem absurd and wasteful. But the alternative is fighting a fair fight on equal footing. I’d rather not do that.
Yeah? Are you sure the alternative isn’t to spend 3 or 4 or 5 times as much as any other nation instead of 7 times as much? Maybe the alternative is to avoid boondoggles, and focus on proven tech. Of course the real alternative is that doing so would get in the way of the real business of arms dealing.
You're right, they sure aren't. But as a result no other military force on earth besides NATO has the capability to launch air superiority fighters from amphibious assault ships and perform multi-ton circumglobal bombing sorties. That kind of capability doesn't come cheap, and shouldn't be dismissed.
Take away the B-2 and we are fighting toe-to-toe with Russian/Chinese long-range bombers.
Without the F-35 we are on equal footing with Chinese/Russian carrier-based aircraft.
Without Zumwalt-class ships, we are going head on against Chinese missile destroyers and subs of equal capability.
The whole point is that you don't want an even remotely fair fight. "Keeping up" with others' spending, even within an order of magnitude, is a really bad idea if you can help it.
Basically, what are you going to do with a Zumwalt that you couldn't do with a submarine? Is it better at any of those tasks than a conventional destroyer?
You think wrong.
If they still have weapons, they sure as hell haven't been using their existence as a deterrent. The whole point of having nukes is letting potential aggressors know that you have them, and that, if attacked, you may be crazy enough to use them.
You must know that Ukraine was attacked 4 years ago. So the "whole point" is not applicable in this case.
It’s why Russia is considered only a regional power, despite having a larger nuclear arsenal than the US. Their ability to perform intercontinental conventional bombing and troop deployment is non-existent. To cede this power to any other nation would be catastrophic to world peace.
What world peace? The US has started two wars that destabilized a whole region, and it shows no sign of restoring anything like peace there. World peace from such a belligerent nation, one that drone strikes most days of the week, is a bad joke or a real example of Orwell in action. How many pointless losing wars does the US need to start, how many governments does it need to overthrow only to see them revert to something even worse, before you get that you’re not about world peace?
You are not “the good guys.” The US is just in it for money, and that money only for a fraction of its population. It’s just a good thing the US can’t actually win a war it starts, or they’d have taken over by now. What’s actually happened is the likes of North Korea looked at what happened to Libya and Iraq and realized that they needed nuclear weapons. What a great outcome for “world peace”, right? I’m sure those nuts won’t sell the technology, and other countries won’t follow suit so as not to be “projected” all over.
Surely all you want to do is shoot them down. Bombers are purely offensive weapons. No one needs them unless they plan to attack somewhere else.
The DoD budget, as usually reported, generally doesn't include funding allocated for specific operations (e.g. the Afghan or Iraq conflicts).
> focus on proven tech
The average soldier has always wanted proven tech. But what they really need (and what the officers ask for) is training. More HUMVEEs and Arleigh Burke destroyers, fewer wheel 4.0s and Zumwalts.
We are at the top of a power curve with regard to total dollars spent, but we're actually only around 4th in terms of defense spending as a percent of GDP (I think we're around 5th). Soft power remains our major advantage. Not that bad when you consider our outsize role in world military affairs.
Your link says US is 4th in list of selected countries, not in the world.
On this page the US is not in the top: https://en.wikipedia.org/wiki/List_of_countries_by_military_...
Any reason why percent GDP is the right metric? If we're normalizing I would've expected per capita instead.
At the end of the day I’ve personally settled on “yes but probably not as much as you might think”.
There is no second source for the F-35.
A similar argument is usually touted regarding the economic strain that the 'Space Race' caused the U.S.S.R.
 "The Cold War: An International History", p. 42-43 https://books.google.com/books?id=Yv2FAgAAQBAJ&lpg=PA43&pg=P...
 "Comparison of US and Estimated Soviet Expenditures for Space Programs", p. 3-4, https://www.cia.gov/library/readingroom/docs/DOC_0000316255....
I’d add in a consideration of the opportunity cost.
This logistic spending is certainly not for national defense - it has two oceans, a navy, and two thousand nuclear warheads for that purpose.
No, they are not. Go look up how much the military spends on such things. It is a lot. (Per unit and in aggregate.)
The US has one of the highest median incomes on earth. Now combine that with the large human force that the US military has. The bases, veterans, healthcare, various benefits, retirement, equipment for the humans, salaries, and on it goes.
For example missiles + munition costs are a mere 2% of the military budget. Shipbuilding and maritime systems is a mere 3%. Aircraft and related systems is 5.x%. R&D is about 9%.
The budget related to operating the bases, maintenance, salaries, benefits, vets, and all other related human costs = $500 billion.
There's a strong argument that if you adjust China's military spending to factor in the difference in income / per capita costs for human soldiers vs the US (basically PPP adjusted), that China outspends the US already (and that's with an economy 1/3 smaller):
Currently, the best synthesis tools are closed-source and extremely expensive. Imagine the benefit of having gcc/clang be free software. That is the kind of effect that is at stake here.
Usually, hardware designers will write RTL code (Verilog/VHDL) which describes the hardware slightly above the gate level. In order to turn this description into a web of logic gates (called a netlist), the design is processed by a synthesizing program. The produced netlist describes exactly how many AND, NAND, OR, etc. gates are used and how they're connected, but it doesn't actually describe where the gates are placed on the chip or the route the interconnections take to connect the gates. To generate that info, the netlist is fed into another synthesis tool (usually called place and route).
This is a simplified version, but even at this level of detail, there are important factors affecting chips.
- How many gates? (less might be better)
- How far are the gates from each other? (closer is better; less power, area, cost, timing)
- How often will the gates switch? (less is better)
More advanced synthesis tools improve area, cost, power, timing. They also allow designers to have less expertise and still obtain the same result as experienced designers by optimizing out micro-level inefficiencies in the design (though experienced designers will also lean on the synthesis tool).
There is no type checker that can deduce whether your design will work or not and then output machine language. At the end of the day, with silicon you have to actually figure out whether your design can be physically manufactured by running a long compiler process and then testing it, often with rigorous simulations before moving to the fab process.
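For intuition, here is a toy Python sketch of what a post-synthesis netlist boils down to: a bag of gate instances plus the nets wiring their pins together, from which a tool can compute things like gate count and net fanout. The cell names and data structures are invented for illustration; real netlists (e.g. structural Verilog or EDIF) carry far more detail, which is what place and route then works from.

    # Toy netlist model: gate instances plus the nets connecting their pins.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Gate:
        name: str   # instance name, e.g. "u1"
        kind: str   # cell type, e.g. "NAND2"
        pins: dict  # pin name -> net name

    gates = [
        Gate("u1", "NAND2", {"a": "in0", "b": "in1", "y": "n1"}),
        Gate("u2", "INV",   {"a": "n1", "y": "out"}),
    ]

    # Simple metrics of the kind the factors above care about.
    fanout = defaultdict(int)
    for g in gates:
        for net in g.pins.values():
            fanout[net] += 1

    print("gate count:", len(gates))
    print("pins per net:", dict(fanout))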
PDF is the document where you write the specs.
GDS (actually GDSII) is the geometry description of chip layers you send to the foundry for fabrication.
If I manage to get my grubby hands on a moderately modern computer, I can use all manner of open source software and I can create wonderful new software.
The barrier of entry is fairly low in rich countries.
If AMD open sourced all the design aspects of their chips,
I would have to get a loan to build a $100 million fab to have any practical way to enjoy it?
I can see that if Intel/AMD/NVidia/Apple shared all aspects of their chips, cross-pollination might bring great things, and academic research would be boosted and might end up giving back more to the community at large, but you are talking about very few entities across the world that can afford fabs.
The financial entry barrier for building a fab is so huge that what in heaven's name would Intel lose if they published the RTL for - say - their integer division hardware to show and teach the whole world how it's done when real professionals take a stab at it?
And if they're scared AMD might copy their integer division, why not publish the Verilog code from 2 or 3 generations old h/w? (and this is probably a bad example, I believe AMD and Intel are essentially done competing on stuff like that).
But what I am talking about here is basically unthinkable given the current culture in the EDA world: a person suggesting this inside one of the big shops would be committing career suicide.
Conversely, if you navigate EDA discussion boards a little bit, there is no end to the snarky or sometimes downright insulting comments made by big shop insiders about how lame and terribly inefficient the open source hardware designs published on the net actually are.
In other words: mocking outsiders for their ignorance instead of teaching them how to do cool stuff. That's the culture of the EDA world. Time for a change.
Try $1 to 5 billion. We're talking about something that over half of all extant nation states wouldn't be able to pull off without devoting 10-50% of their annual GDP to the project.
Is there information on what this step would actually require?
Once you've finished your design (I'll leave this as an exercise to the reader :)), you'll start a back and forth with the foundry which will run its own database of rules against your design and work with you to make tweaks that better fit their fab. After you agree on a final mask, you and the fab will run verification simulations and eventually, after several months, you'll get your chips with that new factory smell.
Yes, you can "just" order some chips made. I don't have the URLs off the top of my head but there are a few companies here in California that I've worked with. You can literally walk into their offices unannounced and if you have the money, they will start right then and there. If I remember correctly, in 2012 it cost about $100k to get started with STM's 45nm process (might have been 28nm by then, don't remember). Today, 28nm can cost as low as $4-12k if you're a Canadian researcher  - although that's almost certainly subsidized.
It all seems scary only because the process is so complex that no one person has a grasp on even a minor fraction of what's going on.
(No affiliation. I find this option interesting with the varied responses I get on the topic. Triad Semi was another interesting option in this space.)
The idea is really cool. My go-to advice: know your specs, know your vendor, know your delivery guy. If you can implement your design within the restrictions of your chosen structured ASIC, you trust the vendor to deliver on time/in quantity, and you can get them soon enough (I don't know the turnaround time), then go for it.
This is a process a substantial number of undergraduate students manage to get through in a semester each year. Some universities will foot the MOSIS or CMP bill if the design passes DRCs.
You don't have to build your own fab or even get a fab to dedicate a whole wafer to you.
The source code is on github.
Is it for PCB design, ASIC design or both? Is a constant current source also considered a “small chip” or just digital designs?
Basically every EDA tool already has the ability to group sub modules which one could distribute as open source if they chose.
Do it in KiCad and put your circuit into a hierarchical symbol if you must be all open source.
I get hard IP blocks from vendors all the time for inclusion in our ASICs.
It’s not the EDA tools that are preventing “openness”.
I was just joking the other day how all the PCB designs I’m reviewing lately are just conglomerations of app note circuits and it’s really boring. So to me it seems like there’s plenty of design reuse. :)
Unfortunately, they make very slow progress because they have to painstakingly reverse-engineer everything (with the possible exception of Lattice stuff).
For Xilinx chips, where exactly nothing is publicly documented at the lower levels of the stack, they have to spend mountains of time re-discovering everything.
Even if I deeply admire the effort and how far they've gotten, I can't help but think: what a terrible waste of human talent and time.
Edit: I once asked a Xilinx employee why they didn't OpenSource their entire software stack, because it struck me that they were in the chip manufacturing business, and not in the toolchain business (a blisteringly obvious fact when you look at the quality of such monstrosities as, e.g., Vivado), and that OpenSourcing the tools would potentially enlarge their potential target market by a large margin.
The culture is so broken in that space that I don't think he even actually understood the question.
But if I were to take a guess at why the culture is the way it is, I'd say that it's because programmable logic is fundamentally relatively small logic tiles replicated across large areas.
That means across competing companies there's a high chance for infringing upon arsenals of patents for rather mundane things like interconnect, logic families, or memory cell layout where there are only a handful of viable alternatives yet the patent offices were likely duped into accepting multiple legalese interpretations of the same underlying tech. It's a minefield.
Xilinx is not really a chip manufacturing business either. They're fabless. Imagine having a company that designs RAM memory and outsources everything beyond the cell design. If you don't own the foundry itself you're not going to last very long unless you encrypt the memory access protocols, obfuscate your (probably patent infringing) hardware architecture by layers of undocumented tooling, and dominate the industry by buying up any upcoming contenders while cross-licensing stuff to build up a complex ecosystem of interdependent tools required to get even the most basic project done.
Once more, the problem can be traced to the root evil that is patents, sold to the world as essential for innovation, but which end up having the exact opposite effect.
The situation with Lattice parts was the same; they reverse engineered them.
But if you have to input all of the data that makes up a good route (including coupling, ground/power planes, trace length matching, PDN noise, stackup, EMI/C rules, etc, etc), and then review the whole thing anyway, what's the point of the autorouter?
Also the article says nothing about the various algorithms involved, which are interesting from a computational geometry standpoint. But the gulf between algorithm or academic example and "commercial router" is huge!
This blog post is just... not very good. It's how I imagine a software person who made some stupid blinky LED thing thinks about hardware.
Placement and Routing together are generally the domain of the very expensive EDA tools referred to in the main article.
At $250,000 per year per person, that supports 100 people for four years.
I… suppose that's not completely insane?
The person he mentions has been behind a lot of advancements in both getting cheap FPGA-based boards into the hands of users and creating libre, silicon-proven IP. He recently left his company, which was making Zedboards and Parallella among other things, to join DARPA.
The last big project he seemed to have worked on was a 1000-core processor, of which a lot was open source, but a lot of the important bits were protected, likely due to an NDA with the factory or the provider of the tools they used.
I hope what he is going to work on (EDA) will finally enable DARPA and others to fund truly open source designs and tools, and to work with fabs that would allow access to really cheap ASIC- or FPGA-based systems, building on all the momentum around platforms like RISC-V and his experience creating the Epiphany cores.
Of course fabricating a chip will never be as cheap as writing a bit of software, but maybe it will eventually be as cheap as, say, injection molding a piece of plastic?
Manufacturing costs (particularly fixed costs) are going up exponentially. We're getting stuck on economics before physics. You need to be able to sell 10 million+ parts to cover your costs.
There are more opportunities if you don't need the best performance or lowest power and use an older manufacturing process node like 65nm.
Once you have a mask set and fab time, it's off to the races. IMO $1M really isn't bad for a simple chip run.
Having these tools as open source and freely available is a huge deal for so many industries. I've worked with these tools at an academic level and now at a startup, and it's amazing the magnitude of this enabling technology. Just the tooling investment will be huge; making the core solvers and algorithms more accessible should spawn a whole new wave of startups/research in effectively employing them. Just these days, I've heard of my friends building theorem provers for EVM bytecode to formally check smart contracts to eliminate bugs like these.
These synthesis tools roughly break down like this:
1. Specify your "program"
- In EDA tools, your program is specified in Verilog/VHDL and turns into a netlist, the actual wiring of the gates together.
- In 3D printers, your "program" is the CAD model, which can be represented as a series of piecewise triple integrals
- In some robots, your program is the set of goals you'd like to accomplish
In this stage, it's representation and user-friendliness that are king. CAD programs make intuitive sense, and have the expressive power to be able to describe almost anything.
Industrial tools will leverage this high-level representation for a variety of uses, like in the CAD of an airplane, checking if maintenance techs can physically reach every screw, or in EDA, providing enough information for simulation of the chip or high-level compilation (Chisel).
2. Restructure things until you get to an NP-complete problem, ideally in the form "Minimize cost subject to some constraints". The result of this optimization can be used to construct a valid program in a lower-level language.
- In EDA, this problem looks like "minimize the silicon die area used and layers used and power used subject to the timing requirements of the original Verilog", where the low level representation is the physical realization of the chip
- In 3D printers it's something like "minimize time spent printing subject to it being possible to print with the desired infill". Support generation and other things can be rolled into this to make it possible to print.
Here, fun pieces of software from the field of optimization are used: things like Clasp for answer set programming, Gurobi/CPLEX for mixed-integer or linear programs, and SMT/SAT solvers like Z3 or CVC4 for formal logic proving (a toy Z3 sketch follows this list).
A lot of engineering work goes into these solvers, with domain-specific extensions driving a lot of progress. We owe a substantial debt to the researchers and industries that have developed solving strategies for these problems; it makes up a significant amount of why we can have nice things, from what frequencies your phone uses to how the NBA decides to schedule basketball games.
This is the stuff that really helps to have as public knowledge. The solvers at their base are quite good, but seeding them with the right domain-specific heuristics makes so many classes of real-world problems solvable.
3. Extract your solution and generate code
- I'm not sure what this looks like in EDA, my rough guess is a physical layout or mask set with the proper fuckyness to account for the strange effects at that small of a scale.
- For 3D printers, this is the emitted G-code
- For robots, it's a full motion plan that results in all goals being completed in an efficient manner.
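To make step 2 concrete, here is the toy Z3 sketch mentioned above, using Z3's Python bindings (the z3-solver package). It "places" three gates into four slots along a single row, minimizing total wire length between connected gates subject to no two gates sharing a slot. The gates, nets, and slot count are all invented for illustration; real EDA formulations are enormously larger and richer, but they share the same minimize-cost-subject-to-constraints shape.

    # Toy 1-D placement with Z3's Optimize engine (illustrative only).
    from z3 import Ints, Optimize, Distinct, And, If, sat

    a, b, c = Ints("a b c")          # slot index assigned to each gate
    opt = Optimize()
    opt.add(Distinct(a, b, c))       # constraint: one gate per slot
    opt.add(And(0 <= a, a <= 3, 0 <= b, b <= 3, 0 <= c, c <= 3))

    def dist(x, y):
        d = x - y
        return If(d >= 0, d, -d)     # |x - y| as a Z3 expression

    # Nets a-b and b-c; the objective is total wire length.
    opt.minimize(dist(a, b) + dist(b, c))

    if opt.check() == sat:
        m = opt.model()
        print({str(v): m[v] for v in (a, b, c)})

Swap out the variables, constraints, and objective and the same shape covers the 3D-printing and scheduling examples above.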
Won't this project reduce barriers to entry for their industry? And if so, isn't it against their interests to participate?
Quite the contrary, lowering barriers creates more opportunities and thereby new scope for growth.
It took MSFT 30 years to finally understand that lesson and start offering a free suite of dev tools.
The EDA industry still hasn't grokked that lesson.
So, their strategy is smart until they, like Microsoft, get into a situation where fewer and fewer customers need them. I did think one could do a more open version of EDA, though. I was gonna try to talk to someone at a smaller player, Mentor, but they got bought. I'm keeping my eyes open for new players wanting a differentiator.
The CAD software is expensive, but it's not barrier-to-entry expensive compared to design engineering time, tape-out cost, etc.
If the CAD software cost gets reduced, it would result in a cost reduction for all companies involved, while still leaving a barrier to entry that's way too high for anyone but the best-funded companies.