Want a linter for your project? That's going to be $50k. Also, it's an absolutely terrible linter by software standards. In software, linters combine the best ideas from thousands of engineers across dozens of companies building on each other's ideas over multiple decades. In hardware, linters combine the best ideas of a single team, because everything is closed and proprietary and your own special 'secret sauce'.
In software, I can import things like nginx and mysql for free, and we have insanely complex compilers like llvm that are completely free. In hardware, the equivalent libraries are both 1-2 orders of magnitude less sophisticated (a downside of everyone absolutely refusing to share knowledge or let other people build on their ideas for free), and they'll also cost you 6+ figures for anything remotely involved.
Hardware is in the stone age of sophistication, and it entirely boils down to the fact that people don't work together to make bigger, more sophisticated projects. Genuinely would not surprise me if a strong open source community could push the capabilities of a 130nm stack beyond what many 7nm projects are capable of, simply because of the knowledge gap that would start to develop between the open and closed world.
There has been quite a lot of study in economics about cooperation. My favorite is Elinor Ostrom's work on "the commons". She observes that with a certain set of rules, discovered across the world and across varying geographies, people do seem to be able to cooperate to maintain a natural resource like a fishery or a forest or irrigation canals for hundreds of years. Her rules are here (https://en.wikipedia.org/wiki/Elinor_Ostrom#Design_principle...).
That said, economics doesn't rest on this. Macroeconomics doesn't care, and neither do micro/labor/health/sports/developmental econ.
Sure, trying to predict how someone (and, more interestingly, how groups) will behave based on their psych profile is an important area of research, but the aforementioned subfields of econ already have well-working assumptions about how people behave in the aggregate, even if they can't derive them from some exact utility function.
There's nothing in the rational agent model that assumes that everyone has the same definition of utility function. Different people want different things.
Of course, it's fine as a very general fundamental theory, if you then want to study how people's revealed and non-revealed (old names for implicit and explicit) preferences aggregate into a utility function. (There's a whole bunch of math about pairwise comparison matrices. Lately there's been some movement in that space toward using perturbation to model inconsistencies in preferences, etc.)
What even the general theory is useful for is quickly reducing the very general behavior/decision problem to "show me your utility function and I'll tell you what you will do". Of course that then becomes the problem: how to model utility, how to formalize expected utility over all possible actions, and what to do about priors (as in path-dependence - do we encode that into the utility function and constantly, dynamically update it, the way a Bayesian agent updates its beliefs, or manage it separately)?
Basically I was trying to persuade people not to say things like "economists are dumb, because people are not rational utility maximizers", because that's a false implication. Of course they are - in the trivial sense that whatever they do maximizes some utility function - but, just like you said, this doesn't help us better predict people's (agents') behavior; it just gives us a new task: modeling their utility functions. And that's where regret comes in. (Which is basically risk aversion, which is what behavioral economics studies. People are not symmetric about positive and negative expectations: they value avoiding negative-utility events more than they value encountering positive-utility events. And with this insight we can build better economic models - for example, we can use it to "explain" why wages are stickier - especially downward - than we would expect, with some handwaving why in times of crisis people are let go instead of having their hourly compensation decreased, why some people are so gullible [peer pressure, confirmation bias ~ cognitive dissonance], and so on.)
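To make that asymmetry concrete, here's a tiny sketch (not from the comment above) using the standard Kahneman-Tversky style value function; the exponents and the loss-aversion coefficient are just the commonly cited textbook estimates, not anything measured here.

    # Toy illustration of loss aversion (prospect-theory value function).
    # v(x) = x**alpha for gains, -lam * (-x)**beta for losses, with lam > 1,
    # so a loss "hurts" more than an equal-sized gain "helps".

    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        """Felt value of an outcome x (gain if x >= 0)."""
        return x ** alpha if x >= 0 else -lam * (-x) ** beta

    # A fair coin flip: win 100 or lose 100. Expected money is zero,
    # but expected felt value is negative, so a loss-averse agent declines.
    gamble = 0.5 * value(100) + 0.5 * value(-100)
    print(gamble)  # ~ -36, i.e. the fair gamble feels like a sure loss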
Even when entities (corporations, for example) are trying to maximize utility, and an optimal decision is desired, there are issues with how much time and resources can be spent making decisions, so optimality has to be bounded in various ways (do you wait for more information? Do you spend more time and compute on calculating what would be optimal? etc.).
I keep seeing something like this in workplaces: people don't join forces, they avoid friction. Until something (a crisis) or someone (a capable leader) flips the thing on its head.
In economics, ... but the power of cooperation seems to be valued much less
A good percentage of being a good leader is being able to pick up the phone and get stuff done, which is often contrary to what the firm wants.
The trick to making it all work is that knowledge and/or tooling (the means of production) ought not be proprietary, but product or service absolutely can be.
However, the line does blur when talking about blueprints and designs.
In any case, I think that free software movements are a sociological anomaly; I wonder if there is any academic research into this from an anthropological or historical-economics viewpoint.
Also, it seems to me that in some sense the entire market works in cooperation, just not very efficiently (it optimizes for things other than efficiency and is heavily distorted by subsidies and tariffs).
I tried to get into FPGA programming a while ago, and it turns out the entire software stack to get from an idea in my brain to a blinking LED on a dev board is hot garbage. First of all, it's insanely expensive, and second of all, it really, really sucks. Like how is it <current year> (I forgot what year it was, but it was 2016-2018 timeframe) and you've tried to reinvent the IDE and failed?
I think projects like RISC-V and J Core are super cool, but I couldn't possibly even attempt to contribute to them based on how awful the process is.
The FPGAs are also relatively inexpensive (<5-10€ in single quantities depending on the model) but are on the lower end in terms of features and performance.
The tools should also support the more powerful ECP5 FPGAs, but I haven't tried them yet.
A popular board a few years ago was the iCEstick for about 20€, but I can't find it anywhere for that price now.
The ICE40 breakout boards from Lattice are also more expensive now for whatever reason.
Olimex has a board with the HX8K that is more reasonably priced, but unfortunately doesn't have an onboard programmer...
For the ECP5, the ULX3S board looks interesting. It's not exactly cheap, but also has more features than just buttons, LEDs, and pin headers like on the other boards.
The whole thing revolves around the marginal cost of an extra copy of a piece of software being close to zero; no other critical industry gets such economies of scale. Making more chips still requires huge capex and opex.
A favorite example is that US legislation to ban advertisements for smoking was sponsored by the tobacco industry. They were spending a lot on ads just to keep up with the Joneses; if Camel voluntarily stopped and Marlboro continued, then Camel would go the way of Lucky Strike. They would rather agree to cut their expenditures! But they needed to make sure no young tobacco whippersnappers came in and started showing a couple of ads, which they would both have to beat, reigniting the war.
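The structure there is a textbook one-shot prisoner's dilemma. A toy sketch with invented numbers (nothing measured, just the shape of the incentives):

    # Hypothetical payoff (profit, $M) for one firm, keyed by
    # (its own ad decision, its rival's ad decision).
    payoff = {
        ("advertise", "advertise"): 50,
        ("advertise", "abstain"):   90,   # the advertiser steals market share
        ("abstain",   "advertise"): 20,
        ("abstain",   "abstain"):   70,   # the outcome a legal ban enforces
    }

    def best_response(rival_choice):
        return max(("advertise", "abstain"),
                   key=lambda mine: payoff[(mine, rival_choice)])

    for rival in ("advertise", "abstain"):
        print(f"rival {rival}s -> best response: {best_response(rival)}")

    # Advertising is the best response either way, so both firms advertise
    # and earn 50 each, even though mutual abstention pays 70 -- unless a
    # ban removes "advertise" from the menu for everyone, incumbents and
    # would-be entrants alike.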
Open source is interesting because it seems to be a marvelous, unexpected outcome of the existence of the corporation. Individual people go to work at corporations and are aware that whatever they produce at that corporation is mortal: it will die if the corporation decides to stop maintaining it, or if the corporation itself folds. The individual wants his or her labor to survive longer, to become immortal: this company could go out of business and I will still have these tools at my next job. So in some sense, layered self-interests create a push toward corporate cooperation.
- David Sloan Wilson & E. O. Wilson
There is a lot of literature on the subject of cooperation, especially from anarchist philosophers (e.g. Mutual Aid: A Factor of Evolution, Kropotkin).
I don't have the proper background to make a strong case about this, but I feel like Middle Ages guilds would be closer to the open-source model than to the current "trade secrets" one, wouldn't they?
Likewise, I don't see farmers of ye olde times keeping their crop-growing tricks to themselves as secrets and so on.
Furthermore, we've known about many indigenous cultures where "the tribe" is regarded as more important than the individual, meaning that sociologically they should be more aligned with the open-source model than the capitalistic one, shouldn't they?
Again, I'm not an expert in the area, but it seems to me that our current society is more "historically anomalous" at the commoner level than any more socially-conscious one would be (i.e., common people lean more toward greed/individuality these days than they have through most of the past).
I think that's stretching the definition of 'competition' a bit too much.
First off, many people work on FOSS just because they want to, or because they believe that it's a good thing. Not because they want 'user attention'. Furthermore, I'm pretty sure that even the people that are motivated by user attention don't see it as a competition.
> FOSS projects without enough capital does not gain enough developers, and falls into disrepair and obscurity.
There are plenty of FOSS projects that do fine with just a handful of developers, or even just a single one, with perhaps the occasional pull request from others.
> Not sure what a socialist FOSS movement might look like, but maybe developers would be assigned to projects as opposed to them freely choosing the projects to work on?
Maybe I misunderstand, but I get the impression that you're thinking of 'state' socialism. I wouldn't say being assigned stuff is a core aspect of socialism.
All that said, it's not so much that I think FOSS is like socialism, but rather that FOSS is a counter-example to a common argument I hear, especially from the more hardcore 'everything should be market forces' capitalists, which is that we're primarily driven by competition.
But I think in general, the FOSS projects that thrive - that gather users who report bugs, that have an active community - are going to be the ones which attract developers, not necessarily because they seek fame and fortune, but by the simple fact that people are more likely to run into those projects, and can contribute to them more easily thanks to better documentation and active devs who can answer questions. The projects end up competing for attention for their livelihood, as a kind of emergent behavior, even if the developers don't explicitly target it.
Besides, developers are humans too: they can feel pride when their contributions are recognized, and they can consider the work to be paving the way to future employment. Contributing to large, active projects gives higher chances on both fronts.
Regarding whether it's state socialism that I'm thinking of, you're probably right. I've never been very clear on the definition of socialism. On the most superficial level, I understand socialism to mean that the community controls the means of production. So when applied to FOSS, that seems to mean that the whole community decides what projects to work on, and in the case where the community decides that something needs to be worked on but not enough devs volunteer, it seems like the work would need to be assigned.
Finally, as you said, it's not about FOSS being like socialism; the interesting aspect is that the devs are not directly competing for big cash prizes. I agree. There are not that many industries so fundamental to our society which have a large contingent of people doing critical work for free. Perhaps education comes close. Politics too, with political power taking a role similar to attention for FOSS projects, but that's straying pretty far.
I don't mean to engage in some "no true capitalism" libertarian defense, but rather point out that a lot of fine (if simplistic) economic models/theory have been corrupted by various ideologies. A lot of radical-seeming stuff is not radical at all according to the math, just according to your rightist econ 101 teacher.
I.e. a trade is when two parties engage in a mutually beneficial exchange.
The libertarian economists talk about cooperation in terms of spontaneous order. Milton Friedman had his story of the process of manufacturing a pencil as "cooperation without coercion". Basically it's the "invisible hand" driving people to cooperate via price signals and self-interest. I don't know if there's much that can be done with the concept beyond that.
EDA is essential for chip development at this point, and it seems like an industry ripe for disruption. We're seeing some inroads by open source EDA software: simulators (Icarus, Verilator, GHDL), synthesis (Yosys), and even open cores and SoC constructors like LiteX. In software land we've had open source compilers for over 30 years now (gcc, for example); let's hope some of these open source efforts make serious inroads in EDA.
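For a sense of what the simulation half of that list does at its conceptual core, here's a toy sketch in Python; it has nothing to do with how Icarus/Verilator/GHDL are actually engineered, and the gate and net names are made up.

    # Toy combinational netlist evaluator -- a caricature of what a logic
    # simulator does once the HDL has been elaborated into gates. Real
    # simulators are event-driven and handle timing, X/Z states, and
    # millions of nets; this is just the core idea.

    GATES = {
        "AND": lambda a, b: a & b,
        "OR":  lambda a, b: a | b,
        "XOR": lambda a, b: a ^ b,
        "NOT": lambda a: 1 - a,
    }

    # A half adder described as (output_net, gate_type, input_nets).
    netlist = [
        ("sum",   "XOR", ("a", "b")),
        ("carry", "AND", ("a", "b")),
    ]

    def simulate(netlist, inputs):
        nets = dict(inputs)
        for out, gate, ins in netlist:   # assumes topological order
            nets[out] = GATES[gate](*(nets[i] for i in ins))
        return nets

    print(simulate(netlist, {"a": 1, "b": 1}))
    # {'a': 1, 'b': 1, 'sum': 0, 'carry': 1}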
How is a 4-year-old distribution "very ancient"? There's very likely plenty of Ubuntu 16.04 in the field, and there's nothing inherently wrong with that.
Red Hat uses "old" kernels, if that's what you're referring to with "ancient", but there are reasons for that, and fixes are backported; they're not left un-updated.
What does this mean? I've only been in the C++ game for 6 years and for me if I can get something from std rather than rolling my own or pulling in another library, I'm cheering.
The thing is, I think it's a miracle that open source software even exists, and I don't find it strange that nowhere else has replicated the success, because open source, at heart, is quite altruistic.
I'd add that just looking at how families or circles of friends operate is also enlightening. The most cynical view is that these interactions are 'debt' based, but practically speaking that often isn't true either. I help my mother not because I calculate the effort she has invested in me or the value she brings me. I just do it because I love her and I don't need to think about how I've come to feel that way (which very well might be based on measurable behavior on her part).
What is present in rural areas is a sort of respect and fear of exclusion (I think these two are reciprocal). But it is not impossible to have that on a local level in a large city. Having lived in both London and New York (similar population size, vastly different population density) I've seen this happen. In Manhattan specifically there is little respect for neighborhoods—comparatively—while in London residents care for their boroughs. Some direct actions that might have fostered that are simple things like community gardens/allotments which are very common in the UK.
That's my humble opinion; I'm not an expert in human social behavior.
The problem of a free-rider is ever present, and in big cities, that problem is evident and thus you don't get the sort of community effort that you see in a small village.
Beyond Dunbar’s Number, it’s not clear that you can employ mutual-aid theories like tit-for-tat to restrain free-riding and cheating.
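For anyone who hasn't seen it spelled out, tit-for-tat is literally an algorithm, and a tiny simulation shows both why it works in repeated interactions and why it depends on the same people meeting again and again. A minimal sketch with the usual textbook payoffs (the strategies and round count are arbitrary choices of mine):

    # Iterated prisoner's dilemma with the usual payoffs:
    # both cooperate -> 3 each, both defect -> 1 each,
    # lone defector -> 5, the sucker -> 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):   # cooperate first, then copy the opponent
        return opponent_history[-1] if opponent_history else "C"

    def free_rider(opponent_history):    # always defect
        return "D"

    def play(strategy_a, strategy_b, rounds=20):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
            pa, pb = PAYOFF[(move_a, move_b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))  # (60, 60): repetition sustains cooperation
    print(play(tit_for_tat, free_rider))   # (19, 24): the free-rider gains little after round one

In a big anonymous city the "rounds" collapse toward one, and the defect-once-and-disappear strategy stops being punishable.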
This is the record of the miracle of St. iGNUtius. Back in The Day, before all the young hipsters were publishing snippets of code on npm for Internet points, Richard Stallman seethed in frustration at not being able to fix the printer driver that was locked up in proprietary code. While addressing St. iGNUtius, patron saint of information and liberty, he had a vision of the saint saying that he had interceded for Stallman, and holding out a scroll. On the scroll was a seal, and written on the seal in phosphorescent green VT100 font was the text "Copyright St. iGNUtius. Scroll may be freely altered on the sole condition that the alterations are published and are also freely alterable." Upon opening the seal, and altering it according to the invitation, Stallman saw the scroll split into a myriad of identical scrolls and spread throughout the world, bringing liberty and prosperity to the places it touched. Stallman hired some lawyers to write the text of the seal in the legal terms of 20th-century America. Thanks to the miracle of St. iGNUtius, software today is still sealed with that seal, or one of its variants, named after the later saints, St. MITchell and St. Berkeley San Distributo.
 A twentieth-century ikon of St. iGNUtius: https://twitter.com/gnusolidario/status/647777589390655488/p...
The original RMS email announcing the project is a nice read. When placed in the context of his personal frustration with commercial software it can also be seen as a line in the sand.
Now I grumble if it’s a pain to install a massive open source tool chain for free and these days it’s rarely painful.
I should stop and appreciate that more once in a while.
That's why I don't think open source hardware will work; that this model "worked" in the software world is a miracle. By "worked", I mean that it exists.
This is true, but in the markets where open source thrives, it's even harder to make money in closed source. One talented programmer might make a new web framework, release it, and gain some fame and/or consulting hours. Good luck selling your closed source web framework, no matter how much VC backing you have.
But writing some open source - making your tools (compilers, linters, runtimes), libraries, frameworks, monitoring, sysops/sysadmin tooling, and so on open source - is much more profitable, and that's a huge subdomain of open source out there right now.
You can also get back significantly less than what you put in.
Similarly for independent developers. You don't make money off the open-source software itself. You make money by getting your name out there as a competent developer, and then creating a bidding war between multiple employers who want to lock up the finite supply of your labor. The delta in your compensation from having multiple competitive offers can be more than some people's entire salary.
JITX (YCS18) was funded under IDEA to automate circuit board design, and we're now celebrating our first users.
The issue I see in hardware is that all complexity is handled manually by humans. Historically there has been very little capacity in EDA tools for the software-like abstraction and reuse which would allow us to handle complexity more gracefully.
Not having an advanced degree doesn’t mean you can’t master complexity. It’s the same as with software.
The advanced degree is not meant to teach you how to grok complexity. It's to teach you what problems you can expect to encounter and how to go about solving them.
Now, if you are in CMOS process design or responsible for the CAD software itself and its design rules, it helps (a lot!) to be a physicist. For that you need the MS and PhD training.
You can crank out correct HDL all day with https://clash-lang.org/, and the layout etc. work that takes that into "real hardware", while not trivial, is a less creative optimization problem.
This is a post-hoc rationalization of the decrepitude of the (especially Western) hardware industry, and of the rampant credentialism that arises when the growth of the moribund rent-seekers doesn't create enough new jobs.
Go to Shenzhen and you'll find the old IP-theft and FOSS traditions have merged into a funny hybrid that's a lot more agile and interesting than whatever the US companies are up to.
I'm a professional PCB designer, among other things. Most of what I have learned came from hard-won experience designing PCBs for projects done for myself and my own education, even though a custom PCB was hardly cost-effective in a commercial sense. They were just much cheaper since my time was free. They weren't useless projects, in that what I made was not easy to do with off-the-shelf parts, but it would have been possible, and there wasn't a timeline to pressure me yet.
But I would not be a professional PCB designer now if it had not been approachable to someone without a budget but with plenty of time and motivation. I've basically spent as much money learning how to make PCBs as other people spend on things like ski trips or vacations or other hobbies. A free fab and tools to create designs are a godsend when designing these things is something you want to do for interesting experiments and learning in an otherwise totally inaccessible field. Even if you think one needs a masters or PhD to do this right, being able to fail cheaply is a pretty amazing learning tool...
That's what this announcement means to me -- free fab means I can finally learn how to do this and get good enough at it that when the time comes that this is a better solution than trying to combine off the shelf functionality I will be well positioned to take advantage of that change.
I am thrilled to release open source designs for cool chip functionality once I'm skilled enough to do it and the only way I'd get there is if the direct cost to me was nothing (even if it's slow).
Hate to tell you this, but probably not. There are only 40 slots. Those are going to quickly be eaten by people who already know how to do this.
The fab isn't the expensive part anyway. You can get a small run of ASICs done for single-digit thousands of dollars.
What you would get out of this is access to a Google-promoted open source PDK. It's specific to SkyWater, so "open source" doesn't mean a whole lot, not today anyway. The available libraries are very immature. It's clearly a nice promotion for SkyWater, one I am enthusiastic about, but nonetheless not the shimmering beacon you're looking for.
You already know this: time is your most precious commodity. Don't bumble your way through a thinly disguised and immature FOSS offering. Cadence Virtuoso online training is just $3k per seat per year. That said, I'm completely talking out of my rear. I've no idea if the training is actually useful without having a license for Virtuoso, where the webs say license costs are 6 figures per seat. I'd enroll in a university program to get access ... 
I just suggest this based on your stated goals, to be able to have enough proficiency to use professionally. Certainly you don't need 10,000 hours, and some of that is amortized by related experience, but time is your #1 enemy here, not money. Any money you can spend to jumpstart things is money well spent. OTOH if you just want to enjoy exploring IC design in a noncommittal hobbyist way, as a learning "experience" ala MasterClass, then this project does seem an excellent starting point.
SkyWater is a "trusted foundry", whatever that means, apparently completely US-based. Therefore, I think the most valuable thing that could ultimately (say 5 years hence) come out of this would be for OpenTitan to be "ported" to ASIC and to the SkyWater process. 130nm is large but for this application, for IoT in general, for anything not a mobile handset, it would be powerful. Imagine a Raptor TALOS workstation with an OpenTitan RoT! Or, your own design PCIe card with your own fabbed and X-Ray verifiable RoT. Powerful.
 https://www.cadence.com/content/dam/cadence-www/global/en_US... (slides 3 and 4)
Design for test, logical verification (simulation, logical equivalence checks, electrical design rules), physical implementation (library development and characterization, floorplanning, place and route, signal integrity), and signoff (timing closure, electrical design rules (again), physical verification, OPC) all require highly complex tools to automate. For large designs you will have people dedicated to each individual step, because the ways in which things can go wrong -- and the absolute necessity of things going right to get a working chip -- are legion. And there's no substitute for experience to know the right questions to ask.
It's like the difference between driving your car across the country and flying to orbit. If you make a wrong turn in your car you can just make another turn or backtrack (edit the source code and rebuild). If you don't have the right torque on the tank strut bolts on your rocket, "You will not go to space today".
Source: 30 years in the ASIC and EDA industries doing chip implementation and EDA tool flow development.
Another interesting open source EDA project coming out of Google is Verible: https://github.com/google/verible which provides Verilog linting amongst other things.
FOSS development isn't cost-less. And so the business equation is always present.
The degree to which purely academic support for open source can make progress is asymptotic. People need to eat, pay rent, have a life, which means someone has to pay for something. It might not be directly related to the FOSS tool, but people have to have income in order to contribute to these efforts.
It is easy to show that something like Linux is likely the most expensive piece of software ever developed. This isn't to say the effort was not worth it. It's just to point out it wasn't free, it has a finite cost that is likely massive.
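One common way to put a rough number on that is a COCOMO-style estimate, which is what the old "cost to redevelop Linux" studies leaned on. A back-of-the-envelope sketch with round, illustrative inputs rather than measured ones:

    # Back-of-the-envelope cost of the Linux kernel using basic COCOMO
    # (organic mode: effort in person-months = 2.4 * KLOC**1.05).
    # Both inputs are round, illustrative numbers, not measurements.
    kloc = 28_000                      # ~28 million lines of kernel source
    cost_per_person_year = 100_000     # lowball loaded cost, USD

    person_months = 2.4 * kloc ** 1.05
    person_years = person_months / 12
    print(f"{person_years:,.0f} person-years, "
          f"~${person_years * cost_per_person_year / 1e9:.1f}B")
    # -> roughly 9,000 person-years, on the order of a billion dollars,
    #    and that's with the gentlest COCOMO mode and a lowball salary.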
An industry such as chip design is microscopic in terms of the number of seats of software used around the world. I don't have a number to offer, but I would not be surprised if the user base were ten million times smaller than, say, the mysql developer population (if not a hundred million).
This means that nobody is going to be able to develop free tools without massive, altruistic corporate backing for a set of massive, complex, multi-year projects. If a company like Apple decided to invest a billion dollars to develop FOSS chip design tools and give them away, sure, it could happen. Otherwise, not likely.
Further, games would be very easy for users to copy, easy to develop multiplayer cheats for, easy for competitors to duplicate, etc.
The longest lived games are the longest lived because they are open enough for the community to keep them alive.
Compare that with something like Android where I'm lucky if I can get GDB to hit a single breakpoint on a good day.
The problem is, it's also a very, very tough industry to disrupt. Not for lack of trying though.
Hopefully there is a middle ground somewhere, where the folks working on open source software can get compensated for their work so as to enable a healthier system overall, so we aren't all just “software serfs” sharecropping for our overlords.
I think it's also a cultural thing. Like you said, lots of your own special secret sauce, and so many issues trying to fix bugs that may have to do with that secret sauce.
Can't say I miss it at all really.
Tangent: I've noticed this problem in dentistry and sleep apnea! The methods of knowledge transfer in the field seem to be incredibly inefficient? Or there are tons of dentists not learning new things? (I recall a few dentists on HN -- I would be interested in any comment either way on this.)
The reason I say this is that many patients can't tolerate CPAP (>50% by some measures). There are other solutions, and judging by conversations with friends and family, dentists and doctors don't even know about them!
My dentist gave me one of these alternative treatments for sleep apnea, which was very effective. It's a mandibular advancement device (oral appliance). Even the name is bad! They have a marketing and branding problem.
Random: Apparently Joe Rogan's dentist did a similar thing with a different device which he himself invented. Rogan mentioned it 3 times on the air.
So basically it appears to me that practitioners in this field don't exchange information in the same way that software practitioners do (to say nothing of exchanging the equivalent of working code, which is possible in theory).
I looked it up and there are apparently 200K dentists in the United States. It seems like a good area for a common forum. I think the problem is that dentists probably compete too much rather than cooperate? There is no financial incentive to set up a site of free knowledge exchange.
Related recent threads I wrote about this:
https://news.ycombinator.com/item?id=23666639 (related to bruxism, there seem to be a lot of dentists who don't seem to understand this, as I know multiple people with sleep apnea and bruxism who don't understand the connection)
https://news.ycombinator.com/item?id=23435964 (the book Jaws)
Those things are in the basic apnea treatment palette over here (Netherlands), and are actually more common than PAP machines. Why do you think they are alternative?
I've talked to several people who have a CPAP, and they don't even know of the existence of the mandibular advancement device. Their doctors and dentists apparently don't tell them about it!
I'm puzzled why that is the case. I think it has something to do with insurance. If that's true, it's not surprising that other countries don't have the same problem!
Imagine a combined Intel+AMD+Samsung+Nvidia behemoth pooling together their expertise. Internal competition would still exist, but for actual technical reasons now instead of market ones. One could imagine myriad ways to fund such a cooperative endeavour, which are never even tried because the current model is sacred.
I think the main reason why open source has taken off is because access to a computer is available to many people, and as cost is negligible, it only required free time and enough dedication + skill to be successful. For hardware though, each compile/edit/run cycle costs money, software often has 5-digit per seat licenses, and thus the number of people with enough resources to pursue this as a hobby is quite small.
Reduce the entry cost to affordable levels, and you have increased the number of people dramatically. Which is, btw, also why I believe that "you can buy 32-core Threadripper CPUs today" isn't a good argument for ignoring large compilation overhead in a code base. If possible, enable people to contribute from potatoes. Relatedly, don't require gigabit internet connections either; downloading megabytes of precompiled artifacts that change daily isn't great.
For example, Altium Designer is probably the most modern (not most powerful although close) PCB suite and yet despite costing thousands a seat it is a slow, clunky, single-threaded (in 2020) program (somehow uses 20% of a 7700k at 4.6GHz with an empty design). Discord also thinks that Altium Designer is some kind of Anime MMO
git clone https://git.dev.opencascade.org/repos/occt.git
Like you say, it's kind of shocking to see one core running at 100 percent while the rest do nothing and the app is sluggish in 2020.
In the same vein, ECAD tools don't use GPU-accelerated 2D rendering but instead use GDI and friends (which used to be HW-accelerated, but hasn't been since WDDM/Vista).
A lot of "easy" opportunities to improve UX and productivity.
Hardly Altium Designer's fault, but I too would avoid using it.
Even though there is a long history of open-source attempts, as pointed out by Tim in his presentation, they are few and far between, and massively underwhelming compared to the thriving open source software community.
However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.
The open source EDA toolchain community is already producing some good stuff. SymbiFlow (https://symbiflow.github.io/) is a good example: an open source FPGA flow targeting multiple devices. It uses Yosys (http://www.clifford.at/yosys/) as a synthesis tool, which is also used by the OpenROAD flow (https://github.com/The-OpenROAD-Project/OpenROAD-flow), which aims to give push-button RTL to GDS (i.e. take you from Verilog, one of the main languages used in hardware, to the thing you give to the foundry as a design for them to produce).
The Skywater PDK is a great development and a key part of a healthy open source EDA ecosystem, though there are plenty of other great developments happening in parallel with it. You will note that some people are involved in several of these projects; they're not all being developed in isolation. The next set of talks on the Skywater PDK includes how OpenROAD can be used to target Skywater: https://fossi-foundation.org/dial-up/
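To give a feel for what hides behind "push-button RTL to GDS", here's a toy sketch of just one stage, placement, in Python. The cell and net names are invented and the algorithm is a naive greedy swap; a real placer like the one in OpenROAD works on millions of cells with legality, timing, and routability constraints.

    # Toy placement: assign cells to grid slots, then try pairwise swaps
    # that reduce total Manhattan wirelength of the nets.
    import itertools, random

    cells = ["and0", "xor0", "ff0", "ff1", "buf0", "out0"]
    nets = [("and0", "xor0"), ("xor0", "ff0"), ("ff0", "ff1"),
            ("ff1", "buf0"), ("buf0", "out0"), ("and0", "out0")]
    slots = list(itertools.product(range(3), range(2)))   # a 3x2 grid

    def wirelength(placement):
        return sum(abs(placement[a][0] - placement[b][0]) +
                   abs(placement[a][1] - placement[b][1]) for a, b in nets)

    random.seed(0)
    placement = dict(zip(cells, random.sample(slots, len(cells))))
    for _ in range(1000):                      # greedy improvement loop
        a, b = random.sample(cells, 2)
        before = wirelength(placement)
        placement[a], placement[b] = placement[b], placement[a]
        if wirelength(placement) > before:     # undo swaps that make it worse
            placement[a], placement[b] = placement[b], placement[a]

    print(wirelength(placement), placement)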
If that is basically a given, why publish anything for free, when you can instead charge 10k/seat in licensing?
Or put another way, Apple and Google are both responding to Intel's/the market's failure to innovate enough, each in its own idiosyncratic manner:
- Apple treats lower layers as core, and brings everything in-house;
- Google treats lower layers as a threat and tries to open-source and commodify them to undermine competitors.
I don’t mean this free fabbing can compete chip-for-chip with Apple silicon of course, just that this could be a building block in a strategy similar to Android vs iOS: create a broad ecosystem of good-enough, cheap, open-source alternatives to a high-value competitor, in order to ensure that competitor does not gain a stranglehold on something that matters to Google’s money-making products.
Apple spends $100+ million to design high-performance microarchitectures on high-end processes for its own products.
Google gives a tiny amount of help to hobbyists so that they can make chips on legacy nodes. A nice thing to do, but it has nothing to do with Apple's SoCs.
Software people on HN constantly confuse two completely different things:
(1) Optimized high-performance microarchitecture for the latest processes and large volumes. This can cost $100s of millions and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology.
(2) Generic ASIC design for a process that is a few generations old. Software costs a few $k or $10k and you can use the same design for a long time.
I don't believe Google does anything because it's a "nice thing to do". There's some angle here. The angle could just be spurring general innovation in this area, which they'll benefit from indirectly down the line, but in one way or another this plays to their interests.
If only Google had this singular focus... From my external (and lay) observation - some Google departments will indulge senior engineers and let them work on their pet projects, even when the projects are only tangentially related to current focus areas.
Looking at Google org on Github (https://github.com/google); it might be a failure of imagination on my part, but I fail to see an "angle" on a good chunk of them.
And by old, I mean /old/. 130 nm was used on the GameCube, PPC G5, and Pentium 4.
Things don't have to be ultra modern to offer value.
You could probably close some of the single thread gap with architectural improvements, but your real problems are going to be power consumption and that you'd have to quadruple the die size if you wanted so much as a quad core.
The interesting uses might be to go the other way. Give yourself like a 10W power budget and make the fastest dual core you can within that envelope, and use it for things that don't need high performance, the sort of thing where you'd use a Raspberry Pi.
(Jim Keller explains in this interview how CPU designers are making use of the transistor budget: https://youtu.be/Nb2tebYAaOA)
They can outsource silicon development. Should not be a problem with their money.
In comparison to dotcom development teams, semi engineering teams are super cheap. In Taiwan, a good microelectronics PhD starting salary is USD $50k-60k...
Experienced teams who have designed high performance microarchitectures aren't common, because there just isn't that much of that work done.
And when you're eventually going to spend $$$$ on the entire process, even a 1% optimization on the front end (or more importantly, a reduction of failure risk from experience!) is invaluable.
At the scale that Google is at, it really wouldn't surprise me if they were working on their own silicon to solve the problems that exist at that scale.
"""Regularly upgrading network fabrics with
the latest generation of commodity switch silicon allows
us to deliver exponential growth in bandwidth capacity
in a cost-effective manner."""
The next step in that set of goals for their data center would be custom silicon where merchant silicon doesn't provide the right combination.
My guess would be that cloud-based chip design software is in the works. That would accelerate AI quite a bit, I should think?
It's actually a big part of why some silicon companies distribute themselves around timezones - so someone in Texas can fire up the software immediately when someone in the UK finishes work.
It's not unusual to see an 'all engineering' email reminding you to close rather than minimize the software when you go to meetings.
But that all means nothing for companies who buy Virtuoso copies from guys trading WaReZ in pedestrian underpasses in BJ.
A number of quite reputable SoC brands here in the PRD are known to be based on 100% pirated EDAs.
This is not a critique, but a call to think about that a bit.
In China, you can spin up a microelectronics startup for under $1M; in the USA, you will spend $1M just to buy a minimal EDA toolchain for the business.
Allwinner famously started with just $1m in capital, when disgruntled engineers from Actions decided to start their own business.
- 130 nm process built with Skywater foundry
- Skywater Process Design Kit (PDK) is currently digital-only
- 40 projects will be selected for fabrication
- 10 mm^2 area available per-project
- Will use a standard harness with a RISC-V core and RAM
- Each project will get back ~100 ICs
- All projects must be open source via Git - Verilog and gate-level layout
I'm curious to see how aggressive designs get with analog components, given that they can be laid out in the GDS, but the PDK doesn't support it yet.
(Remember, 130nm gave us the late-model Pentium 3s, the second-generation P4s, and some Athlons, though all of these had a bigger die size.)
I'm thinking you could make some low-power or very specialized ICs, where these would shine as opposed to a generic FPGA solution.
3.16mm x 3.16mm?
http://www.apollo-core.com/ - I can't easily find how "open source" it is, though, but it's free to download.
There are some other pretty nice and featured 68k cores that are open source (TG68, WF68K30L etc.) but none that is really close to the features and performance of the Apollo 68080.
If you do a little research you'll find out that there's plenty of "stay away" and "can't believe they haven't been sued into oblivion yet" indicators and all sorts of misleading claims and marketing.
My summary would be that it's a tightly-controlled, closed project led by questionable people with well-documented histories of questionable practices, including ignoring copyrights, distributing infringing software, deleting critical posts from their forums, and putting out misleading information.
That would be a fun upgrade if I had an old 68k Mac in good condition. I've thought occasionally about what if Motorola had had the resources to continue developing 68k like x86.
I have a "Firebee", an Atari ST-like machine built around a 264mhz coldfire, and it's quite nice.
My understanding is that Coldfire got used in a lot of networking hardware, due in part to its network order endianness, and partially because Freescale put network hardware support into some of the cores.
But there is no continuing demand for Coldfire, really, so it has stalled, NXP never continued on with it and is going the ARM route now, like everyone else.
Chips that cost $1000 from a distributor cost 1/10th to 1/100th the price when you have a relationship with the manufacturer, mostly because distributors can't sell them very quickly and have to keep a ton of stock to have the SKUs you want.
On a modern FPGA, processor clocks of 200-300 MHz are possible to get with designs that aren't huge.
Honestly instead of chasing after new 68k silicon it'd be better to just emulate on a modern processor. Not the same romance, I know....but
It really is "spiritually" a 68k series processor, just with some cleaning up. I like it.
I believe Freescale currently owns the architecture, and still manufactures some 68k microcontroller cores.
Owns it how? 68060-- the last of 68k's designs-- was released in 1994. Any patents should now be expired.
> Owns it how? 68060-- the last of 68k's designs-- was released in 1994. Any patents should now be expired.
Sure, patents wouldn't be a barrier to clone the design and create an equivalent using the same patented ideas, but copyright still prevents you from copying the design, and will prevent copying significant parts of the design as well.
Shouldn't that already be problematic for the 68k projects in hardware through FPGAs? Apollo already does it and sells hardware, and the MiSTER project also does it by releasing FPGA designs for e.g. the Sega Genesis which has a 68k processor. Is it a different story if you embed 68k in an ASIC?
These are kiosk-sized machines that a company can use to set up a fab with a few million dollars. Any individual can then design a chip and have it fabricated very (as in "I want to make a chip for fun") affordably.
I was not able to find a ton of information on this, but the 190nm process was supposedly ready last year and there were plans to go below this. The wafers are 12mm in diameter (so basically, one wafer -> one chip) and the clean room is just a small chamber inside the photolithography machine. There are also no masks involved, just direct "drawing".
The simplistic explanation for why this works is that electron beams can be easily focused, using magnetic lenses, into a beam that reaches the nanometer level.
These beams can then be deflected and controlled electronically which is what makes it possible to effectively make a cpu from a cad file.
Furthermore, it's very easy to see how the complexity of photolithography goes up exponentially as we scale down.
Therefore I believe it makes sense to abandon the concept of photolithography entirely if we want open source silicon. I believe that this approach offers something similar to the sort of economics that enables 3D printers to become localized centers of automated manufacturing.
I should also mention that commercial e-beam machines are pretty expensive (something like $1M), but I don't think it would be that difficult to engineer one for a mere fraction of that price.
Theoretically it should be feasible to fab 350 nm without double-patterning by optimizing a simple immersion DLP/DMD i-line stepper.
I think ArF immersion with double-patterning should be able to do maskless 90 nm.
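A rough sanity check on those numbers is the usual first-order Rayleigh estimate, minimum feature ~ k1 * wavelength / NA; the k1 and NA values below are ballpark assumptions, not the specs of any particular tool.

    # Rough optical resolution estimates from the Rayleigh criterion:
    # minimum feature ~ k1 * wavelength / NA. k1 and NA are ballpark guesses.
    def resolution_nm(wavelength_nm, na, k1):
        return k1 * wavelength_nm / na

    # i-line (365 nm), dry lens, relaxed k1 -- in the right range for ~350 nm
    print(resolution_nm(365, na=0.6, k1=0.6))    # ~365 nm
    # ArF (193 nm) immersion, aggressive k1 -- tens of nm, before any
    # double patterning
    print(resolution_nm(193, na=1.35, k1=0.30))  # ~43 nm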
I'm not sure where it was, but I remember a seeing a project where someone made a rudimentary homebrew electron microscope by chemically etching the tungsten filament from a light bulb (to get the tip sharp enough) and attaching it to a piezo buzzer that was scored to separate it into four quadrants. The filament could be moved by applying various combinations of voltage to the piezo quadrants.
I didn't find the one I was thinking of (which I think was ca. 2002 and so maybe just vanished by now), but search results suggest that variations of this have been done by several people.
Decades ago computers used magnetic core memory. Those things operate on a macroscopic/classical physics level. You can make a core memory by hand if you buy the ferrous toroid first. But moreover, you mention 3d printers — it’s probably possible to manufacture the toroid on a sub-10k machine these days, be it a 3d printer or CNC machine. Some of these techniques generalize to multiple materials, meaning you could automate both the manufacturing of the toroid and the wires connecting them (and the assembly) and have an actual open-source, easily fabricated memory.
One thing not a lot of people know is that you can create clock-triggered combinatorial logic out of core memories just by routing the wires differently. So you’ve got your whole computation + volatile memory + non-volatile memory built on the same process using just two materials and at a macroscopic scale (think: millimeters). That sounds easier to bring up than silicon.
Yeah, macro-scale has its limitations (speed; power draw!), but it’s still enough to enable plenty of applications, and with room to scale it as the tech gets better.
This very, very much depends on what the algorithm is (integer or FP? how data dependent?), but I would say no for almost all interesting cases.
The only exception would be if you're doing a "mixed signal" chip where some of the processing is inherently analogue and you can save power compared to having to do it with a group of separate chips.
Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.
I wonder how many "test chips" Google will let a non-expert team do to get it right? And whether they provide any "bringup" support?
An open source community would go a long way to fixing an issue like this, and these "black magic" projects are actually a fantastic place for the open source world to get started, because it's an area where there's a ton of room for improvement over the status quo.
No, you actually have more leakage at older nodes, what changes is the ratio of current spent on leakage vs. current spent doing something useful.
Of course, the lower gate capacitance allows for lower switching losses. But adiabatic computing could theoretically recover switching losses, allowing for higher efficiency at older nodes. That can be approached by using an oscillating power supply for instance, to recover charges. If someone was to design something like this for this run, it could be very interesting.
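To put first-order numbers on that argument: abruptly charging a node through a switch dissipates about C*V^2/2 per charging event, while an adiabatic ramp of duration T dissipates roughly (RC/T)*C*V^2, which shrinks as the ramp slows. The component values below are illustrative only.

    # First-order switching-energy comparison; all values are made up
    # to show the scaling, not taken from any real process.
    C = 10e-15    # 10 fF node capacitance
    V = 1.2       # supply voltage, volts
    R = 10e3      # 10 kOhm effective switch resistance
    T = 10e-9     # 10 ns adiabatic ramp time

    conventional = 0.5 * C * V**2          # hard switch: ~C*V^2/2 lost as heat
    adiabatic = (R * C / T) * C * V**2     # slow ramp: ~(RC/T)*C*V^2 lost
    print(f"conventional: {conventional*1e15:.2f} fJ, "
          f"adiabatic ramp: {adiabatic*1e15:.3f} fJ")
    # -> ~7 fJ vs ~0.14 fJ: a slow ramp (the "oscillating power supply")
    #    recovers most of what a hard switch would throw away.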
Now I'm wondering if this isn't some covert recruitment operation by Google: they will likely comb through the applications, select the most promising ones, and the designers will get job offers :)
You have tunnelling losses on bigger nodes as well, they are just not that dominant. Dielectrics got better as nodes shrank, and this is the reason FinFETs became practical (which switch faster, and more reliably on smaller nodes, but leak worse.)
130nm is almost 20 years old at this point. You can do amazing things with this process but saving power is probably not one of them.
So yes, for specific tasks like crypto operations or custom networking, you should be able to make a 130nm ASIC that is going to outperform a 7nm Ryzen. You are not going to be able to make a CPU core that's going to outperform a Ryzen however.
So somebody like me, who did two standard cell based ASICs 25 years ago, probably would have to add a sizable safety margin to produce a reliable chip, and would achieve nowhere near the performance of a pro team at the time.
Previously MOSIS would select a few student/research designs to go along with the commercial MPW runs, frequently on pretty modern fabs. I'm not really sure how much they still run.
(oh here is the MOSIS/TSMC runs for this year https://www.mosis.com/db/pubf/fsched?ORG=TSMC)