Hacker News
Google offers free fabbing for 130nm open-source chips (fossi-foundation.org)
1055 points by tomcam on July 7, 2020 | 384 comments

I've spent some time in the chip industry. It is awful, backwards, and super far behind. I didn't appreciate the full power of open source until I saw an industry that operates without it.

Want a linter for your project? That's going to be $50k. Also, it's an absolutely terrible linter by software standards. In software, linters combine the best ideas from thousands of engineers across dozens of companies building on each other's ideas over multiple decades. In hardware, linters combine the best ideas of a single team, because everything is closed and proprietary and your own special 'secret sauce'.

In software, I can import things for free like nginx and mysql and we have insanely complex compilers like llvm that are completely free. In hardware, the equivalent libraries are both 1-2 orders of magnitude less sophisticated (a disadvantage of everyone absolutely refusing to share knowledge with each other and let other people build on your own ideas for free), and also are going to cost you 6+ figures for anything remotely involved.

Hardware is in the stone age of sophistication, and it entirely boils down to the fact that people don't work together to make bigger, more sophisticated projects. Genuinely would not surprise me if a strong open source community could push the capabilities of a 130nm stack beyond what many 7nm projects are capable of, simply because of the knowledge gap that would start to develop between the open and closed world.

I've been thinking about this a lot lately. In economics, the value of competition is well understood and widely lauded, but the power of cooperation seems to be valued much less - cooperation simply doesn't seem as fashionable. But the FOSS world gives me hope - it shows me a world where cooperation is encouraged, and works really, really well. Where the best available solution isn't just the one that was made by a single team in a successful company that managed to beat everyone else (and which may or may not have just gotten into a dominant position via e.g. bigger marketing spend). It's a true meritocracy, and the best ideas and tools don't just succeed and beat out everything else, they are also copied, so their innovation makes their competitors better too - and unlike the business world, this is seen as a plus. The best solutions end up combining the innovation and brilliance of a much larger group of people than any one team in the cutthroat world of traditional business. Just think about how much effort is wasted around the world every day by hundreds of thousands of companies reinventing the wheel because the thousands of other existing solutions to that exact problem were also created behind closed doors. Think about how much of this pointless duplication FOSS has already saved us from! I really hope the value of cooperation and the example set by FOSS can spread to more parts of society.

Classical economics thinks of people as "rational utility-maximizing actors", which doesn't approximate reality in quite a lot of ways. There's been a move toward more sophisticated models - like that people minimize regret more than they seek the optimal reward ("rational actors with regret minimization.") This switch to more complex underlying models is similar to computational models used in computer science. Algorithms used to only be designed for the RAM computation model, which doesn't model real-life CPUs, which have caches and where not all operations take unit time. Now, there's a wide variety of models to choose from, including cache-aware, parallel, and quantum models. You often get better predictors of the real world this way.

There has been quite a lot of study in economics about cooperation. My favorite is Elinor Ostrom's work on "the commons". She observes that with a certain set of rules, discovered across the world and across varying geographies, people do seem to be able to cooperate to maintain a natural resource like a fishery or a forest or irrigation canals for hundreds of years. Her rules are here (https://en.wikipedia.org/wiki/Elinor_Ostrom#Design_principle...).

The problem with the "rational agent" model is that it's a tautology. Yes sure everyone wants more utility, great, but everyone's utility function is slightly different. As you say some are risk takers pursuing insane rewards/yields/profits with low probability, while others are super risk-averse conservative in their choices, etc.

That said, economics doesn't rest on this. Macroeconomics doesn't care, and neither does micro, labor, health, sports, or developmental econ.

Sure trying to predict how someone (and more interestingly how groups) will behave based on their psych profile is an important area of research, but the aforementioned subfields of econ already have well working assumptions about how people will behave in the aggregate, even if they can't derive it from some exact utility function.

> The problem with the "rational agent" model is that it's a tautology. Yes sure everyone wants more utility, great, but everyone's utility function is slightly different. As you say some are risk takers pursuing insane rewards/yields/profits with low probability, while others are super risk-averse conservative in their choices, etc.

There's nothing in the rational agent model that assumes that everyone has the same definition of utility function. Different people want different things.

Yes, of course, but that's what I mean by problem. Saying people are (boundedly rational) utility function maximizers doesn't give us a predictive theory; it just means, in a fancy way, that people do what they do for "reasons", and everyone usually has a different set of reasons.

Of course, it's fine as a very-very general fundamental theory, if you then want to study how people's revealed and non-revealed (old names for implicit and explicit) preferences aggregate into a utility function. (There's a whole bunch of math about pairwise comparison matrices. Lately there's some movement in that space about using perturbation to model inconsistencies in preferences, etc.)

Yes, if people are rational actors, whatever they do is rational according to their private utility function. So, if anything they do is rational, then how is it "rational", which supposedly depends on some objective metric?

It's rational because objectively the agent does what is best for it. Basically that's the definition of being rational. But without any qualifications, constraints, or additional context this description of decision theory is not really useful. (That was the point I tried to convey in my original comment.)

What even the general theory is useful for is to quickly reduce the very general behavior/decision problem to "show me your utility function and I'll tell you what you will do". Of course then that becomes the problem: how to model utility, and how to formalize expected utility over all possible actions. What about priors (as in path-dependence - do we want to encode that into the utility function, constantly and dynamically updating it the way a Bayesian agent would update its beliefs, or somehow manage it separately)?


Basically I was trying to persuade people not to say things like "economists are dumb, because people are not rational utility maximizers", because that's a false implication. Of course they are, but - just like you said - this doesn't help us better predict people's (agents') behavior, it just gives us a new task: to model their utility functions. And that's where regret comes in. (Which is basically risk-aversion, which is what behavioral economics studies. People are not symmetric about positive and negative expectations. They value avoiding negative-utility events more than they value encountering positive-utility events. And of course with this insight we can now build better economic models - for example we can use this to "explain" why wages are more sticky - especially downward - than we would expect, and with some handwaving we can explain why in times of crisis people are let go instead of having their hourly compensation decreased, why some people are so gullible [peer pressure, confirmation bias ~ cognitive dissonance], and so on.)
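The asymmetry described here (losses weighted more heavily than gains) can be sketched with a toy prospect-theory-style value function. The loss-aversion coefficient below is the oft-cited Kahneman-Tversky estimate; the whole thing is illustrative, not a model anyone in this thread proposed:

```python
# Toy prospect-theory-style value function: losses loom larger than gains.
# The 2.25 coefficient is the classic Kahneman-Tversky estimate; treat it
# as illustrative only.
LOSS_AVERSION = 2.25

def subjective_value(outcome):
    """Value an outcome relative to the status quo."""
    return outcome if outcome >= 0 else LOSS_AVERSION * outcome

def expected_subjective_value(lottery):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * subjective_value(x) for p, x in lottery)

# A fair coin flip over +100/-100 is neutral in expected money...
fair_flip = [(0.5, 100), (0.5, -100)]
print(sum(p * x for p, x in fair_flip))       # 0.0
# ...but negative in subjective value, so a loss-averse agent declines it.
print(expected_subjective_value(fair_flip))   # -62.5
```

This is the sense in which regret/risk-aversion is a modeling refinement rather than a refutation: the agent is still maximizing something, just not raw expected money.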

> Classical economics thinks of people as "rational utility-maximizing actors", which doesn't approximate reality in quite a lot of ways. There's been a move toward more sophisticated models - like that people minimize regret more than they seek the optimal reward ("rational actors with regret minimization")

Even when entities (corporations, for example) are trying to maximize utility, and an optimal decision is desired, there are issues with how much time and resources can be spent making decisions, so optimality has to be bounded in various ways (do you wait for more information? Do you spend more time and compute on calculating what would be optimal? etc.).

> "rational actors with regret minimization."

I keep seeing something like this in workplaces: people don't join forces, they avoid friction. Until something (a crisis) or someone (a capable leader) flips the thing on its head.

Follow the incentives. Any individual employee is massively incentivized to keep their job. There is a slight chance that a certain leader may be incentivized to bring success to the company. That is the exception, not the norm. I learned this the hard way by caring about companies that didn’t care about me.

'Keeping your job' implying submission to social pressure to be accepted, and nothing more? I'm not sure I got your whole argument, so if you have more details, I'd be happy to read :)

> In economics, ... but the power of cooperation seems to be valued much less
I'm not sure that I agree with this. The creation of firms and trade are both cooperative. They aren't altruistic though. (I'm not disagreeing with your overall point, just that cooperation isn't valued in economics.)

Agreed - partnerships abound in the real business world. It might be under-modeled in economic theory and not well taught in business school, but the value of networking and high-level industry relationships is the lifeblood of a good business leader - specifically because of partnering and information sharing.

If anything economics seems like the smarter field when compared to the open source community because the focus is on resource evaluation, allocation, and returns. Open source could probably use more talent but the missing lynchpin is obviously a reward structure. The techno-optimism of the 80s and 90s doesn't scale, we need licensing structures that entitle the parties doing the unprofitable stuff to a share of the profits their work produces. Resisting commercialization has weakened the community not made it stronger, open source can still be free without being free labor.

If you think there's no reward structure, perhaps it's because you can't see it?


It’s the lifeblood of the leader, but not the firm.

A good percentage of being a good leader is being able to pick up the phone and get stuff done, which is often contrary to what the firm wants.

It’s not cooperation at the human level, and that makes all the difference. “Altruism” is a convenient dismissal of a principle (survival through cooperation) that has actually allowed humanity to survive and thrive thus far.

The trick to making it all work is that knowledge and/or tooling (the means of production) ought not be proprietary, but product or service absolutely can be.

FOSS is super competitive, too. Teams splinter off and fork projects, new projects start up that try to dethrone the industry leaders, people compete for status/usage, etc. The main difference is that the work product/process is public and so ideas spread much more rapidly.

Yes but competing in a cooperation is different than cooperating in a competition.

but let's not forget the non-material nature of software allows this, hardware will always be a physical (material) artifact.

however the line does blur when talking about blueprints and designs.

in any case, I think that free software movements are a sociological anomaly. I wonder if there is any academic research into this from an anthropological or historical-economics viewpoint.

also, it seems to me that in some sense the entire market works in cooperation, just not very efficiently (it optimizes for other things than efficiency and is heavily distorted by subsidies and tariffs)

> but let's not forget the non-material nature of software allows this, hardware will always be a physical (material) artifact.

Sort of?

I tried to get into FPGA programming a while ago, and it turns out the entire software stack to get from an idea in my brain to a blinking LED on a dev board is hot garbage. First of all, it's insanely expensive, and second of all, it really, really sucks. Like how is it <current year> (I forgot what year it was, but it was 2016-2018 timeframe) and you've tried to reinvent the IDE and failed?

I think projects like RISC-V and J Core are super cool, but I couldn't possibly even attempt to contribute to them based on how awful the process is.

Check out IceStorm (+Yosys, nextpnr...) for a pretty complete open source toolchain for ice40 FPGAs. It's really amazing to go from the infuriating to use vendor tools with all their quirks and bugs to a single 'make' that just generates the bitfile without using the broken IDE or complaining about licenses...

The FPGAs are also relatively inexpensive (<5-10€ in single quantities depending on the model) but are on the lower end in terms of features and performance.

The tools should also support the more powerful ECP5 FPGAs, but I haven't tried them yet.
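For the curious, that "single 'make'" flow with the IceStorm tools looks roughly like the sketch below. Everything here is an assumption for illustration: the file names (top.v, top.pcf), the FPGA part (HX1K, as found on the iCEstick), and the package are placeholders you'd swap for your own board.

```makefile
# Hypothetical project: top.v (Verilog design), top.pcf (pin constraints).
# Targets an iCE40 HX1K in a TQ144 package; adjust for your board.

top.json: top.v
	yosys -p "synth_ice40 -top top -json top.json" top.v   # synthesis

top.asc: top.json top.pcf
	nextpnr-ice40 --hx1k --package tq144 \
	    --json top.json --pcf top.pcf --asc top.asc        # place & route

top.bin: top.asc
	icepack top.asc top.bin        # ASCII bitstream -> binary bitfile

prog: top.bin
	iceprog top.bin                # flash the dev board over USB

.PHONY: prog
```

Running `make prog` walks the whole chain: Yosys synthesizes, nextpnr places and routes, icepack produces the bitfile, iceprog flashes it. No license servers, no IDE.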

Interesting, thank you for the pointer. I've been playing around with FPGAs a bit and this sounds like it might just be what I was looking for. Is there any starter board that you'd recommend for this workflow?

I've only used the bare chips on custom boards, so I have no experience with the eval boards.

A popular board a few years ago was the iCEstick for about 20€, but I can't find it anywhere for that price now. The ICE40 breakout boards from Lattice are also more expensive now for whatever reason. Olimex has a board with the HX8K that is more reasonably priced, but unfortunately doesn't have an onboard programmer...

For the ECP5, the ULX3S board looks interesting. It's not exactly cheap, but also has more features than just buttons, LEDs, and pin headers like on the other boards.

Same. Started off in love with FPGA design in college. Software design is light-years ahead of that area in terms of tool maturity, functionality, and freedom.

The material nature of hardware matters barely any more than that of software. Yes, you can't just copy bits around, but you can take the same design and fabricate millions of chips for pennies each. The production cost of any given silicon chip has almost nothing to do with its material cost; it has everything to do with the amount of design work that went into it, just like software. None of that appreciably changes the benefits of cooperation vs competition.

There is no historical precedent other than the media industry, which uses the copyright system for an entirely different purpose: to milk as much as possible out of a single investment in time and effort, usually by someone else than the original creator.

The whole thing revolves around the marginal cost of an extra copy of a piece of software being close to zero, no other critical industry gets such economies of scale. Making more chips still requires huge capex and opex.

I'm curious to hear what you mean when you say the entire market works in cooperation? I mean, strategic partnerships happen, and companies work as suppliers for other companies. But that's not the market - the market is where someone wanting to buy something goes and evaluates competing products and picks the one they want to buy. It's pretty comparable to natural selection, where the fittest animals survive and the fittest companies get bigger market share while the least fit companies go bankrupt, and the least fit species go extinct. So I guess you could say that the market functions as an ecosystem - maybe the word you were looking for was 'symbiosis' rather than cooperation? Cheetahs aren't cooperating with lions, they are competing - but relative to the rest of the ecosystem, they exist in a form of symbiosis.

There are various forms of cooperation too. Lions merge with cheetahs to hopefully starve the leopards out, then they domesticate emus and antelopes so that they can survive scorching the rest of the savannah so they don't have to deal with those pesky wild dogs. Then they see tigers doing the same in India and say “hey let's agree that you can run rampant through Africa if we can run rampant through India, but we agree on these shared limits so that we are not in conflict.”

A favorite example is that US legislation banning advertisements for smoking was sponsored by the tobacco industry. They were spending a lot on ads just to keep up with the Joneses; if Camel voluntarily stopped and Marlboro continued, then Camel would go the way of Lucky Strike. They would rather agree to cut their expenditures! But they needed to make sure no young tobacco whippersnappers came in and started showing a couple of ads which they would both have to beat, reigniting the war.
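The standoff above has the structure of a prisoner's dilemma, which a tiny payoff matrix makes concrete. The numbers are invented for illustration; only the ordering of the payoffs matters:

```python
# Hypothetical payoff matrix for the tobacco-advertising standoff.
# Actions: 0 = don't advertise, 1 = advertise.
# payoffs[(a, b)] = (firm A's profit, firm B's profit). Numbers invented.
payoffs = {
    (0, 0): (50, 50),  # neither advertises: both pocket the ad budget
    (0, 1): (20, 60),  # only one advertises: it steals market share
    (1, 0): (60, 20),
    (1, 1): (30, 30),  # both advertise: share unchanged, budgets burned
}

# Whatever the rival does, each firm earns more by advertising...
for rival in (0, 1):
    assert payoffs[(1, rival)][0] > payoffs[(0, rival)][0]

# ...so (advertise, advertise) is the equilibrium, even though both firms
# prefer (no ads, no ads). A legislated ban enforces the better outcome
# that neither firm could credibly commit to on its own.
print("equilibrium payoffs:", payoffs[(1, 1)])   # (30, 30)
print("with an ad ban:", payoffs[(0, 0)])        # (50, 50)
```

The ban works precisely because it binds everyone at once, including hypothetical new entrants.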

Open source is interesting because it seems to be a marvelous, unexpected outcome of the existence of the corporation. Individual people work at corporations and are aware that whatever they produce there is mortal: it will die with that corporation if it decides to stop maintaining it, or if the corporation itself folds. The individual wants his or her labor to survive longer, to become immortal: "This company could go out of business and I will still have these tools at my next job." So in some sense layered self-interests create a push towards corporate cooperation.

Trade and money are just tools to facilitate cooperation. They incentivize agents to cooperate by sharing the value of that cooperation.

'Survival of the fittest' when applied to groups implies cooperation as an essential skill between individuals. It's not all competition. The fittest group is not the one made of the most individually fit members, they also need to function well together.

"selfish individuals beat altruistic individuals. Altruistic groups beat selfish groups. Everything else is commentary."

-David Sloan Wilson & E. O. Wilson

To me, it seems as less of a sociological anomaly and more of an example of the quality of production outside the established competitive norms of capitalism. There are multiple such examples throughout history. The gift economy wasn't born with FOSS development.

There is a lot of literature on the subject of cooperation, especially from anarchist philosophers (e.g. Mutual Aid: A Factor of Evolution, Kropotkin).

> in any case, I think that free software movements are a sociological anomaly, I wonder if there is any academic research into this from an anthropological or an historical economics viewpoint.

I don't have the proper background to make a strong case about this, but I feel like Middle Ages guilds would be closer to the open-source model than to the current "trade secrets" one, wouldn't they?

Likewise, I don't see farmers of ye olde times keeping their crop-growing tricks to themselves as secrets and so on.

Furthermore, we've known about many indigenous cultures where "the tribe" is regarded as more important than the individual, meaning that sociologically they should be more aligned with the open-source model than the capitalistic one, shouldn't they?

Again, I'm not an expert in the area, but it seems to me that our current society is more "historically anomalous" at the commoner level than any more socially-conscious one would be (i.e. common people lean more toward greed and individualism these days than almost ever in the past).

Well, I'm yet to see a free (as in beer) lawyer helping people for free after their regular workday.

Software can be duplicated for near zero marginal cost. It’s a well studied phenomenon in economics.


And corporate consumers can take advantage of public goods without contributing sufficiently to their creation.

Great point. It seems to me the mechanics of competition are at play to some extent in open source: ideas compete against each other, the better ones prevail, and contribute to a better whole.

Whatever one might think of socialism, and I really don't mean to start a political discussion here, FOSS is an example for me that shows that at the very least we're not purely driven by competition.

Haven't thought very deeply about this, but FOSS doesn't seem like socialism, as the capital being competed for is user attention. FOSS projects without enough capital do not gain enough developers, and fall into disrepair and obscurity. Not sure what a socialist FOSS movement might look like, but maybe developers would be assigned to projects as opposed to them freely choosing the projects to work on?

> Haven't thought very deeply about this, but FOSS doesn't seem like socialism, as the capital being competed for is user attention.

I think that's stretching the definition of 'competition' a bit too much.

First off, many people work on FOSS just because they want to, or because they believe that it's a good thing. Not because they want 'user attention'. Furthermore, I'm pretty sure that even the people that are motivated by user attention don't see it as a competition.

> FOSS projects without enough capital do not gain enough developers, and fall into disrepair and obscurity.

There are plenty of FOSS projects that do fine with just a handful of developers, or even just a single one, with perhaps the occasional pull request from others.

> Not sure what a socialist FOSS movement might look like, but maybe developers would be assigned to projects as opposed to them freely choosing the projects to work on?

Maybe I misunderstand, but I get the impression that you're thinking of 'state' socialism. I wouldn't say being assigned stuff is a core aspect of socialism.

All that said, it's not so much that I think FOSS is like socialism, but rather that FOSS is a counter-example to a common argument I hear from especially the more hardcore 'everything should be market forces' capitalists, which is that we're primarily driven by competition.

Hi Mercer, interesting points.

But I think, in general, the FOSS projects that thrive - that gather users who report bugs, that have an active community - are going to be the ones which attract developers. Not necessarily because those developers seek fame and fortune, but by the simple fact that they are more likely to run into those projects, and can contribute to them more easily thanks to better documentation and active devs who can answer questions. It is the projects which compete for attention for their livelihood, like an emergent behavior, even if the developers don't explicitly target it.

Besides, developers are humans too: they can feel pride when their contributions are recognized, and they can consider the work as paving the way to future employment. Contributing to large, active projects gives higher chances for both prospects.

Regarding whether it's state socialism that I'm thinking of, you're probably right. I've never been very clear on the definition of socialism. On the most superficial level, I understand socialism to mean that the community controls the means of production. So when applied to FOSS, that seems to mean that the whole community decides what projects to work on, and in the case where the community decides that something needs to be worked on but not enough devs volunteer, it seems like the work will need to be assigned.

Finally, as you said, it's not about FOSS being like socialism; the interesting aspect is that the devs are not directly competing for big cash prizes. I agree. There are not that many industries so fundamental to our society that have a large contingent of people doing critical work for free. Perhaps education comes close. Politics too, with political power taking a similar role as attention does for FOSS projects, but that's straying pretty far.

The negligible unit economics of software mean that the success of Free Software should be derivable from those old theories. The monopolistic and rent-seeking alternative that is the proprietary computer industry is also really far from some Ricardian utopia.

I don't mean to engage in some "no true capitalism" libertarian defense, but rather point out that a lot of fine (if simplistic) economic models/theory have been corrupted by various ideologies. A lot of radical-seeming stuff is not radical at all according to the math, just according to your rightist econ 101 teacher.

The economy _is_ cooperation.

I.e. a trade is when two parties engage in a mutually beneficial exchange.

Perhaps because competition relates to monopolies, which is where governments intervene, and hence there is demand for economic analysis.

Libertarian economists talk about cooperation in terms of spontaneous order. Milton Friedman had his story of the process to manufacture a pencil as "cooperation without coercion". Basically it's the "invisible hand" driving people to cooperate via price signals and self-interest. I don't know if there's much that can be done with the concept beyond that.

I've also worked on the hardware side a bit as well as in EDA (Electronic Design Automation)- the software used to design hardware. Since you already commented on the hardware side of things, I'll comment on the EDA side. The EDA industry is also very backwards and highly insular - it felt like an old boys club. When I worked in EDA in the late aughts we were still using a version of gcc from about 2000. They did not trust the C++ STL so they continued to use their own containers from the mid-90s - they did not want to use C++ templates at all so generic programming was out. While we did run Linux it was also a very ancient version of RedHat - about 4 years behind. The company was also extremely siloed - we could probably have reused a lot of code from other groups that were doing some similar things, but there was absolutely no communication between the groups let alone some kind of central code repo.

EDA is essential for chip development at this point, and it seems like an industry ripe for disruption. We're seeing some inroads by open source EDA software - simulators (Icarus, Verilator, GHDL), synthesis (Yosys), and even open cores and SoC constructors like LiteX. In software land we've had open source compilers for over 30 years now (gcc, for example); let's hope some of these open source efforts make serious inroads in EDA.

> did run Linux it was also a very ancient version of RedHat - about 4 years behind

How is a 4-year-old distribution "very ancient"? There's very likely plenty of Ubuntu 16.04 in the field, and there's nothing inherently wrong with that.

RedHat uses "old" kernels, if that's what you're referring to with "ancient", but there are reasons for that, and fixes are backported; they're not left unpatched.

4 year old RHL is more like 8 years old for the rest of us. Stability above everything else has its price.

FWIW a lot of c++ developers see std:: and TODO as synonyms. OTOH, having used some of Xilinx's tool, I'd be terrified to read the internals.

> std:: and TODO as synonyms

What does this mean? I've only been in the C++ game for 6 years and for me if I can get something from std rather than rolling my own or pulling in another library, I'm cheering.

Yes, they're wonderful to have, don't get me wrong! They're general-purpose and robust! But they're also insufficient for a lot of needs. One thing that comes up for me a lot is the lack of statically allocated structures, and sometimes there are optimizations that can be made by sacrificing certain functionality.

Lots of people replace the STL with a faster one. See EA's EASTL, for example.

EA’s STL had more to do with custom allocators and the fact that a lot of the platforms we were developing for 10 years ago didn’t have mmap or paged memory management (or at least nothing that they wanted to expose to us lowly user-mode gamedevs).

> Want a linter for your project? That's going to be $50k.

the thing is, I think it's a miracle that open source software even exists, and I don't find it strange that nowhere else has replicated the success. Because open source, at heart, is quite altruistic.

I think the principles underlying FOSS are found everywhere. It's just that the idea of 'markets' and 'transactions' being the ubiquitous thing has infected our thinking.

Wholeheartedly agree. In parts of society where our distorted ideas of "productivity" and "success" are absent, people share more freely. The island where my family is from comes to mind (Crete, Greece). There—in the rural areas—people were able to deal with the 2007-2008 crisis a lot more effectively than they did in the cities—esp. compared to Athens. What I observed is that if someone didn't have enough, the rest of the village would provide. There was a general understanding that that way the people with less would get back on their feet and help contribute to the overall pool.

Wonderful example!

I'd add that just looking at how families or circles of friends operate is also enlightening. The most cynical view is that these interactions are 'debt' based, but practically speaking that often isn't true either. I help my mother not because I calculate the effort she has invested in me or the value she brings me. I just do it because I love her and I don't need to think about how I've come to feel that way (which very well might be based on measurable behavior on her part).

IMO this has nothing at all to do with "distorted ideas of productivity and success" and everything to do with the rural vs urban lifestyle. There are plenty of similar examples like the one you gave that happen every day here in the U.S. As a general human rule, smaller communities are more tightly knit and care more for each other.

That is definitely a factor. More complex social systems provide a degree of distance between you and the rest of the community (how can anyone keep up with having a sort of relationship with a few million others).

What is present in rural areas is a sort of respect and fear of exclusion (I think these two are reciprocal). But it is not impossible to have that on a local level in a large city. Having lived in both London and New York (similar population size, vastly different population density) I've seen this happen. In Manhattan specifically there is little respect for neighborhoods—comparatively—while in London residents care for their boroughs. Some direct actions that might have fostered that are simple things like community gardens/allotments which are very common in the UK.

that's my humble opinion, I'm not an expert in human social behavior.

but does this only work when the number of people in the tribe is low, and everyone has some social accountability to each other?

The problem of a free-rider is ever present, and in big cities, that problem is evident and thus you don't get the sort of community effort that you see in a small village.

Yeah - see “Dunbar’s Number”

Beyond Dunbar’s Number, it’s not clear that you can employ mutual-aid theories like tit-for-tat to restrain free-riding and cheating.
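For what tit-for-tat looks like concretely, here's a toy iterated prisoner's dilemma with the standard textbook payoffs (the strategies and round count are arbitrary choices, not anything from the thread). It shows the mechanism that makes mutual aid work in small groups: defection is punished on the very next encounter.

```python
# Minimal iterated prisoner's dilemma. Payoffs are the textbook values:
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy the opponent's last move."""
    return 'C' if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    """Run the game; each strategy sees only the opponent's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploit, then punished
```

The catch, as the comment notes, is that this only works when the same players meet repeatedly and can remember each other - which is exactly what stops scaling past Dunbar-sized groups.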

I think the truth is in the middle: there are many examples of secret processes throughout history and craftsmen jealously guarding processes except through apprenticeship programs where payment was years of cheap labor.

Oh yeah, it's not a binary thing. That said, in the case of craftsmen and their apprentices, I think the relationship was much less cold and transactional than, say, my startup friends who got a bunch of interns as cheap labor. At times perhaps, but not generally.

It actually took a lot of hard work in the early days, to prove the model as valid and also defend the legal code and rights that underpin FOSS.

"Miracle" as in the tireless efforts of the GNU project decades ago, not a random act of cosmic rays, let's be clear.

You of little faith...

This is the record of the miracle of St. iGNUtius [0]. Back in The Day, before all the young hipsters were publishing snippets of code on npm for Internet points, Richard Stallman seethed in frustration at not being able to fix the printer driver that was locked up in proprietary code. While addressing St. iGNUtius, patron saint of information and liberty, he had a vision of the saint saying that he had interceded for Stallman, and holding out a scroll. On the scroll was a seal, and written on the seal in phosphorescent green VT100 font was the text "Copyright St. iGNUtius. Scroll may be freely altered on the sole condition that the alterations are published and are also freely alterable." Upon opening the seal, and altering it according to the invitation, Stallman saw the scroll split into a myriad of identical scrolls and spread throughout the world, bringing liberty and prosperity to the places it touched. Stallman hired some lawyers to write the text of the seal in the legal terms of 20th century America. Thanks to the miracle of St. iGNUtius, software today is still sealed with the seal, or one of its variants, named after the later saints, St. MITchell and St. Berkeley San Distributo.

[0] A twentieth-century ikon of St. iGNUtius: https://twitter.com/gnusolidario/status/647777589390655488/p...

It's far too easy for people new to this to overlook the importance of the Free Software Foundation.

The original RMS email announcing the project is a nice read. When placed in the context of his personal frustration with commercial software it can also be seen as a line in the sand.

Or it's artistic expression for a certain type of individual, and we have accessible art everywhere

Isn't it a miracle! It's fantastic, I've been reading old computer magazines from the 80s and 90s on archive.org, and the costs to set yourself up back then were just astronomical. I remember seeing C compilers priced at $300, or even more.

I'm just 40; I started programming in '87 and got serious around '93. I remember getting my hands on a copy of Turbo Pascal from a family friend. Not expensive, but it was for a kid, and it was amazing.

Now I grumble if it’s a pain to install a massive open source tool chain for free and these days it’s rarely painful.

I should stop and appreciate that more once in a while.

I think a full copy of Visual Studio in the late 90s was like $700-800.

This isn't true. Writing open source software is more profitable: it's cheaper (because everyone works on it) and works better (because everyone works on it).

Do you have data to back this up? Because all accounts I've heard say that it's incredibly difficult to make money in open source.

The easiest way to make money in open source is to be a company that makes money from a product that depends on, but isn't, the open source product. Then other businesses will improve your infrastructure for free and make you more money.

aka the free-rider problem.

That's why I don't think hardware open source will work; that this model "worked" in the software world is a miracle. By "worked", I mean that it exists.

It's not a free-rider problem if you were the one that released the open source infrastructure, or if you use it, need to change it, and send the patches back in. (That's why the GPL is a good license.)

> it's incredibly difficult to make money in open source

This is true, but in the markets where open source thrives, it's even harder to make money in closed source. One talented programmer might make a new web framework, release it, and gain some fame and/or consulting hours. Good luck selling your closed source web framework, no matter how much VC backing you have.

Making money only writing open source is possible, but complicated and out of scope for this comment.

But writing some open source - making your tools (compilers, linters, runtimes), libraries, frameworks, monitoring, sysops/sysadmin tooling, and so on open source is much more profitable, and that's a huge subdomain of open source out there right now.

This doesn't really answer my question though. Seems like the other comment to my question is spot on. If you divide all company actions into making money or saving money then open sourcing your toolset is more of a saving money thing as you can get people who aren't on your payroll to contribute to them. That's all well and good but not exactly what I think of when someone says "making money with open source."

That seems like an accounting mindset. Engineering is not a cost center (to be minimized); it's an investment in the future. This view offers a lot more flexibility and options, including cooperation. It also suggests that you may get back more than you put in. Invest wisely, of course.

> It also suggests that you may get back more than you put in

You can also get back significantly less than what you put in.

For a lot of corporate open-source the goal is to commoditize a complement of your revenue-generating product and hence generate more sales. If you're Elastic, having more companies running ElasticSearch increases the number of paying customers looking for hosted ElasticSearch. If you're RedHat or IBM, having more companies using Linux increases the number of companies looking for Linux support. If you're Google, having more phones running Android increases the number of devices you can show ads on.

Similarly for independent developers. You don't make money off the open-source software itself. You make money by getting your name out there as a competent developer, and then creating a bidding war between multiple employers who want to lock up the finite supply of your labor. The delta in your compensation from having multiple competitive offers can be more than some people's entire salary.

DARPA is also funding ($100M) open-source EDA tools ("IDEAS") and libraries ("POSH"): https://www.eetimes.com/darpa-unveils-100m-eda-project/ (this was 2 years ago, I'm not sure where they're at now)

They just wrapped up Phase 1 of IDEA and POSH, programs that were explicitly trying to bring FOSS to hardware. There are now open-source end-to-end automated flows for RTL to GDS, like OpenROAD.

JITX (YCS18) was funded under IDEA to automate circuit board design, and we're now celebrating our first users.

Great program.

I have thought about this at one point and have many friends in the EDA industry. I can say with conviction that you are absolutely right. If you want to imagine a parallel to software, just imagine what would have happened to the open source movement if gcc had not existed. That is the first choke point in EDA, and then there are proprietary libraries for everything. Add to this intentionally suboptimal software designed to maximize revenue and you get a taste of where we are at this point. IMO the best thing that could happen is for some company or a consortium of fabless silicon companies to buy up an underdog EDA company and just open-source everything, including code/patents/bug reports. I'd bet within a few years we would have more progress than we had in the last 30 years.

I think the underlying issue here is that IC design is one or two orders of magnitude more complex than software. In my experience, the bar for entry into actual IC design is generally a masters or PhD in electrical engineering. There is a lot that goes into the design of an IC. Everything from the design itself to simulation, emulation, and validation. Then, depending on just how complex your IC is, you have to also think about integration with firmware and everything that involves as well.

I don't know that hardware is inherently more complex than software.

The issue I see in hardware is that all complexity is handled manually by humans. Historically there has been very little capacity in EDA tools for the software-like abstraction and reuse which would allow us to handle complexity more gracefully.

Nope, this is just not true.

You can crank out correct HDL all day with https://clash-lang.org/, and the layout etc. work that takes that into "real hardware", while not trivial, is a less creative optimization problem.

This is a post-hoc rationalization of the decrepitude of the (esp. Western[1]) hardware industry, and of the rampant credentialism that arises when the growth of the moribund rent-seekers doesn't create enough new jobs.

[1]: Go to Shenzhen and the old IP theft and FOSS tradition has merged in a funny hybrid that's a lot more agile and interesting than whatever the US companies are up to.

That might be true for analog/mixed signal design, but not for CMOS. The design rules are built into the CAD. The design itself is an immense network of switches.

Not having an advanced degree doesn’t mean you can’t master complexity. It’s the same as with software.

Same issues apply to digital ICs as well. The design is much more than just the rules encoded into the logic. What size node are you targeting? Is there an integrated clock? What frequency? What I/O does the IC have? What are the electrical and thermal specifications for the IC? Does it need to be low Iq? What about the pinout? What package? How do you want to structure the die? There are a lot of factors involved with determining the answers to these questions and they are highly interdependent.

The advanced degree is not meant to teach you how to grok complexity. It's to teach you what problems you can expect to encounter and how to go about solving them.

I suggest you won't learn those things in college. You RTFM and get some experience on real life design projects.

Now, if you are in CMOS process design or responsible for the CAD software itself and its design rules, it helps (a lot!) to be a physicist. For that you need the MS and PhD training.

Signal integrity and analog design start to share a lot of similarity as you start pushing around the edges.

Absolutely. At the bottom, it's all analog!

And the cost for failure is a lot, lot higher... it’s a different model of development because there are many different challenges involved.

Why is it so high if the fab is free? Intrinsic cost of development time and tooling?

Time especially (though costs are also high). Even with infinite budget you still have a cycle time of months between tape-out and silicon in hand.

Ah, you're talking about this as a current-day commercial project. Carry on :-)

I'm a professional PCB designer among other things. Most of what I have learned came from hard-won experience designing PCBs for projects done for myself and my own education even though a custom PCB was hardly cost effective in a commercial sense. They were just much cheaper since my time was free. They weren't useless projects in that what I made was not easy with off the shelf parts, but it would have been possible and there wasn't a timeline to pressure me yet.

But I would not be a professional PCB designer now if it had not been approachable to someone without a budget but with plenty of time and motivation. I've basically spent as much money learning how to make PCBs as other people spend on things like ski trips or going on vacations or other hobbies. A free fab and tools to create designs is a godsend when designing these things is something you want to do for interesting experiments and learning in an otherwise totally inaccessible field. Even if you think one needs a masters or PhD to do this right, being able to fail cheaply is a pretty amazing learning tool...

That's what this announcement means to me -- free fab means I can finally learn how to do this and get good enough at it that when the time comes that this is a better solution than trying to combine off the shelf functionality I will be well positioned to take advantage of that change.

I am thrilled to release open source designs for cool chip functionality once I'm skilled enough to do it and the only way I'd get there is if the direct cost to me was nothing (even if it's slow).

> free fab means I can finally learn how to do this

hate to tell you this, but probably not. there are only 40 slots. those are going to quickly be eaten by people that already know how to do this.

the fab isn't the expensive part anyway. you can get a small run of ASICs done for single-digit thousands of dollars

If the fab isn't that expensive, all the more reason to learn how to do it proficiently for free, no? Even if it doesn't get produced it seems like a good learning project.

absolutely! but that already existed. (free tools that don't result in something that can be fabbed)

what you would get out of this is access to a google-promoted open source PDK. It's specific to SkyWater, so "open source" doesn't mean a whole lot, not today anyway. The available libraries are very immature. It's clearly a nice promotion for SkyWater, one I am enthusiastic about, but nonetheless not the shimmering beacon you're looking for.

You already know this: time is your most precious commodity. Don't bumble your way through a thinly disguised and immature FOSS offering. Cadence Virtuoso online training is just $3k per seat per year[1]. That said, I'm completely talking out of my rear. I've no idea if the training is actually useful without having a license for Virtuoso, where the webs say license costs are 6 figures per seat. I'd enroll in a university program to get access ... [2]

I just suggest this based on your stated goals, to be able to have enough proficiency to use professionally. Certainly you don't need 10,000 hours, and some of that is amortized by related experience, but time is your #1 enemy here, not money. Any money you can spend to jumpstart things is money well spent. OTOH if you just want to enjoy exploring IC design in a noncommittal hobbyist way, as a learning "experience" ala MasterClass, then this project does seem an excellent starting point.

SkyWater is a "trusted foundry", whatever that means, apparently completely US-based. Therefore, I think the most valuable thing that could ultimately (say 5 years hence) come out of this would be for OpenTitan to be "ported" to ASIC and to the SkyWater process. 130nm is large but for this application, for IoT in general, for anything not a mobile handset, it would be powerful. Imagine a Raptor TALOS workstation with an OpenTitan RoT! Or, your own design PCIe card with your own fabbed and X-Ray verifiable RoT. Powerful.

[1] https://www.cadence.com/content/dam/cadence-www/global/en_US... (slides 3 and 4)

[2] https://www.cadence.com/en_US/home/company/cadence-academic-...

Heck, it's a whole 10 sq mm. I'm sure they could work something out to combine multiple smaller projects into one slot, muxed by the supervisor.

and then you're going to do a secondary saw operation? $$$

Yeah, it's a lot more like PCB design, just more extreme in cost and time (especially when looking at debugging and rework). I don't think it's particularly more intrinsically difficult for digital circuits, but it's a lot different from the rapid iteration cycles you have from software (which is the main point). Because of the timescales even for hobbyist stuff you want to invest a lot more in verification of your design before you actually build it. Also the amount of resources available for it is dire, even for FPGAs a hobbyist has a much harder time finding useful information compared to software.

IC design does not need a masters or PhD, that's just what companies want to hire. The old school guys do not have these accreditations and somehow they're still doing fine.

-Engineering PhD

It's not "IC design" that's orders of magnitude more complex. It's "IC implementation" where the complexity lies.

Design for test, logical verification (simulation, logical equivalence checks, electrical design rules), physical implementation (library development and characterization, floorplanning, place and route, signal integrity), and signoff (timing closure, electrical design rules (again), physical verification, OPC) all require highly complex tools to automate. For large designs you will have people dedicated to each individual step, because the ways in which things can go wrong -- and the absolute necessity of things going right to get a working chip -- are legion. And there's no substitute for experience to know the right questions to ask.

It's like the difference between driving your car across the country and flying to orbit. If you make a wrong turn in your car you can just make another turn or backtrack (edit the source code and rebuild). If you don't have the right torque on the tank strut bolts on your rocket, "You will not go to space today".

Source: 30 years in the ASIC and EDA industries doing chip implementation and EDA tool flow development.

> Want a linter for your project? That's going to be $50k

Another interesting open source EDA project coming out of Google is Verible: https://github.com/google/verible which provides Verilog linting amongst other things.

This is a reality that exists in any limited market. Tools like nginx and mysql count their for-profit users in the millions. This means that there are tremendous opportunities for supporting development. By this I mean companies and entities who use the FOSS products in support of their for-profit business in other domains, not directly profiting from the FOSS.

FOSS development isn't cost-less. And so the business equation is always present.

The degree to which purely academic support for open source can make progress is asymptotic. People need to eat, pay rent, have a life, which means someone has to pay for something. It might not be directly related to the FOSS tool, but people have to have income in order to contribute to these efforts.

It is easy to show that something like Linux is likely the most expensive piece of software ever developed. This isn't to say the effort was not worth it. It's just to point out it wasn't free, it has a finite cost that is likely massive.

An industry such as chip design is microscopic in size in terms of the number of seats of software used around the world. I don't have a number to offer, but I would not be surprised if the user base was ten million times smaller than, say, the mysql developer population (if not a hundred million).

This means that nobody is going to be able to develop free tools without massive altruistic corporate backing for a set of massive, complex, multi-year projects. If a company like Apple decided to invest a billion dollars to develop FOSS chip design tools and give them away, sure, it could happen. Otherwise, not likely.

I am finding game development to be a tiny bit like this also: very little open source, lots of home-made clunky code, lots of NDAs and secrets. Generally, a much worse developer experience with worse tooling overall. To play devil's advocate, this makes game dev harder, which isn't entirely bad because there is already a massive number of games being made that can't sell, so it reduces competition a tiny bit. Also, it's nice to know you can write a plugin and actually sell it. Still, it's weird. The Unreal community can even be a bit unfriendly or hostile, and they will sometimes mock you if you say you are trying to use Linux. Then again, Unity's community is unbelievably helpful.

Commercial games benefit way less than other code from being open source, since they are very short-lived projects once released.

Further, games would be very easy to copy for users, to develop cheats for multiplayer, to duplicate by competitors, etc.

The games, sure. But what about the tooling? Behaviour trees, fsm visualizers, debug tools, better linters, multiplayer frameworks, ecs implementations, or even just tech talks. Outside of game dev, there are so many tech talks all the time on every possible subject, sharing knowledge. In game dev, there is GDC and a few others, but it’s just far less common

The most closed multiplayer games die the day they shut down the servers.

The longest lived games are the longest lived because they are open enough for the community to keep them alive.

That depends a lot on platform and engine. X360 development was still one of the best damn environments on dedicated hardware I've ever worked on. PIX was nothing short of amazing, and Visual Studio is still a bar I haven't seen touched from an IDE perspective.

Compare that with something like Android where I'm lucky if I can get GDB to hit a single breakpoint on a good day.

Could a lot of this backwardness also be explained by patents and litigation risk? There are a lot of patents around hardware, seems like there'd be a high chance of implementing something in hardware that is patented without knowing it's patented.

The whole industry still operates like the 90s right down to license management and terrible tooling. It's one of the few multi-billion dollar industries that was mostly untouched in the dotcom, and is still very old-school today.

The problem is, it's also a very, very tough industry to disrupt. Not for lack of trying though.

It's interesting to consider this from a finance standpoint too. Consider/compare/contrast with open source software ecosystems, where large companies are able to effectively “strip mine” large amounts of money from the ecosystem.

Hopefully there is a middle ground somewhere, where the folks working on open source software can get compensated for their work so as to enable a healthier system overall, and we aren't all just “software serfs” sharecropping for our overlords.

License your work under the AGPL, and they can't do that – at least, without arranging another license with you.

My last job was supporting some hardware companies' design VC. Absolutely insane.

I think it's also a cultural thing. Like you said, lots of your own special secret sauce, and so many issues trying to fix bugs that may have to do with that secret sauce.

Can't say I miss it at all really.

Yes 100%. Maybe software is really unique in this regard, and HN and such forums are a gift?

Tangent: I've noticed this problem in dentistry and sleep apnea! The methods of knowledge transfer in the field seem to be incredibly inefficient? Or there are tons of dentists not learning new things? (I recall a few dentists on HN -- I would be interested in any comment either way on this.)

The reason I say this is that many patients can't tolerate CPAP (>50% by some measures). There are other solutions, and judging by conversations with friends and family, dentists and doctors don't even know about them!


My dentist gave me one of these alternative treatments for sleep apnea, which was very effective. It's a mandibular advancement device (oral appliance). Even the name is bad! They have a marketing and branding problem.

Random: Apparently Joe Rogan's dentist did a similar thing with a different device which he himself invented. Rogan mentioned it 3 times on the air.

So basically it appears to me that practitioners in this field don't exchange information in the same way that software practitioners do (to say nothing of exchanging the equivalent of working code, which is possible in theory).

I looked it up and there are apparently 200K dentists in the United States. It seems like a good area for a common forum. I think the problem is that dentists probably compete too much rather than cooperate? There is no financial incentive to set up a site of free knowledge exchange.

Related recent threads I wrote about this:

https://news.ycombinator.com/item?id=23666639 (related to bruxism, there seem to be a lot of dentists who don't seem to understand this, as I know multiple people with sleep apnea and bruxism who don't understand the connection)

https://news.ycombinator.com/item?id=23435964 (the book Jaws)

> mandibular advancement device

Those things are in the basic apnea treatment palette over here (Netherlands), actually more common than PAP machines. Why do you think they are alternative?

Well the picture might be totally different in the Netherlands. In the US, CPAP is the most common treatment, even though so many people can't use them (including me).

I've talked to several people who have a CPAP, and they don't even know of the existence of the mandibular advancement device. Their doctors and dentists apparently don't tell them about it!

I'm puzzled why that is the case. I think it has something to do with insurance. If that's true, it's not surprising that other countries don't have the same problem!

I think it might need a "stone soup" kickstart.

This sounds like there might be non-trivial gains out there if more people looked at how HDL is compiled to silicon.

also have you seen what passes for software in the hardware world? i’m talking about EDA. i haven’t used cadence etc but altium is a trash fire and it’s $5k/seat. below that level the tools are even more atrocious. i’m mystified why someone doesn’t come in and disrupt that space.

Posts like this make me so glad I was directed by market forces out of hardware design into software.

Absolutely. The capitalist conception of closed competition and profit motive is taken virtually as dogma nowadays. However, we see its many disadvantages: there are multiple companies ("teams of people") spending billions of dollars and millions of man-hours in largely duplicated efforts, which in the end lead to a product inferior to what could be done with cooperation.

Imagine a combined Intel+AMD+Samsung+Nvidia behemoth pooling together their expertise. Internal competition would still exist, but for actual technical reasons now instead of market ones. One could imagine myriad ways to fund such a cooperative endeavour, which are never even tried because the current model is sacred.

You're making an argument that open source is better because it is free. This is a 30 year old argument. The problem is, people need to get paid in the interim. For example, Intel is the biggest contributor to the Linux kernel, but without Intel paying its employees by charging for chips, millions of patches would never have made it into Linux. I'm not saying you're wrong, I'm saying it is more nuanced than you are implying.

The interesting thing with open source is that it devalues the contribution of the software engineer. Your effort and ideas are now worth 0. You either do that work for free, on your spare time, or are paid by some company to write software that is seen, by the company paying you, as a commodity. Open source is at the extreme end of neoliberalism. It is really a concept from the savage capitalist mentality of the MBA bots that run the corporate world. They certainly love open source.

Companies love open source projects with MIT-style licenses. If you license your project GPL, no company will touch it, unless they really, really have to.

This is amazing.

I think the main reason why open source has taken off is because access to a computer is available to many people, and as cost is negligible, it only required free time and enough dedication + skill to be successful. For hardware though, each compile/edit/run cycle costs money, software often has 5-digit per seat licenses, and thus the number of people with enough resources to pursue this as a hobby is quite small.

Reduce the entry cost to affordable levels, and you have increased the number of people dramatically. Which is btw also why I believe that "you can buy 32 core threadripper cpus today" isn't a good argument to ignore large compilation overhead in a code base. If possible, enable people to contribute from potatoes. Relatedly, if possible, don't require gigabit internet connections, so downloading megabytes of precompiled artifacts that change daily isn't great either.

Not only is the software expensive it's often crap. By which I don't mean, oh no it doesn't look nice - crap as in productivity-harming.

For example, Altium Designer is probably the most modern (not most powerful although close) PCB suite and yet despite costing thousands a seat it is a slow, clunky, single-threaded (in 2020) program (somehow uses 20% of a 7700k at 4.6GHz with an empty design). Discord also thinks that Altium Designer is some kind of Anime MMO

KiCad nightly can now import Altium Design files, might want to give it a try ;)


KiCad is getting very good but it requires a lot of work to compete with the big boys - for example there's no signal integrity built in, and impedance control is fairly detached from your board i.e. I don't think you can do RC on impedance control yet. I don't need a huge amount but signal integrity is fairly important for the project I'm designing.

From what I can tell a lot of parametric design software is also single threaded. I felt like this was an opportunity where usage of multiple cores could make Freecad stand out a little bit. Except Freecad uses opencascade as their kernel and they require you to sign a CLA just to download the git repository. Considering that barrier to just cloning the code I decided to not contribute anything. They do offer zip file downloads of the source code but at that point I lost interest.

> Except Freecad uses opencascade as their kernel and they require you to sign a CLA just to download the git repository.

    git clone https://git.dev.opencascade.org/repos/occt.git
It's not well-advertised, but they do offer public read-only HTTP access to the git repository.[1] This URL really should be listed on the Resources page as well as the project summary in GitWeb.

[1] https://dev.opencascade.org/index.php?q=node/1212#comment-86...

SolveSpace now has some code paths multithreaded. It's not clear if this will make the next release but you can build from source with -fopenmp.

Like you say, it's kind of shocking to see one core running at 100 percent while the rest do nothing and the app is sluggish in 2020.

I suspect geometric kernels and 2D/3D renderers don't fall into the "easy to parallelize" category. Of course there are functions that use multiple threads, but it's not obvious how you could build the core system to do so. However the code in CAD software is often pretty old, it wasn't that long ago that many of these still used intermediate mode OpenGL and I wouldn't be surprised if some still do.

In the same vein, ECAD tools don't use GPU-accelerated 2D rendering but instead use GDI and friends (which used to be HW-accelerated, but hasn't been since WDDM/Vista).

A lot of "easy" opportunities to improve UX and productivity.

It seems like it depends a lot on your representation of the circuit network. If you consider each trace and PCB element as a node in a graph which maps the connections of the traces and PCB elements then you could parallelize provided you can describe the boundary conditions at each node. There's a degree to which they're interdependent, but there are also nodes at which the boundary condition is effectively constant and I think those would make good cut points for parallelization.
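A toy sketch of that cut-point idea (the netlist, the node names, and the per-island "analysis" below are all invented for illustration, not real EDA code): model the board as a graph, remove the nodes whose boundary condition is effectively constant, and analyze the resulting independent islands in parallel.

```python
# Toy sketch: cut a PCB net graph at nodes with effectively constant
# boundary conditions (e.g. a solid ground pour), then analyze the
# resulting independent islands concurrently. Netlist and node names
# are made up for illustration.
from concurrent.futures import ThreadPoolExecutor

# adjacency list: two nets that only meet at the ground node
NET = {
    "vin": ["r1"],
    "r1":  ["vin", "gnd"],
    "clk": ["buf"],
    "buf": ["clk", "gnd"],
    "gnd": ["r1", "buf"],   # effectively constant boundary condition
}
CUTS = {"gnd"}              # cutting here decouples the two islands

def islands(net, cuts):
    """Connected components of the graph with the cut nodes removed."""
    seen, comps = set(), []
    for start in net:
        if start in cuts or start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            seen.add(n)
            stack.extend(x for x in net[n]
                         if x not in cuts and x not in comp)
        comps.append(comp)
    return comps

def analyze(island):
    # stand-in for the real per-island signal-integrity work
    return sorted(island)

# islands don't interact, so they can be processed in parallel
with ThreadPoolExecutor() as pool:
    results = list(pool.map(analyze, islands(NET, CUTS)))
print(results)  # [['r1', 'vin'], ['buf', 'clk']]
```

The hard part in practice is of course deciding which nodes really have constant boundary conditions; the parallelization itself is the easy half.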

> Discord also thinks that Altium Designer is some kind of Anime MMO

Hardly Altium Designer's fault, but I too would avoid using it.

I thought xpcb/gschem were decent although admittedly I’ve only ever tried PCB design once.

I believe you're talking about the EDA toolchain.

Even though it has a long history of open-source attempts, as pointed out by Tim in his presentation, they are few and far between, and massively underwhelming compared to the thriving open source software community.

However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.

> However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.

The opensource EDA toolchain community is already producing some good stuff, Symbiflow: https://symbiflow.github.io/ is a good example, it's an open source FPGA flow targeting multiple devices. It uses Yosys (http://www.clifford.at/yosys/) as a synthesis tool which is also used by the OpenROAD flow: https://github.com/The-OpenROAD-Project/OpenROAD-flow which aims to give push-button RTL to GDS (i.e. take you from Verilog, which is one of the main languages used in hardware to the thing you give to the foundry as a design for them to produce).

The Skywater PDK is a great development and a key part of a healthy opensource EDA ecosystem, though there's plenty of other great work happening in parallel with it. You will note that some people are involved in several of these projects; they're not all being developed in isolation. The next set of talks on the Skywater PDK include how OpenROAD can be used to target Skywater: https://fossi-foundation.org/dial-up/

Very similar to what you just said, I suspect that a driving factor in the state of open source in hardware is that anyone working in hardware almost by definition has large corporate backing, since producing hardware is so capital intensive (compared to software).

If that is basically a given, why publish anything for free, when you can instead charge 10k/seat in licensing?

Possibly because initially the open source software will be significantly worse than the proprietary software and thus won't get any sales, and it will only be better with a lot of contributions, but then it's already freely available and so it still won't get any sales (but might get support/SaaS contracts).

Sounds like there should be open source software for such a thing? I bet the software for laying out transistors and so on will suddenly become viable with something like this, good idea Google!

There is open source software, a good overview is on http://opencircuitdesign.com/qflow/index.html.

Ripe for some innovation maybe...

Strategically, could this be part of a response to Apple silicon?

Or put another way, Apple and Google are both responding to Intel's/the market's failure to innovate enough, each in their own idiosyncratic manner:

- Apple treats lower layers as core, and brings everything in-house;

- Google treats lower layers as a threat and tries to open-source and commodify them to undermine competitors.

I don’t mean this free fabbing can compete chip-for-chip with Apple silicon of course, just that this could be a building block in a strategy similar to Android vs iOS: create a broad ecosystem of good-enough, cheap, open-source alternatives to a high-value competitor, in order to ensure that competitor does not gain a stranglehold on something that matters to Google’s money-making products.

These are not related at all. Only common element is making silicon.

Apple spends $100+ million to design high-performance microarchitectures on leading-edge processes for their own products.

Google gives a tiny amount of help to hobbyists so that they can make chips on legacy nodes. Nice thing to do, nothing to do with Apple SoCs.


Software people on HN constantly confuse two completely different things:

(1) Optimized high-performance microarchitecture for the latest processes and large volumes. This can cost $100s of millions and the work is repeated every few years for a new process. Every design is closely optimized for the latest fab technology.

(2) Generic ASIC design for a process that is a few generations old. Software costs a few $k or $10ks and you can use the same design for a long time.

> Nice thing to do

I don't believe Google does anything because it's a "nice thing to do". There's some angle here. The angle could just be spurring general innovation in this area, which they'll benefit from indirectly down the line, but in one way or another this plays to their interests.

> I don't believe Google does anything because it's a "nice thing to do".

If only Google had this singular focus... From my external (and lay) observation - some Google departments will indulge senior engineers and let them work on their pet projects, even when the projects are only tangentially related to current focus areas.

Looking at Google org on Github (https://github.com/google); it might be a failure of imagination on my part, but I fail to see an "angle" on a good chunk of them.

Google has never created a product that does not collect data in a unique manner apart from its other products.

They must be some kind of genius. I don't see how they are going to be able to extract personal information out of here.

They're not doing this out of the kindness of their heart. Just because we don't know the data being collected here (yet) does not invalidate my statement. Name a google product and you can easily identify the unique data being collected.

Not necessarily personal. Maybe training a robot to design circuits?

> few generations old

And by old, I mean /old/. 130 nm was used on the GameCube, PPC G5, and Pentium 4.

Think of all the chips from then and before that are becoming rare. The hobbyist and archivist communities do their best by keeping legacy parts alive and using modern replacements and things like FPGAs, but being able to fab modern drop-in replacements for rare chips would be amazing.

Things don't have to be ultra modern to offer value.

That's not terribly long ago, really. My understanding is that a sizeable chunk of performance gains since then have come from architectural improvements.

Probably the fastest processor made on 130nm was the AMD Sledgehammer, which had a single core, less than half the performance per clock of modern x64 processors, and topped out at 2.4GHz compared to 4+GHz now, with a die size basically the same as an 8-core Ryzen. So Ryzen 7 on 7nm is at least 32 times faster and uses less power (65W vs. 89W).

You could probably close some of the single thread gap with architectural improvements, but your real problems are going to be power consumption and that you'd have to quadruple the die size if you wanted so much as a quad core.

The interesting uses might be to go the other way. Give yourself like a 10W power budget and make the fastest dual core you can within that envelope, and use it for things that don't need high performance, the sort of thing where you'd use a Raspberry Pi.
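For what it's worth, the "at least 32 times" figure above roughly checks out as cores × clock × IPC; the modern-side clock and IPC numbers here are my own rough assumptions, not measured data:

```python
# Back-of-envelope for the speedup claim, using the comment's 130nm
# Sledgehammer figures and assumed modern-Ryzen figures (illustrative).
old_clock_ghz, old_cores, old_ipc = 2.4, 1, 1.0   # single-core, 2.4 GHz
new_clock_ghz, new_cores, new_ipc = 4.4, 8, 2.2   # assumed 7nm Ryzen 7

speedup = (new_clock_ghz / old_clock_ghz) * (new_cores / old_cores) * (new_ipc / old_ipc)
print(f"aggregate speedup ≈ {speedup:.0f}x")  # ≈ 32x
```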

You wouldn't get access to an ASIC fab just to make a CPU. Fill it with tensor cores or FFT cores, plus a big memory bus. Put custom image-processing algorithms on it. Then it will be competitive with modern general silicon despite the node handicap.

Your suggestion was more what I was thinking, perhaps something more limited in scope than a general processor. An application that comes to mind is an intentionally simple and auditable device for e2e encryption.

My understanding is that architectural improvements (i.e. new approaches to detect more parts in code that can be evaluated at the same time, and then do so) need more transistors, ergo a smaller process.

(Jim Keller explains in this interview how CPU designers are making use of the transistor budget: https://youtu.be/Nb2tebYAaOA)

My first reaction was that it could be a recruitment drive of sorts to help build up their hardware team. Apple have been really smart in the last decade in buying up really good chip development teams and that is experience that is really hard to find.

> Apple have been really smart in the last decade in buying up really good chip development teams and that is experience that is really hard to find.

They can outsource silicon development. Should not be a problem with their money.

In comparison to dotcom development teams, semi engineering teams are super cheap. In Taiwan, a good microelectronics PhD starting salary is USD $50k-60k...

Opportunity cost, though.

Experienced teams who have designed high performance microarchitectures aren't common, because there just isn't that much of that work done.

And when you're eventually going to spend $$$$ on the entire process, even a 1% optimization on the front end (or more importantly, a reduction of failure risk from experience!) is invaluable.

Does Google have a silicon team?

As of a year and a half ago they had over 300 people across Google working on silicon (RTL, verification, PD, etc.) that I'm aware of.

They created TPU's right? So somewhere inside the alphabet group they must have some expertise

It wouldn't surprise me. They've been designing custom hardware for some time. Look at the Pluto switch and the "can we make something even more high performance" or "we can make it simpler, cheaper, more specialized and save some watts" (which in turn saves on power for computing and power for cooling costs).

At the scale that Google is at, it really wouldn't surprise me if they were working on their own silicon to solve the problems that exist at that scale.

Pluto is merchant silicon in a box, like all their other switches.

> Regularly upgrading network fabrics with the latest generation of commodity switch silicon allows us to deliver exponential growth in bandwidth capacity in a cost-effective manner.


I wasn't intending to claim that Pluto is custom silicon but rather that Pluto is an example of Google looking for simplicity, more (compute) power, and less (electrical) power.

The next step in that set of goals for their data center would be custom silicon where merchant silicon doesn't provide the right combination.

Manu Gulati - a very popular Silicon Engineer who worked at Apple left for Google. (He now works at Nuvia with other ex-Apple stalwarts)

They have Norman Jouppi, he apparently was involved in the TPU design.

What are TPUs and quantum computers made of? ;)

Joel Spolsky calls this "Commoditizing your complement".

I'm guessing GP was clearly referencing that phrase, not unaware of it.

I mean someone else said the software to design chips is 5 figures per seat so probably a multi billion dollar industry.

My guess would be a cloud based chip design software is in the works. This would accelerate AI quite a bit I should think?

More like 6 figures per seat...

It's actually a big part of why some silicon companies distribute themselves around timezones - so someone in Texas can fire up the software immediately when someone in the UK finishes work.

It's not unusual to see an 'all engineering' email reminding you to close rather than minimize the software when you go to meetings.

I thought most EDA companies put a stop to that with geographic licensing restrictions.

And this is the reason some companies have shift work...

But that all means nothing for companies who buy Virtuoso copies from guys trading WaReZ in pedestrian underpasses in BJ.

A number of quite reputable SoC brands here in the PRD are known to be based on 100% pirated EDAs.

This is not a critique, but a call to think about that a bit.

In China, you can spin up a microelectronics startup for under $1m; in the USA, you will spend $1m just to buy a minimal EDA toolchain for the business.

Allwinner famously started with just $1m in capital, when disgruntled engineers from Actions decided to start their own business.

What is PRD? I’m guessing a country acronym?

Pearl River Delta

Absolutely not. "Apple Silicon" is branding for their own processor. This is a road to an opensource ecosystem in HW design.

That's the same thing parent said, so "Absolutely yes".

TFA doesn't really summarize what's available very well, so let me take a shot from a technical perspective:

- 130 nm process built with Skywater foundry

- Skywater Platform Development Kit (PDK) is currently digital-only

- 40 projects will be selected for fabrication

- 10 mm^2 area available per project

- Will use a standard harness with a RISC-V core and RAM

- Each project will get back ~100 ICs

- All projects must be open source via Git

- Verilog and gate-level layout

I'm curious to see how aggressive designs get with analog components, given that they can be laid out in the GDS, but the PDK doesn't support it yet.

I'm thinking 10mm^2 so kinda 3.3mm x 3.3mm. In 130nm

(Remember 130nm gave us the late-model Pentium IIIs, the second-generation P4s and some Athlons, though all of these had a bigger die size)

I'm thinking you could have some low power or very specific ICs, where these would shine as opposed to a generic FPGA solution

> 10mm^2 so kinda 3.3mm x 3.3mm

3.16mm x 3.16mm?

Square roots in my head are hard ;)
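For the record (assuming a square die):

```python
import math

side_mm = math.sqrt(10)  # side length of a square 10 mm^2 die
print(f"{side_mm:.2f} mm")  # → 3.16 mm
```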

From a hobbyist and preservation perspective, it would be cool if this could be used to produce some form of the Apollo 68080 core to revive the 68k architecture a little bit, and build out the Debian m68k port [0][1]. The last "big" 68k chips were produced in 1995 (that would be a 350nm process?) so this could be hugely improved on 130nm. The 68080 core is currently implemented in FPGAs only and is already the fastest 68k hardware out there. With a real chip, people could continue upgrading their Amigas and Ataris.

[0]: http://www.apollo-core.com/ I can't easily find how "open source" it is though, but it's free to download.

[1]: https://news.ycombinator.com/item?id=23668057

The Apollo core is not open source.

There are some other pretty nice and fully featured 68k cores that are open source (TG68, WF68K30L etc.) but none that is really close to the features and performance of the Apollo 68080.

Note that there have been IP theft and other shadiness allegations from ex-members of the Apollo team.

If you do a little research you'll find out that there's plenty of "stay away" and "can't believe they haven't been sued into oblivion yet" indicators and all sorts of misleading claims and marketing.

My summary would be that it's a tightly controlled, closed project led by questionable people with well-documented histories of questionable practices, including ignoring copyrights, distributing infringing software, deleting critical posts from their forums and putting out misleading information.

It seems I have a little bit more to learn on this project. Are there any sources I could read?

Ah that's a shame. I suppose this could be used to perform a revival of 68k without the Apollo core, but it's a shame that the engineering effort already there would not be available. Maybe this would be an incentive for them to open source it, but yeah.

The PowerPC 7457 was built on a 130nm process, used in the AmigaOne XE as well as the Apple G4 machines. That's probably about as good as you will get for a community-led chip fabrication. You could probably get it up to over 1GHz if you made a 68k at this size.

A modern FPGA is probably the better way to go for this kind of thing. I doubt this free fab includes things like die testing and packaging. That's an expensive process, so someone would need to front some money to actually get the testing and packaging done for enough chips to make the cost actually worth it.

It would be much cheaper to design an interposer board that could plug into a motherboard to take the place of the original 68k. This would also allow you to continuously upgrade the processor without requiring a whole new fab to take place.

>You could probably get it up to over 1GHz if you made a 68k at this size

That would be a fun upgrade if I had an old 68k Mac in good condition. I've thought occasionally about what if Motorola had had the resources to continue developing 68k like x86.

Coldfire is what they would have, and did, make. By dropping a few problematic instructions they were able to use a more RISC-like approach and produce a nicer core. Most code just needs a recompile, or you can trap the old instructions and work around them.

I have a "Firebee", an Atari ST-like machine built around a 264 MHz Coldfire, and it's quite nice.

My understanding is that Coldfire got used in a lot of networking hardware, due in part to its network order endianness, and partially because Freescale put network hardware support into some of the cores.

But there is no continuing demand for Coldfire, really, so it has stalled, NXP never continued on with it and is going the ARM route now, like everyone else.

How fast can those FPGAs be clocked? Is it better to have a free but small run of 68k ASICs which might have similar performance, or the potential to run a soft core on off-the-shelf FPGAs, at much higher cost per unit, but with the ability to rapidly iterate on the design?

The soft core approach has many advantages, but FPGA companies have dropped the ball on single-unit (hobbyist) sales.

Chips that cost $1000 from a distributor cost 1/10th to 1/100th the price when you have a relationship with the manufacturer, mostly because distributors can't sell them very quickly and have to keep a ton of stock to have the SKUs you want.

On a modern FPGA, processor clocks of 200-300 MHz are possible to get with designs that aren't huge.

http://www.myirtech.com/list.asp?id=630 looks nice for something under $300 with 4GB RAM, but I think the embedded quad-core ARM is still faster than anything you can 'emulate' on the FPGA.

The Apollo project here would be particularly suitable as they have already iterated on the design using FPGAs. The chip is already working as an FPGA and bringing tangible improvements: I'm assuming a 130nm ASIC version would be even better.

I'm not sure that assumption is necessarily right. As a general guide, on a cheap (sub $200) modern FPGA I can clock an RV64 core at 50-100 MHz. As you spend more on the FPGA, you can get higher clock rates and/or more cores. Also it should be possible to clock 32 bit cores higher (perhaps much higher) because there will be fewer data paths for internal routing to skew. On the other hand, modern RISC architectures are designed for this, whereas old 68k architectures may not be.

I had no problem running PicoRV32 at 50 MHz (maybe higher... 75 MHz? can't recall; at that point I had other issues that might not have been CPU related) on an Artix-7 35T.

Honestly instead of chasing after new 68k silicon it'd be better to just emulate on a modern processor. Not the same romance, I know....but

What happened to freescale/NXP "Coldfire"?

ColdFire is still around but is also not fully binary compatible with 68k. There have been attempts at making Amiga accelerator cards using Coldfires, but I don't think I've ever seen one that was fully finished.

It's not fully binary compatible but pretty damn close. People working on the Firebee were able to make it run pretty smoothly; there are some things you can do to trap the old instructions and rewrite them. It's never going to work for games and the like, but games and the like from that era have all sorts of other video-hardware-specific dependencies that are even more difficult to satisfy.

It really is "spiritually" a 68k series processor, just with some cleaning up. I like it.

I am really not an expert in 68k but the Coldfire does not appear to be fully compatible with the old 68ks used in old Macs and Amigas, and Googling around it doesn't appear to have had much uptake if any. It's not being made anymore either.

Coldfires are definitely still in production.

FireBee (http://firebee.org), which is a modern clone of Atari ST, uses it.

No new Coldfire processors made in years, unfortunately. Freescale/NXP seems to be just leaving it.

I guess everyone is just using ARM now.

You would have problems with licensing the 68k ISA.

I believe freescale currently owns the architecture, and still manufactures some 68k microcontroller cores.

> I believe freescale currently owns the architecture,

Owns it how? 68060-- the last of 68k's designs-- was released in 1994. Any patents should now be expired.

> > I believe freescale currently owns the architecture,

> Owns it how? 68060-- the last of 68k's designs-- was released in 1994. Any patents should now be expired.

Sure, patents wouldn't be a barrier to clone the design and create an equivalent using the same patented ideas, but copyright still prevents you from copying the design, and will prevent copying significant parts of the design as well.

In other words: you can rearchitect it from scratch, but you probably can’t extract the die.

"Probably" doesn't seem strong enough. I mean, the only thing that would give you any hope of avoiding being sued out of existence leaving nothing but a small greasy spot behind would be obscurity and commercial irrelevance.

Interesting thing to consider. I wonder how actively people want to protect 68k, as not even Freescale/NXP seems to use it anymore.

Shouldn't that already be problematic for the 68k projects in hardware through FPGAs? Apollo already does it and sells hardware, and the MiSTER project also does it by releasing FPGA designs for e.g. the Sega Genesis which has a 68k processor. Is it a different story if you embed 68k in an ASIC?

Texas Instruments still sells graphing calculators with 68k processors (TI-89 series, most commonly)

All the patents of all the non-Coldfire cores have expired, which is the mechanism for enforcing ownership over ISAs.

68060 - 600 nm

I believe that a lot of people here might be interested in "Minimal Fab", developed by a consortium of Japanese entities.

These are kiosk-sized machines that a company can use to set up a fab with a few million dollars. Any individual can then design a chip and have it fabricated very (as in "I want to make a chip for fun") affordably.

I was not able to find a ton of information on this, but the 190nm process was supposedly ready last year and there were plans to go below this. The wafers are 12mm in diameter (so basically, one wafer -> one chip) and the clean room is just a small chamber inside the photolithography machine. There are also no masks involved, just direct "drawing".

My advice to anyone who's looking for a pathway into open source silicon is to look into e-beam lithography. Effectively, e-beam lithography involves using a scanning electron microscope to expose a resist on silicon. This process is normally considered too slow for industrial production, but its simplicity and size make it ideal for prototyping and photomask production.

The simplistic explanation for why this works is that electron beams can be easily focused using magnetic lenses into a beam that reaches the nanometer level.

These beams can then be deflected and controlled electronically, which is what makes it possible to effectively make a CPU from a CAD file.

Furthermore, it's easy to see how the complexity of photolithography goes up exponentially as we scale down.

Therefore I believe it makes sense to abandon the concept of photolithography entirely if we want open source silicon. I believe this approach offers something similar to the sort of economics that enable 3D printers to become localized centers of automated manufacturing.

I should also mention that commercial e-beam machines are pretty expensive (something like $1M), but I don't think it would be that difficult to engineer one for a mere fraction of that price.

I suggest you take a look at how easy maskless photolithography is: http://sam.zeloof.xyz/

Theoretically it should be feasible to fab 350 nm without double-patterning by optimizing a simple immersion DLP/DMD i-line stepper.

I think ArF immersion with double-patterning should be able to do maskless 90 nm.
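Those estimates are in the ballpark of what the Rayleigh criterion (CD = k1 · λ / NA) gives; the k1 and NA values below are rule-of-thumb assumptions, not real process data:

```python
def min_feature_nm(wavelength_nm, na, k1=0.35):
    """Rayleigh criterion for single-exposure resolution: CD = k1 * λ / NA."""
    return k1 * wavelength_nm / na

# i-line (365 nm) with a modest dry lens (NA assumed 0.35):
print(min_feature_nm(365, na=0.35))           # ≈ 365 nm
# ArF immersion (193 nm, NA assumed 1.35, aggressive k1):
print(min_feature_nm(193, na=1.35, k1=0.30))  # ≈ 43 nm per exposure
```

Double patterning then roughly halves the effective pitch from the per-exposure number.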

fixing the url http://sam.zeloof.xyz/

> I should also mention that commercial E-beam machines are pretty expensive (something like 1-Mil) but that I dont think it would be that difficult to engineer one for a mere fraction of that price.

I'm not sure where it was, but I remember seeing a project where someone made a rudimentary homebrew electron microscope by chemically etching the tungsten filament from a light bulb (to get the tip sharp enough) and attaching it to a piezo buzzer that was scored to separate it into four quadrants. The filament could be moved by applying various combinations of voltage to the piezo quadrants.

I didn't find the one I was thinking of (which I think was ca. 2002 and so maybe just vanished by now), but search results suggest that variations of this have been done by several people.

That sounds like a pretty standard STM (scanning tunneling microscope). They had a couple of those at the university I went to. We had to cut the tips ourselves with pliers, which was a quite annoying process as you couldn't see if they were sharp enough by eye (the tips were supposed to be only a few atoms thick). They seemed pretty cheap to construct, but they are not the same thing as a scanning electron microscope.

As far as democratizing hardware goes, I wonder if silicon is the wrong place to start.

Decades ago computers used magnetic core memory. Those things operate on a macroscopic/classical physics level. You can make a core memory by hand if you buy the ferrous toroid first. But moreover, you mention 3d printers — it’s probably possible to manufacture the toroid on a sub-10k machine these days, be it a 3d printer or CNC machine. Some of these techniques generalize to multiple materials, meaning you could automate both the manufacturing of the toroid and the wires connecting them (and the assembly) and have an actual open-source, easily fabricated memory.

One thing not a lot of people know is that you can create clock-triggered combinatorial logic out of core memories just by routing the wires differently. So you’ve got your whole computation + volatile memory + non-volatile memory built on the same process using just two materials and at a macroscopic scale (think: millimeters). That sounds easier to bring up than silicon.
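The trick relies on coincident-current (half-select) switching: a core only flips when the summed drive current through it exceeds its threshold, which already behaves like an AND gate. A toy model (all current values here are illustrative, not real drive currents):

```python
# Toy model of a magnetic core as a threshold element. Each drive wire
# alone carries a half-select current below the switching threshold;
# only two wires together flip the core.
THRESHOLD = 1.0   # full-select switching current (arbitrary units)
HALF = 0.6        # one wire alone stays below threshold

def core_and(a, b):
    """Route two drive wires through one core: it flips only if both are on."""
    drive = HALF * a + HALF * b
    return 1 if drive > THRESHOLD else 0

truth_table = [core_and(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(truth_table)  # → [0, 0, 0, 1], i.e. AND
```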

Yeah, macro-scale has its limitations (speed; power draw!), but it’s still enough to enable plenty of applications, and with room to scale it as the tech gets better.

How much has the power efficiency improved between 130nm and 7nm? Is it plausible to get better performance/watt for a custom chip on 130nm vs a software application running on a 7nm chip? I get that hardware has other benefits, but I'm just wondering where the cost/benefit starts to make sense for accelerators.

> Is it plausible to get better performance/watt for a custom chip on 130nm vs a software application running on a 7nm chip?

This very, very much depends on what the algorithm is (integer or FP? how data dependent?), but I would say no for almost all interesting cases.

The only exception would be if you're doing a "mixed signal" chip where some of the processing is inherently analogue and you can save power compared to having to do it with a group of separate chips.

Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.

An open source WiFi chip would be super cool. I wonder how easy it would be to take the FPGA code from openwifi[0] and combine it with a radio on the same chip?

[0] https://github.com/open-sdr/openwifi

The problem is that analogue IC design is a field that even digital IC design people regard as black magic. It's clearly possible for that to happen but the set of people who have the skills to do it is very narrow and most of them are probably prevented from doing it in their spare time by their employment agreements.

I wonder how many "test chips" Google will let a non-expert team do to get it right? And whether they provide any "bringup" support?

A big part of the "black magic" really comes down to insufficient tooling. And at least in hardware, insufficient tooling comes down to the fact that everything is closed source and trade secret, and teams pretty much refuse to share knowledge with each other.

An open source community would go a long way to fixing an issue like this, and these "black magic" projects are actually a fantastic place for the open source world to get started, because it's an area where there's a ton of room for improvement over the status quo.

They're only allowing parts that stay within the bounds of the PDK (which only allows digital designs) for now.

Even if you could technically make it work, I'd be very nervous around the legalities of that. Or is the Wi-Fi spectrum so unregulated that you can run without any certification at all?

Certification has to do with the power of the signal and the frequency. Licensing is not required in some frequency bands, like the 2.4 GHz band used by WiFi.

WiFi equipment (and pretty much every other radio) requires certification in order to be sold in every country I am aware of. WiFi doesn't require a license to operate, but that doesn't mean you can just use any hardware you like (though I think there may be exceptions for hardware you build yourself, at least in the US).

> Another exception might be low leakage construction, because that gets worse as the process gets smaller. This is only valuable if your chip is off almost all of the time and you want to squeeze down exactly how many nanoamps "off" actually consumes.

No, you actually have more leakage at older nodes, what changes is the ratio of current spent on leakage vs. current spent doing something useful.

Doesn't leakage increase again below 22nm because of tunneling losses, though?

Of course, the lower gate capacitance allows for lower switching losses. But adiabatic computing could theoretically recover switching losses, allowing for higher efficiency at older nodes. That can be approached by using an oscillating power supply for instance, to recover charges. If someone was to design something like this for this run, it could be very interesting.
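A rough sketch of that comparison: abruptly charging a node dissipates about ½CV² per transition, while ramp-charging it over a time T >> RC dissipates roughly (RC/T)·CV², which is what an oscillating, charge-recovering supply exploits. The component values below are invented for illustration:

```python
# Illustrative switching-loss comparison (made-up component values).
C = 1e-15   # 1 fF node capacitance
V = 1.2     # supply volts
R = 1e3     # 1 kΩ effective channel resistance
T = 1e-9    # 1 ns adiabatic ramp time (T >> RC = 1 ps)

e_conventional = 0.5 * C * V**2        # abrupt charge: 7.20e-16 J
e_adiabatic = (R * C / T) * C * V**2   # slow ramp through R: 1.44e-18 J
print(f"conventional: {e_conventional:.2e} J")
print(f"adiabatic:    {e_adiabatic:.2e} J")
print(f"reduction ≈ {e_conventional / e_adiabatic:.0f}x")  # ≈ 500x
```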

Now I'm wondering if this isn't some covert recruitment operation by Google: they will likely comb through applications, select the most promising ones, and the designers will get job offers :)

> Doesn't leakage increase again below 22nm because of tunneling losses, though?

You have tunnelling losses on bigger nodes as well, they are just not that dominant. Dielectrics got better as nodes shrank, and this is the reason FinFETs became practical (which switch faster, and more reliably on smaller nodes, but leak worse.)

You won't be able to profitably mine Bitcoin on 130nm ASICs (just as an example)

130nm is almost 20 years old at this point. You can do amazing things with this process but saving power is probably not one of them.

But as an example, you WOULD be able to profitably mine bitcoin on 130nm ASICs if all the rest of the world had was CPUs/GPUs/FPGAs, which was more what the grandparent post was asking: 130nm hardware implementations can be much, much faster and/or energy efficient than a 7nm general-purpose chip which simulates the algorithm.

I wasn't able to find great specifications for the 130nm process, but it looks like the difference in transistor size and efficiency is somewhere around 100x. For specialized applications, going from a CPU to an ASIC is usually around a 1000x performance gain.

So yes, for specific tasks like crypto operations or custom networking, you should be able to make a 130nm ASIC that is going to outperform a 7nm Ryzen. You are not going to be able to make a CPU core that's going to outperform a Ryzen however.

130nm was good enough for 2GHz, 30W CPUs back in the day. We are talking enough performance to almost decode 1080p@30 h264 in software.

I suspect, however, that the gap between designs that are realizable for amateurs with limited training, and the ones that are realizable for professional teams is wider than in software.

So somebody like me, who did two standard cell based ASICs 25 years ago, probably would have to add a sizable safety margin to produce a reliable chip, and would achieve nowhere near the performance of a pro team at the time.

I would definitely be rather interested in learning how to design some chips with feature sizes large enough for power handling... I'd love to hear about this as well. This sounds like a clever way to commoditize hardware design, like when printing PCBs became affordable.

It depends on your application, but if you have a relatively narrow and complex one, I would say definitely yes.

I should note there's an open source ASIC toolchain - OpenROAD[1][2]. I wonder if these can be integrated. You can also use SymbiFlow to run your prototype on an FPGA[3][4].

[1] https://theopenroadproject.org/

[2] https://github.com/The-OpenROAD-Project/OpenROAD

[3] https://symbiflow.github.io/

[4] https://github.com/SymbiFlow

Spot on. Both are discussed by Tim in the video as part of the solution stack.

Because apparently no one remembers the other "free" fab service.


Previously MOSIS would run a select few student/research designs to go along with the commercial MPW runs, frequently on pretty modern fabs. I'm not really sure how much they still run.

(oh here is the MOSIS/TSMC runs for this year https://www.mosis.com/db/pubf/fsched?ORG=TSMC)

But this one is open to hobbyists. If you're doing a graduate degree in ASIC design and your group doesn't have the funds to do simple fab runs, you're probably in a somewhat questionable program to begin with.
