borramakot's comments | Hacker News

You say "formerly Catapult"- is Catapult no longer being developed?


I don't work for Microsoft or have any real knowledge here, but my understanding (from the grapevine) is that the team was essentially given a much larger purview. So in a sense, Catapult is evolving?


LZ4 seems really nice for data with a lot of repetition, but not having any entropy encoder can really kill the compression ratio for some data. I used to default to LZ4 and fall back to gzip when it didn't compress well, but I've been really impressed with zstd. It's not as commonly used, but when interoperability isn't a concern, I'll be using a lot more zstd.
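To illustrate the point about entropy coding, here's a small stdlib-only sketch using zlib (the DEFLATE codec behind gzip) on two synthetic inputs: one with skewed symbol frequencies but little long-range repetition (where an LZ-only codec like LZ4 gains little but an entropy coder still helps), and one that's purely repetitive. The data shapes and sizes are invented for illustration.

```python
import random
import zlib

random.seed(0)

# Skewed byte frequencies, few long repeats: LZ matching alone finds
# little to exploit, but the entropy-coding stage (Huffman in DEFLATE,
# FSE in zstd) still squeezes out the symbol-level redundancy.
skewed = bytes(random.choices(range(256), weights=[50] * 8 + [1] * 248, k=100_000))

# Highly repetitive data: any LZ-family codec crushes this.
repetitive = b"abcdefgh" * 12_500  # also 100,000 bytes

ratios = {
    name: len(zlib.compress(data, 9)) / len(data)
    for name, data in [("skewed", skewed), ("repetitive", repetitive)]
}
print(ratios)
```

The repetitive input compresses to a tiny fraction of its size, while the skewed input only shrinks as far as its symbol entropy allows; an LZ-only codec would leave the skewed case nearly uncompressed.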


So, the main result of designing a CPU is a series of masks that essentially indicate where to put what. For example, in this layer, inject boron anywhere the mask doesn't block. The masks aren't wafer sized- they are pretty small, and a machine steps the mask from position to position across the wafer to re-use it. But, at least when I was working on this, some masks would be larger than an individual square (die)- maybe the mask could expose a 2x2 grid at a time. In that case, near the wafer's edge, one application of the mask might get you one complete die and three dies hanging off the edge.
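The edge effect above can be sketched with a rough die-per-wafer count: walk a grid of square dies across a circular wafer and classify each die as fully inside or clipped by the edge. The wafer diameter and die size below are illustrative, not real process numbers.

```python
import math

WAFER_D = 300.0   # wafer diameter in mm (illustrative)
DIE = 10.0        # square die edge in mm (illustrative)

r = WAFER_D / 2
full, partial = 0, 0
n = int(WAFER_D // DIE) + 2
for i in range(-n, n):
    for j in range(-n, n):
        # The four corners of die (i, j) on a grid centered at the origin.
        corners = [(i * DIE + dx, j * DIE + dy)
                   for dx in (0, DIE) for dy in (0, DIE)]
        inside = [math.hypot(x, y) <= r for x, y in corners]
        if all(inside):
            full += 1       # whole die lands on the wafer
        elif any(inside):
            partial += 1    # die is clipped by the wafer edge
print(full, partial)
```

Most dies land fully on the wafer, but a ring of partial dies around the circumference is wasted; a multi-die mask shot near the edge can mix both kinds in one exposure.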


Awesome explanation, thanks! Kinda like dual-cavity injection molds I guess


How so? It seems like lots of businesses run successfully on that model for indefinite periods.


Just to throw in one more complication, I'll assert that the only benefits of FPGAs over ASICs are one time costs and time to market. Those are big benefits, but almost by definition, they aren't as important for workloads that are large scale and stable. So, if you do have a workload that's an excellent match for FPGAs, and if that workload will have lots of long term volume, you should make an ASIC for it.

So, for FPGAs to be the next big thing in HPC, you'd need to find a class of workloads that benefit from the FPGA architecture, for long enough and with high enough volume to be worth the work to move over, and are also unstable or low volume enough that it's not worth making them their own chip.
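The one-time-cost tradeoff above is just a break-even calculation: the ASIC trades a large non-recurring engineering (NRE) cost for a lower unit cost. Every dollar figure below is invented for illustration.

```python
# Invented example costs: ASICs pay a big one-time mask/tapeout bill
# but are cheap per unit; FPGAs have no tapeout but cost more per board.
asic_nre, asic_unit = 2_000_000, 20
fpga_nre, fpga_unit = 0, 500

def total_cost(nre, unit, volume):
    return nre + unit * volume

# Smallest volume at which the ASIC route becomes cheaper overall.
breakeven = next(v for v in range(1, 10_000_000)
                 if total_cost(asic_nre, asic_unit, v)
                 < total_cost(fpga_nre, fpga_unit, v))
print(breakeven)  # 4167
```

With these made-up numbers the ASIC wins past a few thousand units, which is why stable, high-volume workloads tend to migrate off FPGAs.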


That's not entirely true - the flexibility can have its own value. Unlike with an ASIC, you can handle multiple workloads or update flows.

For example, timing protocols on backbone equipment handling 100-400 Gbps: depending on how it's configured, you may need to do different things. Additionally, you probably don't want to replace six-figure hardware every generation.

Another example is test equipment where you can't run the tests in parallel. A single piece of hardware can be far more portable and cost-effective.


I may not have said it well, but I broadly agree with you. If a workload needs high performance but not consistently (e.g. because you're doing serial tests by swapping bitstreams), predictably (e.g. because you need flexibility for network stuff you can't predict at design time), or with enough volume (e.g. costs in the low millions are prohibitive), an ASIC isn't the right solution.

But my point is that for FPGAs to come to prominence as a major computation paradigm, it probably won't be because they outperform GPUs on one really big workload like bitcoin or genetic analysis. It'll have to be a moderately large number of medium-scale workloads.


There is also glue logic between different interfaces that can be satisfied with FPGAs or CPLDs.


> I'll assert that the only benefits of FPGAs over ASICs are one time costs and time to market.

There's one more big one: the ability to update the logic in the field.


I agree that's a really striking and suggestive result, but keep in mind the sub-sample size there is four passing on the private side and six failing on the public side. My guess is there's a real and meaningful effect there, but it may not be to the degree that sentence without context would suggest.


I've wondered for a while- are there heuristics on the income a streamer can make per viewer-hour?

Edit: I'd googled this before, but apparently I found the right way to ask: this site says from $0.01 to $1 per viewer-hour. https://wallethacks.com/how-much-do-twitch-streamers-make/#:....


For what it's worth, that's just income directly from sponsored content. The bulk of livable streaming income (outside of huge endorsements) comes from monthly subscriptions, so conversion rate matters as much as total viewership.
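As a back-of-envelope comparison of the two income channels, here's a toy model. Every number below (viewer counts, hours, conversion rate, revenue split) is a made-up assumption, not data from the linked page.

```python
# Assumed, illustrative figures for a mid-size streamer.
avg_concurrent_viewers = 200
streamed_hours_per_month = 160

# The cited $0.01-$1 per viewer-hour range applied to watch time.
viewer_hours = avg_concurrent_viewers * streamed_hours_per_month
per_hour_low = viewer_hours * 0.01
per_hour_high = viewer_hours * 1.00

# Subscription income: an assumed fraction of the monthly unique
# audience subscribes at $4.99, with the platform keeping about half.
unique_monthly_viewers = 20_000
conversion_rate = 0.02
sub_income = unique_monthly_viewers * conversion_rate * 4.99 * 0.5
print(per_hour_low, per_hour_high, sub_income)
```

The spread between the low and high per-viewer-hour estimates is two orders of magnitude, which is why conversion to subscriptions matters as much as raw viewership.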


Depending on your definition of cheap, fab access might be pretty cheap right now for old technology nodes, which have totally reasonable performance if you have an architectural advantage.

This article suggests mask tapeout costs are under $1 million in older nodes, sometimes well under. If you have an architectural advantage in a problem domain with tens of millions or more in costs, a simple ASIC can be very worthwhile. That architectural advantage might be hard to find, especially when problem domains aren't fixed for long periods of time (e.g. how many ML accelerators only really work well for dense convolutions?), but I suspect too few companies are making custom chips, rather than too many.

https://www.electronicdesign.com/technologies/embedded-revol...


Why pay for fab? Fabrice Bellard wrote a new kind of QEMU last year that's tiny enough to boot operating systems in web browsers: https://bellard.org/tinyemu/

Intel also taught us last year that 4 KB of code is all it takes to decode the entire x86 ISA (i.e. 1977-2020) https://github.com/jart/cosmopolitan/blob/d51409c/third_part... Thanks Mark Charney. https://github.com/intelxed/xed


What does that have to do with accelerators?


I don't know much about genomics, but I think there are at least FPGA based accelerators in production, for example, Illumina's DRAGEN.

https://www.illumina.com/products/by-type/informatics-produc...


I've done a startup, a couple of midsize companies (~4k engineers), and AWS. Just my 2c, a lot of the advice I'm seeing here applies more to the midsize companies than at least my corner of AWS.

In my Amazon experience:

* Some very high level project requirements would come from above (e.g. after this date, internal technology X is being deprecated, so you should have a really good reason to put out a project with X).

* Otherwise, decisions were mostly made at a low level, documented, debated with the wider team for an hour, then implemented. This was a little more structured than at the startup, but the main difference was that documentation and debate happened before implementation, rather than after, as at the startup.

* Project managers were somewhat active with the team, but a lot of the features we worked on came from the engineers watching what was used, forum requests, or customer requests through other channels (e.g. conferences).

* There was a focus on getting products out quickly, but tech debt/tests/reliability was a much bigger focus than anywhere else I've been.

* The team was fairly small, and encouraged to make heavy use of other teams' internal tooling/native AWS tools for anything that didn't really need to be custom. Interactions with those teams were pretty straightforward and mostly supportive- "We're using your service to do Y, and would like to do Z too, but that doesn't seem possible without some tweaks to your API/service, is that something you can put in the backlog to investigate?"

* An individual team could be quick to change, but the organization as a whole has a lot of cultural momentum in the way things are done, and it's not clear who to talk to to make recommendations. For example, at the startup, I could go to the CEO and express concerns about the newly restrictive information security policy. At Amazon, I'm probably not going to email Jeff Bezos and suggest six-pagers be made available in advance of meetings.

* Transferring teams within Amazon is mostly extremely easy.

* Conversely, Conway's law applies hard in AWS- it didn't seem straightforward to offer products or features that weren't obviously under one team's purview without forming a new team.

