I recently finished eight years at a place where everyone used multicast every day. It consistently worked very well (except for the time when the networks team just decided one of my groups was against policy and firewalled it without warning).
But this was because the IT people put effort into making it work well. They knew we needed multicast, so they made sure multicast worked. I have no idea what that involved, but presumably it meant buying switches that could handle multicast reliably, then configuring them properly, and then doing whatever host-level hardware selection and configuration was required.
In a previous job, we tried to use multicast without having done any groundwork. We just opened sockets and started sending. It did not go so well: fine at first, but then packets started to go missing, and we spent days debugging and finding obscure errors in our firewall config. In the end we did get it working, but I wouldn't do it again that way. Multicast is a commitment, and we weren't ready to make it.
Yep. The main issue is that multicast is so sparsely used that you can go through most of a career in networking with minimal exposure to it beyond a particular peer link. Once you scale support to multi-hop, institutional knowledge becomes critical, because individual knowledge is so spotty.
Nothing like the 180-degree turn of having designs signed off by multiple parties and then getting suspended for having them built, likely by the same parties.
How does this instance of Computed know that it depends on x? Does it parse the bytecode for the lambda? Does it call the lambda and observe which signals get accessed?
In my homebrew signal framework, which emerged in the middle of a complicated web dashboard, this would look like:
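Something like the following, heavily simplified (the names are illustrative rather than the exact code, and a real version also has to clear stale dependencies on recompute):

```python
# Minimal sketch: a ContextVar records which Computed is currently evaluating,
# so any Signal read during that evaluation can register itself as a dependency.
from contextvars import ContextVar

_active_computation: ContextVar = ContextVar("active_computation", default=None)


class Signal:
    def __init__(self, value):
        self._value = value
        self._subscribers = set()

    def get(self):
        # If a Computed is currently evaluating, remember that it depends on us.
        active = _active_computation.get()
        if active is not None:
            self._subscribers.add(active)
        return self._value

    def set(self, value):
        self._value = value
        for sub in list(self._subscribers):
            sub.invalidate()


class Computed:
    def __init__(self, fn):
        self._fn = fn
        self._cache = None
        self._dirty = True

    def invalidate(self):
        self._dirty = True

    def get(self):
        if self._dirty:
            # Mark this Computed as the active computation while fn runs, so that
            # every Signal.get() inside fn records the dependency automatically.
            token = _active_computation.set(self)
            try:
                self._cache = self._fn()
            finally:
                _active_computation.reset(token)
            self._dirty = False
        return self._cache


x = Signal(1)
doubled = Computed(lambda: x.get() * 2)
print(doubled.get())  # 2
x.set(5)
print(doubled.get())  # 10, recomputed because x.set() invalidated it
```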
I am using the standard Python library `contextvars.ContextVar` as the foundation of my reactivity system's dependency tracking mechanism. In the computation step, when Signals get accessed, I track them as dependencies.
I've used systems that did this (some TypeScript TUI library comes to mind) and was similarly confused. I think what actually happened was that the x function/getter/whatever had some 'magic' in it that let it communicate with `Computed` as a side-effect of `Computed` computing the value.
Too magical for me. I'd rather have something like you described where inputs are explicit, so I don't have to guess about whether the magic will work in any given case.
I think you need a place with cheap rents that is within striking distance of places with very expensive rents. Artists can afford the former, but their customers (literal art buyers, or culture vultures of various other kinds, media execs, journalists, etc) are located in the latter.
Yeah, part of the problem is how Windows does variable substitution before the command line syntax is parsed, and at a glance I don't see any % in that file.
How come integrated graphics is in the CPU, rather than being part of the chipset? For an actual single-chip SoC, I suppose it has to be, but even my Ryzen 5 7600X has graphics. I would have thought resources on the CPU would be at a premium, so you'd put them all towards compute, particularly since integrated graphics doesn't need to be that powerful.
Today's northbridge (i.e. the memory controller) lives on the CPU. GPUs need a powerful memory controller, and between the CPU and the southbridge/chipset, the most powerful memory controller is the one on the CPU itself.
More generally, there isn't really a place for low-performance integrated graphics any more. Southbridge-style chips are made on old, cheap processes, and crippled by poor memory access they probably wouldn't run any modern desktop well.
A second option for the memory would be to put a small amount of local memory on the motherboard along with the chipset, but that would again be slow and still costly, while losing the normal iGPU advantage of unified memory access (UMA), where GPU and CPU share the same data.
High-end Apple M chips still have it beat, I think.
edit: there's a new marketing claim from AMD that it beats the M4 in some configuration by 2.6x: https://www.amd.com/en/developer/resources/technical-article... but that's against a small-memory mode of the M4 Pro. I wonder if there are independently benchmarked numbers of the M4 Max vs the 395 out there.