You refer to it as Wolverine syndrome, but I also remember it with TMNT and Leonardo. In the cartoon version, they made the entire Foot Clan robots to get around it. It's also quite a shock to read the original comics and find out the turtles love beer, and that Leonardo stabs and kills the Shredder (permanently) in something like the first issue.
I read a study a while back that looked into this and analyzed several fatberg deposits. It concluded that there was no trace of biodegradable wipes, which tracks with purchasing patterns: 99.9% of the wipes bought aren't the biodegradable kind. People just don't care and keep flushing wipes, diapers, fats, cat litter, and who knows what else.
That situation is such a mess. Those dissolving wipes actually exist, and actually dissolve.
So what do people do? They buy the cheap, non-dissolving ones and flush them anyway. Then when the pipes inevitably clog, they call a plumber and, when asked, insist they bought the correct ones. So now there's widespread confusion about whether the dissolving ones work at all.
> They’re also not saying how much actual message content they have because the 410GB of heap dumps makes for a bigger headline number.
That's a very important point. I went through one of these massive data dumps recently, and it was literally all cached operating system package updates and routine logs. Nothing of interest at all.
It's easy to cut down the size of a heap dump, and when that isn't done it seems sketchy. Then again, it could have been a 512GB dump that was already pruned, so I could be wrong.
Most of the heap dump will be filled with stuff like java.util.String!blahjava.util.ArrayList!
Though the heap dump would contain whatever messages were in flight at the time. It's obviously less useful if you're just trying to grab messages for a specific person.
Frankly, the most useful part might be any in-memory secret keys, which could help an attacker break deeper into the system.
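For the curious, a quick-and-dirty first pass on a dump like this is just scanning the raw file for long printable-ASCII runs, same idea as the Unix strings tool. A minimal sketch in Java (the file name and length threshold are made up for illustration; real tooling like Eclipse MAT parses the dump properly):

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Poor-man's "strings" for a raw heap dump: print long printable-ASCII runs,
    // which is often enough to spot class names, config values, or stray credentials.
    public class DumpStrings {
        public static void main(String[] args) throws IOException {
            String path = args.length > 0 ? args[0] : "heap.hprof"; // hypothetical file name
            final int minLen = 16; // arbitrary threshold to cut down on noise
            StringBuilder run = new StringBuilder();
            try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(path))) {
                int b;
                while ((b = != -1) {
                    if (b >= 0x20 && b < 0x7f) {
                        run.append((char) b);        // extend the current printable run
                    } else {
                        if (run.length() >= minLen) System.out.println(run);
                        run.setLength(0);            // non-printable byte ends the run
                    }
                }
                if (run.length() >= minLen) System.out.println(run);
            }
        }
    }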
At a US university, I had a large elective class where the professor refused to start until things had "settled down", and he said he was going to add that time to the end to ensure he got his full 50 minutes.
I had a major-related class 10 minutes later, clear across campus, about a mile of walking. That professor was nice about it, but I was the only one coming in late at all.
So I made sure to sit in the front row of the earlier lecture, and left precisely when the class was supposed to end, leaving no doubt I had places to go.
> However, Wayland's intention to explicitly avoid baking specific desktop concepts onto its core protocols make this somewhat of a conflicting design req.
I would say it's slightly worse. Wayland's intention was to explicitly prevent the implementation of those features in the name of security. To implement a protocol with enough flexibility to allow voice control of the general interface would necessitate walking back limitations that were heavily evangelized.
On the other hand, I'm utterly impressed by how much more stable Wayland on GNOME and Plasma has become over the last year or so, to the point that I've switched to it as my primary desktop. They've also been adding protocols like xdg_toplevel_tag_v1 that seemed taboo until recently. I'm optimistic about this current batch of programmers; I think they'll manage to sort out accessibility pretty soon.
Yep, in 2014 Intel's Haswell architecture was a banger. It was one of those occasional node-plus-design intersections that yield a CPU with an unusually long useful lifespan: Haswell was stronger than a typical generation, and the many generations that followed were decidedly 'meh'. In fact, I still run a Haswell i5 in a well-optimized, slightly overclocked retro gaming system (with a more modern SSD and graphics card).
About a year ago I looked into what practical benefits I'd gain by upgrading the CPU and mobo to a more recent (but still used) spec from eBay. Since I use the machine mainly for retro game emulation and virtual pinball, I assessed single-core performance, and no CPU/mobo upgrade looked compelling in real-world terms until at least 2020-ish, which is pretty crazy. Even then, one of the primary benefits would be access to NVMe drives. It reminded me how much Intel under-performed in that stretch and, more broadly, how the end of Moore's Law and Dennard scaling combined around 2010-ish to end the 30+ year 'Golden Era' of scaling. That era gave us computers which often roughly doubled performance across a broad range of applications, in ways you could feel in everyday use, AND at >30% lower prices, every three years or so.
Nowadays an 8% to 15% performance uplift across mainstream applications at the same price is considered good, and people are delighted if the performance gain is >15% OR if the price for the same performance drops >15%. If a generation delivered both >15% more performance AND a >15% lower price, it would be stop-the-presses newsworthy. It's kind of sad how far our expectations have fallen compared to 1995-2005, when >30% more perf at <30% lower price was considered baseline, >50% at <50% was good, and roughly double the perf at around half the price was "great deal, time to upgrade again boys!".
It's funny that, in today's CPUs, floating point divide is so much faster than integer divide.
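If you want to see it for yourself, here's a crude sketch (not a rigorous benchmark: no JMH, no warmup control, and divider behavior varies a lot between microarchitectures, so treat the numbers as illustrative only):

    // Crude comparison of 32-bit integer divide vs double divide throughput.
    // The accumulators are printed so the JIT can't dead-code-eliminate the loops.
    public class DivideBench {
        public static void main(String[] args) {
            final int N = 100_000_000;
            long intAcc = 0;
            long t0 = System.nanoTime();
            for (int i = 1; i <= N; i++) intAcc += 1_000_000_007 / i;   // integer divide
            long t1 = System.nanoTime();
            double fpAcc = 0;
            for (int i = 1; i <= N; i++) fpAcc += 1_000_000_007.0 / i;  // FP divide (plus an int->double convert)
            long t2 = System.nanoTime();
            System.out.printf("int div: %d ms (acc=%d)%n", (t1 - t0) / 1_000_000, intAcc);
            System.out.printf("fp  div: %d ms (acc=%.3f)%n", (t2 - t1) / 1_000_000, fpAcc);
        }
    }

Whether the FP loop actually wins depends on the core; some recent designs have sped their integer dividers up considerably.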