The problem is a near-immediate reversion to pre-WFH/hybrid times. They now have five days to unwind arrangements that may have taken some people years to plan; the family as a whole is probably going to suffer, and one parent's career prospects are probably going to be damaged as a result. A lot of people aren't living near grandma and grandpa right now, which is something they might have considered doing if they knew they were going to be tethered to a desk in an office.
Yeah, that's what the IT department at my company did: installed Zscaler, rolled out a new root cert to Chrome, and then told people to configure the remaining apps they use to trust the organization's root cert.
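If any of those remaining apps are JVM-based, a minimal sketch of what that configuration can look like, assuming the root cert has been exported to a PEM file (the path and alias here are made up):

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public final class CorpTrust {
        /** Builds an SSLContext that trusts the organization's root cert. */
        public static SSLContext withCorporateRoot(String pemPath) throws Exception {
            // Parse the exported root certificate.
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate root;
            try (FileInputStream in = new FileInputStream(pemPath)) {
                root = (X509Certificate) cf.generateCertificate(in);
            }

            // In-memory keystore holding just that root.
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            ks.load(null, null);
            ks.setCertificateEntry("corp-root", root);

            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(ks);

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            return ctx; // hand this to your HTTP client, or set it as the default
        }
    }

In practice you'd more often just import the cert into the JVM's cacerts with keytool; the sketch is only to show what "trust the org's root cert" means at the API level.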
PC builds seem to short-circuit everyone's pricing logic and drive any labor cost down to $0, just because people are willing to do the work themselves for free. Anything above that $0 is then considered overpriced.
There are services that will build a PC for $200. It's entirely valid to ask where the money goes, and the answer is not the labor to put the pieces together. There's no reason to assume OP is being dismissive of that specific cost.
If you're living in the same dysfunctional world I am, then maybe your organization split things into repos that are separately releasable, but are conceptually so strongly coupled that you now need changes across 3 repos to make one change.
Yeah, if you care about 3D acceleration on a Windows guest and aren't doing pcie passthrough, then KVM sure isn't going to do it. There is a driver in the works, but it's not there yet.
edit: my mistake, I was mixing this up with qemu and its lack of paravirtualized 3D support for Windows guests. (It does have a PV 3D Linux driver, though.)
KVM will happily work with real virtual GPU support from every vendor; it's the vendors (except for Intel) that feel the need to artificially limit who is allowed to use these features.
I guess my comments make it sound like I don't appreciate this type of work; I absolutely do. An old friend of mine[1] was responsible for the first 3D support in the VMware SVGA driver, so this is a space I have been following for literally decades at this point.
I just think it should be the objective of vendors to offer actual GPU virtualization first and to support paravirtualization as an optimization in the cases where it is useful or superior and the tradeoffs are acceptable.
Pretty much all of them do, though the platform support varies by hypervisor/guest OS. Paravirtualized (aka non-passthrough) 3D acceleration has been implemented for well over a decade.
However, NVIDIA limits it to datacenter GPUs. And you might need an additional license; I'm not sure about that. In their view it's a product for Citrix and other virtual desktops, not something a normal consumer needs.
Yes and no; you can use GPU partitioning in Hyper-V with consumer cards and Windows 10/11 client on both sides, it’s just annoying to set up, and even then there’s hoops to jump through to get decent performance.
If you don’t need vendor-specific features/drivers, then VMware Workstation (even with Hyper-V enabled) supports proper guest 3D acceleration with some light GPU virtualization, up to DX11 IIRC. It doesn’t see the host’s NVIDIA/AMD/Intel card and doesn’t use that vendor’s drivers, so there’s no datacenter SKU restrictions. (But you are limited to pure DX11 & OpenGL usage, no CUDA etc.)
Yes. Check out a library like zstd-jni. You'll find native libraries inside it. It'll load from the classpath first, and then ask the OS linker to find it.
I'd like to learn how they do it, because the last time I looked at this, the suggested solution was to copy the binaries from the classpath (e.g. the jar) into a temporary folder and then load them from there. It feels icky :)
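For what it's worth, the copy-to-temp pattern being described is only a few lines; a minimal sketch (the resource path is hypothetical and error handling is kept to a minimum):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public final class NativeLoader {
        /** Extracts a native library bundled on the classpath and loads it. */
        public static void loadFromClasspath(String resource) throws Exception {
            // e.g. resource = "/native/linux-x86_64/libfoo.so" (hypothetical path)
            try (InputStream in = NativeLoader.class.getResourceAsStream(resource)) {
                if (in == null) {
                    throw new IllegalStateException("Resource not found: " + resource);
                }
                Path tmp = Files.createTempFile("native-",
                        resource.substring(resource.lastIndexOf('.')));
                tmp.toFile().deleteOnExit(); // best effort; on Windows the file stays
                                             // "in use" until the library is unloaded
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                System.load(tmp.toAbsolutePath().toString());
            }
        }
    }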
Do NOT force the class loader to unload the native library, since
that introduces issues with cleaning up any extant JNA bits
(e.g. Memory) which may still need use of the library before shutdown.
Remove any automatically unpacked native library. Forcing the class
loader to unload it first is only required on Windows, since the
temporary native library is still "in use" and can't be deleted until
the native library is removed from its class loader. Any deferred
execution we might install at this point would prevent the Native
class and its class loader from being GC'd, so we instead force
the native library unload just a little bit prematurely.
Users reported occasional access violation errors during shutdown.
Ah, looking through the docs [1]: you have to use your own ClassLoader (so it can be garbage-collected) and statically link the JNI library, which is then unloaded when the ClassLoader is garbage-collected.
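A rough sketch of the ClassLoader half of that; bindings.jar and com.example.NativeBinding are made-up names, and the native library is only released if and when the GC actually collects the loader and its classes:

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;

    public final class IsolatedBindingDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical: bindings.jar contains com.example.NativeBinding, a class whose
            // static initializer loads the JNI library, tying it to this class loader.
            URL jar = Path.of("bindings.jar").toUri().toURL();
            try (URLClassLoader loader = new URLClassLoader(new URL[] { jar }, null)) {
                Class<?> binding = Class.forName("com.example.NativeBinding", true, loader);
                Method work = binding.getMethod("doWork");
                work.invoke(null); // calls into the native library
            }
            // Once nothing references the loader or its classes, a later GC can collect
            // them, and the JVM unloads the native library that was loaded through them.
            System.gc();
        }
    }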
I have some extremely unwieldy off-heap operations currently implemented in Java (like quicksort for 128-bit records) that would be very nice to offload as FFI calls to a corresponding single-line C++ function.
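For what it's worth, with the java.lang.foreign FFM API (finalized in Java 22) that can be a single downcall without writing any JNI glue. A minimal sketch, assuming a hypothetical sort_u128(void*, size_t) exported from a libsort128.so you build yourself:

    import java.lang.foreign.Arena;
    import java.lang.foreign.FunctionDescriptor;
    import java.lang.foreign.Linker;
    import java.lang.foreign.MemorySegment;
    import java.lang.foreign.SymbolLookup;
    import java.lang.foreign.ValueLayout;
    import java.lang.invoke.MethodHandle;

    public final class Sort128Demo {
        public static void main(String[] args) throws Throwable {
            // Bind the hypothetical void sort_u128(void *base, size_t n).
            Linker linker = Linker.nativeLinker();
            SymbolLookup lib = SymbolLookup.libraryLookup("libsort128.so", Arena.global());
            MethodHandle sortU128 = linker.downcallHandle(
                    lib.find("sort_u128").orElseThrow(),
                    FunctionDescriptor.ofVoid(ValueLayout.ADDRESS, ValueLayout.JAVA_LONG));

            long n = 1_000_000;
            try (Arena arena = Arena.ofConfined()) {
                // One contiguous off-heap buffer of n 16-byte (128-bit) records.
                MemorySegment records = arena.allocate(16 * n, 16);
                // ... fill it, e.g. records.setAtIndex(ValueLayout.JAVA_LONG, i, value) ...
                sortU128.invoke(records, n); // one FFI call into the C++ sort
            }
        }
    }

Whether that actually beats the pure-Java version depends on call overhead versus how unwieldy the Java code really is, of course.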
Because "some inconvenience/unmet requirement" from a language is not an invitation to "throw out the whole platform and your existing code and tooling, and learn/adopt/use an entirely different, single-vendor platform".
Except if we're talking about some college student or hobbyist picking their first language and exploring the language space...
Assuming it is "sort for 128-bit records", that's something C# does really well: writing optimized code with structs / Vector128<T> / pointer arithmetic when really needed, without going through FFI and having to maintain a separate build step and project parts for a different platform.
But even if FFI were needed, such records can commonly be represented by the same struct on both the C# and C++ sides without overhead.
An array of them can be passed as-is as a pointer, or vice versa: a buffer of structs allocated in C/C++ can be wrapped in a Span<Record128> and interact transparently with the rest of the standard library without ever touching unsafe (aside from eventually freeing it, should that be necessary).
Your heart literally gets bigger! Your stroke volume goes up, and you probably have more blood plasma, too. This aids in fueling muscles.
Though, if you Google heart hypertrophy, you'll find a bunch of scary things, because your heart can also get bigger in a bad way rather than a good way.
I'd guess muscle tissue is also somewhat more efficient at taking up oxygen than fat tissue? And maybe slightly better lungs, for runners and such? Though I also don't know the relationship between lung efficiency and heart rate.
If you have links on any of this, it feels like it would be a very fascinating read.
> a very fit and very strong person can wrongly be classified (and in the opposite direction too)
Unfortunately, it usually goes in the bad direction, by quite a bit. BMI under-predicts obesity. You only have to hit 25% body fat as a male to be obese, or 32% as a female.