zten's comments

The problem is a near-immediate reversion to pre-WFH/hybrid times. They now have five days to solve planning problems that may have taken some people years to work out, the family as a whole is probably going to suffer, and one parent's career prospects are probably going to be damaged as a result. A lot of people aren't living near grandma and grandpa right now, which is something they might have considered doing if they knew they were going to be tethered to a desk in an office.

Yeah, that's what the IT department at my company did. Installed Zscaler, rolled out a new root cert to Chrome, and then told people to configure the remaining apps they use to use the organization's root cert.


The workstation equivalent, the RTX 6000 Ada, defaults to 300W. You can get most of the performance of a 4090 by capping the power.


PC builds seem to short-circuit everyone's pricing logic and drive any labor cost down to $0, just because they're willing to do it for free. Anything above that $0 is considered overpriced.


It’s not that they’re willing to do it for free. It’s that they’re doing it for fun. It’s a hobby, not work.

Part of the fun is planning, researching, putting the pieces together, and powering it on.


There are services that will build a PC for $200. It's entirely valid to ask where the money goes, and the answer is not the labor to put the pieces together. There's no reason to assume OP is being dismissive of that specific cost.


If you're living in the same dysfunctional world I am, then maybe your organization split things into repos that are separately releasable, but are conceptually so strongly coupled that you now need to create changes on 3 repos to make a change.


Yeah, if you care about 3D acceleration on a Windows guest and aren't doing PCIe passthrough, then KVM sure isn't going to do it. There is a driver in the works, but it's not there yet.

edit: I made a mistake and got confused in my head with qemu and its lack of paravirtualized 3D support. (It does have a PV 3D Linux driver, though.)


KVM will happily work with real virtual GPU support from every vendor; it's the vendors (except for Intel) that feel the need to artificially limit who is allowed to use these features.


I was mostly hoping qemu would get paravirtualized support some day, because it is leagues ahead of VMware Player in speed. Everyone's hopes are riding on https://github.com/virtio-win/kvm-guest-drivers-windows/pull....


I guess my comments make it sound like I don't appreciate this type of work; I absolutely do. An old friend of mine[1] was responsible for the first 3d support in the vmware svga driver, so this is a space I have been following for literally decades at this point.

I just think it should be the objective of vendors to offer actual GPU virtualization first and to support paravirtualization as an optimization in the cases where it is useful or superior and the tradeoffs are acceptable.

[1] https://scanlime.org/


There has been a driver "in the works" for the past decade. Never coming. MS/Apple do not make it easy anyway.


Do any of the commercial hypervisors do that today?


Pretty much all of them do, though the platform support varies by hypervisor/guest OS. Paravirtualized (aka non-passthrough) 3D acceleration has been implemented for well over a decade.


However, NVIDIA limits it to datacenter GPUs. And you might need an additional license; I'm not sure about that. In their view it's a product for Citrix and other virtual desktops, not something a normal consumer needs.


Yes and no; you can use GPU partitioning in Hyper-V with consumer cards and Windows 10/11 client on both sides, it’s just annoying to set up, and even then there’s hoops to jump through to get decent performance.

If you don’t need vendor-specific features/drivers, then VMware Workstation (even with Hyper-V enabled) supports proper guest 3D acceleration with some light GPU virtualization, up to DX11 IIRC. It doesn’t see the host’s NVIDIA/AMD/Intel card and doesn’t use that vendor’s drivers, so there’s no datacenter SKU restrictions. (But you are limited to pure DX11 & OpenGL usage, no CUDA etc.)


Yes. Check out a library like zstd-jni. You'll find native libraries inside it. It'll load from the classpath first, and then ask the OS linker to find it.


I'd like to learn how they do it. Because the last time I looked at this, the suggested solution was to copy the binaries from the classpath (e.g. the jar) into a temporary folder and then load them from there. It feels icky :)


Yep, you're right, they do exactly that. Apologies for the confusion.

Decompiled class file:

    try {
        var4 = File.createTempFile("libzstd-jni-1.5.0-4", "." + libExtension(), var0);
        var4.deleteOnExit();


This wouldn't work on Windows, because you can't delete a DLL while it's in use.


You might be able to use FILE_FLAG_DELETE_ON_CLOSE, but this would likely require calling the Windows API functions directly.


Couldn't you: extract DLL, load DLL, unload DLL, delete DLL?

Though in the example given, I do see your point now. You'd have to make sure the DLL was unloaded before the delete-on-exit happened.


According to JNA it's not safe to unload the DLL:

https://github.com/java-native-access/jna/blob/40f0a1249b5ad...

  Do NOT force the class loader to unload the native library, since
  that introduces issues with cleaning up any extant JNA bits
  (e.g. Memory) which may still need use of the library before shutdown.

Following the blame back to 2011, they did unload DLLs before: https://github.com/java-native-access/jna/commit/71de662675b...

  Remove any automatically unpacked native library.  Forcing the class
  loader to unload it first is only required on Windows, since the
  temporary native library is still "in use" and can't be deleted until
  the native library is removed from its class loader.  Any deferred
  execution we might install at this point would prevent the Native
  class and its class loader from being GC'd, so we instead force 
  the native library unload just a little bit prematurely.

Users reported occasional access violation errors during shutdown.


You can install a shutdown hook to do cleanup like this.

    Runtime.getRuntime().addShutdownHook(...)
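
For illustration, a minimal sketch of that idea, assuming the native library was unpacked to some temp path earlier (the class and parameter names here are made up):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class NativeCleanup {
        // Register a best-effort delete of the unpacked native library at JVM shutdown.
        public static void registerCleanup(Path unpackedLibrary) {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                try {
                    Files.deleteIfExists(unpackedLibrary);
                } catch (IOException e) {
                    // Nothing useful left to do this late in shutdown.
                }
            }));
        }
    }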


That's how java.io.File#deleteOnExit works under the hood. The DLL is still loaded at that point and can't be deleted.


Ah, looking through the docs [1]: you have to use your own ClassLoader (so it can be garbage-collected), and statically link with a JNI library which is unloaded when the ClassLoader is garbage-collected.

1: https://docs.oracle.com/en/java/javase/22/docs/specs/jni/inv...
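
A rough sketch of the ClassLoader part of that (the jar path and com.example.NativeBinding class are hypothetical): the binding class is loaded through a throwaway ClassLoader, its static initializer calls System.loadLibrary, and the JVM may unload the native library once that loader becomes unreachable and is collected.

    import java.lang.reflect.Method;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;

    public class UnloadableBinding {
        public static void main(String[] args) throws Exception {
            URL bindingJar = Path.of("native-binding.jar").toUri().toURL();
            URLClassLoader loader = new URLClassLoader(new URL[] { bindingJar }, null);

            // com.example.NativeBinding is assumed to call System.loadLibrary(...)
            // in a static initializer, tying the native library to this loader.
            Class<?> binding = Class.forName("com.example.NativeBinding", true, loader);
            Method doWork = binding.getMethod("doWork");
            doWork.invoke(null);

            // Drop every reference so the loader, the class, and (per the JNI spec)
            // the native library become eligible for unloading.
            doWork = null;
            binding = null;
            loader.close();
            loader = null;
            System.gc();
        }
    }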


Hmm, interesting. They do have DLLs in the JAR...


EDIT: Disregard. I am wrong. Original below.

You can just load it as a resource. We do this internally since much of our network stack is C. But we use JNI because the code is older than Java 22.


You made me search it again. And I still don't see how that's possible. `Runtime.load` requires a regular file with an absolute path[0].

Stackoverflow is full of "copy it into a temp file" solutions. ChatGPT keeps saying "sorry" but still insists on copying it into a temp file :)

[0] - https://docs.oracle.com/en%2Fjava%2Fjavase%2F22%2Fdocs%2Fapi...
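
For what it's worth, a bare-bones sketch of that "copy it into a temp file" workaround (the resource path and class name are hypothetical), since Runtime.load/System.load will only take an absolute path to a real file:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public final class ClasspathNativeLoader {
        // Copies a bundled native library out of the jar and loads it from disk,
        // e.g. loadFromClasspath("/native/libfoo.so", ".so").
        public static void loadFromClasspath(String resource, String suffix) throws Exception {
            Path tmp = Files.createTempFile("native-lib", suffix);
            try (InputStream in = ClasspathNativeLoader.class.getResourceAsStream(resource)) {
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            }
            tmp.toFile().deleteOnExit();                  // subject to the Windows caveats above
            System.load(tmp.toAbsolutePath().toString()); // absolute path required
        }
    }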


Embarrassing of me to give you the wrong answer. I went and checked my old code and:

     new FileOutputStream(tmpFile)

Apologies.


Sounds promising.

I have some extremely unwieldy off-heap operations currently implemented in Java (like quicksort for 128-bit records) that would be very nice to offload as FFI calls to a corresponding single-line C++ function.
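
As an illustration only (the library and symbol names are made up), a downcall to such a C++ function through the Java 22 FFM API might look roughly like this:

    import java.lang.foreign.*;
    import java.lang.invoke.MethodHandle;

    public class NativeSortDemo {
        public static void main(String[] args) throws Throwable {
            // Hypothetical native library exporting: void sort_u128(void* records, size_t count)
            System.loadLibrary("sort128");
            SymbolLookup lookup = SymbolLookup.loaderLookup();

            MethodHandle sortU128 = Linker.nativeLinker().downcallHandle(
                    lookup.find("sort_u128").orElseThrow(),
                    FunctionDescriptor.ofVoid(ValueLayout.ADDRESS, ValueLayout.JAVA_LONG));

            try (Arena arena = Arena.ofConfined()) {
                long count = 1_000_000;
                // Off-heap buffer of 16-byte (128-bit) records.
                MemorySegment records = arena.allocate(16 * count, 16);
                // ... fill records ...
                sortU128.invoke(records, count);
            }
        }
    }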


Why not give C# a try instead? It has everything you ask for and then some.


Because "some inconvenience/unmet requirement" from a language is not an invitation to "throw out the whole platform and your existing code and tooling, and learn/adopt/use an entirely different, single-vendor platform".

Except if we're talking about some college student or hobbyist picking their first language and exploring the language space...


He would still have to call out to the C++ function.


Assuming it is "sort for 128-bit records", that's something C# does really well - writing optimized code with structs / Vector128<T> / pointer arithmetic when really needed, without going through FFI and having to maintain a separate build step and project parts for a different platform.

But even if it was needed, such records can be commonly represented by the same structs both at C#'s and C++'s sides without overhead.

An array of such records could be passed as-is as a pointer, or vice versa - a buffer of structs allocated in C/C++ can be wrapped in a Span<Record128> and transparently interact with the rest of the standard library without having to touch unsafe (aside from eventually freeing it, should that be necessary).


Wow, you all are sure mad enough to go out of your way and downvote my comments elsewhere.

Stay in the swamp :)


Your heart literally gets bigger! Your stroke volume goes up, and you probably have more blood plasma, too. This aids in fueling muscles.

Though, if you Google heart hypertrophy, you'll find a bunch of scary things, because your heart can also get bigger in a bad way rather than a good way.


I'd guess there is also some efficiency in muscle tissue taking in oxygen over fat tissue? Also a bit better lungs, for runners and such? Though, I also don't know the relationship between lung efficiency and heart rate.

If you have links on any of this, it feels like it would be a very fascinating read.


Having no snaps able to use DNS after upgrading, thanks to AppArmor, sure was a fun surprise.


I wonder if that explains the BitWarden problem that I saw then.


> a very fit and very strong person can wrongly be classified (and in the opposite direction too)

Unfortunately, it usually goes in the bad direction, by quite a bit. BMI under-predicts obesity. You only have to hit 25% body fat as a male to be obese, or 32% as a female.

https://academic.oup.com/jes/article/7/Supplement_1/bvad114....

