JDK 20 will introduce float16 conversion methods (java.net)
53 points by mfiguiere on Oct 12, 2022 | 22 comments



If I understand this API correctly, Java won't get a distinct float16 type. It seems that they'll have 16 bit floats encoded in shorts (16 bit integers).

This seems like it could be a huge footgun. A lot of methods already have an overload for short, so a completely separate set of functions is needed to do operations on these values.
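For illustration, here is roughly what the new API looks like in practice (a sketch assuming the `Float.floatToFloat16` / `Float.float16ToFloat` method names from the linked proposal; the short carries raw IEEE 754 binary16 bits):

```java
// JDK 20+: half-precision values travel as plain shorts.
short h = Float.floatToFloat16(1.5f);   // encode a float into binary16 bits
float f = Float.float16ToFloat(h);      // decode back: 1.5f survives exactly

// The footgun: the compiler can't tell this short apart from an ordinary
// integer, so arithmetic on the raw bit pattern compiles without complaint.
short oops = (short) (h + 1);           // silently a *different* float16
```

Nothing in the type system distinguishes `h` from `oops`; both are just shorts.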

I get that it's hard to introduce a 16 bit float as a new primitive type in the JVM, so that's most likely not an option.

.NET introduced 16 bit floats as the "Half" type 2 years ago [1] and I think their approach is far superior. They also store it as a (u)short internally [2], but they have something that Java (still?) does not have: they made it a user-defined value type, or in C# terms: a struct. This doesn't create footguns because it is a separate type, and it also doesn't have any meaningful overhead. And since .NET 7 will introduce generic math [3], it seems we'll be able to use the same APIs as we always have.

IMHO, this is an API - just like Optional<T> - which could be far better designed if Project Valhalla were already a thing.

[1]: https://devblogs.microsoft.com/dotnet/introducing-the-half-t...

[2]: https://github.com/dotnet/runtime/blob/d783badaa38596beedefd...

[3]: https://devblogs.microsoft.com/dotnet/dotnet-7-generic-math/


> It seems that they'll have 16 bit floats encoded in shorts (16 bit integers). This seems like it could be a huge footgun.

I agree. Fortunately, the upcoming Primitive Classes [1] will make it possible to create a float16 type by wrapping a short without any overhead, thus acting essentially as a type-safe alias.
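The JEP 401 syntax is still in flux, but the idea can be sketched with a plain wrapper today (the class name `Half` and its methods are hypothetical; under Valhalla a `value class` declaration would let the JVM flatten it to 16 bits):

```java
// Hypothetical sketch of a type-safe alias over the short encoding.
// With JEP 401 this could become `value class Half` and be flattened;
// as an ordinary final class it already prevents the short/float16
// mix-up, just with heap-allocation overhead until Valhalla lands.
final class Half {
    private final short bits;               // raw IEEE 754 binary16 bits
    private Half(short bits) { this.bits = bits; }
    static Half of(float f) { return new Half(Float.floatToFloat16(f)); }
    float floatValue() { return Float.float16ToFloat(bits); }
    Half add(Half other) { return of(this.floatValue() + other.floatValue()); }
}
```

With a wrapper like this, `half.add(other)` type-checks while `half + 1` does not, which is exactly the footgun the raw-short API leaves open.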

It's also likely that the JDK will come with an ad hoc type, as the Vector API currently seems to be considering adding a HalfFloat class [2] so that you can use float16 directly with SIMD logic, which is obviously one of the major benefits. See JDK-8290204 [3] for more details.

[1] https://openjdk.org/jeps/401

[2] https://github.com/openjdk/panama-vector/blob/vectorIntrinsi...

[3] https://bugs.openjdk.org/browse/JDK-8290204


God, we're already at JDK 20?


I remember when Java 8 was a new hip thing: look Ma, lambdas! Or Java 5: oh wow, generics!


Cries in current projects still on Java 8.


The 8 -> 9+ transition is rough. But once you've done that, upgrading to the latest LTS is really as simple as it used to be back in the glory days.


And it's again a pain going from 11 LTS to 17 LTS, with the great Java EE rename (javax to jakarta).


I see people say this all the time, but I’ve never actually experienced it and I’ve done 2 upgrades from 8 to 11.

Yes, they removed some stuff from the JDK, but it's a simple one-liner in your pom to add each piece back.

What have you run into that was a major blocker?


Are we colleagues?


Java 1.1: Event listeners. No more bubbling!


2 major version releases per year, one in March and one in September.

7, 8, 11, 17, 21 are LTS for the annoying companies that refuse to update. LTS cadence went from 3 to 2 years after JDK 17.


What happened with 14?


I'm not sure - Java 11 was the first LTS under the new release cadence, and I don't think the next target LTS was decided right after Java 11.


It's a bit too late to try and shoehorn half-floats into the mix.

They are super good for so many things, but having to convert back and forth between different systems is over-engineered, so until ALL systems support them they probably won't take off.

It's a chicken and egg problem. Khronos should have thought about it earlier!

In my particular case, the Raspberry Pi 4's GPU does NOT support them in drivers yet, and I cannot rely on them being implemented properly.


32 bit floats have holes big enough, why use 16 bit ones?


Float16s are useful in ML and compute workloads, where they make it possible to double the throughput of vector-based instructions (SIMD in CPUs / GPUs) at the cost of lower precision, which is acceptable for many such workloads.
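A quick way to see the precision cost (shown here with the JDK 20 conversion methods): binary16 keeps only 11 significand bits, so above 2048 consecutive integers are no longer representable.

```java
// binary16 stores an 11-bit significand, so the gap between adjacent
// representable values at magnitude 2048 (= 2^11) is already 2.
float exact   = Float.float16ToFloat(Float.floatToFloat16(2048.0f)); // 2048.0
float rounded = Float.float16ToFloat(Float.floatToFloat16(2049.0f)); // 2048.0 (round to even)
```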


Exactly so, there are a bunch of Java interfaces to native ML libraries that have some messy bit hacking to convert between fp16 and Java floats. This API along with the intrinsification (https://github.com/openjdk/jdk/pull/10500) will mean I can throw away that code and replace it with something that performs correctly and is much faster.
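For a sense of what that bit hacking looks like, here is a hand-rolled half-to-float decode of the kind such glue code typically contains (a simplified sketch, not any particular library's version):

```java
// Hand-rolled IEEE 754 binary16 -> float conversion, the kind of glue
// that Float.float16ToFloat and its intrinsic make obsolete.
static float halfToFloat(short h) {
    int bits = h & 0xFFFF;
    int sign = bits >>> 15;                  // 1 sign bit
    int exp  = (bits >>> 10) & 0x1F;         // 5 exponent bits
    int mant = bits & 0x3FF;                 // 10 mantissa bits
    float val;
    if (exp == 0x1F) {                       // all-ones exponent: Inf / NaN
        val = (mant == 0) ? Float.POSITIVE_INFINITY : Float.NaN;
    } else if (exp == 0) {                   // zero / subnormal: mant * 2^-24
        val = Math.scalb((float) mant, -24);
    } else {                                 // normal: (1 + mant/1024) * 2^(exp-15)
        val = Math.scalb(1.0f + mant / 1024.0f, exp - 15);
    }
    return sign == 1 ? -val : val;
}
```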


Why is this of note? When is this even used? In special math libraries? Don't they bring their own special methods?


The only major use of 16-bit floats that I know of is bfloat16, so I assume that is what this is for.
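Worth noting that bfloat16 is a different format from the IEEE 754 binary16 these new methods target: bfloat16 is simply the top 16 bits of a float32 pattern, which makes conversion a one-line shift (a sketch that truncates rather than rounds):

```java
// bfloat16 is just the high half of a float32 bit pattern, so the
// conversion is a shift (this sketch truncates; production code rounds).
static short floatToBfloat16(float f) {
    return (short) (Float.floatToIntBits(f) >>> 16);
}
static float bfloat16ToFloat(short b) {
    return Float.intBitsToFloat((b & 0xFFFF) << 16);
}
```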


It's used in machine learning pretty often.


Would be nice if they had some wide integer primitives instead, like a u128.


I like float16




