Posits have advantages for low-precision numbers, e.g. up to 32 bits.
For high-precision numbers, e.g. at 64 bits, the standard double-precision floating-point numbers are the best choice.
Any number format using a given number of bits can represent the same number of distinct values. The only difference between the various formats, e.g. integers, fixed-point numbers, floating-point numbers or posits, is how those points are distributed over the real line, denser in some regions and sparser in others.
For scientific computation, the optimal distribution of the representable numbers is the one where the relative error is constant. The ideal number format would be logarithmic, but adding and subtracting logarithmic numbers is too difficult, so the second-best format is used instead, i.e. floating-point numbers, which allow fast implementations of all the important arithmetic operations.
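To make the constant relative error point concrete, here is a minimal Python sketch (the scales and sample counts are arbitrary choices of mine, not anything from the discussion): the worst-case float32 rounding error, measured relative to the value, stays near the same 2**-24 bound whether the numbers are around 1e-30 or around 1e30.

    import random
    import struct

    def round_to_float32(x):
        # Round a Python float (binary64) to the nearest binary32 value.
        return struct.unpack('f', struct.pack('f', x))[0]

    # The relative rounding error of float32 stays below 2**-24 regardless of
    # the magnitude of the operands; that is what "constant relative error" means.
    random.seed(0)
    for scale in (1e-30, 1e-10, 1.0, 1e10, 1e30):
        worst = max(abs(round_to_float32(x) - x) / x
                    for x in (scale * (1.0 + random.random()) for _ in range(10000)))
        print(f"scale {scale:g}: worst relative error {worst:.3e} (bound {2**-24:.3e})")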
Posits have a non-uniform relative error, low close to 1 and high for very large and very small numbers, so they are unsuitable for complex physics modelling, which needs more computations with large and small numbers than with numbers close to 1.
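To see the tapered accuracy in numbers, here is a small Python sketch based on the standard posit<32,2> encoding (sign bit, run-length-coded regime, 2 exponent bits, fraction); it only counts the bit budget and is not a posit implementation. Near 1 a 32-bit posit has up to 27 fraction bits, more than float32's 23, but the count falls off quickly as the magnitude grows or shrinks.

    NBITS, ES = 32, 2   # standard 32-bit posit

    def posit32_fraction_bits(binary_exponent):
        # Split the scale 2**binary_exponent into regime k and exponent e,
        # where scale = (2**(2**ES))**k * 2**e and 0 <= e < 2**ES.
        k, e = divmod(binary_exponent, 2 ** ES)
        # The regime is a run of identical bits plus one terminating bit.
        regime_bits = k + 2 if k >= 0 else -k + 1
        # Whatever remains after the sign, regime and exponent bits holds the fraction.
        return max(NBITS - 1 - regime_bits - ES, 0)

    for exp in (0, 8, 32, 64, 120, -8, -32, -64, -120):
        print(f"scale 2**{exp:4d}: posit32 fraction bits = {posit32_fraction_bits(exp):2d}, float32 = 23")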
On the other hand, for applications where low-precision numbers are adequate, e.g. machine learning, graphics or some kinds of DSP, posits could be better than floating-point numbers.
I agree with you that posits only really make sense for <=32 bits. That said, I disagree that the optimal distribution of numbers is logarithmic. Especially for 16-bit or smaller computations, if you are dealing with big or small numbers, you need to renormalize to roughly 1 anyway to prevent overflow and underflow. As such, it makes a lot more sense to have extra accuracy near 1.
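As an example, a length computation that is routine in float32 already needs that kind of rescaling in float16; a minimal NumPy sketch, with values I picked just for illustration:

    import numpy as np

    v = np.array([3.0e3, 4.0e3], dtype=np.float16)    # well inside float16's range (max ~65504)
    naive = np.sqrt(np.sum(v * v))                     # the squares overflow float16, giving inf
    scale = np.max(np.abs(v))
    safe = scale * np.sqrt(np.sum((v / scale) ** 2))   # renormalize to roughly 1 first, then compute
    print(naive, safe)                                 # inf vs. 5000.0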
I agree with you that at low precision a logarithmic distribution may not be optimal, and that is why posits may be a better choice for such applications.
A logarithmic distribution is optimal for complex physics models, where there are relationships between many different kinds of physical quantities, so it is not possible or convenient to choose a scale factor that would make all quantities have values close to 1. In such applications many numbers are very large or very small, and any distribution other than logarithmic increases the errors in the results. Such models also typically need 64-bit precision. Moreover, having a much greater exponent range than the 32-bit floating-point format provides, in order to avoid overflows and underflows, is frequently even more important for them than having greater precision.
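A toy illustration of the range problem, with numbers I made up rather than taken from any real model: an intermediate product overflows float32 even though the final result is comfortably representable, while float64 handles the same expression without any manual rescaling.

    import numpy as np

    a, b, c = 1.0e30, 1.0e20, 1.0e25
    # float32: a*b = 1e50 exceeds the maximum (~3.4e38), so the result is inf
    # (NumPy may also emit an overflow warning here).
    print(np.float32(a) * np.float32(b) / np.float32(c))
    # float64: the same expression evaluates to 1e25 without trouble.
    print(np.float64(a) * np.float64(b) / np.float64(c))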
If the range of the 32-bit floating-point numbers is good enough for an application, then there is indeed a good chance that posits might be better for it.