The difference is in scaling. The top US labs have OOMs more compute available than Chinese labs. The difference on general tasks is obvious once you use them. A year ago it used to be said that open models are ~6 months behind SotA, but with the new RL paradigm I'd say the gap is growing. With less compute they have to focus on narrow tasks and resort to poor man's distillation, which leads to models that show benchmaxxing behavior.
That being said, this model is MIT licensed, so it's a net benefit regardless of whether it's benchmaxxed or not.
1. Electricity costs are at most 25% of inference costs, so even if electricity is 3x cheaper in China, that would only be a ~16% cost reduction.
2. Cost is only one input into price determination, and we really have no idea what the margins on inference even are, so assuming the current pricing is actually connected to costs is suspect.
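The ~16% figure in point 1 follows from simple arithmetic; a quick sketch, assuming the 25% electricity share and 3x price difference stated above:

```python
# Share of inference cost that is electricity (assumption from point 1 above)
electricity_share = 0.25
other_share = 1 - electricity_share

# If electricity is 3x cheaper, its contribution drops to one third
cheaper_electricity = electricity_share / 3

new_total = other_share + cheaper_electricity
reduction = 1 - new_total
print(f"Total cost reduction: {reduction:.1%}")  # ~16.7%
```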
I think the US created Starlink for military use. It provides worldwide coverage and very low latency, which helps a lot with UAVs; UAVs that aren't in line of sight need satellite communication. They just allow the public to use part of it to reduce the cost of the system.
The real big-ticket customers will be hedge funds getting trans- and intercontinental financial data a few ms ahead of competitors.
It would not be surprising if Starlink charged them 100x as much for each ms of latency boost. They would be paying not so much for the ms ahead of fiber as for their lower-paying competitors to get less ahead of fiber.
In fintech, they say "a microsecond is an aeon, a millisecond an eternity". You can do millions of computations in a ms.
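To put "millions of computations in a ms" in numbers, a back-of-the-envelope sketch (the 3 GHz clock is an illustrative assumption, and real throughput varies with instructions per cycle):

```python
# How many clock cycles fit into one millisecond on a typical core?
clock_hz = 3e9        # 3 GHz core (illustrative assumption)
ms = 1e-3             # one millisecond in seconds
cycles_per_ms = clock_hz * ms
print(f"{cycles_per_ms:,.0f} cycles per millisecond")  # 3,000,000 -- millions, as claimed
```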
The US didn't create Starlink; SpaceX did, and they did it without a government grant. They built what they thought would solve the problem of internet connectivity and make a lot of money with it.
Sure, they will have many government customers, including the military, emergency services, the coast guard, NASA, and so on.
While Starlink has many use-cases for the military, they certainly didn't create it with that application as the primary goal. The primary goal is making money to fund SpaceX.
They originally chose to use x265 to calibrate the bitrates; possibly something went wrong there, and the 'Tiny', 'Big', etc. labels are somewhat meaningless.
At the 'Large' and 'Big' settings of this image -- which are still well below 1 bpp, i.e., below typical internet image quality -- you can still observe significant differences in the clouds, even if the balloons are relatively well rendered.
Nothing went wrong there; it's just what you get if you configure an encoder using only a quantization setting rather than a visual target. The same would happen if you encoded images with libjpeg quality 50 (and then derived all other bitrates from there): in some cases the image will look OK-ish at that setting, in others it will be complete garbage.
JPEG XL is the first codec to have a practical encoder that can be configured by saying "I want the worst visual difference to be X units of just-noticeable-difference". All other encoders are basically configured by saying "I want to use this scaling factor for the quantization tables, and let's hope that the result will look OK".
> All other encoders are basically configured by saying "I want to use this scaling factor for the quantization tables, and let's hope that the result will look OK".
crf in x264/x265 is smarter than that, but it's still a closed-form solution. That's probably easier to work with than optimizing for constant SSIM or the like: it always takes one pass, and those objective metrics aren't actually very good anyway.
JPEG XL isn't yet optimised for extremely low bpp. I think the labels 'Tiny', 'Medium', 'Large', etc. are sort of misleading without looking at the bpp numbers.
It is a bit like judging video quality from bitrate without looking at the video resolution.
The labels are indeed not very useful. It would have been better to derive bitrates from the jxl encoder, which has a perceptual-target-based setting (--distance), instead of from absolute HEVC quantization settings (as was done here), which make 'Big' great for some images and still rather low quality for others.
Parts 1 and 2 define the codestream and the file format, respectively. Both are finalized at the technical level (the ISO process is still ongoing, but there is no more opportunity for technical changes; the JPEG committee has approved the final draft). So it is ready for use now: the bitstream has been frozen since January, and free and open source reference software is available.
Part 3 will describe conformance testing (how to verify that an alternative decoder implementation is in fact correct), and Part 4 will just be a snapshot of the reference software that gets archived by ISO, though for all practical purposes you should just get the most recent git version. Parts 3 and 4 are not at all needed to start using JPEG XL.
I've been watching developments here since the FLIF days, and I wanted to say thank you for taking the serious time, effort, and tireless communication to see things through the standards process. That takes perseverance!
The quality is normalized to the x265 q24 setting. I believe this process/setting is either not working for images or something else went wrong, because both the observable quality and the bitrates vary from image to image.
Bitrates vary from 0.26 bpp (Nestor/AVIF) to 4+ bpp (205/AVIF) at the finest setting. Nestor at the lowest setting is just 0.05 bpp, which is unusually low for an internet image. A full-HD image at 0.05 bpp transfers over average mobile speed in about 5 ms and is roughly 12 kB in size. I'd rather wait a full 100 ms and get a proper 1 bpp image.
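The 12 kB / 5 ms figures check out; a quick sketch, assuming a 1920x1080 frame and a roughly 20 Mbps mobile downlink (the link speed is my assumption, not stated above):

```python
# File size and transfer time for a full-HD image at 0.05 bits per pixel
width, height = 1920, 1080   # full HD
bpp = 0.05                   # bits per pixel at the lowest setting

size_bits = width * height * bpp
size_kb = size_bits / 8 / 1000
print(f"size: {size_kb:.1f} kB")        # ~13.0 kB

mobile_mbps = 20             # assumed "average mobile speed"
transfer_ms = size_bits / (mobile_mbps * 1e6) * 1000
print(f"transfer: {transfer_ms:.1f} ms")  # ~5.2 ms
```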
It seems to try really hard to preserve high frequencies, where WebP just gives up. Hopefully it's just a question of tuning the quantisation tables for low bitrate.
Simply put, when the power factor is lower, there are higher losses in the cables. And it is not always negative power flow that causes this (as many say): for a bridge rectifier the instantaneous power is always positive, yet the power factor is still below one because the current is distorted.
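The cable-loss effect can be sketched numerically: to deliver the same real power, a lower power factor requires a higher RMS current, and I²R losses grow with the square of that current (the voltage, power, and resistance figures below are illustrative assumptions):

```python
# Compare I^2*R cable losses when delivering the same real power at two power factors.
P = 2000.0   # real power drawn by the load, W (illustrative)
V = 230.0    # RMS supply voltage, V
R = 0.5      # cable resistance, ohms (illustrative)

def cable_loss(power_factor):
    i_rms = P / (V * power_factor)  # RMS current needed for the same real power
    return i_rms ** 2 * R           # power dissipated in the cable

loss_unity = cable_loss(1.0)
loss_low = cable_loss(0.7)  # e.g. distorted rectifier current with PF ~0.7
print(f"PF=1.0: {loss_unity:.1f} W, PF=0.7: {loss_low:.1f} W")
```

Losses roughly double at PF 0.7, since they scale as 1/PF²; this holds whether the low power factor comes from phase shift or from harmonic distortion, as with the bridge rectifier.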