Vulkan is not supported on game consoles, with the exception of Switch, and even there you should use NVN instead.
It is not officially supported on Windows either; it works because the GPU vendors use the Installable Client Driver (ICD) mechanism to bring their own driver stack. That mechanism was initially created for OpenGL and nowadays sits on top of the DirectX runtime.
In the embedded space, many of the OSes that support graphical output are still focused on OpenGL ES.
While Metal might be easier to use, I'm pretty sure having to worry about Vulkan alone is still easier than worrying about Vulkan+Metal. And Metal predating Vulkan is really only a concern for code that existed before Vulkan became available (which wasn't much).
Vulkan capture support on Windows was introduced in v25 (on Linux you need to use a plugin). There is no Vulkan renderer support, which the post clearly stated...
Prototyping platforms have tiny markets, but they lead to downstream sales. Many a company has been brought down by more developer-friendly platforms after ignoring the "tiny" userbase of people who want to do unconventional things.
Most IC vendors provide free samples and support because of this. That's a market size of close to zero -- electronic engineers -- but leads to a market size of "massive." I can get an application engineer to visit my office for free to help me develop if I want.
Arguably, iPhone and Android won by supporting the tiny market of developers, who went on to build an ecosystem of applications, some long-tail, and some unexpected successes.
And arguably, x86 won for the same reason.
Atmel had shipped 500 million AVR flash microcontrollers, due in large part to the ecosystem created by Arduino.
Ballmer said "Developers! Developers! Developers!" Visual Studio was not a major revenue driver for Microsoft; what was developed in it was.
> Prototyping platforms have tiny markets, but they lead to downstream sales. Many a company has been brought down by more developer-friendly platforms after ignoring the "tiny" userbase of people who want to do unconventional things.
Qualcomm doesn't even make small/cheap MCUs, so they aren't going to win over that market by buying Arduino. Their first board post-acquisition is a mashup of a Linux SBC with an MCU devkit, and while the Linux SoC is from QCOM, the MCU is from ST Micro.
>Atmel had shipped 500 million AVR flash microcontrollers, due in large part to the ecosystem created by Arduino.
How do you know the 500 million sales is due to the Arduino ecosystem?
I've worked in embedded for 10+ years, and in the four companies I've worked at so far, none of the products ever featured AVR microcontrollers. The microcontroller of choice for production was always based on the feature/cost ratio for each application, never on the question "is it part of the Arduino ecosystem?"
Tinkering with Arduino at home, and building products for mass production, have widely different considerations.
If they sold 500 million microcontrollers and your workplaces never bought any, then your experience doesn't tell us anything about why the people who did buy them bought them.
All of the products I've been involved with that included AVR microcontrollers are from before the Arduino platform existed. The STMicro ARM Cortex-M3 chips are more capable and cheaper than the 8-bit AVRs; the Arduino IDE never factored into the decision, even at the height of its popularity.
FWIW: I've used Arduinos, but never with their IDE.
AVR was super-developer-friendly well before the Arduino. It replaced the PIC for a lot of hobbyist projects.
To the points in the thread, on major product development, these things don't matter. On the long tail of smaller products, as well as on unexpected successes, they do.
That is the downside: you can prototype with one chip and switch when the concept works. I've worked on many projects over the years where that was done. Sometimes an intern proved it works with an Arduino, which was cheap enough to buy without needing supply management, but then we did the project with "good code" on our internal controllers. Other times we bought a competitor and, again, the first thing we did was switch them to our controllers. (Our controllers are designed for harsh environments, which means millions of dollars spent designing the case and connectors.)
I can confirm. While there is a fair amount of train infrastructure, it is horribly unreliable. Plan on being delayed by 30-50% of the scheduled travel time.
"You don't think 'oh, the lawnmower hates me' - lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower." - Bryan Cantrill
> "You don't think 'oh, the lawnmower hates me' - lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower."
True for now.
Smart devices might well be controlled by people who hate you. And even if they just don't care about you, that's very different from a lawnmower not caring about you.
Qualcomm is a corporate behemoth, much like Oracle. In the immortal words of Bryan Cantrill, it is a lawnmower, and if you stick your hand in it, you'll get it chopped off.
I'm doing my best! I think I have made Berlin's bureaucracy a lot more approachable, but there's only so much you can do as a single person without official backing.
Or, to flip it around: it's incredible what can be done by a single motivated person, and sad that the entire bureaucratic apparatus is incapable of doing it.
While in many ways software freedom won the server and workstation battle, we lost all the new battlefronts that opened in the last two decades:
- Phones (the thing in the hand of almost every human now; and sorry, LineageOS and GrapheneOS are quickly being marginalized by things like Google Play Integrity)
- JavaScript (yes, it is a big problem [0])
- the Cloud
- IoT
The FSF was actually pretty good at identifying those issues early on, but was overwhelmed and probably marginalized because they were right.
Notice that none of those new "Open Source" advocates really care about those ubiquitous issues.
We won some battles but lost the war.
The fact that France endorses some UN Open Source principles really doesn't matter.
You might think caring about software freedom is almost fringe but look at:
- The US freaking out about all those Chinese devices and cyber attacks,
- The EU now freaking out about US big tech and the cloud.
I believe the best way to safeguard sovereignty and safety is for everyone to be able to control, as much as possible, what is running on our "computers", as close to us as possible. The FSF [1] has been consistent on those issues and has been doing something about it. But so have some other folks, like OpenBSD [2].
It's very unclear to me what the goals of the UN and the OSI-type foundations really are.
Does AV1 need a successor right now? At least as of some years ago SVT-AV1 was stronger than x265 on both software encoding speed and quality/bitrate[1], and a successor would reset the timer on getting hardware decoders rolled out.
It looks like VVC (H.266) will be significantly better than HEVC and AV1.
But due to the patent issues it's bound to have, I suspect common usage will be practically nonexistent, just like HEVC.
> I suspect common usage will be practically nonexistent, just like HEVC
HEVC is used in TV broadcast stations, in FaceTime and other cameras, and by Netflix, Amazon Prime, Disney+, and many other large streaming services outside the US. The only big player that doesn't make any use of HEVC is YouTube.
The problem with the patents is largely misunderstood. Most importantly, the patents do not directly apply to the individual consumer downloading and decoding such audio/video content. They only apply in commercial settings: the sale of software that can encode or decode audio and video in the MPEG formats, and the sale of audio and video content encoded in those formats. This is why Mozilla made a big fuss years ago over not wanting to include H.264 decoding in Firefox: they feared they'd have to spend a bit of their money, since they are, after all, a commercial endeavour. No, really, it was never about wanting to "protect" users; it was always about their earnings. You can happily encode AAC audio and H.264 video and share it free of charge with everyone, and they can always listen to and watch that content, without any worries.
And pardon the nitpick, but it's H.264 and H.265, not x264 and x265.
Seems like the last H.264 patents expire in about 5 months; silly to start moving to options that might have submarine patents when we'll have something functional that's patent-free in quite a short time.
Can you be more specific? I can't really tell what you mean by "normal" here.
And while I do like smaller files, if I compare with a few years ago my connection is faster and my drives are bigger so presumably the limit for "normal" has gone up...
H.264 does just fine with 4K. If you know what you're doing, you really don't need to throw 10 Mbit/s at it to get crisp quality.
(P.S. I'm fully on board with H.265 being fantastic; it's amazing to see what e.g. x265 can do with it, providing practically identical output at 30-50% lower bitrate. I'm just saying that H.264 isn't in any way incapable.)
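For what it's worth, a minimal sketch of the kind of thing I mean (hypothetical filenames, and the CRF value is only illustrative; tune it per source):

    # constant-quality 4K H.264 encode; lower -crf means higher quality/bitrate
    ffmpeg -i input_2160p.mkv \
        -c:v libx264 -preset slow -crf 20 -pix_fmt yuv420p \
        -c:a copy output_2160p.mkv

With a slow preset and a sane CRF, x264 can land well under 10 Mbit/s on typical 4K content.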
H.264 may allow 2160p video, but the 4K UHD standard is more than just 2160p. For example, HDR is absolutely critical to 4K, and the only way to do that in H.264 is to use Hi10P, which isn't supported by most devices.
In fact, I'd say HDR is more important than 2160p resolution in that I'd rather watch 1080p HDR video than 2160p SDR video.
The trick is knowing the optimum settings to use. With H.265, as you lower the bitrate it smooths more and more and you lose detail; H.264 does blocking instead, so there is a difference in how image quality degrades.
At the lower end of useful bitrates there's absolutely a difference. Video encoding is complex territory, and there's no way around knowing and understanding the "optimum settings" when you want to keep bitrate down, no matter whether it's MPEG-4 ASP, H.264, H.265, AV1, or what-have-you.
Compared to other modes of operation, CRF doesn't work better or worse at any arbitrary bitrate. In itself it doesn't do anything fundamentally different about how changes between frames are analyzed or how the changes are encoded. It's a "constant quality" mode of operation, and it will use as much data as it deems necessary in order to meet the quality target. That is, CRF produces a varying bitrate product and you have no actual control over the final bitrate requirement.
I know it's not doing anything very different. But that's my point: you don't need fancy tuning features.
As for final bitrate, maybe we need to talk more about the use case here. Because for very small encodes (often around 250 kbps), I never cared about moment-to-moment bitrate, just the final file size. And if that's too far off, I change the factor and run it again.
For things I intend to stream, I usually have a bitrate limit on top of the CRF setting, but that's the only optional flag, and it doesn't kick in very often. The result is quite high quality out of 2-3 Mbps AV1, without any flags that touch the details of the video encoder, so I don't see a need for knowing and understanding optimum settings. And the same setup worked with H.264 at a moderately higher bitrate.
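Concretely, something like this (a sketch with hypothetical filenames; capped CRF with libsvtav1 needs a reasonably recent ffmpeg/SVT-AV1 build, otherwise swap in libx264 with the same rate flags):

    # constant quality with a soft cap: CRF picks the quality target,
    # -maxrate/-bufsize keep bitrate spikes in check for streaming
    ffmpeg -i input.mkv \
        -c:v libsvtav1 -preset 6 -crf 32 \
        -maxrate 3M -bufsize 6M \
        -c:a copy output.mkv

If the resulting file size is too far off, nudge the CRF and run it again.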
The best thing you can do for the encoder is give it time to work.
They've been working on it for years but I'm not sure there's any great need for it right now. The various MPEG alternatives seem to be eating themselves with patent infighting and fragmentation.
It says AV1 is open source and royalty free, and all modern hardware seems to have hardware decode for it. It doesn't seem any of the big players are realistically worried about bogus patent claims.
Vulkan support was introduced in OBS Studio 25.0 in March 2020, 5.5 years ago.