When I brought this up, the proponents of the library dug in their heels and said that it's not intended for apps like Flash, PDF readers, and browsers, but rather intended for (a) teaching and (b) for companies that have prohibitions on using third-party code. The teaching use case (a) is unconvincing, because while Logo was indeed a good teaching tool and it'd be great to resurrect it, there's no need to do so in the standard library. As far as I could tell, the real reason was (b) for the proponents of the spec, and, needless to say, that's a horrible reason to put something in the standard library. It's effectively an attempt to route around some random dysfunctional company policy by saddling everyone with a burden for all time.
They should learn from the example of Python where first class libraries are intentionally left out of the standard lib so that they don't fossilize.
That being said, I'd like to see libAgg get more support, as it's somewhat drifting in the wind and it has a nicely designed composable rendering pipeline that aligns well with C++ principles.
Again, no need for this in standard library when there are reasonable libraries with good licenses around.
Agg is weak in comparison, though it has a few cute things going for it.
First, because I have fond memories of Borland's BGI, and there is hardly anything like that for C and C++ that works easily everywhere.
Secondly, because the attacks against the graphics proposal could equally be used against iostreams, filesystem, or networking.
Not every platform where C++ runs can support them, none of them fully supports modern OS features and all of them have better third party alternatives.
Yet for some magical reason it is OK for them to be part of the standard library, while graphics are not.
Plenty of application developers are more than happy with a Swing/GDI like canvas.
Not everyone needs to be an OpenGL shader guru to draw a couple of lines.
You'd be surprised. The mere fact that they were there, readily usable and portable, would get them adopted.
Unlike graphics, which is a moving target, even in 2D. Where do you stop? Bezier paths with subdivision, and full blended gradient support for those? Nonlinear projections? Fonts? Text composition?
If you bite off too much, you end up with the OpenGL problem where nobody implemented everything.
The examples of pretty bad implementations of 2D would be a) Cairo, b) whatever Android and Chrome use for drawing.
Should it handle SSL, TLS, HTTP routing, HTML templates, ...?
Where do you stop - CSS, JS bundlers?
And why, pray tell, are those "pretty bad"?
IMHO, something should be in the standard library if a large class of programs would want to use it, and if a "lowest common denominator" approach would be good enough for the majority of those.
Example: JSON parsing and generation should be IN.
A tremendous number of programs want to produce and consume JSON, and the bulk of these are not performance critical and don't have too many edge cases. Just boring normal JSON.
Example: Sockets should be IN.
As above, vast numbers of programs want to use sockets and a standard approach would suit almost all of them perfectly well.
Example: Graphics should be OUT.
A lot of programs could potentially use a graphics library, but many of these would not be satisfied by a "standard" approach. Cross-platform is the main problem - it would have to interface with GDI, SDL, Quartz, and whatever else, and would likely do a bad job of all of them.
The point of C++ is to avoid burdening you with costs you don't want to incur. Putting this stuff in the stdlib imposes exactly such a cost.
If you could focus on modules and package management that didn't suck, would you still argue that you need to have JSON and high level sockets in stdlib?
I think I saw a passing reference in a recent trip report that for now they are only focusing on math primitives.
See also https://www.reddit.com/r/cpp/comments/900dor/stdweb_view_pro...
I think the BSD socket library is that standard approach...
Still has its hurdles but it is a step in the right direction for trying to wrangle the wild west of C++.
The difference is, having an external library allows it to be versioned with breaking changes if necessary, without breaking consumers.
We need less, not more, conflation between languages, runtime environments, build systems, and package repositories. It's senseless to couple these concepts.
Just don't complain about adoption then.
I just went back to C and have spent too many hours trying to get small libraries to build and link properly. I'm pulling my hair out.
Having the OS handle it is a much better solution than having every single language come up with its own solution.
That's really irrelevant, and not the proper way to install programming dependencies for developers. System package managers are for installing system dependencies (including libs) and for building software for the system (e.g., as a system admin/user you want to build X and it needs the -devel package installed), not for developers working on their own projects.
Nobody (or very few) in Rust, Python, Ruby, Go, Java etc would ever use the "system package manager" for installing their language's third party libs.
For one, you don't want to pollute the system with your program's deps.
Second, you want to have isolated, different versions, of various dependencies, only visible to this or that project you work on, not to the whole system.
Completely disagree, the system dependencies are my dependencies and whenever possible I want to be able to "make install" and have my dependencies update with the rest of my system.
> Nobody (or very few) in Rust, Python, Ruby, Go, Java etc would ever use the "system package manager" for installing their language's third party libs.
Java has always been a "fat VM"; it's a platform and not part of the system, so it's not surprising that it handles its own. For python/perl/ruby, it used to be quite common to include packages from the OS repository, until they reinvented the wheel. For rust, that's why it's a joke as a "systems language".
> For one, you don't want to pollute the system with your program's deps. Second, you want to have isolated, different versions, of various dependencies, only visible to this or that project you work on, not to the whole system.
Entirely doable with existing package managers; just use the --root switch with dpkg or the --prefix switch with rpm (or with configure if you build from source). Typically I only want a small subset of version-specific packages.
So why would I want yet another package manager when my existing one is sufficient?
You can disagree, but you'd be wrong in anything but a one-man-shop setting working on a single project and willing to depend on the distro's (or third-party distro package manager repos') versions of language libraries and frameworks.
>Entirely doable with existing package managers; just use the --root switch with dpkg or the --prefix switch with rpm (or with configure if you build from source). Typically I only want a small subset of version-specific packages.
Typically this is a luxury that software shops with different projects (including older, already installed), different customers, etc., can't have.
>For python/perl/ruby, it used to be quite common to include packages from the OS repository, until they reinvented the wheel
Or until those languages weren't used merely by sys-admins and one-off scripts, but for large project development, and "include packages from the OS repository" wouldn't cut it anymore.
>For rust, that's why it's a joke as a "systems language".
Total non sequitur. In fact, one of the main goals of C++ (as per the posted roadmap) is to add a package manager.
But the total non sequitur that "Rust is a joke as a systems language" because it has cargo disqualifies, I think, anything else that can be said here.
No one has time to publish their library into the myriad of OS specific package managers that are out there.
We don’t need yet another package manager. Every language seems to have to have its own package manager, separate from my operating system’s package manager, that does exactly the same thing but in an incompatible way.
Now, if I want to know what software I have on my system, I need to use five different list commands instead of just one. If C++ had its own package manager, that would mean six. Please help to stop this madness, not perpetuate it.
In particular, distros generally work best when one version is enough, or maybe a few versions. Anything else leads to dependency hell.
Of course, if I fail, the negative consequences are minor, people just won't use it. By contrast, having bad stuff in a standard library lasts almost forever.
("Where possible" would certainly cover all the containers, all the algorithms, etc; it's less clear how you'd provide a reference implementation of something that needs to use low level OS functionality for its implementation, such as 'operator new' or 'ofstream')
Of course, it would also be nice if timely bugfixes went into the C++ standard library. For instance, std::generate_canonical was standardized in C++11, had an "issue" lodged in 2015, and the "new specification for std::generate_canonical" is still awaiting proposed wording (last updated a year ago, cough, so ... maybe in C++23?)
I would quibble with the hash table example though. I think there is a case to be made not for changing unordered_map but for adding a new standard hash table with somewhat different guarantees that make it suitable for higher performance implementations. Hash tables are so widely useful that a better standard option without breaking backwards compatibility seems worthwhile. Other languages have standard libraries that evolve in this way and while it can get out of hand (looking at you C#) it's a reasonable solution in moderation.
Seriously, Go's standard library is the best I've ever used, and would make a great case study for the "batteries included" case.
But it doesn't have a good story at all around graphics, 2D, 3D or UI. That doesn't bother gophers much, because the language was(is) intended to build servers with. But for C++, different story.
> Especially when you compare C++ to other languages, there’s a pretty strong argument to be made for a more inclusive and even all-encompassing standard library - look at the richness of the standard libraries for Java or Python.
What is that argument? And even if you accept that argument, can we execute? That is, will the result be at least of a similar quality, or will it be riddled with subtle gotchas and sharp edges like the existing C++ standard library?
> So, what should go in the standard library? Fundamentals. Things that can only be done with compiler support, or compile-time things that are so subtle that only the standard should be trusted to get it right. Vocabulary. Time, vector, string. Concurrency. Mutex, future, executors.
Can the standard be trusted to get these things right?
- Vocabulary: There's a fair amount of depth to what "right" means for vector and string in many contexts, so the standard can only be trusted to get them "right" in cases that have very few constraints (where it can do admirably). Implementors have been known to break the ABIs of std::vector and/or std::string between compiler/library versions, so they aren't appropriate types to use in interfaces.
- Concurrency: Leaving aside the C++ memory model, the library support should largely be avoided. For an example of why: std::condition_variable's constructor takes no arguments, so there is no way to pass attributes, while pthread_cond_init takes a pthread_condattr_t, which on Linux can specify a clock. On my implementation, if you want to wait for a second according to CLOCK_MONOTONIC/std::chrono::steady_clock, condition_variable::wait_until will convert your time to CLOCK_REALTIME/system_clock (by now()ing both clocks and adding the difference to the deadline you supplied) and sleep until the converted time. That's not what you asked for; it's totally wrong and possibly dangerous.
But... it gets a lot of stuff really wrong. Like why does outputsurface have an "fps" argument on the constructor? What on earth is io2d::refresh_style::as_fast_as_possible supposed to mean? That doesn't really map to anything sane. UIs certainly don't behave like that at all, they only redraw on-demand. Even for a game which does refresh "as fast as possible" it still does that with its own game loop which will involve more than drawing, and often pipelined anyway.
Why does rgba_color, even though it's 4 floats, only have a valid range from 0.0 to 1.0 with no mention of color space? It seems like it's supposed to be linear, but in that case it should be linear extended sRGB, which has a valid range of -0.5 to 7.5.
Or like you can query for display_dimensions, but which display?
Adding this to a standard would definitely be a huge mistake. It may make a useful beginners library for just quickly getting up to doodling on the screen, but that's sort of it. You'd never build an actual shipping product off of this proposal.
In fairness, piet is currently missing the fancy stuff too. But I'm hopeful we'll get there, in part because we can afford to break compatibility.
(Going deep into the weeds, presentation doesn't belong at the piet level, but, in my stack, at the druid-shell level. I'm working on this, using quantitative measurements of latency and power dissipation. But it's all evidence that this stuff is quite hard, and Titus is absolutely right that putting it in to a language standard is folly)
I haven't looked at the proposal myself, but regarding your complaint about rgba_color float values being on [0.0, 1.0] this is super standard in my experience. I would expect any well designed API to map [0.0, 1.0] to pixel values on [0, 255] unless otherwise clearly noted. Color spaces other than raw pixel values I would expect to be indicated by the name. That's consistent with at least OpenGL and Java's AWT, but I think also DirectX and most other APIs.
OpenGL actually works like I said - the float values are not limited to [0.0, 1.0]. There is no clamping in the GPU pipeline, and a transfer function can be applied to choose between linear & non-linear. For details see extensions like https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_... or https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_...
This all matters because there is no longer any platform on which sRGB can be safely assumed to be the overwhelming majority. Nearly all new flagship mobile devices are DISPLAY_P3. Nearly all TVs are already some form of HDR, and desktop monitors are rapidly following suit (Rec. 2020 & 10-bit, so [0, 255] doesn't even cover enough bits).
So anything graphics that doesn't already work with colorspaces in some way is basically abandoned or dying, and anything new that doesn't work with them is going to be DOA.
> Most APIs that do [0, 255] are assuming sRGB
I admit that I've almost exclusively used OpenGL so my impression is far from expert, but I don't think this is correct. My (admittedly limited) experience is that a linear color space is assumed unless otherwise specified. In particular, my understanding is that OpenGL works almost exclusively in a linear color space except for a few specific sRGB image formats and a few specific functions. For arbitrary non-linear color spaces I would generally expect to need to select a backing format of the necessary bit depth, output linear values, manually apply a conversion function, and then somehow indicate to the underlying hardware what color space to use when displaying my image data to the user.
> OpenGL actually works like I said - the float values are not limited to [0.0, 1.0].
Agreed, I never meant to imply otherwise. However, do note that when outputting values from a fragment shader their range (and type) has to match what the GPU is expecting. So in practice, unless you're using a floating point image format (such as GL_RGBA32F) your output range is going to be limited. Personally I almost always use unsigned normalized formats.
> This all matters because there is no longer any platform on which sRGB can be safely assumed to be the overwhelming majority.
I'd just like to point out that in principle, a hypothetical API could handle all programmer interactions in a single color space (eg linear) and then transparently convert as appropriate for the display device currently in use. Of course it goes without saying that all conversions must be handled correctly! Importantly, I'm not trying to claim that such a limited design would be a good one. Rather, my point is simply that conversions between color spaces are purely an implementation detail provided you have sufficient bit depth to work with.
Non-linear blending is very common. Do a CSS gradient, for example, and it'll happen in sRGB gamma space not linear space, giving you incorrect results.
Linear should be used more than it is, though, definitely.
> I'd just like to point out that in principle, a hypothetical API could handle all programmer interactions in a single color space (eg linear) and then transparently convert as appropriate for the display device currently in use.
Linear isn't a colorspace. Linear is about the gamma function, which is independent of the actual colorspace.
sRGB can be both linear & non-linear. In the absence of something specifying sRGB assumes a non-linear gamma function of 2.2, but it doesn't always. Linear sRGB is very much A Thing. If you're from the OpenGL world then you may be thinking of EGL_GL_COLORSPACE_LINEAR_KHR? If so, that's linear sRGB. If you wanted to output, say, Display_P3, then you need to use EGL_GL_COLORSPACE_DISPLAY_P3_LINEAR_EXT instead.
But you can very much do a single color space. That'd be the linear extended sRGB I originally talked about, which is float in the range of -.5 to 7.5. Also called scRGB. Microsoft makes use of this as of Windows Vista.
> If you're from the OpenGL world then you may be thinking of EGL_GL_COLORSPACE_LINEAR_KHR?
...I've just realized, are you talking about the EGL API (as opposed to OpenGL)? I don't actually use that - I nearly always restrict myself to an OpenGL or GLES core profile (no extensions) and use a support library (nearly always GLFW3) to handle all interfacing with the local system in a platform independent manner. It just keeps my code sane and manageable. :)
So for me, to output 8-bit non-linear sRGB colors from a fragment shader I would attach a GL_SRGB8 formatted image to my FBO. Much more typically though I would simply use GL_RGBA8 and output on the range [0, 1].
> Linear isn't a colorspace. Linear is about the gamma function, which is independent of the actual colorspace.
I was very confused by this statement and most of what followed it. After reading a bit, I think I was conflating color spaces and color models. If I understand correctly, all along I've been manipulating a linear RGB color model which was then mapped by my monitor on to (most likely) some approximation of the sRGB color space. So where I said linear earlier, I believe what I meant was any linear RGB color model (not space). Bonus points for using [0, 1] as the interval because it's easy to think about.
What it comes down to is that as a programmer, I just want to get my code working. Linear models on [0, 1] are easy to process and think about - 0 is off, 1 is on, and 0.5 is half way in between. If you want to add things together, you just add them together. Remapping from one range to another is trivial. It all just works. Sure it doesn't match up with human perception the way you might expect, but that's what the graphics API, drivers, a color calibrated display, and possibly some complicated external libraries are for, right? At least in theory.
OpenGL/GLES don't directly do anything with color spaces or gamut. It's part of the integration with the windowing system that does it. Which in the mobile usage is typically EGL, but desktop tends to do something else like WGL.
So sounds like you're just punting this decision over to GLFW3, and you're getting whatever behavior it felt like giving you. Which is probably sRGB or linear-sRGB.
> What it comes down to is that as a programmer, I just want to get my code working. Linear models on [0, 1] are easy to process and think about - 0 is off, 1 is on, and 0.5 is half way in between. If you want to add things together, you just add them together.
Easy to think about, but also wrong :)
If you want easy, you want linear extended sRGB, aka scRGB. This gives you [0, 1] in the colors you are typically familiar with. And it means when you display pure white, you're not shoving 1,000 nits into the face of a user with an HDR monitor. But it means your valid range becomes [-0.5, 7.5] instead.
> a color calibrated display
It doesn't matter how calibrated the display is if the source content isn't color aware. When you say glClearColor(1.0, 0.0, 0.0, 1.0), which color red is the display supposed to give you? It'd be broken if it just gave you the reddest-red it can display, because then your colors will never match when going between different gamut displays.
Anything that takes a color must also be given a colorspace or have a well-defined one. Otherwise nothing about color works. And if it's a well-defined single colorspace, that single colorspace needs to cover the entire visible spectrum (which extended sRGB does, but something like DCI-P3 doesn't), otherwise it'll just become obsolete when displays get wider color gamuts.
All the legacy APIs that don't do this just behind your back say "this came from sRGB colorspace" because that's what used to happen. But anything new shouldn't be doing that, because then it won't work with HDR, wide-gamut mobile displays, etc... By which I mean "can't display the full range of colors possible on the display"
> Easy to think about, but also wrong
Not at all! It's only wrong if there isn't a way to tell the API what color space the data is in, ie how the data is meant to be interpreted. You keep describing a data format that is color space aware, while I'm describing an API that is color space aware coupled with a data format that is generic.
What I'm arguing for here is a clear separation between image data and the color space used to interpret that data, such that algorithms don't have to be customized to fit a specific (likely platform dependent) color space. I think that data storage and manipulation should happen using a simple linear model such as [0, 1], with a separate mechanism for communicating to the API what color space the data occupies.
So yes, I do think that a hypothetical clearFrame(1.0, 0.0, 0.0, 1.0) function call should result in the reddest red possible - within the currently configured color space. Separately, it should be possible to do something like setColorSpace("AdobeRGB") and thereby change the meaning of (1.0, 0.0, 0.0, 1.0) to the API. Of course the graphics stack and underlying hardware then have to work together to actually display that data correctly. It could well be that the display doesn't support the particular color space that was specified and will need to convert appropriately, but the entire point here is that the algorithms written by the programmer don't have to be tailored to a specific color space.
As clearly illustrated by your HDR example, sane defaults are a necessity for any system. Given the history, it seems that a reasonable API ought to assume sRGB in lieu of an explicit selection, which it seems they already do for the most part. Thus in the example you provide, the color (1.0, 0.0, 0.0, 1.0) would result in a perfectly reasonable shade and intensity of red on any device.
Note that the entire problem of obsolete APIs you refer to is due entirely to making assumptions about which color space the caller is using. The approach I've described here completely avoids this - you can bolt on new color spaces later in a clean and fully backwards compatible manner. More than that, you can even convert existing APIs in a fully backwards compatible manner because you can reasonably assume that they were already using linear sRGB unless otherwise specified.
>> a color calibrated display
> It doesn't matter how calibrated the display is if the source content isn't color aware.
Well yes, naturally. I was only meaning to note the need for the entire stack to handle things properly, from driver through to display device. My line of reasoning was that if your API sends non-linear sRGB data to a display expecting, for example, linear Adobe RGB data, or if the display isn't color calibrated in the first place, or ..., then things obviously aren't going to work correctly. I never meant to imply that my color calibrated display could read my mind! You say that "Anything that takes a color must also be given a color space or have a well-defined one.", and I completely agree.
> OpenGL/GLES don't directly do anything with color spaces or gamut.
Actually OpenGL specifically supports the use of non-linear sRGB textures as a special case. Otherwise though your point is well taken, by default it indeed operates with colors that occupy a generic linear vector space.
And even if you are exclusive fullscreen there's things like adaptive vsync or even just choosing an alternate refresh rate.
requestAnimationFrame is simply a callback and entirely optional. That'd be backpressure control more than anything else, and you'd still have things like input intermixed with it.
But yeah, as much as I would love to have an out-of-the-box cross-platform (2D) graphics library, I worry that by standardizing it, it'll be susceptible to being left behind very soon. And that's not even mentioning how much is 'missing' from the current proposal - what about other color spaces? Image formats? I'd much rather have a graphics library in Boost that can evolve over time, and have people stop complaining about / shitting on Boost; and maybe a way to have Boost packages work with a package manager, so that you can pick and choose the components you need.
Seems odd that the standard library for C++ would assume a particular operating environment.
My point is that you can't just use "the standard library is already really big" as a justification for not adding new things which make working with the language much nicer.
Most of the time, std::string is fine, so I expect there to be APIs exchanging string data using std::string.
Rope/cord is a domain-specific data structure. If I was building software which was constantly modifying ranges of text, then rope might be a good choice.