What should go into the C++ standard library (2018) (abseil.io)
71 points by tim_sw 21 days ago | 88 comments



I've unfortunately been dragged into some of the conversations around the proposed 2D graphics library (despite trying to avoid them), and the proposal is awful. It's a mid-'90s vector graphics proposal that is inadequate for fast GPU-accelerated rendering. A modern library that aimed to be actually used in the industry would probably look more like WebRender (though WebRender's API is admittedly a bit of a mess, the principles are sound). There is no way that browsers or PDF readers or Illustrator or Toon Boom Harmony or whoever else does vector graphics in industry would ever use the C++ proposal.

When I brought this up, the proponents of the library dug in their heels and said that it's not intended for apps like Flash, PDF readers, and browsers, but rather intended for (a) teaching and (b) companies that have prohibitions on using third-party code. The teaching use case (a) is unconvincing, because while Logo was indeed a good teaching tool and it'd be great to resurrect it, there's no need to do so in the standard library. As far as I could tell, the real reason for the proponents of the spec was (b), and, needless to say, that's a horrible reason to put something in the standard library. It's effectively an attempt to route around some random dysfunctional company policy by saddling everyone with a burden for all time.


The 2D effort is a complete waste of time. If they plan to include font support it will be a total nightmare. Anyone who needs graphics rendering can choose, from a range of mature libraries, the one that best suits their needs. There is zero need for something like that in the standard lib.

They should learn from the example of Python where first class libraries are intentionally left out of the standard lib so that they don't fossilize.

That being said, I'd like to see libAgg get more support, as it's somewhat drifting in the wind and it has a nicely designed, composable rendering pipeline that aligns well with C++ principles.


If they want to steal an API, SDL exists and can be modernized.

Again, no need for this in standard library when there are reasonable libraries with good licenses around.

Agg is weak in comparison, though it has a few cute things going for it.


I am on the other side of the fence.

First, because I have fond memories of Borland's BGI, and there is hardly anything like it for C and C++ that works easily everywhere.

Secondly, because the attacks against the graphics proposal could equally be used against iostreams, filesystem, or networking.

Not every platform where C++ runs can support them, none of them fully supports modern OS features, and all of them have better third-party alternatives.

Yet for some magical reason it is OK for them to be part of the standard library, while graphics is not allowed in.


Plenty of industry applications that need to deal with files use iostream. The number of apps that would use a bad 2D graphics standard, however, is zero.


I pretty much doubt that; many apps would already be well served by a BGI-like library.

Plenty of application developers are more than happy with a Swing/GDI-like canvas.

Not everyone needs to be an OpenGL shader guru to draw a couple of lines.


>The number of apps that would use a bad 2D graphics standard, however, is zero.

You'd be surprised. The mere fact that it would be there, readily usable and portable, would get it adopted.


I strongly agree that networking should not be in there either.


It's already there in POSIX C. (Winsock is almost compatible.) A basic C++ version would actually be good to have, and potentially a basic threaded version too.

Unlike graphics, which is a moving target, even in 2D. Where do you stop? Bézier paths with subdivision, and full blended gradient support for those? Nonlinear projections? Fonts? Text composition?
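Even the first item on that list hides a rabbit hole. As a rough illustration (a sketch, not from any proposal), here is a single de Casteljau subdivision step for a cubic Bézier, before you even get to flattening tolerances, stroking, or gradients:

    struct point { double x, y; };

    static point lerp(point a, point b, double t) {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
    }

    // One de Casteljau step: splits a cubic Bezier (p[0]..p[3]) at parameter t
    // into two cubics that together trace exactly the same curve.
    void subdivide(const point p[4], double t, point left[4], point right[4]) {
        point p01  = lerp(p[0], p[1], t), p12 = lerp(p[1], p[2], t), p23 = lerp(p[2], p[3], t);
        point p012 = lerp(p01, p12, t),  p123 = lerp(p12, p23, t);
        point mid  = lerp(p012, p123, t);

        left[0]  = p[0]; left[1]  = p01;  left[2]  = p012; left[3]  = mid;
        right[0] = mid;  right[1] = p123; right[2] = p23;  right[3] = p[3];
    }

And that's before anyone has agreed on winding rules, stroking, or antialiasing.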

If you bite off too much, you end up with the OpenGL problem where nobody implemented everything.

Examples of pretty bad implementations of 2D would be a) Cairo and b) whatever Android and Chrome use for drawing.


POSIX C doesn't do HTTP/S or asynchronous networking, and isn't available on all platforms supported by C++ compilers.

Should it handle SSL, TLS, HTTP routing, HTML templates, ...?

Where do you stop? CSS, JS bundlers?


>Examples of pretty bad implementations of 2D would be a) Cairo and b) whatever Android and Chrome use for drawing.

And why, pray tell, are those "pretty bad"?


Cairo and Skia are immediate mode 2D graphics APIs on the JavaFX model. These APIs were designed when CPU rendering without SIMD was the norm and result in messy upload and state change problems when ported to GPUs. Google's heroic work on Ganesh with a relatively enormous team has shown that this API can be made decently fast on GPUs with a ton of work, but I don't see why we should repeat known mistakes for a brand new API that we expect to have many implementations of.


Has nothing been learned from the current package/module ecosystems that were built around modern programming languages? I mean, has boost taught nothing?


He referenced someone wanting to put a 2d graphics library into the standard library. That seems completely insane.

IMHO, something should be in the standard library if a large class of programs would want to use it, and if a "lowest common denominator" approach would be good enough for the majority of those programs.

Example: JSON parsing and generation should be IN. A tremendous number of programs want to produce and consume JSON, and the bulk of these are not performance critical and don't have too many edge cases. Just boring normal JSON.

Example: Sockets should be IN. As above, vast numbers of programs want to use sockets and a standard approach would suit almost all of them perfectly well.

Example: Graphics should be OUT. A lot of programs could potentially use a graphics library, but many of these would not be satisfied by a "standard" approach. Cross-platform support is the main problem: the library would have to interface with GDI, SDL, Quartz, and whatever else, and would end up doing a bad job of all of them.


Agreed. Even if 2D graphics somehow made sense for standardization, it's not the place to start. How about first standardizing a linear algebra library that defines 2D/n-D vectors and matrices? It's a prerequisite for an ergonomic 2D graphics library. And even for such a library, which arguably has far more utility being in a standard, you'd still be hard pressed to find an optimal design (just look at the design tradeoffs between glm, Eigen, etc.).
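As a rough illustration (purely hypothetical, not taken from any paper), even a minimal 2D vector vocabulary type forces the kind of design decisions that glm and Eigen answer differently:

    #include <cmath>

    // Hypothetical vocabulary type: a 2D vector with the handful of
    // operations a graphics or geometry API would build on.
    struct vec2 {
        float x = 0.0f, y = 0.0f;

        constexpr vec2 operator+(vec2 o) const { return {x + o.x, y + o.y}; }
        constexpr vec2 operator-(vec2 o) const { return {x - o.x, y - o.y}; }
        constexpr vec2 operator*(float s) const { return {x * s, y * s}; }

        constexpr float dot(vec2 o) const { return x * o.x + y * o.y; }
        float length() const { return std::sqrt(dot(*this)); }
    };

Even this tiny type raises questions (float vs. double, members vs. free functions, row vs. column conventions once matrices appear) that the existing libraries resolve in incompatible ways.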


Which is exactly what the authors are now pursuing.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p138...


I wouldn't want to see json parsing in the C++ standard library. Sockets yes but not json. That is another domain better addressed with an improved package / dependency management story IMO.


None of the things you mention really need to be in stdlib. There are plenty of awesome options, and the great thing is, they all have their own pros/cons and you can choose depending on your requirements.

The point of C++ is to avoid burdening you with costs you don't want to incur. Putting this stuff in the stdlib is exactly such a burden.

If you could focus on modules and package management that didn't suck, would you still argue that you need to have JSON and high level sockets in stdlib?


I don’t think C++’s “Pay for what you use” applies here. You don’t have to use something just because it’s in the standard library. The only practical difference for a user is a larger binary when statically linking. (The burden on compiler developers is a very real concern, however.)


Can you help me understand your concern? I'm not a C++ person. Is it that there's more stuff that needs to be packaged in even if it's not being used?


iirc the graphics proposal has gone through several iterations, including a std::web_view [0]. Yes, putting a web browser in the stdlib.

I think I saw passing reference in a recent trip report that for now they are only focusing on math primitives [1].

[0] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p110... See also https://www.reddit.com/r/cpp/comments/900dor/stdweb_view_pro...

[1] https://www.reddit.com/r/cpp/comments/au0c4x/201902_kona_iso...


> Example: Sockets should be IN. As above, vast numbers of programs want to use sockets and a standard approach would suit almost all of them perfectly well.

I think the BSD socket library is that standard approach...


It's been a few years, but the libraries needed differ between OSes. The source and includes needed on Linux and Windows differ. It also comes with some other baggage; for example, I don't recall whether Windows offers a Unix socket implementation.
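Roughly the kind of #ifdef boilerplate that difference forces on portable code today (a sketch with error handling omitted, not a complete wrapper):

    #ifdef _WIN32
      #include <winsock2.h>
      #include <ws2tcpip.h>
      using socket_t = SOCKET;
    #else
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <unistd.h>
      using socket_t = int;
    #endif

    socket_t open_tcp_socket() {
    #ifdef _WIN32
        WSADATA wsa;                        // Winsock needs explicit startup
        WSAStartup(MAKEWORD(2, 2), &wsa);
    #endif
        return socket(AF_INET, SOCK_STREAM, 0);
    }

A standard sockets library would mostly be about hiding exactly this kind of divergence.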


Coming from any language with a package manager to C or C++ is like a trip back to the dark ages. Sometimes homogeneity is much, much better than flexibility. I too prefer a slimmer std lib with a sane package manager over batteries included (and kitchen sink, and now-outdated gfx API). You might get shit like leftPad.js, but you'll also get a very, very vibrant ecosystem.


The problem is that the community has already diverged on how to organize sources, and each camp believes it has the best solution. Similarly, all C++ issues originate from the fragmentation of the community: some (maybe 2/3) prefer to include a networking lib because it facilitates their daily development, while others (maybe 1/3) may have different ideas because they don't use it / they need performance over usability / they want to use this on embedded systems / their own stdlib fork doesn't want to support it. Meanwhile the committee cannot force everyone in the community to do something, because it has no real-world control over either the compiler developers or the user community.


The concern over fragmentation is why I believe Conan's approach is the best: decouple dependency management from the build rather than forcing everyone to align on a single build system.

It still has its hurdles, but it is a step in the right direction for trying to wrangle the wild west of C++.


You also inherit an entire chain of trust over code you didn't write yourself and that nobody actually validated. The issue with leftpad.js wasn't that it was stupid; it was that it was dangerous.


That concern is somewhat orthogonal to the utility of a package manager itself. If you are using OSS in any way you need to pick and choose what you take on as a dependency. The package manager solves problems like distribution, dependency resolution, and discovery. The ease of use may contribute to poor decision making, which should not be wholly discounted.


To piggyback on that, this also goes down the dependency chain. Leftpad wasn't bad because it was being used directly. Projects imported other libraries that either pulled in leftpad directly or, more likely, pulled in yet another library that did.


You get just as much trust with officially maintained, but non-std libraries as you do from std...


I disagree. Especially if those non-std libraries are built on other non-std libraries and so on. Trusting a single organization is much easier than trusting a chain of organizations.


If they are officially maintained, then by definition they are written and maintained by the same organisation as the std library.

The difference is, having an external library allows it to be versioned with breaking changes if necessary, without breaking consumers.


There are plenty of ways to organize C++ code, including several package managers. Why should a language (a specification for a grammar and an abstract machine to run programs written in it) specify something as concrete as the way you download code? It might as well dictate what editor you use.

We need less, not more, conflation between languages, runtime environments, build systems, and package repositories. It's senseless to couple these concepts.


And yet, having coded in Rust, I can say Cargo solves all this bikeshedding. You can of course use your own build system, but Cargo is pretty consistently used across the board, which finally makes developing in a systems language as easy as Python (like virtualenv & pip, but better).


Not really; Cargo still doesn't provide an answer for binary libraries, and depending on how the workspace is configured and which projects one is compiling, it may end up compiling the same crates multiple times.


I feel OK about making binary libraries difficult. I want open source to be the easy path. I think the idea of ABI compatibility forces a lot of unnecessary overhead on designs. Consider the radical design of TempleOS, where everything is C source, JIT-compiled when run.


Which is OK, assuming the Rust community doesn't want to be represented in such markets and doesn't mind waiting for Cargo to always build from source.

Just don't complain about adoption then.


I think you will be happy sooner rather than later, but I heard about it second hand, so I don’t want to say too much.


Thanks, looking forward to it. :)


Couldn't agree more about package managers.

I just went back to C and have spent too many hours trying to get small libraries to build and link properly. I'm pulling my hair out.


I'm guessing you're on an OS without a package manager, or not making use of it?

Having the OS handle it is a much better solution than having every single language come up with its own solution.


>I'm guessing you're on an OS without a package manager, or not making use of it?

That's really irrelevant, and not the proper way to install programming dependencies for developers. It's for installing system dependencies (including libs) and to build software for the system (e.g. as a system admin/user you want to build X and it needs the -devel package installed), not for developers on their projects.

Nobody (or very few) in Rust, Python, Ruby, Go, Java etc would ever use the "system package manager" for installing their language's third party libs.

For one, you don't want to pollute the system with your program's deps.

Second, you want to have isolated, different versions, of various dependencies, only visible to this or that project you work on, not to the whole system.


> That's really irrelevant, and not the proper way to install programming dependencies for developers. It's for installing system dependencies (including libs) and to build software for the system (e.g. as a system admin/user you want to build X and it needs the -devel package installed), not for developers on their projects.

Completely disagree, the system dependencies are my dependencies and whenever possible I want to be able to "make install" and have my dependencies update with the rest of my system.

> Nobody (or very few) in Rust, Python, Ruby, Go, Java etc would ever use the "system package manager" for installing their language's third party libs.

Java has always been a "fat VM"; it's a platform and not part of the system, so it's not surprising that it handles its own. For python/perl/ruby, it used to be quite common to include packages from the OS repository, until they reinvented the wheel. For Rust, that's why it's a joke as a "systems language".

> For one, you don't want to pollute the system with your program's deps. Second, you want to have isolated, different versions, of various dependencies, only visible to this or that project you work on, not to the whole system.

Entirely doable with existing package managers: just use the --root switch with dpkg or the --prefix switch with rpm (or with configure if you build from source). Typically I only want a small subset of version-specific packages.

So why would I want yet another package manager when my existing one is sufficient?


>Completely disagree, the system dependencies are my dependencies and whenever possible I want to be able to "make install" and have my dependencies update with the rest of my system.

You can disagree, but you'd be wrong in anything but a one-man-shop setting working on a single project and willing to depend on the distro's (or third-party repos') versions of language libraries and frameworks.

>Entirely doable with existing package managers: just use the --root switch with dpkg or the --prefix switch with rpm (or with configure if you build from source). Typically I only want a small subset of version-specific packages.

Typically this is a luxury that software shops with different projects (including older, already installed), different customers, etc., can't have.

>For python/perl/ruby, it used to be quite common to include packages from the OS repository, until they reinvented the wheel

Or until those languages were no longer used merely by sysadmins for one-off scripts but for large project development, and "include packages from the OS repository" wouldn't cut it anymore.

>For Rust, that's why it's a joke as a "systems language".

Total non sequitur. In fact, one of the main goals of C++ (as per the posted roadmap) is to add a package manager.

But the total non sequitur that "Rust is a joke as a systems language" because it has Cargo disqualifies, I think, anything else that can be said here.


Only when writing portable code isn't a concern.

No one has time to publish their library to the myriad of OS-specific package managers that are out there.


Please, no.

We don’t need yet another package manager. Every language seems to have to have its own package manager, separate from my operating system’s package manager, that does exactly the same thing but in an incompatible way.

Now, if I want to know what software I have on my system, I need to use five different list commands instead of just one. If C++ had its own package manager, that would mean six. Please help to stop this madness, not perpetuate it.


It's not madness; each approach has problems.

In particular, distros generally work best when one version is enough, or maybe a few versions. Anything else leads to dependency hell.


Dependency hell comes from binaries that insist on not using the shared library provided by the distro through the package manager. Windows suffered from not having a package manager in the dark ages, leaving installers to silently clobber each others' changes.


I like the absence of a package manager at the language level and the embracing of shared objects and stable ABIs by using the OS package manager. Missing a library? Just install it, and list it as a dependency in the .spec-file, or whatever packaging system you use.


There’s conan-center, but it’s still rather sparse.


This makes interesting reading, as I've been following C++ graphics proposals for a while, and they are one of the inspirations behind piet. I hope for piet to have a role in the Rust ecosystem similar to what Titus envisions - it's certainly not a standard, but hopefully will be the tool people reach for, much like serde is for serialization.

Of course, if I fail, the negative consequences are minor, people just won't use it. By contrast, having bad stuff in a standard library lasts almost forever.


I completely agree with this article, it is very well written and makes total sense. Managing dependencies is SO much more important and allows code to move quickly while existing legacy code sticks with what worked at the time it was written. There are tons of solutions out there for dependency management for C++ and a graphics library would be much better as a separate package that can be versioned, improved, deprecated and ignored as required.


What should go into the C++ standard library? A liberally licensed reference implementation, where possible. A testsuite, where possible.

("Where possible" would certainly cover all the containers, all the algorithms, etc; it's less clear how you'd provide a reference implementation of something that needs to use low level OS functionality for its implementation, such as 'operator new' or 'ofstream')

Of course, it would also be nice if timely bugfixes went into the C++ standard library. For instance, std::generate_canonical was standardized in C++11, had an "issue" lodged in 2015, and the "new specification for std::generate_canonical" is still awaiting proposed wording (last updated a year ago, cough, so ... maybe in C++23?)
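For reference, the call in question looks like this; as I understand it, the lodged issue is about rounding that can push the result onto the excluded upper bound of the promised [0, 1) range on some implementations:

    #include <random>
    #include <iostream>

    int main() {
        std::mt19937 gen{42};

        // Specified to produce a value in [0, 1) using 10 bits of entropy.
        // The long-standing complaint is that rounding can let the result
        // land exactly on 1.0 on some implementations.
        double d = std::generate_canonical<double, 10>(gen);
        std::cout << d << '\n';
    }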


I mostly agree with this article and have been of the opinion for a while that graphics has no place in the standard library and that what is really needed is a better cross platform build, packaging and dependency management story.

I would quibble with the hash table example though. I think there is a case to be made not for changing unordered_map but for adding a new standard hash table with somewhat different guarantees that make it suitable for higher performance implementations. Hash tables are so widely useful that a better standard option without breaking backwards compatibility seems worthwhile. Other languages have standard libraries that evolve in this way and while it can get out of hand (looking at you C#) it's a reasonable solution in moderation.


I’ve had many discussions with my coworkers about this. The standard library is scary to me, completely over engineered. We should be dumping shit from the standard library that makes no sense (ostream anyone?). Don’t get me started on iterating over containers. C++20 is shaping up to be the way to hell paved with good intentions.


Can you elaborate on iterating over containers? What don't you like?


Pretty much everything about it is unwieldy. We had std::begin and std::end, now we have legitimate ranges. It is still difficult to iterate over customized STL containers, or even generic ones you specify. I’m looking for something like takeWhile.
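For what it's worth, C++20 ranges do finally provide a takeWhile equivalent, at least for the straightforward cases:

    #include <ranges>
    #include <vector>
    #include <iostream>

    int main() {
        std::vector<int> v{1, 2, 3, 7, 4, 5};

        // takeWhile: stops at the first element that fails the predicate.
        for (int x : v | std::views::take_while([](int n) { return n < 5; }))
            std::cout << x << ' ';   // prints: 1 2 3
    }

Whether that counts as ergonomic is another question.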


As the title says, put Go in there ;)

Seriously, Go's standard library is the best I've ever used, and would make a great case study for the "batteries included" case.

But it doesn't have a good story at all around graphics, 2D, 3D or UI. That doesn't bother gophers much, because the language was (and is) intended for building servers. But for C++, it's a different story.


Just picking a couple quotes from the article:

> Especially when you compare C++ to other languages, there’s a pretty strong argument to be made for a more inclusive and even all-encompassing standard library - look at the richness of the standard libraries for Java or Python.

What is that argument? And even if you accept that argument, can we execute? That is, will the result be at least of a similar quality, or will it be riddled with subtle gotchas and sharp edges like the existing C++ standard library?

> So, what should go in the standard library? Fundamentals. Things that can only be done with compiler support, or compile-time things that are so subtle that only the standard should be trusted to get it right. Vocabulary. Time, vector, string. Concurrency. Mutex, future, executors.

Can the standard be trusted to get these things right?

- Vocabulary: There's a fair amount of depth to what "right" means for vector and string in many contexts, so the standard can only be trusted to get them "right" in cases that have very few constraints (where it can do admirably). Implementors have been known to break the ABIs of std::vector and/or std::string between compiler/library versions, so they aren't appropriate types to use in interfaces.

- Concurrency: Leaving aside the C++ memory model, the library support should largely be avoided. For an example of why: std::condition_variable's ctor takes no arguments, while pthread_cond_init takes a pthread_condattr_t. On Linux, this can specify a clock. If you want to wait for a second according to CLOCK_MONOTONIC/std::chrono::steady_clock, condition_variable::wait_until on my implementation will convert your time to CLOCK_REALTIME/system_clock (by now()ing both clocks and adding the difference to the deadline you supplied) and sleep until the converted time. That's not what you asked for; it's totally wrong and possibly dangerous.
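To make that concrete, here is roughly what the pthread API lets you express that std::condition_variable cannot (Linux/POSIX, error handling omitted):

    #include <pthread.h>
    #include <time.h>

    pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond;

    void wait_one_second_monotonic() {
        // Tell the condvar to measure timeouts against CLOCK_MONOTONIC,
        // something std::condition_variable's constructor cannot express.
        pthread_condattr_t attr;
        pthread_condattr_init(&attr);
        pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
        pthread_cond_init(&cond, &attr);

        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        deadline.tv_sec += 1;            // absolute deadline, one second from now

        pthread_mutex_lock(&mtx);
        pthread_cond_timedwait(&cond, &mtx, &deadline);  // immune to wall-clock jumps
        pthread_mutex_unlock(&mtx);
    }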


I am a little sceptical and also curious about the graphics library. Is this meant as a building block for more advanced graphics? Or do you still need to use things like DirectX if you need speed and features?


In terms of the proposal itself it appears to be like an SDL type of thing. It has a basic way to create an output surface and then draw on it. Implementations on Windows may opt to do that with Direct2D.

But... it gets a lot of stuff really wrong. Like why does outputsurface have an "fps" argument on the constructor? What on earth is io2d::refresh_style::as_fast_as_possible supposed to mean? That doesn't really map to anything sane. UIs certainly don't behave like that at all; they only redraw on demand. Even a game which does refresh "as fast as possible" still does that with its own game loop, which involves more than drawing and is often pipelined anyway.

Why does rgba_color, even though it's 4 floats, only have a valid range from 0.0 to 1.0 with no mention of color space? It seems like it's supposed to be linear, but in that case it should be linear extended sRGB, which has a valid range of -0.5 to 7.5.

Or like you can query for display_dimensions, but which display?

Adding this to a standard would definitely be a huge mistake. It may make a useful beginners library for just quickly getting up to doodling on the screen, but that's sort of it. You'd never build an actual shipping product off of this proposal.


The dirty secret is that it's basically Cairo, a 1990's graphics API, with modern C++ syntax. So all of the work that's happened since then, including HDR color spaces, precise control of swapchain present, dynamic multimonitor support, doesn't exist. I also got a chuckle out of chapter 10, text (basically TBD, like that's not one of the biggest of all possible cans of worms).

In fairness, piet is currently missing the fancy stuff too. But I'm hopeful we'll get there, in part because we can afford to break compatibility.

(Going deep into the weeds, presentation doesn't belong at the piet level, but, in my stack, at the druid-shell level. I'm working on this, using quantitative measurements of latency and power dissipation. But it's all evidence that this stuff is quite hard, and Titus is absolutely right that putting it in to a language standard is folly)


I do agree that this sort of thing doesn't belong in a language standard.

I haven't looked at the proposal myself, but regarding your complaint about rgba_color float values being on [0.0, 1.0] this is super standard in my experience. I would expect any well designed API to map [0.0, 1.0] to pixel values on [0, 255] unless otherwise clearly noted. Color spaces other than raw pixel values I would expect to be indicated by the name. That's consistent with at least OpenGL and Java's AWT, but I think also DirectX and most other APIs.


Common, yes, but wrong in ways that increasingly matter. Most APIs that do [0, 255] are assuming sRGB or they take a color space but can only really handle slightly-wider colorspaces like DISPLAY_P3.

OpenGL actually works like I said - the float values are not limited to [0.0, 1.0]. There is no clamping in the GPU pipeline, and a transfer function can be applied to choose between linear & non-linear. For details see extensions like https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_... or https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_...

This all matters because there is no longer any platform on which sRGB can be safely assumed to be the overwhelming majority. Nearly all new flagship mobile devices are DISPLAY_P3. Nearly all TVs are already some form of HDR, and desktop monitors are rapidly following suit (Rec. 2020 & 10-bit, so [0, 255] doesn't even cover enough bits).

So anything graphics that doesn't already work with colorspaces in some way is basically abandoned or dying, and anything new that doesn't work with them is going to be DOA.


Apologies, what I wrote wasn't very clear. I didn't mean to imply that an API should only be able to handle such values, just that a linear color space is the only sane default in my opinion. And I most certainly didn't mean to imply that color channels should be limited to only 8 bits, just that it's a very common default (and often sufficient for simple tasks). As you very rightly point out, any even semi-modern API absolutely must be able to support the use of other color spaces when requested.

> Most APIs that do [0, 255] are assuming sRGB

I admit that I've almost exclusively used OpenGL so my impression is far from expert, but I don't think this is correct. My (admittedly limited) experience is that a linear color space is assumed unless otherwise specified. In particular, my understanding is that OpenGL works almost exclusively in a linear color space except for a few specific sRGB image formats and a few specific functions. For arbitrary non-linear color spaces I would generally expect to need to select a backing format of the necessary bit depth, output linear values, manually apply a conversion function, and then somehow indicate to the underlying hardware what color space to use when displaying my image data to the user.

> OpenGL actually works like I said - the float values are not limited to [0.0, 1.0].

Agreed, I never meant to imply otherwise. However, do note that when outputting values from a fragment shader their range (and type) has to match what the GPU is expecting. So in practice, unless you're using a floating point image format (such as GL_RGBA32F) your output range is going to be limited. Personally I almost always use unsigned normalized formats.

> This all matters because there is no longer any platform on which sRGB can be safely assumed to be the overwhelming majority.

I'd just like to point out that in principle, a hypothetical API could handle all programmer interactions in a single color space (eg linear) and then transparently convert as appropriate for the display device currently in use. Of course it goes without saying that all conversions must be handled correctly! Importantly, I'm not trying to claim that such a limited design would be a good one. Rather, my point is simply that conversions between color spaces are purely an implementation detail provided you have sufficient bit depth to work with.


> My (admittedly limited) experience is that a linear color space is assumed unless otherwise specified.

Non-linear blending is very common. Do a CSS gradient, for example, and it'll happen in sRGB gamma space not linear space, giving you incorrect results.

Linear should be used more than it is, though, definitely.
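A minimal sketch of why that matters, using the standard sRGB transfer function: the halfway point between black and white computed in gamma-encoded space is noticeably different from the one computed in linear space.

    #include <cmath>

    // Standard sRGB transfer functions (per channel, values in [0, 1]).
    float srgb_to_linear(float s) {
        return s <= 0.04045f ? s / 12.92f
                             : std::pow((s + 0.055f) / 1.055f, 2.4f);
    }
    float linear_to_srgb(float l) {
        return l <= 0.0031308f ? l * 12.92f
                               : 1.055f * std::pow(l, 1.0f / 2.4f) - 0.055f;
    }

    // Midpoint between black (0.0) and white (1.0):
    float naive_mid   = 0.5f;                                    // blended in gamma space
    float correct_mid = linear_to_srgb(
        (srgb_to_linear(0.0f) + srgb_to_linear(1.0f)) / 2.0f);   // ~0.735 in gamma space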

> I'd just like to point out that in principle, a hypothetical API could handle all programmer interactions in a single color space (eg linear) and then transparently convert as appropriate for the display device currently in use.

Linear isn't a colorspace. Linear is about the gamma function, which is independent of the actual colorspace.

sRGB can be both linear & non-linear. In the absence of something specifying otherwise, sRGB assumes a non-linear gamma function of 2.2, but it doesn't always. Linear sRGB is very much A Thing. If you're from the OpenGL world then you may be thinking of EGL_GL_COLORSPACE_LINEAR_KHR? If so, that's linear sRGB. If you wanted to output, say, Display_P3, then you need to use EGL_GL_COLORSPACE_DISPLAY_P3_LINEAR_EXT instead.

But you can very much do a single color space. That'd be the linear extended sRGB I originally talked about, which is float in the range of -.5 to 7.5. Also called scRGB. Microsoft makes use of this as of Windows Vista.


That CSS gradient example seems like a proper foot gun to me - I don't use CSS much, and I would never have expected that.

> If you're from the OpenGL world then you may be thinking of EGL_GL_COLORSPACE_LINEAR_KHR?

...I've just realized, are you talking about the EGL API (as opposed to OpenGL)? I don't actually use that - I nearly always restrict myself to an OpenGL or GLES core profile (no extensions) and use a support library (nearly always GLFW3) to handle all interfacing with the local system in a platform independent manner. It just keeps my code sane and manageable. :)

So for me, to output 8-bit non-linear sRGB colors from a fragment shader I would attach a GL_SRGB8 formatted image to my FBO. Much more typically though I would simply use GL_RGBA8 and output on the range [0, 1].

> Linear isn't a colorspace. Linear is about the gamma function, which is independent of the actual colorspace.

I was very confused by this statement and most of what followed it. After reading a bit, I think I was conflating color spaces and color models. If I understand correctly, all along I've been manipulating a linear RGB color model which was then mapped by my monitor on to (most likely) some approximation of the sRGB color space. So where I said linear earlier, I believe what I meant was any linear RGB color model (not space). Bonus points for using [0, 1] as the interval because it's easy to think about.

What it comes down to is that as a programmer, I just want to get my code working. Linear models on [0, 1] are easy to process and think about - 0 is off, 1 is on, and 0.5 is half way in between. If you want to add things together, you just add them together. Remapping from one range to another is trivial. It all just works. Sure it doesn't match up with human perception the way you might expect, but that's what the graphics API, drivers, a color calibrated display, and possibly some complicated external libraries are for, right? At least in theory.


> ...I've just realized, are you talking about the EGL API (as opposed to OpenGL)? I don't actually use that - I nearly always restrict myself to an OpenGL or GLES core profile (no extensions) and use a support library (nearly always GLFW3) to handle all interfacing with the local system in a platform independent manner. It just keeps my code sane and manageable. :)

OpenGL/GLES don't directly do anything with color spaces or gamut. That's handled by the integration with the windowing system, which in mobile usage is typically EGL, while desktop tends to do something else, like WGL.

So sounds like you're just punting this decision over to GLFW3, and you're getting whatever behavior it felt like giving you. Which is probably sRGB or linear-sRGB.

> What it comes down to is that as a programmer, I just want to get my code working. Linear models on [0, 1] are easy to process and think about - 0 is off, 1 is on, and 0.5 is half way in between. If you want to add things together, you just add them together.

Easy to think about, but also wrong :)

If you want easy, you want linear extended SRGB, aka scRGB. This gives you [0, 1] in the colors you are typically familiar with. And it means when you display pure white, you're not shoving 1,000 nits into the face of a user with an HDR monitor. But it means your valid range becomes [-0.5, 7.5] instead.

> a color calibrated display

It doesn't matter how calibrated the display is if the source content isn't color aware. When you say glClearColor(1.0, 0.0, 0.0, 1.0), which color red is the display supposed to give you? It'd be broken if it just gave you the reddest-red it can display, because then your colors will never match when going between different gamut displays.

Anything that takes a color must also be given a colorspace or have a well-defined one. Otherwise nothing about color works. And if it's a well-defined single colorspace, that single colorspace needs to cover the entire visible spectrum (which extended sRGB does, but something like DCI-P3 doesn't), otherwise it'll just become obsolete when displays get wider color gamuts.

All the legacy APIs that don't do this just behind your back say "this came from sRGB colorspace" because that's what used to happen. But anything new shouldn't be doing that, because then it won't work with HDR, wide-gamut mobile displays, etc... By which I mean "can't display the full range of colors possible on the display"


I will acknowledge that if we're forced to select only a single color space to work with, scRGB is an elegant drop in solution due to the value overlap. However as a developer, I really don't want to be forced to work with 16-bit color depth in all cases and I'd much rather write algorithms to process data on the interval [0, 1] due to the reduction in complexity of both thought and code.

> Easy to think about, but also wrong

Not at all! It's only wrong if there isn't a way to tell the API what color space the data is in, ie how the data is meant to be interpreted. You keep describing a data format that is color space aware, while I'm describing an API that is color space aware coupled with a data format that is generic.

What I'm arguing for here is a clear separation between image data and the color space used to interpret that data, such that algorithms don't have to be customized to fit a specific (likely platform dependent) color space. I think that data storage and manipulation should happen using a simple linear model such as [0, 1], with a separate mechanism for communicating to the API what color space the data occupies.

So yes, I do think that a hypothetical clearFrame(1.0, 0.0, 0.0, 1.0) function call should result in the reddest red possible - within the currently configured color space. Separately, it should be possible to do something like setColorSpace("AdobeRGB") and thereby change the meaning of (1.0, 0.0, 0.0, 1.0) to the API. Of course the graphics stack and underlying hardware then have to work together to actually display that data correctly. It could well be that the display doesn't support the particular color space that was specified and will need to convert appropriately, but the entire point here is that the algorithms written by the programmer don't have to be tailored to a specific color space.

As clearly illustrated by your HDR example, sane defaults are a necessity for any system. Given the history, it seems that a reasonable API ought to assume sRGB in lieu of an explicit selection, which it seems they already do for the most part. Thus in the example you provide, the color (1.0, 0.0, 0.0, 1.0) would result in a perfectly reasonable shade and intensity of red on any device.

Note that the entire problem of obsolete APIs you refer to is due entirely to making assumptions about which color space the caller is using. The approach I've described here completely avoids this - you can bolt on new color spaces later in a clean and fully backwards compatible manner. More than that, you can even convert existing APIs in a fully backwards compatible manner because you can reasonably assume that they were already using linear sRGB unless otherwise specified.

>> a color calibrated display

> It doesn't matter how calibrated the display is if the source content isn't color aware.

Well yes, naturally. I was only meaning to note the need for the entire stack to handle things properly, from driver through to display device. My line of reasoning was that if your API sends non-linear sRGB data to a display expecting, for example, linear Adobe RGB data, or if the display isn't color calibrated in the first place, or ..., then things obviously aren't going to work correctly. I never meant to imply that my color calibrated display could read my mind! You say that "Anything that takes a color must also be given a color space or have a well-defined one.", and I completely agree.

> OpenGL/GLES don't directly do anything with color spaces or gamut.

Actually OpenGL specifically supports the use of non-linear sRGB textures as a special case. Otherwise though your point is well taken, by default it indeed operates with colors that occupy a generic linear vector space.


I would guess that io2d::refresh_style::as_fast_as_possible means as opposed to Vsync. Vsync times drawing or at least buffer swapping so that you don't get video artifacts due to the monitor reading say half of one frame and half of another. Vsync is not as fast as possible assuming you can render faster than 60 fps on a typical monitor.

Also, who says the game will control the loop? In Javascript (requestAnimationFrame) and many other frameworks, that's not true.


> I would guess that io2d::refresh_style::as_fast_as_possible means as opposed to Vsync. Vsync times drawing or at least buffer swapping so that you don't get video artifacts due to the monitor reading say half of one frame and half of another. Vsync is not as fast as possible assuming you can render faster than 60 fps on a typical monitor.

That'd be a reasonable theory except it's not how any of this works anymore. Unless you are exclusive fullscreen it's not possible to get tearing on any modern compositor. Windows' DWM, Android's SurfaceFlinger, Linux's Wayland, etc... on all of those you're drawing to an offscreen surface, and when you flip, it doesn't go directly to the display; it goes to the compositor. You're basically always vsync'd by the compositor. So for a windowed application what you really want is more like Android's Choreographer, or JavaScript's requestAnimationFrame - a callback that just says "hey if you want you could redraw now" for rate limiting purposes more than anything else.

And even if you are exclusive fullscreen there's things like adaptive vsync or even just choosing an alternate refresh rate.

> Also, who says the game will control the loop? In Javascript (requestAnimationFrame) and many other frameworks, that's not true.

requestAnimationFrame is simply a callback and entirely optional. That'd be backpressure control more than anything else, and you'd still have things like input intermixed with it.

Also C++ has threads, so discussing games or loops in JavaScript vs. C++ doesn't really apply in the same ways.


Someone motivated could build an implementation that uses Direct2D underneath. I think MS has one already. There are provisions in the proposal to expose handles to the underlying graphics systems, for example if you want to render text with DirectWrite (or Pango/Harfbuzz if you're using a Cairo-based implementation). Of course without the appropriate ifdefs this would make your code non-portable at that point.

But yeah, as much as I would love to have an out-of-the-box cross-platform (2D) graphics library, I worry that by standardizing it, it'll be susceptible to being left behind very soon. Not even mentioning that there is so much stuff 'missing' in the current proposal - like what other color spaces? Image formats? I'd much rather have a graphics library in Boost that can evolve over time, and for people to stop complaining about / shitting on Boost; and maybe a way to have Boost packages work with a package manager, so that you can pick and choose the components you need.


After reading the other responses, I agree that the graphics library should be done by somebody like Boost. It makes no sense to have it in the standard.


Making the standard library ever bigger is not a good solution. Self-standardizing with independent packages is a much better idea. Something like a graphics library should NEVER go in a standard library.


I always dreamed of standardized, cross-platform, cross-arch, (sort of) automatic, non-blocking, DMA-based memory copying functions (like a special memcpy) for duplicating or passing big chunks of memory (for example, H.264 frames or network packets) from a driver to user space.


There's enough stuff in it already.


There are also some glaring holes. It would be nice to eventually have idiomatic (and cross-platform) support for networking, filesystem operations, maybe more modern format strings, etc.


A graphics API is not one of those holes though. Filesystem library already exists. https://en.cppreference.com/w/cpp/filesystem
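For example (C++17):

    #include <filesystem>
    #include <iostream>

    namespace fs = std::filesystem;

    int main() {
        // Portable directory listing, no POSIX or Win32 calls needed.
        for (const auto& entry : fs::directory_iterator(fs::current_path()))
            std::cout << entry.path().filename().string()
                      << (entry.is_directory() ? "/" : "") << '\n';
    }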


The Filesystem interface seems to assume the limitations of a Unix-like environment. No mention of extended properties or complex access control beyond what you'd see on a POSIX system.

Seems odd that the standard library for C++ would assume a particular operating environment.


Yes, but it is new in C++17 (which a lot of people can't use yet), and I'm not advocating the graphics proposal specifically.

My point is that you can't just use "the standard library is already really big" as a justification for not adding new things which make working with the language much nicer.


I agree for the most part, but we should only really be adding standard libraries for things that people normally have to fall back to the C standard library or POSIX for when writing C++ code. Filesystem operations were one of those. Threading was another. Networking is probably another. Format strings I would not include in that, though updating the existing string formatting libraries for the newer C++11 concepts would be a good idea.


Threading was a structurally different change from fs operations and networking. You need the compiler to cooperate when you're writing to a location in one thread and reading it from another. You can either get there using hacks built on escape hatches like 'asm volatile' or by specifying some semantics for the operation. Specifying a memory model, however imperfect, means we can move away from the implementation-dependent hacks.
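A minimal illustration of what the memory model buys you: with a plain bool instead of std::atomic<bool> below, the program has a data race and the compiler may hoist the read out of the loop, so the loop need never terminate.

    #include <atomic>
    #include <thread>
    #include <iostream>

    std::atomic<bool> done{false};   // with a plain bool this would be a data race

    void worker() {
        // ... do some work ...
        done.store(true, std::memory_order_release);
    }

    int main() {
        std::thread t(worker);
        while (!done.load(std::memory_order_acquire)) {
            // Defined behaviour: the standard guarantees this load eventually
            // observes the store above.
        }
        t.join();
        std::cout << "worker finished\n";
    }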


Sounds like he’s recommending the Rust approach. Seems reasonable.


Why does string need/deserve to be in the standard? Doesn't the existence of cord cast doubt on that?


Back in the dark ages of 1990s C++, it seemed like every single project had its own custom string class. Things coalescing around std::string, as imperfect as it is, is a huge relief.


string is a common-currency type. I don't want to be converting between char* and vector<char> and custom::unicode_string, etc. because each API or library has their own slightly specialized version. (anymore than we already do, anyway)

Most of the time, std::string is fine, so I expect there to be APIs exchanging string data using std::string.

Rope/cord is a domain-specific data structure. If I was building software which was constantly modifying ranges of text, then rope might be a good choice.


Go in the C++ STL! Now that would be a headline!


An unfortunate capitalisation for sure. If "into" and "the" were titlecased, or "Should" and "Go" lowercased, there would be far less ambiguity.



