> But some tests, I have no idea what changed that made them break.
Michael Feathers in "Working Effectively with Legacy Code" defines a unit test as a test with two qualities:
1. It runs fast (1ms is slow for a unit test)
2. It has excellent locality, so when a test fails it's obvious what code broke it
From my experience fixing sparc64 package builds for Debian the second quality is hard to get right. Many times I would encounter a test failure that was the result of a complicated chain of function calls and it wasn't obvious what was broken.
When I was fixing the gtk build I discovered their codebase is sprinkled with ASSERTs which turned out to be SUPER useful, particularly because they communicated exactly where a precondition wasn't holding up. Previously I was never that interested in asserts but that experience makes me want to look into them again.
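For illustration, here's a minimal sketch of that style using GLib's precondition checks (the helper function and its names are mine, not actual gtk code):

```c
#include <glib.h>

/* Hypothetical helper, purely illustrative of the gtk style. */
static void
buffer_insert (GString *buf, gsize pos, const char *text)
{
  /* If any of these preconditions fails, GLib logs the exact failing
     expression and its source location and returns early, so the report
     points at the broken call site rather than at some unrelated
     failure further down the call chain. */
  g_return_if_fail (buf != NULL);
  g_return_if_fail (text != NULL);
  g_return_if_fail (pos <= buf->len);

  g_string_insert (buf, pos, text);
}
```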
On the flip side, the reason excellent locality usually doesn't happen is that the narrower the scope of your tests (think of the spectrum from integration tests to unit tests), the more likely they are to be tightly coupled to implementation details that might change, making tests brittle and refactoring harder. Or you go the full Java-style strategy-pattern-dependency-injection-AbstractSingletonProxyFactoryBean route to get both locality and ease of changing behavior, but then you're building complex class hierarchies and doing anything takes a lot more effort and boilerplate. Now changing behavior is easy but changing architecture is hard. It's a tough tradeoff, really. I'm not saying we don't need one or the other, but it's more of a case-by-case judgement call than folks would like to admit, IMHO.
Assuming the tests ever passed, git-bisect should make it possible to find the commits that broke them. This is tedious and time-consuming (particularly if the build and test cycle is slow) but it can be automated.
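For example, assuming a run-tests.sh script that exits 0 on pass and non-zero on failure (the script name and the tag are illustrative):

```sh
git bisect start
git bisect bad HEAD            # the current revision fails
git bisect good v1.0           # an older revision known to pass
git bisect run ./run-tests.sh  # let git drive the binary search unattended
git bisect reset               # return to where you started
```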
With continuous integration (CI) one gets excellent locality in time/changesets, so I think the locality argument might not be as important today.
Though I am biased, since I generally prefer to have tests that exercise more code, normally only against external APIs.
Strong agreement that pre-/post-conditions and invariants are highly useful, though. I prefer that they are enabled in production code as well, except in the hottest of code paths.
To be fair, the original article does not contain the word "unit" - it just talks about tests, which is arguably more general.
In my personal opinion, tests that are "non-unit" tests are still useful - testing the validity and correct functioning of larger pieces of software is a good thing, even if it does not run fast, or if the broken code is not immediately obvious.
The unfortunate reality is that the Cairo project doesn't want to be helped.
I've been checking Cairo from time to time for a very long time. There was a period of active development because it was used by Firefox and, I think, had at least one dev working on it paid by Intel.
But there's no indication that the project wants your help. See https://www.cairographics.org/ and try to find the part that tells you how to submit a patch. To submit a bug you need to use a mailing list or an antiquated Bugzilla instance.
Cairo is ostensibly a Gnome-affiliated project, as it's used in Gtk, and Federico is a big deal in the Gnome project.
It's telling that he had to set up what is essentially a personal fork of Cairo on GitLab to do any work.
At this point in time, moving the code officially to GitHub (or GitLab) from anongit.freedesktop.org should be a no-brainer if your goal is to have contributors.
You are claiming that Cairo doesn't want to be helped, but your actual complaint seems to be that Cairo uses a development workflow that you personally dislike and see as old-fashioned.
I don’t think the parent's complaint is about the workflow as such. There is basically no information on how to start as a developer, giving the impression of a closed cabal. A project that wants and welcomes help should in some way facilitate that: have a page, or a section of a page, that welcomes contributors and explains how and where to start and how to get changes accepted. It’s fine if that is “please just send us an email, we’ll pick you up from there”, but “how to contribute” appears exactly zero times on the page.
The guide on how to get started is in the source tree, in a top-level file called HACKING [1]. The idea, presumably, is that anyone seriously interested in the project is likely to begin by downloading and reading the source.
I didn't get that impression at all, though. If people want to contribute they only need (1) the source code, and (2) a way to contact the authors. Both conditions are met, so there's really no need for redundant lip service, especially on a minimalist, straight-to-the-point website like that.
Now it's the next day and I see Federico did not comment here, but he did send merge requests: https://gitlab.com/cairo/cairo/activity So it looks like the CI will live in the official GitLab soon :)
They talk about patch submission in the git subsection of the downloads page, and the documentation page welcomes contributions in its first paragraph.
I can't speak for the Cairo project, but some projects prefer a slight barrier to patch submission to improve the signal-to-noise ratio. Anyone invested enough to produce a well-formed patch should be motivated enough to send an email to either a mailing list or any of the developers found in the git commit history. Reviewing patches takes significant time, so it can make sense to require the submitter to invest a bit of their time before consuming yours as maintainer.
Do you think it's possible someone would have said the same thing back when the options were SourceForge or Google Code, and can you see how that would have been a bad idea? The reality is that services such as these are ultimately ephemeral, and of all the ephemeral things one may want to manage, email and mailing lists may be the lowest common denominator.
Google Code and SourceForge were around for, what, 10 years or so? That would be 10 years of potentially greater levels of contribution. IMO, that's a price worth paying for moving a repo once a decade.
Everything is ephemeral. Take advantage of it while it makes sense for whatever it is that you're trying to achieve.
A lot of open source projects don't want to be helped, given how hard it is to contribute. I've submitted a few patches to various projects, closing open issues, that still linger as PRs years later.
"The reference implementation of this paper provides a software implementation of the math and rendering support classes. This is based on the Cairo library; indeed, so far the reference implementation has been based on Cairo. However, it is now possible to provide an implementation more appropriate to the target platform."
The problem with Cairo is that it's painfully slow. The last time I tried (I don't remember exactly what it was), it just couldn't cope with realtime performance.
> The problem with Cairo is that it's painfully slow.
It really isn't, though... it's just painfully easy to fall into slow paths, and not obvious how to avoid them. Time and again I've seen people post benchmarks of cairo where they're only benchmarking the image surface; of course that's slow, it's CPU rendering! If your realtime graphics app is ever creating a new surface and it's not using cairo_surface_create_similar() or cairo_{xlib,xcb}_surface_create(), you've probably fucked up.
Nearly 100% of the time, when someone's fallen into a slow path using cairo, it's because they've managed to get cairo to copy something back from the graphics server, do some operations on the CPU, and then reupload back to the server. And surprise, that's very slow. After that, the majority of remaining slow cases are due to creating too many clipping regions (which is common in lots of complex rendering applications, but usually caught by the application developers themselves since Cairo's actually remarkably good at debugging these kinds of rendering issues with tools like the script surface and cairo-perf-trace).
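To make the fast path concrete, here's a minimal sketch, assuming `target` is the surface backing your window (e.g. from cairo_xlib_surface_create()):

```c
#include <cairo.h>

/* Sketch: allocate an offscreen buffer compatible with `target`, the
   surface backing the window. Unlike an image surface, which lives in
   CPU memory and forces round-trips through the graphics server, a
   "similar" surface lives wherever the target does, so blits between
   them can stay server-side. */
static cairo_surface_t *
make_offscreen (cairo_surface_t *target, int width, int height)
{
  return cairo_surface_create_similar (target,
                                       CAIRO_CONTENT_COLOR_ALPHA,
                                       width, height);
}
```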
Meanwhile, Firefox has managed to pretty much stamp out all of the rest of the actual painfully slow Cairo paths...
Part of this perceived slowness is because the example code is pretty bleak and abysmal (just look at https://www.cairographics.org/documentation/ if you don't believe me). Part of it is because everyone who used to maintain cairo (before they ran off to do VR at Valve, or Wayland, or Tizen, or whatever) were Xorg developers, and Xorg things that were "so trivially obvious" to them aren't trivially obvious to basically everyone else. And part of it is "stack-overflow-itis": "I'm just going to copy this example code because this guy got it working, and I don't care to understand the mechanics of how and why it works, or why it's slow in my code."
tl;dr: cairo's not slow, stop creating/rendering against image surfaces, use the perf tools to keep clear of bad application behaviors.
Yeah, I was actually surprised at how fast cairo can be. I had some ideas for optimizations, like prerendering map markers into tiles and then rendering those tiles, but for my use case (rendering fewer than 10k small images on a single surface) I didn't even need them: with 2k images I got over 40 fps, which felt snappier than Google Maps and was enough. Of course, if someone needs something better, hw-accelerated OpenGL is the way to go. Current processors are fast, but they have their limits.
As I am just getting into gtk-based UI programming, I have not found any good overviews of how to do things "right" to get the best performance. Could you give a pointer to where I could find information on how to draw efficiently with cairo, including some pixel data?
I don't think that documentation truly exists. You can read the Gtk+ documentation and the Cairo documentation, but that really only tells you what the APIs do in the grand scheme, not why they were added, what problems they solve, or the gist of how to get best perf. Most of the time when you're writing new widgets you only care about drawing lines, placing icons and laying out text, for which Gtk+ has fast higher level mechanisms and you should use those (see gtk_render_line(), gtk_render_icon(), gtk_render_layout() and their friends). When in doubt, use the newer widgets and functions (the ones with the latest "Since:" annotations), since they have usually replaced older functions with worse performance or behavior. If you're lucky, those new functions will even document why you shouldn't use the old function anymore...
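For a rough sketch of what a GTK 3 "draw" handler using those helpers looks like (widget setup omitted):

```c
#include <gtk/gtk.h>

/* Sketch of a GTK 3 "draw" handler that leans on the higher-level
   gtk_render_*() helpers instead of hand-rolled cairo calls. */
static gboolean
my_widget_draw (GtkWidget *widget, cairo_t *cr, gpointer user_data)
{
  GtkStyleContext *ctx = gtk_widget_get_style_context (widget);
  int width  = gtk_widget_get_allocated_width (widget);
  int height = gtk_widget_get_allocated_height (widget);

  /* Theme-aware background and frame, no manual path construction. */
  gtk_render_background (ctx, cr, 0, 0, width, height);
  gtk_render_frame (ctx, cr, 0, 0, width, height);

  /* A separator line across the middle, drawn in the theme's style. */
  gtk_render_line (ctx, cr, 0, height / 2.0, width, height / 2.0);

  return FALSE;  /* let default/child drawing continue */
}
```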
The general theme of working with Gtk+ is "do as the library does": it's mostly been written and maintained by people who know these things through decades of experience, and that code can help guide you away from bad behaviors. Take the famously bad StackOverflow-style toy example that creates an image surface, imperatively renders some clock or gears vector image to it, sets that image surface as the source of your widget's cairo context, renders the whole image to the widget, and wonders why it only gets 10 fps. Compare that with using gdk_window_create_similar_surface() to make your offscreen rendering target, splitting your drawing up into the "clock face background" and the "clock hands foreground" pieces, and only re-rendering the hands as they move (or even prerendering all of the different hand positions to another offscreen buffer and blitting the pieces as necessary), and suddenly you can get hundreds of FPS easily.
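Something like this sketch (the draw_clock_* helpers are hypothetical stand-ins for the expensive and cheap halves of the drawing, and real code would also invalidate the cache on resize):

```c
#include <gtk/gtk.h>

static void draw_clock_face  (cairo_t *cr, int w, int h);  /* hypothetical */
static void draw_clock_hands (cairo_t *cr, int w, int h);  /* hypothetical */

static cairo_surface_t *face_cache = NULL;  /* illustrative global */

static gboolean
clock_draw (GtkWidget *widget, cairo_t *cr, gpointer data)
{
  int w = gtk_widget_get_allocated_width (widget);
  int h = gtk_widget_get_allocated_height (widget);

  if (face_cache == NULL)
    {
      /* Render the static clock face once, into an offscreen surface
         that matches the window, not a CPU-side image surface. */
      face_cache = gdk_window_create_similar_surface (
          gtk_widget_get_window (widget),
          CAIRO_CONTENT_COLOR_ALPHA, w, h);
      cairo_t *cr2 = cairo_create (face_cache);
      draw_clock_face (cr2, w, h);   /* the expensive part, done once */
      cairo_destroy (cr2);
    }

  /* Per-frame work is now cheap: blit the cached face, redraw hands. */
  cairo_set_source_surface (cr, face_cache, 0, 0);
  cairo_paint (cr);
  draw_clock_hands (cr, w, h);

  return FALSE;
}
```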
Sadly all of this means source diving a lot of the time. And it can be hard, especially in cases where you want to stray off the beaten path and do a completely new widget design - you will want to find and make friends with great GNOME people as they can really mentor you through these more complicated widgets until you get your legs under you.
The toolkit as a whole is moving towards a model more like that of modern browsers, where widgets are conceptually smaller and simpler, and composite widgets and layouts are more of what you actually have to maintain. (A good example of the latter is looking at how complicated the current code in GtkSpinButton is, compared to how it could conceptually be a much simpler composite widget that only handles the key/button events between a pair of buttons and a text entry.)
This shift in design is a boon for many application developers, but it can mean really struggling if you want to write or maintain complex applications with components such as drawing canvases (e.g. the GIMP) and layered cross-process rendering (I used to work on VMware Workstation...)
You can help! Document your findings as you learn, write patches against these libraries and their documentation, get it reviewed and committed. The Gtk+ folks are not scary to work with, just very opinionated.
Typical condescending answer: "you fucked up." There are times when you absolutely DO want to use image surfaces, such as when you need raw access to the pixel data, or when you want to combine cairo rendering with something else.
It's been a looong while since I used cairo, so I don't know if my information is still correct, but at the time it was slow. The slowness originated in cairo's use of libpixman's not-very-well-optimized interpolation functions. I didn't just complain, either: I wrote patches to libpixman to make it faster. They lingered on that stupid freedesktop.org site for a few years before being marked as WONTFIX. Back then, the project wasn't friendly to new contributors at all. But Federico is a very friendly and active developer, so I'm sure the situation is much better nowadays.
Typical hackernews half-off-topic riposte. Not even sure why I bother...
> There are times when you absolutely DO want to use image surfaces, such as when you need raw access to the pixel data, or if you want to combine cairo rendering with something else.
And at that point, you really can't complain about Cairo being slow, can you? You're not even doing vector graphics anymore; you're back in raster graphics territory. Cairo would have been a better library if the image surface were only public in debug builds, forcing developers to learn and understand the implications of using it, or to reinvent it themselves with some combination of calls ending in XGetImage or XPutImage, which have plenty of neon lights around them reminding people that these operations are slow.
Literally any raster API that has to download the image from the server, change a pixel, and put the image back on the server is going to be slow. This is precisely why X extensions like Xdamage and Xrender exist to attempt to mitigate the damage by manipulating smaller image regions (less data to download/upload) and pushing operations server-side, and why X APIs like "XPutPixel" exist. But there are limits to just how good these things can ever be; they're going to be limited by GPU-to-main-memory bandwidth and CPU rendering speeds _always_.
These raster problems are why UI toolkits have moved on to OpenGL (ES), Wayland, and similar architectures on other OSes. OpenGL, despite its flaws, makes it very explicit when you're doing obviously slow things like download/modify/upload, and has tools that make it easy to avoid those round-trips. And Wayland doesn't know so much what a "pixel" is, just how to move buffers around and notionally where those buffers are in space; it's up to the compositor's implementation to figure out how to put together the actual images in buffers and put them on output devices.
> It was a looong while since I used cairo so I don't know if my information is correct,
And this seems to be the recurring problem... everyone's moved on, cairo's now "infrastructure", nobody gives a damn about infrastructure until it's crumbling, and then it's hard to find anyone to devote resources to repairing and maintaining infrastructure... Luckily, Federico's good at these things, and has historically been good at finding devs to give him a hand.
I can't really tell you anything about your pixman patches though, besides the fact that pixman's practically unmaintained too (see my first post about how the devs moved on), it's shared infrastructure with the X server, and nobody really wants to touch it because it's "stable" and was mostly written before the era of unit tests and CI/CD so it's hard to prove changes to it are correct across all of the various platforms and use cases it has to support... It's like trying to change Win32 APIs, only worse because people are still building Xorg on bizarro platforms and architectures you won't have access to test on.
Your best bet is to actually bug developers with commit access to review/push your patches and then work your way up to commit access and start actually maintaining this stuff.
> And at that point, you really can't complain about Cairo being slow, can you? You're not even doing vector graphics at this point, you're back in raster graphics territory.
The dichotomy between vector and raster graphics does not exist on screens built from pixels. Eventually, whatever you are doing, you end up with pixel data. It is incredibly common to need access to this pixel data for some reason. For example, for a long time Cairo used pre-multiplied alpha for alpha compositing, which caused all sorts of grief for people who required straight alpha.
Your idea of Cairo hiding the image data would have made it impossible to use for rendering transparent PNGs, as they have (or had; I don't know how Cairo works these days) a pixel format Cairo doesn't support. Or for using cairo with SDL, for that matter.
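To illustrate why that raw access matters, here's a sketch (the function name is mine) of converting straight-alpha RGBA, such as freshly decoded PNG data, into the premultiplied ARGB32 layout cairo's image surfaces expect:

```c
#include <cairo.h>
#include <stdint.h>

/* Sketch: build a cairo image surface from straight-alpha RGBA bytes,
   premultiplying each channel as CAIRO_FORMAT_ARGB32 requires. */
static cairo_surface_t *
surface_from_straight_rgba (const uint8_t *rgba, int width, int height)
{
  cairo_surface_t *s =
      cairo_image_surface_create (CAIRO_FORMAT_ARGB32, width, height);
  cairo_surface_flush (s);  /* required before touching the pixels */

  uint8_t *dst = cairo_image_surface_get_data (s);
  int stride = cairo_image_surface_get_stride (s);

  for (int y = 0; y < height; y++)
    {
      uint32_t *row = (uint32_t *) (dst + y * stride);
      for (int x = 0; x < width; x++)
        {
          const uint8_t *p = rgba + 4 * (y * width + x);
          uint8_t a = p[3];
          /* Premultiply each channel by alpha. */
          uint8_t r = (uint8_t) (p[0] * a / 255);
          uint8_t g = (uint8_t) (p[1] * a / 255);
          uint8_t b = (uint8_t) (p[2] * a / 255);
          row[x] = ((uint32_t) a << 24) | (r << 16) | (g << 8) | b;
        }
    }

  cairo_surface_mark_dirty (s);  /* tell cairo the pixels changed */
  return s;
}
```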
Your description of how raster APIs work is wrong. What they do is write to an image buffer which is then uploaded to the X server. And if the server has the shm extension, they just write the pixels to a shared buffer.
Wayland doesn't impose any rendering API on clients. It gives you a pixel buffer or a GL context and lets you draw whatever you want.
TL;DR pure vector graphics does not work. And the sooner you vector weenies realize that, the better. :p