A maxim of technology is that failures reveal underlying
mechanism. A good way to learn how something works is to
push it to failure. The way it fails will usually tell you
a lot about how it works.
In "XScreenSaver: a collection of free screen savers for X11 and macOS" by Jamie Zawinski.
Edit: if you want to read the source code (highly recommended!), retype the address below by hand, or you'll become acquainted with jwz's countermeasures.
Mirror here: http://web.mit.edu/ghudson/dev/nokrb/third/xscreensaver/hack...
As one of the maintainers of the C++ ABI for an operating system, I assure you, it is best avoided for system components unless there's a compelling justification.
Speaking as someone who helps build an operating system distribution: we consider Xscreensaver to be a system component.
Also, for licensing purposes, the GPL considers software distributed with the system to be a system component in some scenarios. For example, "GPLv3 has adjusted the definition of System Library to include software that may not come directly with the operating system, but that all users of the software can reasonably be expected to have."
xscreensaver is under the X11 license, not the GPL.
Why does a C++ ABI really matter in a system distribution? All the distros I've used are internally consistent in their ABI and update all their deps when the C++ ABI is updated.
Not from a distributor's perspective ;-)
Correct, but since it is distributed as part of the system, or is a component that most users will have, linking your GPL'd code against Xscreensaver's API is OK, because Xscreensaver is considered a "system library".
Really, I was referring to it more for purposes of definition than anything else.
> Why does a C++ ABI really matter in a system distribution?
Because ABIs create a point of synchronization, and internally for that distributor, that creates a flag day where a given set of components must be rebuilt at the same time.
Ideally, as a distributor, you don't rebuild the entire world every time one component changes. You only rebuild what you have to, since every delivery produces new binaries, which in turn increases downtime for consumers who are upgrading.
As such, ABI changes (thankfully GCC's ABI doesn't change as much as it used to) create unnecessary churn, and if you can easily avoid them, you usually do (as a distributor).
It also means that if a user had their own Xscreensaver module, it too would be required to use the same ABI.
In short, there's no particularly compelling advantage and a lot of disadvantages for a widely used and distributed component where you want to maximize its availability on platforms.
C++ is just a tool; it's not always the right one. The maintainer has made implementation choices to maximize compatibility and portability. You have to remember that Xscreensaver is designed to be used and executed on a wide variety of platforms, some of which are relatively old.
(he seems to port xscreensaver to all the things he uses, so I can understand not wanting to accept a huge pile of someone else's code in a different language than the rest of the package)
I was able to port many of my screensavers to other platforms easily; in most cases, the savers just have two functions: setup() and draw(), the first of which does any one-time initialization, the second of which does all per-frame rendering. One can write a simple wrapper for any platform that initializes the GL contexts and handles user input, as well as any locking/unlocking code.
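As a rough sketch of what such a wrapper looks like (the platform_* hooks and the two entry points here are placeholders of my own invention, not xscreensaver's actual API):

    #include <stdbool.h>

    /* The two per-saver entry points described above. */
    static void setup(void) { /* one-time init: state, GL resources... */ }
    static void draw(void)  { /* render one frame */ }

    /* Per-platform glue. These empty bodies stand in for real windowing,
       GL-context, locking, and input code on X11, macOS, etc. */
    static void platform_init(void)           { /* window + GL context */ }
    static bool platform_quit_requested(void) { return false; /* poll input */ }
    static void platform_present(void)        { /* swap buffers */ }

    int main(void)
    {
        platform_init();
        setup();                             /* saver's one-time setup */
        while (!platform_quit_requested()) {
            draw();                          /* saver's per-frame work */
            platform_present();
        }
        return 0;
    }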
However, I don't think that would achieve much. Most of the savers were written in a very terse, nasty form to meet the various constraints the authors worked under.
Yes, this is what I meant. How big a justification does he need to refuse code when such a thing is possible?
But my point is that maybe he has reasons other than being a jerk. If it isn't worth it to you to do it otherwise, why does the discussion start with it being worth it to him?
He's often right (see CADT) but in a way that is jerkish, and that's really limited the adoption of jwz code.
He blocks links coming from HN. I believe he always has.
That's caused by a different effect. NTSC has terrible color bandwidth. The YIQ signal is really a Y (luminance) signal at full bandwidth, plus I and Q (chroma) signals with severely limited bandwidth (because, during the transition from black and white to color, they had to be stuck in a subcarrier of the main black-and-white signal). NTSC color TV is really greyscale TV with color tweaks superimposed.
You can only go through the IQ color space about 10 times across the width of the screen. The way to simulate this is to convert to YIQ, apply a low-pass filter horizontally to the I and Q components, then convert back. Note that this is a horizontal issue only; in the vertical direction, every line can be a different color without problems.
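For anyone who wants to try it, here's a minimal sketch of that simulation in C. The function names and the packed float-RGB image layout are assumptions of mine, and a box filter stands in for a proper low-pass; the matrices are the standard FCC NTSC RGB/YIQ conversions:

    #include <stdlib.h>

    static void rgb_to_yiq(float r, float g, float b,
                           float *y, float *i, float *q)
    {
        /* Standard FCC NTSC RGB -> YIQ matrix. */
        *y = 0.299f*r + 0.587f*g + 0.114f*b;
        *i = 0.596f*r - 0.274f*g - 0.322f*b;
        *q = 0.211f*r - 0.523f*g + 0.312f*b;
    }

    static void yiq_to_rgb(float y, float i, float q,
                           float *r, float *g, float *b)
    {
        *r = y + 0.956f*i + 0.621f*q;
        *g = y - 0.272f*i - 0.647f*q;
        *b = y - 1.106f*i + 1.703f*q;
    }

    /* Horizontal box blur of one scanline; wider radius = less bandwidth. */
    static void box_blur_row(float *row, int w, int radius)
    {
        float *tmp = malloc(w * sizeof *tmp);
        for (int x = 0; x < w; x++) {
            float sum = 0; int n = 0;
            for (int k = -radius; k <= radius; k++)
                if (x + k >= 0 && x + k < w) { sum += row[x + k]; n++; }
            tmp[x] = sum / n;
        }
        for (int x = 0; x < w; x++) row[x] = tmp[x];
        free(tmp);
    }

    /* Low-pass only the chroma (I, Q) of each scanline; luma (Y) stays at
       full bandwidth, and nothing is filtered vertically. A radius around
       w/20 roughly matches "about 10 color transitions per line". */
    void ntsc_chroma_limit(float *rgb, int w, int h, int radius)
    {
        float *Y = malloc(w * sizeof *Y);
        float *I = malloc(w * sizeof *I);
        float *Q = malloc(w * sizeof *Q);
        for (int line = 0; line < h; line++) {
            float *p = rgb + (size_t)line * w * 3;
            for (int x = 0; x < w; x++)
                rgb_to_yiq(p[3*x], p[3*x+1], p[3*x+2], &Y[x], &I[x], &Q[x]);
            box_blur_row(I, w, radius);
            box_blur_row(Q, w, radius);
            for (int x = 0; x < w; x++)
                yiq_to_rgb(Y[x], I[x], Q[x], &p[3*x], &p[3*x+1], &p[3*x+2]);
        }
        free(Y); free(I); free(Q);
    }

Because only I and Q are blurred, edges stay sharp in brightness but smear horizontally in color, which is exactly the NTSC look described above.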
My old CRT only supported PAL, so that's why NTSC inputs were displayed without color.
For NTSC, since it's really only the timing and color encoding that differ, it's not uncommon to get a grayscale picture from an NTSC input.
So no, our consoles didn't look like http://i.imgur.com/Pq2Yra4.png; they did indeed look pixel-perfect: http://retrorgb.com/images/Link%20RGB%20Compare.jpg
(I'm not old enough to remember any TV connector before about 1997. By then, in the UK, only really cheap or old stuff lacked SCART.)
I wonder why there were no RGB inputs on US TVs; it's the most natural 240p/480i signal you can think of, going almost straight from the video chip to the CRT electron guns.
Like, yeah, maybe they looked that way on your NTSC TV, but my pixel-art games looked pretty good on my monitor.
"Something exceedingly Contrary to the American Method"
"Perfection at Last" or "Picture Always Lousy"
(Is there one for DVB vs ATSC?)
(BTW, did you notice the part where the author specifically says he's talking about NTSC, and other standards work differently? Because, to be honest, getting all nationalistic over obsolete TV connection standards is kinda weird.)
Whenever I see those artifact-loaded attempts, I think it's more a combination of a very bad CRT, cable, and RF modulator, with most of the artifacts in the signal coming from the RF modulator.
It's like someone in the US decided to once again screw the whole consumer base and remove an essential feature in order to create an expensive niche. Now that I think about it, what did people in the US use professionally to send video? Were there TVs labelled "professional broadcast quality" with component inputs in the eighties?
When I started editing, BetaSP was the broadcast format.
The results are indeed brilliant.
It's OK; it removes most of the CVBS (composite) garbage like dot crawl, but it's still nowhere near RGB.
Awesome picture quality, though.
This reminds me of the first time I used one of those flat CRTs: I was so used to the convex curve that the flat one looked concave.
I've been trying to find something similar for my Windows 10 machines, but no luck. I wish they'd port it across, but failing that I'd love to hear recommendations for a Windows app like Cathode.
Very cool and pretty fun.
There are a few minor errors in this section:
> Cathode ray tube displays work by repeatedly sweeping an electron gun back and forth across the extents of a fluorescent screen. The strength of the electron beam may vary during this sweep, and this in turn affects the brightness of the phosphors it illuminates. In early black-and-white models, only a single electron gun was necessary; with the introduction of color TVs, three electron guns were used, each tuned to a different phosphor, which would appear as the color red, green, or blue.
> The points of light created by this interaction are too diffuse to produce a sharp image, so a shadow mask is used to focus the electron beams before they reach the screen. A shadow mask consists of a metal plate with holes or apertures designed to filter out unwanted electrons. The shape of the shadow mask and the configuration of its holes vary by model and contribute greatly to the characteristics of the resulting image. The shadow mask also defines the dot pitch of the display, effectively limiting its highest possible resolution. As we will see later on, the dot pitch of the display does not necessarily correspond to the resolution of the displayed image.
The three electron guns in a color CRT aren't tuned to the different color phosphors; they are physically positioned so that the holes in the shadow mask or the gaps in the aperture grille block the beams from hitting the unwanted color phosphors, letting each beam hit only the phosphor it should light up.
In a shadow mask CRT, there is a triad of phosphor dots (or stripes in newer TVs) for each hole in the shadow mask. The electron beam from each gun goes through this hole at a different angle, so it only hits the correct phosphor and is blocked from the other two.
An aperture grille CRT uses a similar concept, but instead of a mask with holes in it and triads of dots, it has a series of parallel vertical wires and vertical phosphor stripes serving the same function.
The mask or grille is not there because the electron beam is too diffuse to produce a sharp image. A monochrome CRT has no shadow mask or aperture grille, and with proper focusing it can be plenty sharp. The mask or grille is there simply to block the beams from hitting the wrong color phosphors.
More information is in the Wikipedia articles on shadow masks and aperture grilles.
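As a back-of-the-envelope illustration of the dot-pitch limit mentioned in the quoted text (the 0.28 mm pitch and 320 mm visible width are made-up example numbers, and pitch is treated as simple horizontal triad spacing):

    #include <stdio.h>

    int main(void)
    {
        double visible_width_mm = 320.0;  /* hypothetical tube */
        double dot_pitch_mm     = 0.28;   /* spacing of like-colored triads */

        /* Each resolvable color column needs at least one triad, so the
           mask caps horizontal resolution at roughly width / pitch. */
        printf("~%.0f resolvable columns\n", visible_width_mm / dot_pitch_mm);
        /* Prints ~1143: a 1600-pixel-wide mode on this tube cannot be
           fully resolved, no matter how sharp the input signal is. */
        return 0;
    }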
I also have a question some reader might be able to answer: with the simple implementation, (at least) two-thirds of your electrons will hit the aperture mask. To improve power efficiency, you would need finer control over the video signal. Did any television ever do that? Or did they already have to do it to prevent the electron beams for the different colors from interfering with each other?
Moreover, if you physically rotate the display, then the colour changes because that alignment is thrown off—by Earth’s magnetic field, IIRC.
Sure, there's the nostalgia factor, but 20 seconds later I'm over it. If you have a game or something that has a CRT emulation mode, let me turn it off.
The "image orthicon" https://en.wikipedia.org/wiki/Video_camera_tube#Image_orthic... can only be described as a Rube-Goldberg device.
How many of you are making games?