The goal was to compose a layout into a single image.
You created a layout by literally cutting and pasting things onto a board. Then you placed that board in the area at the bottom and took a picture of it that was transferred to film loaded in the top.
You're right that the film was special; but it's the other way around from how you were thinking. The film was not sensitive to red light. To this film, red is "black" and cyan or blue is "white".
Why this was useful:
- You could open the box of film (it came in sheets) in a room that was darkened except for a red bulb, without exposing it.
- You could use overlays of transparent red material (rubylith) to mask things precisely. Even though you could see through to the layer below, the camera would see it as all black.
- And, as the article mentioned you can add notes to the layout with blue pencil and it would be invisible to the transfer. We always called this "non-repro blue" though, as in, the camera wouldn't reproduce it.
-- Litho film was also very high contrast so everything pretty much came out black or white. (Photos weren't actually reproduced as greyscale but rather as a set of larger or smaller black dots using a halftone screen. This still applies when things are printed.)
-- Because litho film was sensitive to blue, the non-repro blue writing on the white paper would, like the white itself, be an exposed part of the image. This results in a black area of the negative where silver halide has been turned into metallic silver. This black area would then become white again when the negative was used to create a printing plate.
Yeah, I expect that's just someone with a mania for Wiki-standardization. It's not a precise shade; any cyan-ish color would do. In practice non-repro pencils and markers varied from sky blue to a rich turquoise.
The article seems confused - it's implying that there is some magic shade of blue that cameras can't see (even today), which is totally wrong. I think that's why someone found it interesting to post here.
Graphic arts film wasn't at all fussy about the shade of blue (as you note) and so while there were expensive non-repro blue markers and pencils, everyone I knew (at the very end of the era of graphic arts cameras) used blue highlighters, so design studios were full of them.
I've stuck with blue as the only highlighter colour I'll ever use, more than 20 years since the original rationale.
Also, I used to freak people out by scribbling (non-repro) obscenities on a flat that was going to be sent to photo and turned into a newspaper the next day.
Especially since an sRGB triplet only specifies how to perceptually reproduce the color, and film has a different spectral response from the human eye. The dye in non-photo blue probably needs to be spectrally confined to the blue range, with no reflectance in the red or green range, since any red or green component would likely show up on film.
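To make the spectral-response point concrete, here's a toy model: treat the film's "exposure" as a weighted sum of RGB channels, with a hypothetical sensitivity vector whose red weight is zero (orthochromatic film is roughly blind to red). The sensitivity weights and the sRGB values for "non-photo blue" and "rubylith red" are all illustrative assumptions, not measured data; the human-vision weights are the standard Rec. 601 luma coefficients.

```python
HUMAN = (0.299, 0.587, 0.114)   # Rec. 601 luma weights (human brightness)
ORTHO = (0.0, 0.4, 0.6)         # hypothetical ortho-film sensitivity: blind to red

def response(rgb, weights):
    """Weighted sum of normalized RGB channels; 0.0 = black, 1.0 = white."""
    return sum(c / 255 * w for c, w in zip(rgb, weights))

white_paper   = (255, 255, 255)
nonphoto_blue = (164, 221, 237)  # illustrative non-photo blue
rubylith_red  = (228, 0, 43)     # illustrative rubylith red

# To the eye, the red overlay is clearly visible against the paper;
# to the film, it reads as near-black while the blue reads as near-white.
for name, rgb in [("paper", white_paper), ("blue", nonphoto_blue),
                  ("red", rubylith_red)]:
    print(name, round(response(rgb, HUMAN), 2), round(response(rgb, ORTHO), 2))
```

Under these assumed weights, the rubylith red scores about 0.1 on the film (near-black) while the non-photo blue scores about 0.9 (near-white), matching the masking and non-repro behavior described above.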
They will say it. I worked for a startup acquired by Yahoo. I was hired by the founders, but then they all left and I was kind of stranded. Anyway I tried to make a go of it, and proposed how we were going to hire some new great people, and my manager expressed confusion about why I wanted great people.
In his view, if you get great people to do not-so-great jobs, you'll end up overpaying and then they will just leave. For him it was obvious that not all jobs are going to be great. Most of them will kind of suck.
It was 180° away from how I thought, which is the standard "only hire A players" script. But then I realized it did make a kind of sense. Startups and small companies that win have geniuses doing everything, including very unglamorous work. Often they have great solutions which work okay for them, but are incomprehensible to others. (Example: Paul Graham's Yahoo Store builder in Lisp).
Such people are compensated with maybe stock options, but more importantly, with freedom. That wasn't really on offer at a big company. So we had to look for average people to do average things at an average rate, and usually it would be multiple people to do the job that one brilliant person would have taken on.
I can't claim to have jl's "x-ray vision", but I think I've gotten a lot better at this. Possibly because I'm not naturally good at this, so I have to do it consciously.
The way in for me was English literature. Since the modern novel, writing in English focuses on revealing character through what someone does or says. Why do they choose this word and not another? By their actions, what can you learn about what's going on in their head?
Turns out that works pretty well in real life too.
A trivial example: I had a supervisor who I would say had problems with role-reversing. While ostensibly the boss, this person really wanted other people to notice what a difficult time they were having. Again I figured this out just through word choice. Some people, in the boss role, express their vision as "we need to do this"; with this boss it was more like "nobody is supporting me, so you all need to step up". Instead of performing confidence, they performed their personal discomfort, which is a cry for others to show that they care.
I think most people would stop at finding this annoying, but look a little deeper and you can see the need behind it. This is someone who habitually fears abandonment. (Maybe that's even why they took the boss role.)
So, all it took was a tiny comment every now and then from me, to allay their fears, and our relationship dramatically shifted for the better.
Now, this kind of personal attention can be just kindness. You can use it to give people around you the motivation they need to succeed. But in this case, the neediness was sort of a bottomless pit, so it made me seriously question ever working with this person again.
Most people are broadcasting all kinds of things about themselves, almost painfully strongly. It's just that for the most part, nobody is picking up on it. Developers in particular usually don't want to pick up on it.
Let's not forget that your team needs to manage itself differently for the remote worker to be a success. Here's stuff that we (Sauce Labs Mobile Team) have found to work:
- Slack everything all the time even if you are in the same room. If you're not using Slack get on it; it's by far the best tool for remotes. If you have a discussion in person, copy the conclusions to the channel.
- Use video hangouts for meetings whenever possible.
- During such meetings, everyone must be "equal" in how they are present. Our solution is to have everyone log into the meeting at their own desk. (Audio and video will be delayed by a second, so you all need headphones for this to work.) So everyone appears as a talking head, equally taking turns. Another solution might be to do videoconferencing in a meeting room with a very large screen, so all the remote workers appear to be as present as people in the room.
- Make sure that team members socialize with the remote worker. We have semi-randomly-assigned sessions we call "coffee" where you both get a beverage and just hang out for an hour on a video chat. I know it sounds silly but it has a HUGE benefit.
This has to be a sick joke. The monorepo at Google was the bane of my existence, and of everyone else's that I knew.
I don't know what it's like now, but at the time Google was on svn, and they had to run a specially hacked version of it. It used a home-grown internal requirements system to pull down the tree in a sparse way. To accomplish that, you had to re-specify all your dependencies in these special text files (because specifying deps in two places is obviously better). And then the tool would spend a lifetime figuring it all out, and you would still download like 35% of the repo anyway, because the nature of giant repos is to get tangled together.
With languages like PHP it might not be so bad, as you typically are editing one file at a time. But most of Google is Java, and most people like using IDEs for that, so it was a disaster. An IDE likes to touch everything all the time, to re-index and re-compile. And, fun fact: at the time Google recommended you store your entire home directory on an NFS network drive, for security. Everyone quietly ignored that.
It was better in one way: everyone could see everyone else's source, re-using libraries was more common, and branching and contributing patches was more common. It prevented people from hoarding their source code, and just throwing .jars or .so's over the wall. But I think the Github era has shown that this did not require everything being in one giant repo! I mean, at the time, Google had Codesearch and other great tools; even though git wasn't super common at the time, this could have been solved in other ways.
My current company is in the process of breaking up their monorepo, even though our codebase is nowhere near the same size. Lately I get to taunt my co-workers because I created a new project in the new broken-up way. All the tests on my micro-repo pass in 5 seconds, and it takes many minutes for the big one.
About a year ago I started being careful to call my things projects too.
In the current climate people want to hear what your monetization strategy is first, or they think you're a loser. And a lot of the time I don't have one, or it's really vague. I just have an idea that seems like it will be really important to at least some people.
Calling it a project also liberates me from feeling guilty about not having a corporation around it, or going to meetups to yammer with my city's alleged startup entrepreneur scene, etc.
Rent in the downtown core is about $1700 for a 1-bedroom.
Developers can do decently well in Vancouver, especially if you are employed by a foreign company. I am not sure about typical salary for developers, but I'm doing okay.
So, compared to San Francisco, for a good developer, Vancouver is slightly better. However, for everyone else in Vancouver, it's a disaster, since the median income is much lower. This is why Vancouver is actually less affordable than San Francisco, for the general population.
In this excellent book, it is said that they do it as a form of saving. Basically, they don't have bank accounts, and they know that if they don't buy those 1-2 bricks right away after they get paid, they'll spend the money on something else.
This is a large part of the reason. Money that gets earned gets spent pretty quickly on one thing or another, and it would be next to impossible for a family to save up enough to buy all the materials to build a house in one go.
Another reason is that they don't have easy access to credit markets, so they can't just get a mortgage and move into a home and start paying it off.
One more part of the reason is that they usually build the homes themselves, and it's rewarding to see the house go up brick by brick - visual progress is being made.
I'm not an expert, but these are my informed observations and learnings.