Once you allow that, you can put anything in that graph. For example, that jump in the graph between the Cray and the AlphaServer calls for a product of around 3 cubic meters. I conclude that the Apple car will be more like a people mover than a car :-)
But the interesting thing is that the iPad was created before the iPhone, and shelved to make way for the iPhone, so I'd say the modified graph reflects, if not the release order, at least the order of innovation.
When the article refers to 'graphics-arts cameras', what do they mean?
Is there a spectral notch filter applied to the CCD, or is the article a troll?
"Scanning in black-and-white makes it possible for the non-photo blue still to serve its original purpose, as notes and rough sketching lines can be placed throughout the image being scanned and remain undetected by the scan head."
Any black-and-white scanner should have a spectrally flat response and pick up blue, just as black-and-white photographs render the blue sky as grey rather than white.
It's entirely possible that older lithographic film didn't have much response in the blue, but there's really no way that a modern imaging system won't pick it up.
What am I missing?
Edit: Experiment is the arbiter of truth: I took a picture of the screen with my digital SLR. As expected, every color swatch in the article is blue. Desaturated the RAW image. Looks grey.
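The desaturation step is easy to reproduce numerically. This sketch (hypothetical swatch value, standard Rec. 601 luma weights) mixes a light non-photo-blue-ish colour down to grey, the way a spectrally flat sensor effectively would, and shows it lands well away from both pure white and pure black:

```python
# Desaturate an sRGB swatch with a flat "luma" mix, as a camera or
# scanner with spectrally flat response effectively does.
# The swatch value is a hypothetical light non-photo blue.

def to_grey(rgb):
    """Rec. 601 luma: grey = 0.299 R + 0.587 G + 0.114 B."""
    r, g, b = rgb
    return round(0.299 * r + 0.587 * g + 0.114 * b)

swatch = (164, 221, 237)  # assumed non-photo-blue-ish colour
grey = to_grey(swatch)
print(grey)  # a light but clearly visible grey, not white
```

So under any sensible greyscale conversion the blue doesn't vanish; it just comes out as a shade of grey, matching the SLR experiment above.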
The goal was to compose a layout into a single image.
You created a layout by literally cutting and pasting things onto a board. Then you placed that board in the area at the bottom and took a picture of it that was transferred to film loaded in the top.
You're right that the film was special; but it's the other way around from how you were thinking. The film was not sensitive to red light. To this film, red is "black" and cyan or blue is "white".
Why this was useful:
- You could open the box of film (it came in sheets) in a room that was darkened except for a red bulb, without exposing it.
- You could use overlays of transparent red material (rubylith) to mask things precisely. Even though you could see through to the layer below, the camera would see it as all black.
- And, as the article mentioned you can add notes to the layout with blue pencil and it would be invisible to the transfer. We always called this "non-repro blue" though, as in, the camera wouldn't reproduce it.
-- Litho film was also very high contrast so everything pretty much came out black or white. (Photos weren't actually reproduced as greyscale but rather as a set of larger or smaller black dots using a halftone screen. This still applies when things are printed.)
-- Because litho film was sensitive to blue, the non-repro blue writing on the white paper would, like the white itself, be an exposed part of the image. This results in a black area of the negative where silver halide has been turned into metallic silver. This black area would then become white again when the negative was used to create a printing plate.
Yeah, I expect that's just someone with a mania for Wiki-standardization. It's not a precise shade; any cyan-ish color would do. In practice non-repro pencils and markers varied from sky blue to a rich turquoise.
The article seems confused - it's implying that there is some magic shade of blue that cameras can't see (even today), which is totally wrong. I think that's why someone found it interesting to post here.
Graphic arts film wasn't at all fussy about the shade of blue (as you note) and so while there were expensive non-repro blue markers and pencils, everyone I knew (at the very end of the era of graphic arts cameras) used blue highlighters, so design studios were full of them.
I've stuck with blue as the only highlighter colour I'll ever use, more than 20 years after the original rationale ceased to apply.
Also, I used to freak people out by scribbling (non-repro) obscenities on a flat that was going to be sent to photo and turned into a newspaper the next day.
Especially since an sRGB triplet only specifies how to perceptually reproduce the color, and film has a different spectral response from the human eye. The dye in non-photo blue probably needs its actual spectral output in the blue range, without any dye component in the red or green range, since that would likely show up on film.
I believe they are referring to a technology of the ancients where they made thin films of photosensitive chemicals, exposed them to light, then processed them to make images. The chemicals varied in which wavelengths would activate them.
For instance, red would not activate the paper commonly used for black and white prints, hence the red lights in dark rooms.
It is also possible the cameras illuminated the artwork with a light to which the non photo blue ink was transparent.
The magic word here is "orthochromatic". Orthochromatic photo emulsions (the light-sensitive part of film or photo paper) are only sensitive to short wavelengths of light. The first photo emulsions were all orthochromatic, which makes skin look weird. Later we developed panchromatic film, which is equally sensitive to all colors. It replaced ortho in the camera, but ortho continued to be very useful in the darkroom and in compositing, because it allows the red safelight and tricks like non-photo blue.
Not necessarily. Orthochromatic ("correct colour"), or ortho, materials were actually improved-spectrum materials that were sensitive well into the yellow-green. Prior to that, film and paper were really only significantly sensitive to blue/ultraviolet or "actinic" light. Getting to panchromatic ("all colours") was indeed significant, but ortho was advanced technology at the time. (And yes, being able to see what you were doing in the process room was a Good Thing™. Also, rubylith for masking.)
The article does mention diddling with the contrast and brightness as well as desaturating it. However, it doesn't give references to digital-based workflows working like this. I associate non-repro blue grids and pens with doing physical paste-up on a light table. I wouldn't think they'd be part of a typical digital flow although someone in that business would know better than I.
[Edit: As someone wrote, the article just seems confused. Yeah, you can adjust a digital B&W image so that a light blue goes away. You can also adjust it so that a light yellow or a light anything goes away. Digital sensors do have different wavelength sensitivities, but the use of non-repro blue and rubylith was a function of the specific sensitivities, or lack thereof, of litho film.]
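For what it's worth, the digital analogue of litho film's colour-blindness is trivial: keep only the red channel of a colour scan. Non-photo blue reflects red strongly, so it drops toward the paper white, while black ink stays black. A minimal sketch, with purely hypothetical pixel values standing in for a scan:

```python
# Simulate "dropping out" non-photo blue digitally by keeping only
# the red channel of a colour scan. This mimics red-blind litho film
# in reverse: we keep the one channel the blue ink barely absorbs.
# All pixel values below are hypothetical, not from a real scanner.

def red_channel(pixels):
    """Return a greyscale image built from the red channel only."""
    return [[r for (r, g, b) in row] for row in pixels]

scan = [
    [(30, 30, 30), (180, 220, 240)],     # black ink, non-photo blue line
    [(250, 250, 250), (180, 220, 240)],  # white paper, blue pencil mark
]
grey = red_channel(scan)
print(grey)  # blue marks come out light, close to the paper; ink stays dark
```

This is a simulation of the general channel-extraction trick, not a claim about any particular paste-up workflow.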
Submitted because I really wanted to comment on this sentence: "For a startup, work needs to be both faster and more rigorous than an academic lab."
In our academic lab, if we knew how to be faster, more rigorous, or both, we'd be doing it.
The notion that a startup can do rigorous research faster than academia is curious. The scientific journals are not full of the output of startups. Industry in general, and startups in particular, can't expend the resources on covering corner cases and tidying loose ends. The profit is generally in getting the gist of an idea, not in writing it down, vetting every detail, and sharing it broadly for free.
I chuckled when I read that. Apart from the uninformed way this was expressed, the author's actual interpretation of this statement comes after that:
"In academia, in order to publish a paper, often you just have to get it to work one time out of ten – so you think, OK, I’ll just keep doing the experiment until it works. We need it to work nine or ten times out of ten."
So the author confuses it with engineering. Then -- allow me to be a bit cynical -- the actual meaning of the article would be something like:
"If you want to be a scientist at a startup, you need to become an engineer."
> "In our academic lab, if we knew how to be faster, more rigorous, or both, we'd be doing it."
There are some cases where it's obvious how to be faster (and occasionally more rigorous): judiciously throw money at the problem. In academia, where grant funding is pretty limited and equipment is expensive, you probably want only enough infrastructure for, say, the 50th-90th percentile of load (microscopes, thermocyclers, analysis compute power, what have you), and accept that 10-50% of the time there will be a queue. If the difference between being successful in the market and having your lunch eaten by someone else truly is a matter of weeks, then it makes sense for you and your investors to put some extra money into a widget that will sit idle most of the time but helps when things are crunched.
That said, money can't fix everything -- nine women can't produce a baby in one month!
'Rigour' in the article means 'intensity'. 'Rigour' in science means 'validity'. A study with high rigour means that they've nailed down more loose ends than a study with low rigour. The two concepts are largely orthogonal - and it's disturbing that someone telling scientists what to do has used the wrong version of 'rigour' for that industry.
Ironically, by the scientific sense of 'rigour', startups require much less of it - startups just need something that works well enough, not something that is as correct as can be done.
Academic rigor applied to industry would be a type of overfitting. Just as one example, customers don't care how accurate a fitness monitor is, as long as it is vaguely effective. Yet, few journals or funding agencies would buy your claim of building a fitness monitor unless you can rigorously demonstrate its performance and novelty.
Of course, the story is different in pharma and a few such fields. Theranos is an example of how lack of rigor and openness can damage healthcare startups.
Here is one answer in the form of an analogy -- but I preface this by saying that comparing the set of all academic science with the set of all industry science is fraught at best -- academic science is to cottage industry as industrial science is to factory production.
A related consideration is that training grad students and post-docs is a key component of most academic science. The requirements of training often limit the size of teams working on a single project with the PI-trainee relationship dominating the organizational structure.
As the "PI" of a science startup R&D team, I can start and stop new projects at will with varying team sizes and mandates without the consideration that my folks need to produce a body of published work to further their careers.
The author might have meant aiming for actual results instead of publishability. Academia, in general, is fucked up. Research is being done in order to publish - which means a lack of real rigor and a lot of fake rigor. Studies are performed using bad methodologies, then massaged and/or repeated until results cross the magic threshold, at which point they get published in 10 papers that say the same thing in slightly different words.
It's a terribly inefficient process that could be optimized by changing the goal from "publishing" to "making something that actually works". Thus you could achieve greater speed and more rigor at the same time.
(Basic research could probably be optimized too, although not through market incentives.)
While I generally agree with your comment, I think the one big advantage of private companies over academia is that they're typically much less resource-constrained. Academic labs sometimes take huge shortcuts on cost, with strange side effects. Often the way to be more rigorous or faster is "have fewer resource constraints", and even a mediocrely funded startup can have much more money than a well-funded academic lab. On top of that, the more experienced team members spend a smaller percentage of their time fundraising.
The same thought occurred to me; I think it's probably more along the lines of "faster and relying on more intuition".
I don't think this is a new concept; Jon Gertner in his excellent book "The Idea Factory" writes how the researchers at Bell Labs switched from basic research to development during WW2 and operated in a similar way. If anything, the urgency of development in the war effort resembled a startup in how it accomplished within a few short years projects that would have taken decades in peacetime.