
Computer Vision in 1982 Using Altair Computers and a Cromemco Cyclops Camera - Flyguy_
https://www.youtube.com/watch?v=2y5oVHNfbf8
======
lotyrin
In the context of my life right now, this makes me pretty sad: seeing
effort, ingenuity and engineering leap the many hurdles of that era and
build something that still seems remarkable to me, because of my prolonged
exposure to the lack of talent, knowledge, skill, vision and perseverance in
my corner of the industry (the overlap of "web development" and "enterprise
IT").

If the footage here had been taken with modern camera equipment, so I could
pass it off as being from today, I feel like I could probably show it to
colleagues and make up a story about how it takes some servos designed by
Elon Musk, a quadcopter with a 4k camera that hovers in place above the maze
(for whatever reason), uploads the footage (with no regard to latency) to a
"Deep Learning" algorithm invented by Google and a motion-control algorithm
designed by Boston Dynamics that "only" uses 8 GPUs and "only" costs a few
dollars an hour running in Amazon Web Services, without having many
questions to answer about any part of that.

~~~
WhitneyLand
>makes me sad...my corner of industry (the overlap of "web development" and
"enterprise IT")

Why do you stay there and keep doing it?

~~~
i336_
That's where the money and expertise is, I'm guessing.

And it's one of those fields where you have to invest a truckload of time and
energy in advance learning about everything, but once you've done that,
keeping up is (provided you're on a focused team) not _that_ hard and it's
easy to be productive. But webdev and enterprise IT are so thoroughly
vertically integrated (especially when developing LoB apps) that moving away
basically means facing a nontrivial period of downtime.

If you're valued enough and don't have a primarily liquid lifestyle (relative
to cost of living, etc.), then living off savings could probably work for long
enough to study and line up a job somewhere else, but that doesn't discount
the added cost/toll of the extra stress it will inevitably impose.

~~~
WhitneyLand
I think your guess is probably right.

But some enterprise IT jobs can be soul-crushing. I don't mean to sound
first-world ungrateful, but we are lucky enough to have choices.

How do you non-financially weigh the benefit of pursuing a lifetime of
rewarding work at, say, one half the salary you could otherwise earn?

~~~
i336_
> _But some enterprise IT jobs can be soul crushing._

Mmm, yeah. Good work-life balance that provides for some sense of distance is
critical with such positions. Something that helps you see beyond the job.

If the job makes that difficult or impossible, and it's not because you're in
the middle of a sprint or some other temporary situation, burnout is a matter
of _when_, not _if_. If you can't find a way to mix things up and get a
change of scenery within your current environment (e.g. working on a different
project, or working as design lead on some totally unrelated team), it might
be a very, very good idea to start mentally preparing yourself to redo your
CV, I think.

> _How do you non-financially weigh the benefit of pursuing a lifetime of
> rewarding work at say, one half the salary you could otherwise earn?_

Obvious answers are quality of life, general mood and health, etc.

Money is after all a tool, not an end in and of itself. Sure, money will get
you lots of "friends" and social status, but that's only because you look
shiny and might drop breadcrumbs. I'm not aware of any other reasons money is
useful; indeed, the more you have the more vulnerable you are, beyond certain
thresholds.

------
deutronium
I love how it sounds like they used a re-purposed memory chip as the camera's
image sensor.

[http://jalopnik.com/the-first-digital-camera-you-could-buy-w...](http://jalopnik.com/the-first-digital-camera-you-could-buy-was-a-total-hack-1215543300)

~~~
cromwellian
To me this is one of the purest expressions of the hacker ethos. I work at
Google, and our ethos focuses on doing things the right way, e.g. well
engineered, scalable, etc. This works, but it is often expensive and slow to
execute, and the result may last longer without having to be rebuilt from
scratch.

But when you're hacking, you use what's available, often outside of the
intended purpose of the components. You make the impossible possible today,
rather than years from now.

So Cromemco was selling a totally non-consumer-friendly item, but they made
prosumer digital photography available 12 years before anyone else, and while
this didn't benefit a huge number of people, this kind of early exploratory
product creation can influence lots of other people to do things with it that
eventually spawn off industries.

I think the first real AI breakthrough, or what we think of as sci-fi-style
AI, might come not from researchers or large companies, but from some hackers
mashing up a ton of techniques and approaches that make no logical sense to
combine and can't be fully explained, but somehow "work".

~~~
deutronium
Yeah I agree it's a really cool hack! I wonder if they could have even used
multiple chips to increase the resolution, assuming they could re-package the
silicon to minimise gaps between them.

One of the books I really love is "Hackers: Heroes of the Computer Revolution"
by Steven Levy, which shows the inventiveness of those guys!

With respect to AI, that sounds really interesting; I'd not thought a lone
researcher could create something so complex, but that would be really
awesome. One of the projects I find really fascinating is OpenWorm. If I
recall correctly, they have a bunch of videos where the neurons of the C.
elegans worm have been recorded optically, and they're creating their model
from those.

------
contingencies
Interesting to consider how, according to my recent reading of some machine
vision review literature, the industry's standard approach to process-oriented
vision tasks has apparently changed so little in 35 years. Already we see
high-contrast polarization of the image (reduction to black and white/binary),
a feedback loop, ring lighting, and task-specific image object classification.
Of course, there are newer techniques, but for many linear processes
incorporating machine vision the same general approach is still taken.
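
For anyone curious what that classic recipe looks like in code, here's a
minimal sketch using OpenCV in Python (nothing from the video; the file name,
threshold choice, and blob-area limits are made-up assumptions): binarize a
ring-lit frame to high contrast, then apply a simple task-specific
classification rule to the resulting blobs.

```python
import cv2

# Hypothetical frame from a ring-lit inspection camera (file name assumed).
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# High-contrast reduction to black and white; Otsu picks the threshold
# automatically, though a fixed value works fine under controlled lighting.
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Task-specific object classification: find blobs and accept/reject by area.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
MIN_AREA, MAX_AREA = 500, 5000  # assumed limits for a "good" part
for c in contours:
    area = cv2.contourArea(c)
    print("pass" if MIN_AREA <= area <= MAX_AREA else "reject", area)
```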

~~~
tluyben2
I was thinking that as well; I had a project that needed computer vision and I
thought I'd hack a prototype to give to the real experts. I used OpenCV, but
with techniques I had learnt in the early 90s at university, on Sun
SPARCstations in C. When I was done I handed it over for the 'real'
implementation, and that team told me it was done much as they would have done
it themselves. Of course they knew many tricks to optimise it further, but
like you say, the basis did not change much. Do the modern neural nets need
this level of preprocessing too?

~~~
Cyph0n
I used OpenCV for the first time around 2 years ago, and I asked one of my
professors if he had a good CV text I could use to quickly learn the basic
techniques. The book was from the late 90s, but the algorithms it covered were
basically the same as those in the OpenCV API. It was quite surprising to me
that not much had been added to the state-of-the-art since the publication of
the textbook.

For those curious, I was writing some code for 2D object distance estimation
for use in an undergrad robotics competition. My team ended up losing, but I
think we could have done better had I not started reading up on CV and writing
code only a week before the deadline. I'm surprised we were able to even get
past the first round!
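
Not the competition code, but for anyone wondering what single-camera 2D
distance estimation usually boils down to, here's a sketch of the standard
pinhole-camera similar-triangles trick (all the numbers below are made-up
assumptions): distance ≈ focal_length_px * real_width / pixel_width.

```python
def estimate_distance(focal_length_px, real_width_m, pixel_width):
    """Pinhole-camera estimate: an object of known real width that appears
    pixel_width pixels wide is roughly this many metres from the camera."""
    return focal_length_px * real_width_m / pixel_width

# Calibrate once: a 0.20 m wide object measured at 1.0 m appears 160 px wide,
# so the effective focal length is 160 * 1.0 / 0.20 = 800 px (assumed values).
focal_length_px = 160.0 * 1.0 / 0.20

# At runtime the same object appears 80 px wide, so it is about 2.0 m away.
print(estimate_distance(focal_length_px, 0.20, 80.0))
```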

------
muzster
If I have seen further than others, it is by standing upon the shoulders of
giants.

