
Programming in the 1960s - nickpsecurity
https://www.slideshare.net/lenbass/programming-in-the-1960s
======
daly
I worked for Unimation, the company that invented Robots. The 'research lab'
consisted of myself and Ming. We were, I believe, the first people to ever
connect a computer to a robot. It was a PDP-11/03
([https://en.wikipedia.org/wiki/PDP-11#/media/File:DEC_LSI11-2...](https://en.wikipedia.org/wiki/PDP-11#/media/File:DEC_LSI11-23.jpg))
connected to a 6-axis industrial robot
([http://www.robothalloffame.org/inductees/03inductees/unimate...](http://www.robothalloffame.org/inductees/03inductees/unimate.html)).

We built a board to talk to the robot by replacing the robot plated-wire
memory
([https://en.wikipedia.org/wiki/Plated_wire_memory](https://en.wikipedia.org/wiki/Plated_wire_memory))
with our wires. We ran into two problems. The first was a hardware bug. We had
a 'ground loop'
([https://en.wikipedia.org/wiki/Ground_loop_(electricity)](https://en.wikipedia.org/wiki/Ground_loop_\(electricity\)))
when we connected the 5-volt DC computer bus to the 440-volt AC robot. The
computer was unhappy.

The programming bug was more subtle. Ming's computer card required a 'trash
read' (read-and-ignore-results) in order to initialize the card. So the first
read of the robot position required 2 reads, the first result was ignored and
the second result was used. Normally I wrote machine language using paper
tape. But that week I got a BLISS language compiler
([https://en.wikipedia.org/wiki/BLISS](https://en.wikipedia.org/wiki/BLISS))
from a programmer at CMU. It was a super-optimizing compiler. The machine
language program worked. The BLISS language program did not. Ming finally
convinced me that the BLISS program, despite what the source code said, was
NOT issuing an initial double-read (he used an oscilloscope on the data lines).
The bug was that the BLISS compiler figured out that the first read results
were never used and that it could optimize the first read away, no matter what
the source code said. Props to the BLISS guys.

~~~
phreeza
What drove the robot before that, just a bunch of relays? Or manual input?

~~~
daly
The robot had a 'teach mode'. You used a hand controller to move the robot to
a position, pressed a 'teach' button, and then moved to a new position. The
values from the 6 position encoders were written to the next memory location
when the teach button was pressed. To run the robot you just stepped through
memory and moved to each recorded position.

Ford had a long (multi-mile) production line. It took weeks to set up a new
line, part of which was taken up by robot programming. You had to move the
product line a few millimeters, teach a new robot position, and repeat. Our
research problem was to teach the robot standing still and then, given
information from an encoder on the production line, compute the path that the
robot should take to hit the moving position. That would reduce the robot
programming time to a few minutes.

Really a simple problem. A 7-degree-of-freedom, collision-free robot path in
real time based on static data, programmed on a micro with 8 kilobytes of
memory, including space for initial data.

Bill Perzley was the 'head of research', on loan from Condec Corp. He was a
genius and a super-intense cigar smoker (the harder he thought, the more smoke
was generated). He worked out the exact memory data BY HAND to make sure my
program got the right answers.

Bill and I flew out to SRI International to learn how to compute robot joint
positions from (Richard) Lou Paul. On the flight back to Connecticut we sat in
the back of the plane near the restroom studying our notes. Bill was puffing
on his cigars. By the end of the flight you couldn't see all the way up the
cabin. Those were the days.

------
daly
When I worked at IBM Research we got access to the latest hardware (IBM
RS/6000) and the latest software (AIX), written by guys down the hall. This
machine was "blazingly fast" for its time and had a useful feature for us. You
could add a new disk drive and dynamically extend your filesystem over all the
available drives. We grew to a 3-drive system which contained all our master
source code.

One day a hard disk failed. The system couldn't reboot because the operating
system needed "quorum" (all the drives) and, in particular, it needed part of
the operating system on the failed drive. So we lost ALL of our source code.
You didn't keep sources on your own drive because you used SCCS to check them
in/out and you didn't have enough space for very much.

All eyes turn to me. Sigh. The game was to hook up a disk drive to another
machine, open the drive for raw read, and read the drive sector by sector. The
theory was that the failed drive could still be read, just not certain
sectors. Anyone who tries this will discover that there are a LOT of details
about drives they never knew, such as "overflow sectors" and file-system
layouts and log-file formats. Plus there are multiple copies of code pieces
because the new version is allocated but the old version is in "free space"
but not reclaimed. All in all, it was a long time recovering the source tree.

Raw read works. You knew about that, right? You can handle head/track/sector
maps on your SSD, right? You know your B-trees cold and your log-structured
filesystems, right? Forget the Order of an algorithm. Can you recover vital
company assets from a failed hard drive?

Fun times. Don't you miss them?

~~~
cat199
Miss those times?

Nothing specific to those times about having a terrible backup strategy..

Magtape existed from the very early days and definitely was around during this
fiasco.. similarly people still don't run backups when they should..

which reminds me, I need to kick off an offsite backup :)

~~~
daly
So, backup to tape... Story there. I did a backup system on the mainframe to
tapes. The first issue was that the machine room operators would 'recycle' the
tapes they dismounted. So while we thought we had backups, the operators just
assumed they were 'scratch tapes'. Second, after having solved that problem we
found that a tape was giving us hundreds of thousands of duplicate records.
When I visited the actual tape drive I could see that the drive found a bad
spot on the tape, did 'auto-recovery' to reread the tape, rinse and repeat.
The end result was a physically transparent section of tape. I used to carry
it around in my wallet to explain that even backups fail.

~~~
DrScump
Was that before tape catalogs and volume labels? That was well before my time.

In the OS/MVS environment I worked in during the 80s, the job's JCL would identify a
data set and a relative version number (+1 when creating, 0 for current, -1
for its predecessor, etc.), then the tape catalog would look up the volume
number, which was written on the tape case itself (and _usually_ correct). If
the operator mounted a wrong tape (either wrong dataset or version), the tape
would be kicked off and you'd be prompted again ("you dummy humans can't even
handle _unpacked decimal?!_ ")

It was typical that backup copies be made via an additional DD pass.

------
daly
I was a mainframe systems programmer. Every so often the system would crash.
Step 1 was to visit the machine room and look for the toolbox. The on-site IBM
maintenance guy would randomly power down and open cabinets to perform
"preventative maintenance" so the system wouldn't crash due to hardware
failures. Occasionally he powered down our disk drives.

One of the standard hardware debug methods was to 'swap cards' with a working
unit. We had many controllers for our terminals (connected by coax to the
office desks). When one failed they would swap cards with another controller
until the first started working. Then they would order a new replacement for
the failed card.

As a result we had a 3-day outage. Each controller was purchased at a
different time so the controllers had cards with different versions (different
'yellow wires' for patches). Eventually a bad combination of controller cards
caused the channel (an IBM closet-sized box) to be confused. That caused the
CPU to be confused (since it needed channel 0). So the CPU refused to boot,
which IBM maintenance tried to debug by changing cards in the CPU. Sigh. We
had to bring in one of the mainframe designers to debug it.

~~~
chris_st
I was watching the VAX-11/780 at college get repaired by the DEC tech. She
told me of her most difficult case: a randomly crashing 11/780. They checked
everything (I believe she said that they eventually replaced every card!) and
finally found a little piece of metallic gum wrapper -- it'd blow around in
the fans and eventually bridge two pins, crashing the machine. They'd shut it
down to look at it, and it would settle onto the wires above the fans and wait
for its next chance.

Glad I work in software :-)

~~~
WalterBright
I worked my way through college by wirewrapping prototype boards. The last
step was to get a magnifying glass and carefully inspect the board, pin by
pin, looking for any stray tiny bits of wire, or any wire hairs sticking out.
For soldered boards, I'd examine for any tiny solder bridges.

Doing that would save a ton of debugging time :-)

Since wirewrapping is a long obsolete technology, here's what the boards
looked like:

[https://microship.com/wp-content/uploads/1974/07/wirewrap.jpg](https://microship.com/wp-content/uploads/1974/07/wirewrap.jpg)

~~~
tsomctl
Wirewrapping is still done; it's pretty common for copper telephone lines and
T1 circuits at the central office.

------
daly
My first industrial programming job was at Unimation.

Grab the edge of the table. Line up your wrist and your forearm. You'll
discover that you can move your wrist/forearm combination without moving your
hand. That is, there are multiple possible solutions that will place your hand
in that position.

The Unimate 2-arm robot had that problem. When the wrist/forearm aligned, the
wrist violently twisted one way while the forearm violently twisted the other
way yet the hand never moved. Nobody knew why but it was clearly "the
computer's fault".

I was given the paper tape that ran the robot (binary code for a Data-General
Nova computer). I asked for the source code, but the program had been written
by contractors who had gone out of business, and nobody had ever asked them
for the source code. It turned out that the program was trying to compute tan(0/0)
when both the wrist and forearm encoders read 0. All I had to do was rewrite
the trig routines (in fixed point/real time code) to avoid the issue. In
binary. Using a pin-punch to modify the paper tape (a block of metal with 8
holes and a round metal pin that fit through a hole to punch the tape,
yaknow... like you used in your CS courses) by finding a JMP instruction I
could mangle properly to jump to the new subroutine address, compute the
result, and jump back.

See what you're missing?

------
daly
Unimation had the first 2-arm robot, a research project for Ford. I was in
charge of programming the Nova (Data-General) computer (by toggle switches
first, and then by paper tape).

We were assembling a transmission control unit. It changed gears based on
rotation speed by sliding a calibrated spool valve against a spring. The spool
valve was a TIGHT fit. It controlled oil flow to shift the transmission. You
couldn't insert it by hand without it jamming. The robot was so precise it
could insert it easily.

The robot was made of aluminum and had to be run until it reached operating
temperature because otherwise the joint lengths were not right. Everything had
to be perfect.

Assembling involved picking up the control unit, placing a small cover,
driving two screws, turning the controller over, inserting a spring, inserting
the spool valve, placing a second cover, inserting two screws, and putting the
finished assembly away. Repeat.

We made hundreds of these. The robot was FAST. REALLY FAST. In a fit of
misplaced confidence we invited Ford senior executives to a demo (I HATE THAT
WORD). They had invested a LOT of money so I guess they should see what they
bought.

On the day of the demo, during warmup, the robot managed to mangle its fingers
on a fixture. Our machinist made a new set from raw materials while we smiled
and chatted with the Ford executives and warmed up the machine.

Demo time. We lined up near the robot, with the Unimation chief engineer and
the project engineer, and the 3 Ford senior execs in front, Ming and I behind
and out of the way.

Push the button.... The robot picks up the control unit, places the small
cover, throws the screws at the Ford people, turns the controller over, throws
the spring at the Ford people, picks up the spool valve (a solid steel piece)
and... We hit emergency stop (the big red button). I nearly died trying to
keep from laughing watching Ford execs dodging flying parts.

We recalibrated the robot and re-ran the demo, which worked fine.

------
daly
An old programming meme is the "Halt-and-Catch-Fire" bug. It turns out that
this is a real bug.

The X11 system used to require knowledge of monitor characteristics in order
to install it. So you had to know things about the clock rates, flyback time,
etc. to use a monitor. There were many X11 threads about finding the settings
for a new monitor since the manufacturers were not very forthcoming with them.

A bad setting for these monitor parameters would cause the monitor to overheat
and potentially catch fire.

On another thread, the Pentium processor had the 'F00F' bug
([https://en.wikipedia.org/wiki/Pentium_F00F_bug](https://en.wikipedia.org/wiki/Pentium_F00F_bug)).
A user-level, non-privileged instruction 'F00F' (in hex) would cause the processor
to halt. It was normal to send someone an executable that contained that
instruction just to force them to reboot their system.

So some dimwit (umm, me?) decided that deliberately setting the X11 params
incorrectly followed by F00F would essentially cause the system to "Halt-and-
Catch-Fire". Fortunately we unplugged the monitor before it got too hot.

~~~
pjmlp
> The X11 system used to require knowledge of monitor characteristics in order
> to install it. So you had to know things about the clock rates, flyback
> time, etc. to use a monitor. There were many X11 threads about finding the
> settings for a new monitor since the manufacturers were not very forthcoming
> with them.

Oh boy, I remember those days playing around with xf86config and hoping for the
best.

When I got Slackware 2.0, my card was a Trident that could only do 640x480 by
default, anything higher required playing with that configuration.

~~~
frik
I remember an early Linux from around 1999; I had to dig around xf86config as
well. The book I used as a guide had scary all-uppercase warnings to be extra
careful.

And getting sound and modem working took me several days. The WinModem
especially caused trouble, since major parts of the modem functionality sat in
the Windows driver, so getting it to work on Linux was no fun - but I got it
working.

------
daly
Out-of-position welding occurs when you have a seam between two pieces of
metal and the seam is not horizontal. Companies would spend a lot of money to
build fixtures to rotate their equipment in mid-air so seams could be flat,
letting them use lower-priced regular welders instead of Master Welders. Some
industries, such as super-tanker construction, cannot do this and
have to employ Master Welders.

I was hired because I did machine vision in grad school (using punched card
images because the school did not own a camera). My job was to use the
camera to optimize the 'puddle' of molten steel as it melted so the seam was
precise. But some bright spot figured out that we could use current-feedback
from the arc instead (so we never bought a camera).

We created a robot that you could just guide along a seam to be welded, press
a button, and it would weld the seam. It was light-weight and could be clamped
anywhere so it was extremely portable and perfect for things like ship-
building.

Everything ran off my PDP-11/40 computer which we loaded using paper tape off
a teletype machine. We shipped the robot, computer, and teletype to Detroit
for the demo. As a precaution, in case something went wrong, I used mylar tape
instead of paper tape. We got to Detroit the night before. The
investors would be in a big hall next to the demo room where we had the
equipment. Joe Engelberger
([https://www.robotics.org/joseph-engelberger/about.cfm](https://www.robotics.org/joseph-engelberger/about.cfm))
was to give them a big speech and then bring them in for a demo.

The truck arrived that night. We started unpacking. The robot was fine. The
PDP-11/40 had all of its circuit boards in a heap at the bottom of the box.
The teletype had a big V dent in the keyboard (up to the Y-row of keys)
because a fork-lift had smashed it. So we reseated the PDP-11 boards and rewired
the teletype so we could start/stop the paper tape reader and give commands by
switch without using the keyboard. Then we tried to load the program but it
wouldn't work. It turned out that the memory-address chip on the PDP-11 memory
board was cracked. So we ran out to Radio Shack as soon as it opened in the
morning, bought a replacement chip, a soldering/desoldering gun, and some
solder. We replaced the chip while someone stood at the back of the lecture
hall signalling Joe to "keep talking". Finally we loaded the paper tape and it
worked. Oh, the joys of programming.

------
gtirloni
Always entertaining to read about these stories.

The elevator shaft issue with the bigger computer was fun. We once rented some
co-located space in a data center that had a small door and 19" servers
couldn't pass through without being tilted (which got tiring and stressful
after a few servers, especially if we heard some loose component moving in the
chassis). Nothing compared to having to modify the building for a different
computer!

Wikipedia has a great page about this era:
[https://en.wikipedia.org/wiki/Timeline_of_computing_1950–79](https://en.wikipedia.org/wiki/Timeline_of_computing_1950–79)

------
52-6F-62
My grandmother worked in the oil industry around that time. I think she was at
Gulf in the 60's / 70's before moving on to Shell and retiring in the later
80's.

When I was too young and unknowing she would lecture me about her work in
Fortran and other languages in her systems analysis work. Unfortunately she
lost her mind and passed away before I had the chance to question her in my
own better state about her work.

------
johnohara
I think slide 21, "Me ~1980" is dated a couple of years too early. VT220's
were introduced in 1983. Doesn't matter, that's what it looked like. That was
my desk too. Although I had an overhead bin.

Chasing commas, in amber, at 9600 baud, 8-bit, no parity, one stop bit.

~~~
daly
Which reminds me... When I moved from front panel switches to hooking up a
teletype I ran into a problem. The teletype was a current-loop device
([https://en.wikipedia.org/wiki/Digital_current_loop_interface](https://en.wikipedia.org/wiki/Digital_current_loop_interface))
and the computer used voltage. I took the computer apart (a Data-General
Nova), looked at the interface circuit for a while, and realized I could get
current-loop if I made some minor hardware mods involving cutting some circuit
traces. Fresh out of school on my first job it was kind of nerve-stretching to
mangle a very expensive computer board (DG liked a single giant motherboard).
Nevertheless, I persisted. And it worked. Whew.

Surely CS grads today learn electronics, right?

~~~
qubex
”The only person more dangerous than an electronic engineer with root
privileges is a programmer with a soldering iron.”

------
A_Person
One of my early jobs in the late 60s/early 70s was with a small government
group providing online data processing services to various local hospitals.
The system ran on two CDC computers, a 1700[1] and a System 17. These were
originally intended for things like process control, but this group had added
a home-grown multiuser timeshare system on top, then an online pathology
application system on top of that!

My first experience in this group was watching them debug a problem. They had
an old FORTRAN listing spread out over a table-sized (10Mb!) disk drive unit.
The bug was in 5 lines of code, but it just didn't make any sense - it was as
if one line was simply not being executed.

Eventually one of the guys went over to the command console and looked at the
object code on disk. "Hey, that 'IF' statement isn't in the object code!".
More head scratching ensued, until someone looked more carefully at the grubby
thumbprint at the start of that line on the dirty old listing.

Under that grubby thumbprint was ...

[can anyone guess?]

[1][https://en.wikipedia.org/wiki/CDC_1700](https://en.wikipedia.org/wiki/CDC_1700)

~~~
jloughry
A stray character in the comment field (cols 1-7)?

Tell us!

~~~
A_Person
A 'C' in column 1 - FORTRAN's comment indicator! So the statement was there,
but commented-out. The 'C' was exactly under the thumbprint at the start of
that line on the dirty old listing. No-one had thought what might be under
that thumbprint :-)

------
keithpeter
Undergraduate mid 1970s (cf. slide 3 in OA): "Coffee batch". Put your cards
in with a job card. Go and have coffee. Come back and pick up the output
(Algol 68 so usually a mistake or two).

Postgrad early 1980s: Card punch on mainframe. I recollect some dumb terminals
and some access to minicomputers. Supervisor bought an Apple II and a floppy
disk drive so things got saner later on.

Happy days.

~~~
walshemj
I remember, in the late 70's, going into our terminal room and meeting one of
our engineers who was checking on the progress of a job she was running on an
ICL mainframe down at AWRE. She sighed and said "48 jobs in the queue" -
presumably it ran the next day.

------
pjmorris
It was 1981, but this technology got me my first date. Community College
Fortran (77) class, punched (IBM 029 keypunch) cards fed to a 360 emulator
running on a Burroughs 6800. Had to hand the cards in at a window, wait
anywhere from hours to the next day for a green bar printout usually laden
with errors. Lots of time spent desk-checking, going over cards. Because of
the setup and the time involved, the keypunch/window room was a sort of social
center. After some initial struggles, I sort of got the hang of things, to the
point where people would ask for help. I didn't think anything of offering the
help. One day a girl said 'Let me buy you lunch, as thanks.' She had other
plans romance-wise, but we were friends for years, and it was the first time I
even had an inkling that being a geek didn't have to mean a life alone.

------
DrScump
"They had an IBM 360/44... what we would now call a RISC machine"

Wow, I wasn't even aware of that... I thought the instruction sets of the 360
series were all compatible.

[https://en.wikipedia.org/wiki/IBM_System/360_Model_44](https://en.wikipedia.org/wiki/IBM_System/360_Model_44)

------
Stratoscope
I learned how to program in 1968. My high school in Scottsdale, Arizona had a
computer class. I didn't know about the class and didn't enroll in it, but I
saw the Teletype in the corner of the math classroom and asked about it, and
they let me use it after school.

Dialup computer time was expensive; I think the school paid $30/hour - in 1968
dollars! So you couldn't just stay online and type in your program
interactively. You would punch your entire program onto a paper tape and then
feed the tape back through to print it out locally so you could check it. Then
you'd dial up, feed the tape through again, wait for the results to be
printed, and hang up.

This video gives a pretty good idea of what the Teletype sounded like. Listen to it
with headphones or speakers turned up:

[https://www.youtube.com/watch?v=ObgXrIYKQjc](https://www.youtube.com/watch?v=ObgXrIYKQjc)

Yes, it hummed like that the entire time it was turned on, and the keyboard
and print mechanism were both _loud_. At a later job I had, you could tell
when the mainframe went down because you could hear people up and down the
hall bashing on their keyboards in frustration.

A bell rang somewhere around column 72 (you can hear it in the video), and if
you got to column 80 the print head would just start typing over the same spot
over and over again.

At the end of each line you pressed three keys to punch these characters onto
your tape: Carriage Return, Line Feed, and Rubout. Carriage Return moved the
print head to the first column, Line Feed advanced the paper, and Rubout was a
character that was ignored by the print mechanism and by the computer you
dialed up to. This extra character gave the print head and paper feed some
time to catch up when you ran the tape through.

Rubout was also how you corrected errors. Being character code 0x7F, it
punched out all the holes in one row of the paper tape. Whatever other
character was there before, it was now a Rubout. So you would hit Backspace to
literally move the paper tape and print head backwards. Then Rubout to punch
away the error, and start typing again. Of course you had a mess on the paper
at this point (both your old and new characters printed at the same spot), but
the tape would be OK.

The next summer I got a job at the timesharing company the school used, and it
was an amazing experience to type in a command and get a response back from
the computer within a few seconds, without having to worry about the cost of
being online.

My official job title that summer was "Night Operator". That way they only had
to pay me $2/hour. But it was OK, their timesharing service shut down at night
and I got to do whatever I wanted with the SDS Sigma 5. I found the Algol 60
Report and the assembly language manual and went from there.

It was my first personal computer!

------
agumonkey
so the IBM 360 is the Roman gauge of elevators?

~~~
stevefolta
Maybe the other way 'round: I've heard that IBM designed their equipment to
fit in standard elevators (whatever that may have been).

