Programming with trits and trytes!
Unfortunately, domestic Soviet computers were effectively killed by a 1970 decision to base all future efforts on a "Unified System" which was really a clone of IBM System/360:
Playing it safe by shooting yourself in the head...
exactly why I'm learning AWK via the gray book
* You can negate a number by interchanging 1 and 1̅.
* The sign of a number is given by its most significant non-zero trit.
* Rounding to nearest integer is the same as truncation.
By the way, if we had balanced ternary computers today, you wouldn't have programming languages with both signed and unsigned integers. They'd be the same thing.
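Here's a quick Python sketch of the negation trick - purely illustrative, with made-up helper names, not anything from Knuth:

```python
# Balanced ternary: trits are -1, 0, +1 (-1 stands in for 1-bar).

def to_bt(n):
    """Convert an integer to a list of trits, least significant first."""
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:        # a digit 2 becomes -1 plus a carry into the next trit
            r = -1
            n += 1
        trits.append(r)
        n //= 3
    return trits or [0]

def from_bt(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

def negate(trits):
    return [-t for t in trits]    # just swap 1 and 1-bar

t = to_bt(14)                     # [-1, -1, -1, 1]: 1 1̅ 1̅ 1̅ read MSB-first
assert from_bt(negate(t)) == -14  # one representation covers both signs
```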
Of course the above might be academic if it's more difficult to implement in CMOS ...
Edit: Also, you can argue that the most economical radix is e, and 3 is the closest integer to that ideal. See: https://web.williams.edu/Mathematics/sjmiller/public_html/10...
Edit #2: Removed the claim that addition is easier than in binary. I misread what Knuth was trying to say.
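For what it's worth, the radix-economy claim is easy to sanity-check numerically - a throwaway snippet of my own, not from the link above:

```python
import math

# Cost of representing N in base b is roughly (digit count) x (states per
# digit) = b * log_b(N) = (b / ln b) * ln N, so just compare b / ln b:
for b in (2, 3, 4, 5, 10):
    print(b, round(b / math.log(b), 3))
# -> 2 2.885, 3 2.731, 4 2.885, 5 3.107, 10 4.343
# The continuous minimum is at b = e ~= 2.718, and 3 is the nearest integer.
```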
I'm skeptical of the value of ternary. The key to the success of binary computers is that binary has great noise immunity; getting a _single_ threshold working is way simpler than getting two or more working. (Yet we do go there for NAND flash - but note how speed and endurance worsen dramatically as thresholds are added.)
The "most economical radix" is often cited but it's a red herring - it ignores practical concern of greater importance.
""" Fame is partial with her favor, and has not seen fit to bestow any upon the creators of the panel switch, the type E relay, the crossbar marker circuit. There are no biographical anecdotes we can summon to illuminate the lives of these men; the only readily available remains of their lives are the stark fossils of the machines they created.
There are times when I can lapse into such prose, but I try to balance my writing with my perceived audience's expectations. One could compare my postings here with those on other forums I frequent, like /r/justrolledintotheshop, and would definitely notice a difference.
I don't believe I could summon the level of prose this author has, though.
Finland had strong import controls right after the war, so imported electronics were very expensive. Industrial automation still used legacy hydraulic (fluidic) logic to control complex pulp-mill processes into the '70s. It was very steampunk.
Hydraulic logic control is still a useful thing, but nobody builds complex industrial automation with it anymore.
One downside to this is the case where there's condensation in the line and temperatures drop below freezing: you can lose control thanks to the ice that forms. I remember this causing trouble with power plants in the '90s during an ice storm in Houston (which is usually quite warm and humid, so they didn't think too much about icing).
Amusingly enough, lots of those pneumatic systems use 3-15 PSI signalling, which works the exact same way that 4-20 mA signalling works -- with a live zero so breaks in circuits can be detected!
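To make the live-zero idea concrete, here's a small Python sketch; the 3.6 mA fault threshold is just a common convention I'm assuming, not something from the parent comment:

```python
def scale_4_20(ma, lo=0.0, hi=100.0):
    """Map a 4-20 mA loop current onto an engineering range."""
    if ma < 3.6:   # far below the live zero: the loop must be broken
        raise ValueError(f"loop failure: {ma:.1f} mA")
    return lo + (hi - lo) * (ma - 4.0) / 16.0

print(scale_4_20(12.0))   # 50.0 -- mid-scale
try:
    scale_4_20(0.2)       # a 0-20 mA scheme would read this as a valid "1%"
except ValueError as e:
    print(e)              # with a live zero it's unambiguously a broken wire
```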
It's like classic reverse DNS. There was already a mapping from A to B (phone number to outgoing wires) and a mapping from B to A was needed for billing purposes. But B to A info couldn't be obtained from the switch fabric. A physically separate B to A mapping had to be built and maintained in sync. There was nothing which inherently made the two match. That was all done by hand.
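A toy illustration of that failure mode (the numbers and pair names are invented): nothing ties the two tables together except a manual audit.

```python
# number -> outgoing wires (what the switch fabric implements)
forward = {"555-0101": "frame12-pair7", "555-0102": "frame12-pair9"}
# wires -> number, maintained separately for billing; one entry has drifted
reverse = {"frame12-pair7": "555-0101", "frame12-pair9": "555-0103"}

bad = [(num, w) for num, w in forward.items() if reverse.get(w) != num]
print(bad)   # [('555-0102', 'frame12-pair9')] -- caught only by cross-checking
```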
The early history of computing was a struggle to find a usable memory device. Relays were very bulky as memory devices, and none of the relay computers had much memory.
Also here: http://www.inf.fu-berlin.de/lehre/SS01/hc/zuse/node4.html
Interestingly, the machine (built in 1941) used binary floating point. (Not IEEE754. :-))
However, it was a marvel to see and hear that thing click and clack all day, with technicians whose ears were trained to detect issues, replacing relays just by listening to the switch.
Of course, billing was done by a guy on top of a ladder taking large-format B&W pictures of the bank of users' (mechanical) counters.
Might be expensive to house and maintain something like the SSEM or the CSIRAC, but universities have spent money on sillier things.
I love getting my Dad talking about some of the labs he's worked in over his career and kind of wish I could get a sense of how much has changed over such a short time.
After a couple go-rounds with that, we were then allowed to use the terminals, mostly Visual 200, with a few VT100s, an LA36, and an LA120. The Visual 200 wasn't a particularly good terminal; it seems there was always at least one out of order at any given time.
It's kind of scary that a $5 Raspberry Pi Zero is much faster and has vastly more RAM than the 11/34, which cost well north of $100K back then. Not to mention that there was a grand total of 28 MB of storage on that 11/34 in the form of two RK07 disk packs - you could fit thousands of RK07 images on a single 64 GB MicroSD card.
"You can buy FETs that have 1970s performance specs in 2017, very cheaply in fact, but in 2017 you can also buy FETs that output many watts of power at 10 GHz x-band microwave freqs. Here's a DEC flip-chip module containing two flipflops that topped out around 30 or so MHz in 1965, your assignment is to use modern transistors in the lab to make a modern work-alike operate over 10 GHz"
> buy FETs that output many watts of power at 10 GHz x-band microwave freqs.
Part number? Gate drive for that must be 'interesting'.
That said, on emulation, it's a great idea. It's easy to forget how many of the assumptions of C-like languages and Unix-like operating systems are built into the hardware, particularly when there are other ways to go about the task. There are reasons the industry has gone the way it has, but it's worth keeping those ideas around for posterity.
It had seven words of five bits each (that's all the relays we had). It could add, subtract and store results.
We got honourable mention.
BTW - how old were you (if you care to tell the world!) when you created this machine?
What made you take pictures of it? Did you keep any part of it, or was it all relegated to the "junk bin"?
I have found that it is extremely rare for machines like yours to have pictures, much less a writeup like the one you've created. It doesn't appear many people built such computers back then, and I don't know of any who published how they did it. For instance, I have yet to see an old "Popular Mechanix"-style article from the 1960s on "Build Your Own Electronic Brain" (as I imagine it would have been titled) - but such an electro-mechanical project would certainly have fit those kinds of pulp magazines.
Which I find odd. I don't know why these machines - few as they were - were never publicized; perhaps there wasn't an audience, or perhaps, because the machines were so few, the people able to write such an article were fewer still. I do know there was some interest in computing at a lay person's level, because several books on contemporary forms of computing and programming were available in the 1960s (most had enough information to let a person with sufficient skills and knowledge design and build a simple machine - perhaps you drew inspiration from such a source for yours?).
These early "hobbyist" computers, along with early hobbyist robots, represent the earliest dawn of what became the microcomputer revolution of the later 1970s - but the vast majority, if not all of them, are unfortunately lost to time.
I don't have much experience with assembly, but reading this article made me appreciate it more. Assembly basically operates on the same principles as the first computers.
...they're called "reproducing pianos" (also "reproducing player pianos" and "reproducers"). Not many were manufactured, due to their complexity, need for a lot of maintenance, and sheer cost.
Basically, they were a kind of player piano that strove to reproduce the actual mechanics and technique of the person who "recorded" the original paper roll. They did this with additional tracks that encoded certain nuances of the performance, so that when the roll was played back, the piano could play in the same manner.
These player pianos were much more mechanically sophisticated than regular player pianos, and those extra tracks acted like a form of control structure for the notes being played. I believe that on some of the models meant for public performances, you could select the song (and it would "wind" itself to the song, sensing when it had located the piece), and I think they also had an auto-rewind function - but that was about the limit of their operations.
I've always thought of a CPU - in its simplest form - as nothing more than a sophisticated and fast "player piano": memory is the roll, the word at an address is the holes in the roll at a certain point, and the CPU is the mechanism that carries out the operations those same holes dictate. This was in fact how some early electronic computers (known as "drum-based" computers) actually worked.
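That analogy translates almost directly into code. A toy fetch-execute loop - made-up opcodes, purely illustrative:

```python
# The "roll": memory, one instruction per row of holes.
roll = [
    ("LOAD", 5),      # acc = 5
    ("ADD", 3),       # acc += 3
    ("PRINT", None),
    ("HALT", None),
]

acc, pc = 0, 0
while True:
    op, arg = roll[pc]    # read the holes under the tracker bar
    pc += 1               # advance the roll
    if op == "LOAD":
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "PRINT":
        print(acc)        # prints 8
    elif op == "HALT":
        break
```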
The history of computers and computation is a fascinatingly deep and varied field of study; I encourage everyone to delve into it a bit.
These types of articles always put into perspective the amount of computing power I spend watching videos of a cat jumping into a box and falling over.
It's quite amazing what they accomplished with mechanical relays!