He described how a sensor would connect to this spot and feed the system temperatures from some remote part of the building. The control output would then go to the appropriate HVAC system and turn the A/C, heat, or whatever on or off to bring the temperature to where it needed to be.
The system would spider out around the building with these sensors and regulate the building's temperature.
At the heart of the board (which looked like a PC motherboard) was the CPU, a Z80, already hilariously ancient when he showed this to me. So I asked him, "Why not use a more modern CPU?"
He responded, "Why? This Z80 can control an office building's entire HVAC system and poll each sensor 200 times a second. How many times per second do you need? Temperature in a zone doesn't change that fast."
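For a sense of scale, the cycle budget actually works out generously. A quick back-of-the-envelope check (the 4MHz clock and 50-sensor count here are my assumptions, not from the story):

    # Rough cycle budget for the Z80 polling claim above.
    # Assumed numbers: a typical 4MHz Z80 clock, 50 sensors per building.
    clock_hz = 4_000_000
    polls_per_sensor_per_sec = 200
    sensors = 50

    reads_per_sec = polls_per_sensor_per_sec * sensors  # 10,000 sensor reads/sec
    cycles_per_read = clock_hz // reads_per_sec         # 400 cycles per read
    print(cycles_per_read)  # dozens of instructions per read: plenty for "read ADC, compare to setpoint"

Even with generous overhead, the chip is mostly idle.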
It was my introduction to the concept of "Lateral Thinking with Withered Technology" https://en.wikipedia.org/wiki/Gunpei_Yokoi#Lateral_Thinking_...
All because designing in a "limited" computer didn't make economic sense, and programmers couldn't help but use the extra CPU capacity that was available.
That is what makes IoT a challenge / bad-idea to a lot of people.
This comment is purely fictional. IoT is perfect.
But the primary problem, which is not limited to IoT but is especially visible there, is that companies ask themselves "what sells?" instead of "what is good and useful?". All that crap that is being created, with useless "features" that introduce security holes and foster the fragmentation of the ecosystem, is pushed because someone out there figures out that people will buy it. But almost no one understands the implications of all these "features", so the buying decisions people make are usually wrong and stupid.
I wish someone would cut sales people out of the design process. You should be able to get designers and engineers together, have them ask themselves what an optimal, actually useful smartwatch/smartfridge/smarttoilet/whatever would be and how to build it, and then tell sales people to figure out how to sell it. But no optimizing for better sellability.
Can't go this far, though:
> I wish someone cut out sales people from the design
> process. ... no optimizing for better sellability.
There does seem to be a minimally required feature set for selling things these days. "High Quality" isn't the compelling feature it once was.
That said, relying on an older processor may actually not save money. Sure, there's a premium on the absolute newest processor, but in general what's cheapest is what is most mass produced Right Now(tm).
I think a z80 on something like this was likely similar to the reasons that NASA control systems typically use the most reliable hardware they can, which means something that has been in use for many years.
For HVAC, maybe a little of each, but also the software may have been written to the z80, and if you change that out, you have to do all the testing you'd have to do if you built a new machine.
I often think back on this old chat I had with my grandfather, where he kind of tilted his head at something I was explaining about 90s tech and said something like:
"Interesting. In my day, we programmed the software to the hardware, it kind of seems like now you all are programming the hardware to the software."
I find that story utterly implausible.
The day Foxconn makes unapproved changes to Apple designs is the day that...well, never.
It seems like a service discovery system for IoT devices might be a good idea, where the discovery system tracks what is actually allowed to run a particular service - like an HVAC system running an embedded webserver.
For example, imagine if the industry had it so that the HVAC system would announce that it has the capability to act as an embedded webserver for status - but would first check whether there is a different host it should send its metrics to instead. This way you could control which host is the core host for said website - and have all systems in the community basically ask for direction on self-hosting or publishing...
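Something like this sketch - the endpoint, device ID, and response shape are all made up to illustrate the idea, not any real standard:

    import json
    import urllib.request

    # Hypothetical discovery endpoint and device identity.
    DISCOVERY_URL = "http://discovery.local/api/v1/placement"
    DEVICE_ID = "hvac-unit-17"

    def start_embedded_webserver():
        print("serving status locally")        # placeholder for a real status server

    def push_metrics_to(target):
        print("pushing metrics to", target)    # placeholder for a real metrics push

    def resolve_placement():
        """Ask the discovery service whether this device should self-host
        its status page or publish metrics to a designated host."""
        url = DISCOVERY_URL + "?device=" + DEVICE_ID + "&capability=status-webserver"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)  # e.g. {"mode": "publish", "target": "http://metrics.local/ingest"}

    placement = resolve_placement()
    if placement["mode"] == "self-host":
        start_embedded_webserver()
    else:
        push_metrics_to(placement["target"])

The device defaults to asking for direction; self-hosting becomes the fallback rather than the assumption.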
This is the ultimate YAGNI.
As I'm typing this my keyboard's controller is an 8051 variant, the touchscreen of my phone also uses an 8051, the mouse has a 6502, and the monitors in front of me have an 80186 core in them.
They are fast enough for the task, cheap, and development tools widely available, so they really don't need to be replaced.
(But you'd also find this spilling over into 3rd party hardware: My HD controller for my Amiga 2000 back in the day had a Z80 on it.
That machine was totally schizophrenic: in addition to the 6502 core on the keyboard, the M68k main CPU, and the Z80 on the HD controller, it also had an x86 on a bridge board. The A2000 had both Amiga-style Zorro slots and ISA slots, including one position where both connectors were in line, so you could slot in a "bridge board" with an 8086 that effectively gave you a PC inside your Amiga, with the display available in a window on the Amiga desktop.)
The CPU was less powerful than any of the x86 32bit chips that were widely available at the time, but as a kid it still really gave me the idea that whatever I could think of, I could make a computer do.
I'd agree, understanding things at a really basic level first helped me to better understand things at a higher level later on. It probably helps me to keep in mind what a computer actually needs to do to run code as well. I think it's probably one of the reasons Knuth uses MIX in TAOCP.
I'd say with the older ones. With those, you can put a logic analyzer on the memory bus and see what's going on - if the pins aren't on a BGA under the chip and the board has no vias.
To work with them is to teach bad habits and useless skills.
It's not that they don't have craziness, it's that the functionality that mere mortals need to use to write efficient code is simpler.
The M68k's 8 general-purpose data registers and 8 general-purpose address registers alone are enough to make a huge difference.
For me, moving to an x86 machine was what made me give up assembler - in disgust - and it is something I've heard many times over the years: it takes a special kind of masochist to program x86 assembly; for a lot of people who grew up with other architectures, it's one step too far into insanity.
The regularity of the 6502's instruction set is partially a consequence of using a PLA for instruction decoding. If you can decode a bunch of instructions with a simple bit pattern, it saves space.
But secondly, where the 6502 deviates from a tiny set of regular patterns it is largely by omitting specific forms of instructions, either because the variation would make no sense, or to save space - the beauty of the 6502 is how simple it is:
You can fit the 6502 instruction set on a single sheet of paper with sufficient detail that someone with some asm exposure could understand most of it.
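To make the regularity concrete: most 6502 opcodes follow an aaabbbcc bit layout, where cc picks an instruction group, aaa the operation, and bbb the addressing mode. A minimal decoder for the cc == 01 group (the ALU instructions):

    # Tables for the 6502's "group one" (cc == 01) opcodes.
    GROUP_ONE_OPS = ["ORA", "AND", "EOR", "ADC", "STA", "LDA", "CMP", "SBC"]
    GROUP_ONE_MODES = ["(zp,X)", "zp", "#imm", "abs", "(zp),Y", "zp,X", "abs,Y", "abs,X"]

    def decode_group_one(opcode):
        """Split an opcode byte into aaa/bbb/cc fields and look them up."""
        aaa = (opcode >> 5) & 0b111
        bbb = (opcode >> 2) & 0b111
        cc = opcode & 0b11
        if cc != 0b01:
            raise ValueError("not a group-one opcode")
        return GROUP_ONE_OPS[aaa], GROUP_ONE_MODES[bbb]

    print(decode_group_one(0xAD))  # ('LDA', 'abs')

The gaps in the table are exactly the omissions mentioned above: STA with immediate mode, for instance, would make no sense, so that slot simply doesn't exist.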
(Look at page 9. This IC is found in a lot of generic mice.)
(That "one exception" was the TI-84 Keyboard for the original Nspire, which did run the 84's firmware in a Z80 emulator on the Nspire's ARM processor.)
The thought never occurs to them that it's rock solid, only needs quarterly patching (at most), and has had 20+ years of tweaks that make it fit our needs. We don't need to replace it, yet.
Take, for example, the lockstep facility on certain IBM processors:
You can take two or more of them, run identical software on them, and compare their output on a cycle-by-cycle basis.
Now, the 750GX series may be a bit out of date in the modern era ... but good luck achieving that level of paranoid system integrity with just about any truly modern system.
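The hardware does this at the bus level on every cycle, but a toy software analogue of the same dual-modular-redundancy idea shows the shape of it (this is just my illustration, not how the 750GX facility is actually programmed):

    def run_redundant(fn, value, copies=2):
        """Run the same computation several times and cross-check the
        results - a crude software analogue of hardware lockstep."""
        results = [fn(value) for _ in range(copies)]
        if any(r != results[0] for r in results[1:]):
            raise RuntimeError("lockstep divergence: outputs disagree")
        return results[0]

    print(run_redundant(lambda x: x * x, 12))  # 144, or an exception on any disagreement

The hardware version catches transient faults that software replication on a single core never could, which is the whole point.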
One thing that I think they don't teach so well in most colleges is that a system's compute performance is not always the most important measurement of the system's capability.
A link for you:
Notice how "all systems crashed" from heat except the AlphaServer. That's great engineering, right there. It's why I wish they were still around outside eBay. That plus PALmode: most benefits of microprogramming without knowing microprogramming. :)
Thanks for sharing this, I love finding creative new ways to take advantage of 'tried & true' technology and it's something that regularly feeds into how I build software - sometimes to the displeasure of colleagues who are most interested in the shiniest new tools. It's interesting to read about how this sort of thinking worked for Nintendo.
Though I am a big proponent of lateral thinking in general, for battery powered devices the optics change a little I think.
They wouldn't be replacing the Amiga for $2M. They'll be replacing a 19 site remote HVAC control system, certainly including new radio systems, local controllers, and the central controller. They will want something with a warranty and a service contract for the next 20-30 years.
Though your point is still totally valid, I'm just thinking out loud.
Cheap and reliable Internet access is still not widely available. You don't want your HVAC to stop working because your ISP botched something up. Frankly, IMO there should be zero reason for any of the HVAC control loop to extend beyond the building. Sending data from your controller to your device next to it through half the world and back is insane, and yet that is what's happening with IoT right now.
1) Reliable internet isn't everywhere; it cuts out a bit.
2) A lot of internet-enabled thermostats are just not that great at reconnecting when they eventually lose their connection.
Though to be fair, the thermostats generally will just continue on the normal schedule if they lose connectivity.
The company ended up using a different wireless protocol to talk from the base station to the thermostats (Z-Wave or ZigBee, fwiw).
Ironically, we had an 80s-era programmable thermostat in the office, which worked pretty well.
I don't know, again I'm just thinking out loud, I'm no HVAC professional. We do use the internet to manage off site devices in my company and we tend to have more issues with power outages than we do with our ISP uptime, it's been a good solution for us.
Not saying it's not a good idea, just saying that in reality I've seen these systems work very poorly together in most office environments I've been in.
If I ever teach CompSci again, I would think about making a climate controller a problem set. My first choice is a storage-building controller, but this would be interesting.
Even if they do install local controls, a central monitoring system makes some sense for buildings that go unoccupied for days at a time.
Not all that much pops out of a few searches, but there are discussions of projects that make me think that $100,000 per building isn't a lot to spend on such a thing.
The point I was trying to make was that they likely already have other systems in the remaining buildings.
It is programmed by plugging dozens of wires into a control panel; each wire indicates something like "this card column goes to this adder and that printer column". To change the program, you pull out the panel and replace it with a different panel. The "software library" consists of shelves of panels wired up for each task.
This system is actually surprisingly sophisticated considering the lack of technology. For instance, there's a mechanical mechanism to suppress leading zeros in numbers. It can print text as well as numbers and supports three levels of subtotals as well as conditionals. It's also fairly fast, processing 150 cards per minute.
"Columbia processes grades by using a system that was first released in 1972. In order to have kept it running for more than 40 years, it has consistently run special-purpose emulators that make its otherwise state-of-the-art systems think that they are stuck in the 70s and using an operating system called “CP/CMS”:
The grading system is written in a programming language called “Focus,” which in 1975 was one of the very first database languages developed and released:
But because of this, grades are processed only once per day, in a batch job that runs at about midnight every night. I am not making this up.
The university, recognizing that it is time for it to upgrade, does have plans for replacing the grading system. The upgrade is scheduled for the year 2020. I am not making this up either."
I imagine these schools will eventually move to something similar: an emulated system with specialized hardware to interface with the mechanical equipment.
The company is long out of business, but every so often someone finds some of their products at a garage sale or industrial auction, or it comes in a box of other things on eBay. When they search for manuals on the product, they often find me, because by coincidence someone asked about those products on a forum I frequent and that conversation tends to come up on the first page of Google results.
It's amazing how many really old systems are still running after decades without change and no one gives it a thought until something breaks. I've often wished I had a list of their old customers so I could contact them and offer my services to upgrade old controls.
[sigh] another business opportunity missed.
There's nothing wrong with batch processes. They are often easier to reason about than having a system where everything can change in real time.
Sometimes you want real time. Sometimes (many times) a daily batch is just fine.
Playing devil's advocate here...
Do grades change during the day?
Enter the grades when you're working - i.e. during your work day.
Oh .. I know, teachers work way more than they are given credit for, and don't make nearly enough for the work they do, given that they're up until 4am doing some paperwork or other so that a few hundred students get their results .. just playing devil's advocate here. This is a 40-year-old system, still working. If it ain't broke, why fix it?
Seriously though, my personal nerd-ego says that anyone who can keep a system running in emulation over 4 decades deserves a special kind of award for that accomplishment. This is something few of us are capable of doing these days, alas ..
As to whether a system should process grades more than once a day, that's obviously a judgement call that Columbia has made. And as someone who worked in the administration of a similar university, a small part of me is cheering on whoever at Columbia has held fast against the demand for up-to-the-minute GPAs.
The vast majority of them wait til the last minute to do it, right at the end of term.
I would say that's an assumption we as developers need to teach society, but we should really work on teaching it to ourselves first. I can't tell you how many times I've heard "we should just start over" rather than trying to understand what we have.
There will be backwards compatibility options for an awfully long time.
A single developer could emulate whatever IO goes to that radio and have it controlled by a simple computer with redundancy. I wonder what they're paying the original developer to maintain this turkey. I imagine he's charging enough to where the sunk cost fallacy is obvious here.
What I find interesting is all the places where FOSS is lacking. There doesn't seem to be a popular and well-managed FOSS climate control system. Is anyone making standardized controllers, thermostats, etc.? This stuff should be a commodity by now. I'm guessing it's hard to work with all the proprietary HVAC stuff, and it's also an unsexy problem to solve. I'd love to replace my thermostat with a Raspberry Pi and have a lot of fun modern features. There's a Hackaday article about someone doing this, but it's a pretty primitive project.
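The core control loop for a project like that is genuinely simple - a bang-bang thermostat with hysteresis is a few lines. In this sketch, read_temperature() and set_heater() are placeholders for whatever sensor and relay you actually wire up:

    import time

    SETPOINT = 21.0    # target temperature, degrees C
    HYSTERESIS = 0.5   # dead band so the relay doesn't chatter around the setpoint

    def read_temperature():
        return 20.0    # placeholder: read a DS18B20, thermistor ADC, etc.

    def set_heater(on):
        print("heater", "on" if on else "off")  # placeholder: drive a relay via GPIO

    heating = False
    while True:
        temp = read_temperature()
        if temp < SETPOINT - HYSTERESIS:
            heating = True
        elif temp > SETPOINT + HYSTERESIS:
            heating = False
        set_heater(heating)
        time.sleep(30)  # temperature in a zone doesn't change that fast

All the "fun modern features" get layered on top of a loop like this; the hard part is, as you say, talking to the proprietary HVAC side.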
https://github.com/VOLTTRON/volttron from Pacific Northwest National Lab is one example.
A few years back a seller came to my workplace to ask if we wanted to advertise at our local cinema. We didn't, but I had a chat with the guy and was pleasantly surprised to discover that he was still creating and showing the ads using an A1200 with a genlock card running Scala (25 years old this year, real popular in the nineties with cable companies and broadcasters as big as CNN and BBC). Reason for not upgrading? "It still works great." Put a big smile on my face for the rest of the day.
And was the genlock card a Video Toaster by any chance?
Now that's job security!
Sadly, the only system that picked up was an HVAC control computer with no authentication. Apparently they figured that no one would find the number. After some experimentation, I was able to change the temperature in specific buildings where I went to class, which was neat for about a day. Anyway, this article makes me wonder if that system is still online.
We're currently developing a replacement SAP solution. I'm yet to decide which is more obtuse.
edit: The SAP solution was spurred not because the old system was breaking, but because, when the COO asked for some daily reports, the IT propellerheads replied that the data crunching to generate said 24-hour reports took more than 24 hours to run.
I think the most interesting part of this article is the use of radio for a wide-area network, even if it only got a passing mention in the article.
Some questions I'd love to know: What kind of protocol was used? Are computer uses of walkie-talkie radio bands allowed by the FCC? How do the receivers work? If they don't run on their own microprocessor, how were they designed?
These kinds of radio channels are already allocated to 'business' or 'government' service by the FCC. The user has licensed the use of one or more in a specific geographical area. These channels are 2.5kHz wide, though the old standard was 5kHz bandwidth. They were allocated with FM audio in mind, which is what the portable radios do. You're normally allowed to run data on these channels.
The radio used at each node is probably a "mobile" radio intended for mounting in a vehicle but configured as a kind of base station by adding a DC power supply. Motorola has been selling microprocessor-controlled, frequency-synthesized radios since the early 1980's, and the earliest one I know of used a 6800 processor (actually a Hitachi 6300-series clone of Motorola's CPU). The channel is probably shared with the maintenance chatter because the county already had the license, a radio fleet, and a shop to maintain the radios. By changing a "CTCSS" or "PL" tone to one different from the ones used by the voice radios, the maintenance crews don't have to hear squawks and whistles all day and night.
The modem is probably an FSK kind of modem with a rate of something like 1200 bps. This works by representing 0 and 1 with different audio tones and is very simple with complete integrated silicon available during the 1980's. It's interfaced to the radio's audio input and output, the computer/modem needs a way to cause the radio to transmit, and it is nice to have a signal to show whether the radio channel is busy or idle. Otherwise it is probably a serial interface to the local controller at each site. The modem itself may have a microprocessor to do simple tasks like ensure the transmitter does not get stuck on by a fault or to prevent transmitting when the radio is receiving a signal already.
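Generating that kind of signal is almost trivially simple. Here's a sketch of Bell-202-style FSK (1200bps with the common 1200Hz mark / 2200Hz space tones - an assumption on my part; the actual system may use a different tone pair):

    import math

    SAMPLE_RATE = 48000       # audio samples/sec; divides evenly by the baud rate
    BAUD = 1200               # bits per second
    MARK, SPACE = 1200, 2200  # Hz - the Bell 202 tone pair

    def fsk_modulate(bits):
        """Encode bits as two alternating audio tones, keeping phase
        continuous across bit boundaries so the signal stays narrowband."""
        samples, phase = [], 0.0
        samples_per_bit = SAMPLE_RATE // BAUD  # 40 samples per bit
        for bit in bits:
            freq = MARK if bit else SPACE
            for _ in range(samples_per_bit):
                phase += 2 * math.pi * freq / SAMPLE_RATE
                samples.append(math.sin(phase))
        return samples

    audio = fsk_modulate([1, 0, 1, 1, 0])  # feed this to the radio's mic/line input

Demodulation on the receive side is the same trick in reverse, and single-chip modem ICs did both ends of it by the early 1980's.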
As far as the protocol goes, it can be anything and is probably something extremely simple: My address, length, payload, checksum.
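For instance, a frame might look like this (my guess at the framing, not the actual protocol):

    def checksum(data):
        """One-byte additive checksum - about as simple as error detection gets."""
        return sum(data) & 0xFF

    def make_frame(address, payload):
        """address | length | payload | checksum."""
        body = bytes([address, len(payload)]) + payload
        return body + bytes([checksum(body)])

    frame = make_frame(0x07, b"\x01\x15")  # e.g. node 7 reporting: zone 1 reads 21 C
    print(frame.hex())  # 070201151f

Each remote just ignores frames that aren't addressed to it, exactly like devices on a multidrop serial bus.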
The piece of spectrum this system actually uses, since it is shared with their maintenance radio fleet, is definitely not part of any amateur service allocation ("band").
The term 'packet radio' also implies (to me anyway) certain kinds of applications and messages, which don't apply to a system like this. Anyway, if there was some kind of IP or other routable protocol on the air here, that would be overdoing it. The messages probably look more like what you'd find on a multidrop serial bus, with elements including source and destination address, length, payload, checksum. A bunch of FSK modems working on the same radio channel is not much different to a multidrop serial bus.
Running ethernet to these locations can be challenging, and often an SMS based approach or mesh network is used instead - which is really not that far off from this approach (with some obvious benefits).
I haven't really kept up on it since, but I believe it's still running a humble 486. Granted, the rest of the machine isn't much like my old Dell, but I always found that interesting.
The Mars rovers have moved to a more efficient PowerPC platform, but it's still old tech. Radiation-hardening CPUs isn't cheap and the processing requirements for what these projects do aren't high (it's more sane to just send the data to NASA and have terrestrial computers do all the heavy lifting), not to mention you don't run a bloated desktop OS on these things. A minimalist VxWorks RTOS is typically used.
Price? The RAD750 on the Curiosity Rover starts at about a quarter million dollars.
They probably built them with a lot more margin back then too.
At the presentation, the spokesman stressed that the previous bond was strictly to shore things up - anything that could be delayed, was. I can see that he wasn't kidding.
I knew when I saw the MC68k the first time that it had the Schwartz to rule them all - and decades later it's still true :)
I know it's a "ha-ha, look at that antique" piece, but seriously, how many computers that you could buy at a big-box store today would run for 30 years non-stop? The Amiga was just an off-the-shelf consumer PC.
They also have artifacts like MacPaint source code in their collection: http://www.computerhistory.org/collections/catalog/102658076
How on earth! And where can I apply?
To put it in perspective, energy use dwarfs the cost of getting the controls right. Let's call the cost $1.9 million and the number of sites 19, giving a cost per site of $100,000. Let's assume that the energy cost per site is $100,000/month and that the controls are capable of a 10% improvement. That puts the payback period at 10 months, so let's call it a year. Even if the efficiency improvement is only 1%, the payback period is well before the 30-year life cycle of the building (which not by coincidence matches the 30-year maturity of typical bonds, and the bond-financed control system built around an Amiga).
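Spelled out with those same assumed numbers:

    cost_per_site = 1_900_000 / 19   # $100,000 per site
    energy_per_site_month = 100_000  # assumed monthly energy bill per site
    savings_rate = 0.10              # assumed efficiency gain from better controls

    monthly_savings = energy_per_site_month * savings_rate  # $10,000/month
    payback_months = cost_per_site / monthly_savings
    print(payback_months)  # 10.0 months; at a 1% gain it's 100 months, still well under 30 years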
"A mid-sized school district with 800,000 square feet of space pays more than $1M annually for energy."
"Space heating, cooling, and lighting together account for nearly 70% of school total energy use."
So the $100k/mo figure is probably a few times too high - maybe an order of magnitude at most.
To me, it looks like a very optimistic guess.
Bringing Stocking Elementary out of mothballs, replacing boilers and roofs, and removing asbestos were just some of the projects GRPS put on the Warm, Safe and Dry list before the Commodore computer.
It seems they've already replaced boilers. They could have fixed any leaking radiators while they did that. ISTM they really are just talking about the control system. Perhaps it made sense to build a custom system in 1982, although I really doubt it. Nowadays, Honeywell certainly sells units that can just be dropped in, at each site. Unless they have steam tunnels running between the 19 buildings, centralization of control seems unnecessary. Sure, holidays, snow days, and terms move around slightly, but schools are on the internet now so staff at district HQ can manage each site's schedules when they need to do so. It would make sense to have centralized reporting, but just run that over the internet instead of the walkie-talkie bands.
Neither is there anything about the "commissioning" process that mandates that control systems may only be replaced when mechanicals are also replaced. That would be ridiculous. Please note that I am not quibbling about the $2M cost. I merely observe that TFA discusses the control system only.
It's awesome that it's been functional for this long, but given the press coverage I hope they don't receive unwanted attention, since this system was implemented in the 1980s, before computer security was as large an issue as it has become.
With the amount of publicity this article has generated, I bet someone with an SDR is going to drive over and analyze the RF signals, and then hijack the HVAC systems for the lulz.
That being said, I'm actually rather tempted to at least figure out what the air interface is. The article mentions a 1200 baud modem - does a standard UHF/VHF voice channel have enough bandwidth to run PSK/FSK V.22 or V.23? That'd be quite a hack.
And, probably, they could run their software under an emulator, on any US$200 desktop PC.
Indeed. The major annoyance I see is the radios interfering with each other. The centralized control architecture is also something I'd like to change. Each building does not require anything much smarter than an Arduino to be completely standalone and, if you want to be fancy, an RPi to send collected data to a centralized location.
I suggested a US$ 200 PC because many of those have no moving parts and more storage space than any Amiga ever dreamed of. They should be able to easily last 30 years or more.
That said, I'm pretty sure I could get an Amiga emulator running on a Raspberry Pi for a lot less than $1.5 million.
Every budget ever written stuffs things that management thinks can be put off until later into funding for things that absolutely need to be done today.
I ran a Raspberry Pi system for monitoring my sprinklers and such, but after about 9 months it had some funkiness, I suspect power-related, and then the flash had a couple of little glitches. I couldn't see it lasting 5 years. I have an ODROID-C1 that seems to be more robust in terms of power and such (it has a dedicated transformer, not some cheap USB supply that seems to "work"). I do worry about the flash crapping out... although if the policy is to burn a few flash cards and just keep some spares, you should have quick recovery.
Maybe with some good server-grade hardware, in a data center, you can expect some decent life.
If you have robust power supplies from a reputable manufacturer (I like TDK/Lambda) with good filtering on the input (maybe external powerline filter modules), and you protect your external inputs and outputs, then the next thing to worry about is flash lifetime which should be measured in decades if you're not writing to it.
The problem with a Raspberry Pi/BeagleBone is that it's running Linux and there are likely disk writes occurring, so your lifetime drops. If you can both limit disk writes and get long-lifetime SSDs, I don't see why it shouldn't last at least 10 years.
The Debian Wiki has an overview of what you might need to change to achieve it: https://wiki.debian.org/ReadonlyRoot
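The gist of that approach is mounting the root filesystem read-only and pointing the chatty paths at tmpfs. A minimal /etc/fstab along those lines (device names illustrative - adjust for your board):

    # / mounted read-only; the flash only sees writes when you deliberately remount rw
    /dev/mmcblk0p2  /         ext4   ro,noatime     0  1
    # volatile paths live in RAM so logs and tmp files never touch the card
    tmpfs           /tmp      tmpfs  nosuid,nodev   0  0
    tmpfs           /var/log  tmpfs  nosuid,nodev   0  0
    tmpfs           /var/tmp  tmpfs  nosuid,nodev   0  0

You lose logs on reboot, but for an appliance that's usually the right trade.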
Lower than it used to be. The design life for the Ford EEC-IV engine control system from the 1980s was 30 years. The program is mask-programmed onto the chip, and the parameter table is in a fuse-blowing type of ROM. Many of those are still on the road.
With newer hardware, lifetimes are shorter. This is a big problem for long-lived military systems. Electromigration becomes more of a problem as IC features get smaller.
There are embedded systems where 30-50 years of operation is needed. Pumping stations, HVAC, railroad signals, etc. have equipment which can run for decades with occasional maintenance. The NYC subway is still using century-old technology for signaling. It's bulky, but works well.
I did a bit of work at NASA Ames, and they were just getting rid of Amiga 1000s used for video compositing.
It. Just. Works. though - they should never upgrade.