Melbourne Train Control System is running on a hardware emulated PDP11 (mastodon.sdf.org)
129 points by SerCe 15 days ago | 40 comments



Apparently, this is no longer the case, and the old emulated PDP-11 system was replaced with a new system in 2014–2016: https://news.ycombinator.com/item?id=29157259

The old system we are talking about here was Ericsson JZA 715, mostly written in Pascal with some parts in PDP-11 assembly, running under RSX-11M. Melbourne was the first site in the Southern Hemisphere for Ericsson's JZA series train control software, going live in 1982. JZA 715 first went live in Oslo in 1979. Earlier iterations of Ericsson's JZA software (JZA 410) went live in Stockholm in 1971 and Copenhagen in 1972.


There is also an even newer CBTC system going into use for the new metro tunnel.


I wonder how much of that code is the old code compiled for the new platform.


As part of the new system, I don't think very much at all, since the two technologies are fundamentally different; CBTC is trains talking directly to each other, and cab/wayside signalling is the tracks telling the trains what to do.

That being said, it's pretty common in retrofits for the older system to still be around as a fallback if the CBTC fails.


The Melbourne Metro is just a new tunnel on existing lines, so I would guess they are going to have to support some kind of interoperation between CBTC and non-CBTC? I mean, the ability to run CBTC trains on non-CBTC track or vice versa?

This is different from Sydney, where the Metro lines are physically disconnected from the traditional rail lines. (Apparently there is still some track linking the Northwest metro line to the Main Northern line underneath Epping station, a legacy of the old Epping-Chatswood Rail Link, but it is blocked off with stops; there is no plan for any rail connection between the new Metro West line and the traditional rail network, despite the fact that remnants of the old Carlingford line are directly adjacent to the site of the new Metro West maintenance facility, meaning they could create such a connection if they wanted to.)


The tunnel is signalled only for CBTC. The outer ends of the line have only the old fixed-block signalling. The section from Westall in the south east to (eventually) Sunshine in the north has both. There is a dedicated model of train, the HCMT, which is fitted with the CBTC and is the only type that can run through the tunnel during normal operation. When an HCMT passes Westall heading towards the tunnel, the lamps on the fixed block signalling ahead of it go out. When a diesel regional train has HCMTs ahead of it on the next fixed block, it will see the lamps lit for Stop.

Trains have been running this way in normal passenger service for months now, though I think it’s only in the last few weeks they’ve been testing it with HCMTs entering the tunnel at Hawksburn with diesels behind them, instead of going to South Yarra. (Currently, the trains with actual passengers take the South Yarra route at that junction.)

The HCMTs run only on this line, from Sunbury to East Pakenham/Cranbourne. They have their own new depot at East Pakenham so they don’t have to go elsewhere for stabling and maintenance. And they probably can’t; just for them to go through the City Loop, as is temporarily required, that tunnel had to be resignalled (the old signal locations weren’t visible from the cab).

They did run an old Comeng train through the Metro Tunnel, without any signalling, to test clearance for a track maintenance train.


At least in other systems the way this works is that any train using CBTC going onto a non-CBTC section will just operate using non-CBTC. Which is fine for lower frequency branches or emergency diversions. All you need for non-CBTC operation is a driver who can see.

It is also, where provisions have been made, possible to put CBTC equipment on old trains. That is what NYC is doing, for example.


> All you need for non-CBTC operation is a driver who can see.

Wikipedia's article on CBTC [0] seems to want to limit the term to moving block systems only, not including fixed block – I'm not sure if that is correct or if that is just some agenda some Wikipedia editor has.

But it seems to me that even in a fixed block system, the train could operate (semi-automatically) if it knew its location and the location and current state of signals – and the current signal state could be broadcast to it via radio. Would you call such a system CBTC or not?

Also, it seems to me that fixed block with physical signals and moving block CBTC could coexist on the same line. Moving block CBTC trains are authorised to disregard red signals, and instead get their movement authority via radio; fixed block trains are not. If a fixed block train is in a fixed block, that locks the block out for all trains (both fixed and moving); if a moving block train is in a fixed block, that locks the block out for all fixed block trains; but two moving block trains can coexist in the same fixed block provided their moving blocks are non-overlapping.
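To make those rules concrete, here is a rough Python sketch of that hybrid occupancy check (the line model, class and function names are all mine and purely illustrative; a real interlocking would be far more involved):

    # Sketch of the hybrid rule set described above: fixed-block trains lock a
    # whole block; moving-block (CBTC) trains only claim the stretch of track
    # inside their own moving envelope.
    from dataclasses import dataclass

    @dataclass
    class Train:
        name: str
        moving_block: bool       # True = CBTC-fitted, authority comes by radio
        front_m: float           # position of the train's front along the line (metres)
        rear_m: float            # position of the train's rear (rear_m < front_m)
        envelope_m: float = 0.0  # braking distance + safety margin ahead of the front

        def occupied_span(self):
            # The stretch of track this train currently "claims".
            ahead = self.envelope_m if self.moving_block else 0.0
            return (self.rear_m, self.front_m + ahead)

    def spans_overlap(a, b):
        return a[0] < b[1] and b[0] < a[1]

    def can_enter(entering, block_span, trains):
        """May `entering` take the fixed block (start_m, end_m)? Mirrors the rules above."""
        for other in trains:
            if other is entering:
                continue
            if not spans_overlap(block_span, other.occupied_span()):
                continue
            if not other.moving_block:
                return False  # a fixed-block occupant locks the block for everyone
            if not entering.moving_block:
                return False  # a moving-block occupant still locks it for fixed-block trains
            if spans_overlap(entering.occupied_span(), other.occupied_span()):
                return False  # two CBTC trains share a block only if their envelopes don't overlap
        return True

    # Example: a CBTC train already sits in the block from 1000 m to 1500 m.
    cbtc_a = Train("CBTC A", True, front_m=1200, rear_m=1050, envelope_m=100)
    cbtc_b = Train("CBTC B", True, front_m=1000, rear_m=850, envelope_m=100)
    legacy = Train("Legacy", False, front_m=990, rear_m=840)
    trains = [cbtc_a, cbtc_b, legacy]
    print(can_enter(cbtc_b, (1000, 1500), trains))  # False: B's envelope reaches 1100, past A's rear at 1050
    print(can_enter(legacy, (1000, 1500), trains))  # False: the block already holds a train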

[0] https://en.wikipedia.org/wiki/Communications-based_train_con...


CBTC is moving block because it just obviates the need for fixed block wayside equipment. Fixed block signalling just detects if a track segment is occupied, but in CBTC the trains can directly talk to each other, at which point blocks are superfluous.
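A back-of-the-envelope illustration of why that matters for capacity, with entirely made-up round numbers and a deliberately simplified fixed-block rule, sketched in Python:

    # Illustrative only: compare minimum train separation under fixed block vs
    # moving block. All figures are invented round numbers, not Melbourne's,
    # and the fixed-block rule is simplified (no sighting or overlap allowance).
    v = 22.0           # line speed, m/s (~80 km/h)
    train_len = 160.0  # metres
    decel = 1.0        # service braking, m/s^2
    braking = v * v / (2 * decel)  # ~242 m to stop

    block_len = 1000.0             # a typical-ish suburban fixed block
    # Fixed block: the follower may not enter a block until the leader has
    # fully cleared it, so the gap is at least one whole block plus the train.
    fixed_gap = block_len + train_len

    # Moving block: the follower only needs braking distance plus a margin
    # behind the leader's tail.
    margin = 50.0
    moving_gap = braking + margin + train_len

    print(f"fixed block headway  ~ {fixed_gap / v:5.1f} s")   # ~53 s
    print(f"moving block headway ~ {moving_gap / v:5.1f} s")  # ~21 s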

Both types of systems can be automated. And yes, they do often coexist, like I mentioned in my grandparent comment. But brand new rail lines that install CBTC often don’t keep a fixed-block fallback, to simplify maintenance; trackside equipment gets exposed to pretty hard operating conditions and can’t be fixed without some kind of shutdown of the rail line.


In almost every CBTC system trains communicate with a central zone controller and do not interact directly with each other.

Urbalis Fluence works as you describe, but that is a very new approach to CBTC and as far as I'm aware has only one installation.


I'm not sure that they are going to run non-CBTC trains through the new tunnel.

I have heard it said that the level crossing removal program is associated with a goal of moving to driverless trains network-wide. Presumably there is a network-wide CBTC migration plan.


> I'm not sure that they are going to run non-CBTC trains through the new tunnel.

The old trains wouldn’t work with the platform doors, so no. But the other way around: those lines continue beyond the tunnels on the surface, and some of those surface sections may be shared by both old and new trains, which would require some form of interoperability between CBTC and non-CBTC, until they move the whole network to CBTC.

Also, I would expect that maintenance trains (track inspection, etc) will likely be shared between CBTC and non-CBTC and so have to support both


Almost 20 years ago I worked for a company that was bidding on replacing the CRTs that had been rendering platform train information, which was spat out as a stream from said PDP11. I even went to Flinders St and got to see it.

It was everything you imagine a 70s era data stream to look like. 1200bps, weird control sequences, etc., etc. And no-one could really tell us much about it, but there was some poorly photocopied, incomplete documentation.


I worked on this system during its development in the 1980s.

There were actually two PDP-11s; the one that ran the platform displays was running locally-written software.

The “weird control sequences” sent between the PDP-11 and the platform displays were HDLC, a synchronous protocol then common in IBM token ring networks as SDLC. This was actually a decent technical solution because they only had to run one coax cable down each train line and the PIDS could sit there watching for their token slot. The hardware for HDLC would have been commodity, whereas fibre optic or carrier sense for long-run packet was not.
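For anyone who hasn’t met HDLC: the two properties that make one shared coax workable are bit stuffing (so the 0x7E flag byte can never appear inside a frame) and a per-station address field (so each display can ignore frames not meant for it). A toy Python illustration of just those two ideas, nothing to do with the actual Ericsson code, and omitting the control field, CRC and the polling/token scheme entirely:

    FLAG = "01111110"  # 0x7E frame delimiter

    def bit_stuff(bits: str) -> str:
        """Insert a 0 after every run of five consecutive 1s."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == "1" else 0
            if run == 5:
                out.append("0")
                run = 0
        return "".join(out)

    def to_bits(data: bytes) -> str:
        return "".join(f"{byte:08b}" for byte in data)

    def make_frame(address: int, payload: bytes) -> str:
        # Real HDLC also carries a control field and a 16-bit CRC; omitted here.
        body = to_bits(bytes([address]) + payload)
        return FLAG + bit_stuff(body) + FLAG

    # A display at address 0x12 would hunt for FLAG, un-stuff the bits, and
    # only act on frames whose first byte matches its own address.
    frame = make_frame(0x12, b"FLINDERS ST  3 MIN")
    print(frame[:32], "...")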

The other PDP-11 that ran the signals (the “train describer”) could plot the position of trains on glass TTY terminals using escape sequences (VT100, same as in xterm today) so our PDP-11 pretended to be one of those and screen-scraped. So I was told, when I asked the guy who wrote it.
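If anyone is curious what “pretended to be one of those and screen-scraped” means in practice, here is a minimal Python sketch of the idea: keep a character grid, move a virtual cursor on the standard VT100 cursor-position sequence (ESC [ row ; col H), and read fields back out of the grid. The real system handled far more sequences than this, and the example data is invented:

    import re

    ROWS, COLS = 24, 80
    CURSOR_POS = re.compile(rb"\x1b\[(\d+);(\d+)H")  # ESC [ row ; col H

    def scrape(stream: bytes):
        screen = [[" "] * COLS for _ in range(ROWS)]
        row = col = 0
        i = 0
        while i < len(stream):
            m = CURSOR_POS.match(stream, i)
            if m:
                row, col = int(m.group(1)) - 1, int(m.group(2)) - 1  # VT100 is 1-based
                i = m.end()
            elif 0x20 <= stream[i] < 0x7f:  # printable ASCII goes into the grid
                if 0 <= row < ROWS and 0 <= col < COLS:
                    screen[row][col] = chr(stream[i])
                    col += 1
                i += 1
            else:
                i += 1                      # ignore anything else in this toy version
        return ["".join(r) for r in screen]

    # Invented example: a "train describer" line placed at row 5, column 10.
    lines = scrape(b"\x1b[5;10HDOWN 0812 RICHMOND  PLAT 2")
    print(lines[4][9:40])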

All of that was done before I got there. I was called in with 6 weeks to go before commissioning to fix the bit in the middle that recalculated train positions and arrival times.


As a Melburnian and a PDP-11 bits collector, this is just fantastic information!

Do you have a blog or somewhere else where you share tidbits of information like this? Cause I sure would love to read that.


The largest nuclear power plant in North America, Darlington Nuclear Generating Station in Ontario, a bit east of Toronto, uses robots for handling the fuel rods. The plant was designed in the late 70s and the control software is apparently written in PDP-11 assembler. And they will keep using it for the remaining lifespan of the plant, so probably long after I retire, and I'm not old.

Now that's an idea for any young coders looking for a language that'll still be in use when they retire; they were hiring maintenance engineers to help keep it running a few years back. Maybe a bit of a dead end skills-wise, but some of it'll transfer to the embedded VAXen that still run half the assembly lines in the same region. (That career advice is sarcastic - probably.)


> That career advice is sarcastic - probably

If you are into retrocomputing, they’ll pay you to have fun doing archeology and necromancy. It’s heaven.


Point me to the hiring board!


The toot itself is from 2021, and links to a 2012 PDF* from the "ASPECT 2012 Conference (UK)", specifically

   2.10 strangaric - legacy train control system stabilisation.pdf
[*] Due to the website's interface, navigating directly to https://webinfo.uk/webdocssl/irse-kbase/ref-viewer.aspx?refn... seems to serve a web page that requires clicking through to request the PDF.


Wayback machine solved the problem for me https://web.archive.org/web/20221225231302/https://webinfo.u...


We still run VAX VMS in production. Sometimes I run it on SimH.

https://en.m.wikipedia.org/wiki/SIMH

You can run it too.

https://gunkies.org/wiki/Installing_VMS_V1.0_on_SIMH


Where would you get help running that simh? It builds and produces binaries on Linux, but chokes on things like 'set quiet', saying "non-existent device", let alone doing much of anything in the vax780.ini. Should I try again on a 32-bit installation, as I'm using 64-bit Linux right now?

I found the problem: when I followed the links I ended up installing the "classic" version of simh, but it didn't like the vax780.ini. The biggest clue was "set quiet"; everything else was very confusing to me haha

Installing simh from "open-simh", everything works according to the instructions. I had tried version V3.12-5 mentioned on the page https://simh.trailing-edge.com/ first; this (and all 3.x?) is the "classic" simh and didn't support the referenced vax780.ini. The version from GitHub (4.x?) worked. Very cool to see VMS boot!


And so do several U.K. high street banks, written in COBOL for the PDP-11, run via an emulation layer on an AS/400, last I stuck my head into that particular hell, and I doubt much has changed.


I got an insight into why this is the case in 2012, when I had the misfortune to be part of the worst project I've ever worked on.

I was at one of Australia's 'big four' banks. My $100m project was charged with replacing a tiny part of their mortgage system: the onboarding of a new customer. That was it: just getting a new customer in to the existing legacy system.

Oracle were the prime. They were a disgrace. Accenture had their grubby little fingers in the pie and were theoretically, laughably, going to run the thing. They used to charge us $2m to copy and paste documents. The thieves.

It was an utter shambles. The 'plan on a page' was diamond shapes on a PowerPoint slide. There was no link to any sort of reality.

I lasted 9 months before the blessed gods let me go. Shortly thereafter the whole thing was cancelled.


> Oracle were the prime. They were a disgrace

I think I may have actually been working on the project you are talking about from the Oracle side. I think many of us tried our very best as individuals.

I was never on it full-time, I worked for Oracle engineering but they'd sometimes send us on-site at big customers to fix problems when they blew up. It can be a lot easier to debug a customer's problem when you have the source code right in front of you.


> last I stuck my head into that particular hell

I don't think it's hell if you like retrocomputing. Personally I find it fascinating. In the past I sometimes had to work on legacy systems and I liked playing the archeologist role.

Maybe you are not referring to the platform, but rather to code quality. But I have found a lot of jumbled, messy React code in early-stage startups, and some decent code in legacy systems I have worked on, so I guess code quality is not always directly related to the age of the system...


I like retro computing - and this ain’t it. This is consultants gouging their clients, and a cultish “we don’t touch that” inherited through generations of management.

To be clear, parts of it are in COBOL. Other bits are in RPG, others yet in PL/1, and then a generous application of java, c++, windows shell scripts, unix shell scripts, and it’s all held together with TSVs.

It’s more like hoarding than anything else - “don’t touch that, it might be useful, I don’t know, just leave it as it is, we’ll get eleven new servers for whatever it is that it does”

In the end we gave them a pile of python that ingested malformatted EDIs that another system made and spat out TSVs so that same system could then not trip over its own shoelaces.


Ah, makes sense, yeah that's not what I had pictured, and it seems like it's not fun at all!


All the UK high street banks I worked with were running on IBM mainframes (or UNIX in one case) rather than AS400. The only bank I have come across running on an AS400 was a small one in Luxembourg that serviced the mafia.


Why keep the mafia computing at the bank when your crime cartel could have the AS/400 in-house? Apparently the Cali cartel had sophisticated “business analytics and intelligence” back in the day.

https://www.vice.com/en/article/the-cartel-supercomputer-of-...


At least those who started on a System 360 can run some applications essentially unmodified on modern hardware...

The bus factor on that PDP-11 software must be uncomfortably low.


Christ, why would you go from (relatively) commodity PDP-11 hardware to a more restrictive IBM platform? If you want reliability, any industrial/rugged x86 at that time would have been cheaper and easier to maintain in-house.


Couldn’t tell you, but probably some variant of “that’s what the CTO’s last gig used”

FYI: "The PDP–11 is a series of 16-bit minicomputers originally sold by Digital Equipment Corporation (DEC) from 1970 into the late 1990s, one of a set of products in the Programmed Data Processor (PDP) series." https://en.wikipedia.org/wiki/PDP-11



Ironically, because of the unbelievably bad infrastructure and general policies, Melbourne became a relatively affordable place to live in Australia. When other capital cities saw double-digit property price growth, Melbourne prices actually went down.


Whilst not a train guy, I found this talk at Laracon AU to be pretty entertaining... https://www.youtube.com/watch?v=bbPzSdyroRM


The emulated PDP-11 was replaced sometime around 2015 by a system from Invensys.


Maybe the PDP-11 is the one running the linked website.



