I had my own text library where I converted to/from ASCII internally. There was nothing special about the 6-bit boundaries, so you could use any number of bits per character that you wanted until it was time to interact with the rest of the system. By the time I used it they had extended the character set to include lower case by using a special prefix character.
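For anyone who hasn't seen the prefix trick, decoding looks roughly like the sketch below. The 6-bit table and the escape value are made up for illustration; they are not the actual CDC display-code assignments, and the real library may well have worked differently.

    /* Sketch: decode a 6-bit character stream where one reserved code means
     * "the next character is lower case". Table and escape value are
     * invented for illustration only. */
    #include <ctype.h>
    #include <stdio.h>

    #define LOWER_PREFIX 63u   /* hypothetical 6-bit escape code */

    static char sixbit_to_ascii(unsigned code) {
        /* hypothetical table: 1-26 -> 'A'-'Z', 27-36 -> '0'-'9', else space */
        if (code >= 1 && code <= 26)  return (char)('A' + code - 1);
        if (code >= 27 && code <= 36) return (char)('0' + code - 27);
        return ' ';
    }

    static void decode(const unsigned *codes, int n, char *out) {
        int lower = 0;                  /* set when the previous code was the prefix */
        for (int i = 0; i < n; i++) {
            if (codes[i] == LOWER_PREFIX) { lower = 1; continue; }
            char a = sixbit_to_ascii(codes[i]);
            *out++ = lower ? (char)tolower((unsigned char)a) : a;
            lower = 0;
        }
        *out = '\0';
    }

    int main(void) {
        /* "HI" followed by prefixed lower-case characters */
        unsigned msg[] = {8, 9, 63, 20, 63, 8, 63, 5, 63, 18, 63, 5};
        char buf[16];
        decode(msg, (int)(sizeof msg / sizeof msg[0]), buf);
        printf("%s\n", buf);   /* prints "HIthere" under this made-up table */
        return 0;
    }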
The CDC 6600 was used as the student mainframe for George Mason University (GMU) in Northern Virginia in the 1980s. For engineering classes (e.g., electronics engineering) the 6600 was still excellent; it could run simulations far faster than many systems built later, and it was certainly faster at that task than the personal computers of that early time (Apple //es or the original IBM PC). People also used the 6600 for writing text, compiling, etc. The computer was a terrible match for those tasks, but it was fast enough to be useful for them despite the mismatch in capabilities.
Oh, and a quick aside: Today's computers are much faster, but much of that speed is taken away by software that's more functional and less efficient. I once shared an IBM PC (4.77MHz) among 16 users. If your computer software is generally efficient, and you're willing to reduce what it does, a single computer can serve a remarkably large number of users. Nobody worried about font selection, for example. So it was with the 6600; it could serve many users, if carefully managed.
Now for the story, which I can't confirm but I believe to be true. At the time GMU was a scrappy new local university. It was in a great location (Northern Virginia, which was expanding rapidly). However, although it was a state university, it had little support from the state capital (power is divided among counties, and the rural counties easily outvoted Northern Virginia).
GMU's then-president, Johnson, had much bigger plans for GMU. So Johnson arranged for a hand-me-down supercomputer (the 6600) for nearly nothing. The computer was considered obsolete by then, and it was terrible at text processing. Even so, it was still fast enough to be useful, and more importantly, it was a supercomputer, even if an obsolete one. My understanding is that Johnson sold the pitch of "GMU has a supercomputer now" to all the local businesses, asking them to help fund GMU & cooperate in various ways. The local businesses did, greatly so.
I suspect most of the businesses knew the "supercomputer" wasn't a current speed champion, and as far as I know no lies were told. But that wasn't the point. The pitch "we have a supercomputer" was good enough to make it much easier for some people in various businesses to justify (to their colleagues) donating to GMU & working with GMU. Many businesses in Northern Virginia wanted GMU to succeed, because they would greatly benefit long-term from a good university in their backyard... they just needed a good-enough story to win over those who might veto it internally. This "we have a supercomputer" pitch (and other things) worked.

Once GMU got "real" money, they invested it, e.g., they upgraded (and kept upgrading) to better equipment. GMU grew quickly into quite a powerhouse; it's now the largest state university in Virginia. One of GMU's distinctives is the huge number of connections it has to local businesses and government organizations. Originally this was because GMU couldn't rely on state funding, but the need to interconnect with local organizations led over time to an emphasis on applying academic knowledge to real-world problems. It's interesting to see how problems + scrappiness can lead to long-term cultures within an organization. Johnson passed away in 2017 ( https://www2.gmu.edu/news/427091 ), but his legacy continues.
Gopher, not csci. The University certainly did promote its Cray supercomputing connection for local business support, just as GP describes. A Cray was the cover model for the coursebook, while us plebs really got to timeshare on a BSD VAX :)
Thanks! I think it's important to note that to my knowledge, no lies were told.
Those in organizations who delved into the details found that yes, it's a supercomputer. It's a really obsolete one. But it is more capable than the PCs. More importantly, it showed that the university was very resourceful, and imagine what it could do if it got real money! In addition, having a good university next door was attractive, but only if there was a plausible path to get there.
But that was a complicated story to tell, so this whole thing provided a simpler story: "They have a supercomputer". All big organizations have bureaucratic inertia, and this simpler story was useful for people who didn't want to go into the details but might veto something.
My wife calls this "theater", and that's a good word. In this case, it was theater that helped counter bureaucratic inertia.
GMU took the few resources it had, and did a lot with them. People saw that, gave them real resources, and GMU quickly grew into a powerhouse. I think that's an interesting story, and the 6600 played a part in it.
To be fair, a 6600 was also a great choice for students to learn on at the time. It's basically a Cray-0, and would be representative of the architecture of supercomputers up through the mid/late nineties.
Hell, at the time, given the choice between a Cray and two 6600s, for students I'd lean two 6600s.
It was the general student computer at the University of Minnesota, and the uses it was put to were all over the map. Despite being optimized for number crunching, it was an amazing general-purpose computer.
The most interesting architectural feature was that all I/O was relegated to peripheral processors so that the main CPU could run unimpeded.
I think UCLA had a CDC 6600 being used as a time share system. My memory is very hazy though. We used it remotely via 150 baud terminals. On hot days occasionally bits would get scrambled on the way there and back.
10 PRINT "YOUR MOMMA" came back as 10 PRINT "KOUR IOMMA"
The toolchain does not have to run on the supercomputer itself. Most supercomputer architectures have self-hosting toolchains, but there are also supercomputers that do not. Also, compiling or even debugging programs directly on the machine is in most cases a plain waste of (expensive) computing resources, and it's not as if one would ever have only the supercomputer and no other computers (in fact, many traditional supercomputers cannot boot on their own and have to be booted by some kind of frontend computer).
> many traditional supercomputers cannot boot on their own and have to be booted by some kind of frontend computer
CDC went all in on this. Their large computers had ‘peripheral processors’ (for the CDC6600, based on the CDC160) that essentially ran the OS, leaving the main processor free for what it was good at.
The Wii and WiiU run most of the "OS" on an external ARM core "Starlet"/"Starbuck". All I/O, network access, encryption for SSL, booting the main cores, the USB stack, etc. is on that ARM core, not the main PowerPC cores so those can be dedicated to running "game code".
The Cell in the PS3 is an SPI slave that gets booted by an external processor.
The PS4 is the same way, and that external core holds most of the filesystem (how game updates happen with the console "off").
And then most SoCs (including most AMD and Intel chips) boot system management cores (ME/PSP/etc.) that are then responsible for initializing the rest of the core complexes on the chip. Pretty much every ARM SoC sold these days will talk about how they have a CortexM3 in addition to their CortexA cores; that's what it's for. SiFive's Linux-capable chip has one of their E series cores in addition to their U series cores for the same purpose on the RISC-V side of things.
> Pretty much every ARM SoC sold these days will talk about how they have a CortexM3 in addition to their CortexA cores; that's what it's for.
Usually the advertised-on-the-datasheet M cores are available for user code and you'll get a virtual serial port or some shared memory to go between them and the big core. I don't doubt that there are additional hidden cores taking care of internal power management, early boot etc.
At least, this is how it is on the STM32MP1 and the TI Sitara AM5 SoCs.
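To make the shared-memory option concrete, here's a rough sketch of the kind of single-producer/single-consumer ring buffer that can act as a crude virtual serial port between the small core and the big core. The layout is invented for illustration; real platforms (STM32MP1, AM5x) provide vendor mailbox/rpmsg mechanisms, and real firmware also needs cache maintenance and memory barriers, omitted here.

    /* Illustrative only: a shared-memory ring buffer between two cores.
     * Producer (small M core) calls ring_put(); consumer (big A core) calls
     * ring_get(). Layout, names, and sizes are made up for this sketch. */
    #include <stdint.h>

    #define RING_SIZE 256   /* must be a power of two for the mask trick */

    struct ring {
        volatile uint32_t head;           /* written only by the producer */
        volatile uint32_t tail;           /* written only by the consumer */
        volatile uint8_t  data[RING_SIZE];
    };

    /* Producer side: returns 0 if the ring is full, 1 on success. */
    int ring_put(struct ring *r, uint8_t byte) {
        uint32_t head = r->head;
        if (((head + 1) & (RING_SIZE - 1)) == r->tail)
            return 0;                     /* full */
        r->data[head] = byte;
        r->head = (head + 1) & (RING_SIZE - 1);
        return 1;
    }

    /* Consumer side: returns -1 if the ring is empty, else the byte. */
    int ring_get(struct ring *r) {
        uint32_t tail = r->tail;
        if (tail == r->head)
            return -1;                    /* empty */
        uint8_t byte = r->data[tail];
        r->tail = (tail + 1) & (RING_SIZE - 1);
        return byte;
    }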
You are confusing theory with practice. Back then, computers were expensive and rare. The general student population at my university had two choices: the CDC 6400, or an HP time-sharing system that ran BASIC. A friend and I actually wrote a complete toolset in BASIC that allowed students to learn HP-2100 assembly language. (I did the editor and assembler, he did the emulator and debugger.) But writing a PASCAL cross-compiler in BASIC that output a paper tape of COMPASS, or binary? No way. Or FORTRAN, SNOBOL, Algol, ...
I learned FORTRAN on an HP 2000C timesharing system, using a FORTRAN emulator written in BASIC. It was dog slow, but it worked. I have no idea where the emulator came from.
I believe so, the comp. arch. textbooks were pretty emphatic on the description of the CDC 6600 as "full of peripheral processors", e.g. for I/O and printing, etc. Deliberately, not something tacked on later as an afterthought.
I cannot find any information about whether one of the peripheral processors in the CDC 6600 (which were full-blown CPUs, not glorified DMA engines as in the Cray-1 or System/360) had some kind of system management role. On the other hand, the Cray-1 needs not one but two frontend computers to work (one is a DG Nova/Eclipse supplied by Cray that actually boots the system, and the second has to be provided by the customer and is essentially a user interface).
The peripheral processors were integral to the CDC 6600 and its successors (6400, 6200, 6700, 7600, and the Cyber 70 series), built inside the same mainframe cabinet. In the 6000 and Cyber 70 series there were '10 of them' that shared the same ALU with a barrel that shifted 12 bits after each instruction; that shift loaded the registers for the 'next PP' in round-robin fashion. They were pretty primitive. There were no index registers, so self-modifying code was a regular thing, and polling was the only method of I/O supported, at least at first. I think the later models did support some sort of DMA. The PPs did have access to the 60-bit main memory, and there was an instruction, exchange jump or XJ, which would load the register block and switch between user and supervisor modes.
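If it helps to picture the barrel, here's a toy sketch of the scheduling idea: one shared execution unit visits the ten PP register sets in a fixed rotation, one instruction each, so no PP can hog it. The register layout and the "instruction" are invented purely for illustration and don't reflect the real PP instruction set.

    /* Toy sketch of the barrel idea: ten PP register sets share one
     * execution unit, which rotates to the next PP after every instruction. */
    #include <stdio.h>

    #define NUM_PP 10

    struct pp_state {
        unsigned pc;    /* program counter for this PP */
        unsigned acc;   /* single accumulator, stand-in for the real registers */
    };

    int main(void) {
        struct pp_state pp[NUM_PP] = {0};

        /* One trip around the barrel: the shared unit executes exactly one
         * instruction per PP, in fixed round-robin order. */
        for (int slot = 0; slot < NUM_PP; slot++) {
            struct pp_state *cur = &pp[slot];  /* rotation selects this PP's registers */
            cur->acc += slot;                  /* pretend instruction */
            cur->pc  += 1;
            printf("PP%d executed one instruction (pc=%u acc=%u)\n",
                   slot, cur->pc, cur->acc);
        }
        return 0;
    }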
What do you mean? The CDC OSes actually ran on the PPs and for all intents and purposes managed the system. The two-headed video console was hardwired to a PP as well, and was used to manage the system.