Chuck Peddle Dies at 82; His $25 Chip Helped Start the PC Age (nytimes.com)
272 points by i_am_not_elon on Dec 24, 2019 | 53 comments



The 6502 changed my life by really hooking me on computers. Got my first professional gig (fixing a bug in a truck driving simulator using Commodore PET Basic) that paid in a pizza outing.

An interesting feature of the 6502 was that memory between 0 and 255 (the "zero page") was the fastest to access (zero-page instructions are a byte shorter and a cycle quicker than their absolute-addressed equivalents). Apple Basic's "GetNextToken" routine lived somewhere below 255 just for the speed advantage (might have been Commodore's Basic).

Another interesting feature was that the 6502 instruction set included neither 16-bit add/subtract nor any sort of multiply/divide. Add/subtract are easy enough to synthesize, but efficient multiply/divide is "weird".
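Since there was no hardware multiply, 6502 programmers fell back on shift-and-add loops (ASL/ROL plus a conditional ADC). A rough Python sketch of that idiom, just to illustrate the technique (not code from any particular ROM):

```python
def mul8(a: int, b: int) -> int:
    """Shift-and-add multiply of two unsigned 8-bit values.

    Mirrors the classic 6502 idiom: loop over the bits of one
    operand, shifting (ASL/ROL) and conditionally adding (ADC)
    into a 16-bit result held in two bytes.
    """
    result = 0
    for _ in range(8):
        if b & 1:            # lowest bit set? (LSR b; BCC skip)
            result += a      # CLC; ADC into the result bytes
        a <<= 1              # ASL low byte / ROL high byte
        b >>= 1
    return result & 0xFFFF   # result fits in two bytes

# e.g. mul8(25, 10) == 250
```

On the real chip this costs a few dozen cycles per multiply, which is why lookup tables were popular for speed-critical code.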

I once interviewed for an assembly-level programming job for the Z80 CPU (they were building a weird APL variant). The tech interviewer wanted to see me read some 6502 assembly from just the byte code. I was flummoxed and explained I could only do it with a disassembler. One of my earliest tech interview failures :)

Thanks Chuck!


Googled up an Apple ][ in-browser emu (what a time to be alive, etc.) and poked around for a moment; I think this might be it:

https://i.imgur.com/jvFvdsF.png

Edit:

You can see how it's used here:

https://bigflake.com/disasm/applesoft/Applesoft.html

It's actually two (overlapping) subroutines with entry points $B1 and $B7; their symbols in the source are CHRGET and CHRGOT. So 20-odd bytes in page zero for two calls. The code is, of course, also self-modifying. All in all, a fitting little tribute to Chuck Peddle.


That's funny, I still recognize some of the op-codes--though I was more familiar with them in decimal or their ATASCII characters. I used to be able to write machine language subroutines with character escapes in a string and JSR the string address from BASIC.
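The trick above works because BASIC strings are stored as raw bytes, so any opcode sequence can be smuggled in as characters and called at the string's address. A hedged Python sketch of the idea (the PLA/RTS pair is, as far as I know, the minimal valid Atari USR routine; the variable names are mine):

```python
# Minimal machine-language routine as raw bytes: on the Atari,
# a USR routine must PLA the argument count, then RTS.
routine = bytes([104, 96])               # PLA ($68), RTS ($60)

# In BASIC you'd embed these as character escapes in a string
# literal and call USR/JSR on the string's address.
escaped = "".join(chr(b) for b in routine)
```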


If by "Apple Basic" you mean Applesoft BASIC, it and Commodore BASIC were both based on Microsoft's 6502 BASIC so they would both likely have the same routine.


> Applesoft BASIC

Lol. A purist among us sinners :)

But yes, Applesoft BASIC. I never saw an Apple ] (heheheh), only an Apple ][.


Purists wrote in Woz’s Integer BASIC! Microsoft’s version came on a cassette in the early Apple ][ days.


Yeah I just happen to have been hacking on Microsoft BASIC over the last two weeks and stumbled on it. Here is its disassembly. Self-modifying, looks for colons and spaces:

> (C:$00b7) d 73

> .C:0073 E6 7A INC $7A

> .C:0075 D0 02 BNE $0079

> .C:0077 E6 7B INC $7B

> .C:0079 AD 00 08 LDA $0800

> .C:007c C9 3A CMP #$3A

> .C:007e B0 0A BCS $008A

> .C:0080 C9 20 CMP #$20

> .C:0082 F0 EF BEQ $0073

> .C:0084 38 SEC

> .C:0085 E9 30 SBC #$30

> .C:0087 38 SEC

> .C:0088 E9 D0 SBC #$D0

> .C:008a 60 RTS
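For anyone puzzling over the two SEC/SBC pairs at the end: subtracting $30 and then $D0 totals $100, so A ends up holding the original character again, while the carry comes out clear only for ASCII digits (anything $3A or above already exited via the BCS earlier). A Python sketch of my reading of that tail (not authoritative):

```python
def chrget_tail(char: int):
    """Simulate the SEC; SBC #$30 / SEC; SBC #$D0 ending of CHRGET.

    Returns (a, carry). The two subtractions total $100, a no-op
    mod 256, so A holds the original character; the carry is clear
    if and only if the character is an ASCII digit ('0'..'9').
    """
    a = (char - 0x30) & 0xFF    # SEC; SBC #$30
    carry = a >= 0xD0           # second SBC: carry set iff no borrow
    a = (a - 0xD0) & 0xFF       # SEC; SBC #$D0
    return a, carry
```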


> Apple Basic's "GetNextToken" routine lived somewhere below 255 just for the speed advantage (might have been Commodore's Basic).

It lived there for another reason as well: it allowed the keyword table to be extended, which would have been impossible if that routine lived in ROM. All MS and MS derived BASIC variations had this feature.


Similar here. The KIM-1 was my entry to programming. So simple and understandable. I still remember some of the opcodes, hand-assembling and punching them in was a powerful rote teaching method. Obviously I had some nerd in me if I thought it was easy to understand. IMHO every programmer should spend some time coding this close to the metal.

The chess game that fit in 1K was a good example of what someone smarter than me could do, too.

Thanks Chuck! We’ll raise a toast to you as we close out the year.


Zero page actually had (has) three addressing modes; indirect indexed with Y was my favorite, i.e. stuff like 'STA (zeroPage),Y'.

That being said, the processor had so many addressing modes that looking at today's flat 64-bit address space feels surreal.
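One quirk of those zero-page indexed modes: the index is added modulo 256, so the effective address wraps around inside page zero rather than crossing into page one. A small sketch (the helper name is mine):

```python
def zp_indexed(base: int, index: int) -> int:
    """Effective address for 6502 zero-page indexed addressing.

    The CPU adds the index to the one-byte base and keeps only
    the low 8 bits, so the access wraps within page zero instead
    of spilling into page one.
    """
    return (base + index) & 0xFF

# e.g. zp_indexed(0xF0, 0x20) == 0x10, not 0x110
```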


Fun fact: you can still buy new 6502 hardware from William Mensch's company. The elegance and simplicity of the core make it virtually embedded computing "dust" in almost everything we use.

http://www.westerndesigncenter.com/wdc/


For those who want to build a new system around the 65C02 there's plenty of information to get you started here: http://wilsonminesco.com/6502primer/potpourri.html

A bunch of people are still building new systems with modern twists, fixing old ones and writing software. forum.6502.org has been an invaluable resource for me in building my own machine.


Thank you so much for this! This guy's site is a goldmine.

Going up one level on that site:

http://wilsonminesco.com/6502primer/

I've been fiddling with a 65C02 over the past month. This is exactly the kind of information resource I've been looking for!


Doh! That's a better link :)

It's certainly rewarding and interesting to design the hardware and then write the software in assembly, and it helps you understand how simple operating systems work.


I submitted this link as a post just now:

https://news.ycombinator.com/item?id=21892123


If anyone is interested in the 6502 and its history, this YouTuber does a fantastic job covering it. Some of his videos include interviews with past MOS and Commodore employees.

https://www.youtube.com/watch?v=eP9y_7it3ZM


Just to save everyone the click: yes, it's a link to The 8-Bit Guy.


I don't wish to detract from Chuck Peddle's work, because the 6502 really is an amazingly efficient and elegant design. But I've always felt the microprocessor's importance in the computer revolution has been overstated.

Throughout the 60s, 70s and 80s, memory was usually considerably more expensive than the CPU, for all kinds of computer systems. The majority of the cost in mainframes, minis and embedded machines alike was the core memory. Early semiconductor RAM was even more expensive than core!

It was possible to design a simple CPU in the early 70s using a few dozen SSI/MSI chips, and a board with such a CPU could have been done in less than 30x30 cm. Indeed, that's basically a description of the numerous low-end MSI-based minicomputers of the 70s. The cost of such a design would have been in the same ballpark as many of the early single-chip microprocessors released in '74 and '75 before the 6502.

But what use would such a processor be, whether discrete or a single-chip microprocessor, without memory? A mere 4 kilobytes of DRAM in 1973 would have cost over $1000, and a useful general-purpose computer really needs more like 8 or 16 KB to host an assembler and disk-operating system.

By late 1978, the cost of 4 KB of semiconductor DRAM fell to $60. By 1983 a mere $2. In many ways, that was the true revolutionary technology. Even if no one had thought of microprocessors, and even if advances in CPU speeds had petered out, a minicomputer with 2 MB of RAM is an entirely different kind of beast from one with only 4 or 8 kilobytes.
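The drop is easier to see per kilobyte; a quick back-of-the-envelope using only the price points quoted above:

```python
# DRAM cost for 4 KB at the price points quoted above (dollars)
prices = {1973: 1000, 1978: 60, 1983: 2}

# Normalize to dollars per kilobyte
per_kb = {year: p / 4 for year, p in prices.items()}
# 1973: $250/KB, 1978: $15/KB, 1983: $0.50/KB:
# a 500x price drop in a single decade
```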


The 6502 was an enabler if there ever was one. Yes, memory was expensive. But we used only a very small fraction of what we think of as normal today. A typical system would have 2K ram; maybe 4K and if you were rich you'd splurge for the 16K upgrade (but what would you do with so much memory?).

In comparison, the 6800 sold for a large multiple of the 6502's price, and Motorola wouldn't have slashed their prices if not for the 6502. Peddle & Co showed that a CPU did not have to be expensive, and that was a game changer.

A whole generation of 8 bit machines depended on it: VIC-20, CBM 64, Atari, Acorn Atom, BBC Micro, Apple I, Apple II.

None of those would have happened without an affordable CPU to power them.

Very few chips can claim such a distinguished legacy, and without something showing the market potential for this 'personal computer' thing, IBM might not have entered the game at all. VisiCalc (which required 32K!) was the breakthrough moment.


It's not hard to see the effect on RAM prices when you have all these tiny personal computers broadening the market. Synergy, I say.


But we also didn't need as much memory. The memory-resident portion of CP/M was only 2 KB, and that is with some inefficiency because it was necessary to stick with the 8080 instruction set.

Just recently there was an article on HN about the passing of Randy Suess, who along with Ward Christensen created the first real BBS. The other part that often isn't recognized is the work that Irv Hoff did to re-write the memory-resident part of CP/M, so that it not only did everything that an operating system needs to do, but could also accept remote connections, deal with a modem, and have some (very rudimentary) security. All in 2 KB. That's amazing.


> But we also didn't need as much memory.

Another way to look at it is that we only did the things we could do with such little memory. CP/M is a marvel of size-constrained code, but it's also crude in the extreme.

A multitasking virtual memory system with a decent file system and an optimizing high-level language compiler -- what most developers would consider the bare minimum for being civilized today -- really relies on having about a megabyte or more of RAM in practice.

Consider the web, one of the "killer apps" for computing in general. It's not hard to imagine implementing something like modern packet-switched networking and basic text-only hypertext documents with a computer that can only do about 100,000 instructions per second. But doing it with only 4 or 16 KB of RAM? Just this comment is 843 bytes.


You are writing all this in (1) someone's obit, and (2) even though you preface it by saying you don't want to detract from their achievements, you do nothing but, and (3) you do so with a couple of decades of hindsight. Yes, things have changed. No, without the microprocessor we would not have all the goodies we have today.

What 'most developers consider the bare minimum for being civilized today' simply was not available to 'most developers' at the time, we were happy with what we had.

And in some ways I wished developers today would not be so spoiled that they allocate megabytes like it doesn't matter because it does add up and then suddenly it does matter.


As to your first point, I suppose I kind of assumed the readers here were familiar with MOS Technology and their work in integrating DRAM onto the same die as a processor, along with advances in ROM manufacture, and the general push to system-on-a-chip integration.

To be clear, Chuck Peddle also helped design chips like the RRIOT (1 KB of ROM, 64 bytes of RAM, a parallel IO port and a timer). A machine like the KIM-1 (also of Peddle's design) would not have been possible without the RRIOT chip, any more than without the 6502. I just find it skewed that we all focus on the microprocessor he designed, when a chip like the RRIOT was just as important. Microprocessors are pretty nifty indeed, but they'd be useless without IO and memory just as cheap and small to go with them.


The crowd on HN ranges from people born after the millennium to those who cut their teeth on 8-bitters, and even older people.

I started out on the KIM-1, I think you mean the 6530.

Your comment above reads a lot nicer than the ones preceding it. And yes, they needed IO and memory, but without the processors that IO and memory would never have happened; the processors were the flagships of their respective companies.


MP/M didn't have virtual memory, but it had everything else on your list, was released in 1979, and had a minimum memory requirement of 32 KB.

In the mid 1980s, I ran a four-line BBS on an Altos ACS8000-14 running MP/M. I wrote the BBS in DR CBasic. I attempted to port it to Turbo Modula-2, another high-level language, but the IDE was so buggy that I eventually gave up (being female and a teenager, there was also a lot of social pressure not to do computer stuff).


The PDP-11 had 16 KB.


Yes, the original PDP-11/20 came with as little as 8 KB of RAM.

But early operating systems and languages for the PDP-11 were painfully primitive. We're talking paper tape and assembly only, in 1970 on the base 8 KB of RAM.

On the other hand, consider the memory requirements of UNIX v7. It required separate instruction and data address spaces using the PDP-11's virtual memory features, as neither the kernel nor some of the compilers would fit into only 64 KB of data/code. 128 KB was necessary to boot a minimal single user system. 256 KB was realistically required to run multiple processes.

By 1979, that's software requiring 16x as much memory as in the base configuration of the original PDP-11 of 1970, just to boot. It just would not have been possible to write something like v7 UNIX in only 16 KB of RAM.


For comparison of costs and dollar figures, $1000 was a huge amount in 1973. I recently did the inflation math on a 1978 advertisement for a $699 TRS-80 bundle (with cassette tape drive!), adjusted for today it would be equivalent to almost $2800, enough to build a really powerful gaming desktop.
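The adjustment is just a ratio of consumer price indices; a quick sketch (the CPI figures below are rough annual averages I'm assuming, not taken from the calculator the parent used):

```python
def adjust_for_inflation(amount: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a historical price by the ratio of consumer price indices."""
    return amount * (cpi_now / cpi_then)

# Approximate annual-average US CPI: 1978 ~ 65.2, 2019 ~ 255.7
price_2019 = adjust_for_inflation(699, 65.2, 255.7)
# ~ $2740, in the same ballpark as the "almost $2800" above
```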


Yes, you could build a CPU from a number of MSI chips. But what kind of clock frequency would you get from that kind of a setup? One chip gave you much better performance. A multi-chip CPU would be a toy; a one-chip CPU could do real work (by the standards of the time).


> But what kind of clock frequency would you get from that kind of a setup? One chip gave you much better performance.

You might be surprised, because the answer is in the ballpark of 100 MHz when cost is no object. When cost is an object, well even simple designs like the MSI/LSI versions of the PDP-8 were clocked around a 1 microsecond memory cycle, comparable to most early microprocessors.

For an extreme example, the Cray-1 was constructed entirely from tens of thousands of discrete NOR gate chips using ECL, and was clocked at 80 MHz.

Multi-chip implementations remained standard on big systems into the 1980s. The ability to use bipolar technology over the much slower (at the time) MOS was a major factor, but it also simply wasn't possible to add to a single die sophisticated features like large register sets, pipelining, caches, etc. with the limited transistor budget available in the 1970s.

It wasn't until the mid-80s that CMOS microprocessors started to win decisively versus discrete logic for mid-range applications. It lasted even longer on the very high end -- DEC, IBM, Cray, and others, were manufacturing processors constructed out of many MSI/LSI chips well into the 1990s. Even the first POWER processor from IBM consisted of 9 VLSI chips, in 1990.

The overwhelming advantage for single-chip processors back then was the low cost. A minicomputer like the PDP-11/20 from all the way back in 1970 constructed out of SSI chips could run circles around the 6502, but the processor for the PDP-11/20 was like $2000 and the 6502 was $25.


I completely agree with all your points, but would add that we also didn't have the CAD tools to make big stuff. The 6502 was built by hand-cut lithography (actually physically cutting different coloured transparent tape and sticking it to a transparent sheet); nowadays I could knock out a fully timed 6502 in a few days with the right tools. Building something the size of a Cray on paper was an amazing achievement.


The Intel 8008 was designed for what is arguably the first personal computer, but its designers ended up going with a TTL CPU instead, partly because it was faster: https://www.righto.com/2015/05/the-texas-instruments-tmx-179... .


The approximate timeline (after valves) was discrete transistors, preassembled transistor modules, gate-level SSI, logic-level TTL SSI/MSI, bit-slice MSI/LSI, and VLSI, with MOS VLSI as a spin-off that eventually ate the industry.

ECL for super-high-performance applications ran along a parallel, slightly slower path. ECL was always a power-hungry monster and it was obvious it was eventually going to be overtaken by improvements in VLSI - although this didn't happen definitively until the 90s.

The low cost of the 6502 wasn't just about low cost as a number; it was about consumerisation and personalisation. A 6502 8-bitter was a very slow, limited machine - but it was your slow, limited machine. You could buy it retail and all of its cycles were yours.

Smaller PDP-11s could also be personalised, and eventually you could buy LSI versions of the architecture. But the sales channels were almost exclusively corporate/industrial/academic - and so were the prices. (Heathkit actually sold an LSI-11 kit, but it was priced at $$$$$ and hardly anyone could afford it.)


> Smaller PDP-11s could also be personalised, and eventually you could buy LSI versions of the architecture. But the sales channels were almost exclusively corporate/industrial/academic

One of the great missed opportunities. I remember drooling over the DEC VT-103 in late 1979. This was a VT-100 terminal with an LSI-11 inside along with a TU-58 tape drive (one of the variety of strange little tape drives DEC were so fond of). It ran RT-11 (the inspiration for CP/M), which was a mature system with lots of software and good documentation. This would have made a really sweet personal computer with floppies instead of the TU-58. Had DEC marketed it as such back in 1979, it might have forestalled the IBM PC. I wanted one really badly, and the pricing was not that bad for the time, but there was no way to get one as an individual. They were only sold to large companies or integrators for factory automation projects and such like.

Two years ahead of the IBM PC DEC had a factory churning out better systems with great software and all the development costs already paid for. Imagine what might have happened if they had thought to actually sell them to anyone who wanted one.


> such a CPU could have been done in less than 30x30 cm.

That's far too big for just the CPU of a home computer. Look at the form factors of the C-64, the Apple ][, the TRS-80, the ZX Spectrum, etc. You had to be able to take it home and plug it into your TV. Something the size of a mini fridge is not really a 'personal' device.


30x30 cm would fit comfortably in those cases, so put it on a riser card and only suffer an extra couple of inches of thickness.

Of course the manufacturing price would suffer. I don't see someone making an LSI-populated PCB for $25.


Most of these don't have two sides that are 30 cm. The only reason the Apple ][ does is the expansion slots.


The price of several of them also very much depended on the low cost of assembling small, simple boards.

Commodore's price points would have been impossible with larger boards and more components, for example.

The Apple II was crazy expensive in comparison to many of the other 6502 based computers.


The Z80 was around $65 when the $25 6502 came out.


And the 6800 was $175, and with the $ exchange rate of the time it was unaffordable here.


The Amp Hour podcast had Chuck Peddle as a guest a few years back. Definitely worth listening to. https://theamphour.com/241-an-interview-with-chuck-peddle-ch...


Here is an amazing video on how to build a computer based on 6502 on a breadboard, and program it in binary to get a Hello World equivalent. I highly recommend it: https://m.youtube.com/watch?v=LnzuMJLZRdU



Also discussed here: https://news.ycombinator.com/item?id=21847718. I feel like it's fine to have two.


The 6502 in my Apple IIe was a great learning experience until I fried it trying to generate an NMI. Had to spend $$$ to replace it with the 65C02. I really liked that there were so many useful technical resources.

Things that bothered me then and still bother me now: the Apple's horrible hi-res graphics memory mapping/color implementation, which made it really hard for an inexperienced developer to do high-speed line graphics.


>Virtually all of the early, successful, mass-market personal computers were built around the 6502

Except for the TRS-80! And that's a big except. :-)

Now that I think about it, the only mass-market PCs using the 6502 that I can think of were the Apple and the Commodore.

And I suspect the TRS-80 outsold both of them together.


FYI, I think that $25 6502 chip they refer to was released in 1975, and according to this site[1], $25 then would be ~$120 today.

[1] https://www.usinflationcalculator.com/


The price fell precipitously over just a few years, too. By 1978 it was going for about $8 ($31 today) for large quantities, and by 1981 you could purchase them for $4 ($15 today) individually and well under $3 in bulk.


And $25 was an order of magnitude cheaper than the available alternatives.


The 6800 was a $300 chip for comparison.

[edit] The original Apple I was $666.66, which tells you how much the $25 6502 made it possible. [/edit]


I think HN definitely needs an obituaries section. I'm really not kidding.


Probably the greatest chip designer of all time.





