Lisp Interpreter in Tandy TRS-80/100 Basic (facebook.com)
105 points by franze on Dec 31, 2019 | 54 comments



Hi... I'm the guy who ported this, which was actually not too difficult, as the original gentleman, to whom all honour belongs, left good hints. Most "issues" were really just IFs without THENs. There were only two errors I saw in Ira Goldklang's source: a wrong line number and a THRN instead of THEN. This is a link for those not on Facebook: https://pastebin.com/2Mih71eF


Any chance of putting it on github, perhaps even starting with the original?


Chaire o eschaton!

As per your request:

https://github.com/KedalionDaimon/LispForTandy


From the author:

>This one is just for fun. When I first learned Lisp in 1981, I did so by reading Winston and Horn's book and writing my own Lisp interpreter. The interpreter was written in Basic on the only system I had access to at the time: a TRS-80 Model I. This interpreter eventually ended up as part of a series of three articles on Lisp that I wrote for 80 Micro, a TRS-80 hobbyist magazine. The first part of the series, which contains the source code for the interpreter itself, is included here. Note that the listing has all optional spaces removed so that I could fit both the interpreter and 1100 (!) cons cells into 16K (!!) of memory.

http://mypage.iu.edu/~rdbeer/


Oh hah, I know this guy from Case Western CS dept. Around 1990 he was writing some Lisp on a TI Explorer lisp machine to simulate cockroach neurons for an AI project.


His pdf of the original article is there too: http://mypage.iu.edu/~rdbeer/Software/BasicLisp/BasicLisp1.p...


If we're willing to extend the discussion to microcomputer Lisps targeting machine language, I'll repost an old comment https://news.ycombinator.com/item?id=20019592 :

> There were various Z80 and MOS 6502 microcomputer Lisp implementations written back in the '80s, some commercially released at the time, some readily accessible nowadays. https://www.wisdomandwonder.com/link/3787/how-small-can-a-sc.... https://groups.google.com/forum/#!topic/comp.lang.scheme/Z_2....

> (There's one that I'm horribly, horribly overdue to get back to the authors about archiving on the Internet, actually. Anyone who's based in North America and happens to have the hardware and the expertise to recover data from both 5.25" and 8" floppies in some CP/M format?)


You might check out the ClassicCmp (http://www.classiccmp.org/) or Vintage Computer Foundation (http://vcfed.org/wp/) mailing lists. There are folks on there with extensive expertise and equipment for recovering and archiving data.


Thanks!


I've got access to an 8" drive as well as plenty of 5.25" drives, and the hardware to do flux images. There are others with access to similar hardware. As kjs3 suggested, contacting people via the classiccmp and vcfed lists is a good idea.


Funny, my current job involves a lot of coding in BASIC (BASIS and ProvideX commercial flavors) and I'm trying to wedge in some Racket where I can. I got my start typing in listings from Rainbow magazine at age 10. (I never thought I'd be writing BASIC for a living, though; life took a turn at 35.)


> BASIS and ProvideX commercial flavors

Looks like these are both "business" BASIC variants. Wiki has some information about ProvideX, nothing about BASIS (either the company or the BASIC product). What's the point of coding in these sorts of languages at the dawn of the 2020s? Do you still have a lot of legacy code around that can't be forward-ported for some reason? Do you have non-technical folks surveying the code, or even contributing to it?


Business BASIC was all the rage in the 1970s, especially for small to medium-sized businesses, and for good reason. It provided business functions like forms and database access in a manner that was much, much easier to work with than the dominant business language of the time, COBOL. It also had sophisticated programming constructs. For example, while it didn't have structured exception handling like Java, it did have Go-style mandatory error handling; every command that performed I/O had to be passed, as required parameters, the line numbers to branch to under various error conditions.
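
Roughly like this, if my memory of the Basic Four/BBx style serves (the file name, key, and line numbers below are all invented for illustration): ERR= names the line to branch to on a general I/O error, and DOM= the line to take when the key isn't on file.

    0010 OPEN (1,ERR=9000)"CUSTOMER"
    0020 READ (1,KEY=K$,ERR=9000,DOM=9100)N$,B
    0030 PRINT N$,B
    0040 STOP
    9000 PRINT "I/O ERROR ON CUSTOMER FILE"
    9010 STOP
    9100 PRINT "NO RECORD FOR KEY ",K$
    9110 STOP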

Both BASIS and ProvideX are based on Java and provide rather thorough JVM integration. The languages themselves also haven't remained stagnant; they have approximate feature parity with Visual Basic. Both are derived from the MAI Basic Four dialect that was popular in the 1970s.

And yes, some large complicated business software, including the MAS 90 and MAS 200 accounting systems, was built using Basic Four. Porting such a complicated code base would be cost-prohibitive compared to porting the Basic Four dialect of Business BASIC to new environments and just running it as is -- so Business BASIC is COBOL-like in that regard also.


> "Both BASIS and ProvideX are based on Java and provide rather thorough JVM integration."

I'm aware of this evolution. However, the software at my company was not maintained to keep pace with it. Instead it was endlessly customized within the constraints of the original interpreter. At this point, updating the software to use newer variants even by the same companies would take roughly the same effort as rewriting it in Python, for far less reward.


Business BASIC was all the rage in the 1970s, especially for small to medium-sized businesses

The first piece of software I ever wrote and sold commercially was in Business BASIC. It was what we would call a CRM these days, though massively scaled down compared with what we have today. I did it for a limousine company. This was around 1982, IIRC.


You said it, legacy. Our ERP is essentially a folder of three hundred or so programs that all `RUN` each other and operate on flat files. The coding is mostly maintenance, but we sometimes need to add new programs/modules/menus due to changes in the business. We will be replacing it at some point; hard to say when. I won't miss trying to understand a 30-year-old program with line numbers and two-letter variables, but I will bet money that whatever the new system is will be slower and more cumbersome for users.


Does 'RUN'ning a program keep global variables and the like around (like CHAIN)? If not, then those pieces of code are self-contained and could be replaced gradually by something saner.
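
For reference, in the Microsoft-style BASICs the distinction looks like this (program name invented): RUN starts the next program with a cleared variable table, while CHAIN plus COMMON carries the listed variables across.

    10 COMMON T$, TOTAL     ' these two survive into the chained program
    20 CHAIN "POSTING"      ' vs. RUN "POSTING", which clears everything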


Only if "sane" === zero cost.

Outside of the SV bubble, it's not common for companies to migrate working systems to something new unless the benefits clearly outweigh the costs, and there's nothing better on which those costs could be spent.

There are a number of stories online, on HN, and in the business press where companies "upgraded" from legacy systems to something "saner" and it ended up costing more time, money, and customers than it was worth.


I wish! No, frequently programs will rely on variables defined in a previously-run program. Believe me, I've had the same thought.


Provided the interpreters(/compilers) are still being updated and maintained, what disadvantage would writing in BASIC have?

ProvideX sounds like it's now Sage Group's ERP extension/plugin language. Like SAP's ABAP, Microsoft's "AL", etc.


The flaws are in the language itself.

It relies on line numbers, GOTO, GOSUB, and RUN/CALL for flow control, and all variables are global. This makes it extremely difficult to reason about code and make changes without breaking anything. A modern language would have functions, objects, and modules with descriptive names, scoping, and, you know, whitespace. Oh, and it would be stored in a normal text file; the source code for these BASIC interpreters is stored in a binary tokenized format, which makes it difficult or impossible to use version control systems like Git. (I do have something jury-rigged in place for this, but it's very ugly.)

Additionally the data is stored in flat files. So a "record" (in this system at least) is basically a really long string with the fields stored in specific places in that string. If you need to change the data structure you have to update the hardcoded substring references in every single program that accesses the file. (Often you are better off defining a new file and cross-referencing it with the others.) There is no relational aspect; what would be a two-line SQL query becomes 40-100 lines of BASIC code where you zip through the whole file record by record, often involving building an intermediate temporary sort file and then zipping through that.
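
To make that concrete, here's a hypothetical sketch in generic Microsoft-style BASIC (file name and field offsets made up): what SQL would express as SELECT NAME, BALANCE FROM CUST WHERE STATE = 'OH' becomes a full scan with hardcoded substring positions.

    100 OPEN "CUST.DAT" FOR INPUT AS #1
    110 IF EOF(1) THEN 150
    120 LINE INPUT #1, R$
    130 IF MID$(R$, 61, 2) = "OH" THEN PRINT MID$(R$, 1, 30); " "; MID$(R$, 41, 10)
    140 GOTO 110
    150 CLOSE #1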

In essence: BASIC, especially this older flavor, is such a "simple" language that using it for anything non-trivial is little better than writing in assembler.


> Oh, and it would be stored in a normal text file; the source code for these BASIC interpreters is stored in a binary tokenized format, which makes it difficult or impossible to use version control systems like Git.

In some BASIC implementations, it was quite possible to 'LOAD' and 'SAVE' programs as plain ASCII text by adding the appropriate incantations somewhere, even though the default representation was tokenized. I'm not sure if the "Business" BASICs you're using can do this.
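
For what it's worth, in GW-BASIC and its relatives the incantation is the ,A switch on SAVE (file name invented here); whether the Business BASICs offer an equivalent, I don't know:

    SAVE "PROG.BAS",A    ' writes a plain-text listing instead of the tokenized binary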

> It relies on line numbers, GOTO, GOSUB, and RUN/CALL for flow control, and all variables are global. This makes it extremely difficult to reason about code and make changes without breaking anything.

Well, I'd say less "difficult", and more like "impossible absent a tedious reverse engineering effort". Figure out all the jump targets and all the subroutines, and document the globals that each relies on/modifies. Then you can treat your subroutines as something a bit closer to plain functions. (They'll still be non-reentrant, however, meaning the subroutines cannot recurse in general, not even indirectly.)
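
A tiny made-up example of what that documentation buys you. Treat the routine at 500 as "reads N, returns R, clobbers I", and callers can use it almost like a function; but since N, R, and I are the only copies anywhere, the routine can never invoke itself:

    100 N = 5 : GOSUB 500 : PRINT R    ' "call" with argument N, result in R
    110 END
    500 REM FACTORIAL: READS N, WRITES R, CLOBBERS I (ALL GLOBALS)
    510 R = 1
    520 FOR I = 2 TO N : R = R * I : NEXT I
    530 RETURN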

> So a "record" (in this system at least) is basically a really long string with the fields stored in specific places in that string. If you need to change the data structure you have to update the hardcoded substring references in every single program that accesses the file.

The really old BASICs had their own notion of "record"-based file that was essentially the only kind of file that could be randomly accessed. "Sequential" files were entirely separate and only supported appending, absent language extensions.
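
In the Microsoft disk BASICs that looked roughly like this (file name and layout invented): FIELD declares the fixed record layout once, and GET/PUT then address records by number.

    10 OPEN "R", #1, "CUST.DAT", 64    ' random file, fixed 64-byte records
    20 FIELD #1, 30 AS NM$, 24 AS AD$, 10 AS BL$
    30 GET #1, 5                       ' fetch record 5 directly
    40 PRINT NM$; AD$; BL$
    50 LSET NM$ = "NEW NAME"           ' change a field in the record buffer
    60 PUT #1, 5                       ' rewrite the record in place
    70 CLOSE #1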


I was shown some ProvideX code written by a Sage MAS 200 "partner / reseller" for a custom vertical version of MAS 200 to review for a Customer. The code was a horrifying maze of numbered lines, GOTOs, and global variables. It reminded me of the Applesoft BASIC stuff I wrote when I was 8-10 years old... except this code was handling hundreds of millions of dollars of revenue. Yikes!

I ended up researching ProvideX pretty heavily as part of that work. The language was evolved from its original roots (much like Microsoft QuickBASIC / PDS evolved out of GW-BASIC). Just like QuickBASIC, though, it could still run the old line-numbered, globally-scoped, nightmare spaghetti code too. There were some neat constructs to deal with ODBC, GUI controls, etc., to be sure, but it seems like it exists now only to serve the old, rickety codebases that are in that "milk the market dry" mode (which pretty much sums up all of Sage's software, too, interestingly enough).


> Provided the interpreters(/compilers) are still being updated and maintained

That's a big question. And how much is that ongoing maintenance going to cost you, when you're critically reliant on it? It's just a single point of failure that will ultimately have to be addressed.


Pretty sure my life would have turned out differently if I'd seen some Lisp listings in Rainbow. I do recall William Barden Jr. making fun of parentheses once.


Facebook has to be the least convenient way this could be hosted. Here's a mirror: https://pastebin.com/raw/Yq5khyQL


Facebook has to be the least convenient way this could be hosted

Compared to the rest of the internet, yes. But for the Model 100 people, this is actually the cream of the crop.

Sadly, information in the Model 100 community is notoriously difficult to get to. Well, not fully "difficult." More like "obnoxious."

Most of the good stuff is on old web sites with many broken links, little documentation, and abysmal navigation. I think the main repository is actually the web site of a guy who died years ago and is kept online, but unmaintained and suffering from severe bitrot, by people about to go the same way.

For interactive discussion, the primary source is a mailing list where people ask the same questions over and over and nobody trims their replies, so you might get a message reply that just says "yep" followed by 150 lines of nested quotes. In digest form, it's simply unusable.

So, amazingly, a Facebook post is a thoroughly modern and convenient way for Model 100 enthusiasts to pass information around, compared to all of the other methods they employ.


> Most of the good stuff is on old web sites with many broken links, little documentation, and abysmal navigation. I think the main repository is actually the web site of a guy who died years ago and is kept online, but unmaintained and suffering from severe bitrot, by people about to go the same way.

As long as the site(s) have been grabbed by Internet Archive, I'm not sure that this has to be an issue. (I've noticed that the archive.org 'Wayback Machine' will even archive some downloads, though that's not something that should be counted on.)


I'm not sure that this has to be an issue

It doesn't "have to," but sadly it is. The internet, like the rest of the world, is not a perfect place. If IA doesn't find a web site before the bitrot sets in, then what? There are huge swaths of the web that are not in IA, and from what I read on HN, IA doesn't have the capacity to archive.


I don't want to speak ill of them because they're doing an amazing job, but the Internet Archive doesn't have everything (especially old, long-gone and/or obscure sites), isn't always complete, and often doesn't have things that were on FTP sites back in the day. Sometimes you get lucky, but I've not had great luck there.

If you're maintaining some repository of information others find useful, please-please-please have a plan to pass it on to a successor caretaker and a notation in your will about your wishes.


I'm well aware of that, sadly. But often, part of "maintaining" a valuable repository of information (especially one that has reached stability and is not seeing ongoing updates) is uploading it to the Archive yourself. The Internet Archive has "trouble" dealing with information in the gigabytes and terabytes (which is why donations to them are valuable) but retrocomputing stuff is not like that.


part of "maintaining" a valuable repository of information... is uploading it to the Archive yourself

The problem is that we're talking about information that started suffering from bitrot before the Internet Archive became a big thing. Or even before the Internet Archive existed. Heck 99.999% of it existed before the internet.

Sure, it would have been great if on May 13, 1996 someone would have uploaded all of the information that there has ever been about the TRS-80 Model 100 to IA, but that didn't happen.

So often in places like HN the answer to flaws is "Well, you should have...," as if that solves the problem, or makes it go away. But it doesn't. It just makes the person writing that look like a jerk who isn't interested in solving problems, but in pointing out the failings of others.


Facebook posts are the persimmons of bitrot...


Printed in the magazine and having to type it in was significantly less convenient than Facebook.


Depends upon perspective.

You can walk into a shop and purchase a magazine, and the shop owner wouldn't know what you read or when you read it. Equally, the outlay would be just for the magazine, compared to an internet connection, a computer capable of browsing the internet, and then still having to transfer it to the TRS-80/100.

Then there was the important aspect many miss out on today: when you typed something in from a magazine, you got to know the code, rather than just running a program. You made mistakes, and often the magazines would have errata sections to cover typos from the last edition, and with those mistakes you learned even more than by just hitting run.


Yeah, I'm not supporting Facebook, just stating that the physical act of typing it all in vs. copying from a Facebook post is not as easy. I have no Facebook account. I typed many programs in from Compute! and other magazines back in the day. I miss Byte magazine most of all. I may even try this on my Model 100 at some point, just for kicks.


The TRS-80 Model 100 has an RS-232 port, so a null-modem cable should be the only hardware needed for the transfer.


And for convenience, a terminal program. It makes debugging the connection to the Model 100 a lot easier. I use Serial on macOS. I don't remember what it costs, but it couldn't have been much.

The Model 100 supports RS-232 connections up to (IIRC) 19.2k baud, but mine gets flaky at that speed, so I keep it at 9600. That's more than fast enough for the tiny files the 100 uses. It's the next-door neighbor to instant.
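
On the Model 100 end, BASIC itself can receive straight into a RAM text file; something like the sketch below, though the "COM:88N1E" settings string is from fallible memory (check the manual) and the *EOF* marker is just an invented end-of-transfer sentinel the sender would emit last:

    10 OPEN "COM:88N1E" FOR INPUT AS 1    ' 9600 baud, 8 data bits, no parity, 1 stop
    20 OPEN "LISP.DO" FOR OUTPUT AS 2     ' RAM text file to receive into
    30 LINE INPUT #1, L$
    40 IF L$ = "*EOF*" THEN 70            ' invented sentinel; sender emits it last
    50 PRINT #2, L$
    60 GOTO 30
    70 CLOSE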


Printed in the magazine and having to type it in was significantly less convenient than Facebook.

But, apparently, infinitely more resilient!


Looks more or less like "standard" MS-Basic... Shouldn't be too hard to get it running with vintage-basic.net?


> `PRINT#-1`

... on line 4500 looks to be some kind of idiom... I don't see a corresponding `OPEN` statement?



That might send output to the printer. I know on the TRS-80 Color Computer, the printer was #-2.
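
If I remember the Model I/III convention right (worth double-checking), negative device numbers were built in and needed no OPEN at all, and #-1 was the cassette. So the mystery line is probably just writing to tape, something like this (A$ standing in for whatever the listing actually outputs):

    4500 PRINT #-1, A$    ' Model I/III: write to cassette; no OPEN required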


I have one of these, with printer. I should really dig it out one of these days. It's from before my time but something about cassettes and thermal printers piques my interest.


You know what else was written in BASIC? The original Smalltalk interpreter https://youtu.be/pACoq7r6KVI?t=1189 . ("I'm not ashamed, I got it going quickly" - Dan Ingalls.) Obviously this was just a scratch implementation of what became Smalltalk-72, rather than a Smalltalk-80.


Blogpost reposting content from 80 Micro Magazine (published sometime in 1982). I assume that the original issue is somewhere in the archive.org collection.


Good thinking. Looks like it might be this: https://archive.org/details/80-microcomputing-magazine-1983-...


From looking at the contents listing of this issue (400+ pages!), this looks like it might be Part 1 of a LISP tutorial that was perhaps continued in subsequent issues? I wonder if archive.org has got it all.



I'd wondered if there was a Color Computer variant and indeed there was/is: From Hot Coco, April 1984 http://www.colorcomputerarchive.com/coco/Documents/Magazines...

Now to test it out.


Hi everybody,

I put it also here as per eschaton's request:

https://github.com/KedalionDaimon/LispForTandy


There was a TRS-80 with a 68000...

Also, I know that ELIZA was available for TRS-DOS (which I think ran on the Z80); wasn't that based on Lisp?


Broken link, can't access without an account.


I can’t access it even with an account.



