My first paid programming job was writing a Logo Adventure program for C64 Terrapin Logo. They wanted a simple non-graphical game that showed off Logo's list processing capabilities, and I just used Logo's top level interpreter as the parser! That saved a lot of coding, but it did let you cheat (aka "learn to understand and program and debug Logo").
Here is the source code to LLogo in MACLISP, which I stashed from the MIT-AI ITS system. It's a fascinating historical document, 12,480 lines of beautiful practical lisp code, defining where the rubber meets the road, with drivers for hardware like pots, plotters, robotic turtles, TV turtles, graphical displays, XGP laser printers, music devices, and lots of other interesting code and comments.
> SYNSTAX ... A STACK OF SYNTAX PROPERTIES
If there were a "stupid-clever variable names" hall of fame, that one would certainly make the cut.
(OR (BOUNDP 'BIBOP) (SETQ BIBOP ITS))
;;THE TURTLE PACKAGE IS GOING TO EAT LOTS OF FLONUM SPACE, SO IN BIBOP LISP, ASSURE
;;THAT ENOUGH WILL BE AVAILABLE.
(AND (MEMQ 'BIBOP (STATUS FEATURES))
(ALLOC '(FLONUM (2000. 4000. NIL) FLPDL 2000.)))
BIBOP is the dynamically expandable version of MACLISP, the SAIL standard MACLISP. Essentially, the main advantage of BIBOP is that whenever one of the expandable spaces runs out of space, BIBOP requests a larger core allocation from the monitor and the delinquent space grows in the allocated memory.
December 1973; updated March 1974
The (in)famous "Bibop" (pronounced "bee-bop") LISP scheme has been available for some time now and seems to be more or less reliable. Bibop means "BIg Bag Of Pages", a reference to the method of memory management used to take advantage of the memory paging features of ITS. The average LISP user should not be greatly affected in converting to this new LISP (which very eventually will become the standard LISP, say in a few months).
WEDNESDAY FEB 13,1974 FM+7D.3H.28M.38S. LISP 746 - GLS -
AS OF VERSION 746, LISP SYSTEMS OF THE SAME VERSION WILL SHARE
PAGES AMONG THEMSELVES; I.E. A LISP, MACSYMA, AND CONNIVER CAN
ALL SHARE PAGES COMMON TO THE LISP SYSTEM.
BIBOP LISP HAS BEEN GREATLY RESTRUCTURED; I WILL RE-EDIT THE BIBOP
DOCUMENT AS SOON AS I CAN (SAY WITHIN THE NEXT WEEK). -- GLS
There's nothing easier on the eyes than some good old fashioned PDP-10 Midas assembler MacLisp source code!
I didn't follow every last instruction, but it looks like it used a 91-element (or maybe 92-element) LUT of 4-byte floats, split into two halves. That allows a direct lookup of the sine of every whole-degree angle in the first quadrant (0 to 90 degrees), plus the value for the next 1-degree angle.
After looking up the two precalculated values, linear interpolation appears to be performed between them.
(Also, the FP numbers are in a format that must have been easy to do software arithmetic on. Leading byte is exponent, with bias of 126; next 3 bytes are sign and mantissa, with no implied 1-bit. This has just a hair less precision than a modern 32-bit 'float'. 7A 47 7C 2D is the entry for SIN(1), and is equal to 0.017452407... while double precision SIN(1) is 0.0174524064372835...)
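Assuming that layout (exponent byte with bias 126, then a sign bit and a mantissa fraction with no implied 1), here's a sketch of a decoder. The exact bit placement is a guess on my part, but it happens to reproduce the quoted table entry:

```python
import math

def decode_logo_float(raw):
    """Decode a 4-byte LOGO float, assuming the layout described above:
    byte 0 = exponent with bias 126; bytes 1-3 = sign bit followed by
    a mantissa fraction (of 2^24) with no implied leading 1.  The exact
    bit placement is a guess that reproduces the one quoted entry."""
    exp = raw[0] - 126
    field = (raw[1] << 16) | (raw[2] << 8) | raw[3]
    sign = -1.0 if field & 0x800000 else 1.0
    mantissa = (field & 0x7FFFFF) / float(1 << 24)
    return sign * mantissa * 2.0 ** exp

# The SIN(1 degree) entry quoted above:
v = decode_logo_float(bytes([0x7A, 0x47, 0x7C, 0x2D]))
ref = math.sin(math.radians(1))
# v agrees with ref to roughly 8 decimal places
```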
Next, I bodged together a version of what I assume the linear interpolation step does. In both double precision and simulated LOGO precision, the maximum relative error is about .00005 and the maximum absolute error is about .000045, which would be awful hard to detect when plotting to an entire display with about as many pixels as an application icon on your modern high-dpi telephone. On the other hand, that's only 4 correct digits.
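A minimal sketch of that interpolation scheme (with the table rebuilt from math.sin rather than the original bytes, and my own guess at the indexing):

```python
import math

# 91 whole-degree entries cover the first quadrant (0..90 degrees);
# entry 90 serves as the "next angle" when interpolating near the top.
TABLE = [math.sin(math.radians(d)) for d in range(91)]

def lut_sin(degrees):
    """Table lookup plus linear interpolation for 0 <= degrees < 90;
    a guess at what the LOGO routine does."""
    whole = int(degrees)
    frac = degrees - whole
    return TABLE[whole] + frac * (TABLE[whole + 1] - TABLE[whole])

# Worst-case absolute error over the quadrant, sampled every 0.1 degree,
# comes out to a few parts in 10^5, consistent with the figures above.
worst = max(abs(lut_sin(d / 10) - math.sin(math.radians(d / 10)))
            for d in range(900))
```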
This is an interesting contrast to the Commodore BASIC implementation of sin/cos that I'm familiar with. It used a fairly high degree polynomial, which might have gotten a few more accurate digits (5 digits for sin(1.5 degrees)!) but is sure to be quite a bit slower than 2 table lookups, a multiply, and a few add/subtracts. (plus range reduction)
0.0261759515 Estimated Apple LOGO SIN(1.5 degrees)
0.026176948307873153 Modern Double Precision SIN(1.5 degrees)
0.0261769483 Commodore 64 BASIC SIN(1.5 degrees)
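For contrast, the polynomial approach can be sketched like this (illustrative textbook Taylor coefficients; Commodore's ROM uses its own fixed constants, which differ slightly):

```python
import math

def poly_sin(deg):
    """Five-term polynomial for sine, evaluated Horner-style.
    Illustrative only: these are Taylor coefficients, not the
    actual constants from the Commodore BASIC ROM."""
    x = math.radians(deg)   # assumes the caller already did range reduction
    x2 = x * x
    return x * (1 - x2/6 * (1 - x2/20 * (1 - x2/42 * (1 - x2/72))))
```

At 1.5 degrees this agrees with double precision to far more digits than the two-entry interpolation, at the cost of several multiplies and adds plus the range reduction.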
As a native English speaker I feel fairly ignorant of how programming is taught in other (human) languages. I recall a quote by Eric S. Raymond commenting (or perhaps speculating?) that Linus commented his kernel code in English because it never occurred to him to do otherwise. But I'm curious about your perspective, as I imagine the experience of a child learning from a book is quite different from that of an adult who is immersed in the English-dominated "hacker culture" on the internet.
We started with a language called Portugol or "structured Portuguese" which is something like a formalized pseudo-code language and was used to teach the basic concepts of structured programming: variables, control flow, subroutines.
We spent about three months working with Portugol exclusively, never actually running it on a computer, but just for understanding programming logic. After that we moved to C and eventually "C with classes" C++. Comments were still written in Portuguese.
The transition to C was fairly easy in my experience, because the actual English vocabulary you need to understand to program in C is very small, and because the concepts themselves were taught in Portuguese with Portugol, so you only had to do dictionary translation, really.
We never compiled or ran Portugol (IIRC, it was so vaguely defined that you'd be hard pressed to implement it), so practical matters like how applicable it was to scientific computing or data processing was never a concern.
My dad went to Germany and bought an Atari (I forget which one exactly), and realized it was all in German. Since he didn't know any German, he learned enough of it to navigate the UI.
In other words, the time cost of learning German was less than the cost of the computer + the trip to Germany. Basically necessity.
There are plenty of non-English speaking Westerners, you know? :D
repete 4 [ avance 100 tournedroite 90 ]
repete 4 [ av 100 td 90 ]
It was the same for me. I started learning in VB.net and then Java/Python. The English syntax never bothered me because I started learning English when I was 7 years old.
Trying to code in another language would just be a nightmare. So it makes sense to keep the syntax in one language (English in this case) for consistency.
So when learning programming it did not strike me as 'weird' that the language was English.
Nowadays, because of online gaming, it is probably even less of a problem for kids.
I'd hazard a guess that Linus deserves credit for thoughtfully writing Linux in English on purpose, rather than it just "never occurring to him to do otherwise".
On the other hand, Yuri Takhteyev wrote a book called "Coding Places: Software Practice in a South American City" (MIT Press, 2012), about the social side of software and the culture surrounding the Lua programming language, developed at the Pontifical Catholic University of Rio de Janeiro in Brazil. And he actually spent three years interviewing the developers to find out what the facts were, instead of speculating.
The name of the language Lua is Portuguese for "Moon", and the developers' native language is Portuguese, but the Lua language, code, documentation, mailing lists, and papers themselves are all in English.
He interviewed the developers and members of the global community, and examined why it had become so successful (modulo [or in spite of] the fact that it was extremely well designed and implemented, of course). His conclusion was that the developers made a conscious decision to write the code, comments, and documentation in English instead of Portuguese, because English was the de-facto language to use if you intended to foster an international global community around the language, which they did.
Coding Places: An Interview with Yuri Takhteyev
Luis Felipe R. Murillo, April 15, 2014
In “Coding Places: Software Practice in a South American City” Yuri Takhteyev depicts a group of developers from Rio de Janeiro working on software projects with global aspirations. His ethnography, conducted in the span of three years, provides rich detail and insight into the practice of creating a programming language, Lua, and struggling to form local and global communities. In his narrative, Takhteyev sets off with a task that is particularly akin to anthropological studies of globalization: to specify socioeconomic and political forces shaping localities and creating instances of production and circulation of transnational scope. We asked him a few questions related to the book and his research on the topics of globalization, computing expertise, and politics of information technology. Enjoy!
Coding Places. Software Practice in a South American City. By Yuri Takhteyev
An examination of software practice in Brazil that reveals both the globalization and the localization of software development.
Software development would seem to be a quintessential example of today's Internet-enabled “knowledge work”—a global profession not bound by the constraints of geography. In Coding Places, Yuri Takhteyev looks at the work of software developers who inhabit two contexts: a geographical area—in this case, greater Rio de Janeiro—and a “world of practice,” a global system of activities linked by shared meanings and joint practice. The work of the Brazilian developers, Takhteyev discovers, reveals a paradox of the world of software: it is both diffuse and sharply centralized. The world of software revolves around a handful of places—in particular, the San Francisco Bay area—that exercise substantial control over both the material and cultural elements of software production. Takhteyev shows how in this context Brazilian software developers work to find their place in the world of software and to bring its benefits to their city.
Takhteyev's study closely examines Lua, an open source programming language developed in Rio but used in such internationally popular products as World of Warcraft and Angry Birds. He shows that Lua had to be separated from its local origins on the periphery in order to achieve success abroad. The developers, Portuguese speakers, used English in much of their work on Lua. By bringing to light the work that peripheral practitioners must do to give software its seeming universality, Takhteyev offers a revealing perspective on the not-so-flat world of globalization.
As the person who translated the Pascal Logo model to TI9900 assembler, I can say it was definitely the case that at the beginning of the project I translated every line of Pascal into a few instructions of assembler. By the end I was just looking at the gist of what a routine would do (its “contract”, as they say) and writing it in assembler as you would new code.
It’s much like translating a literary work. There’s a famous quotation (which I’m not comfortable relating in full) about translations being either faithful or beautiful but not both.
Essentially maybe I became a fluent native speaker of 9900 assembler.
I do have the sources for both those, but only printed not electronic media.
BASIC was built in, so I ended up going back to that. I learned Logo, Pascal, and BASIC, but BASIC being built in, and not having access to Pascal at home, made my choice for me.
I did return to logo in high school on macs for some neat fractal and equation graphing programs.
I recall seeing LOGO and turtles in some of the magazines but never got the chance to try em out. Back then I used to try and just type out snippets of what in hindsight was just pseudo-code from all sorts of places hoping it worked, then attempted to enter all the machine code for an assembler that came in a big red binder (didn't work, despite checksums, but you also had to write the 'editor' so I stuffed that up I suspect).
I don't recall exactly what it was that I didn't like about LOGO back then. One thing probably was that I needed to first boot up into a "special environment" to run my code, so it didn't feel like I was writing a real program for the machine. Another might have been the emphasis on moving a turtle around, versus actually doing anything with text.
>Alan Kay on "Etoys, Alice and tile programming": "This particular strand starting with one of the projects I saw in the CDROM "Thinking Things" (I think it was the 3rd in the set). This project was basically about being able to march around a football field and the multiple marchers were controlled by a very simple tile based programming system. Also, a grad student from a number of years ago, Mike Travers, did a really excellent thesis at MIT about enduser programming of autonomous agents -- the system was called AGAR -- and many of these ideas were used in the Vivarium project at Apple 15 years ago. The thesis version of AGAR used DnD tiles to make programs in Mike's very powerful system."
A souped-up Apple II at that time would have had 64 KB of RAM, and 140 KB of storage per floppy disk.
I guess you needed a PDP-11 to do any real work in those days.
Earlier assemblers (and the later ones!) also typically supported the ability for one source file to reference another, which lets a large assembly chain itself together without needing the whole thing in memory at once.
At runtime, you could use overlays to essentially load in program modules in and out of RAM dynamically so you'd be surprised at the (relative) sophistication of some software packages back then!
Per the link, the file was edited and built on a PDP-10 running ITS, which was a 36-bit machine with an 18-bit address space.
Interestingly, almost everything else I've seen from ITS uses the 6 bit upper-case-only encoding. This is pretty clearly ASCII.
What have you seen from ITS?
Here's the disk image:
Here's a web-based emulator to run said disk image:
Here's the manual:
This is the tool that introduced me to the notions of recursion and fractals when I was 11.
I remember spending two weeks cobbling up a Logo program to draw the dragon curve on my Apple IIe
The version I wrote can read values from a spreadsheet, so could be used in other applications too. I think this kind of thing could be a great way for a young person to go from a homework project to the front of Hacker News. If any of you have kids, I'd be happy to help gather the data for some other subway networks!
Also, where should I contribute old Apple II software? I recently used ADTPro to copy a lot of old programs onto my MacBook Pro as DSK images.
I've also got a larger selection of DD floppies, HD floppies, JAZ disks, SCSI disks, IDE disks, MacFormat and other CDs. Please tell me if you want the links!
I'd love to get my hands on an actual physical Logo turtle.
APOUT: AND #$7F ;eat Apple idiot char codes, type Ascii
They don't like Apple's "idiot" character encoding (which is custom, Apple II specific), so they throw away the top bit with the AND leaving only ASCII characters.
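The effect of that mask can be sketched in a couple of lines of Python (0xC1 is the Apple II high-bit form of ASCII 'A'):

```python
# Apple II text normally has the high bit set, so 'A' is stored as 0xC1
# rather than ASCII 0x41.  AND #$7F clears bit 7, leaving plain ASCII:
apple_byte = 0xC1
ascii_byte = apple_byte & 0x7F   # same effect as the 6502 "AND #$7F"
assert chr(ascii_byte) == "A"
```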
ASCII is a character encoding, and it's a 7-bit encoding. It has no code pages.
There are many 8-bit encodings / code pages that are ASCII supersets (including the Apple II one), however.
It would be ambitious to make a self hosting 6502 Logo meta assembler, by porting the entire 6502 assembly language Logo source code to the 6502 Logo Assembler!
I'm asking because of reaperducer's comment from just a week ago: https://news.ycombinator.com/item?id=18086566