Early 80's: TRS-80 / Apple / C64 BASIC, Z-80, and 6502 assembly-language via books and the wealth of magazines available.
Mid 80's: Turbo Pascal via a local CP/M community and a couple of books. I did take an introductory Pascal class.
Mid/late 80's: C, 80x86 assembly-language, and AWK from books and magazines and by necessity at my job.
Beyond coding I picked up development strategies ... version-control, team coding strategies, testing methodologies, ...etc. initially from the job, supplemented by reading material.
2) I was 15 when I started learning BASIC in 1980.
3) I was 21 when I got my first full-time programming job, but it was unrelated to my self-taught skills at the time. I had taken vocational training in IBM mainframe ( COBOL, Assembler ) technologies and worked in those for about a year-and-a-half before I moved on to a new-to-the-company MS-DOS team ( programming initially in C ).
"BANCStar actually came with a "screen generator" that was supposed to be used to construct applications. But the 5.1c version of the generator was so limited that experienced programmers soon began to pry off the covers and modify directly the intermediate code that the run-time module actually executed."
All of the numbers you're looking at are an object code for a VM that (apparently) allowed for user-defined screens in the BancStar product. Programmers found themselves reverse-engineering the meanings of the different numbers and began to build the text-format object-code by hand.
This doesn't appear to have ever been intended to be a programming environment in this form.
I feel like this is an important point. One could easily say "omg look at this horrible programming language I have to deal with --
30001001: 8b ec
30001003: 6a ff
30001005: 68 90 10 00 30
3000100a: 68 91 dc 4c 30"
When it's just x86 and nobody is supposed to be using it directly. Of course, we have workable tools that sit on top of machine code so we don't have to, and apparently that wasn't the case for OP, but I think that's more of a statement about their inability to do the one thing that makes progress in computing possible: create abstractions.
There was certainly a time that people wrote assembler, and even machine code, directly.
People of course still do write assembler directly (but not as much as they used to!), but I don't think many have written machine code directly for a while. (Maybe for some specialized hardware?).
But yeah, it's good to note that this underlying sort of VM code wasn't intended to be written by humans... but I wouldn't say the customers were wrong, exactly, to use it successfully to do high-business-value things with the product they were locked into, things they couldn't otherwise do.
It does make one wonder how happy the customers would have been if the vendor had provided an actually intended, flexible scripting or 'macro' language, and then to think about the aspects of the market that still didn't lead to the vendor doing so.
There was a text-based GUI but the application programmers never used it. It was too limited and wouldn't let them accomplish what they needed to do, at least through the GUI. That is an interesting point: the underlying VM mechanism was more capable than could be exploited through the GUI. The method of working with text files described here had been established prior to my arrival. I was given training on the GUI, and then advised not to use it.
Why didn't anyone ever write a compiler for a saner (probably custom) language that targeted this VM for its output? Certainly that would be my first thought if the only choices were a GUI that was insufficiently capable or writing a bunch of comma-separated integers by hand.
I assume either the transition was too gradual so nobody quite realized the absurdity to a sufficient degree, or everyone was too busy running around with hair on fire getting the day's tasks done to spend any time on long-term improvement, or management forbade it as a waste of time. I'm curious which it was, or if it was something else.
You correctly identified the causes. Mostly, it was lack of time; keeping up with compliance changes is a full-time job at any bank, and there wasn't time for 'science projects'. What tools did get written were done to scratch a personal itch, and were done late at night and on weekends, outside of work hours.
I think, as I said in an earlier comment, that there was a general lack of awareness amongst the programmers on the team that anything better was possible. Most were not computer science graduates. They were professional programmers, but I'll bet money that most had never heard of trees.
Management interest evolved over time from indifferent to actively hostile. The first generation LIST annotator improved productivity of the team and reduced errors probably at least fifty percent. Management took no notice at the time. Improvements to the tool, also developed out-of-hours, finally met with angry demands from management to stop and bury the results. Shortly afterwards the entire team was laid off.
Thanks for the details. I find these environments fascinating, although I'm sure they're best observed from a great distance in both space and time.
One more question, if I may: once you got used to it, what was the productivity like? I imagine these sorts of things as taking days or weeks to make the simplest of changes, but maybe the horror I imagine becomes routine after a year or two of experience.
It took less than two weeks to become fluent (I started writing the annotator tool around that time). Probably, it was the well-developed---if informal---methodology that Annie, the experienced programmer who took me under her wing when I arrived, taught me. She made a little ceremony of it, giving me my copy of the binder that held all of the accumulated knowledge about the reverse-engineered language in its photocopied pages, showing me how she used different coloured highlighter pens to tie together things in a listing, and giving me little example programs to understand. I studied diligently; it was like working crossword puzzles. About two weeks later, I was promoting my first changes to the test environment.
Edited to add: I never got over the white-knuckle feeling of it: everything global, limited supply of working-storage variables that were constantly being re-used, and interactions that could bite from a distance. One of the most useful features of the LIST annotator was that it generated an ordered cross-reference listing that could be directly compared. That feature made choosing safe working-storage variables a much safer activity.
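To make the cross-reference idea concrete, here is a toy sketch of that pass of the annotator. The `WS<number>` token convention and the sample listing lines are invented for illustration, not the real BANCStar syntax; the point is only the technique: collect every line that references each working-storage variable into an ordered map, so that two listings can be compared directly and unused or re-used variables stand out.

```java
import java.util.*;
import java.util.regex.*;

public class XrefDemo {
    // Scan a listing for working-storage variable tokens (assumed
    // here to look like "WS<number>") and build an ordered map of
    // variable -> referencing line numbers (1-based).
    static SortedMap<String, List<Integer>> xref(List<String> listing) {
        SortedMap<String, List<Integer>> refs = new TreeMap<>();
        Pattern token = Pattern.compile("WS\\d+");
        for (int lineNo = 0; lineNo < listing.size(); lineNo++) {
            Matcher m = token.matcher(listing.get(lineNo));
            while (m.find()) {
                refs.computeIfAbsent(m.group(), k -> new ArrayList<>())
                    .add(lineNo + 1);
            }
        }
        return refs;
    }

    public static void main(String[] args) {
        List<String> listing = Arrays.asList(
                "3001,WS10,WS11,0",
                "3002,WS10,0,0",
                "3003,WS12,WS10,0");
        // Prints {WS10=[1, 2, 3], WS11=[1], WS12=[3]}
        System.out.println(xref(listing));
    }
}
```

With the output sorted by variable name, picking a "safe" variable reduces to scanning for a name with no entries in the region of interest.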
> the underlying VM mechanism was more capable than could be exploited through the GUI.
I don't think that is uncommon. VMs are (almost?) always more capable than whatever language compiles to them (as are processors). A particularly annoying example I have had to deal with is the Java VM. It has instructions to take and release a lock on an object. However, there was no language-level feature that exposed these directly, and the synchronized construct added try/catch blocks so that you released the lock in a different scope than the one where you took it out. This was especially annoying because I was working on a 'decompiler' based on the idea of compiling JVM bytecode to Java source code. It meant that I basically had to resort to pattern-matching how javac compiles synchronized, so my solution was non-generic. Also, Java for some stupid reason does not let you subclass Enum.
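For readers who haven't looked at the bytecode level, here is a small runnable illustration of the construct being discussed. The lowering described in the comment is standard javac behaviour: the `synchronized` block compiles to a `monitorenter`, the body, a `monitorexit`, plus a compiler-generated catch-all handler that also executes `monitorexit` before rethrowing, so the release sits in a scope that has no direct counterpart in the source.

```java
public class SyncDemo {
    private static final Object lock = new Object();
    private static int counter = 0;

    static void increment() {
        // javac lowers this block to monitorenter / monitorexit
        // bytecodes plus a catch-all exception handler that also
        // executes monitorexit before rethrowing. That handler is
        // what a decompiler has to pattern-match away to recover a
        // clean `synchronized` statement.
        synchronized (lock) {
            counter++;
        }
    }

    static int runDemo() throws InterruptedException {
        counter = 0;
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo()); // 4000
    }
}
```

Running `javap -c SyncDemo` on the compiled class shows the monitorenter/monitorexit pair and the extra exception-table entry directly.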
In the late 90's, I began selling several command-line programs for Windows using the shareware model. I sold a command-line SMTP mailer (MailSend), a command-line POP3 reader (MailGrab), a command-line scheduler (TSched), and a command-line Dial-up-networking disconnect utility (HangUp).
Total income over several years was in the low five-figures when bundling deals were included ( some people wanted to include these programs as part of bigger systems that they sold ). I created a successor to MailSend called MailWrench that provided better support for Microsoft Exchange Server and SMTPS (SMTP over SSL), but the market just didn't seem to be there for command-line mailers any longer.
These programs are now free-to-run software with source ( with the exception of MailGrab).
The copper was a video-sequencing co-processor that lived as part of the Agnus chip. A copperlist is a sequence of copper instructions that would execute in parallel with the main processor. Please see:
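To make "copper instructions" concrete, here is a small decoder sketch for the copper's two-word instruction format, based on the standard Amiga hardware documentation: bit 0 of the first word selects MOVE (0) versus WAIT/SKIP (1), and for the latter, bit 0 of the second word selects WAIT (0) versus SKIP (1). The example words are the classic "set background colour, wait for a scanline" copperlist idiom.

```java
public class CopperDecode {
    // Classify a two-word copper instruction by its two flag bits.
    static String decode(int word1, int word2) {
        if ((word1 & 1) == 0) return "MOVE";
        return (word2 & 1) == 0 ? "WAIT" : "SKIP";
    }

    public static void main(String[] args) {
        // $0180,$0FFF -- MOVE #$0FFF into COLOR00 (background colour)
        System.out.println(decode(0x0180, 0x0FFF)); // MOVE
        // $9601,$FF00 -- WAIT until the beam reaches scanline $96
        System.out.println(decode(0x9601, 0xFF00)); // WAIT
        // $FFFF,$FFFE -- the conventional end-of-list instruction,
        // a WAIT for a beam position that never arrives
        System.out.println(decode(0xFFFF, 0xFFFE)); // WAIT
    }
}
```

Because WAIT stalls the copper until the video beam reaches a given position, a copperlist of interleaved WAITs and MOVEs can reprogram chip registers mid-frame without involving the main processor at all.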
Betts discussed some of the hurdles they encountered, including their creation of a custom compression algorithm applied to the UTF-16 JS data so that much more could be stored on the client device.
pg - I'm curious if you've ever given thought to the idea of exposing a filter or ranking Domain Specific Language to us ... something that we could enter in a spot in our user-profile ... so that we could either choose the current ranking algorithm or could write our own.
It seems that HN may be different things to different people, so perhaps allowing the presentation to be customized might put people at ease?
I've had issues with multiple AV companies that pertained to binary-string signatures in my code. The AV companies I've dealt with all seem to have online ticketing systems that allowed for rapid correction of these situations.
A few months ago, I found that a command-line screen-capture tool that I publish was flagged as malware by multiple AV products due to behavioral characteristics.
In ScreenKap, I was experimenting with obfuscation of text-strings used by the code. I removed the obfuscation from the code and resubmitted to VirScan.org. I received a clean bill of health.
Note that I did not formally pursue this with any of the AV companies as the string obfuscation was an experiment and was nothing that needed to remain an integral part of my product. If my assumption is correct ( please note that it is an assumption ), we might be restricted to coding in the way the AV companies think we should code.