Hey, I recognize those opcodes. 6502 (Commodore, Apple II, etc.) -- cool! I had no idea what was running these things.
Time Slice Task Master (aka "cooperative multitasking without preemption")
Interesting that they set the PWM (pulse width modulation) based on battery voltage so as not to stress the motors. Cheaper than some caps & resistors for a voltage divider from a higher-voltage source, or a boost/buck controller. Still necessary for the CPU, though. If I remember correctly, this thing ran off four 1.5V cells (6V). Yay, software!
Clever use of tilt sensor to seed the pseudo-random num generator. (page A22)
Too bad the diagnostic file isn't included (diag7.asm) per page A21.
(A33) That cycle-counting timer loop brings back memories, too -- throwing in a few NOP (no-operation) calls just to eat a cycle and get the timing right on a horizontal/vertical retrace -- while here they're doing it so the service intervals on sound & motor control come out right.
(A36) LOL @ "Rap mode"
A39 - the various sensor sequences needed to trigger the modes
I gave up at about 45 pages, but there is obviously some neat stuff here.
How much time did you take to do this? I can read code, but I simply can't casually take a look at code like this and say "hey, that's cool, hey, that's interesting" -- I have to concentrate and go line by line, in more of a debugging state of mind.
There are programmers that can sort of skim read code which just amazes me.
Keep in mind that a lot of this kind of code follows a pattern: lots of EQU ("equates") statements that link a variable name to an address, like a port or a signal line.
Then you'll have the setup code that gets everything into a basic state where it's ready to run the next part.
The big loop
Here's where the action's at. Just spin in a big loop, waking up every few hundred cycles or so to check and see if a button is pressed, or the IR chip is sending a byte, or the battery is dead, etc.
If you're already doing something, keep doing it until you're done.
If you're done, wait around for a while or go do something else.
At least, that's what I come in expecting. So when I see code like this that basically fits that model, parsing it is simple.
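The shape of that big loop, sketched in Python (all the names and events here are my invention for illustration -- the real thing is 6502 asm polling hardware flags, not a dict of events):

```python
# Hypothetical sketch of the poll-everything cooperative main loop.
# Nothing ever preempts anything: each tick, we check every event
# source in turn and service whatever is pending.

def main_loop(events, max_ticks=10):
    """Spin for max_ticks, polling each invented event source per tick."""
    log = []
    for tick in range(max_ticks):
        pending = events.get(tick, [])
        if "button" in pending:
            log.append("handle_button")
        if "ir_byte" in pending:
            log.append("handle_ir_byte")
        if "low_battery" in pending:
            log.append("go_to_sleep")  # battery dead: stop everything
            break
        # nothing pending: keep doing whatever we were doing, or idle
    return log
```

If you're mid-task, the real code just keeps servicing that task between polls; the point is that there's no scheduler, only the loop.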
And, like I said, there are great comments. Even a couple about the round-robin service routine that made me laugh.
Well that would be the answer ...
What's funny to me is the idea that a person can go do something else for 10 or 20 years (C, C++, DotNet, robotics) and barely touch a line of assembly, yet it's all still there waiting. Maybe it's because game development in 6502 asm was at the start of my career and I was so passionate about it, but some things are just ingrained. Even stuff like favoring zero-page memory (the first 256 bytes of memory, $0000 to $00FF) because it can be accessed one cycle faster, and with one fewer instruction byte, than absolute-addressed memory. Later, moving to the 68000 family felt like a high-level language (multiply AND divide opcodes -- you mean I don't have to write a multiply routine?)
But having an asm background has ruined me in some ways. Learning C was straightforward enough, and the jump to C++ felt very natural. However, I find modern languages repulsively inefficient (e.g. Ruby) and struggle with functional programming (Lisp, Elixir). Python feels reminiscent of Applesoft BASIC, so while I have to look up the syntax every time, it still feels normal.
somewhat related, since I'm rambling...
I recently read "Making 8-bit Arcade Games in C" by Steven Hugg and found it brought on a strong dose of nostalgia. It's a really ambitious book that attempts to give a feel for programming various 8-bit game platforms, and it covers some surprisingly advanced techniques alongside basic stuff that schools may not even teach any longer (two's complement, run-length encoding, interrupts, etc.). The book has some rough edges and wanders a bit, but the good parts easily outweigh the defects. It achieves its aim of giving you a feel for those platforms.
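For anyone who hasn't seen it, run-length encoding (one of the basics the book covers) fits in a few lines. This Python sketch is mine, not the book's -- it just stores (count, value) pairs, with runs capped at 255 so each count would fit in a byte:

```python
def rle_encode(data):
    """Encode a sequence of values as (count, value) pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][1] == b and runs[-1][0] < 255:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, b])       # start a new run
    return [(count, value) for count, value in runs]

def rle_decode(pairs):
    """Expand (count, value) pairs back into the original sequence."""
    return [value for count, value in pairs for _ in range(count)]
```

It's a nice fit for 8-bit graphics data, where long runs of identical bytes (blank scanlines, solid fills) are the common case.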
Speaking of game programming -- how about meta-game programming? programming in a game.
I bought a game called TIS-100, and found it to be loads of fun in the same way a mechanic might buy a derelict car and restore it to new condition, or an EE might fiddle around with an old tube radio just for fun. The premise is that an uncle dies suddenly/disappears and leaves a prototype computer behind. Hidden in this computer are some signal processing circuits that hold a secret. You (the player) are tasked with restoring this prototype military computer to unlock the knowledge held within. Very simple, only a handful of opcodes, but it's a challenge and fun in a way I haven't felt since the 80s. Being forced to juggle a variable around on the stack, or between the index registers and the accumulator, while only having a few bytes of storage, requires a different type of thinking. Good stuff.
TIS100 definitely isn't for everyone and the challenge curve gets rather steep but once I "got it" it was one of those stay-up-all-night challenges. https://store.steampowered.com/app/370360/TIS100/
great review: https://blog.codinghorror.com/heres-the-programming-game-you...
It seems assembler makes it a bit more complicated without much benefit, and surely there's a compiler for a chip as common as the 6502.
Back in the 80's and 90's the best compiled languages were probably COBOL and Forth. As far as I know nobody seriously used these for games or applications.
There are more recent C compilers, but the code they generate is multiples away from hand-written assembler in size and speed (5x maybe?).
Because it was cheap: https://groups.google.com/d/msg/comp.lang.forth/LzOasFOyIMg/... (forth not mentioned here; plausibly(?) extrapolating)
Some nearby discussion about 6502 market penetration (I can't determine whether Forth is being discussed here - oh and dodge the cartoon fistfight-cloud wending its way between the posts, happens a lot on this usenet group): https://groups.google.com/d/msg/comp.lang.forth/Yoa9u55cpbo/...
And back when this product was released, optimizing compilers were really poor by comparison with a human for these tasks.
Today you probably wouldn't bother with assembly for all but the most timing critical parts.
It looks like this is basically a 6502 with no Y register and a reduced set of instructions.
Some past discussion of this: http://forum.6502.org/viewtopic.php?f=1&t=2027
Why would you do this? It absolutely can't have made any difference to the cost of the processor in 1998.
The NRE to make the change is at least $100,000 (probably closer to $500,000--but I'm spitballing here). And that is up front cash.
Those changes are really small. So, let's assume a penny saved.
It would take you 10 million chips to make that back.
The total run was 25 million? or so--and nobody expected that.
If the NRE was $500K at a penny saved, you never make it back.
Even at a nickel (which is probably a huge savings), it would again take 10 million chips to make it back.
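The break-even arithmetic above, spelled out (the NRE and per-chip figures are this thread's guesses, not real numbers):

```python
def breakeven_chips(nre_dollars, savings_cents_per_chip):
    """Chips that must ship before the up-front NRE (in dollars) is recouped."""
    return (nre_dollars * 100) // savings_cents_per_chip

# $100K NRE at a penny saved per chip: 10M chips to break even.
# $500K at a penny: 50M chips -- double the ~25M total run, never recouped.
# $500K at a nickel: back down to 10M chips.
```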
Given the oddity that this is, I bet that this was a failed batch of chips that was just enough damaged to be really cheap but not so damaged as to be useless.
Datasheet for the chip they used:
Looks like a fairly generic "sound processor" that was most likely used in tons of toys and other consumer electronics. Hell, I wouldn't be surprised if the Furby line wasn't even their biggest contract.
There is no chance that removing Y and its instructions saved 50% die area on a chip with 80K ROM and 128 bytes of RAM. The "savings" is likely due to this being in a 350nm or 500nm process rather than anything else.
Seeing as this is a cheap Chinese fabless company and these things were Chip-on-Board, they had a failure on the Y register and just datasheeted it into compliance. This is little different from the original 6502 which datasheeted the lack of Rotate Right into compliance on the first versions.
If it's cheap enough, nobody cares.
Of course one could go look at the layout of the (NMOS) 6502, which has been fully reverse engineered at this point, to prove or disprove this.
The Y register is WAY smaller than even I expected. It's a tiny vertical strip on the lower left. And the PLA entries are on top, and a couple entries would again be tiny.
Any savings due to omitting Y would be minuscule.
The more I examine it in detail, the more it becomes clear that it's just a bug that got datasheeted.
It's likely it has been used in all sorts of toys and the like for the last two decades. And it was probably dirt cheap. And probably designed in the early 80s, hence the design decision to go with the 6502 as a base platform.
I thought Sunplus was a bit newer than that (90s), but a quick search reveals that your guess was pretty close:
Founded in 1990.
A prime example is https://en.wikipedia.org/wiki/Intel_80486SX where Intel spent extra money to develop a way of damaging a 486DX so it could be sold as a 486SX at a lower price.
Who knows, maybe the processor had been designed and produced a long time before that, and that made it super cheap to throw on a BOM because it was technology from a few years before.
Also likely to use sliiiightly less power, and be even more responsive to interrupts.
I distinctly remember them being advertised as intelligent little friends who learn about you.
If you turn a Furby upside down, it will start saying that it's scared after a bit. So they do an experiment where they ask kids to hold a Barbie doll, a Furby, and a hamster upside down, to see how long the kid will hold each item upside down before becoming uncomfortable and turning it right side up.
The show goes on to interview the guy who designed Furby (Caleb Chung). He defends the Furby as being alive. He then talks about a dinosaur toy he designed (Pleo); a review site posted a video of the reviewers beating up the dinosaur until it stopped functioning, which left him very uncomfortable. He's now working on designing an animatronic baby doll and is cognizant of how to design it so that it discourages (or at least won't encourage) any type of sociopathic behavior.
This part starts 20 minutes in and is 20 minutes long. The first third of the episode is about the Turing test. The last third of the episode talks about using VR to allow someone to put themselves in another person's body.
I enjoyed the whole episode. Radiolab doesn't publish transcripts, so you'll need to set aside an hour to listen.
Listen on 2x speed and it only takes 30 minutes :)
If you can't handle 2x, start with 1.25x and slowly ramp up.
(Of course if you're playing the audio so quickly that you can't even make out some words, then I'd expect comprehension to go down)
This is why I'm a big fan of transcripts
This is why being a good teacher requires presenting the material not just in a way that suits your preferred learning modality, but in ways that suit all of them.
The biggest problem is that the truly effective mechanisms for learning require very low student to teacher ratios and high engagement with and from the teacher over relatively short often repeated sessions (3 hour lectures, for example, do not qualify).
I probably have whatever "hearing dyslexia" is called. I seem to misunderstand a lot more words than the people around me. I'm the living version of that Benny Hill gag. Or Radner's SNL "violence on TV" skit.
Regardless, I either have to take notes or totally lock my attention onto the speaker (which most people find creepy) to retain much.
The professor should be able to give notes that people can use.
I personally can’t stand 1x. It’s just too damn slow.
For me every time someone pauses I tend to "tune out" of what they are saying. Pause too often and I can't stand listening.
I wonder if there's any research to prove/disprove something like that? Is empathy desensitization a thing?
For example, when considering latency you might say:
The service is experiencing 3 seconds of latency at the 99th percentile with periodic timeouts.
You wouldn't say
Alice wasn't able to buy medicine because the transaction failed.
The upper class spoke French, and when they asked for bœuf, they wanted meat, not a cow. Likewise, when they asked for poulet, they wanted a cooked chicken, not a live one.
The lower classes spoke a Germanic tongue and gave us cū (Kuh in modern German) for a cow and cīcen (Küchlein in modern German) for chicken.
As English grew from both of these roots, the different words remained.
Nevertheless, s0rce is correct on the original linguistic point: that the Norman food-word is "mutton", whereas "lamb" is the Germanic word for the young animal.
Captain Pedantic, checking in: it should be "sheep/mutton" and "lamb/lamb". Lambs are baby sheeps.
But that's another time and another place; I'm kind of curious about when the change came about. Partly because, were that article written thirty years ago I would have argued that it is wrong. But I've since become a vegetarian, and might not have gotten the memo when the change came about. Or maybe I've always been wrong, along with all the other hicks I grew up with. :-)
At this point you might be right, but the origin is about wanting to fit in with the new french overlords.
In regular language, you would call porc the meat that comes from a cochon, and bœuf the meat of a vache.
It's not as clear cut as in English, as a bœuf is also an ox, and you can also call a pig a porc. But I think it indicates that the linguistic distinction between an animal and its meat is indeed, at its core, a dehumanizing process. It might just have been helped by the French-speaking rulers of England, but it still happened in France eventually, more recently and without exterior intervention.
Why? I feel absolutely no emotional difference between "cow meat" and "beef", and would be surprised to hear that it really made a difference to pretty much any non-vegetarian... Further, it seems nobody has felt the need to introduce a different word for 'lamb' - it's still an extremely popular meat despite the word literally meaning 'baby sheep'...
I think it's drawing a really long bow to think it's desensitising language (dehumanisation isn't really the right word, since cows, sheep etc. aren't human). At the end of the day, I think it's just a quirk of language development...
"Can you run out of empathy" by Kati Morton (professional therapist).
Factory farming wouldn't exist if that wasn't a thing.
Yes. It is a well known process and is successfully used to intentionally create new torturers.
For example, in the past, when someone asked you for directions you gave them directions. Now some people are like "don't you have Google?" Same goes for asking for any type of informational help.
Before, many people would cook or help people do things like move or pick them up from the airport. Now they can just do it via a gig economy app so why bother.
So yeah, parents don’t really pay attention to their kids, friends don’t pay attention to their friends, dating has become shallow, people’s attention spans are lower etc.
Why? Because we have new modes of communication with swiping and LOL and we don’t actually want to hear long heartfelt explanations when the shorthand takes far less time so we can fit in more interactions in a single day.
The worst I received when asking for directions was a brusque "I don't know" (like, just a couple of times in my whole life). If I ask friends for help, they help. We do dinners together at home and last time I moved, a little army materialized.
If you were looking for reasons to shit on the gig economy (and I would have nothing against it) you could find really better reasons.
Most of the people in my building under 60 don't know their upstairs neighbors, or even the ones on the same floor. The streets, once filled with people, are mostly empty. People rarely write letters or postcards and visit less often than before. Dating communication is texting instead of voice (calls take too long when you're multitasking). Heck, even with birthday wishes -- once-thoughtful calls have been replaced with Facebook posts where you don't even look at the person's wall.
If you really think it’s just me, compare photos of the 50s to now. Also here are studies and statistics:
Genuinely curious, what am I supposed to take away from that comparison? I mean, I see a lot of oversized gas-guzzling cars and people with ugly sunglasses. In both sets of photos.
There's a difference between asking for help because you are having problems and asking for help because you are lazy. There's a spectrum along this, but if you ask too much of a random person, just for your own benefit, they will feel rightly taken advantage of. You probably wouldn't solicit a random passerby to help you move all your furniture in or out of a building, but you might quickly call for help from anyone around if the specific item you are moving is about to be dropped. This is all a roundabout way of saying people are generally happy to help if it costs little compared to how much it helps.
As for directions: in the past, knowing how to get somewhere was hard without prior knowledge, and helping cost the helper relatively little compared to the benefit to the person asking. Today, stopping to give directions that are less accurate than what can be found on any number of services most people carry can easily be seen as inconveniencing someone for little or no reason. That said, I've had good experiences by also explaining why I need directions while asking. Suffixing your request with something like "I have directions but they aren't making any sense to me" or "My phone died" seems to make all the difference.
Give GPS an address and it knows what all the street names are.
I suspect it will be a futile effort because:
A, the person will know it is not alive.
B, stuff like this is done specifically to get people to click the video.
Frankly more and more the web is resembling a massive schoolyard, with people egging each other on with dares and promises of rewards if they do some kind of icky or potentially dangerous task.
One game I have been following recently released an update that allows Twitch viewers to vote on various modifiers to the gameplay, more often than not to the detriment of the streamer.
There also seems to be a rise of "hard" games that exist specifically for streamers and their audiences, so that, much like the audience at a gladiator match, they get to jeer and name-call whenever the streamer slips up and the player character gets mauled by some game mechanic.
Seems like this could also backfire and discourage taking a computer feigning distress seriously--after all, it is a sort of dishonesty about the nature of the toy, which presumably isn't harmed by turning it upside down.
The kids tormented one robot and not the other because doing so with one was fun and with the other it was not. They were playing. It's what children do. It doesn't reveal any insight into the human mind.
> The kids tormented one robot and not the other because doing so with one was fun and with the other it was not.
Why was it fun with one, not the other? Why do we take to certain forms of play? How do toys encourage these forms of play? This is extremely insightful as child’s play is partially the process of practising adult behaviours.
And then it takes a second to realize... wait... that's... that's just a robot. But our brains make that association because it looks human enough.
So watching that Pleo video: even though I rationally understood what I was feeling, I still felt bad for the dino, and I'm glad I felt it. That was just cruel and disturbing. I know I'm feeling bad for a toy, but our brains evolved to have those types of empathic responses to identify when other animals or humans are in pain -- pain similar to what we can experience ourselves.
It's a pretty interesting trait, and it's not purely nurture (vs nature), because we see similar sounds and empathic reactions in many other mammals.
There are quite a few human beings who don't feel that; who can watch animals or people in pain and not feel anything, or have their sounds/cries trigger no neurological response. Those people usually grow up to run banks, or become CEOs of multinational conglomerates.
Got a citation? I imagine most people like this would end up in prison and/or socially marginalized. Being incapable of empathy is basically a disability.
When robots become self-aware and ingest the video of us kicking the Boston Dynamics robots and screwing with them with brooms and all that... that's not gonna be good.
This is where it came from, in case anyone (including me) was wondering --- there are a handful of articles online claiming it was "leaked", but it doesn't appear to be so.
It is probably not coincidental that this marks the 20-year expiry of that patent.
Patents can have very interesting things in them, including source code. I can't find the reference now but I remember reading that a famous early calculator (TI? HP?) was reverse-engineered and emulated down to the transistor level because of the detailed chip layout and source code from a patent.
I contacted him, and he scanned and published it within a few days.
 - https://hackaday.com/2015/11/24/building-the-infinite-matrix...
"Furby is an American electronic robotic toy released in 1998 by Tiger Electronics. It resembles a hamster or owl-like creature and went through a period of being a "must-have" toy following its holiday season launch, with continual sales until 2000. Over 40 million Furbies were sold during the three years of its original production, with 1.8 million sold in 1998, and 14 million in 1999. Its speaking capabilities were translated into 24 languages."
Need it to sneeze when the build fails.
And the NES is a real 6502. They just made a couple very simple patches to the metal layer. Didn't want to pay for the patents on the BCD instructions AFAIR.
"The datasheet describes it as a 6502 instruction set, with an X register but no Y, and just 69 "instructions" (presumably opcodes, of which the 6502 has 151). It has a banked ROM architecture and just 128 bytes of RAM at $80 through $FF which "includes stack" - so page one is folded onto page zero presumably."
FWIW though, it doesn't look like page one is an alias of page zero; it appears to be unmapped instead. Losing the separate stack page isn't the biggest deal in the world, given that this is probably a totally different mask set than a regular 6502. I wonder if you can underrun the stack into the HW registers, or if their stack pointer is only 7 bits?
Same here :(
> BURP ATTACK, SAY NAME, TWINKLE SONG, and ROOSTER LOVES YOU
They are well-known and documented almost everywhere.
The interesting thing the code does reveal, though, is that there were only 15 possible names, with this being extended to 24 later.