Original Source code for the Furby [pdf] (seanriddle.com)
533 points by rahimiali 4 months ago | 152 comments



Surprisingly sophisticated. I didn't remember them being much more than annoying when my kids "had to have one."

Hey, I recognize those opcodes. 6502 (Commodore, Apple II, etc.). Cool! I had no idea what was running these things.

Time Slice Task Master (aka "cooperative multitasking without preemption")

Interesting that they set the PWM (pulse width modulation) based on battery voltage so as not to stress the motors. Cheaper than some caps and resistors for a voltage divider from a higher-voltage source, or a boost/buck controller. Still necessary for the CPU, though. If I remember correctly, this thing ran off four 1.5V cells (6V). Yay, software!
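
Something like this, I'd guess -- register names entirely made up here, I haven't checked the real chip's I/O map:

    BATTLVL EQU $3E          ; hypothetical battery-level reading
    PWMDUTY EQU $3F          ; hypothetical motor PWM duty register

            LDA BATTLVL      ; read the battery level (0-255)
            LSR A            ; /4 to get a 0-63 table index
            LSR A
            TAX
            LDA DUTYTAB,X    ; 64-byte table: more duty as the cells sag
            STA PWMDUTY      ; so the motors feel about the same speed

Not their actual code, just the idea: measure the supply, look up a compensating duty cycle -- cheaper than regulating the motor rail in hardware.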

Clever use of the tilt sensor to seed the pseudo-random number generator. (page A22)
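
Probably nothing fancier than folding whatever the switch happens to read into the state (port name made up):

    TILTSW  EQU $21          ; hypothetical tilt-switch input port
    SEED    EQU $90          ; zero-page PRNG state byte

            LDA TILTSW       ; whatever the switch reads at this instant
            EOR SEED         ; fold it into the existing state
            STA SEED         ; free entropy, no extra hardware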

Too bad the diagnostic file isn't included (diag7.asm) per page A21.

(A33) That cycle-counting timer loop brings back memories, too -- throwing in a few NOP (no-operation) instructions just to eat a cycle and get the timing right on a horizontal/vertical retrace -- while here they're doing it so the service intervals on sound and motor control come out right.
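
For anyone who never had the pleasure, a 6502 delay loop is just arithmetic on the published cycle counts, roughly:

            LDX #20          ; 2 cycles
    WAIT    NOP              ; 2 cycles
            NOP              ; 2 cycles
            DEX              ; 2 cycles
            BNE WAIT         ; 3 cycles while looping, 2 on the last pass

Nine cycles per pass, so you tune the constant (and sprinkle in NOPs) until the total lands exactly on the interval you need.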

(A36) LOL @ "Rap mode"

A39 - the various sensor sequences needed to trigger the modes

I gave up at about 45 pages, but there is obviously some neat stuff here.


Hmmm, I'm always really surprised when people comment on threads like this.

How much time did you take to do this? I can read code, but I simply can't casually look at code like this and say "hey, that's cool, that's interesting." I have to concentrate and go line by line, in more of a debugging state of mind.

There are programmers that can sort of skim read code which just amazes me.


Maybe a few seconds per page? Truly, it's really not that much of an accomplishment -- the comments are superb. I spent 10 years writing games in 6502 assembly, so it feels pretty natural to me.

Keep in mind that a lot of this kind of code follows a pattern: lots of EQU ("equate") statements that bind a name to an address like a port or a signal line. Then you'll have the setup code that gets everything into a basic state where it's ready to run the next part.

The big loop: here's where the action's at. Just spin in a big loop, waking up every few hundred cycles or so to check whether a button is pressed, or the IR chip is sending a byte, or the battery is dead, etc. If you're already doing something, keep doing it until you're done. If you're done, wait around for a while or go do something else.

At least, that's what I come in expecting. So when I see code like this that basically fits that model, parsing it is simple.
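
In skeleton form -- every name below is made up, this is just the shape I mean, not Furby's actual structure:

    BUTTON  EQU $20          ; sensor input port
    IRDATA  EQU $21          ; IR receiver data register

    RESET   SEI              ; setup: mask interrupts, set the stack, init I/O
            LDX #$FF
            TXS
            JSR INITIO
            CLI

    MAIN    JSR CHKBTN       ; anything pressed?
            JSR CHKIR        ; byte coming in over IR?
            JSR CHKBATT      ; battery still healthy?
            JSR RUNTASK      ; keep doing whatever we were doing
            JMP MAIN         ; ...forever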

And, like I said, there are great comments. Even a couple about the round-robin service routine that made me laugh.


" I spent 10 years writing games in 6502 assembly"

Well that would be the answer ...


ha! Yeah, I guess that's the secret -- just do something for 10,000 hours and it comes naturally. But I still remember the learning process -- "X and Y index, what's that good for? And why do I need an accumulator? Stack? Flags? Rotate?" Once I put that first pixel on the screen and made it move, though, I was hooked.

What's funny to me is the idea that a person can go do something else for 10 or 20 years (C, C++, DotNet, robotics) and barely touch a line of assembly, yet it's all still there waiting. Maybe it's because game development in 6502 asm was at the start of my career and I was so passionate about it, but some things are just ingrained. Even stuff like favoring zero-page memory (the first 256 bytes of memory, $0000 to $00FF), because it can be accessed with one fewer byte and one fewer cycle than absolute or indexed addressing. Later, moving to the 68000 family felt like a high-level language (multiply AND divide opcodes -- you mean I don't have to write a multiply routine?).
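
For the non-6502 crowd, the zero-page habit boils down to this:

            LDA $0490        ; absolute address: 3 bytes, 4 cycles
            LDA $90          ; zero page:        2 bytes, 3 cycles

One byte and one cycle per access doesn't sound like much until it's in your innermost loop.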

But having an asm background has ruined me in some ways. Learning C was straightforward enough, and the jump to C++ felt very natural. However, I find modern languages repulsively inefficient (e.g. Ruby) and struggle with functional programming (Lisp, Elixir). Python feels reminiscent of Applesoft BASIC, so while I have to look up the syntax every time, it still feels normal.

somewhat related, since I'm rambling...

I recently read "Making 8-bit Arcade Games in C" by Steven Hugg and found it brought on a strong dose of nostalgia. It's a really ambitious book that attempts to give a feel for programming various 8-bit game platforms, and covers some surprisingly advanced techniques as well as basic stuff that schools may not even teach any longer (two's complement, run-length encoding, interrupts, etc.). The book has some rough edges and wanders a bit, but the good parts easily outweigh the defects, and it achieves its aim of giving you a feel for those old platforms.

Speaking of game programming -- how about meta-game programming? programming in a game.

I bought a game called TIS-100 and found it to be loads of fun, in the same way a mechanic might buy a derelict car and restore it to new condition, or an EE might fiddle around with an old tube radio just for fun. The premise is that an uncle dies suddenly/disappears and leaves a prototype computer behind. Hidden in this computer are some signal processing circuits that hold a secret. You (the player) are tasked with restoring this prototype military computer to unlock the knowledge held within. Very simple, only a handful of opcodes, but it's a challenge and fun in a way I haven't felt since the 80s. Being forced to juggle a variable around on the stack or between the index registers and the accumulator, with only a few bytes of storage, requires a different type of thinking. Good stuff.

TIS100 definitely isn't for everyone and the challenge curve gets rather steep but once I "got it" it was one of those stay-up-all-night challenges. https://store.steampowered.com/app/370360/TIS100/

great review: https://blog.codinghorror.com/heres-the-programming-game-you...


Perhaps this is a naive question, but why would something like this be written in assembler, and not C or similar?

It seems assembler makes it a bit more complicated without much benefit, and surely there's a compiler for a chip as common as the 6502.


No higher-level language was ever competitive with assembler on the 6502. The stack model of C is not a good fit, as the 6502's hardware stack gives no support for reading or writing stack-based variables. Good performance depends on effective use of the zero page -- the first 256 bytes of memory, which have special addressing modes and allow more efficient access. Registers are few and quirky: an accumulator, two index registers that aren't quite interchangeable, and a one-byte stack pointer. All operations are 8-bit; 16-bit calculations, such as pointer math, are costly in both code size and speed.
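
To make the pointer point concrete: just walking a 16-bit pointer through memory looks something like this (PTR being two zero-page bytes):

    PTR     EQU $80          ; 16-bit pointer kept in zero page

            LDY #0
            LDA (PTR),Y      ; fetch the byte the pointer addresses
            INC PTR          ; bump the low byte...
            BNE DONE
            INC PTR+1        ; ...and the high byte only on wrap-around
    DONE    RTS

Every dereference is an addressing-mode trick through zero page and every increment is a little dance, and a compiler has to emit all of that for the most innocent-looking pointer arithmetic in C.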

Back in the 80s and 90s the best compiled languages were probably COMAL and Forth. As far as I know nobody seriously used these for games or applications.

There are more recent C compilers, but the generated code is several times larger and slower (5x, maybe?) than hand-written assembler.


Well, Mattel used Forth: https://groups.google.com/d/msg/comp.lang.forth/omi6PUvymEE/...

Because it was cheap: https://groups.google.com/d/msg/comp.lang.forth/LzOasFOyIMg/... (forth not mentioned here; plausibly(?) extrapolating)

Some nearby discussion about 6502 market penetration (I can't determine whether Forth is being discussed here - oh and dodge the cartoon fistfight-cloud wending its way between the posts, happens a lot on this usenet group): https://groups.google.com/d/msg/comp.lang.forth/Yoa9u55cpbo/...


For those into modern (!) medium-level 6502 programming, PLASMA [0] may be better (syntax-wise) than Forth and better (size and performance-wise) than C.

[0] https://github.com/dschmenk/PLASMA



Ha! Yes, COMAL. Thanks.


Because the cheapest chips have very limited code and memory space.

And back when this product was released, optimizing compilers were really poor by comparison with a human for these tasks.

Today you probably wouldn't bother with assembly for all but the most timing critical parts.


From my reading of the code, this microcontroller has 128 bytes of RAM.


Compilers just didn't generate code as good as hand-written assembler. Really, for the 6502, 6809, and space-constrained 68000s, assembler was fun and just worked. Have a decent macro assembler and some discipline, and assembler was pretty easy and understandable. Heck, I've seen C code that was a whole lot less clear or understandable.


You'd be surprised how much overhead a compiler adds even for something like C. Also you can do things in pure assembly that you just can't do in C (w/o inlining assembly haha).


Well written code in any high level language is basically Chinese to most people.


*low level


I can skim a lot of things, but they're all a lot higher than assembly.


Western Design Center still apparently sells millions of 6502 cores each year. Especially for devices like this. So it's quite possible this was properly licensed and WDC provided 6502 IP.

It looks like this is basically a 6502 with no Y register and a reduced set of instructions.

Some past discussion of this: http://forum.6502.org/viewtopic.php?f=1&t=2027


> It looks like this is basically a 6502 with no Y register and a reduced set of instructions.

Why would you do this? It absolutely can't have made any difference to the cost of the processor in 1998.


It absolutely did make a difference to the cost. Probably only a few cents, but definitely a difference. Die area is money. Microcontrollers are extremely cost-optimized and people will buy exactly as much capability as they need which is why you can still buy things like the PIC10F200 with 16 bytes of RAM (Furby has 128 bytes).


Removing Y and the instructions AND taking advantage of the size difference means you have to do new layout.

The NRE to make the change is at least $100,000 (probably closer to $500,000--but I'm spitballing here). And that is up front cash.

Those changes are really small. So, let's assume a penny saved.

It would take you 10 million chips to make that back.

The total run was 25 million? or so--and nobody expected that.

If the NRE was $500K at a penny saved, you never make it back.

Even at a nickel (which is probably a huge savings), again it would take 10 million chips to make it back.

Given the oddity that this is, I bet that this was a failed batch of chips that was just enough damaged to be really cheap but not so damaged as to be useless.


It was a custom 6502-like chip, made by a third party. And they already had to have a custom mask set beyond a stock 6502 because it integrated a bunch of peripherals. Also, a back-of-the-napkin look at their changes suggests it was half to two thirds the die size, so a pretty good win at those volumes.

Datasheet for the chip they used:

http://www.ic72.com/pdf_file/-/428557.pdf

Looks like a fairly generic "sound processor" that was most likely used in tons of toys and other consumer electronics. Hell, I wouldn't be surprised if the Furby line wasn't even their biggest contract.


> Also, a back of the napkin look at their changes probably means it was half to two thirds the die size, so a pretty good win at those volumes.

There is no chance that removing Y and its instructions saved 50% die area on a chip with 80K ROM and 128 bytes of RAM. The "savings" is likely due to this being in a 350nm or 500nm process rather than anything else.

Seeing as this is a cheap Chinese fabless company and these things were Chip-on-Board, they had a failure on the Y register and just datasheeted it into compliance. This is little different from the original 6502 which datasheeted the lack of Rotate Right into compliance on the first versions.

If it's cheap enough, nobody cares.


Maybe, but removing Y also means removing the instructions which do indirect addressing through it, which removes all the decoding logic for them. So it could very well save a bunch of die space.

Of course one could go look at the layout of the (NMOS) 6502, which has been fully reverse engineered at this point, to prove or disprove this.


Here's a picture with some labels: http://breaknes.com/files/6502/6502.jpg

The Y register is WAY smaller than even I expected. It's a tiny vertical strip on the lower left. And the PLA entries are on top, and a couple entries would again be tiny.

Any savings from omitting Y would be minuscule.

The more I examine it in detail, the more it becomes clear that it's just a bug that got datasheeted.


Removing complexity (gates) means yield goes up. My impression from being on the periphery of developing and respinning custom silicon is that NRE is not $500k even for far more advanced devices (i.e. multiple SPARC cores + DSP + application-specific logic).


There's no reason to assume that chip was used only in the Furby.

It's likely it has been used in all sorts of toys and the like for the last two decades. And it was probably dirt cheap. And probably designed in the early 80s, hence the design decision to go with the 6502 as a base platform.


And probably designed in the early 80s

I thought Sunplus was a bit newer than that (90s), but a quick search reveals that your guess was pretty close:

https://www.sunplus.com/about/milestones.asp

Founded in 1990.


https://en.wikipedia.org/wiki/Market_segmentation

A prime example is https://en.wikipedia.org/wiki/Intel_80486SX where Intel spent extra money to develop a way of damaging a 486DX so it could be sold as a 486SX at a lower price.


From a comment below, it seems some instructions (BCD related) were patent encumbered, so removing them means less royalties per chip and possibly a big saving, especially since you probably don't need binary coded decimal in a toy like this.


Well, I'm not a hardware engineer, but...

Who knows maybe the processor had been designed and produced a long time before that, and that made it super super cheap to throw on a BOM because it was technology from a few years before.

Also likely to use sliiiightly less power, and be even more responsive to interrupts.


> Surprisingly sophisticated.

I distinctly remember them being advertised as intelligent little friends who learn about you.


Like Google and Facebook?


Your own, personal, surveillance capitalism golem


They didn't have a network connection, did they? So they're much more your own personal surveillance golem than anything put out these days by Mattel (or anyone else selling internet-connected devices) which are all very much someone else's surveillance golem.


They have an IR sensor and emitter. That's a network, after a fashion.


Well yes, but it's missing the giant cloud backend that will record every interaction and the tangle of Big Data, adtech and other analytics companies on the other side.


The feds did ban Furbies from secure facilities.


I'd like to meet the agent trying to bring his Furby to work.


Rest assured that this was creepy, too. Albeit for moderately different reasons.


Likewise here, one of the factoids that got tossed around pretty regularly was that all the physical actions were controlled by a single motor, with sophisticated mechanisms and coding to make it look independent.


It was absolutely the draw, but I can't say I figured them to be brilliant


A-116 Diag7.asm. Really if you got to page 45, you got through almost half. A-126 and on is just data tables.


The z80 is also still in use as a microcontroller of sorts.


6502 is also what the Terminator ran. We should thank our time traveler saviors for stopping the Furby uprising from happening.


I think we can assume that, given the complexity of a Terminator, the 6502 was just there for I/O, kind of like a BBC Micro with an ARM processor.


Or a Mac IIfx... "Wicked Fast".


"Kah bye-bye oo-bah koh-koh."


Tamagotchi also ran on the 6502.


Indeed! And there have been some truly heroic hacks of the little buggers!

https://books.google.com/books?id=lvMxDwAAQBAJ&pg=PT248&lpg=...


I attended a talk that Dave Hampton gave on the creation of the Furby; it was a lot of fun. For such a "simple" toy there was a lot going on. I picked up a couple of Furbies and did the 'make them talk' hack (basically inserting an SBC with Ethernet into their bodies so that you could feed their audio and animatronic circuits with the output of a text-to-speech converter). For a while I had one reading out Nagios alerts, and yes, that is as creepy as it sounds.


Furby was recently discussed in the middle third of this Radiolab episode:

https://www.wnycstudios.org/story/more-or-less-human

If you turn a Furby upside down, it will start saying that it's scared after a bit. So they do an experiment where they ask kids to hold a barbie doll, a Furby, and a hamster upside down to see how long the kid will hold the item upside down before the kid becomes uncomfortable and turns it right side up.

The show goes on to interview the guy who designed Furby (Caleb Chung). He defends the Furby as being alive. He then talks about a dinosaur toy he designed (Pleo); a review site posted a video of reviewers beating the dinosaur until it stopped functioning[0]. That left him very uncomfortable. He's now working on designing an animatronic baby doll and is cognizant of how to design it so that it discourages (or at least won't encourage) any type of sociopathic behavior.

This part starts 20 minutes in and is 20 minutes long. The first third of the episode is about the Turing test. The last third of the episode talks about using VR to allow someone to put themselves in another person's body.

I enjoyed the whole episode. Radiolab doesn't publish transcripts, so you'll need to make an hour to listen.

[0] https://www.youtube.com/watch?v=pQUCd4SbgM0


> you'll need to make an hour to listen.

Listen on 2x speed and it only takes 30 minutes :)

If you can't handle 2x, start with 1.25x and slowly ramp up.


I've flirted with this method off and on, but the consensus between me and the people I've spoken to who do the same is that a lack of retention comes along with the speed. You get the feeling of learning something and the pleasure associated with that, but none of it sticks. Of course this won't hold true across the board, but it's something to factor in before adopting this method.


I feel like I retain information more easily when listening to audio on 2x. Listening to audio is significantly slower than reading, and often by the time the speakers get to describing the results, a lot of time has passed from when they described the context. Shortening that time gap is helpful for retaining the information for me.

(Of course if you're playing the audio so quickly that you can't even make out some words, then I'd expect comprehension to go down)


I like to 2x and then use the extra time for reflection and review. I find that on 1x I zone out because I'm not engaging with the person speaking like I would face to face. On 2x, I have to pay attention or I'll miss something.


I find retention of audio at any speed is lousy.

This is why I'm a big fan of transcripts


To be fair, this is very person-specific, and this distinction of preferred learning methods is a common topic in pedagogy. Some folks, myself included, are auditory learners, and will tend to retain more from a spoken lecture than a reading assignment. Some folks just can't retain audio as well as reading something, or need to take notes to retain (even if the notes are not consulted again).

This is why being a good teacher requires not just presenting the data in a good way for your learning modality of choice, but for all of them.


"Learning styles" has actually been debunked repeatedly.

The biggest problem is that the truly effective mechanisms for learning require very low student to teacher ratios and high engagement with and from the teacher over relatively short often repeated sessions (3 hour lectures, for example, do not qualify).


I'm pretty sure there are personal styles, which might be undiagnosed cognitive issues.

I probably have whatever "hearing dyslexia" is called. I seem to misunderstand a lot more words than the people around me. I'm the living version of that Benny Hill gag. Or Radner's SNL "violence on TV" skit.

Regardless, I either have to take notes or totally lock my attention onto the speaker (which most people find creepy) to retain much.


Seconded. Audio for learning is like getting a fixed dataset in a linked list, and text/transcript is like getting it in a vector. There's zero benefit from the former, and by losing efficient random access, you can no longer skim/skip/search for things.


Me 2. That's why I hate lectures.

The professor should be able to give notes that people can use.


Apparently we are in the 1%. As in, only 1% of podcast listeners listen at 2x or greater.

I personally can’t stand 1x. It’s just too damn slow.


I'm a big fan. Nearly all my limited YouTube consumption is at 2x. Any audio book I am re-listening to is somewhere between 2 and 3x. However, first pass complex stuff I still prefer around 1x.


it defeats the purpose a little, but it'd be nice to set the spoken speed faster while leaving normal length pauses


I'd personally like exactly the opposite. Get rid of all the pauses, but speak normal speed.

For me every time someone pauses I tend to "tune out" of what they are saying. Pause too often and I can't stand listening.


This is a feature of the iOS podcast player Overcast. Marco Arment calls it SmartSpeed. It's also neat that the app tells you how much time this feature has saved you.


This is precisely the idea of Farnsworth Spacing (used for ham CW): http://www.arrl.org/files/file/Technology/x9004008.pdf


Good idea for an audiobook app/other audio listener program setting. Pretty easy to detect periods of low volume and reduce the speed.


Here's a negative path that I hope doesn't occur - we build toys that are more lifelike, communicative and designed to invoke empathy, and some children will just become desensitized to empathy triggers.

I wonder if there's any research to prove/disprove something like that? Is empathy desensitization a thing?


Military forces around the world have figured it out - they have a common dehumanizing language to avoid empathy triggers.


Everyone uses the milder version of dehumanizing language, reduction to abstraction [1], because it's more efficient to consider groups of people and it removes emotions from decisions.

For example, when considering latency you might say:

The service is experiencing 3 seconds of latency at the 99th percentile with periodic timeouts.

You wouldn't say

Alice wasn't able to buy medicine because the transaction failed.

[1] https://en.wikipedia.org/wiki/Dehumanization


Since the 11th century Norman conquest we've had pig/pork, cow/beef, sheep/lamb


As I understand it, this has to do with who was asking for a thing and what they wanted.

The upper class spoke French and when they asked for beouf, they wanted meat, not a cow. Likewise when they asked for poulet, they wanted a cooked chicken, not a live one.

The lower classes spoke Old English, a Germanic tongue, which gave us cū (Kuh in modern German) for a cow and cīcen (Küchlein in modern German) for chicken.

As English grew from both of these roots, the different words remained.


Don't let Shia fool you, it's bœuf.


Almost: Küken means chick. Küchlein would be the diminutive of Kuchen (cake) so a small cake or sweet pastry.


Minor nitpick, at least in North America sheep meat is mutton. Lamb is the word for a young sheep and also the meat from the animal.


I thought this too for the longest time - turns out it's actually not true (assuming Wikipedia is to be believed). There aren't any age restrictions on what can be sold as 'lamb', so most American 'lamb' is what the British would call mutton.


Thanks for this! I've been wondering for many years why "lamb" in the US is so tough. Wikipedia citation here: https://en.wikipedia.org/wiki/Lamb_and_mutton#United_States

Nevertheless, s0rce is correct on the original linguistic point: that the Norman food-word is "mutton", whereas "lamb" is the Germanic word for the young animal.


interesting, thanks for informing me! I checked wikipedia before commenting but I didn't read far enough, didn't know there was no legislation on lamb age. I wonder how old the "lamb" in the grocery store is? Might explain why there is so much variation in flavor. I can't remember if the lamb was "younger" tasting when I lived in Canada.


sheep/lamb

Captain Pedantic, checking in: it should be "sheep/mutton" and "lamb/lamb". Lambs are baby sheep.



I, indeed, did find it interesting and to which I respond, "'dafuq?" I grew up in the U. S., and growing up actually raised sheep. And when you are served adult sheep, you are served "mutton". IOW, the term "mutton" was not "uncommon in the United States" in my part of the country at the time I was growing up.

But that's another time and another place; I'm kind of curious about when the change came about. Partly because, were that article written thirty years ago I would have argued that it is wrong. But I've since become a vegetarian, and might not have gotten the memo when the change came about. Or maybe I've always been wrong, along with all the other hicks I grew up with. :-)


Interesting. I don't think I've ever heard an American use "mutton" or seen it on an American restaurant menu.


This has more to do with classist beliefs than it does with dehumanization. The origin of those words relates to the aristocracy using the French/Latin forms, while the poor farmers used the original English/Germanic forms. Notice what the French words are for a pig or oxen/steer.

At this point you might be right, but the origin is about wanting to fit in with the new french overlords.


For some reason, in the French language itself the same distinction eventually appeared.

In regular language, you would call porc the meat that comes from a cochon, and bœuf the meat of a vache.

It's not as clear cut as in English, since a bœuf is also an ox and you can also call a pig a porc, but I think it indicates that the linguistic distinction between an animal and its meat is indeed, at its core, a dehumanizing process. It might just have been helped along by the French-speaking rulers of England, but it still happened in France eventually, more recently and without outside intervention.


"Dehumanizing" isn't the proper word here, since we're literally talking about non-human entities. If you want to talk about inter-species empathy, it should be pointed out that most predatory species aren't relating to their prey animals (they typically go after the weakest prey; young, elderly or sick animals which humans would likely empathize with). The odd behavior is that we ascribe human traits to things we plan on killing and eating which aren't objectively demonstrating them.


You're right. For example, there's a similar distinction in Russian.


"but I think it indicates that the linguistic distinction between an animal and its meat is indeed at its core a dehumanizing process"

Why? I feel absolutely no emotional difference between "cow meat" and "beef", and would be surprised to hear that it really made a difference to pretty much any non-vegetarian... Further, it seems nobody has felt the need to introduce a different word for 'lamb' - it's still an extremely popular meat despite the word literally meaning 'baby sheep'...

I think it's drawing a really long bow to think it's desensitising language (dehumanisation isn't really the right word, since cows, sheep etc. aren't human). At the end of the day, I think it's just a quirk of language development...


Not specific to robots, but I found your question very interesting to compare with this video, which is specifically about managing empathy and its dangers.

https://www.youtube.com/watch?v=8wlwDdzlpuk

"Can you run out of empathy" by Kati Morton (professional therapist).


Actually, studies have been done and shown that isn't the case.

https://www.theverge.com/2018/8/2/17642868/robots-turn-off-b...


> Is empathy desensitization a thing?

Factory farming wouldn't exist if that wasn't a thing.


Is empathy desensitization a thing?

Yes. It is a well known process and is successfully used to intentionally create new torturers.


so basically Westworld?


Being able to say "Freeze all motor functions" to a furby-class toy as an industry standardized voice instruction could be handy in the future.


Yes the uncanny valley can cause that already.

For example, in the past, when someone asked you for directions you gave it to them. Now some people are like “don’t you have google?” Same goes for asking for any type of informational help.

Before, many people would cook or help people do things like move or pick them up from the airport. Now they can just do it via a gig economy app so why bother.

So yeah, parents don’t really pay attention to their kids, friends don’t pay attention to their friends, dating has become shallow, people’s attention spans are lower etc.

Why? Because we have new modes of communication with swiping and LOL and we don’t actually want to hear long heartfelt explanations when the shorthand takes far less time so we can fit in more interactions in a single day.

Example: https://www.lifehack.org/299404/

https://www.psypost.org/2018/05/smartphones-can-prevent-pare...

http://www.wired.com/2014/02/outsourcing-humanity-apps/


I don't know how to say this without seeming offensive, but you seem to have really shitty people around you if this is your experience.

The worst I received when asking for directions was a brusque "I don't know" (like, just a couple of times in my whole life). If I ask friends for help, they help. We do dinners together at home and last time I moved, a little army materialized.

If you were looking for reasons to shit on the gig economy (and I would have nothing against that), you could find much better reasons.


Literally as I was about to reply to you, the doorbell rang and the Uber Eats person dropped off the stuff. They said “thank you” and that was the extent of our interaction.

Most of the people in my building under 60 don't know their upstairs neighbors, or even the people on the same floor. The streets, once filled with people, are mostly empty. People rarely write letters or postcards and visit less often than before. Dating communication is texting instead of voice (takes too long when you're multitasking). Heck, even with birthday wishes -- once-thoughtful calls have been replaced with Facebook posts where you don't even look at the person's wall.

If you really think it’s just me, compare photos of the 50s to now. Also here are studies and statistics:

https://en.m.wikipedia.org/wiki/Bowling_Alone

http://www.wired.com/2014/02/outsourcing-humanity-apps/

https://www.theatlantic.com/magazine/archive/2017/09/has-the...

https://theconversation.com/with-teen-mental-health-deterior...

https://www.npr.org/sections/health-shots/2018/05/01/6065885...

http://www.wsj.com/articles/to-beat-the-blues-visits-must-be...


I have heard people complain that everyone is glued to their phone these days while riding public transit. But humans are easily bored and want something to do. Here's a photo of a train full of people reading newspapers, what's the difference?

https://www.raconteur.net/wp-content/uploads/2017/02/Digital...


> compare photos of the 50s to now

Genuinely curious, what am I supposed to take away from that comparison? I mean, I see a lot of oversized gas-guzzling cars and people with ugly sunglasses. In both sets of photos.


> For example, in the past, when someone asked you for directions you gave it to them. Now some people are like “don’t you have google?” Same goes for asking for any type of informational help.

There's a difference between asking for help because you are having problems and asking for help because you are lazy. There's a spectrum along this, but if you ask too much of a random person, just for your own benefit, they will feel rightly taken advantage of. You probably wouldn't solicit a random passerby to help you move all your furniture in or out of a building, but you might quickly call for help from anyone around if the specific item you are moving is about to be dropped. This is all a roundabout way of saying people are generally happy to help if it costs little compared to how much it helps.

As for directions: in the past, knowing how to get somewhere was hard without prior knowledge, and providing help cost relatively little to the person helping compared to the help gained by the other. Stopping to give someone directions that are less accurate than what can be found on any number of services most people have access to can easily be seen as inconveniencing someone for little or no reason. That said, I've had good experiences by also explaining why I need directions from someone while asking. Suffixing your request with something like "I have directions but they aren't making any sense to me" or "My phone died" seems to make all the difference.


Because google can help them better than I can. I don't know the street names a lot of the time. I know that turn at the white house by the big oak tree makes for horrible directions but I don't recall what the street name is.

Give GPS an address and it knows what all the street names are.


> He's now working on designing an animatronic baby doll and is cognizant about how to design it so that it discourages (or at least won't encourage) any type of sociopathic behavior.

I suspect it will be a futile effort because:

A, the person will know it is not alive.

B, stuff like this is done specifically to get people to click the video.

Frankly more and more the web is resembling a massive schoolyard, with people egging each other on with dares and promises of rewards if they do some kind of icky or potentially dangerous task.

One game I have been following recently released an update that allows Twitch viewers to vote on various modifiers to the gameplay, more often than not to the detriment of the streamer.

There also seems to be a rise of "hard" games that exist specifically for streamers and their audience, so that, much like the audience of a gladiator match, they get to jeer and name-call whenever the streamer slips up and the player character gets mauled by some game mechanic.


> is cognizant about how to design it so that it discourages (or at least won't encourage) any type of sociopathic behavior.

Seems like this could also backfire and discourage taking a computer feigning distress seriously--after all, it is a sort of dishonesty about the nature of the toy, which presumably isn't harmed by turning it upside down.


In the interview, Caleb is still not sure what exactly to do. It sounds like he doesn't take the design lightly, especially since (he thinks) these toys might shape the way people might later interact with actual human babies.


Some interesting tidbits of toy design in the interview, e.g. "never show the white of the eye above the pupil... it emotes shock/unpleasant surprise", and if you get doll emotions wrong it becomes Chucky. LOL


I’ve always thought Caleb Chung was being pretty hypocritical in this episode. He not only programmed the pain-response behavior, but also must have thoroughly tested it, meaning he or someone at his company must have done a more scientific version of that video.


I was going to mention this too. It was a great episode that gave me a lot to think about (I have kids)


I’ve seen this mentioned in a few places. I feel like it’s becoming the new Stanford Prison Experiment: a completely contrived “experiment” that’s bogus science but becomes popular because it reinforces a popular belief.

The kids tormented one robot and not the other because doing so with one was fun and with the other it was not. They were playing. It’s what children do. It doesn’t reveal any insight into the human mind.


Do you not see the potential for insight there? You said it yourself:

> The kids tormented one robot and not the other because doing so with one was fun and the with the other it was not.

Why was it fun with one, not the other? Why do we take to certain forms of play? How do toys encourage these forms of play? This is extremely insightful as child’s play is partially the process of practising adult behaviours.


There are videos of engineers kicking robots to see them recover or self-balance, and people watching are always like, "OMG why would you kick him?"

And then it takes a second to realize .. wait...that's ..that's just a robot. But our brains make that association because it looks human enough.

So watching that Pleo video: even though I rationally understood what I was feeling, I still felt bad for the dino, and I'm glad I felt it. That was just cruel and disturbing. I know I'm feeling bad for a toy, but our brains evolved to have those kinds of empathic responses, to identify when other animals or humans are in pain -- pain similar to what we can experience.

It's a pretty interesting trait, and it's not purely nurture (vs. nature), because we see similar sounds and empathic reactions in many other mammals.

There are quite a few human beings who don't feel that; who can watch animals or people in pain and not feel anything, whose sounds and cries trigger no neurological response in them. Those people usually grow up to run banks, or become CEOs of multinational conglomerates.


> Those people usually grow up to run banks, or become CEOs of multinational conglomerates.

Got a citation? I imagine most people like this would end up in prison and/or socially marginalized. Being incapable of empathy is basically a disability.


I Am FishHead is a good documentary about psychopathy:

https://www.youtube.com/watch?v=TB0k7wBzXPY


Low expectation, high variance, I'm guessing.


I think it was intended as a joke...


>> And then it takes a second to realize .. wait...that's ..that's just a robot. But our brains make that association because it looks human enough.

When robots become self-aware and ingest the video of us kicking the Boston Dynamics robots and screwing with them with brooms and all that... that's not gonna be good.


Just imagine that the Pleo is infected with a xenomorph... and you're all outta nukes.


Alternative source with multiple formats including an OCR'd text file: https://archive.org/details/furby-source


The patent wrapper containing it was obtained and scanned by Sean Riddle (seanriddle.com).

This is where it came from, in case anyone (including me) was wondering --- there are a handful of articles online claiming it was "leaked", but it doesn't appear to be so.

It is probably not coincidental that this marks the 20-year expiry of that patent.

Patents can have very interesting things in them, including source code. I can't find the reference now but I remember reading that a famous early calculator (TI? HP?) was reverse-engineered and emulated down to the transistor level because of the detailed chip layout and source code from a patent.


I was occasionally Googling the source code out of boredom, and one day this yielded a bannister.org post of Sean claiming he had obtained a scan of the source code from the USPTO.

I proceeded to contact him and he proceeded to scan and publish it in a few days.


Interestingly, it is the exact same scan. (You can download the PDF from IA and get the same file that the link points to.)


You could totally combine this with some ideas from the Tamagotchi Matrix [0] and create a virtual world of furbies.

[0] - https://hackaday.com/2015/11/24/building-the-infinite-matrix...


They made a heaven for emulated Tamagotchi! This makes me happy.

http://tamahive.spritesserver.nl/


I believe the period-correct way to combine things like this, is by imagining a beowulf cluster of them.


I don't know how I missed this.


I just ordered a print copy of this. Going to have a ritual burning of the Furby source at upcoming 90's nostalgia party.


I wonder if anyone could use this to improve on the Furby Organ?

https://www.youtube.com/watch?v=GYLBjScgb7o


That guy is incredible; I've been a supporter for a while. He's also a hardware guy, not a software guy, so I strongly doubt he'd make use of it. He'd respect its existence and any derivatives or interfaces that might come out of it, though!


For those like me who don't know what Furby is:

"Furby is an American electronic robotic toy released in 1998 by Tiger Electronics. It resembles a hamster or owl-like creature and went through a period of being a "must-have" toy following its holiday season launch, with continual sales until 2000. Over 40 million Furbies were sold during the three years of its original production, with 1.8 million sold in 1998, and 14 million in 1999. Its speaking capabilities were translated into 24 languages."

https://en.wikipedia.org/wiki/Furby


Emulator when!?

… "Furbulator"

Need it to sneeze when the build fails.



I really wish my wife hadn't gotten rid of her old original Furby. I'd love to "follow along" in the code while it's actually running.


It would be really neat to have a "build a Furby" kit with motors and chips you had to flash yourself with this source code.


An ex-girlfriend of mine worked for the company that works on the latest iterations of Furby. She wrote a lot of the lines that the Furby speaks, if I recall correctly. She loved to sit in on the voice recording sessions and hear the actors speak the lines. One of the voice actors was rather prominent and had done a lot of recognizable voices from various cartoons.


I built a system once for a Furby-like product that was designed so that the voice talent could listen to all sorts of speech snippets and, crucially, play them back over the exact same hardware as the final toy. These 'golden ear' folks would pick the best (heavily compressed) speech that would be used in-product.


I worked with Caleb Chung (designer of Furby) on the Pleo (animatronic dinosaur) project. It was much more sophisticated, but that was also part of its downfall. I worked on the VM (Pawn) that ran on its ARM processor and would have allowed for user programming of new behaviors.


If I remember correctly, the Furby was a deinstructioned 6502 copy, somewhat like the Ricoh NES processor.


It's definitely a 6502 looking at the source.

And the NES is a real 6502. They just made a couple very simple patches to the metal layer. Didn't want to pay for the patents on the BCD instructions AFAIR.


http://forum.6502.org/viewtopic.php?f=1&t=2027

"The datasheet describes it as a 6502 instruction set, with an X register but no Y, and just 69 "instructions" (presumably opcodes, of which the 6502 has 151). It has a banked ROM architecture and just 128 bytes of RAM at $80 through $FF which "includes stack" - so page one is folded onto page zero presumably."


OK, it does look like a Sunplus SPC81A, like they said. Memory map, interrupts, I/O registers all match.

http://www.ic72.com/pdf_file/-/428557.pdf

FWIW though, it doesn't look like page one is an alias of page zero; it's just unmapped. Aliasing the zero page and the fixed stack page wouldn't be the biggest deal in the world anyway, given that this is probably a totally different mask set than a regular 6502. I wonder if you can underrun the stack into the HW registers, or if their stack pointer is only 7 bits?


"Due to lack of time, I resort to brute force ... YUK ..." - (label Simon3 page A-60)

Same here :(


Anyone know what the Easter eggs in the Furby35 changelog entry are?

> BURP ATTACK, SAY NAME, TWINKLE SONG, and ROOSTER LOVES YOU


> Anyone know what the Easter eggs in the Furby35 changelog entry are?

They are well-known and documented almost everywhere.

The interesting thing the code does reveal, though, is that there were only 15 possible names, with this being extended to 24 later.


Seems like it is supposed to be cp437 but the font used to print this isn't.



Just a friendly warning if you're on mobile: the PDF weighs in at 62MB...


Yikes, that’s probably worth adding to the title. I was trying to load it in mobile Safari and I thought it was just hugged to death.


It's 6.4 MB. But maybe the URL was changed?


Yes, it appears to have been "optimised" at some point. Anyone have the full-res original?

Edit: https://archive.org/details/furby-source


Yes, the OCRed version is lower quality and smaller. However, archive.org does preserve the original if you need it.


Is that real?



