Hacker News
AI company that made robots for children went bust and now the robots are dying (aftermath.site)
139 points by ceejayoz 34 days ago | 173 comments



If the company had any sort of ethics at all, they would release a patch that lets you point it at another LLM so you could at least keep it running. Or release their software as open source.

Or just do something so that they don't just "die".

Another example on the long list of "there should be a law that you have to open source your server if you're shutting down a server based service".
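As a sketch of what such an "unlock" patch could look like (all names here are hypothetical; the real Moxie firmware is closed): the device reads its LLM endpoint from a local, owner-editable config file instead of a hardcoded company server, so any compatible service, including a self-hosted one, can take over after a shutdown.

```python
import json
from pathlib import Path

# Hypothetical defaults -- a real patch would keep the shipped server
# as a fallback and let owners override it locally.
DEFAULTS = {"base_url": "https://api.vendor.example/v1", "model": "companion-1"}

def load_llm_config(path="llm_endpoint.json"):
    """Return endpoint settings, with any local overrides applied."""
    cfg = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        cfg.update(json.loads(p.read_text()))
    return cfg

# An owner (or a community tool) drops in an override pointing at a
# self-hosted server:
Path("llm_endpoint.json").write_text(
    json.dumps({"base_url": "http://localhost:8080/v1"})
)
print(load_llm_config()["base_url"])  # the local override wins
print(load_llm_config()["model"])     # keys not overridden keep their defaults
```

The point isn't the specific format; it's that one small indirection like this is all it takes for the hardware to outlive the company.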


The last time the issue of "abrupt" startup shutdowns messing up people's lives was debated on HN, there was a contingent that insisted that the founders and leadership have a fiduciary duty to shareholders to hold out for a miracle. If they release firmware that unlocks the robots, they immediately set their startup's worth to about $0, when it could have been bought out at the last minute for a couple million. Or so the story goes; I think it's immoral. When you're left with <1 month of runway, the best thing for your employees and customers would be an orderly exit. VCs may not like that, though, so it tends not to happen.


> fiduciary duty

Personally, I love the phrase fiduciary duty. I use it all the time to help rationalize the choices I make, to ensure I maintain my fiduciary duty to myself, and I use it liberally to convince my friends it's ok to fight for raises, or for the new job they want. I think I also like it because of the context it has to live within in my brain. A long time ago, when I was a young teenager, I was arguing with a named partner at a huge law firm (a family friend) about whether a large company had to cede identifying information about users to $government. I made that same argument: they have a duty to shareholders. His retort was simple: "No, this is a human rights issue, you can just say 'we're not playing'." That was a quick end to that conversation, because of how insane it is to suggest giving up human rights for money.

Fiduciary responsibility is nice and all, but have you tried upholding your responsibility to protect human rights too? Sure, startup founders have a responsibility to hope for a miracle "for the shareholders!", but they have a much higher duty to human rights.


> That was a quick end to that conversation, because of how insane it is to suggest giving up human rights for money.

It is insane when you write it out like that, but this is further complicated by money providing extrinsic motivation while caring about human rights is intrinsic, at least under our current economic system and a jurisprudence with a laissez-faire approach to human rights violations (see wage theft for a direct example of the insanity). C-suites are more worried about shareholder lawsuits than about suits from either consumers or employees.

My suspicion is that there's active filtering that prevents those who "overly" care about human rights from climbing too high up the leadership ladder at for-profit organizations.


> they would release a patch that lets you point it at another LLM

What a great idea. Let parents hook it up to Gemini, which told users to eat glue and rocks. I’m sure wrong and harmful information won’t have any effect on autistic children, no siree, can’t see a single problem with that.

And before anyone has the follow-up brilliant idea of then putting the blame on parents for whatever LLM they choose, let me remind you that even on HN (ostensibly with a tech-savvy crowd) a ton of people furiously defend LLMs. Non-tech users aren’t as equipped to make these choices.


If they owe debt, they may not be legally able to open-source their software, since it could still theoretically be sold to someone to pay off debts. Is there an expert in bankruptcy law who knows for sure?


What’s special about server-based services vs other services? Should they also be obligated to publish their IP if they cease operations?


The larger issue is that they sold a product which was designed for children to form emotional attachments to, but the product was never designed to be long-lasting.

If they were ethical, they would’ve ensured that moxie would work at least somewhat offline.

Nintendo has been doing this for years. Their games are always almost entirely playable offline (outside of timed events.)

That’s why you can “time travel” in animal crossing and Pokémon.

It doesn’t even phone home to check the time even if it affects gameplay.


What are you on about? Last I checked, Nintendo makes a huge number of games unavailable and is extremely litigious against the emulation community. IIRC you can't even transfer save files to a different device in some cases.


If you have a Nintendo game on a Nintendo console, that game will continue to work on that console as long as it still physically functions.

If Nintendo goes bankrupt, my Game Boy, 3DS, GameCube, Nintendo 64, and Switch will all still work pretty much as if nothing has changed.

Unlike Moxie or that Spotify Car Thing.

You’re describing a different issue entirely, around games preservation outside of official Nintendo channels (emulators and ROMs).


Both can be true: Nintendo does well at sunsetting services, ensuring games work as well as they can for as long as possible, while still being complete jerks about making sure you can only play a game that one specific way; if you want to play it any other way, they try to force you to buy the same game over and over.


Nintendo could do away with emulation altogether by requiring games to always ping Nintendo servers on startup, or they won’t run.

They could include any number of DRM strategies to prevent emulation at the cost of longevity of their software and hardware.

They don’t, though. They opt for litigation over really strong DRM.


Server based services are different because they are required for me to keep using the thing I purchased. If my hardware requires an internet service to keep functioning, then they should have to make that service available for self hosting or for someone else to host it if they go out of business.

There should never be a case where something physical that I bought stops working because of an internet based service.


Is there a kid safe LLM available?

I can imagine this going horribly wrong fast.


Companies only ever have ethics when forced.


There's zero (0) motivation. They have their $800 per child. Expecting otherwise implies there's someone in charge of the company who actually cares about children, or who can even possibly comprehend sympathy.

Rather than: "AI is hot. Parents with children spend money on toys. Selling children AI toys will make money. Profit."

Notably, it doesn't look like that many people were actually swindled: the companion app has only 5,000 downloads on Google Play and 27 reviews, many of them scathing. [1]

> Like a Jipsy moxie will keep taking your money And lie to to convince you to keep paying for it. It's $100 a month and cost $1,000 to buy.

> Do not buy this product. Technical support is non existent. I opted to return the robot. They acknowledged return. 3 weeks later still no refund for this $1K robot. Only after I created a PayPal dispute did PayPal refund me. Still not a single reply from the company to any of my emails.

[1] https://play.google.com/store/apps/details?id=com.embo.embod...


100% this. We need right to repair, we need EoL mandates, and we need to restrict IP protections, especially their duration.


Does it count as being ethical if you have no choice?


Force means the society has ethics, rather than the company.


It's impossible to do safely. An open-source model for kids!? Without moderation, every 'hacker' would want access. IMHO, the only option is to sell the support business to someone else. With a paid subscription it could survive for some time, but obviously not for long without new robots and subscriptions. More interesting would be a sort of ChatGPT for kids, with some basic robot control like flashing lights. Then several small and big manufacturers could make mechanical robots and contribute to common software development.

Another proof that 'robotics is hard'...


This is the premise of the short story 'The Lifecycle of Software Objects' by Ted Chiang. In that story, the employees of the company take it upon themselves to maintain their human-like pets. If the concept of future AI/robot friends like this 'dying' is of interest to you, give this story a read: https://www.goodreads.com/book/show/7886338-the-lifecycle-of...


Ditto the shelter for obsolete dinosaur-toys story by Chwedyk (2001) https://www.reddit.com/r/BKCNoSpace/comments/rkkl80/the_meas...


Also check out the lovely novel Klara and the Sun, about a solar-powered AI robot friend gifted to a sickly child.

https://en.wikipedia.org/wiki/Klara_and_the_Sun



I came to comment this. It was a weird story, to be honest. I didn't know where it was going, but at the same time I couldn't stop reading. Highly recommend it.


Beat me to it!


Idea: if a cloud-connected product is bricked within 7 years of purchase, the company responsible must make the code open-source and freely available in a public repository.

This would not only allow consumers to continue using the product they supposedly own, but would also cause investors and managers to pause before developing and marketing an exclusively cloud-based device. CPU and memory are good enough that most regular consumer devices shouldn't need an internet connection.


This is a very hackeresque, blinkers-on response.

Your idea may work for technical systems used by technical users, but in this case (and I'd bet my bottom dollar most cases are similar) the products are intended for non-technical end users.

How is some random family supposed to take the public repository of source code and set it up so their child's emotional support robot works? How is your middle-aged neighbour supposed to set up their Nest doorbell with someone else's server and establish trust? How is your aunt supposed to set their Alexa speaker to a self-hosted server? These are people who call a desktop computer a 'CPU'. Put yourself in their shoes, their IT skills are basically 0. Where would they host the compiled code? How would they even know where to start?

The intention is certainly good and it would be an improvement, but it would not allow consumers - most of whom are buying an appliance - to continue using their products. Would it really also push investors and managers to pause? It's a non-monetary impact and I can't imagine they'd allocate any effort building systems that package it up so that it's simple to stand up the cloud services. It has no benefit to them.

Most consumers aren't hackers/developers. Most are non-technical. Implicit in the purchase of a cloud-connected appliance is an expectation that some geeks somewhere make everything Just Work for them. They're not looking to have to tweak (digital) knobs and configure parameters - that's the company's job. That's why so many of these sorts of devices are designed with very simple user interfaces.

Your suggestion is good within the narrow window of technical systems for technical consumers, but this is a far tougher nut to crack than that when considering the broader audience.


But it is a necessary prerequisite for any solution that is accessible to non-technology-minded consumers.


Not at all. It’s about as necessary as owning a metal foundry is for someone who wants to use their cloud-enabled car. They don’t need the base materials, they need a turnkey solution.

It’s a nice-to-have if you could resolve the part about needing to trust some random person picking it up and modifying it, or hosting their own server for you to use. Unlike when you purchase an item from a known legal entity, there’s way less assurance about your data, security, etc.

The lay-consumer isn’t going to spin up a DO droplet to host their own instance. They’d be hard pressed to sign up to ChatGPT and drop in an API key.

These are all very hard problems and I don’t pretend to have any answers.


> Your suggestion is good within the narrow window of technical systems for technical consumers, but this is a far tougher nut to crack than that when considering the broader audience.

I honestly wouldn't even go that far. Even systems that were designed for self-hosting can be difficult to deploy on your own. A system that was managed by multiple teams might take a man-year to unravel.


I think you are missing an important part of capitalism from your logic. Once this system exists, the shops that currently replace broken cellphone screens will all start offering repair/enhance/upgrade services for this sort of equipment.

My classic car is no longer supported by the defunct manufacturer, but there is a whole ecosystem of parts and services companies who come together to support it for me. If we had better access to the internals this could be the case for expensive electronics too.


Most likely the source code alone is not enough; the system probably depends on third-party paid cloud services or third-party components whose licenses can't be passed on.


And you need the private keys, the certificates, the cloud domains.

Can’t release those publicly, so you’d need someone to hold the domains, the certs, and the private keys, and to operate the servers. That costs money, so they’d need to collect payments and manage customers. Starting to look an awful lot like running the company again.

You probably also need the customer database including user accounts and user data. Obviously can’t release that, so you’d need some process to release everyone’s data individually on some ongoing basis.

And you couldn’t release any code that was licensed rather than owned, so users couldn’t build most software anyway without acquiring their own licenses for that 3rd-party code.

There are so many problems with this idea that would prevent it from being viable.


Cloud providers are available to competitors and private individuals, so it could still help. Any step in the direction of repair is a positive compared to the black boxes we have now.


Alternative idea: regulate unnecessarily network-connected shit out of the market. Just don't buy smart doorbells, robot vacuums that upload your house maps to the cloud, and 800-buck teddy bears. Petition your overreaching unelected bureaucracy to mandate disclosure of network connectivity, and ask for an effective right to repair and the right to repoint network-dependent shit to your own server (or one supported by a 3rd-party provider).

Eventually, network-connected shit will piss off enough people to get a combination of all of the above, without the need to open-source and deal with it yourself.


This is the correct response to my suggestion


What if they're going bankrupt anyway though? No fine will matter surely


You'd probably need to put the source code/keys in escrow. With legally mandated escrow, you can go after the owners/executives if they're not compliant.


Businesses are more than some source code. Even if you forced a company to release source code, you’d need to have access to the cloud domains and be able to have someone setup and operate servers at those addresses for everyone to use.

Forcing companies to forfeit their IP to public domain is an idea that appeals to tech people who imagine they’re going to reboot the entire service themselves, but it wouldn’t actually solve these problems in real life business cases.

It’s also effectively an appropriation of the company’s IP. Any country that introduced legislation that mandated companies run products for 7 years or completely forfeit all of the IP would immediately see development and production of that product move to other countries without such laws.

These ideas get floated on Hacker News but they’re completely unrealistic.

> you can go after the owners/executives if they're not compliant.

Holding startup founders personally liable if their product or business fails before 7 years and their business can’t be open sourced would create a very bad business environment, to say the least.

This also ignores the real problem that parts of source code are frequently licensed, not owned. Companies can’t just open source everything because they’re often using some licensed software or libraries.

This entire plan would make it effectively illegal for companies to develop or release a product which contains code they licensed from another party.

The issues with these kinds of ideas are numerous. It’s the domain of Internet fantasy, not actual policy.


> This also ignores the real problem that parts of source code are frequently licensed, not owned. Companies can’t just open source everything because they’re often using some licensed software or libraries.

Regulation can require that sublicenses extend to private individuals for repair of devices or services that are EOL. It could even limit the versions covered and commercial use.

Regardless, perfect doesn't need to be the enemy of the good. As it is consumers are increasingly powerless and landfills are overflowing.


What you’re suggesting is a ban on licensing closed source software.

Any country that did something like this would see every hardware company immediately relocate to a different country that didn’t have such weird laws.

Regulation like this is how you completely crush a country’s tech industry.

> landfills are overflowing

Tech waste is a tiny fraction of all garbage. People throw out more household trash by volume in a week than they might discard in tech devices over many years.


You have very quickly glossed over why the tech people would be unable to reboot the entire service, and immediately jumped to hand-waving about "stealing the IP". The company is bankrupt; it does not need the IP. And the customers still need the product you promised to support. And if you have not promised, you should be forced to promise; otherwise what's the point? It's like selling air that can escape at any time. Nope. I'll steer away from those businesses like I always have, and like most sensible people do.

> Holding startup founders personally liable if their product or business fails before 7 years and their business can’t be open sourced would create a very bad business environment, to say the least.

Maybe. That's a conjecture; a hypothesis at best. At the same time it might introduce pressure to have the licensing craziness tamed and streamlined. It's way too complex and hinders innovation, as you half-alluded to.


> The company is bankrupt, it does not need the IP

I think you have a fundamental misunderstanding of how bankruptcy works if you don’t think the IP is important in bankruptcy.

> Maybe. That's a conjecture; a hypothesis at best.

It’s not conjecture, it’s simple fact.

If you introduce laws making founders personally liable if their product fails within 7 years, you create a hostile environment for startups. It’s very simple.


The legal aspect is not interesting to me. I am more interested in "What are you going to do with that IP?".

Citing legalese is not relevant here; we all know how the situation came to its current state. The really interesting question is how things can improve.


Right, this is why I’m trying to tell you that you don’t understand how bankruptcy works with tech companies. The IP is an asset which is sold off to recoup money, which goes toward the company’s creditors.

If you introduce a law like this, you effectively remove that IP from the company’s assets. Tech companies are valued largely by their IP.

No sane investor or startup will headquarter themselves in a country that has a law forcing their IP into public domain.

Investors won’t invest in, and banks won’t lend to, companies developing tech IP if there’s a law on the books that says they can’t sell that IP.

> The legal aspect is not interesting to me.

I can see that much. :)

The legal aspect is all important. Waving it away is ignoring the entire issue.

> the real interesting question is how can things improve.

The proposals so far would do nothing other than discourage development of new products in that country. Hardly an improvement.


No founder is going to pay too much attention to what happens if they go bankrupt. They will be focused more on succeeding. Anyone that takes this into account probably deserves to go bankrupt


Unproductive discussion, I see. Seems like you are a protector of the status quo. To me it's blindingly obvious things are stuck and should be shaken a bit.


Then threaten the corporate officers’ freedom instead


This category of consequence would solve so many problems. Think of how much everyone's lives would be improved if stealing wages was a crime with actual repercussions.


Where do you see evidence of wages being stolen?


Wage theft is extremely common in the US, to the sum of billions

https://www.epi.org/publication/employers-steal-billions-fro...


Perhaps, but is there any evidence of wage theft occurring in this case?


GP was drawing a parallel, not alleging that wage theft is a part of these particular bankruptcy proceedings.


Threatening to jail startup founders if they can’t open source the entire code for their product if it fails is one of the more out of touch ideas I’ve seen on Hacker News lately.

Many products are developed with software licensed from vendors, which cannot be open sourced by the startup. They don’t even own the IP, just a license to use it.


Maybe on bankruptcy, any source code or data should not be part of the sale of the company to potential buyers, but should first be examined to see whether it should be owned by the customers instead. Also relevant to 23andMe.


If you didn't own it before bankruptcy, why should you own it after? You are not even their creditor.


Because it's necessary for the thing you do own.


You could add it to the bankruptcy laws; the trustee would hire someone to release the code.


Many hardware products are built with licensed code. It cannot be released because the company doesn’t own all of it.

There’s also more to a product than just some source code. You’d need the cloud domains, the private keys and certificates, and other bits. Releasing many of these to the public domain wouldn’t solve anything, because you’d need someone to operate the cloud servers at the known address and to trust with the private keys and certificates.

All of this quickly begins to look a lot like requiring someone to operate the business again, which is obviously a silly thing to require in bankruptcy proceedings.

Forcing companies to release their IP if they go bankrupt is equally silly, because it renders the value of the company’s IP as $0 before anyone can be paid out.

The only thing this would incentivize is for companies to either relocate to countries with sane laws, or to shut off the servers and avoid bankruptcies by keeping the company technically alive but doing nothing other than staying as a registered business. It would act as a holding company to keep the IP in case someone wanted to buy it.


Pierce the corporate veil?


    > CPU and memory are good enough that most regular consumer devices shouldn't need an internet connection
Moxie appears to use ChatGPT and an in-house LLM for facial/speech recognition.

https://moxierobot.com/products/ai-robot "emotion-responsive HD camera and GPT-powered AI"


So, there's a physical product out there utilizing LLMs, the company secured funding from some well known VC funds.

Yet, this company failed to secure additional funding at the last minute, in an atmosphere where Gen AI is the buzzword everyone's after?

Is this a one off case specific to this company or is it some sudden realization among businesses about the current viability of AI products?


Eh, that's like talking about the viability of computer products... Yeah, many will fail. And many will prevail in sizes you can't even imagine. There's nothing about AI that would tell you which company is going to be which.


> And many will prevail in sizes you can't even imagine.

Been more than a year, still waiting.

Sure, it might take 50. But that doesn't make the AI craze any less crazy in the meantime. Way too much time and money thrown at fairly marginal benefits.


Still waiting for what? For OpenAI to gain another $100 billion in net worth?

Marginal benefits? Even just GitHub Copilot is absolutely worth it, itself probably a billion-dollar business, and it has changed the world.


No, waiting for "AI" to be more than super-niche useful. I am saying that way too much money has been poured in for the fairly underwhelming results that we get.


Software engineering is a huge industry in which AI has produced enough results to make it all worthwhile.

Image generation is another. I personally know a business owner (an online grocery store) who used AI-generated pictures to cut costs by tens of thousands of euros; it's what made the business possible. (The pictures are cartoonish; nobody thinks they're real photos.)


If you say so. ¯\_(ツ)_/¯

I have only seen it introduce friction and keep seniors busy code-reviewing the subtle mistakes that the so-called "AI" does.


Your seniors are not senior then. None of my seniors would commit these subtle mistakes. Find people who can do that and AI gives them superpower.


I view the time of hand-crafting good code a better investment than spending the same time (and sometimes more) carefully eye-balling tool output that is almost guaranteed to contain subtle mistakes and correct them.


I don't have this problem at all, and yet I use Copilot daily since it was closed beta. The output I get is very helpful, indeed sometimes wrong but it's very much outweighed by the successes. I sometimes turn it off to remember how it used to be, and it's just terrible. It actually makes me and my team 10x devs.


Your point being?

That it does not happen to you means it never happens to anyone else?


The point is that the senior professionals who benefited from it created so much additional productivity, it doesn't matter that there is someone who can't use it well - that doesn't make AI useless or not worth it.


Doubling down on anecdotal evidence, okay.


And yours is backed by what exactly?


Nothing... just like yours.

One piece of anecdotal evidence does not nullify the other, and vice versa. You have your experience, I have mine. I even qualified my statements: one of the languages I use is not very popular, and thus statistical models like LLMs obviously don't do well with it. But you are happy to ignore that and keep arguing that your experience is the prevailing phenomenon, which I'll always disagree with.

Peace.


I’d like that idea, but I feel like more would have to go into it. Things like user-accessible updates/reflashing, or someone to care enough about the software to fix it for use by everyone else. Support requirements + this are probably a better way to go.


This sounds far-fetched, but we do have precedent for companies paying for future contingencies up front. The first example I can think of is unemployment insurance. It’s a payroll tax, so even if the company goes broke and has to fire everyone, the state has some cash on hand to make the employees whole.

Something like that could be done here. If you make products that don’t last a reasonable amount of time or you pack up in the middle of commitments then there’s some way for your relevant corporate assets to be transferred to another entity that will take up support.


In this case it makes a lot of sense to open-source the product, at least the non-proprietary parts. It would be extremely cool to provide a build process, a framework and drivers for each actuator/sensor. From there these devices could live a new life quite easily.


This is one law that would stand an exactly zero percent chance of ever passing the US congress. For the simple reason that those who would stand to potentially lose money over such a law own the law makers.

I'm sorry, but the legislative branch isn't of or for you. It is of and for the companies that pay for the congressional campaigns.


> any cloud based device is subject to the health of the company and LLMs are not cheap to run

Perhaps the technical angle to this story is the promise of edge ML. If your language model runs on the device, your cloud inference costs go to zero and the device works as long as it has power.

Aside from the financial benefits, there’s a huge privacy upside as well since no audio or text is sent over a wire. Might be notable for a children’s toy.

Of course, this is very difficult for large companies and VC-backed startups to care about because 1) it involves hard technical problems rather than API calls and 2) as long as you can keep asking for money the inference costs don’t matter and 3) there are no criminal consequences (prison time) for privacy violations.


I don't think edge ML is competitive for now. It can do simple things but not the big beefy LLMs. State of the art AI chips are so expensive that you can't afford to idle them. They need to be M:N shared - M chips for N users so they have maximum utilization, and that fits perfectly for the cloud.

However, there is a middle ground: pluggable AI could potentially be a thing. The device would use an open protocol to access cloud AI. If the original company goes bankrupt then someone else can implement the protocol and the devices can be repointed.


Yeah, definitely, especially for more complicated things like having a conversation. The state of the art will probably always need a server.

For simpler things, small models can definitely handle them. Transcription, object detection, simple classification tasks. I expect more and more to fall under the category of “things which ML can do on $X of hardware” as hardware and software get better.
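To make the "simple tasks on-device" point concrete, here is a deliberately tiny keyword-scoring intent classifier. It's a toy stand-in for a small quantized model, not real ML; the point it illustrates is that the whole decision happens locally, with no network call and no cloud bill.

```python
# Toy on-device intent "classifier": keyword scoring stands in for a small
# local model. Runs with no network, so it keeps working after a shutdown.
INTENTS = {
    "greeting": {"hi", "hello", "hey", "morning"},
    "play":     {"play", "game", "sing", "story"},
    "sleep":    {"tired", "sleep", "bed", "goodnight"},
}

def classify(text):
    """Return the intent whose keyword set best overlaps the input words."""
    words = set(text.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("hello moxie good morning"))  # greeting
print(classify("let's play a game"))         # play
```

A real device would swap the keyword table for a small distilled model, but the shape of the win is the same: the "brain" for easy tasks ships inside the product.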


In spite of my hugely biased pro-privacy stance, I'm skeptical that real privacy is a feature or selling point that can contribute to the financial interests of corporations enough to consider, broadly speaking.

To the corporations, data is the new oil, and to the vast majority of consumers, there's this very defeatist attitude around privacy, something like "I already don't have any privacy, what's one more recording device going to do?".

I think we should not expect privacy to meaningfully improve until the gap between end-user perceptions of the value of their privacy and corporate perceptions of the value of their customers' (or users') data shrinks, and I sadly don't see much hope here at all. In fact, I think a substantial plurality, if not an outright majority, don't even have the technical aptitude to critically evaluate the accuracy of corporate privacy claims, like the misleading ones offered by Apple.

Humans are highly adaptable creatures, and sadly, I think most have settled quite comfortably into the panopticon that modern society has mandated for so many of us, embedded deep down in the terms of service that everyone agrees to but never reads.


>> I think we should not expect privacy to meaningfully improve until the gap between end-user perceptions of the value of their privacy and corporate perceptions of the value of their customers' (or users') data shrinks,

Do you see this as a generational thing? I remember working for wireless companies in the 1990s when GPS got huge and companies wanted to start using tracking apps for their fleet management, and every company we talked to refused to install it because of privacy concerns, with the drivers (and sometimes unions) 100% opposed to it.

Now? We have what? Two to three generations who have never valued their privacy enough to really do anything about it. While I agree with your assertion, I wholeheartedly believe the road back to people seeing privacy as important may have effectively died with the Gen Xers.


It’s a bit more nuanced than people simply not caring. All things being equal, people will choose not to be spied on. Apple showed that pretty conclusively in my mind with the “allow app to track” toggle.

If your generational hypothesis is correct we should see that older people are less likely to allow the app to track. I personally doubt that but I don’t have the data. More likely is that young people are more likely to use newer technologies at all, and those technologies have other side effects.

People care, imo. Companies know this so they make it a pain in the butt to opt out. Plus, the harms are very abstract and rarely materialize. So unless a person has a lot of time to spend configuring everything, they usually don’t waste time turning tracking off.


I agree, I don’t expect privacy to be popular enough to drive the R&D. But models small enough to run on-device will be a byproduct of other cost cutting, which means we will be able to turn off telemetry via other means or even develop more privacy-conscious devices.

For the layman, this will allow them to bypass a monthly cloud subscription fee which is a killer feature.


On the one hand, that's sad. On the other hand, what a great way for both parents and children to learn about the idiocy that is IoT in the Clown. And oh boy do people need to learn!

It could be worse. Imagine that the children had to hand over private information to use the toy, and for that entire dataset to be leaked because the toymaker knows nothing about security. Oh, that already happened: https://www.bitdefender.com/en-gb/blog/hotforsecurity/vtech-...

Or imagine the toy had cameras and microphones uploading continuously to the Clown, then "hackers" released all of it.


Or an attacker gaining control of a specific device to make kids do stupid/bad/illegal things.

We can teach children not to trust a stranger offline or online. How do you teach them to be on guard about an interactive emotional support robot friend that's a parental gift?


> Moxie can’t perform core functionality without cloud connectivity.

The usual story, and more e-waste generated.


I'm a former hardware engineer with a lot of ties to communities that include handicapped children. Happy to help reverse engineer and resuscitate these as a side-project if I can get my hands on a "dead robot".


The real problem is that parents are giving children AI technology for socialization, not that the robots are dying.


Maybe this is the straw that breaks childhood's back, but they've said the same about every boogeyman since the 19th century. "The real problem is that parents are giving children [books → radio → comic books → rock music → television → video games → D&D → rap music → computers → internet → smartphones → social media → toy that uses AI], not [the actual problem]."


The truth is probably somewhere in between "social media/technology is the cause of all problems" and "social media/technology causes absolutely zero problems that wouldn't be caused anyway"


For sure, there are problems ascribed to all of these things consumed [ignorantly | irresponsibly | in excess]. Still, I've lived long enough to see many of these featured in hysterical "for the children" propaganda, and I find myself recoiling from that, maybe more than most. It's easy to see that AI (LLMs) are next on the list to be vilified, which seems absurd to me.


Undue alarmism against a thing does not render that thing harmless, even if the original alarmism is excessive.


We are in complete agreement on that. :^)


That is true but a 0.001% chance of causing novel problems is effectively zero for most people.


I do not believe it is 0.001%


Whatever the actual chance is, some fraction of the population will still believe it’s around that.


First, reasoning by analogy (or dismissal by analogy) is a poor way to reason. Second, that progression is already part of the problem in a way -- at least part of it is a progression of technology, which is alienating and isolating.

So, I disagree that it's any straw. In fact, I'd argue it's the reverse: AI has just revealed and accentuated the real problem: social media, smartphones, internet, computers, video games, etc. were already some kind of problem, and it's one of magnitude, not some binary condition.


If you're not so quick to leap to the thought-terminating cliché, you might stop to think that yes, those transitions did change us. And how might we change next, after we outsource thinking and socialization to automation rather than just consume media in a different format?


Ultimately, the parent bears full responsibility for their kid, because they have far more involvement in the development of the child's "core" neural networks.

Not enough emphasis is placed on this.


I mean, I think the convincing lie machine is qualitatively different from those other things.


Thank you, I was losing faith in humanity.


Perhaps a useful lesson for the children that became attached to the robot. They got to experience grief but no living thing was actually harmed.


>They got to experience grief but no living thing was actually harmed.

Except for the kids, that is, who got hit by unneeded and unexpected grief.

Yay.


If the grief puts them off Software as a Service for life, I’d say it’s a net positive.


Yes, I’m sure that’s exactly the lesson that autistic children will take from this. That won’t be hard to explain at all. Publishers are already rushing to make a children’s book on the perils of trusting SaaS with your friendship.

Let’s be real: This is at best a lesson for the adults, and not one they’re concerned with learning right now.


Holy smokes, coming to a CV in ~20 years: "I cannot work with cloud-based services due to childhood trauma.".


orphan crushing machine

> nfl player pays for 15 kids to go to college

> student organizes bake sale to pay for school lunches

> chemistry teacher finds creative ways to pay for chemo


I was thinking something along those lines. Basically every pet will die before its owner, perhaps in tragic fashion.

However, parents who shelled out $800 recently may not appreciate that.


If you work on "social robotics", just remember that this one of the least concerning of the corporate abuses that will arise from your work.

There is no ethical way to build robots to which humans are meant to, or predictably will, become emotionally attached.


If you think humans getting emotionally attached to robots is bad, wait til you hear about the type of stuff that stems from humans getting emotionally attached to other humans


> According to Embodied, Moxie can’t perform core functionality without cloud connectivity.

As always. Everything that relies on some cloud can become a brick without warning. And many things shouldn't require it...


> and if you bought the device on a payment plan it’s out of their hands

Woah. So you'd have to keep paying for this now-defunct plastic piece of garbage?


Corollary on another thread discussion: Never ever buy anything that depends on Software as a Service on a payment plan.

They go out of business too quickly these days, and run away with the money.


I find the video in the article really interesting. This person is clearly an adult and is crying while discussing the robot's future "death" with it. She's clearly quite attached, despite the robot being quite inhuman, with subpar text-to-speech.

This company is a take the money and run type of operation. No subscription plan and I doubt the API fees were making a huge part of their obscene profit. They would have eventually, but clearly they were not going to wait that long.


This video is reaction bait, crafted specifically to maximize engagement


Well yeah, I would cry if one of my stuffies got destroyed and it's even less animated than the robot. It being less animated actually helps the attachment because it's more like a pet than a person.

Humans will pack bond with anything.


Parents who can afford $800 toys for small children, made by startups that have not quite made it, can probably get them the therapy they need to get over this.


I support jail time for cases like this, but with mandatory physical consequences (like a daily beating with a stick). Since that is kind of unrealistic (sadly), here's a more realistic idea: if a company goes bankrupt and bricks its cloud-dependent products, I, as the owner of a now-defunct product, should become a co-owner of the company. By extension, any software licenses the company holds (e.g., Qt) would transfer to me as well. This would grant me the right to receive a copy of the source code and build it myself. With access to the source code, I (and everyone else) could easily change any hardcoded DNS name. And even without changing it, everyone can run a Pi-hole; why not add a special case pointing their domain at my server? (I don't use Pi-hole, but I assume it has that option.)
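For what it's worth, the Pi-hole trick is just a local-DNS override; Pi-hole is dnsmasq underneath, so a single record does it. A sketch, where both the hostname and the IP are made-up placeholders (the device's real cloud endpoint would have to be discovered first):

```shell
# /etc/dnsmasq.d/99-robot-rescue.conf
# Resolve the device's hardcoded cloud hostname to a server you control.
# "cloud.example-robot.com" and 192.168.1.50 are placeholders, not the
# real Moxie endpoint.
address=/cloud.example-robot.com/192.168.1.50

# Equivalent single-machine override via /etc/hosts:
#   192.168.1.50  cloud.example-robot.com
```

The redirected device would still need a replacement server that speaks its protocol, and pinned TLS certificates would defeat this approach entirely.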


> I support jail time for cases like this, but with mandatory physical consequences (like a daily beating with a stick).

I hear you, but I think corporate dissolution is the one case where no one can really expect a device to remain supported, so punishing that will do no good. Punishment only makes sense when you have an ongoing viable business that drops support for no good reason (like Spotify's Car Thing).

IMHO, for dissolution, it makes more sense to require open-sourcing all device and server code if there's no entity willing to take on support.

For an ongoing business, there should be some onerous punishment if they decide to brick a connected device too early (e.g., before five or seven years after the last unit sold), and if they brick it later, they need to open-source the related code.


It's like DORA, but with the enforcement arm outsourced to Singapore?



As someone with a kid, I feel bad for these little ones. When a stuffed animal gets beat up or a toy gets physically damaged, it's easy to explain to them what happened and why. When a pet dies, it sucks, but at least it's an opportunity for them to learn about life and death. Good luck as a parent explaining to a kid that her beloved friend is going to stop working because some company far away screwed up, they don't care, and they designed the thing stupidly to only work as long as they were perpetually in business. Buyer beware again and again.


As someone with a kid, I feel bad too, but in response to your point: if your kid weren't developmentally ready to understand what was happening here, would explaining a pet's death be so much easier to handle in comparison? I've had to explain to a toddler in active crisis that his toys have run out of batteries, gone missing, and/or otherwise stopped working. It's never fun, but unloading the existential crisis of owning hardware that depends on cloud services onto your kid seems either sufficiently advanced or totally unnecessary.


That sounds like a good life lesson right there. There's going to be sooo much stuff set up that way in their future unless they learn not to buy into it.


Well, just like the robot is a simulacrum of a friend, the cloud disconnection can be a simulacrum of death, no need to explain to them about LLM tokens and their costs and how the MBA's costs-profits graph ended up not being sustainable for this shitty company.


1. If you have $800 to spend on an emotional support robot — OR you’re dumb enough to spend $800 on an emotional support robot, I don’t see this as a big loss.

2. Never spend money on multiplayer-only video games. Never buy a robot that is 100% cloud connected — ESPECIALLY from a startup. This is the same concept stated twice.

3. You buy things for what they are today; NEVER for what they might be tomorrow.

I feel no remorse. Stupidity should be painful.


Honest question: what can a company really do if it goes bankrupt and shuts down, when there's no money left to pay for infrastructure and staff?

Of course, they could release the code as open source, but someone still needs to run the service. Also, if the code contains intellectual property, wouldn't existing investors force the company to sell the IP rights in a bankruptcy?

I am not taking sides here, just trying to understand what options a company (any company) could explore in this case.


If you sell a product with a promise to run servers for N years, you put a deposit to run servers for N years, or you have a contingency plan to let a third party run it for N years after you went out of business.

If you can't deliver on a promise and didn't make a good faith effort, maybe you committed fraud.

I have no idea about bankruptcy law -- does the company defaulting on their contractual obligations make their customers their creditors? Maybe the customers are entitled to get rights to whatever code the company had and is free to run it at their own expense. Or maybe nobody really cares enough about 800 dollar teddy bears.


Allocate money for 5-10 years of support for this contingency beforehand, as soon as the seed round is secured (so in this case: in the past). $800 is an okay investment for a household item that lasts 10 years. For 2-3, though, or even 5? Not at all.


Allocate money before you have any?

Allocate the money you need to become profitable before you can become profitable?

Where would that money come from? Presumably if you had that kind of money you could just skip the whole make a company bit and retire in luxury.


Sounds like you can't sell the product then. Maybe the product is bullshit and it should not be on the market. General sentiment in the comments says the product is bullshit and can't work. Maybe it's correct. Or maybe you should be smart about it somehow and invent a contingency plan and legal structure that doesn't involve putting that much money into escrow. And everybody who isn't that smart will become rich.

Imagine you have 10k and want to become rich by building a luxury apartment building. Can you start selling property titles without first having funding? Can you raise capital, sell not-yet-existing properties, then close up shop and expect everything to be okay?


Sounds like a great opportunity to teach them about the shitty real world we all live in.


A fair amount of the shittiness comes from people actively enabling that shittiness, such as by throwing large sums of money at fly-by-night companies selling products dependent on cloud services and then getting mad when the fly-by-night company goes under and their product stops working, and then voting for politicians who oppose any regulation that would curtail such behavior.


When will people learn about cloud-dependent gadgets? And if this thing already cost $800, how much more would it have cost to include some basic on-device AI as a fallback? Can't you run stuff like that on smartphones these days?


At least for Europeans, there could be a mandatory two-year warranty:

https://europa.eu/youreurope/citizens/consumers/shopping/gua...

The robot was never officially sold in Europe, but the claim might extend to the reseller.


> but the claim might extend to the reseller.

That EU legal guarantee is only between seller and consumer. The manufacturer does not have any obligations under it.

In this case, it likely would come down to the “If this is impossible or the seller cannot do it within a reasonable time and without significant inconvenience to you, you are entitled to a full or partial refund” part


It might be a bit more nuanced in the case of platforms like Alibaba, which claim not to be actual resellers but only service providers to the actual seller.


I don't think that really helps if the company doesn't have any money.


> But relying on large language models to socialize children, particularly neuroatypical ones, seems like a bad idea on every single level

I mean, yes.


I'm fine with that. Those children will learn valuable lessons about loss and death. The kind of lessons that having a pet goldfish provides, but without killing a goldfish.

Or: "The robot needs to go into hibernation now in order to return to its far, far away planet." You're welcome, coward parent.


Another lesson to never purchase anything connected to the cloud.

It is unfortunate that the majority don't care. I'd prefer if the Invisible Hand pushed the economy towards products that weren't e-waste in waiting.


I hope for a day where "Caveat emptor" will no longer be needed, but today is not that day!


Let your kids do a hardware autopsy/reverse-engineering/disassembly on it. They can play medic, or just see how it looks and works inside. Not $800 worth, but better than tossing it.


I know of a similar project but focused on medical rehabilitation. https://en.inrobics.com/


Who has one we can take apart and try to jailbreak? Not even joking


Oh, I read that as robots for chickens. Which, could be interesting.


From the Octopus Poultry Safe to the Chicken Boy, to the Spoutnic that hassles them off the floor and into the nesting boxes, chickens seem amply served by robotic companions:

https://www.canadianpoultrymag.com/rise-of-the-robots-30876/


So did I. Was the title changed?


Surprisingly, I did as well; at least for me, it looks like I just misread it (some sort of human bias?). Interesting to be n=3.


I'd guess subconscious recall of Robot Chicken from the "Robot [] Chi" similarity.


I'll take one of these units off someone's hands for $400, if they have it. I would like to disassemble one on video in a video marketed toward children.


Great idea, we should teach the kids how to do robot surgery to revive their dead friend.


If Dreamcast fans can find a way to make their consoles believe Sega's servers are still running, and keep playing long after support ended, I'm sure some hacker will come along and create the tools necessary to bridge this gadget to a desktop or something.

I'd give it a shot, but I don't have one :(
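Those Dreamcast-style revivals generally work in two steps: repoint the device's hardcoded hostname at a machine you control (DNS or hosts-file override), then run a stand-in service there that answers the device's calls. A minimal sketch of the second step, with a made-up health-check response, since Moxie's real API is undocumented and would have to be captured from traffic before the official servers disappear:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubHandler(BaseHTTPRequestHandler):
    """Answer every GET with a canned "all good" JSON body.

    The real endpoints and payloads are unknown; this only shows
    the stand-in-server pattern, not Moxie's actual protocol.
    """

    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request console noise


def serve(port=8080):
    # Bind to localhost and serve requests in a background thread.
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Combined with a DNS override pointing the device at the box running this, it's the same pattern projects like DreamPi use -- the hard part, reverse-engineering the protocol (and defeating any certificate pinning), is what's left out here.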


Rule #1 in the Techno-oligarchy: If you do not own the platform you do not own the product.


I actually had to double check the title.


Yeah, be aware that many things you buy today depend on some cloud service that may go away.

For example, a computer system that routinely checks for updates will hopefully still function if the update server goes away. And you can always install another operating system.

Since I listen to most music via a streaming service these days, I don't consider the music owned and don't mind moving to another service (which I have done twice, losing playlists in the process).

The worst are "digital copies" of movies. I try not to buy these, but occasionally do. I'm pretty sure the Blu-rays I buy today will work until the media degrades, but I have no such confidence in digital copies.


Most important


Another creepypasta 10 years down the road... the "dead" robot comes back to haunt.


make it open hardware!


great lesson for kids


[flagged]


It was also marketed for autistic children, so it's probably not just rich people.


rich autistic children ?


the techbros who performed the rug pull are now richer than the (supposedly) "unsympathetic because they are rich" customers


[flagged]


This is specifically and overtly marketed to autistic kids, not as a mere toy but as a tool to help with emotional development.

https://moxierobot.com/pages/robot-for-autistic-child


So it's fine if someone steals your daughter's favorite teddy bear, because you shouldn't delegate your kid's emotional state to a stuffed toy?


Yes. The market declining to validate this as a business, at a time when AIs are full of ideological biases, is good news. Incredible that parents would place this much blind trust in it. Strip away the marketing gimmicks and it's like being asked to accept, in advance, a close 24/7 new friend for your kid before knowing whether that's the behavioral influence you want from that "friend" (interestingly, this raises the alignment issue and, as a side note, makes us reflect on how aligned we are with our own friends and with our kids' friends, etc.).


So this is OK? Selling somebody an $800 thing that might stop working is fine and people should just get over it? This "business model" should not exist.


Oh No! They're dying?! The Humanity!!!



