If the company had any sort of ethics at all, they would release a patch that lets you point it at another LLM so you could at least keep it running. Or release their software as open source.
Or just do something so that they don't just "die".
Another example on the long list of "there should be a law that you have to open source your server if you're shutting down a server based service".
The last time the issue of "abrupt" startup shutdowns messing up people's lives was debated on HN, there was a contingent that insisted that the founders and leadership have a fiduciary duty to shareholders to hold out for a miracle. If they release a firmware that unlocks the robots, they are immediately setting their startup's worth to about $0, when they could be bought out at the last minute for a couple of million. Or so the story goes. I think it's immoral. When you're left with <1 mo of runway, the best thing for your employees and consumers would be an orderly exit. VCs may not like that, though, so it tends not to happen.
Personally, I love the phrase fiduciary duty. I use it all the time to help rationalize the choices I make, to ensure I maintain my fiduciary duty to myself, and use it liberally to convince my friends it's ok to fight for raises, or the new job they want. I think I also like it because of the context it has to live within my brain. A long time ago, when I was a young teenager, I was arguing with a named partner at a huge law firm (family friends) about whether a large company had to cede identifying information about users to $government. I made that same argument: they have a duty to shareholders. His retort was simple: "No, this is a human rights issue, you can just say 'we're not playing'". That was a quick end to that conversation, because of how insane it is to suggest giving up human rights for money.
Fiduciary responsibility is nice and all, but have you tried upholding your responsibility to protect human rights too? Sure, yes, startup founders have a responsibility to hope for a miracle "for the shareholders!", but they have a much higher duty to human rights.
> That was a quick end to that conversation, because of how insane it is to suggest giving up human rights for money.
It is insane when you write it out like that, but this is further complicated by money providing extrinsic motivation, while caring about human rights is intrinsic, at least with our current economic system and jurisprudence with its laissez-faire approach to human rights violations (see wage theft for a direct example of the insanity). C-suites are more worried about shareholder lawsuits than they are about those from either consumers or employees.
My suspicion is that there's active filtering that prevents those who "overly" care about human rights from climbing too high up the leadership ladder at for-profit organizations.
> they would release a patch that lets you point it at another LLM
What a great idea. Let parents hook it up to Gemini, which told users to eat glue and rocks. I’m sure wrong and harmful information won’t have any effect on autistic children, no siree, can’t see a single problem with that.
And before anyone has the followup brilliant idea of then putting the blame on parents for whatever LLM they choose, let me remind you that even on HN (ostensibly with a tech savvy crowd) a ton of people furiously defend LLMs. Non-tech users aren’t as equipped to make these choices.
If they owe debt, they may not be legally able to open source their software, since it could still theoretically be sold to someone to pay off debts. Is there an expert in bankruptcy law who knows for sure?
The larger issue is that they sold a product which was designed for children to form emotional attachments to, but the product was never designed to be long lasting.
If they were ethical, they would’ve ensured that Moxie would work at least somewhat offline.
Nintendo has been doing this for years. Their games are always almost entirely playable offline (outside of timed events.)
That’s why you can “time travel” in animal crossing and Pokémon.
It doesn’t even phone home to check the time, even though that affects gameplay.
What are you on about? Last I checked, Nintendo makes a huge number of games unavailable and is extremely litigious against the emulation community. IIRC you can't even transfer save files to a different device in some cases.
Both can be true: Nintendo does well at sunsetting services to ensure games work as well as they can for as long as possible, while still being complete jerks about making sure you can only play that game in that specific way, and if you want to play it any other way they try to force you to buy the same game over and over.
Server based services are different because they are required for me to keep using the thing I purchased. If my hardware requires an internet service to keep functioning, then they should have to make that service available for self hosting or for someone else to host it if they go out of business.
There should never be a case where something physical that I bought stops working because of an internet based service.
There's zero (0) motivation. They have their $800 / child. That would imply there's someone in charge of the company who actually cares about children, or who can even possibly comprehend sympathy.
Rather than: "AI is hot. Parents with children spend money on toys. Selling children AI toys will make money. Profit."
Notably, it does not look like that many people were actually swindled. Only 5,000 downloads for the companion app on Google Play, and 27 reviews, many of them scathing. [1]
> Like a Jipsy moxie will keep taking your money And lie to to convince you to keep paying for it. It's $100 a month and cost $1,000 to buy.
> Do not buy this product. Technical support is non existent. I opted to return the robot. They acknowledged return. 3 weeks later still no refund for this $1K robot. Only after I created a PayPal dispute did PayPal refund me. Still not a single reply from the company to any of my emails.
It's impossible to do it safely. An open source model for kids!? Without moderation, every 'hacker' would try to get access. IMHO, the only option is to sell the support business to someone else. With a paid subscription it could survive for some time, but obviously not for long without new robots and subscriptions. More interesting would be a sort of ChatGPT for kids, with some basic robot control like flashing lights. Then several small and big manufacturers could make mechanical robots and contribute to common software development.
This is the premise of the short story 'The Lifecycle of Software Objects' by Ted Chiang. In that story the employees of the company take it upon themselves to maintain their human-like pets. If the concept of future AI/robot friends like this 'dying' is of interest to you, give this story a read: https://www.goodreads.com/book/show/7886338-the-lifecycle-of...
I came here to comment this. It was a weird story, to be honest. Like I didn't know where it was going, but at the same time I couldn't stop reading it. Highly recommend it.
Idea: if a cloud-connected product is bricked within 7 years of purchase, the company responsible must make the code open-source and freely available in a public repository.
This would not only allow consumers to continue using the product they supposedly own, but would also cause investors and managers to pause before developing and marketing an exclusively cloud-based device. CPU and memory are good enough that most regular consumer devices shouldn't need an internet connection.
Your idea may work for technical systems used by technical users, but in this case (and I'd bet my bottom dollar most cases are similar) the products are intended for non-technical end users.
How is some random family supposed to take the public repository of source code and set it up so their child's emotional support robot works? How is your middle-aged neighbour supposed to set up their Nest doorbell with someone else's server and establish trust? How is your aunt supposed to set their Alexa speaker to a self-hosted server? These are people who call a desktop computer a 'CPU'. Put yourself in their shoes, their IT skills are basically 0. Where would they host the compiled code? How would they even know where to start?
The intention is certainly good and it would be an improvement, but it would not allow consumers - most of whom are buying an appliance - to continue using their products. Would it really also push investors and managers to pause? It's a non-monetary impact and I can't imagine they'd allocate any effort building systems that package it up so that it's simple to stand up the cloud services. It has no benefit to them.
Most consumers aren't hackers/developers. Most are non-technical. Implicit in the purchase of a cloud-connected appliance is an expectation that some geeks somewhere make everything Just Work for them. They're not looking to have to tweak (digital) knobs and configure parameters - that's the company's job. That's why so many of these sorts of devices are designed with very simple user interfaces.
Your suggestion is good within the narrow window of technical systems for technical consumers, but this is a far tougher nut to crack than that when considering the broader audience.
Not at all. It’s about as necessary as owning a metal foundry is for someone who wants to use their cloud-enabled car. They don’t need the base materials, they need a turnkey solution.
It’s a nice-to-have if you could resolve the part about needing to trust some random person picking it up and modifying it, or hosting their own server for you to use. Unlike when you purchase an item from a known legal entity, there’s way less assurance about your data, security, etc.
The lay-consumer isn’t going to spin up a DO droplet to host their own instance. They’d be hard pressed to sign up to ChatGPT and drop in an API key.
These are all very hard problems and I don’t pretend to have any answers.
> Your suggestion is good within the narrow window of technical systems for technical consumers, but this is a far tougher nut to crack than that when considering the broader audience.
I honestly wouldn't even go that far. Systems that were designed for self hosting can be difficult to deploy on your own. A system that was managed by multiple teams might take a person-year to unravel.
I think you are missing an important part of capitalism from your logic. Once this system exists, the shops that currently replace broken cellphone screens will all start offering repair/enhance/upgrade services for this sort of equipment.
My classic car is no longer supported by the defunct manufacturer, but there is a whole ecosystem of parts and services companies who come together to support it for me. If we had better access to the internals this could be the case for expensive electronics too.
Most likely the source code alone is not enough; the system probably depends on paid third-party cloud services or third-party components whose licenses could not be passed on.
And you need the private keys, the certificates, the cloud domains.
Can’t release those publicly, so you’d need someone to hold the domains, the certs, the private keys, and operate the servers. That costs money, so they’d need to collect payments and manage customers. Starting to look an awful lot like running the company again.
You probably also need the customer database including user accounts and user data. Obviously can’t release that, so you’d need some process to release everyone’s data individually on some ongoing basis.
And you couldn’t release any code that was licensed rather than owned, so users couldn’t build most software anyway without acquiring their own licenses for that 3rd-party code.
There are so many problems with this idea that would prevent it from being viable.
Cloud providers are available to competitors and private individuals, so it could still help. Any step in the direction of repair is a positive compared to the black boxes we have now.
Alternative idea -- regulate unnecessarily network-connected shit out of the market. Just don't buy smart doorbells, robot vacuums that upload your house maps to the cloud, and 800-buck teddy bears. Petition your overreaching unelected bureaucracy to require disclosure of network connectivity, and ask for the effective right to repair and the right to reconfigure network-dependent shit to your own server (or one supported by a 3rd party provider).
Eventually network-connected shit will piss off enough people to do a combination of all of the above, without the need to open source and deal with it yourself.
You'd probably need to put the source code/keys in escrow. With legally mandated escrow, you can go after the owners/executives if they're not compliant.
Businesses are more than some source code. Even if you forced a company to release source code, you’d need to have access to the cloud domains and be able to have someone set up and operate servers at those addresses for everyone to use.
Forcing companies to forfeit their IP to public domain is an idea that appeals to tech people who imagine they’re going to reboot the entire service themselves, but it wouldn’t actually solve these problems in real life business cases.
It’s also effectively an appropriation of the company’s IP. Any country that introduced legislation that mandated companies run products for 7 years or completely forfeit all of the IP would immediately see development and production of that product move to other countries without such laws.
These ideas get floated on Hacker News but they’re completely unrealistic.
> you can go after the owners/executives if they're not compliant.
Holding startup founders personally liable if their product or business fails before 7 years and their business can’t be open sourced would create a very bad business environment, to say the least.
This also ignores the real problem that parts of source code are frequently licensed, not owned. Companies can’t just open source everything because they’re often using some licensed software or libraries.
This entire plan would make it effectively illegal for companies to develop or release a product which contains code they licensed from another party.
The number of issues with these kinds of ideas are numerous. It’s the domain of Internet fantasy, not actual policy.
> This also ignores the real problem that parts of source code are frequently licensed, not owned. Companies can’t just open source everything because they’re often using some licensed software or libraries.
Regulation can require that sublicenses extend to private individuals for repair of devices or services that are EOL. It could even limit the versions covered and commercial use.
Regardless, perfect doesn't need to be the enemy of the good. As it is consumers are increasingly powerless and landfills are overflowing.
What you’re suggesting is a ban on licensing closed source software.
Any country that did something like this would see every hardware company immediately relocate to a different country that didn’t have such weird laws.
Regulation like this is how you completely crush a country’s tech industry.
> landfills are overflowing
Tech waste is a tiny fraction of all garbage. People throw out more household trash in a week than they discard in tech devices over many years.
You have very quickly glossed over why the tech people would be unable to reboot the entire service, and immediately jumped to hand-waving about "stealing the IP". The company is bankrupt, it does not need the IP. And the customers still need the product that you promised to support. And if you have not promised, you should be forced to promise, otherwise what's the point? It's like selling air that can escape at any time. Nope. I'll steer away from those businesses like I always have, and like most sensible people do.
> Holding startup founders personally liable if their product or business fails before 7 years and their business can’t be open sourced would create a very bad business environment, to say the least.
Maybe. That's a conjecture; a hypothesis at best. At the same time it might introduce pressure to have the licensing craziness tamed and streamlined. It's way too complex and hinders innovation, as you half-alluded to.
> The company is bankrupt, it does not need the IP
I think you have a fundamental misunderstanding of how bankruptcy works if you don’t think the IP is important in bankruptcy.
> Maybe. That's a conjecture; a hypothesis at best.
It’s not conjecture, it’s simple fact.
If you introduce laws making founders personally liable if their product fails within 7 years, you create a hostile environment for startups. It’s very simple.
The legal aspect is not interesting to me. I am more interested in "What are you going to do with that IP?".
Citing legalese is not relevant here, we all know how the situation came to its current state, the real interesting question is how can things improve.
Right, this is why I’m trying to tell you that you don’t understand how bankruptcy works with tech companies. The IP is an asset which is sold off to recoup money, which goes toward the company’s creditors.
If you introduce a law like this, you effectively remove that IP from the company’s assets. Tech companies are valued largely by their IP.
No sane investor or startup will headquarter themselves in a country that has a law forcing their IP into public domain.
Investors won’t invest in, and banks won’t lend to, companies developing tech IP if there’s a law on the books that says they can’t sell that IP.
> The legal aspect is not interesting to me.
I can see that much. :)
The legal aspect is all important. Waving it away is ignoring the entire issue.
> the real interesting question is how can things improve.
The proposals so far would do nothing other than discourage development of new products in that country. Hardly an improvement.
No founder is going to pay too much attention to what happens if they go bankrupt. They will be focused more on succeeding. Anyone that takes this into account probably deserves to go bankrupt.
Unproductive discussion, I see. Seems like you are a protector of the status quo. To me it's blindingly obvious things are stuck and should be shaken a bit.
This category of consequence would solve so many problems. Think of how much everyone's lives would be improved if stealing wages was a crime with actual repercussions.
Threatening to jail startup founders if they can’t open source the entire code for their product if it fails is one of the more out of touch ideas I’ve seen on Hacker News lately.
Many products are developed with software licensed from vendors, which cannot be open sourced by the startup. They don’t even own the IP, just a license to use it.
Maybe on bankruptcy, any source code or data should not be part of the sale of the company to potential buyers, but first examined to see if it should be owned by the customers instead. Also relevant to 23andMe.
Many hardware products are built with licensed code. It cannot be released because the company doesn’t own all of it.
There’s also more to a product than just some source code. You’d need the cloud domains, the private keys and certificates, and other bits. Releasing many of these to the public domain wouldn’t solve anything, because you’d need someone to operate the cloud servers at the known address and to be trusted with the private keys and certificates.
All of this quickly begins to look a lot like requiring someone to operate the business again, which is obviously a silly thing to require in bankruptcy proceedings.
Forcing companies to release their IP if they go bankrupt is equally silly, because it renders the value of the company’s IP as $0 before anyone can be paid out.
The only thing this would incentivize is for companies to either relocate to countries with sane laws, or to shut off the servers and avoid bankruptcies by keeping the company technically alive but doing nothing other than staying as a registered business. It would act as a holding company to keep the IP in case someone wanted to buy it.
Eh, that's like talking about the viability of computer products... Yeah, many will fail. And many will prevail in sizes you can't even imagine. There's nothing about AI that would tell you which company is going to be which.
No, I'm waiting for "AI" to be more than super-niche useful. I am saying that way too much money has been poured in for the fairly underwhelming results that we get.
Software engineering is a huge industry in which AI has produced enough results to make it all worthwhile.
Image generation is another. I personally know a business owner - it's an online grocery store - who used AI generated pictures to reduce costs by 10s of thousands of Euro, it's what made the business possible. (The pictures are cartoonish, nobody thinks it's a real photo)
I view the time spent hand-crafting good code as a better investment than spending the same time (and sometimes more) carefully eyeballing tool output that is almost guaranteed to contain subtle mistakes, and correcting them.
I don't have this problem at all, and yet I have used Copilot daily since it was in closed beta. The output I get is very helpful, indeed sometimes wrong, but that is very much outweighed by the successes. I sometimes turn it off to remember how it used to be, and it's just terrible. It actually makes me and my team 10x devs.
The point is that the senior professionals who benefited from it created so much additional productivity, it doesn't matter that there is someone who can't use it well - that doesn't make AI useless or not worth it.
One piece of anecdotal evidence does not nullify the other, and vice versa. You have your experience, I have mine. I even qualified my statements -- one of the languages I use is not very popular and thus statistical models like LLMs obviously don't do well with it -- but you are happy to ignore that and keep arguing that your experience is the prevailing phenomenon, which I'll always disagree with.
I’d like that idea, but I feel like more would have to go into it. Things like user-accessible updates/reflashing, or someone to care enough about the software to fix it for use by everyone else. Support requirements + this are probably a better way to go.
This sounds far-fetched, but we do have precedent for companies paying for future contingencies up front. The first example I can think of is unemployment insurance. It’s a payroll tax, so even if the company goes broke and has to fire everyone, the state has some cash on hand to make the employees whole.
Something like that could be done here. If you make products that don’t last a reasonable amount of time or you pack up in the middle of commitments then there’s some way for your relevant corporate assets to be transferred to another entity that will take up support.
In this case it makes a lot of sense to open-source the product, at least the non-proprietary parts. It would be extremely cool to provide a build process, a framework and drivers for each actuator/sensor. From there these devices could live a new life quite easily.
This is one law that would stand an exactly zero percent chance of ever passing the US congress. For the simple reason that those who would stand to potentially lose money over such a law own the law makers.
I'm sorry, but the legislative branch isn't of or for you. It is of and for the companies that pay for the congressional campaigns.
> any cloud based device is subject to the health of the company and LLMs are not cheap to run
Perhaps the technical angle to this story is the promise of edge ML. If your language model runs on the device, your cloud inference costs go to zero and the device works as long as it has power.
Aside from the financial benefits, there’s a huge privacy upside as well since no audio or text is sent over a wire. Might be notable for a children’s toy.
Of course, this is very difficult for large companies and VC-backed startups to care about because 1) it involves hard technical problems rather than API calls and 2) as long as you can keep asking for money the inference costs don’t matter and 3) there are no criminal consequences (prison time) for privacy violations.
I don't think edge ML is competitive for now. It can do simple things but not the big beefy LLMs. State of the art AI chips are so expensive that you can't afford to idle them. They need to be M:N shared - M chips for N users so they have maximum utilization, and that fits perfectly for the cloud.
However, there is a middle ground: pluggable AI could potentially be a thing. The device would use an open protocol to access cloud AI. If the original company goes bankrupt then someone else can implement the protocol and the devices can be repointed.
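To make that concrete, here is a minimal sketch of what a "repointable" device client could look like, assuming the device spoke something like the widely cloned OpenAI-style chat-completions HTTP protocol; the endpoint URL and model name are placeholders, not anything Moxie actually uses:

    import requests

    # The only vendor-specific piece is this URL. If it lived in a settings file
    # (or could be overridden at the DNS level), "repointing" the device after a
    # shutdown would be a configuration change, not a reverse-engineering project.
    ENDPOINT = "https://llm.example.com/v1/chat/completions"  # placeholder endpoint

    def ask(prompt: str) -> str:
        resp = requests.post(
            ENDPOINT,
            json={
                "model": "any-compatible-model",  # placeholder model name
                "messages": [
                    {"role": "system", "content": "You are a friendly companion robot."},
                    {"role": "user", "content": prompt},
                ],
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    print(ask("Tell me a short story about a brave robot."))

Any server that implements the same request/response shape (a hosted API, a local llama.cpp or Ollama instance, a community-run replacement) could then stand in for the dead company's backend.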
Yeah, definitely, especially for more complicated things like having a conversation. The state of the art will probably always need a server.
For simpler things, small models can definitely handle them. Transcription, object detection, simple classification tasks. I expect more and more to fall under the category of “things which ML can do on $X of hardware” as hardware and software get better.
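As a rough illustration of the kind of thing that already fits on-device (not something this particular toy is known to ship), a small open speech-to-text model can transcribe locally with the openai-whisper package; the audio file path is just a placeholder:

    import whisper  # pip install openai-whisper; inference runs locally after a one-time model download

    # "tiny" is the smallest Whisper checkpoint (~39M parameters) and runs on CPU,
    # so no audio has to leave the device.
    model = whisper.load_model("tiny")

    result = model.transcribe("recording.wav")  # placeholder path to a local audio clip
    print(result["text"])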
In spite of my hugely biased pro-privacy stance, I'm skeptical that real privacy is a feature or selling point that can contribute to the financial interests of corporations enough to consider, broadly speaking.
To the corporations, data is the new oil, and to the vast majority of consumers, there's this very defeatist attitude around privacy, something like "I already don't have any privacy, what's one more recording device going to do?".
I think we should not expect privacy to meaningfully improve until the gap between end-user perceptions of the value of their privacy and corporate perceptions of the value of their customers' (or users') data shrinks, and I sadly don't see much hope here at all. In fact, I don't think a substantial plurality, let alone an outright majority, even has the technical aptitude to critically evaluate the accuracy of corporate claims of privacy, like those misleading claims of privacy offered by Apple.
Humans are highly adaptable creatures, and sadly, I think most have settled quite comfortably into the panopticon that modern society has mandated for so many of us, embedded deep down in the terms of service that everyone agrees to but never reads.
>> I think we should not expect privacy to meaningfully improve until the gap between end-user perceptions of the value of their privacy and corporate perceptions of the value of their customers' (or users') data shrinks,
Do you see this as a generational thing? I remember working for wireless companies in the 1990s when GPS got huge and companies wanted to start using tracking apps for their fleet management, and every company we talked to refused to install it because of privacy concerns and the drivers (and sometimes unions) being 100% opposed to it.
Now? We have what? Two to three generations who have never valued their privacy enough to really do anything about it. While I agree with your assertion, I wholeheartedly believe the road back to people seeing privacy as important may have effectively died with the Gen Xers.
It’s a bit more nuanced than people simply not caring. All things being equal, people will choose not to be spied on. Apple showed that pretty conclusively in my mind with the “allow app to track” toggle.
If your generational hypothesis is correct we should see that older people are less likely to allow the app to track. I personally doubt that but I don’t have the data. More likely is that young people are more likely to use newer technologies at all, and those technologies have other side effects.
People care, imo. Companies know this so they make it a pain in the butt to opt out. Plus, the harms are very abstract and rarely materialize. So unless a person has a lot of time to spend configuring everything, they usually don’t waste time turning tracking off.
I agree, I don’t expect privacy to be popular enough to drive the R&D. But models small enough to run on-device will be a byproduct of other cost cutting, which means we will be able to turn off telemetry via other means or even develop more privacy-conscious devices.
For the layman, this will allow them to bypass a monthly cloud subscription fee which is a killer feature.
On the one hand, that's sad. On the other hand, what a great way for both parents and children to learn about the idiocy that is IoT in the Clown. And oh boy do people need to learn!
It could be worse. Imagine that the children had to hand over private information to use the toy, and for that entire dataset to be leaked because the toymaker knows nothing about security. Oh, that already happened: https://www.bitdefender.com/en-gb/blog/hotforsecurity/vtech-...
Or imagine the toy had cameras and microphones uploading continuously to the Clown, then "hackers" released all of it.
Or an attacker gaining control of a specific device to make kids do stupid/bad/illegal things.
We can teach children not to trust a stranger offline or online. How do you teach them to be on guard around an interactive emotional support robot friend that's a parental gift?
I'm a former hardware engineer with a lot of ties to communities that include handicapped children. Happy to help reverse engineer and resuscitate these as a side-project if I can get my hands on a "dead robot".
Maybe this is the straw that breaks childhood's back, but they've said the same about every boogeyman since the 19th century. "The real problem is that parents are giving children [books → radio → comic books → rock music → television → video games → D&D → rap music → computers → internet → smartphones → social media → toy that uses AI], not [the actual problem]."
The truth is probably somewhere in between "social media/technology is the cause of all problems" and "social media/technology causes absolutely zero problems that wouldn't be caused anyway"
For sure, there are problems ascribed to all of these things consumed [ignorantly | irresponsibly | in excess]. Still, I've lived long enough to see many of these featured in hysterical "for the children" propaganda, and I find myself recoiling from that, maybe more than most. It's easy to see that AI (LLMs) are next on the list to be vilified, which seems absurd to me.
First, reasoning by analogy (or dismissal by analogy) is a poor way to reason. Second, that progression is already part of the problem in a way -- at least part of your progression is one of technology, which is alienating and isolating.
So, I disagree that it's any straw. In fact, I'd argue it's the reverse: AI has just revealed and accentuated the real problem. Social media, smartphones, the internet, computers, video games, etc. were already some kind of problem, and it's one of magnitude, not some binary condition.
If you're not so quick to leap to the thought terminating cliche you might stop to think that yes, those transitions did change us. And how might we change next after we outsource thinking and socialization to automation rather than just consume media in a different format?
Ultimately, the parent has full responsibility for their kid, because they have way more involvement in the development of the "core" neural networks of the child.
Yes, I’m sure that’s exactly the lesson that autistic children will take from this. That won’t be hard to explain at all. Publishers are already rushing to make a children’s book on the perils of trusting SaaS with your friendship.
Let’s be real: This is at best a lesson for the adults, and not one they’re concerned with learning right now.
If you think humans getting emotionally attached to robots is bad, wait til you hear about the type of stuff that stems from humans getting emotionally attached to other humans
I find the video in the article really interesting. This person is clearly an adult and is crying while discussing the robot's future "death" with it. She's clearly quite attached despite the robot being quite inhuman with sub par text to speech.
This company is a take-the-money-and-run type of operation. No subscription plan, and I doubt the API fees were eating a huge part of their obscene profit. They would have eventually, but clearly they were not going to wait that long.
Well yeah, I would cry if one of my stuffies got destroyed and it's even less animated than the robot. It being less animated actually helps the attachment because it's more like a pet than a person.
Parents who can afford $800 toys for small children, made by startups that have not quite made it, can probably get them the therapy they need to get over this.
I support jail time for cases like this, but with mandatory physical consequences (like a daily beating with a stick).
Since that is kind of unrealistic (sadly), here's a more realistic idea:
In case a company goes bankrupt and bricks its cloud-dependent products, I, as the owner of a (now defunct) product, must become a co-owner of the company. By extension, when the company holds licenses for software (e.g., Qt), those licenses would transfer to me as well. This would grant me the right to receive a copy of the source code and build it myself. With access to the source code I (and everyone else) can easily change any hardcoded DNS name. And even without changing it, everyone can run a Pi-hole, so why not add a special case for their domain to point at my server? (I don't use Pi-hole but I guess it has that option.)
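For what it's worth, that option does exist: Pi-hole sits on top of dnsmasq, and a dnsmasq-style override can answer queries for a single vendor domain with the address of a server you control. The domain and LAN address below are made up for illustration:

    # Hypothetical override: answer lookups for the vendor's API host
    # with the LAN address of a replacement server.
    address=/api.robot-vendor.example/192.168.1.50

Of course, this only helps if the device's firmware will actually talk to whatever answers at that address (no certificate pinning, no undocumented protocol), which loops back to why source access matters.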
> I support jail time for cases like this, but with mandatory physical consequences (like a daily beating with a stick).
I hear you, but I think corporate dissolution is the one case where no one can really expect a device to remain supported, so punishing that will do no good. Punishment only makes sense when you have an ongoing viable business that drops support for no good reason (like the Spotify Car Thing).
IMHO, for dissolution, it makes more sense to require open-sourcing all device and server code if there's no entity willing to take on support.
For an ongoing business, there should be some onerous punishment if they decide to brick a connected device too early (e.g. before 5 or 7 years after the last version sold), and if they decide to brick it later they need to open source the related code.
As someone with a kid, I feel bad for these little ones. When a stuffed animal gets beat up or a toy gets physically damaged, it's easy to explain to them what happened and why. When a pet dies, it sucks, but at least it's an opportunity for them to learn about life and death. Good luck as a parent explaining to a kid that her beloved friend is going to stop working because some company far away screwed up, they don't care, and they designed the thing stupidly to only work as long as they were perpetually in business. Buyer beware again and again.
As someone with a kid I feel bad too, but in response to your point, if your kid weren’t developmentally aged to understand what was happening here, would explaining a pet death be so much easier to handle in comparison? I’ve had to explain to a toddler in active crisis that his toys have run out of batteries, went missing, and/or otherwise stopped working… It’s never fun, but putting the kind of existential crisis of owning hardware that is dependent on cloud based services onto your kid seems either sufficiently advanced or totally unnecessary.
That sounds like a good life lesson right there.
There's going to be sooo much stuff that's set up that way in their future unless they don't buy into it.
Well, just like the robot is a simulacrum of a friend, the cloud disconnection can be a simulacrum of death, no need to explain to them about LLM tokens and their costs and how the MBA's costs-profits graph ended up not being sustainable for this shitty company.
1. If you have $800 to spend on an emotional support robot — OR you’re dumb enough to spend $800 on an emotional support robot, I don’t see this as a big loss.
2. Never spend money on multiplayer-only video games. Never buy a robot that is 100% cloud connected — ESPECIALLY from a startup. This is the same concept stated twice.
3. You buy things for what they are today; NEVER for what they might be tomorrow.
Honest question: but what can a company really do if it goes bankrupt and shuts down, since there is no money to pay for infrastructure and workforce?
Of course, they could probably release the code as open source. Still, someone needs to run the service. Also, if the code contains intellectual property, wouldn't existing investors force the company to sell the IP rights in case of a bankruptcy?
I am not taking sides here, just trying to understand what options a company (any company) could explore in this case.
If you sell a product with a promise to run servers for N years, you put a deposit to run servers for N years, or you have a contingency plan to let a third party run it for N years after you went out of business.
If you can't deliver on a promise and didn't make a good faith effort, maybe you committed fraud.
I have no idea about bankruptcy law -- does the company defaulting on their contractual obligations make their customers their creditors? Maybe the customers are entitled to get rights to whatever code the company had and is free to run it at their own expense. Or maybe nobody really cares enough about 800 dollar teddy bears.
Allocate money for 5-10 years of support for this contingency beforehand, as soon as the seed round is secured (so in this case: in the past). $800 is an okay investment for a household item that lasts 10 years. For 2-3 though, or even 5? Not at all.
Sounds like you can't sell the product then. Maybe the product is bullshit and it should not be on the market. General sentiment in the comments says the product is bullshit and can't work. Maybe it's correct. Or maybe you should be smart about it somehow and invent a contingency plan and legal structure that doesn't involve putting that much money into escrow. And anybody who is that smart will become rich.
Imagine you have 10k and want to become rich by building a luxury apartment building. Can you start selling property titles without first having funding? Can you raise capital, sell not-yet-existing properties, then close up and expect everything to be okay?
A fair amount of the shittiness comes from people actively enabling that shittiness, such as by throwing large sums of money at fly-by-night companies selling products dependent on cloud services and then getting mad when the fly-by-night company goes under and their product stops working, and then voting for politicians who oppose any regulation that would curtail such behavior.
When will people learn about cloud dependent gadgets. And if this thing already cost $800, how much more would it have cost to put some basic AI into it for use as a fallback? Can't you run stuff like that on smartphones these days?
That EU legal guarantee is only between seller and consumer. The manufacturer does not have any obligations under it.
In this case, it likely would come down to the “If this is impossible or the seller cannot do it within a reasonable time and without significant inconvenience to you, you are entitled to a full or partial refund” part
It might be a bit more nuanced in the case of platforms like Ali Baba, that claim to not be actual resellers but only service providers to the actual seller.
I'm fine with that. Those children will learn valuable lessons about loss and death. The kind of lessons that having a goldfish pet will provide, but without killing a goldfish.
Or: "The robots will need to go into hibernation now in order to return to their far, far away planet." You are welcome, coward parent.
Let your kids do a hardware autopsy/reverse engineer/disassembly on it. Can play medics or just see how it looks/works inside. Not $800 worth but better than tossing it.
From the Octopus Poultry Safe to the Chicken Boy, to the Spoutnic that hassles them off the floor and into the nesting boxes, chickens seem amply served by robotic companions:
I'll take one of these units off someone's hands for $400, if they have it. I would like to disassemble one on video in a video marketed toward children.
If Dreamcast fans can find a way to make their consoles believe Sega's servers are still running and play long after support is over, I'm sure some hacker will come along and create the tools necessary to bridge the gadget to a desktop or something.
Yeah, be aware that many things you buy today depend on some cloud service that may go away.
For example, a computer system that routinely checks for updates will hopefully still function if the update server goes away. And you can always install another operating system.
Since I listen to most music via a streaming service these days, I don't consider the music owned and don't mind moving to another service (which I have done twice, losing playlists in the process).
The worst is "digital copies" of movies. I try not to buy these, but occasionally do. I'm pretty sure that Blu-rays I buy today will work until the media degrades, but have no such confidence in digital copies.
Yes. Not having the market validate this as a business, in a time when AIs are full of ideological biases, is good news. Incredible that parents would place this much blind trust in it. Remove the marketing gimmicks and it's like asking them to accept, in advance, a close 24/7 new friend for their kid before knowing whether that's the behavioral influence they want from that "friend" (interestingly, this brings up the alignment issue and, as a side note, makes us meditate on how aligned we are with our own friends and the friends of our kids, etc.).
So this is OK? Selling somebody an $800 thing that might stop working is fine and people should just get over it? This "business model" should not exist.