Oh the interesting part is “our AI could not interpret images of common objects at unusual angles”.
Now that’s fascinating - why not? Is computer vision just boring pattern recognition with no real "concepts" underlying it - and if so, is 90% of the AI hype false?
There are cases where AI can recognise gender on an X-ray when humans can't, or find tumors that experienced doctors can't. This must mean that human doctors looking at X-rays use just boring pattern recognition while AI has actual concepts of what it's seeing.
But does it really? Or is it more observant than a human doctor and more thorough, but only at the limited task of deciding if this X-ray looks like the million other X-rays of a male abdomen versus the million X-rays of a female abdomen.
I assume counting the number of ribs is not what is meant …
“We found that even state-of-the-art models which are optimally performant in data similar to their training sets are not optimal — that is, they do not make the best trade-off between overall and subgroup performance — in novel settings,” Ghassemi says. “Unfortunately, this is actually how a model is likely to be deployed. Most models are trained and validated with data from one hospital, or one source, and then deployed widely.”
It's simple math. If the correlation between gender and sex is 0.99 - i.e. they agree for roughly 99% of people - then a method that can determine your sex with, say, 90% accuracy can determine your gender with about 89% accuracy. The difference is negligible.
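A minimal sketch of that arithmetic (treating both as binary labels, and reading the 0.99 as "gender matches sex for 99% of people" - my reading, not an established figure):

    # Back-of-envelope check, assuming binary labels and 99% agreement
    # between gender and sex. A sex prediction yields a correct gender
    # prediction when it's right and the two agree, or when it's wrong
    # and the two disagree.
    p_agree = 0.99      # P(gender == sex), assumed
    p_sex_ok = 0.90     # model's accuracy at predicting sex

    p_gender_ok = p_sex_ok * p_agree + (1 - p_sex_ok) * (1 - p_agree)
    print(p_gender_ok)  # 0.892, i.e. roughly 89%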
Mind that there's a big difference between machine learning (which these robots use) and generative AI, which is what most of the recent hype has been about.
ML is by now mostly a proven technique with known limitations, e.g. being unable to deal correctly with situations not present in the training data. Generative AI is an offshoot of this, where people largely seem to like pretending, for vague reasons, that those known limitations don't apply.
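A toy illustration of that limitation (a sketch with invented numbers, not anything from the article): train a classifier on data from one "site", score it on a shifted site, and accuracy drops even though nothing about the model changed.

    # Minimal distribution-shift demo; all numbers are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_site(n, shift):
        # Two classes separated along one feature; `shift` moves the
        # whole site's data, mimicking a different scanner/population.
        y = rng.integers(0, 2, n)
        X = (2.0 * y + shift + rng.normal(size=n)).reshape(-1, 1)
        return X, y

    X_train, y_train = make_site(5000, shift=0.0)  # "training hospital"
    X_test, y_test = make_site(5000, shift=1.5)    # "deployment hospital"

    clf = LogisticRegression().fit(X_train, y_train)
    print(clf.score(X_train, y_train))  # high in-distribution (~0.84)
    print(clf.score(X_test, y_test))    # markedly lower under shift (~0.65)

Same model, same weights; only the data moved.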
What? Stable Diffusion doesn't have an underlying understanding, gathered from a vast sea of training data, that humans typically have two arms, two hands and five fingers per hand? That's a bold statement.
IIRC it’s a debate as to the difference between two stories. One: 99% of the time the model predicts the next pixel will be fleshy and the pixel next to it will be background, thus making something that looks fingery (and so, when presented with an odd angle, that 99% drops crazily). The other: somehow an executive function has evolved that holds a concept of a finger, with movement, musculature etc.
It’s the “somehow evolved” part where I have my concerns.
Predictive ability based on billions of images - sounds good. Executive function - how does that work? But at some point we are playing “what is consciousness” games.
Would love to hear more rigorous thought than mine - any links gratefully received :-)
I actually agree with you; I was being a bit sarcastic. If I understand correctly, there isn't a fundamental difference between text output and pixel output in this context. If so, then it suddenly sounds like much more of a stretch (intuitively) to claim that Stable Diffusion somehow understands the real world (as people claim is the case with language models).
Just to clarify: the photo and audio collection isn't related to the mentioned security flaws. These are two separate issues.
> Ecovacs robot vacuums, which have been found to suffer from critical cybersecurity flaws...
> An Ecovacs spokesperson confirmed the company uses the data collected as part of its product improvement program to train its AI models.
Is it really completely legal? That would surprise me. And, of course, if it is, it shouldn't be.
I mean, in the end it's just how you frame it, so surely there must be a viable class-action lawsuit in here. When a teenager playfully hacks into someone's completely unprotected IoT device that anyone could walk into, he is breaking the law for some reason. When your business is not doing anything "wrong" itself, but provides a technical service targeted at businesses that actually produce malware and do harmful stuff like that, you can end up in prison for life. So surely there must be a way to frame this kind of thing as criminal activity too.
Here's a title you can reuse freely for the next decade or so.
(Startup/public/private-equity-owned) company's <IoT device> collects data you don't want it collecting, uses it for profit to your detriment, and isn't secured at all because they don't care.
Each time it happens, it needs to be news to name and shame the companies. Unfortunately, once you've bought the product, it's game over for privacy. So this info needs to be explicitly available for each product/company so that when future buyers are researching, they might be able to stumble upon these articles.
Product reviewers need to explicitly state that the cameras/mics/whatevs of devices have been used for nefarious purposes other than what is advertised on the box.
But we should not just sweep everything under the rug because a couple of nerds "know about it" - there are a heck of a lot more people who do not.
Name and shame doesn't work in that it doesn't stop the next guy, or even the current guy. It does at least make the information available to those that care. If you don't care, great. Continue to live with your head in the sand. If you do care, at least the information is available for you to make an informed decision.
If we do nothing because it "doesn't work" in the manner you think fitting, then we'll make no progress. It's yet another example of choosing between doing anything and doing nothing because the perfect answer isn't available.
I do not use Microsoft products. I do not use Google products in my personal life. Others do not use Apple products. So for some people, it absolutely does work. I don't shop at WalMart, and am damn near Amazon free too.
A new sucker is born every minute. If the only time the name-and-shame is mentioned is when it happens, then those newborn suckers will potentially never hear about it.
Why do they preach to the choir? Because that's how you get them to sing.
Tech progress at its finest. I stick to my 90s-made fridge, similarly vintage washing machine, non-smart vacuum and non-smart microwave. All solved, sturdy appliances. Cheers.
At least I know I'm right to avoid anything with a camera on it. You're not crazy if they really are after you. I also try to avoid Chinese products, but we all know that's not completely possible anymore.