I don't think Kaedim set out to rely so heavily on manual human labor (although given the bad actors in the space, I would not be terribly surprised if they did) — their HN post from last year [1] seems sincere and driven by an interesting, personally-motivated problem. But the general trajectory this always seems to follow is:
1) Someone runs into an interesting problem that can potentially be solved with ML/AI. They try to solve it for themselves.
2) "Hey! The model is kind of working. It's useful enough that I bet other people would pay for it."
3) They launch a paid API, SaaS startup, etc. and get a few paying customers.
4) Turns out their ML/AI method doesn't generalize so well. Reputation is everything at this level, so they hire some human workers to catch and fix the edge cases that end up badly. They tell themselves that they can also use it to train and improve the model.
5) Uh-oh, the model is underperforming, and the human worker pipeline is now some significant part of the full workflow.
6) Then someone writes an article about them using cheap human labor.
Last point aside, this isn't a bad trajectory! You really can get to a point where you've automated most of your work, and there will (and arguably, should) always be some humans in the loop. And the manual work really can help you train the automation. But it's getting to that point that can be dicey, and that's why you have articles like these.
I agree. There are ways not to slide down the slippery slope. Open-sourcing the project at step 2 sidesteps the whole thing. Or being transparent and realistic about practical limitations.
But in large part it is the psychology of being an "ML/AI startup" that is the trap — thinking SaaS is about the software and not about the service. Then everything else is secondary to the holy algorithm. Manual human labor is seen as just a stopgap measure until the automation is perfected, and to acknowledge that at all is tantamount to admitting imperfection, and thus failure.
Theranos is an excellent non-software example. Presumably at some point in her life, Holmes really did want to make blood tests more convenient for patients. But Theranos' eventual obsession with the Edison device made them willing to sacrifice more and more on its altar — money, credibility, patients' safety — until it destroyed them utterly.
The difficulty is that in a VC world, admitting you will more-or-less permanently need humans in the loop kills margins and scalability. At Google/Meta, the number of SERP raters and content moderators is only a few tens of thousands, serving a population of billions - and that only after major success. But at Uber or DoorDash, every sale requires a human in the loop from the start. It's better to be in the first category than the second. As of now, AI startups are seen as belonging to the first, "pure software" category, meaning margins are expected to be sky high. Of course, the risks here are hardware costs (which will likely come down within the decade for reasonably useful and general models) and humans in the loop (which will likely be a permanent fixture given the hallucinations and opaqueness of the current generation of LLMs).
They set out without any ability to do what they claimed to do and lied about it to investors, customers, and the press. A 3D model outsourcing shop isn't going to raise a lot of investment money compared to an "AI" startup.
They talk about a Chinese studio in which artist trainees go through a three-month probation, lined up in neat little rows overseen by Western masters. The half who pass and get job offers are paid $11k USD a year to make models and animations for big-name Western games.
A reminder that this is very, very relevant when people say bullshit like "microtransactions are fine because games are so much more expensive now"
Nope: that hat they charge you $5 for cost them $200 in art time, and that's only because the artist was also setting up the pipeline to put arbitrary colors on the hat and sell those for another $5 each.
I don't want to start an argument, but I don't get the animus about things like this. They're selling an expensive entertainment product in a sea of entertainment products. It's not like they're selling food, medicine, housing, or something like that, and even then, I encounter far fewer people rankled by the existence of, say, high-end restaurants they'd rather not pay for. Why not just vote with your feet?
I'm not sure how to properly interpret your comment? Are you saying microtransactions are still evil because they are profitable?
While I think $11k is a pitiful wage (but apparently somewhat normal in that city), it at least gives working artists a paycheck. Isn't that a good thing?
Rich westerners entertaining/decorating themselves on outsourced art from poorer countries... yes, it's vaguely colonial and kinda uncomfortable to think about, but is it really that different from buying woven goods or coffee or phones or whatever? At least in this case, the artists work in a relatively comfortable office and not in a field or sweatshop.
As a web dev in the US, I get paid much more than that, but at the end of the day I'm still just a minion for someone else's capitalist machine. Unfortunate, but still beats being a peasant...
I remember when I worked for a big advertising agency, we made a campaign that used "AI" to find locations for rail travel within the country that looked like far-away vacation spots (think a beach on Bali, for example).
The whole case for the campaign was that it used AI to find these places, while in actuality we had a group of interns manually picking out the places and art directors touching up the pictures for the campaign. We called it the "human API".
Recode has learned that Amazon has staff on call behind the scenes to assist the computer vision system that is supposed to detect which items a shopper pulls off a shelf and carries out of the store.
An Amazon spokesperson confirmed the setup and said that Amazon staff is asked to help out when the system used in the new Amazon Go store can't make a determination.
I work in the industry, and the 100% success rate touted by these companies just doesn't exist in live, in-person, real-time inferencing. We will get there one day, but in many cases "magical" computer vision AI has humans verifying and correcting AI outputs.
Imagine Amazon flexing that the stores were fully automated before "Track Everything Everywhere All At Once" was written.
It would be an issue if they were lying about how often humans were needed, but as far as I can tell they’ve never claimed anything like the “100% success rate” stated in the OP’s post.
I do wonder why people automatically assume "AI" means "100% AI".
Maybe we need to acclimatize. We all know when a company claims their soda is the tastiest or that their cars will make you cool, it's marketing fluff. Or that packaged food has to have some amount of preservatives. We should just assume that "AI" means "AI system", one that involves humans for software maintenance, taking care of edge-cases, and providing training inputs.
It's BS. I have friends on that team, and the tech works (it's very expensive and doesn't scale too well, but it works a hell of a lot better than having some Indian dude squinting at security camera footage)
I'm sure your friend is under many NDAs, so they understandably might or might not be comfortable discussing implementation details. But as of 2017 Amazon admitted they were using humans in the loop for Go; you can ask your friend what year they stopped.
You basically implied that the whole thing is a scam and doesn't work at all; now you're walking back your initial egregious slander to a more defensible position.
I've used Kaedim a lot and it is great. We've known pretty much from the beginning that it is people, or mostly people. But I'm not sure why that would be a problem for anyone. I'm sure they try to automate as much as they can to save cost.
Mostly because they positioned themselves as a revolutionary AI shop that has cracked 2D->3D mapping with AI. They are not branding themselves as an outsourcing shop, but from the circumstantial evidence presented in the article, that is what they are.
I think this is another case of a "fake it till you make it" attempt. While legally it might not be in the same category of fraud as Theranos, it nonetheless feels pretty fishy when there is a literal man behind the curtain moving the chess pieces.
It's pretty common practice to start full manual and then automate pieces of the process as you go with a human-in-the-loop, often with edge cases still being full manual forever as the cost of labor < cost to engineer out of all edge cases.
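That pattern (automate the common path, keep humans on the edge cases) can be sketched as a confidence-threshold dispatcher. Everything below is a hypothetical stand-in for illustration, not any real product's API:

```python
# Minimal sketch of a human-in-the-loop pipeline: the model serves
# high-confidence requests, and everything else falls back to a manual
# review queue. `run_model` and `queue_for_human` are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Result:
    output: str
    confidence: float

def run_model(request: str) -> Result:
    # Stand-in for the actual ML inference call.
    return Result(output=f"auto:{request}", confidence=0.5)

human_queue: list[str] = []

def queue_for_human(request: str) -> str:
    # Stand-in: in practice this would feed a review tool or task queue,
    # and the corrected result could later become training data.
    human_queue.append(request)
    return f"manual:{request}"

def handle(request: str, threshold: float = 0.8) -> str:
    result = run_model(request)
    if result.confidence >= threshold:
        return result.output          # automated path
    return queue_for_human(request)   # edge case: stays manual, maybe forever
```

The economics live in the threshold: lower it and margins improve but quality risk grows; raise it and the "AI" is quietly mostly people.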
I'm unsure if they're claiming "It's 100% automated now" vs "Our goal is 100% automated".
If it's the former, then yea that's dishonest, but the point is whether it's scalable now / likely possible eventually. I could imagine that being the case.
Seems like the investors are the ones getting misled. For the customer, it shouldn't matter whether the image comes from a human or an AI, as long as it suits their business need.
I've used it to make one of my 2D original characters into 3D; it's not there yet. Of course, this was about a year ago. I think they still have my 3D object on file.
[1] https://news.ycombinator.com/item?id=30552988