AI Company Accused of Using Humans to Fake Its AI (sixthtone.com)
246 points by ytch on Sept 22, 2018 | 114 comments



A poorly-kept secret in Silicon Valley is that many "AI startups" founded over the last few years are actually using humans, with a long-term plan to slowly decrease the amount of work the humans do, using algorithms (generally based on machine learning, but not always) as well as more basic automation. In some cases they don't actually do any "AI" work for the company's first few years, and if they do, it's more likely a quick-and-dirty logistic regression than what most people think of as "interesting AI".
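To make "quick-and-dirty logistic regression" concrete, it's often not much more than the following (a minimal sketch on made-up data, not any particular company's code):

    # Minimal sketch: a "quick-and-dirty" baseline, assuming you've already
    # logged some labelled examples from the human-powered version of the product.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))                 # 1000 logged interactions, 20 hand-picked features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in for labels produced by the human operators

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))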

The actual approach is a perfectly reasonable business strategy, because (A) it's really hard to work on machine learning without a lot of data, and it's often hard to get that data without actual users using the product in roughly the way it's supposed to work, and (B) the company can prove there's commercially significant demand for the user experience that they hope to eventually deliver with algorithms/AI technology, and thus can attract investor capital for making the thing actually work (potentially years later). In some cases, the human-powered product might actually be really good, and the economics might even work out.

But it really sucks that the general hype around AI has generated a whole category of "AI companies" that don't actually use any AI technology. The dishonesty is really poor (I had a friend, an actual machine learning expert, start a job at one of these companies, and only then discover that the company did not use machine learning, had no technical plans to do so in the next year, and actually just wanted another web developer... which obviously resulted in my friend quitting).

That said, I expect the trend to continue as long as companies get lots of extra press and investor attention for claiming to be an AI company.


"The actual approach is a perfectly reasonable business strategy"

Only if the AI is going to work for sure.

There was a UK company, SpinVox, that was big 10 years ago; they transcribed your voicemails and sent you a text. They raised huge amounts, something like 100M pounds. Granted, not AI.

Turns out they were doing most of their transcription manually with hopes of going automated.

It was a huge blow-up; it boiled down to fraud.

It's close to an Elizabeth Holmes-territory kind of storyline.

So it's possible to do as you say, but there needs to be transparency.

Also, AI is an approach, not a market ... so I think it's going to end up like transistors or something similar: the knowledge will be embedded within companies, and there won't be so many pure AI companies.


I don't think that there is a requirement of it working - just like there isn't a requirement of being successful.

However you have to be honest with investors, which I assume they weren't, and thus committed fraud.


> Only if the AI is going to work for sure.

On the other hand, I could definitely see this being used in mock-ups, just as you might populate some UI with "fake" data to illustrate proofs-of-concept. The transition between mock-up and working application might be fuzzy, and it's a lack of honesty/transparency in this transition that leads to murky territory like you say.


Very frustrating for companies who actually do use machine learning to solve a problem!

Hard to rise above the background "AI noise" of well-funded marketing.

And people are already fed up with all the AI marketing bullshit; "ebbing tide sinks all boats".

One way is to offer real-time results via APIs or demos, which cannot be faked with humans due to sheer latency / volume. Another is to be active in the ML community (open source etc) to build a reputation. But there the audience is different (technical), and may not overlap with the business case much.
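For example, a latency-based sanity check is cheap for a prospective customer to run themselves (a sketch against a hypothetical endpoint; the URL and payload are placeholders):

    # Sketch: time a batch of requests against a hypothetical inference endpoint.
    # Consistent sub-second responses over hundreds of varied inputs are hard
    # to fake with humans in the loop.
    import time
    import requests

    ENDPOINT = "https://api.example.com/v1/classify"   # placeholder URL

    latencies = []
    for i in range(200):
        payload = {"text": f"sample input {i}"}
        start = time.perf_counter()
        requests.post(ENDPOINT, json=payload, timeout=5)
        latencies.append(time.perf_counter() - start)

    print("median latency (s):", sorted(latencies)[len(latencies) // 2])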


In times of funding plenty the valley is awash with companies that are basically faking it. This was very common in the previous valley bubbles, the Real-Intelligence-Masquerading-As-Artificial thing is just one in a long line of semi-frauds that the valley tends to accumulate. In the 90s the flood of B2B companies was as crazy as the current flood of AI companies.

This will pass when funding gets tight again and/or the economy enters a recession. A very sharp recession is actually good for the valley because it helps clean out the fakers; it's no different than the first cold snap in the winter helping clear away certain pests.

(One takeaway from this is that if one wants to found a startup, and one is not a fraud, it's actually better in almost every way to do so in the first part of a recession: lower salaries, less grass-is-greener attrition, lower real estate costs, etc.)


It's crazy when you consider that Lean Startup advocates have actually recommended faking things in the past. For outsiders it's virtually impossible to tell if something is real or not.


There are two sides to this -- AI used as a marketing buzzword to attract customers, and AI used as a buzzword to attract investors.

For the customers, they need to be sure that the company is actually capable of solving the problem it claims to solve (before signing some sort of long-term contract that doesn't actually guarantee the claims the salespeople put forward). The customer's evaluation, however, is distinct from that of an investor.

It is the job of an investor to determine whether the startup has reasonable odds of successfully pursuing whatever it claims to be doing. If an investor can't do that, then for an actual startup doing machine learning, this is probably not the type of investor you want.

During the ICO pump and dump cycle one thing stuck out to me - nearly all of the ICOs were being run by people without any cryptography experience. Most had little to no software development experience. I'd suggest investors take their first step in evaluating "AI" startups similarly.


>Very frustrating for companies who actually do use machine learning to solve a problem!

I get that on a purely technical level, but really... who cares about the implementation? If a company is solving problems it really doesn't matter, and if they're solving it better using some mix of ML and human intervention... again; who cares?


Anyone who's betting on the growth of the company. Hiring more and more humans doesn't scale as well as automating tasks.


A company that needs to be protected from the market to be successful in this very same market, isn't this a fallacy?

Trying to create a startup that can't at least compete with the existing solutions isn't a very good business strategy in my opinion.

That's what academia is for. Research that is still too far away from being profitable but that has its merits in the long run.


The claim is that there may be a market for AI-powered technologies, but that market is suppressed by non-AI competitors exploiting information asymmetry.


Wouldn't that suggest however that AI is not the best tool for the job? AI should be the great equalizer if the tech actually was useful, but I have seen precious few examples. The usual triumphs are almost all actually human sourced and then delivered by algo. (google image, etc.)


No, the point is that dishonest marketing distorts the market. The companies use human labour to deliver useful services, but falsely advertise as AI-driven to capitalize on the novelty factor. This makes life more difficult for both companies delivering the same service, but advertising honestly, and for companies actually trying to deliver the product by means of AI. The result is that money flows to dishonest people, instead of honest people and/or people actually trying to push technology forward.


This is an issue if the non-AI company using humans is not profitable (supported by VC funding) with the goal of "eventually replacing humans with AI". If they are not transparent to investors about this point, then investors may have preferred to invest in a company that already has AI that works well enough (but not as well as the humans in the first company). Perhaps the first company will never be able to build the AI, leading to a market failure.


But they should be honest about their claims. Perhaps the customer is expecting that humans don't process their personal information on a routine basis.


Not really - Korea protected Samsung et al for years before they were capable of competing globally.


Also, many of the largest internet businesses in China started out as cheap imitations of western websites, but gained success as the western versions were blocked by the great firewall. The Chinese government has basically propped up their own industries by shutting out the west, and are now screaming about "protectionism" when the west is finally responding.


The Lean Startup approach specifically recommends doing something like this as a "concierge MVP" to gauge customer demand before actually building the automation technology. It's a general approach, not anything specific to AI. Of course the underlying assumption there is that the technology is actually feasible.


This is the main problem imo and the reason why people with real domain knowledge hate these companies. Most of the people funding, selling, or buying “AI” products have no clue what is feasible!


An investor in an absolutely terrible NoSQL company told me point blank that he knew the team were clowns who didn't know what they were doing but that, having raised a lot of money and gotten a lot of traction from people who didn't understand how bad the project was, they could afford to hire professionals from Oracle and other companies who would be able to turn it into something less broken.

I also happened to know one of the people they later hired, and right up to that point they were seriously enraged by this company's deceptive nonsense. The company didn't describe his job quite in the way the investor did, but yeah, that's pretty much what the job was. The compensation was very good.

In a sense people with strong domain knowledge and experience could view these companies as basically a way to outsource the two hardest startup filter functions (finding product-market fit, establishing timing) while at the same time preserving their option to join as highly-compensated ICs and leads.


The actual approach is a perfectly reasonable business strategy

Except for keeping this secret. Then it's called lying.


Looking around the world, lying seems to have been redefined as "a perfectly reasonable business strategy".

One of the biggest shocks I had going from childhood to adulthood was the sudden flip between people telling you that you have to be honest and people now telling you to stop being honest. And when you go back to the people who had previously told you that you must be honest, to ask what you should do now that the people who pay your wages are instructing you to lie, they tell you "oh yes, that's work".


Life is messy and there are grey areas. That doesn't mean that the abstract concept has no meaning at all; Theranos is an example.


Sure, but as a society we've already legitimized lying, in form of e.g. ad industry, and at the same time, we're in total denial of that fact.

I regret my parents never taught me we're living in a softly adversarial world. People are not going to kill you, but businesses are out to fleece you.


Tell me about it. That point when you realise that fiction and comedy really are not exaggerating all that much, if at all.

"The secret of life is honesty and fair dealing. If you can fake that, you've got it made." - Groucho Marx

See also: the appointed Liars of the Zoon tribe in Terry Pratchett's Discworld.


Theranos is an example of commercial dishonesty reaching the top of political power and even snaring people such as Mattis, who is possibly the only individual in the current US executive with any kind of reputation for being honest.

And the apparent message from that debacle isn't "be honest"; rather it is "being dishonest nets billions, just try not to get caught".


> The actual approach is a perfectly reasonable business strategy,

It is, as long as you have reason to assume that you will be ABLE to automate the tasks. For many of these tasks, there is NO reason to assume that.

My feeling with AI tasks is that either they work reasonably well after a short time or they will never work well. (At least not until the next AI revolution.)

If a company doesn't have a good enough AI system for their task after a couple of months of toying with it, I will be very sceptical that they will ever have one.


Do you have any specific examples of AI products that are allegedly secretly using humans (don’t bother dropping specific company names)? It strikes me as something that would only be viable in very narrow cases.

Anything customer-facing and low-latency, like human language translation, image classification, or speech recognition doesn’t seem viable at even modest scale. Anything involving large amounts of data, like analyzing financial data or server logs, seems potentially doable though still unlikely.

In general, it seems like the main advantage of AI products today is either latency of output or size of input, both of which seem difficult to fake by using humans. If a customer has a small amount of input data and wants a small number of results that are indistinguishable from what a trained human would produce, surely they would have no misconceptions about the current state of AI, and would just hire the human to produce it.

Of course a company marketing an online service that sells AI-generated musical compositions could cheat by hiring experienced human composers. But if there’s any money in such a service at all, I expect the customers would want to be able to generate a huge number of compositions, something that humans would probably not be able to do at any appreciable scale. If the customer wanted a single composition that sounded like it was composed by an expert human, I’m pretty sure they would just hire the human.

One thing I certainly could believe is common is human sanity checks on AI output, but that doesn’t strike me as bad or dishonest unless the seller is specifically claiming (for some reason) that no humans are involved. I could also definitely believe that companies fake AI demos for potential investors or future customers.

What types of AI products would be best-suited for humans to be able to viably fake?


x.ai is, or at least was, powered by humans.


humans pretending to be robots pretending to be humans


cough cough Palantir.


Curious, is what the article describes illegal? Or just false marketing.


Likely depends on whether the story is being told to customers/users or investors. As long as their service does what is advertised, customers/users would be hard pressed to prove any harm. Investors on the other hand have a reasonable expectation that the thing they are being told they are investing in is the thing they are actually investing in.


False marketing would be in and of itself illegal, if this happened in Hong Kong.


False advertisement is also illegal.


Then why doesn't my hamburger look like the ones in the ads?


In a proper democracy false advertising is or should be illegal.


Palantir has never claimed to do AI - just the opposite: that human+computer is more powerful than either separately in many domains.


nobody knows how to do actual AI because the complexity of the human brain with all of its sensory inputs/memory is light years beyond what we can create with chips and ram. even machine learning, which is meant to be a step below AI (machine...learning things), has been passed over in the hype cycle because most of it is just basic linear regression that saves a few companies 5% in costs per year, not a brand new field that grows revenues and creates businesses. if anyone has seen any legit AI companies/applications please point me towards them so I can regain some faith and perspective on what people are calling AI these days


Right. When some application of AI (voice recognition and OCR for example) starts to work properly, it stops being called AI. Almost by definition the field of AI is the field of unsolved problems.

When simultaneous translation becomes a product, its makers will stop claiming they do AI, and start claiming "we used AI to create this."

This is only a problem when credulous and powerful investors (a dangerous combination) get caught up in the hype.


Image recognition and manipulation is one of the few big consumer oriented applications I can think of. If you manage to do it in real time from a camera feed you got very high quality AR.


And faked live-feeds. But that is an inherent risk of technology.



>(A) it's really hard to work on machine learning without a lot of data, and it's often hard to get that data without actual users using the product in roughly the way it's supposed to work, and

Delusional; if it's really hard to get it to work then it's really hard to get it to work; investors should not believe "if only we get the data"

>(B) the company can prove there's commercially significant demand for the user experience that they hope to be able to eventually do with some algorithms/AI technology, and thus can attract investor capital for making the thing actually work (potentially years later). In some cases, the human-powered product might actually be really good, and the economics might work in.

Delusional - if the economics worked, everyone would be doing it and there'd be no money in it; the investors should only invest if they understand that rooms full of transcribers will be needed, with all the management, recruitment and security issues that brings.

This is what the last wave of voicemail transcription companies tried; they all went bust with losses in the scores of millions. Be warned.


>> Delusional; if it's really hard to get it to work then it's really hard to get it to work; investors should not believe "if only we get the data"

Not having sufficient data to train a machine learning algorithm is a legitimate concern. Statistical machine learning algorithms are very bad at generalising, so they need lots and lots of examples of every possible variation of a concept, in order to model the concept well.

That is why everyone's going gaga over "the new electricity" (as Andrew Ng calls it). So in some cases, yes, there are applications that are impossible to develop without "the data".


Our team is looking for machine learning experts and myself in general am hoping to build my network as I’m developing my startup around driving real machine learning adoption. Any chance you might see if your friend is interested in connecting?


I suspect this will continue until any such service can be successfully marketed on the actual merit of the actual service provided, without the hype (or even the mention) that it's an AI-based service.


It’s not a secret - it literally is what Peter Thiel said Palantir Technologies is: a combination of AI and humans. This combination is apparently really powerful.


Makes sense. A human reading a 1GB file cover to cover is pretty much impossible.

A human scanning a 1GB logfile for specific key words/phrases, with the machine doing the filtering, is a lot more doable.
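That division of labour is trivial to set up; something like this (hypothetical keywords and file path) narrows a 1GB logfile down to the handful of lines a human can actually read:

    # Sketch: the machine scans the full logfile, the human only reads the matches.
    KEYWORDS = ("error", "timeout", "unauthorized")   # hypothetical terms of interest

    def interesting_lines(path):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if any(kw in line.lower() for kw in KEYWORDS):
                    yield lineno, line.rstrip()

    for lineno, line in interesting_lines("server.log"):   # placeholder path
        print(f"{lineno}: {line}")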


Being in the public sector, I have the joy of sitting in a lot of meetings with companies and consultants that want to sell us AI for analytics and prediction. I’m still unconvinced.

The closest I’ve seen to anything useful was Watson, and all it really offered was stuff we’ve been doing for years. I mean, we have 5 people working with data and building models, and Watson could maybe automate some of that, but you’d still need people to qualify what it came up with and build the things it didn’t.

We’ve run proofs of concept on stuff like traffic, and a small startup with a relationship with a university built a route prediction thing that would make it easier to avoid jams. It worked, but it frankly didn’t work better than what our traffic department was already doing, using simple common sense. So we diverted the project to work on traffic light control for maximum flow, and it turned out that the settings our engineers had come up with were already optimal. Only our engineers had come up with the same results using fewer resources.

I’m beginning to think of AI as mostly hype. It does have some uses, like for recognition. We use ML and image recognition to trawl through millions of old case files for various purposes. We use speech recognition to help people with dyslexia.

But for analytics and prediction it’s been extremely useless compared to actual people.

I’d love to be convinced otherwise by the way. If it actually does something, I’d like to be on the forefront with it so I don’t have to play catch up when Deloitte sells it to the mayor. But I have to admit that after 5 years of AI bullshit bingo I’m one sales meeting away from labeling AI in the same category as blockchain.


I think it depends on what you call AI and where you want to use it. Many people focus on "automation" - doing the analytics that an analyst would otherwise do. But as you note, automation - whether workflow, analytics or chat - is mostly only able to do very routine tasks, and it's still quite expensive to deploy. Another issue is that often when you tackle these domains it turns out that a commonsense non-technical approach is almost as good. On several occasions I've seen it's just a case of "stop doing that".

On the other hand AI can be a powerful tool or assistant when humans are struggling with a task. AI that (for example) highlights all the suspicious places in an oncology image after an oncologist has had a look and labelled it as showing cancer could improve quality of outcome. I am deeply suspicious of people who want to chop the oncologist out though!

More prosaically prediction systems to estimate future customer churn, identify sales opportunities (or in the public service case demand for services and needs), future traffic flows, complex settings for complex devices (especially in the face of change), diagnostics of problems, reasoning over millions of records, are all valuable. Often the theme is "can we make a very narrow very specialist component that could come to the same conclusions that a reasonably smart three year old would if we could make her concentrate and do it millions of times". So a bit smart, very autistic, very fast.

What you will not do is create value from consultancy and procurement; you need to build a capability to get the value, and that is a slow, expensive and strategic game - think four high-cost FTEs for three years (but don't be bullshitted: there are great people who will work for reasonable amounts, so long as you don't expect them to be slaves, make bombs or crash the financial system - think doctor or lawyer cost, and recruit with the support of your uni).

If that is too expensive then I think it's going to be a waste of your time for now, and your current stance of sceptical openness is the right thing to do - eventually someone will come with a convincing shrink wrapped offer that is right for your domains, at the right price - then all is good.


In my opinion, we cannot compare blockchain and AI: blockchains are a type of data structure based on hashes, while AI is a wide and fuzzy concept with different meanings over the years.

Process automations and "AI" (let's say, complex autonomous systems) are parts of a bigger picture, which is the digitalization of society. Sometimes they are sold as a drop-in replacement for current human processes, but that actually works in only a few common cases.

The issue with AI and automation is not necessarily a technical one, the issue is the shift to a different ownership model, trust model, maintenance model and faster improvement model than the ones being believed possible, or currently in place for the last decades.

It's not that software cannot replace people, rather that big corporations are resistant to changes. We don't need to introduce AI to see the internal inefficiencies of big employers...

Technologies fundamentally change the meaning and content of "work", but people cannot fundamentally change that fast. Budgets cannot change that fast, investments cannot change that fast, regulations cannot change that fast, etc. There will be a lot of preliminary work to do in changing the educational system, regulations and also the redistribution of wealth, before "AI" actually happens.


I understood the OP as putting AI and blockchain in the same hype/bullshit trap category.


Yes.


Watson is also people. Granted, those people are data scientists who can actually build an AI solution for you, but it's still a stretch from the marketing.


The worst example I have seen in that regard was some kind of ride-sharing thing. The core function apparently was route optimization based on multiple destinations for multiple passengers. They aimed to offer 30-minute time frames for arrival and departure, in an environment where people complain about a 5-minute delay for buses. Quite unsurprisingly they failed in the end, but they still got funding.


> We’ve run proofs of concept on stuff like traffic, and a small startup with a relationship with a university built a route prediction thing that would make it easier to avoid jams. It worked, but it frankly didn’t work better than what our traffic department was already doing, using simple common sense. So we diverted the project to work on traffic light control for maximum flow, and it turned out that the settings our engineers had come up with were already optimal. Only our engineers had come up with the same results using fewer resources.

The AI came up with the optimal solution on an "already solved problem", twice, and this is a failure on the part of the AI?


It’s a failure if it’s more expensive. It wasn’t very good at the first example though.


> But for analytics and prediction it’s been extremely useless compared to actual people.

Usually the problem is that companies are going to sell you an AI solution for your data, but these companies are not going to tell you that with your current data there is very little an AI system can do. I say this because I saw companies that wanted a prediction system built on 2 years of data (40 datapoints per year). Any AI expert will see there is no way this will work, but the system is still sold to the company. This is much worse in the public sector.
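A learning curve on the customer's own data is a cheap way to expose this before anything is signed; here's a sketch with synthetic data standing in for the ~80 real points:

    # Sketch: with only ~80 datapoints, the validation curve is usually too noisy
    # to support any claim that the model will keep improving with "more AI".
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import learning_curve

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 5))                       # stand-in for 2 years x 40 datapoints
    y = X @ rng.normal(size=5) + rng.normal(size=80)   # synthetic target

    sizes, train_scores, val_scores = learning_curve(
        Ridge(), X, y, train_sizes=np.linspace(0.2, 1.0, 5), cv=5
    )
    for n, v in zip(sizes, val_scores.mean(axis=1)):
        print(f"train size {n:3d}: mean validation R^2 = {v:.2f}")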


We took this into account and worked with data from multiple municipalities. 25 years of data in a region with 1 million people is quite a lot.

The problem for ML seems to be that we’ve been using the data for a while and that we’re already very good at it.


Off topic, but SixthTone is an interesting publication. It is actually controlled by the CPC in an attempt to create news that westerners would actually read [1], as opposed to Chinadaily and Globaltimes that are too obviously propaganda. In this case, they go with China-critical articles and then try to direct the message to a pro-government position from that perspective.

This is actually how wumaos are supposed to work in China as well. If they go with a blatantly pro-government message, they will be ignored by anyone critical of the Chinese government. So instead, they come in agreeing with critical positions and redirect sentiment from there. So if you ever hear someone who sounds like a wumao, they are probably just overzealous nationalists and not shills.

[1] https://foreignpolicy.com/2016/06/03/china-explained-sixth-t...


I am curious about this site, too. But right now, it is still one of the fastest English-language sites to report scandals from China; another example is this[1] one I submitted before. I've tried to read some Chinese-language news about this scandal, and most of the content in this report is basically the same (BTW, I saw this scandal in Chinese first, then tried to find an article about it in English to submit here, and found this one via Google).

[1] https://news.ycombinator.com/item?id=16544058


> China, Explained

> BY BETHANY ALLEN-EBRAHIMIAN

Wow. That's an incredible title to use on a subject that is so narrow in scope.

Let's look at her other articles:

https://foreignpolicy.com/author/bethany-allen-ebrahimian/

Seems pretty one-sided.

And her twitter:

https://twitter.com/BethanyAllenEbr

Seems pretty one-sided as well.

Just to be clear, I don't intend to personally attack her. Just pointing out the bias in her reporting while we are on this topic of "wumao".

I am also not targeting Foreign Policy as a news media, as Foreign Policy itself doesn't seem to suffer from bias against China as a whole, as there are other authors that are either more neutral or biased towards China:

https://foreignpolicy.com/tag/tea-leaf-nation/

https://foreignpolicy.com/author/melissa-chan/

https://foreignpolicy.com/author/ran-jijun/

Edit: Attempt to remove personal attacks in the comment.

Edit 2: Remove the section about myself.


Those are pretty stereotypical ad hominems, actually. Anytime you attack the author and not the content, it is pretty much an ad hominem. These get used heavily in the Chinese press as well, so I assume there isn’t the taboo against using ad hominems (and other red herrings) in Chinese culture as there is in western culture.

Wumao is basically someone who is paid by the Chinese government to subvert online public opinion. If you are just pro-China and being called a wumao, the term is being misapplied. Anyone who says anything critical of China is called biased against China, even if it’s just a thing or two. Few people are actually completely biased against China on everything, but in Chinese thought, there seems to be no room for nuance.


Thanks for the reply. I have edited my comment so that it is less personal. Is that enough to remove the ad hominems?

Also, there's plenty of room for nuance to judge the bias, like the other authors on FT that I pointed out in my comment. Her bias is quite clear in my own opinion, given that there are many articles by her on China in FT, and most of the content (at least first few pages) is critical of China (not just one or two).


Ad hominem is easy to fall into if you don't know what to look for, we all do it unless we are trained/train ourselves otherwise, it is just human nature. The reason it is taboo is that it is really an emotional appeal rather than a rational one.

As always in western press, bad news is much more visible than good news, leading to an appearance of bias against China when it is really just a bias for bad news that readers will take note of. The CPC is exploiting this exactly with sixthtone, realizing that their feel good only propaganda approach doesn't translate abroad.

Also, certain reporters do feel good articles, others are better at critical ones, it doesn't mean that the reporter has a bone to pick with China, just that they have a job to do. In reality, she is probably less salty than the average long term laowai.


start out with an overtly critical, almost "ad hominem attack" position, include your own stance in the course of your argument and redirect sentiment - in this case trying to garner sympathy for yourself for being attacked as a wumao.

you're either actually a wumao or are too afraid to actually take a reasoned stance. i had never heard of the term before this thread but makes sense, i'm reading a lot more content like this nowadays wondering what exactly the author is arguing about


Thanks for pointing it out. I have edited out personal comments on her and my own stance. Didn't think it would be perceived as garnering sympathy, but if people feel that way I am happy to remove it.


"Probably it is a good time for us to reflect on this, and maybe next we can try to appreciate and acknowledge others’ arguments before jumping into a rebuttal."

[cit. https://paradite.com/2014/01/23/good-articletaking-a-war-of-...]


Yes. I'm guilty of this quite often. And probably in this case too. That's why I need constant reminders like your comment.

Do you know the article beforehand or you discovered it through my profile?


I checked your profile and blog after reading this discussion. Now I feel some regret posting that anyway, as I think my comment did not add anything valuable to this discussion and was only instrumental to reprove you, a person who I don't know. So I would like to apologize, I am sorry for my stupidity and lightness.


I think it a little funny that you are attacked for this. Personally I have a hate for China due to nationalistic reasons, perhaps unreasonably. But it is good to question the source of information. After all that is the idea of a free press, no? Should we not all question the source of the information to find bias? For myself overall, I would say fuck China.


So, HN has a culture of not explaining down votes, but in this case I can only reason that it is because I said "fuck China". I do not know what experience the down voter has had with China. Myself I made the mistake of loving a person who was later imprisoned in a forced labor camp in China. I understand that that is partly my fault(they were never shy about their beliefs). At some point though it should be okay for a person with grievances to rail against the machine that imprisons them and separates them from their desires.

Ask yourself if you have to worry about your love having all their organs after an act of civil disobedience. I have and do every night. I do not know how to state that in a way that is palatable.


> But it is good to question the source of information.

Not when it's done in lieu of a rational discussion of the information, which should be the majority of the argument. Hitler liking dogs does not invalidate loving dogs even one iota - so what did you gain by attacking the source (when the subject is not the source but what it says)? It's a distraction and a (dark side!) rhetorical method, attempting a shortcut through evoking emotions.


I always suspected that Emacs's Doctor AI knew a bit too much about RMS's hangups and inhibitions.

http://www.art.net/~hopkins/Don/text/rms-vs-doctor.html


I have priced some popular computer vision APIs and found you can get a human in southeast Asia to do the work cheaper... Something seems wrong with that!!!


OpenCV is free and used for important projects at companies including Yahoo! and others.


OpenCV won't do the recognition/task for you, which is what the OP was referring to. In a nutshell, it's a bunch of useful tools/building-blocks. It's up to you to use them in the right way, and then apply CV algorithms + training so that it can do something.
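To illustrate the building-block point: out of the box, OpenCV will happily give you edges or contours, but "recognizing" anything still requires a model you train or bring yourself (sketch; the file name is a placeholder):

    # Sketch: OpenCV primitives get you this far with no training at all...
    import cv2

    img = cv2.imread("receipt.jpg")                  # placeholder image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print(f"found {len(contours)} contours")

    # ...but deciding which contour is a total, a date or a logo is the part
    # you have to build (or label data and train a model for) yourself.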


This isn't the first time something like this has happened.

Pinscreen was also accused of faking results, including those shown off at Siggraph 2017. https://www.theregister.co.uk/2018/07/18/pinscreen_fraud_cla...


> This isn't the first time something like this has happened.

This has been happening for centuries: https://en.wikipedia.org/wiki/The_Turk


artificial artificial intelligence


Funny enough, a lot of marketing spiels spell out AI as "augmented intelligence", and take the position that it's not their fault if people misinterpret their use of AI as artificial intelligence.


Nailed it.


I remember hearing from a colleague that Expensify wasn’t automatically identifying the expenses, but had help from manual labor. I don’t know if it is true, and quite honestly, as a consumer I don’t care. If something is giving me a good experience which eases my work, I don’t really care if it was done with AI, human labor or magic - as long as it works.


You might care if you've chosen Expensify instead of another company in the same space out of expectation that human eyes will not look over your expense data. Of course Expensify's obvious defense will be that they don't explicitly give that guarantee...

But you might also want to care about this on general level. So many bullshit companies doing fake-AI create perception that the technology is much more advanced than it really is; this influences public opinion, and political decisions. Consider e.g. the coming wave of AI-induced job loss, that was such a hot topic just a year or two ago (for some reason it seems to have died down recently). After "Humans Need Not Apply", I expected we'll see significant effects in a decade. These days, I'm much more conservative about the timeframe, because I've learned that most of the immediate fear was based on hype and false advertising of a new statistical gimmick (deep NNs), that in reality only sorta, kinda, works, for some very limited tasks.


You don't care? So if someone in an offshore labor camp starts collecting your Uber receipts, they can gather where you live and your regular movements; then they gather your medical bills (say you did trust that mob) and they have nearly enough to do a nice social engineering job on you. You should care. Turk-style companies doing this should be exposed, so consumers who actually care about their privacy know about them, and when their employer forces them to use these apps they can say no and stand up for data privacy. Just my 2c worth ;-)

ps. Expensify sent images with personal data to Mechanical Turkers, calls it a feature

https://arstechnica.com/information-technology/2017/11/expen...

There are others out there too and you can check out their "Privacy Policies" whether they use "data extraction teams" offshore.


You might care as an investor, lest the great consumer experience simply come from burning through VC money. As an analogy, if Uber claimed to have self-driving taxis but actually had people hiding in boxes driving them, that would be investment fraud.

The promise of AI has always been not better experiences, but similar human-touch experiences at a much lower cost.


I'm on the fence about that... Take a look at Receipt Bank privacy policy https://www.receipt-bank.com/privacy-policy/ -- search "data extraction team" and their mega $50M funding from US investors in 2017. I have found around 90% of accounting products do this and no one snuffles or maybe all that money is going into hiding this fact and buying off "influencers".. ok I will shut up now ;-)


My comment was merely made as a consumer.


You don’t care, but some consumers will care that it’s real people going through all their data and not just a machine. And claiming to have AI but using manual labor is straight-up investor fraud on a Theranos level.


You expect AI to get better over time and not to make arbitrary errors. I’ve been using x.ai for a while and it has actually gotten worse. They always say that it’s algorithms (let’s lie to our customers twice!). Algorithms don’t mistake Tuesday for Wednesday.


As Uber proved: AI is hard, but labor arbitrage is easy.


I was a speaker at a conference they were doing live annotation for. (http://waic2018.com/index-en.html) See slide deck here for proof: https://www.slideshare.net/agibsonccc/world-artificial-intel...

Beyond that, commenting on the translation a bit. They did live translation the first day of WAIC for the headline speakers. There were 2 screens: one was Baidu and the other was iFlytek. Neither was that good on the English side (it was OK, but could barely keep up with the speakers).

They claim they are still working on English. The grammar of the output wasn't coherent. iFlytek itself sells some neat hardware that is pretty good.

Beyond that, it seems like they are mainly collecting data right now. I would not be surprised if they were doing this just for marketing visibility. It is easy to fake.

Happy to answer questions about the experience there if people would find it useful.


Reminds me of the classic mechanical chess-playing Turk illustrations. I was about to link one here, but all the search engines could come up with using plain text search were links to Amazon's crowdsourcing thing and its competitors. We've really put the cart before the horse in so many ways on the 'net, to stay with a metaphor from orientalism.


The first 10 DuckDuckGo images are all about the chess playing turk:

https://duckduckgo.com/?q=mechanical+turk&t=h_&iax=images&ia...


That's why I wrote text search.


And because of what the text search returns you cannot link to an illustration?


My point was that a regular user won't find the historical meaning of the term "mechanical turk" without hacks, being steered towards commercial products of that name instead.


Ok. But in your text you seemed to say you cannot find the illustrations:

    Reminds me of classic mechanical chess-playing
    turk illustrations. I was about to link it here,
    but all the search engines could come up using
    plain text search were links to Amazon's
    crowdsourcing thing


I didn't use manual labeling during pilots for an ML-based service, and I regret it after the fact.

Your customer cares about business value. In my experience, buzzwords like "AI" will only get your foot in the door; in the end, the business value is what sells. IMO it's a completely valid business strategy if you are selling something at cost with the plan of reducing the cost later and you can stay solvent in the meantime.


The actual Mechanical Turk.


They appear to have stolen the plot of the film "Shooting Fish". Trailer - https://www.imdb.com/title/tt0120122/videoplayer/vi276031106...


Damnit or good... someone else remembers this. Oh well.

Maybe they should've tried trading a red paperclip for a house?


It's all done by Algorithms! This guy's name is Algorithm, and this guy, and these other guys...


This guy's name is al-Khwarizmi..


We are expecting too much when we expect AI to solve all of our problems; we should instead be developing tech that helps us achieve more with less.

This is a huge mistake that many companies are making: they are trying to sell us a Jarvis when what they actually have is some (good) algorithms.


Reverse Turing: people with such narrowly defined work that an unbiased observer cannot distinguish them from an AI


would services that learn from recaptcha be considered as utilizing machine learning or no? I mean, it is humans who power that technically...


I think I can speak for the typical person when I say this:

what


The whole idea is that AI is supposed to: a) run more accurately than humans (good luck when translating), b) run at lower cost (even Mechanical Turk is not as cheap), and c) not require an army of people to handle the task.

If the "AI" company lies by using manual translation without the Turk, they're lying about point B a lot. If they're using Turk, they're lying about point C and a bit less about point B.


Makes sense. The law of information non-growth proves AI cannot create mutual information. The only source we know of is people. This fake AI is the future of the industry.


I guess you are talking about the second law of thermodynamics? Even humans don't violate that; they have to get the energy for generating information from somewhere, and that increases entropy.


No, Leonid Levin's law of information non-growth in algorithmic information theory.

https://news.ycombinator.com/item?id=17986929
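For reference, the statement being invoked, in the usual Kolmogorov-complexity notation (my paraphrase; see Levin's papers for the precise constants):

    % Algorithmic mutual information between strings x and y:
    %   I(x : y) = K(x) + K(y) - K(x, y)
    % Levin's conservation / non-growth law (paraphrased): for any
    % computable transformation f,
    %   I(f(x) : y) <= I(x : y) + O(K(f))
    % i.e. algorithmic processing of x cannot increase its mutual
    % information with y by more than the complexity of the algorithm itself.
    \[
      I(x : y) = K(x) + K(y) - K(x, y),
      \qquad
      I(f(x) : y) \le I(x : y) + O\bigl(K(f)\bigr).
    \]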



