
Ask HN: My company is lying about product. What do I do? - notsmart
My company sells a product that it publicly claims uses AI. The product contains no AI at all. The statements they make about the product and how it uses AI are 100% false.

What should I do?
======
n4r9
I had a similar worry about a year ago. On reflection I realised that AI is a
much broader, vaguer term than I'd thought. It can be used to describe any
software which replaces some human process involving decision-making. A huge
range of software can legitimately or tenuously be made to fit that criterion.
Doesn't have to involve ML, deep learning, neural nets or anything like that,
even if some people would prefer AI to specifically mean those things.

~~~
supercanuck
I was confused when I learned linear regression is considered ML now.

~~~
contravariant
Well, it's not like a support vector machine that just separates two point
clouds by a hyperplane is that much more advanced than an algorithm that draws
a line that best matches a point cloud.
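
For what it's worth, the "line that best matches a point cloud" really is just a closed-form formula. A minimal sketch, with made-up numbers:

```python
# Toy illustration (hypothetical data): "training" a linear model is
# just least-squares line fitting through a point cloud.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # roughly y = 2x + 1 plus noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least squares: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # -> 1.99 1.04
```

Whether you call that "statistics" or "machine learning" is mostly branding.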

------
methodover
Let’s get one thing straight: If your company says it uses AI to solve its
customers’ problems but it does not, that is lying. 100%. It’s fraud.
You are absolutely right to be concerned.

I would argue that you have a moral obligation to do something about this
while bearing in mind your own needs and obligations (e.g., as a breadwinner
for your family).

Others have been in your situation before. Try seeking advice from Tyler
Schultz, one of the first Theranos whistleblowers:
[https://www.linkedin.com/in/tyler-shultz-450923126](https://www.linkedin.com/in/tyler-shultz-450923126)

There may be others, I’d love to see comments giving other suggestions.

You could also get in touch with the Wall Street Journal; its tip line is here:
[https://www.wsj.com/tips](https://www.wsj.com/tips)

You could send a message directly to John Carreyrou, the WSJ reporter who
broke the first big Theranos story.

Going directly to the press has some serious risks — legal risks especially.
That’s why you might first begin by merely seeking advice and input from
people who have worked on this before.

Many people will tell you that the legal risks mean you shouldn’t do
something. I understand that position. At the same time, I believe we human
beings have an obligation to each other to deal fairly and honestly, and to
help bend the world in that direction. It’s not an easy position for you to be
in. Good luck.

~~~
neilv
It's possible OP is in a Theranos type situation, and that's a good thing to
look out for.

It's also conceivable the company is doing Wizard of Oz exercises (e.g.,
having humans simulate what they ultimately want software to do), to refine
their model and build ML training data, which is an entirely valid approach,
_but they'd have to be upfront about that_.

Upfront to users and, especially, investors. For example, if someone told an
investor that a DNN was servicing the customer requests when it was actually
humans, that would create a very difficult situation, and fixing it would
probably involve consulting a lawyer. In that scenario, there might be others
in the company who are angry at the person who ran their mouth, but who don't
want to take the hit of fixing it the right way (and then things can get
worse; "the coverup is worse than the crime" is a thing).

I suspect OP's situation is much more easily corrected, or might even just be
a small misunderstanding.

~~~
boulos
Wizard of Oz experiments don't work if you tell the user (from a user study
perspective). You could tell them afterwards, but depending on the product /
service that may not make sense.

~~~
neilv
This isn't necessarily an experiment, users' perception of _how_ might not be
a concern at this time, and there are different kinds of Wizard of Oz
exercises.

For example, when I first learned the term (in HCI, or human factors
engineering), they used the example of a mockup by photocopier designers, in
which obviously the people using it knew there was a human moving sheets of
paper to the slots.

Maybe there's a better term?

------
mchannon
Try to see how hard it would be to make it true, even if it's for some
tangential accessory use. Add some elementary AI function, say one that writes
a line to a log in your build scheme, if that's something you have control
over.

Then, if you jump ship, you can put "conceived of and implemented 100% of AI
capabilities in former employer's operations" in your resume and cover letter.
Win-win.

------
Cactus2018
You are not alone

[https://www.theverge.com/2019/3/5/18251326/ai-startups-europe-fake-40-percent-mmc-report](https://www.theverge.com/2019/3/5/18251326/ai-startups-europe-fake-40-percent-mmc-report)

> Forty percent of ‘AI startups’ in Europe don’t actually use AI, claims
> report

~~~
NicoJuicy
I'm pretty sure that it's not only related to Europe ;)

------
neilalexander
In the modern market, "if statements" and "control flow" are "AI" in the eyes
of marketing execs. It probably won't do you any good to try and tell them
otherwise.
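
To make that concrete, here's a hypothetical sketch (all names and thresholds
invented) of the kind of branching logic that routinely gets branded "AI":

```python
# Hypothetical sketch: a "game AI" brain that is nothing but if statements.
def enemy_action(health: int, player_distance: float) -> str:
    if health < 20:
        return "flee"       # low health: run away
    if player_distance < 5.0:
        return "attack"     # player is close: engage
    if player_distance < 20.0:
        return "chase"      # player is visible: pursue
    return "patrol"         # default behaviour
```

That's the whole "brain", yet in a marketing deck it could plausibly be
described as an intelligent agent.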

~~~
Bekwnn
In games this is widely known and accepted even, but I guess that's a slightly
different case since there's some history behind the why. It's AI because the
actors are game characters trying to emulate a real behaviour, whether that's
a soldier, dinosaur, or a fish. "Artificial intelligence" makes sense.

I don't think it's too bad to call a complex decision process "AI". ML is a
separate distinction.

------
Odenwaelder
If you can't take the lies, leave. It's the only way. I would refrain from
making this public because it could get you in a lot of trouble, which isn't
worth it.

~~~
myself248
What sort of trouble beyond getting fired?

~~~
squarefoot
No idea about the OP, but in my case every contract I signed contained clauses
that prevented me from speaking or acting in any way against the company's
interests, at least while I worked there; a sort of non-compete agreement
applied more broadly. Luckily I never had to test those clauses, but I'm sure
that had I done anything against them they would have destroyed me in court in
no time.

If I were in the OP's shoes I'd leave ASAP. One day someone will let the cat
out of the bag (coworkers, researchers, competitors doing reverse engineering,
etc.), and he could lose his job anyway if the company tanks. Not to mention
being associated with a technical fraud.

------
x0x0
You need to share more context.

TBH, a lot of AI does not do things better than a human. It merely does them
more cheaply or more scalably. Thus, if you are prototyping or building an
MVP, it's not necessarily terrible to have your "AI" be a human. Even the best
AI systems are often a mix of humans and software, where, e.g., humans label
data or the ML code sends examples near the decision boundary to humans for
review.
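
That boundary-review pattern can be sketched roughly like this (labels and
thresholds are made up for illustration):

```python
# Hypothetical human-in-the-loop routing: the model handles confident
# cases, and anything near the decision boundary is queued for a human.
def classify(score: float, threshold: float = 0.5, margin: float = 0.1):
    """Return a label, or None when a human should decide."""
    if abs(score - threshold) < margin:
        return None         # too close to the boundary: ask a human
    return "spam" if score > threshold else "ham"

# classify(0.95) -> "spam" (confident)
# classify(0.52) -> None   (near the boundary, routed to a human)
```

The human answers can then be fed back in as training data, which is exactly
the human-to-machine transition described above.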

Building an initial system that is human operated and transitioning that to
machines is how a lot of companies are built.

Of course, if you're lying to your investors, that will not end well. Ditto
future employees.

------
villaumbrosia
This is tough. The lie probably originates with the executives in your
company, so you probably can't just nip the issue in the bud and change the
verbiage of what you do. With that said, you will probably have to start by
voicing your concern to your most immediate superior. When you do this, don't
immediately jump in as a whistleblower trying to deliver justice (which I am
sure you were not going to do, but just in case).

Rather, have an honest conversation, and see if there is anything that can be
done to create a more honest image of what you are doing. Hopefully they will
have some sway or insights on how change can be brought about.

If your boss is unwilling to hear you out, you might have a problem you can't
fix. In this case, if the company is willing to deceive its customers for
financial gain, then it is likely time to start looking for a new job
elsewhere.

In the long run, companies with shady business practices get found out and
won't last very long. Luckily for you, it seems like you have an honest head
on your shoulders, and you will be a great candidate for a business that does
things the right way.

------
karmakaze
Personally, I wouldn't worry about it; my rice cooker has AI. If it really
bothers you personally, leave. I wouldn't consider it a moral obligation to
expose the company unless there's more to it than the usual marketing and biz
aspects, e.g. individuals being put at risk or exploited. I don't include
investors, whose job it should be to sniff this out.

------
milesvp
Keep in mind a simple if statement in code is AI. AI has _always_ been a
(shitty) marketing term, and sales and marketing have been trying to use it to
sell products for the last 50 years. You’d need to elaborate a lot more on the
company's specific claims before I’d personally be worried about sales and
marketing selling snake oil with this term...

------
xiphias2
Look for another job in the background if you want, and leave once you've
found a better one.

More important, though, is whether the product is actually useful.

At the same time it will be hard to find a company in which the CEO doesn't
ever lie :(

------
partisan
I suspect many of the AI companies floating around nowadays are really just
faking it till they make it on the assumption that they will figure it out
once they get big enough.

------
sodosopa
In short: GTFO. Longer: They're unethical, and eventually it will be
discovered and tarnish your career. Do not stay and be their accomplice.

------
thinkingkong
The team is small enough that you should be able to talk to whoever is in
charge of either engineering or marketing and ask them about the strategy and
positioning. In some cases you need lots of data before any kind of ML starts
working; this could be that period of time.

If it violates your integrity to work there after you have an answer then just
move on.

------
staunch
Any time I read or hear "AI" these days I automatically translate it in my
head to the word "software". After all, all software is a kind of artificial
intelligence, so it's not exactly dishonest to use the term AI -- it's just a
bit lame.

The benefit of doing this mental translation is that I'm no longer annoyed all
the time ;-)

------
r3n
Since you are not fine with it, you have two options:

1\. You quit right away.

2\. You force yourself to go to work, eventually burn out, then quit.

Unless you are in a position to influence the company, it isn't worth your
time to try to change it.

If you need the job now, invest more time and resources in yourself so you can
move on to another place more easily later.

------
taf2
Is it an aspiration for the company, and are they close? Is it
miscommunication between product and marketing? Or is it intentional lying?
Are you offering some sort of information automation? I can see anyone not in
computer science being easily confused, so maybe intent matters here.

------
sp527
I have yet to work at a tech company that doesn't either maximally
misrepresent or outright lie about its product offerings. Incidentally this
also seems to be a good way to identify companies that are likely to fail or
underperform in the long run.

------
jahrule
Who's being fooled?

\- The investors? Most likely they know your four-man team is not capable of
doing any solid AI work and are aware that your CEO is bullshi$$ing, but they
are actually betting on this ability to sell what does not exist as a
personality quality.

\- The customers? Why should they care? If they care, they should ask where
the AI is. "Show me the AI."

\- You? You're not being fooled. You might have been when you signed up for
the job, but now just accept it. If AI becomes a thing, it might eventually
find a way into your product. Otherwise I'd just play along while the cheques
keep coming.

~~~
Buttons840
I may not agree with these viewpoints (I'm not the OP so I don't have to form
an opinion) but they are unique and worth considering.

------
throwaway2019Z
Any chance you're working for a legal "AI" platform?

------
denkmoon
Nothing. Take your salary and do your job, assuming you aren't the one stating
the lies.

Being a martyr is cool and all, but I'd rather continue living.

~~~
robomc
Do you think they'll kill him?

~~~
denkmoon
Of course not, but getting sued into oblivion might as well be a death
sentence. Why take the risk, when the potential reward is so small?

------
mbrodersen
Show me a business that always tells the 100% truth about its products.
Anybody?

------
gesman
Sounds like my perception of the majority of vendors at the recent RSA
conference.

------
cm2012
Is anybody conceivably being hurt by the claim that it's AI?

------
mbrodersen
AI is just a marketing buzzword. Don't worry about it.

------
Irishsteve
Does it at least solve the problem that it says it solves ?

------
craftinator
Do you work for IBM, by any chance?

------
bromonkey
leave.

