
Mark Cuban on Why You Need to Study Artificial Intelligence - applecore
https://bothsidesofthetable.com/mark-cuban-on-why-you-need-to-study-artificial-intelligence-or-youll-be-a-dinosaur-in-3-years-db3447bea1b4
======
minimaxir
The title is clickbait, but the problem with the religious fervor around
machine learning/deep learning is that entrepreneurs/VCs keep arguing that
deep learning is magic and can solve any problem if you just stack enough
layers. (see also:
[https://www.reddit.com/r/ProgrammerHumor/comments/5si1f0/machine_learning_approaches/](https://www.reddit.com/r/ProgrammerHumor/comments/5si1f0/machine_learning_approaches/))

Meanwhile, statistical methods for non-image/text data with identifiable
features can often work better than neural networks, but they are not as sexy.
(Good discussion on HN about this:
[https://news.ycombinator.com/item?id=13563892](https://news.ycombinator.com/item?id=13563892))

~~~
zebrafish
I have no experience with the entrepreneur/VC side of this coin, but Data
Scientists know the differences between traditional regression-based ML and
CNNs. We know that doing image recognition or NLP is best suited to
nvidia/tensorflow/keras/deeplearning/buzzword/buzzword. We also know that SVMs
& Random Forests & regression lines work just as well as they always have to
make predictions based on your click-through data.

Maybe we should explain the cost/benefit of the buzzwords vs. the science?
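For what it's worth, a minimal sketch of that boring-but-effective baseline
(scikit-learn, with invented click-through features; the column meanings and
numbers are made up for illustration):

    # Hypothetical tabular baseline: a random forest on click-through data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 10_000
    X = np.column_stack([
        rng.integers(0, 24, n),   # hour of day
        rng.integers(0, 7, n),    # day of week
        rng.random(n),            # user's historical click rate
        rng.integers(0, 5, n),    # ad position
    ])
    # Synthetic label: clicks driven mostly by click rate and position.
    y = (0.6 * X[:, 2] - 0.1 * X[:, 3] + 0.2 * rng.random(n) > 0.3).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))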

~~~
logfromblammo
It's simple. More buzzwords equals more money. So whatever it is you're
_really_ doing, just find some way to semi-plausibly attach the latest hot
buzzwords to it.

Most of the "dinosaurs" around here should know by now that you don't get paid
to do what works; you get paid to do what the boss/customer wants. (And that's
not always going to be exactly what they _asked for_, either.)

As long as you continue to throw money at me, I'll continue to chase your wild
geese, hunt your snipes, fish for your red herrings, and joust with your wind
turbines. Most of us here can learn fast enough to always stay one or two
steps ahead of the boss on whatever topic they might latch on to. And that
becomes genuine experience if it somehow manages to graduate past the business
fad phase.

------
feral
Reading HN I worry that we're going to have _the opposite problem_ - a glut
of people will try to (badly?) learn ML and then realize there aren't enough
ML jobs.

I have a PhD and have held ML-engineer positions at a few different companies,
so I have good industry awareness.

Most applied ML, for most companies, right now, is actually relatively simple
models (hand-coded rules! logistic regression! You'd be shocked how common
these are.) The bulk of the work is data cleaning, gathering, integration,
deployment, productisation, reliability, avoiding pathological cases, special-
casing, Product, UX. You do need ML specialists who understand the stuff, to
make it all work and come together - but the ratio of ML specialists to the
wider team is low. Maybe 1 or 2 specialists on a team of 10 for an ML-heavy
product.

This is going to remain the case IMO. Yes, there will be small teams, in
highly resourced organizations (GOOG, FB etc), academic research labs, or
occasional hard-tech startups, who do new model development. Maybe if AI
becomes huge, you'll see more traditional Fortune 500s spin up similar
efforts.

But there'll be a much wider set of people and businesses applying and tuning
well-understood approaches, rather than doing new model development. And you
just don't need as many ML specialists for that approach.

Even with deep learning, the tooling will advance. I mean, even look at all
the research papers describing applications at the moment - so many of them
are using pre-trained models. Industry will be similar. Tooling will advance,
and you'll be able to do increasingly more with off-the-shelf pieces.
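As a minimal sketch of that off-the-shelf point (assuming Keras; the image
path is a placeholder), classifying an image with a pre-trained network takes
a few lines and no training at all:

    # Pre-trained ImageNet classifier, used as-is.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import (
        ResNet50, decode_predictions, preprocess_input)
    from tensorflow.keras.preprocessing import image

    model = ResNet50(weights="imagenet")   # downloads pre-trained weights
    img = image.load_img("cat.jpg", target_size=(224, 224))  # placeholder path
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    print(decode_predictions(model.predict(x), top=3)[0])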

I think ML is absolutely going to have a big impact - I buy at least some of
the hype. But should all developers, or even a substantial minority of
developers, start learning ML as a career imperative? I don't think so.

Finally, it takes serious time to learn this stuff. It's easy to dabble (and
worthwhile doing - it's fun, and sometimes you can do powerful things using
tools in a very black-box manner!). But actually thoroughly learning it takes
time. It takes serious time to build statistical intuition, as just one
example.

We could easily end up with a great many career developers who have a
specialization in ML, frustrated they never get to use it.

~~~
ylem
This is not my field, but a serious question: I once read that part of the
motivation of pharmaceutical companies in hiring researchers was not so that
they would all produce groundbreaking independent research, but rather because
they would be capable of reading the literature (again, not my field). Is that
true at all for machine learning? Would companies hire people who would be up
to date with the literature so that they could implement algorithms that
others have developed in an academic context and put them into production?

~~~
feral
Yes, I think that's true.

Basically, the interfaces of the models/tools/abstractions people will use
will be 'leaky'.

For example, you can take a machine learning method from scikit-learn, which
works really well on the scikit-learn example, and apply it to your problem.

Any developer can do this pretty quickly. If it works, and gets good accuracy
out-of-the-box, then great.
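A minimal sketch of that "any developer" step (scikit-learn, with one of its
bundled datasets standing in for your problem):

    # Default model, default settings: is the out-of-the-box accuracy enough?
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X_tr, X_te, y_tr, y_te = train_test_split(
        *load_digits(return_X_y=True), random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("out-of-the-box accuracy:", clf.score(X_te, y_te))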

But if it doesn't work, what do you do? How do you know where to look, what
could trip it up? Are your features OK, or is it a problem with your model? Or
maybe you framed the problem wrong? When you get into this sort of area,
that's when you need an ML expert who knows what's actually going on under the
hood - or can at least learn the particular model quickly - and can make
progress faster. A more general developer will slow down drastically once they
start reading the documentation for how the model/system actually works.

And hopefully the expert will frame the project better from the very start,
solving issues before they even arise, because they know the kind of issue
that _can_ occur, or suggest easier paths to solutions.

So, agree with your point there.

But how many such experts do you need? In my experience, only a small number
on a bigger team. ML folk are highly leveraged, but need a lot of support to
get their product into production - to manage the data (if it's worth applying
ML to, there's probably a lot of data, so a big data-engineering task, maybe
connected to a live system), to think about the UX, etc.

This will all evolve on several levels of course:

- The tooling will get better; model/data deployment/management will get
easier. But also, non-experts will be able to get more done as ML becomes more
robust out-of-the-box.

- We'll get better at building ML products (e.g. team structure, data
infrastructure, UX (designers learning how these things work), company org
(e.g. a lot of friction between Agile and ML))

- Businesses will want to do bigger things

But if I had to guess, we'll be picking up low-hanging fruit for a while, and
most of the work, for most companies, will be in the support infrastructure
and application, with just a minority of specialist ML roles.

------
deepnotderp
I'm going to go against the grain here and (gasp) not hate on deep learning.
People should realize that although many older statistical methods and
"traditional" machine learning methods such as LDA, SVMs and decision trees
may be good enough for business tasks, they are not the cutting edge of AI
research. I think people are forgetting exactly how difficult image
classification and object detection were before the advent of deep learning.
People hating on "stack more layers" forget that "stack more layers" is
EXACTLY what improved ImageNet performance to such a massive extent. ResNets
pushed the limits by figuring out how to stack more layers in a beneficial
way.

And let's take a look at AlphaGo: how would you do that with SVMs or decision
trees? Just get over the fact that deep learning provides a level of
"intuition" (Go's search space is famously greater than the estimated number
of stars in the universe).

I think that a part of the problem is that older ML PhDs are angry that deep
learning is so easy (until the learning rate fails to provide convergence of
course...) and would prefer that their preferred methods would still reign
supreme.

I'll end this wall of text by noting that OpenAI's Ian Goodfellow says that
all projects at OpenAI use deep learning right now, but they are not dogmatic
and will consider other approaches if they work well. I think this is the path
that should be taken. On the other hand, I also see a bright future for
uniting traditional techniques with deep learning, such as attaching an SVM to
a CNN or combining decision trees with CNNs, both of which have produced good
results.
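A minimal sketch of that SVM-on-CNN idea (a pre-trained Keras network as a
frozen feature extractor feeding a scikit-learn SVM; the images and labels
here are random stand-ins for real data):

    import numpy as np
    from sklearn.svm import SVC
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

    # Frozen CNN: global-average-pooled features, no classification head.
    extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    X_images = np.random.rand(8, 224, 224, 3) * 255   # stand-in images
    y = np.array([0, 1] * 4)                          # stand-in labels

    features = extractor.predict(preprocess_input(X_images.copy()))
    svm = SVC(kernel="linear").fit(features, y)   # classic SVM on CNN features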

~~~
dmreedy
>> I think that a part of the problem is that older ML PhDs are angry that
deep learning is so easy (until the learning rate fails to provide convergence
of course...) and would prefer that their preferred methods would still reign
supreme.

I think that's definitely a part of it, and I feel that way sometimes myself
(not that I'm a PhD). But there's another side of that reluctance that lies on
the axis of model accountability and explicability. A lot of modern ML/Deep
Learning doesn't -feel- like we're understanding anything any more than we did
ten years ago. Yes our black-box results are better according to the tests
we've laid out for them, but there's something more slippery about the 'why',
beyond the handwave of 'complexity'. Maybe this is just the way it will be
going forward (in the spirit of Quantum's "shut up and calculate"), but it is
not easy to give up something that you can wrap your head around with
something that kind of just takes care of itself, especially if you're in the
business of seeking knowledge instead of results.

~~~
deepnotderp
I'm not sure I fully buy the "explainability" argument against deep learning,
since our goal is human-level intelligence and human intelligence isn't easily
explained either.

On the other hand, we're nowhere near human-level intelligence in most tasks
(Go, poker and image classification notwithstanding), so I can understand the
argument for explainability from a practical perspective. I think we're making
some good progress in that direction though, and I'll list some of it below:

1) Attention maps in CNNs can tell us what the net is usually looking at (a
minimal input-gradient sketch follows after this list).

2) "Attentive Explanations" use attention mechanisms to point to the object of
interest to generate an explanation for VQA tasks, check the paper (warning
PDF): [https://arxiv.org/pdf/1612.04757](https://arxiv.org/pdf/1612.04757)

3) A recent project used a similar explanation mechanism that forced the
network to output "what it was thinking" while playing an Atari game.

4) NTMs (neural Turing machines) allow weighted memory access, which
alleviates the black-box issue to some extent.
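For flavour, here is a minimal input-gradient saliency sketch (assuming
TensorFlow/Keras; this is plain vanilla gradient saliency, not any specific
paper's method, and the image is a random placeholder):

    import tensorflow as tf
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

    model = ResNet50(weights="imagenet")
    img = tf.random.uniform((1, 224, 224, 3), 0, 255)   # placeholder image

    x = tf.Variable(preprocess_input(img))
    with tf.GradientTape() as tape:
        preds = model(x)
        top_class_score = tf.reduce_max(preds, axis=-1)
    # Where the gradient is large, the pixel matters to the prediction.
    saliency = tf.reduce_max(tf.abs(tape.gradient(top_class_score, x)), axis=-1)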

~~~
dmreedy
>> since our goal is human level intelligence and _human intelligence isn't
easily explained either_

I think this might be where the real divide lies. To present a false
dichotomy, there are two schools: one that is interested in building systems
that attain human-level intelligence, and another that is interested in
understanding the nature of human intelligence by building models using our
closest understood analog (computation).

We're definitely making progress on Deep Learning models that tell us more
about how they think they work, but philosophically, I'm not sure we're making
progress on understanding the nature of what they experience. I don't mean
this in any polemical sense, just trying to explain the particular
dissatisfaction (and it -is- dissatisfaction, not contempt or disdain. The
stuff is amazing) I feel towards these advances.

------
evgen
I would be far more inclined to heed this advice if Cuban gave any indication
of understanding ML as anything more than a magic black box; fairy dust to be
sprinkled into every pitch deck to overcome any objection or solve difficult
problems. The bandwagon is passing through, folks; jump on board with Mark or
you will have fewer buzzwords with which to craft your deck...

~~~
fixermark
Was he talking to engineers or owners / executives though?

""" The Upfront Summit is LA's premier technology event, with more than 750 of
the country's top investors, startups, and corporate executives """

If he was talking to executives, it may be sound advice. It's extremely likely
the hot business opportunities in the short-term will be applying ML
techniques to outstrip competitors trying to solve problems with traditional
hand-coded solutions. In business-speak, "Learn ML" translates to "Familiarize
yourself with the space and hire the people you need who know the topic,"
because that's how a company "learns" something.

------
pjungwir
Sort of a content-free article, but the headline is an interesting bold claim
that conjures a lot of thoughts:

- I know enough machine learning to be dangerous, but I'm hardly ever asked
to use it. I designed a Bayesian classifier for my own startup around 6 years
ago, analyzing political donor networks. I've completed the Stanford ML
course. Back in college I did a math minor, so I'm comfortable with linear
algebra, calculus, etc. I'm pretty comfortable with statistics of both kinds.
But my bread-and-butter is freelance web development . . . and I'm not really
even sure how to find work doing more MLy things.

- I've read over and over that the most time-consuming part of ML work is
data collection & cleanup, and that matches my own experience. It is the same
thing that killed so many data warehouse projects in the 90s. You don't need a
Ph.D. to do it, but it is a tough and costly prerequisite. So it seems like
you'll need non-ML programmers even for specifically ML projects.

- In a similar vein, Google has written about the challenges of
"operationalizing" machine learning projects.[1] Having a little experience
collaborating with a team doing an ML project, where they did the ML engine
and I did the user-facing application, I can say that many ML experts are not
experts in building reliable, production-ready software.

- Will there ever be a Wordpress of machine learning? If there is, the author
will be rich, but you won't need a Ph.D. to operate it. But because ML
requires hooks into your existing systems, I don't know if this will ever
happen. What _will_ happen, I think, is plugins to existing e-commerce systems
for product recommendation or other off-the-shelf ML-powered features. These
already exist, but I assume they will become more prevalent and powerful over
time. In any case, the mainstreaming of ML for business will be inversely
correlated with the expense to implement it, which suggests it will be easier
and easier for non-expert developers to use (and misuse).

EDIT: Added the (now-)third bullet point I forgot before.

[1]
[https://research.google.com/pubs/pub43146.html](https://research.google.com/pubs/pub43146.html)
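On the product-recommendation point above, a toy item-item sketch (pure
NumPy/scikit-learn; the purchase matrix is invented) shows how little core ML
such a plugin actually needs:

    # "Customers who bought X also bought Y" via item-item cosine similarity.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    purchases = np.array([   # 4 users x 5 products, 1 = bought
        [1, 1, 0, 0, 1],
        [1, 0, 1, 0, 0],
        [0, 1, 0, 1, 1],
        [1, 1, 0, 0, 0],
    ])
    sim = cosine_similarity(purchases.T)   # product-to-product similarity
    product = 0
    recs = [p for p in np.argsort(-sim[product]) if p != product]
    print(recs[:2])                        # top-2 co-purchase recommendations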

~~~
vidarh
> and I'm not really even sure how to find work doing more MLy things.

I think this is key - prospective clients won't ask for it because they don't
understand where it could be used, and they won't understand the heavy ML
methods. An approach there would be to pitch things like improving search
results using a Bayesian classifier applied to analytics data as a cheap
upgrade when quoting other work. Until people are used to even the basic
statistical approaches, they won't be ready to invest in something more
drastic.
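A minimal sketch of that pitch (scikit-learn, with invented
query/clicked-category pairs standing in for real analytics data):

    # Naive Bayes over past queries: predict which category to boost.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    queries = ["red running shoes", "trail shoes", "wool socks", "ankle socks"]
    clicked_category = ["shoes", "shoes", "socks", "socks"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(queries, clicked_category)
    print(model.predict(["waterproof shoes"]))   # -> ['shoes']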

------
vidarh
One of the things I've realised is that the more I look around, the more I
find opportunities where people "should" have seen the potential of basic
Bayesian models, simple clustering algorithms and other simple
mathematical/statistical methods 20 years ago, but didn't, and still don't.
That has massively changed my perspective on how quickly the onslaught of
machine learning will come.

E.g. when I was reading up on genetic algorithms etc. 20 years ago, we also
expected the "revolution" to be right around the corner, and that things like
genetic programming would change the world in a few years' time. And while
various of those methods found use in some places, most places that could have
used at least some of the simpler ones still don't.

In other words, I think talking about a 3-year timeline is crazy. It's getting
more attention, sure, but there is so much low-hanging fruit that most
developers could be busy for the next 20 years putting the most trivial
algorithms in place all over, and we still wouldn't have picked off even the
low-hanging fruit where the computational resources, algorithms and data to
make a big impact were well within reach 20 years ago.

This certainly means there is plenty of room for a lot of developers to do
very cool stuff and build careers on machine learning today, but it also means
most developers will not have to learn the state of the art - or anything near
it - for a very long time.

As a concrete example I give to people, consider all of the search boxes out
there on various sites - product searches, location searches, site searches -
that are straight keyword-based searches that don't take into account _any_
clickstream data to improve ranking. The proportion of search boxes I see that
take advantage of the available data is vanishingly small, even though very
basic analysis can improve the perceived relevance of the results massively.
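To make that concrete, a minimal sketch of the "very basic analysis" meant
here (pure Python; the click log format is invented):

    # Re-rank keyword-search results by historical clicks for this query.
    from collections import Counter

    click_log = [("laptop", "item17"), ("laptop", "item17"),
                 ("laptop", "item3"), ("laptop", "item17")]
    clicks = Counter(click_log)

    def rerank(query, keyword_results):
        # Stable sort: ties keep the original keyword-relevance order.
        return sorted(keyword_results, key=lambda item: -clicks[(query, item)])

    print(rerank("laptop", ["item3", "item9", "item17"]))
    # -> ['item17', 'item3', 'item9']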

We certainly will see more companies invest in proper machine learning as the
payoff gets higher and difficulty in taking advantage of it drops. But we will
also see a huge proportion of sites that could use it continue to ignore it
for years to come.

There are big business opportunities in finding ways of making a dent in that
portion of the market, though, so learning this stuff can certainly be well
worth it on a personal level; I just don't believe in his timeline in terms of
the overall market.

~~~
anotheryou
If you can survive the competition with basic keyword search, the benefit of
implementing something more sophisticated might just not be worth it. With
search you usually know what you are looking for and can pin it down quite
well with classic keyword search; discovery happens elsewhere.

~~~
vidarh
> With search you usually know what you are looking for and can pin it down
> quite well with classic keyword search, discovery happens elsewhere.

Having looked at the search terms users enter on a lot of client sites, I
don't believe that for a second. Users are exceptionally bad at composing
search terms.

~~~
anotheryou
You are probably right. Who searches? Is it really just the long tail that
can't be served by good navigation and the most important content in prominent
places? Or could one optimize the rest first? (Of course, a bit of statistics
on what is accessed frequently helps here too.)

------
anupshinde
Statements like these suggest that another AI winter is coming (sooner than 3
years, I guess).

"""He thinks even programming is vulnerable to being automated and reducing
the number of available programming jobs."""

I believed something similar could happen within 1-2 years of learning and
writing AI programs (more than 12 years back). I believed it so much that it
consumed most of my weekends as I took on the genetic programming approach.
Yes! Computers can write programs - BUT try reading those. Eventually, after
spending hours or days, you will be able to read those programs, and you might
find a simple "hello-world" program represented by a complex mathematical
equation. Good luck trying to get such a program fixed by humans. Now imagine
that experience for decoding deep-learning neural nets. That, however, is a
black box from a programmer's perspective.

From a business/management perspective, the code is a black box anyway. When
they get NNs that can generate the required software, they will replace the
people-manager with an NN-manager (who is a programmer, btw!)

~~~
primaryobjects
Are you referring to something like this?

Using Artificial Intelligence to Write Self-Modifying/Improving Programs
[http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/](http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/)

~~~
anupshinde
No, that looks like GAs with string/array representations. A similar thing
worked for me when I tried randomly referencing nodes within a chromosome (say
from index A to index B), generating a graph-like structure.

The outputs are like this for some not-so-easy targets:

    Op nodes ['ADD', 'ADD', 'MUL']
    EXPR: ['ADD[ni_99](ADD[ni_49](I__7[ni_43](), ADD[ni_19](I__8[ni_66](), ADD[ni_79](GET_CONST_3[ni_25](), I__9[ni_71]()))), I__1[ni_61]())', 'ADD[ni_13](I__6[ni_17](), ADD[ni_49](I__7[ni_43](), ADD[ni_19](I__8[ni_66](), ADD[ni_79](GET_CONST_3[ni_25](), I__9[ni_71]()))))', 'MUL[ni_68](GET_CONST_8[ni_73](), FLOAT[ni_42](I__1[ni_91]()))']

With genetic programming (using an AST), it can solve complex equations.
However, even this simple equation, i.e. the correct answer "(a + b + c - d)
/ e", could evolve into either of these (depending on my luck, maybe):

Case 1:

    ((int)((b+((c-d)+a))/e)&(int)((b+((c-d)+a))/e))

Case 2:

    ((((int)a&(int)(((((((mod(e,a)*c)/e)*(((((int)(d-a)&(int)b)+e)/a)/e))/((((int)e&(int)c)+e)+(c+(e/a))))*c)/e)*e))/((((int)e&(int)c)+e)+((b/e)+(e/e))))+(((((((((mod(e,a)*c)/e)*(((((int)(d-e)&(int)b)+e)/a)/e))/((((int)e&(int)c)+e)+((b/e)+(e/e))))*c)/e)* <..........10383 characters here.........> ))))))/e))/((((int)e&(int)c)+e)+(((d-e)/e)+((d/b)/e))))+(((mod(e,a)/e)+(b/e))+(((c/b)+((d+b)/e))/e)))))))))

The GP output (case 1 and 2) above was generated with a tweaked version of
[https://github.com/rogeralsing/go-genetic-math](https://github.com/rogeralsing/go-genetic-math)

------
gremlinsinc
That's as strong a statement as one of Trump's: learn machine learning or
you'll be a dinosaur in 3 years...

Maybe if it were coming from Bill Gates, Mark Zuckerberg, or another tech
titan with some actual coding experience and a deeper understanding of what ML
even is. Cuban's a businessman, and most CEOs I know don't have a clue about
the stacks that run their own companies, let alone what's popular.

That said, I do think ML will be important, but I develop e-commerce apps and
the like in Laravel; unless I move into AI and neural nets I don't see needing
to know a lot about ML (though I wouldn't mind moving in that direction as
that space picks up) -- but there are still plenty of opportunities without
it.

~~~
AJ007
Making machines learn, applying machine learning to successfully solve
problems, and positioning business assets to benefit from machine learning are
three very different pieces of the "learn machine learning" puzzle. I don't
know how any human would be able to go from 0 to having a good grasp of even
one of these pieces in three years let alone all of them.

I suppose there will be (and to some degree already are) machine learning
magic wands, but they are going to be the kinds of things that suck capital
out of the companies that use them (through loss of proprietary data and
competition-blocking moats).

------
marricks
Assuming we're all going to be deep learning programmers is more than a bit
foolhardy. I think what's really relevant to consider is that AI winters can
and do happen[1]. I would not disagree that deep learning has done some
amazing things; what I would say is that it does have limitations.

What causes AI winters is when an advance such as deep learning can be applied
to new problems and leads to increased interest. And while this new thing is
really good at a subset of problems and impresses the public, of course it
can't displace humans at everything and naturally has its limitations.

So funding pours in, everyone gets hyped, and then those natural limits are
(re)discovered and everyone turns against AI research. Of course many people
knew the limitations all along, but the dream is gone and so is a lot of
funding, until the next thing comes along.

This is probably natural in a lot of fields, but AI just seems more prone to
these boom-and-bust cycles because it's really exciting stuff.

[1]
[https://en.wikipedia.org/wiki/AI_winter](https://en.wikipedia.org/wiki/AI_winter)

~~~
sevensor
Exactly. 25 years later, I'm still waiting to be replaced by an Expert System.
A lot of the tech that fueled the hype train (logic programming! genetic
algorithms!) is still really neat, but it didn't work out the way we expected.
Same thing will happen with ML. It's the Prolog of tomorrow.

------
itg
Good luck with that. Any place doing serious ML will require the person to
have a PhD or to have publications and presentations at conferences like
NIPS/ICML. Even most CS grads with a bachelor's do not have the math
background required unless they double-majored in math or stats.

This is more about VCs/founders who are hyping up AI and want more ML folks so
they can drive down costs.

~~~
sidlls
Utter nonsense. A PhD signals two things: that a person has the same degree of
mastery of the core material as a person with a master's degree, and that he
or she has the determination to do additional original research sufficient to
produce a 100-page paper.

It isn't required for any serious research effort, except by the accident of
inertia. And it certainly isn't a necessary indicator of determination.

~~~
tensor
You seem to make extremely light of "doing original research" here. A PhD _or
equivalent_ is absolutely required to do serious research in the field. Sure,
you can always get the depth of knowledge required without a formal program,
but the reality is that few do. Most people just take Machine Learning 101
then think they are a domain expert.

That said, there is definitely a place for non-PhD level ML practitioners. I
don't think the industry has stabilized in this regard, but I can definitely
see a "machine learning developer" type position becoming quite common. This
is not the same as someone doing original research, but would definitely meet
the needs of a great many business use cases.

~~~
sidlls
I'm not taking it lightly at all, considering I've done it. I know exactly
what it requires and what it signals.

------
bsaul
Honest question: once the techniques settle a bit and libraries are created,
what will be needed, apart from knowing that machine learning algorithms are
based on some kind of statistical inference, with a few settings here and
there?

I mean, we don't need a PhD in image compression to create a service that
streams videos. We just use libraries. Same for everything in computer
science: it always ends up packaged in some kind of reusable code or service,
and only some specialists remain working in the field on marginal
improvements.

Why would ML be any different ?

~~~
Eridrus
Once you've plugged your data into an ML system and it gives you a classifier,
are you done? What if the results are not good enough - do you just move on to
another problem?

If you don't just move on to another problem you will need people who know how
to push these systems further.

~~~
Namrog84
But at some point, there is a difference between turning the dials of these
systems and implementing novel new systems. Obviously we always need novel new
approaches, but how much of that should stay in the PhD/academic/research
world, and how much spreads out into day-to-day engineering activities?

I am rather newbish in the whole space, but most everything I've seen is quite
often just knowing enough to turn the right dials the right way. And I do
wonder how long it will be before some of these dials can start turning
themselves over more iterations, with a simple "yes, that's what I want" or
"no, that's not what I want".

Sure, it might take some more cycles for it to find the optimum in this
example.
~~~
Eridrus
It all comes down to ROI of going further; if the ROI is small, people won't
go much further than twiddling some knobs. I think there's a reason you see
people throwing a lot of analysis at finance.

Understanding what models are doing and adding new features/data
transformations is standard work for ML practitioners now. If you believe the
deep learning hype, architecture engineering is the new feature engineering,
and to the extent that that is true, industry will come to resemble academia
to some extent. Maybe not making anything groundbreaking, but mixing ideas
together.

------
msvan
Either he's right about machine learning, or this is exactly the kind of thing
bubbles are made of.

~~~
geodel
Of course he is right. I saw some dinosaur characteristics already showing up
when I looked at myself in the mirror this morning.

~~~
Twisell
You are so lucky! I can't even look into a mirror, since I learned 3 years ago
that a NoSQL ninja will soon stab me in the back and steal my job because I
work with that old, declining SQL stuff!

~~~
markatkinson
Those made me laugh embarrassingly loudly.

------
badthingfactory
I'll place this in the same folder as the articles claiming Wix will
eventually replace web developers.

------
brilliantcode
Normally I'd laugh off any Mark Cuban antics, but he isn't wrong. AI is going
to greatly reduce white-collar jobs through economies of scale.

The Luddites of the early 19th century thought they would never be replaced
and continued on their trajectory.

~~~
ploika
In three years? No way.

A lot of white-collar jobs may be automated (or otherwise changed beyond
recognition due to technology) after about thirty years maybe, but not three.

~~~
brilliantcode
I agree 3 years is way too short. I'd say 15 years is even early. 30 years
seems maybe; 100% in 60 years.

------
mad44
(Pre-apology: I am not trolling; please don't take my comment below as more
than what I intended: another perspective on the strong reaction Cuban's
comments incited.)

Reading through the comments, I see that Cuban's statement upset and even
angered several HN commenters. That is a strong emotional reaction.

I am not saying it is the 5 stages of grief, but the first 3 fit: denial,
anger, bargaining, depression and acceptance.

Also, from Howard Aiken: "Don't worry about people stealing your ideas. If
your ideas are any good, you'll have to ram them down people's throats."

~~~
return0
I'm not even sure why it provokes such a reaction. Neural networks have
existed for 4 decades; without the deep part, but we know their potential
power and we still have not been replaced. Granted, this time it's different.
I think what worries most of us is that neural nets need lots of data and we
don't have access to it. Still, they are easy to learn and we should be
learning about them (note to self).

------
dkarapetyan
No thanks. Fundamentals, not hype, are what keep one from being a dinosaur.

------
anotheryou
I think prosthetic knowledge will become deeper and more accurate in the long
run. And if it scales, we won't need many people building the general-purpose
AI.

With this prosthetic knowledge, we will have to learn much more about what to
ask and know how much the machine knows.

One has to quickly grasp the abstraction that is one level too high or too
detailed for the machine to find, and then find the separate answers at the
level below and recombine them. You can't yet ask where to open a restaurant,
but you can google for demographics and write a program to map ratios between
income, foot-traffic density and restaurant density.

Once we can ask what and where the most profitable business to start in town
is, we probably still won't get a step-by-step guide on how to do it, interior
design included. Where the rubber meets the road there are still a lot of
opportunities to decide on, and complex data we can grasp more easily than the
machine.

------
xs
Just as there aren't really polymaths anymore, because the world has so many
specialized skills, I think there won't be such a thing as a "full stack
developer" in the near future, because of the complexity of development. Some
things that will contribute to the complexity: ubiquitous controls, advanced
AI, the internet of things, augmented reality, machine learning, and new
technologies we don't even have yet. We are in the golden age now, where a
single person can sometimes create a better website or app than a whole
development team at a Fortune 500 company. I think our Internet world will
become so complex in the very near future that a single person simply won't be
capable, and they'll have to specialize in only a portion of it.

------
taytus
Sorry, I refuse to visit such a clickbait headline.

------
owaislone
I think what he meant to say was that companies should learn it in 3 years or
the competition will drive them out of business. I don't think he meant
individuals will be dinosaurs, but startups that fail to take advantage of
ML/AI.

------
xamuel
Three years ago, Machine Learning on a resume meant: "Good candidate."

Currently, it means: "Jumps on bandwagons, caution."

In three years, it'll mean: "Brainless buzzwords, avoid."

------
JustSomeNobody
I want a nickel every time someone says programming jobs will be automated in
5 years. This goes back to the beginning of programming and it hasn't
happened.

------
coldcode
"Machine learning" would imply the machine needs to learn it, not me. Knowing
something about it and actually using it are quite different. While it might
appear "everywhere", most programming is still not ML and probably won't be.
When I started in 1981 I didn't know C yet; that did not make me a dinosaur
then either.

------
hnmot223
"Mark also said that what happens in the next 5–10 years is just going to blow
everybody away, especially in the field of automation. He thinks even
programming is vulnerable to being automated and reducing the number of
available programming jobs."

He's talking out of his ass here. This won't be happening anytime soon (if
ever).

------
usgroup
I kind of read it as "learn it and you'll be a dinosaur for 3 years". It made
me quite excited at the prospect. I started to practice growling and running
at a curious forward angle, but on second reading I find myself disappointed.

------
thomasahle
I wonder to what degree he means it. Because if he means "really" learning it,
this is akin to the "learn to program in a week" books. It takes much more
than three years to learn well.

~~~
vidarh
Here's my suggestion to people worried about this: learn how to use Bayesian
models and clustering methods, and how to recognise where they may be
applicable, and you will already be able to produce things that will astound
executives out there and deliver very real value for your employer/customers.
Those methods are simple - you don't even need to understand the maths to be
able to make use of them, though it helps.

By all means, start learning actual machine learning methods too, rather than
just statistical methods, but as I pointed out elsewhere on this thread: there
is low hanging fruit _everywhere_.

Far from all of those will need "proper" machine learning, and even fewer will
be willing to pay what it costs to hire people with in-depth machine learning
experience, or pay the development or computational costs, anytime soon. But a
lot of them will buy into the buzzwords and look for a cheaper halfway house,
or be open to pitches.
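For the clustering half of that suggestion, a minimal sketch (scikit-learn;
the customer features are invented) of the kind of low-hanging fruit meant
here:

    # Segment customers by (orders per month, average order value).
    import numpy as np
    from sklearn.cluster import KMeans

    customers = np.array([[1, 20], [2, 25], [1, 22],   # low-spend
                          [8, 30], [9, 28],            # frequent
                          [2, 200], [3, 180]])         # big-ticket
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
    print(km.labels_)   # cluster id per customer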

------
onmobiletemp
This is so dumb. You aren't going to understand or get a job in machine
learning unless you have at least a master's in the subject. It's extremely
difficult and complex. I see tons of college students taking machine learning
classes for fear of becoming a dinosaur, and none of them could get a machine
learning job afterwards. Programming has always been pretty easy. The AI
revolution won't be like the home computer revolution. It's going to be led by
a relatively small group of academics, scientists and engineers working in
prestigious research positions.

~~~
cr0sh
Currently there just aren't that many ML jobs out there to apply for - but who
knows what the landscape will be like in 5 years or so?

Your assertion, though:

> This is so dumb. You arent going to understand or get a job in machine
> learning unless you have at least a masters in the subject. Its extremely
> difficult and complex.

...couldn't be further from the truth. You can understand this stuff without a
master's in the subject. It really isn't too difficult or complex. Sure, I
will admit that understanding how to take a derivative might be useful, but
despite not having that knowledge (I'm working on obtaining it), I have still
been able to implement successfully working ML solutions - at least in a
classroom-type environment.

My last success was getting a virtual car to drive around a virtual track,
staying on the track and negotiating the curves, using an implementation of
the NVIDIA end-to-end CNN architecture and some data I generated (plus
augmentation and some other fun stuff). I used Keras and Python 3, running on
my workstation at home, with a 750 Ti SC as my "GPU" (I really need to upgrade
this). My model converged very well after 10 epochs, and after 20 the loss was
pushed down to sub-1%. As far as I could tell, there wasn't evidence of
overfitting (I need to do more investigation on this, though).
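For the curious, a sketch of that architecture (the NVIDIA end-to-end layer
stack in Keras; the input shape and cropping follow the common Udacity setup
and may differ from my exact pipeline):

    # NVIDIA end-to-end CNN: camera image in, steering angle out.
    from tensorflow.keras.layers import (Conv2D, Cropping2D, Dense,
                                         Flatten, Lambda)
    from tensorflow.keras.models import Sequential

    model = Sequential([
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=(160, 320, 3)),  # normalize
        Cropping2D(cropping=((70, 25), (0, 0))),  # drop sky and hood pixels
        Conv2D(24, 5, strides=2, activation="relu"),
        Conv2D(36, 5, strides=2, activation="relu"),
        Conv2D(48, 5, strides=2, activation="relu"),
        Conv2D(64, 3, activation="relu"),
        Conv2D(64, 3, activation="relu"),
        Flatten(),
        Dense(100, activation="relu"),
        Dense(50, activation="relu"),
        Dense(10, activation="relu"),
        Dense(1),                                 # steering angle (regression)
    ])
    model.compile(optimizer="adam", loss="mse")   # train on (image, angle) pairs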

This was all done as part of Udacity's Self-Driving Car Engineer Nanodegree,
which I am taking part in. Prior to this, I also completed Udacity's CS373
course in 2012, and Andrew Ng's ML Class in 2011. My motivation for all of
this has mainly been my interest in autonomous unmanned ground vehicle
robotics technology. I have an ongoing side-project in developing such a
platform (seemingly back-burner'd a lot, though - life getting in the way, I
guess). Even so, if a job offer comes about because of it, I'm not going to
complain.

As it is, I believe the knowledge has helped me land positions, since it shows
my dedication to improving my skills in problem domains outside of everyday
software development tasks. When potential employers have asked about it, I
can show them some code I've worked on, while mentioning how some of the
simpler ML methods could help in a business problem domain. It sets me apart
somewhat from other candidates, I believe.

Especially from those who think the topic isn't worth their time to learn,
because it may be "difficult and complex".

------
id122015
It doesn't matter how far technology gets; there are some -isms, and they are
the real dinosaurs that we have to get over, and we are too small.

------
bgdkbtv
Oh yeah? Does Mark Cuban know artificial intelligence himself or is he just
asking people to study it and work for him? :)

------
acd
I am a sysadmin/devops person; what machine learning tools and topics would
you recommend learning for that field?

------
mi100hael
That page has so much JS bloat it made my top-of-the-line MBP lag just
scrolling.

------
bluekite2000
Does anyone know if there is a business need for humans to train/label data? I
have been thinking of going to a place with cheap labor costs (perhaps
Vietnam) and setting up an operation like this.

~~~
cr0sh
In the short term, perhaps.

In the long term, possibly not - there are already efforts in ML/deep learning
to get models to label unlabeled data (google "deep learning unlabeled data"
for tons of research info).

------
cstuder
Ok, so where do I start?

------
rocky1138
Why don't we just learn it in 3 years, then?

------
general_ai
No, you're not going to be a "dinosaur". 99% of extremely well compensated
software engineering jobs do not involve ML. Using top large companies as a
proxy for what things will be like in the world at large 3 years from now,
maybe one in 200-300 engineers there does anything in any way related to ML.
And that's a generous estimate. You do need to know what it is, roughly, but
there's no need to drop everything you're doing and switch careers.

------
otikik
Yeah. I'm going to be a clickbaitsaurus.

