
Artificial-Intelligence Experts Are in High Demand - mwytock
http://www.wsj.com/article_email/artificial-intelligence-experts-are-in-high-demand-1430472782-lMyQjAxMTA1MzA2MTMwNTEyWj
======
Daishiman
This is suspiciously close to "data science" and "machine learning" experts.

Can't we just be honest and say that most of these are applied statistics jobs
with a specialty in large volumes of data? Or is "statistics" just not
fashionable enough nowadays?

~~~
est
I often heard the Big Data guys hype that there's no sampling in Big Data, you
have the whole data, so it's not exactly statistics.

~~~
walshemj
From my distant memory, if your sample size is the pollution it's still
statistics

~~~
walshemj
Population I of course meant to say !

------
julianpye
This trend is technically driven in some companies, business-driven in most.
Few companies have technical leadership that can manage true AI resources. If
you remember the ML courses from uni and the experts in that field, you can
imagine why. In many universities, AI departments are assigned to schools of
psychology and philosophy. Only companies with a deep engineering culture like
those mentioned here can build up true AI departments.

The other driver is the business side. This is where management demands 'AI
experts' when what they really want is data miners. In many cases management
prides itself on 'AI algorithms', but we know that this is a term for anything
that gets the results management wants; it may be far from intelligent, and in
most corporate cases it is a bunch of SQL scripts.

~~~
netcan
What's the potential path forward (say projecting 10 years ahead) from the
current growth in demand for data mining centric people?

I mean, people go and study in response to demand. They learn data mining and
AI at universities. I think it's often people with backgrounds or aptitude in
maths. What will the 22-year-old with an aptitude for maths who is learning
R, SQL, AI-for-business and such be doing in 10 years?

I don't know if the starting point matters much. "Results driven" work, even
if it's optimising inventory, making ad purchasing decisions, or data mining
old DBs, is not a bad place to "search" for advancements. Not everything needs
to be fundamental research.

~~~
nkassis
I doubt the 22 year old you are referring to will run out of problems to
solve. I also feel that at some point it will be like a lot of software
engineering is today: working for companies implementing solutions similar to
what already exists but tailored to the context of that company's specific
needs. As of now I feel that this field is so young that a lot of the
solutions are almost completely custom built to the problem at hand, and that
a lot of work is needed to abstract away those solutions into higher level
reusable pieces.

------
rayalez
The big thing that prevents me from getting into AI is the lack of practical
projects that I can build.

It is a very interesting field, but as a self-taught programmer I'm used to
learning by building things, and it's hard for me to come up with some project
that would be practically useful and yet doable.

Does anyone have any ideas?

~~~
ASlave2Gravity
Game AIs, stock market bots, character recognition, adaptations to the Game of
Life, computer art wherein you get an AI to paint or to draw and can seed it
with values. There's lots of cool stuff you can hack away at. Even training a
neural net to recognise a '3' is quite interesting.
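One of these fits in a screenful. As a minimal sketch (representing the board
as a set of live cells rather than a fixed grid), a Game of Life step is an
easy base for the adaptations mentioned above:

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life.

    `alive` is a set of (x, y) live cells; returns the next generation.
    """
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next turn with exactly 3 neighbours,
    # or with 2 if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
step1 = life_step(blinker)   # the vertical bar {(1, 0), (1, 1), (1, 2)}
step2 = life_step(step1)     # back to the horizontal bar
```

Swapping in different birth/survival counts is exactly the kind of "adaptation
to the Game of Life" you can explore from here.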

~~~
protonfish
These things suck. I want a robot with sensors, the ability to move, and an
arm that can be programmed in a language appropriate for AI. That doesn't
sound like something technically difficult.

~~~
jfoutz
So, what's holding you up exactly? RC cars are cheap, and people have done
cool stuff with old Android phones as sensor packages. If you need more
horsepower, stream the data back to a PC; you've got wifi on the phone. New
industrial arms are expensive, but you can scrounge one or get a hobby one.
SparkFun had a uArm that would probably work for you.

~~~
protonfish
Because I am a software developer and intelligence hobbyist (from the
biology/ethology camp). I don't know a damn thing about RC cars or Android
phones, nor do I have the time or desire to learn. Sadly, the hardware
tinkerers don't know a damn thing about programming intelligent behavior.
Until there is some sort of API to connect hardware to a software environment
programmable by specialists in that domain, hobbyist robotics will remain in
the realm of Battlebots.

~~~
jfoutz
Hmm. There are a few options. You could get a Roomba and put a laptop on top
of it, use the laptop camera to sense, and the serial port to command the
motors. I kinda think you could use Python to glue everything together.
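The serial side of that glue can be sketched as a pure packet builder. The
opcode and encoding below follow my reading of iRobot's Open Interface spec
(DRIVE is opcode 137 with signed 16-bit big-endian velocity and radius), but
treat them as assumptions and verify against the actual documentation before
sending anything over pyserial:

```python
import struct

def drive_command(velocity_mm_s, radius_mm):
    """Build a Roomba Open Interface DRIVE packet (opcode 137).

    Velocity (-500..500 mm/s) and turn radius (-2000..2000 mm) are
    packed as signed 16-bit big-endian values after the opcode byte.
    Opcode and ranges are assumptions; check iRobot's OI spec.
    """
    return struct.pack('>Bhh', 137, velocity_mm_s, radius_mm)

# Straight ahead at 200 mm/s (0x7FFF is a "drive straight" sentinel
# radius in my reading of the spec).
packet = drive_command(200, 0x7FFF)
# You would then write `packet` to the serial port, e.g. with pyserial:
#   serial.Serial('/dev/ttyUSB0', 115200).write(packet)
```

Keeping the packet construction as a pure function like this means the fiddly
byte-level part is unit-testable without any hardware attached.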

Mindstorms are really easy to get started with if you want more control over
what the robot will look like, but you'll be facing more mechanical
engineering problems.

diydrones has a lot of good information about getting things connected. Most
people look to RC cars because they're so inexpensive.

So, the big thing with hardware is it kind of sucks. It always feels like
using a butterknife to tighten screws. Things are challenging in ways you
don't expect. With software, if you make a mistake, you fix it and recompile.
With hardware, you hope nothing breaks.

I too wish it were easy. It's not; it's complicated in ways that suck. If you
have money to throw at the problem, I bet you can find a nice industrial
platform that's built like a tank and has a nice API. That kind of stuff is
thousands of dollars, and I'm not familiar with it.

------
nerdy
AI is very interesting but not very accessible because it's so specialized. I
have a fairly strong programming background but feel like I'd need to study
theory for a significant amount of time to even get my feet wet with AI.

If you have (condensed, especially) AI resources that you think would help
bridge that gap, please share! Toy-scale project ideas would also be
appreciated.

~~~
kriro
Buy and work through "Artificial Intelligence: A Modern Approach". It's a huge
book and the de facto standard for pretty much every AI 101+ course. Some of
the stuff may interest you, some may not, but it covers a broad range (from
logic-based agents to Bayesian networks). It's systematic and has excellent
references and further reading notes for each chapter. The focus is not on the
currently sexy "data science" aspects, though (however, you will find plenty
of material that is relevant).

The edX class from Berkeley is pretty fun and hands-on. It uses Pacman as a
running example and essentially teaches the agents stuff from AIMA:

[https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-cs188-1x-0](https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-cs188-1x-0)

The Stanford class by Thrun and Norvig (the latter one of the authors of AIMA)
is also good, but I prefer the edX one:

[https://www.udacity.com/course/intro-to-artificial-intelligence--cs271](https://www.udacity.com/course/intro-to-artificial-intelligence--cs271)

Edit: changed to direct links for the courses

~~~
tansey
The AIMA book is sort of a Good Old-Fashioned AI (GOFAI) book that focuses a
lot on agents and planning. The jobs this article is talking about are really
machine learning ones: taking large volumes of data and extracting knowledge,
so as to build recommender systems and such. For that, Kevin Murphy's book,
"Machine Learning: A Probabilistic Perspective", is without a doubt the best
book out there, both in terms of explaining things from the ground up and
being the most comprehensive/up-to-date source.

~~~
kriro
There's still quite a bit of material on Bayesian networks (with the dreadful
dentist example :D), neural networks, and support vector machines, but overall
you're right that the focus is on agents. The relevant chapters are great
starting points, though, and as always filled with great reference material
for further reading.

\+ I'm pretty sure that if you apply for an AI job somewhere and it's labeled
AI and not "data science", they'll expect that you know the material in AIMA.

------
astrocyte
When true strong A.I. hits, I feel the confusion will quickly lift. You'll
know because all of the people with weak A.I.:

> Used mainly to strip information value from people without compensation

> Who are dumping money into foundations to prevent the coming of its truer
> form

will be screaming 'It's the end of the world'. Until then, enjoy the
algorithms. It's the nature of business to over-sell. Don't be too upset by
it.

------
lowglow
I'm starting an SF-based robotics/AI/ML workshop/meetup/club next week. Hit me
up at dan@techendo.com if you want an invite -- or join this group:
[https://www.facebook.com/groups/762335743881364/?ref=br_rs](https://www.facebook.com/groups/762335743881364/?ref=br_rs)

------
peter303
We went through a round of this in the 1980s. The first commercial graphics
workstations happened to be LISP machines, so management confused non-numeric
code with A.I. There was demand for workstation experts. Not too long after
this, UNIX graphics workstations like Sun, Apollo, and MicroVAX came out and
the market switched to UNIX/Linux.

Second was the expert systems boom in the mid-1980s. This was fanned by
Stanford professor Feigenbaum, who wrote the infamous book The 5th Generation
about expert system computers being the future and Japan building the best
ones. These would either be LISP machines or an interesting French niche
language called prologic. Prologic basically traversed a database of "if-then"
rules (modus ponens). These machines went nowhere and Japan's economy tanked
in the early 90s. A lot of Silicon Valley VCs lost big on this.
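That rule-traversal idea (modus ponens over a database of if-then rules) is
simple enough to sketch in a few lines of Python as forward chaining; the
facts and rules here are invented for illustration:

```python
def forward_chain(facts, rules):
    """Apply if-then rules (modus ponens) repeatedly until a fixpoint.

    `rules` is a list of (premises, conclusion) pairs: whenever every
    premise is a known fact, the conclusion is added as a new fact.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Toy "expert system" with two chained rules (hypothetical domain).
rules = [({'has_fever', 'has_rash'}, 'suspect_measles'),
         ({'suspect_measles'}, 'order_blood_test')]
derived = forward_chain({'has_fever', 'has_rash'}, rules)
# `derived` now contains 'suspect_measles' and 'order_blood_test'.
```

The 1980s expert systems were essentially this loop plus thousands of
hand-written rules, which is also why they collapsed under their own
maintenance burden.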

Prof Feigenbaum may still be correct, but 40 years early. However, the new
A.I. is driven by the massive database matching made possible by modern
peta-level computers, and not so much by logical computing.

~~~
VLM
That would be prolog. You'll have much better luck googling for prolog. I
played with "turbo prolog" in the 80s and accomplished nothing (from the same
place as turbo C or turbo pascal or turbo basic (am I forgetting any?)). A
modern variant (of a logic oriented language) can be seen here:

[https://github.com/clojure/core.logic/wiki/A-Core.logic-
Prim...](https://github.com/clojure/core.logic/wiki/A-Core.logic-Primer)

It tends to suffer from management-by-scalable-procedure disease. It's
possible to successfully replace a human assembly line worker with a robot arm
and a very small shell script, which inevitably leads overactive imaginations
to think of replacing engineers or doctors with an immense set of
unfortunately undefinable, unscalable procedures and rulesets, so it always
collapses under complexity at implementation time. It's like moths to a flame:
you should be able to replace an engineer with a very long list of if/then
statements, but it turns out to be impossible in practice. Meanwhile, the more
advanced techniques butt up against the rapidly scaling "DBA"/"IT" type of
traditional solutions or non-traditional big-data techniques.

It's hard to find something to logic program that isn't less verbose in a
non-logic language or unwritable in any language including logic programming.
It's like the Perl regex thing where you've got a problem, so you write a
regex, and now you've got two problems. It's a very narrow although
interesting niche. Finding something that fits would be pretty cool, although
probably very difficult to maintain.

------
100timesthis
When the WSJ writes about it, it means the trend is over.

------
tvsaugt
I wish there were positions like this available in Germany ...

~~~
iamcurious
Interesting. Do you know if there are any positions in the UK or any other
part of Europe?

~~~
tvsaugt
I think the Amazon ML Group have something going on in the UK.

~~~
iamcurious
I will check them out. Thanks!

------
wimagguc
I wonder what all the AI is going to be used for. Is everyone working on their
own Siri and recommendation engine now?

(Is anyone building an AI that can come up with its own agenda?)

~~~
aangjie
I think (not sure, as I don't closely follow the work) the AGI people and MIRI
work along those lines, i.e. how to make sure a super AI doesn't go wrong but
comes with objective functions beneficial to humans.
([https://intelligence.org/](https://intelligence.org/))

------
graycat
Considering being an employee such as in the OP, I have two reactions:

(1) Take statistics, machine learning, neural nets, artificial intelligence
(AI), big data, Python, R, SPSS, SAS, SQL Server, Hadoop, etc., set them
aside, and ask the organization looking to hire: "What is the real world
problem or collection of problems you want solved or progress on?"

Or, look at the desired ends, not just the means.

(2) Does the hiring organization really know what they want done that is at
all doable with current technical tools or only modest extensions of them?

Or, since _artificial intelligence_ is such a broad field, really, so far of
mostly unanswered research questions, and the list of topics I mentioned is
still more broad, I question if many organizations know in useful terms just
what those topics would do for their organization.

So, for anyone with a lot of technical knowledge in, say, the AI, etc.,
topics, it is important for them to be able to evaluate the career
opportunity. I.e., is there a real career opportunity there, say, one good to
put on a resume and worth moving across country, buying a house, supporting a
family, getting kids through college, meeting unusual expenses, e.g., special
schooling for an ADHD child, providing for retirement, making technical and
financial progress in the career, etc.?

So, some concerns:

(A) If an organization is to pay the _big bucks_ for very long, e.g., for
longer than some fashion fad, then they will likely need some valuable results
on their real problems for their real bottom line. So, to evaluate the
opportunity, should hear about the real problems and not just a list of
technical topics.

(B) For the opportunity for the _big bucks_ to be realistic, really should
know where the money is coming from and why. That is, to evaluate the
opportunity, need to know more about the _money_ aspects than a $10/hour fast
food guy.

(C) As just an _employee_ , can get replaced, laid off, fired, etc. So, to
evaluate the opportunity, need to evaluate how stable the job will be, and for
that need to know about the real business and not just a list of technical
topics.

(D) For success in projects, problem selection and description and tool
selection are part of what is crucial. Is the hiring organization really able
to do such work for AI, etc. topics?

Or, mostly organizations are still stuck in the model of a factory 100+ years
ago, where the supervisor knew more and the subordinate was there to add
_muscle_ to the work of the supervisor. But in the case of AI, etc., which
_supervisors_ really _know more_ , or much of anything? Which hiring managers
know enough to do good problem and tool selection?

Or, if the _supervisors_ don't know much about the technical topics, then
usually the subordinate is in a very bad career position. This is an old
problem: One of the more effective solutions is some high, well respected
_professionalism_. E.g., generally a working lawyer is supposed to report only
to a lawyer, not a generalist manager. Or there might be professional
licensing, peer review, legal liability, etc. Or, being just an AI technical
expert working for a generalist business manager promises in a year or so to
smell like week old dead fish.

(E) If some of the AI, etc., topics do have a lot of business value, then
maybe someone with such expertise really should be a founder of a company,
_harvest_ most of the value, and not be an employee. So, what are the real
problems to be solved? That is, is there a startup opportunity there?

Really, my take is that the OP is, net, talking about a short term fad in some
_topics_ long surrounded with a lot of hype. Not good, not a good direction
for a career.

AI and hype? Just why might someone see a connection there?

------
raverbashing
However, with the abysmal standards of hiring, I'm sure a lot of companies
would pass on very good candidates because they won't write FizzBuzz on the
board for you, or would pass on Peter Norvig because his code is not PEP 8
compliant.

~~~
SilasX
But there would still need to be a fizzbuzz style filter for AI jobs to keep
out the people who overstate their abilities. Is there a good test for basic
AI job capability?

~~~
raverbashing
You just interview people.

Ask them to explain what an SVM is. Ask them to explain how training a linear
perceptron works. This kind of stuff.
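The linear-perceptron question at least has a short, checkable answer: on each
misclassified example, nudge the weights toward (or away from) the input. A
minimal sketch, on a made-up AND dataset:

```python
def train_perceptron(data, epochs=20, lr=1.0):
    """Classic perceptron learning rule on 2-D points.

    `data` is a list of ((x1, x2), label) pairs with labels in {-1, +1}.
    On each mistake, the weights move by lr * label * input.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

# Logical AND: linearly separable, so the rule is guaranteed to converge.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
assert all((w[0] * x1 + w[1] * x2 + b > 0) == (y == 1)
           for (x1, x2), y in data)
```

A candidate who can both write this and explain why it cannot learn XOR has
probably understood more than the book definition.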

~~~
SilasX
That tests book knowledge, not actual understanding of how to think about AI,
and leaves itself open to interviewer bias, the very things fizzbuzz avoids.

~~~
raverbashing
The idea that FizzBuzz has no interviewer bias is laughable at best.

I can _guarantee_ some will pick and reject people because they did things in
a way they didn't like.

------
GigabyteCoin
How can there be experts on a subject that doesn't yet exist?

~~~
deanmen
AI experts don't know how to create an artificial intelligence. AI researchers
study how to solve various problems in CS traditionally performed by humans,
but that humans don't solve by carrying out an algorithm by hand: Natural
Language Processing, machine learning, automated reasoning, search (e.g.
chess).

~~~
peter303
There are at least two flavors of A.I. "Strong A.I." seeks to build something
human-like that could pass Turing's imitation game test. The new movie Ex
Machina explores this test. "Weak A.I." replicates just a single cognitive
skill, like game playing, pattern recognition, natural language, or something
more trivial. Most recent A.I. work is on the latter, or its theoretical
background.

