
Mark Cuban says studying philosophy may soon be worth more than computer science - bayareabronco
https://www.cnbc.com/2018/02/20/mark-cuban-philosophy-degree-will-be-worth-more-than-computer-science.html
======
majos
His argument seems to be that advances in AI will mean that software
development itself is soon automated away, in which case a CS degree will be
useless. Then, he advocates studying philosophy because this future will need
"big picture" thinkers.

I don't know how much substance there is to discuss here. It pretty much
hinges on how you think AI will develop over the coming decades. I am
skeptical that software development will be automated away anytime soon,
especially since the actual writing of code is just one part of the job. I
recall an Economist article several years back arguing that jobs are less
automatable the more varied they are. For this reason custodians should be
among the last to go, the logic being that a given custodian does so many
different maintenance tasks that it would be impractical for a bot to replace
them.

Software development seems similar. You're constantly pivoting between
different and shifting problems. Most of the impressive AI demonstrations like
image recognition or playing Go are really about getting good at a single
well-defined task. We seem to be _very_ far away from the kind of adaptability
required to automate away developers (though perhaps the CS degree will
devalue as more people pursue it).

It's also unclear to me how exactly philosophy is better preparation. My own
experience with academic philosophy is limited, but from what I can tell its
main selling point is that it teaches you to examine things in a critical way
that's very aware of the argument in question, the assumptions made, and the
structure the argument sits in. My experience in math (which I pursued much
more) was similar, and I imagine that most good CS programs teach similar
skills?

"Study philosophy" here seems like a catchy (and perhaps poor) proxy for "make
sure you can reason about stuff outside a narrow programming bubble". But
that's always been good advice.

~~~
freedomben
> _from what I can tell its main selling point is that it teaches you to
> examine things in a critical way that's very aware of the argument in
> question, the assumptions made, and the structure the argument sits in._

That's definitely a big (and important) part of it. There's an unfortunate
dearth of critical thinking and examination of ideas these days. I have
theories as to why that is, but it's out of scope for this discussion so I
will abstain.

Another big part of philosophy is familiarizing the student with the big
ideas, in the hope that they can be refined further, driving forward both the
individual thinker and, hopefully, humanity as a whole.

For example, I think a hugely valuable part of philosophy training is the
study of ethics. So many people (software engineers and companies included)
don't really stop to think about the ethics surrounding what they are doing.
This is a mistake, IMHO. I highly recommend reading about and understanding
Immanuel Kant's work in this area, especially his categorical imperative and
hypothetical imperative. Comparing and contrasting that with John Stuart Mill's
Utilitarianism, and conducting various thought experiments makes for a good
time (you can tell I'm quite the partier :-D ).

~~~
mistermann
> There's an unfortunate dearth of critical thinking and examination of ideas
> these days. I have theories as to why that is, but it's out of scope for
> this discussion so I will abstain.

As someone who sees the same things but has no theories, I for one would like
to hear a few of yours.

~~~
pasabagi
I'm not the other poster, and this isn't my theory - but for people like
Adorno or Hegel the go-to explanation for this phenomenon is basically that
reasoning in our society is ends-rational. We're very good at considering
ideas insofar as they relate to a goal - for instance, the geometry of
transistors in relation to their performance. That's because we're a culture
of employees - and in such a culture, you don't need to think about goals,
just about means.

This kind of thought process is pretty toxic to critical thinking and
examination of ideas, since the most central elements (our goals) are almost
entirely unexamined - and we don't really include their examination in our
education, since we're only interested in preparing children to be good
'problem solvers', or in other words, employees.

------
danbruc
Firstly, AI is currently massively overhyped and people are overestimating the
progress we are making. What we are currently doing is curve fitting and
searching on steroids, enabled by the ever-increasing availability of computing
power; what we know about general intelligence hasn't changed much in a long
time, at least as far as I know.

I won't rule out that we may build an AGI by just throwing enough computing
power at it and simply simulating brains with enough fidelity but without
actually understanding how it works, at least not at first. But I really don't
see much - if any - progress towards building an AGI grounded in knowing what
we have to do. Maybe the secret of general intelligence is just curve fitting and
searching at massive scale and we fail to find the secret behind it because
there is none, but we really don't know.
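To make the "curve fitting on steroids" framing concrete, here is a toy sketch (purely illustrative - a made-up target function, not anything from the article): throwing more parameters and compute at a fit drives the error down, without yielding any understanding of why the data looks the way it does.

```python
import numpy as np

# Fit noisy samples of an unknown function with increasingly flexible
# models. More parameters give a better fit, but no insight into the
# underlying process that generated the data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # hidden "truth" + noise

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)  # least-squares curve fit
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: mean squared error = {mse:.4f}")
```

The error shrinks as the model grows, yet the fitted coefficients say nothing about the sine wave "physics" behind the data - which is roughly the complaint being made here about current ML.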

That of course doesn't mean that the recent developments are not interesting
or don't have useful applications; they are just not as groundbreaking as
often portrayed, and we probably still have a rather long way to go.

With regards to big-picture thinking and general problem-solving ability, I
don't think you are in a particularly bad spot as a software developer or
hacker - especially not if you are doing project work or frequently change jobs
and regularly have to get familiar with new business domains, or if your
focus is more at the design and architecture end of the spectrum as compared
to coming up with beautiful-looking CSS.

But I would certainly agree that philosophers have some ways of thinking that
are, at least to some degree, unlike what you find in software developers, though I
can't really pin it down. I only know the difference exists because I am frequently
somewhat surprised by the thoughts and ideas that come up in philosophy;
they seem not like things I would think of. Maybe it is just the level to
which philosophers dissect things, which goes beyond what is required in
software development, but I really can't tell for sure.

~~~
gh02t
I think another aspect of the current boom in AI is that it is _model-less_
curve fitting (to use your description), and that this is both revolutionary
and regressive. On the one hand, machine learning algorithms let us solve fuzzy
problems that we don't necessarily understand deeply, but the trade-off is
that those solutions don't give us any real fundamental insight into the
nature of the problem the same way e.g. discovering new physics would. So we
are now able to solve all these problems that we couldn't before, but in a
sense these solutions are unfulfilling because they reveal little else.

------
curiousgal
For some reason, despite his being extremely accomplished, I just can't
take any advice he gives seriously, compared to Bill Gates or even Ray Dalio.

~~~
jacquesm
Mark Cuban got supremely lucky selling broadcast.com to Yahoo in the deal that
best illustrated the crazy boom years of the .com bubble and believes that
this makes him some kind of sage instead of a lottery winner.

You just can't compare him to Gates who got where he is today through a
combination of luck at birth and supremely hard work (and some questionable
business tactics).

~~~
bdcravens
Cuban's stake in Broadcast.com was worth about $100M before they sold to
Yahoo, and since the sale he's managed to more than double his net worth
via the Mavericks and his business interests. He may not be Gates, but I think
he's more than a lottery winner.

~~~
raldi
He would be wealthier today if he had immediately put everything in an S&P 500
index fund and spent the last 20 years ignoring it.

~~~
adventured
That's factually wrong.

The S&P 500 was at 1300 when he sold to Yahoo. He received $1.4 billion of
Yahoo stock (14.6M shares at $95 each). He had to pay taxes on clearing out
that position as well, so you can safely assume his after-tax position was
more like ~$1.1 billion. He very solidly beat the S&P 500.

He's worth $4.08 billion today according to Bloomberg (ranked #500 on their
global rich list). He has increased his net worth by nearly 4x.

The S&P 500 is at 2747; it has just over doubled since April 1999. He'd be
missing a billion dollars if he had gone with the S&P 500, even including
dividends.

He was better off with what he did: he beat the S&P 500 index fund. Not only
did he make more money his way, he got to do what he wanted with his money,
such as buying the Mavericks.

------
zitterbewegung
I studied philosophy in college but got my CS degree. I think some guy called
Paul Graham did so too. I think philosophy is useful for detecting whether
someone is lying to you or doesn't know what they are talking about. It does
stress critical thinking in different ways than, say, math (which I also
studied).

I can't really take what he is saying seriously, though, and I don't believe
it at all. I think this story is just to grab headlines.

~~~
smt88
> _philosophy is useful for detecting if someone is lying to you_

What do you mean? What did philosophy teach you about lie detection?

~~~
mythrwy
Ah, you caught him. You must also have taken philosophy.

------
fidels
> Cuban advises ditching degrees that teach specific skills or professions and
> opting for degrees that teach you to think in a big picture way, like
> philosophy.

I don't think my computer science degree taught me 'specific skills' so much
as it taught me how to learn - how to pick up new concepts in a short period
of time. I think he does not fully understand what a computer science degree
is or how artificial intelligence works.

------
ilaksh
Personally I think that computer programming AIs that can really replace
humans (beyond just programming automation) will need to be AGIs. Those AGIs
will be able to do philosophy just as well as they can program.

I also personally believe that we currently have most if not all of the
ingredients necessary for AGI. My prediction is that in 2018 or 2019 we will
see public demonstrations of AGI. These initial demos will be underwhelming
and not have the capacity of humans or even animals -- but nonetheless will be
truly general intelligences with significant human- and animal-like abilities.
It will take another few years before the technology is widely recognized to
be human-equivalent or better, but the initial systems will be trained in
numerous fields quickly. There will be very powerful general systems available
by 2021.

Some of the pieces necessary for general intelligence as I see them are truly
general purpose inputs and outputs (like a body or virtual body), highly
efficient processing of high bandwidth input and output, fast online
(immediate) learning, hierarchical learning and computation, the ability to
learn and recombine flexible sequences and time-based data. Advanced neural
net-based systems, especially ones that are inspired by animal cognition
systems, are providing all of these features. It's mainly a matter of
integrating existing leading edge neural-net research with a good
understanding of AGI.

------
dmfdmf
Utter nonsense, for two reasons.

First, he is reversing cause and effect: AI is a philosophic problem
(more specifically, an epistemological problem). The philosophy departments
have been doubting the efficacy of reason for over a century now, a process
that started with Descartes, was advanced by Hume, and was cashed in by Kant
to save faith from reason (i.e. subjectivism). If modern philosophy had
actually been working to validate reason and explain how it works, we would already have AI.
AI is on hold until someone explains how reason works, and more specifically
concepts, in enough detail so that we can program a machine to do it.[1]

Second, if you signed up for philosophy courses or a PhD today, perhaps you
would learn how to be a more critical thinker but the danger is you would be
infected with navel-gazing skepticism and impotence of reason and thus wreck
your ability to think, big or small picture. If you are really talented you
can get tenure writing essays destroying the work of your colleagues without
generating a shred of original work.[2]

[1] Someone from the early work on computers said (I paraphrase); "Show me
what the mind is doing and I can make a machine that can think". I want to say
von Neumann but I've never found a source for that statement. If anyone knows
who said it and where I would be very grateful for a reference.

[2] LBJ: “Any jackass can kick down a barn but it takes a good carpenter to
build one."

------
stmfreak
Many people think the current buzz in AI is about intelligent machines rather
than highly repeatable pattern recognition. I suspect we are several major
advancements away from sentient machines, and in a few years the buzz over ML
will die down as we realize that it isn't much more than a great way of
sorting inputs.

~~~
whataretensors
The buzz may also continue to increase. The power of even limited narrow AI is
only starting to show.

------
hatchoo
I have personally taken it upon myself to learn more about psychology. I've already
spent most of my life learning about machines and how to control them. It's
time to learn more about people.

------
kabdib
"We don't need people to engineer buildings any more, because we have
architects and city planners."

I wouldn't expect a philosophy major to leapfrog into a compensation package
similar to a competent engineer's. At best they're going to be like PMs with
domain knowledge, and working experience building software will be required
for folks on product teams. It's just another case of individuals with
cross-discipline expertise being more valuable than ones with a narrower
focus, which is nothing new in the workforce.

------
kk58
AI hypers don't spend a minute building ML models or munging data. We're so
far away from AI being able to do it all.

~~~
cooper12
This guy doesn't even know how to program. Some exec probably bamboozled his
gullible self, and now he's making an ass of himself in headlines.

~~~
programmer_dude
[https://www.forbes.com/pictures/mme45gmef/mark-cuban-the-pro...](https://www.forbes.com/pictures/mme45gmef/mark-cuban-the-programmer/#789f94bf3300)

~~~
cooper12
Okay, he does know how to program. But he still doesn't know fuck-all about
AI.

------
koonsolo
Reminds me of what an ex colleague used to say: Every scientist is a
philosopher, but not every philosopher is a scientist.

Let me explain: philosophers love questions and problems, but the difference
with scientists is that the latter like to find answers to those questions and
solutions to those problems. In practice, it seems that philosophers like the
questions more than the answers, and prefer that they remain unsolved.

------
pfarnsworth
Why do people keep quoting Mark Cuban? He got lucky by selling his company to
Yahoo just before the dotcom crash. I don't know of any other worthy thing
he's done since then. I really wish reporters weren't so lazy, parroting
things from people they've heard of without qualifying how useful or effective
the source is.

~~~
gargarplex
If you define "worthy" as "financially successful", then there is a lot:

HDNet

So many media appearances that he's a celebrity and his appearance at a
location becomes valuable/monetizable. (Dancing With The Stars, Entourage,
etc.)

Shark Tank (9 seasons of broadcast /syndicated tv - think about money from the
reruns)

Dallas Mavericks (NBA team) owner who has seen the franchise appreciate in
value, not to mention won a championship

And last but not least he is accessible to entrepreneurs and responsive via
email.

~~~
adamnemecek
All those things are a result of the first thing.

~~~
gargarplex
I don't see the point of your comment, and I would also say it's more complex
than simply the deterministic result of selling Broadcast.com to Yahoo! and
smartly locking in the stock price with a collar trade. I'd say it's the
product of his charisma, natural talent, work ethic, experience at previous
companies such as MicroSolutions, and the like.

------
gremlinsinc
While I agree that someday (more likely 40+ years from now) AI might be
general enough to code new and better AI, I don't think philosophy will be the
go-to college degree.

I think the arts possibly will be, because if AI is doing all the work,
building the industries and technologies we want, then I feel science and
entertainment will be two things AI can't do. Science, because it takes human
curiosity to even know what we want to know or search for, and the whole point
of knowing is curiosity - I think science will exist for a while, albeit
augmented fiercely by AI.

Entertainment, because if there are no jobs as accountants, lawyers, doctors,
or anything else - if we end up in post-scarcity - one thing we will need is
something to do with all our free time, and that something will be binging on
Netflix and social media 24/7.

------
JoshMnem
> Cuban advises ditching degrees that teach specific skills or professions and
> opting for degrees that teach you to think in a big picture way, like
> philosophy.

I'm not sure if big picture thinking skills come from studying one specific
field like philosophy by itself. I'd expect interdisciplinary thinking and
life experiences to be more important.

Examples: meet many different kinds of people from around the world who
challenge your preconceived ideas, and listen with an open mind. Work many
different kinds of jobs (or volunteer). Approach learning from an
interdisciplinary perspective and study multiple fields.

Studying philosophy isn't a bad idea, but I don't think that one field by
itself is the solution.

------
tw1010
Mark Cuban says a lot of things

~~~
WheelsAtLarge
Right, he loves to hear his own voice. Next month he will say the opposite.

------
dude01
At the least, we'll need some kind of overseer for what these "AI"s are doing.
These people will need strong analytical abilities and the ability to deal
with various levels of abstraction - something that computer science people
surely have.

Yeah, there's not much good content here, but it is probably useful to know
what people like him are thinking (the new American oligarchs?) - if, in fact,
he is saying what he actually thinks. The only thing we can be sure of is that
he and his people are scared of AI.

------
gilbetron
If computer science is worthless, the rest of the jobs will be as well. CS is
about solving problems in (almost) its purest form. If AI can do that, humans
are done.

~~~
make3
also afaik we're nowhere close to being able to use "ai" to do that reliably.
it's a use case really distant from supervised learning

------
the_cat_kittles
stop listening to people just because they are rich. i dont even mind this guy
that much, but what evidence is there that this guy knows what he is talking
about?

------
electrograv
Why philosophy in particular, though?

I am sympathetic with worries about how automation will affect the job market,
and I love philosophy -- but I saw no explanation in this article for why Mark
Cuban believes _philosophy_ in particular will be more valuable.

~~~
randomdata
Because philosophy is seen as the de facto "worthless" degree in pop culture?
Philosophy majors don't need to earn more in the future; computer science
majors just have to earn less, which isn't completely unrealistic. The BLS
shows that developer incomes are already in decline.

Pay, of course, is determined by supply and demand. Developers currently do
well because there are more jobs than people, but that can quickly change (and
is changing, albeit slowly). The most common job in America only makes up
about 3-4% of the entire workforce. Even if software jobs overtake that and
grow to be the most common job in existence, if 10% of the population are
willing and capable of being developers (not unbelievable given the current
push by the education system to turn everyone into developers), it'll trend
towards being a minimum-wage job.

And that's assuming that the software labour industry grows substantially from
its present size. Cuban is expecting it to shrink as AI takes on more and more
development tasks, which, if true, will mean we need even fewer software
professionals than we do today.

------
two2two
He _may_ not be wrong. Check out Getting to Philosophy on Wikipedia. [0]

"There have been some theories on this phenomenon, with the most prevalent
being the tendency for Wikipedia pages to move up a "classification chain."
According to this theory, the Wikipedia Manual of Style guidelines on how to
write the lead section of an article recommend that the article should start
by defining the topic of the article, so that the first link of each page will
naturally take the reader into a broader subject, eventually ending in wide-
reaching pages such as Mathematics, Science, Language, and of course,
Philosophy, nicknamed the "mother of all sciences"."

[0]
[https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosoph...](https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy)
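That "classification chain" can be sketched with a toy first-link graph (the links below are made up for illustration; real Wikipedia first links change over time and sometimes loop or dead-end):

```python
# Each page maps to the first link in its lead section. Following that
# link repeatedly walks "up" the classification chain toward broader
# topics, which is the mechanism the quoted theory describes.
first_link = {
    "Transistor": "Semiconductor",
    "Semiconductor": "Material",
    "Material": "Matter",
    "Matter": "Physics",
    "Physics": "Science",
    "Science": "Knowledge",
    "Knowledge": "Philosophy",
}

def chain_to_philosophy(page, graph, limit=50):
    """Follow first links until Philosophy, a dead end, or a loop."""
    seen = []
    while page is not None and page not in seen and len(seen) < limit:
        seen.append(page)
        if page == "Philosophy":
            break
        page = graph.get(page)  # None on a dead end
    return seen

print(" -> ".join(chain_to_philosophy("Transistor", first_link)))
```

With real Wikipedia data the same loop would need an HTML parser to extract the first non-parenthesized lead-section link, plus the loop/dead-end handling above, since not every chain actually terminates at Philosophy.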

------
BigJono
I think it's far more likely that if this process takes place, CS degrees will
morph more towards areas like philosophy and mathematics than software
engineering.

I doubt they're going to be teaching the exact same way in 30 years.

------
debt
“In 10 years, a liberal arts degree in philosophy will be worth more than a
traditional programming degree."

Dang I couldn’t think of a less qualified guy to make that call!

And I love philosophy!

------
WheelsAtLarge
Hard to take anything this guy says seriously. Ever since he announced that he
would be happy to run for vice president, without caring whether it was under
Clinton or Trump, I lost interest in whatever he said. He said this after
regularly blasting Trump as not qualified to be president. He blasted Bitcoin,
then decided it was a must-have. Previously he blasted the stock market as a
gambling den. And I'm sure there are more examples. He loves publicity, and
the press loves to print whatever BS he's saying today. I'm sure tomorrow it
will be something else. Mark Cuban, shut the F--- up!!

------
anonu
This guy keeps pontificating. He must be running for POTUS.

------
pvsukale3
On an unrelated note:

Is it just me, or is the character Russ Hanneman from Silicon Valley based on
Mark Cuban?

~~~
moab9
I was expecting this comment to be number one. 3 comma club.

------
cooper12
I flagged this for being unsubstantiated scarebait.

> That's because Cuban expects artificial intelligence technology to vastly
> change the job market, and he anticipates that eventually technology will
> become so smart it can program itself.

> "What is happening now with artificial intelligence is we'll start
> automating automation," Cuban tells AOL. "Artificial intelligence won't need
> you or I to do it, it will be able to figure out itself how to automate
> [tasks] over the next 10 to 15 years.

This is coming from a guy who doesn't know a lick of programming, let alone
anything about AI. In fact, he mainly dabbles in sports and Shark Tank. Yet
another "you luddites are all going to be replaced, so you better piss your
pants" article, and all it's based on is this guy's gut feeling. Silicon
Valley needs to chill the fuck out with overhyping AI, because it's going to
backfire on them substantially when people start realizing they can't back up
their promises.

