Hacker News
Andrew Ng on Life, Creativity, and Failure (huffingtonpost.com.au)
209 points by igonvalue on Oct 24, 2015 | 28 comments

> People that count on willpower to do these things, it almost never works because willpower peters out. Instead I think people that are into creating habits -- you know, studying every week, working hard every week -- those are the most important. Those are the people most likely to succeed.

That is the one insight I wish I had truly understood years ago. After figuring out that intelligence barely means anything, I thought that willpower would be the most important characteristic (because of endurance, perseverance, yada yada, you know).

It turned out that, just like physical muscle, mental muscle gets exhausted and you can run out of it. You really can't exert willpower all the time; you have to preserve it for the really tough times. As humans tend to do, we revert to our habits for the majority of our lives. Well, I hope it's not a lesson learned too late.

That said, training for habits is damn hard.

If you want to try out a new technique to train for habits, check out this site by BJ Fogg @ Stanford University:


The method trains you to add a new habit to your life by doing a tiny habit every day, anchored to an already existing habit. For example, "Before I visit Hacker News, I will drop down and do TWO pushups." Eventually, you can grow your new habit to make it a bigger part of your life.

There is a lot of focus in the literature right now on developing habits. What I'm interested in is the deeper motivations that drive a person to select their habits in the first place. Why does Andrew read research papers and books all day on a Saturday? Why did you decide to add two pushups to your daily routine? I think that without understanding these deeper motivations, we are left with habits that do not have enough fuel to drive them, leaving us to resort to the habits with the least resistance (i.e., lazy habits).

but what about bob?

That's a very good write-up. The dichotomy he describes between motivation and discipline is something that gradually came to me during my career. And some of the techniques he describes for "tricking" yourself into developing greater discipline are things that I sort of stumbled upon through trial and error. It would have been great to read something like this years ago, although at the time maybe I would not have realized its value.

When you've firmly established a new facet of discipline to the point that it is internalized and becomes automatic, that's when you've got a new beneficial habit. I like how Ng nails that in an almost understated way.

I've built an API that helps developers build apps that create habits.


This reminds me of a zenhabits post about The Myth of Discipline [1]:

[1]: http://zenhabits.net/discipline/

I love this app: http://www.habitbull.com/

>(On Tuesday, Baidu announced it had achieved the world's best results on a key artificial intelligence benchmark related to image identification, besting Google and Microsoft.)

And a week later it was found to have cheated on said test, apologized, and withdrew its results [0].

[0] http://www.technologyreview.com/view/538111/why-and-how-baid...

I have no affiliation with Baidu, but I'd like to defend them on this point a bit because it seems to me that they are getting quite a lot more hate than they deserve. Baidu had, without a doubt, a strong and at the very least near-state-of-the-art visual recognition system. Then a group of people within the company tried to squeeze out the last few 0.01% they could through questionable means. When people erase all context and say Baidu cheated, it sounds as if there was a wide, organized, top-down effort within Baidu to make a bad system look good through blatant cheating. None of which, in my mind, was the case. A slap on the wrist is more appropriate than burning at the stake. </endrant>

I wonder if that incident is an example of their "clear-eyed view of the world and the competition", or if it's more of an example of their "remarkable lack of bravado".

More seriously, the cheating incident linked in the above article could be an example of the autonomy of people at Baidu. The breaking of the rules had to do with testing too many times in a single period of time, via multiple email addresses. They might have got a little over-enthusiastic letting multiple teams work on it in parallel.

I loved his critique of the recent hyperventilation about AI taking over the world--

"I don't work on preventing AI from turning evil for the same reason that I don't work on combating overpopulation on the planet Mars."

As he points out later in the interview, much of the recent progress has been due to a great increase in data and computational power. The history of AI is replete with incredibly overoptimistic predictions of achieving Strong AI. Andrew's focus on the current, important problems of the field bodes well for the future of Baidu's AI work.

Chomsky has been saying this for a long time. He equates our current language tech with stone age tools. Long way to go...

> As Ng explained, "The remarkable thing was that [the system] had discovered the concept of a cat itself. No one had ever told it what a cat is. That was a milestone in machine learning."

If you don't mind, let me call this bullshit out. No one in machine/deep learning thinks this was anything more than PR fluff combined with a very, very weak paper whose authors had to approximately cheat to find the actual cats. (You had to initialize your random vector very close to an actual cat image and do gradient descent before it "figured out" a cat for itself.)

I am not sure why you're saying they cheated. I am a layman, but my understanding of that quote is that a rather large deep neural network, trained unsupervised to encode general images (not just cats), had a neuron representing the concept of "cat". You can run the network in generative mode to see what the general concept of a cat would look like.

So I am puzzled by your comment - you seem to be talking about supervised learning, but I think the network was trained unsupervised.

> You had to initialize your random vector very close to an actual cat image, and do gradient descent, before it "figured out" for itself about a cat


> a rather large deep neural network, which was trained unsupervised to encode general images (not just cats)

These seem to be in contradiction, AFAICT. Or was the vector initialized rather close to general images?

The initial value of the vector determines what it will encode/converge to.
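The back-and-forth above is about what's usually called "activation maximization": start from some input vector, follow the gradient of a chosen neuron's activation, and see what input the process converges to. A minimal toy sketch of why the initialization matters (the two-feature "neuron" here is my own invention for illustration, not the network from the Google/Ng paper):

```python
import numpy as np

# Hypothetical toy unit: it fires on whichever of two input features
# dominates. Projected gradient ascent on the *input* then converges to
# a different "image" depending on where the starting vector lies.

def activation(x):
    return float(np.max(x))          # fires on the dominant feature

def grad(x):
    g = np.zeros_like(x)
    g[int(np.argmax(x))] = 1.0       # subgradient of max()
    return g

def ascend(x0, steps=200, lr=0.1):
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        x = x + lr * grad(x)
        x /= np.linalg.norm(x)       # keep the "image" on the unit sphere
    return x

# Same procedure, two different initializations, two different results:
cat_like = ascend([0.9, 0.1])        # starts near feature 0, converges to it
dog_like = ascend([0.1, 0.9])        # starts near feature 1, converges to it
```

Both runs use identical update rules; only the starting point differs, which is the sense in which "the initial value determines what it will converge to". In a real deep network the landscape has many such basins, so starting the random vector near a cat image biases the ascent toward the cat-like optimum.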

I'm taking his machine learning course [1] and it's absolutely fantastic. It's one of the modern wonders of the internet that you can have such a tutor for free. Plus it's fascinating to see what people go on to do after the course [2].

[1]: https://www.coursera.org/learn/machine-learning/

[2]: https://syrah.co/joshdickson40/5604e5e10fc1786b0152a51a

I've heard that the course is diluted compared to the brick-and-mortar version. Is this true?

At first sight I thought that the microphone was a spermatozoon mad with joy, stirred up by Plutarch's quotation. Unfortunately, that was only an illusion, and life is not that exciting today.

To force your mind to work creatively, you must feed it with lots of examples and experiences. And the suggested way to accelerate the learning process is by showing corner cases.

Innovation places us in a field in which the corner cases are unknown unknowns; his workshop is about a strategy to detect and anticipate corner cases in uncharted territories. That amounts to finding the fount of creativity, and that is no easy feat.

Edited n+1 times for learning English.

> After figuring out that intelligence barely means anything

I think that's probably an exaggeration. A better way to say it might be: intelligence only takes you so far, and true achievement requires something more.

Before you buy into the negativity of some comments on this thread, take a moment to pause. Andrew has achieved some truly remarkable feats. Why not accept that what he has achieved is many standard deviations above average, and try to learn from what he thinks was useful?

To me, the fact that the top comment right now is about how Baidu "cheated" on an AI benchmark says both that no one can have perfect oversight and that, no matter your other achievements, someone will always point out a shortcoming.

I am taking the Machine Learning course [1] on Coursera. I didn't know he co-founded Coursera. Can't believe this awesome course is free. Andrew Ng is really a good teacher. Thank you, Andrew Ng and Coursera.

[1] https://www.coursera.org/learn/machine-learning/


play it, mr. toot

> But often, you first become good at something, and then you become passionate about it. And I think most people can become good at almost anything.

So many people don't get this. When parents send you to learn a profession, don't say "I'll do what I want". You can always do it later.

Instead, go get a proper, extensive education in anything - it will help you immensely, and you might find that you love doing what you learned.

Otherwise, you may waste years being stuck in a loop of finding yourself and your purpose, which sometimes really sucks...

I first heard the notion from someone here on HN: instead of doing what you love, love what you do!

I'd like to think the counter-point to "most people can become good at almost anything" is that becoming "an expert at something" takes passion and commitment that far exceed short-term rewards. This might sound like an endorsement of the "10,000 hours" concept that Malcolm Gladwell espoused, but somebody can spend 10,000 hours playing the same 6 guitar chords, and that doesn't make them a guitar expert.

It might be rather crass to say I'm skeptical of Dr. Ng's cultural perspective, but I am. In the US, it's a badge of honor to have achieved the means to buy a genuine Gibson Les Paul. In a popular culture of looking like a success, such as the one around Baidu, it's relatively acceptable to simply make a copy, often lacking the craftsmanship and utility of the original, simply to have the appearance of success. I'm not saying the US doesn't have its own very blatant issues of posturing (the "30,000 dollar millionaire") based on access to credit; no, that wouldn't be fair in this context.

> I believe that the ability to innovate and to be creative are teachable processes.

That's a big red flag to me. Teaching "creativity" inherently means encouraging insubordinate thought processes. That doesn't make sense. There are teachable avenues for capitalizing on the inherent perspective a student may have, one of rebellion and unique viewpoint, but teaching creativity is, well, about as far-fetched to me as real, genuine AI. It might be possible, but probably not in my lifetime.

I'd like to relate this back to education in that "a grade that isn't earned is no mark of distinction at all": the pursuit of being good is fine, but being excellent means that, once there's focus, it's okay to fail now and again. There is so much evidence of blatant, endemic fraud in China that touting Baidu must come with appropriate skepticism. Just like the Moller Flying Car in the US.
