Hacker News | jmares's comments

The data shows, as you say, that small-scale performance is no indicator of large-scale performance.

How, then, do you decide which projects are worth trying at large scale?


Dear Googlers, it would be interesting to know how computational resources are allocated to new ideas (e.g., Kurzweil's PRTM-based NLU system) at each stage, from prototype genesis to mature technology. What factors come into play?


Could you tell us about project genesis at Bell Labs? How did projects originate, grow, and get killed or morphed?

Thank you.


> project genesis

It was a big company: I pretty much saw the entire spectrum, from formal defn to skunk project.


Thanks jpdoctor. Could you elaborate on what you mean by formal defn? Who defined those projects, and from which principles or goals? (I understand that this might have multiple answers.)


I took CS229 at Stanford.

ml-class.org does a phenomenal job of equipping you with the practical knowledge needed to apply the tools of machine learning to real problems.

There is no reason why learning to use these tools should be hard. If you want a challenge, there are plenty of problems in the world amenable to solution via machine learning, especially in today's data deluge.
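As a rough sketch of how little ceremony the practical route involves (this is my own illustration, not course material; it assumes Python with scikit-learn installed, and uses its bundled digits dataset as a stand-in for a real problem):

    # Minimal sketch: apply an off-the-shelf classifier to a small dataset.
    # Assumes scikit-learn; the digits dataset stands in for a "real problem".
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # 8x8 handwritten-digit images, flattened to 64 features each.
    X, y = load_digits(return_X_y=True)

    # Hold out a quarter of the data so accuracy is measured on unseen examples.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Fit and evaluate without touching any of the underlying math.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

None of this requires knowing how the algorithm is derived; the derivations are what the full course adds.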

If you want a deep mathematical appreciation of the algorithms and their derivations, you should do CS229, not CS229a.


