Hacker News | crc's comments

> 229A is close to worthless
> This course included gems such as "if you don't know what a derivative is, that is fine".

It also included other gems, like debugging models with learning curves, stochastic gradient descent, artificial data, and ceiling analysis. I have not come across practical things like these in the more mathematically oriented ML books I have tried reading in the past.

Interestingly, your arrogance is in sharp contrast with the humility of the professor, who admits in places that he went around using tools like SVMs for a long time without fully understanding the mathematical details.


You should put an end to this mode of trying to figure out some kind of globally optimal sequence of learning. There is none. If somebody tells you otherwise, they are either lying or they have forgotten the countless hours/days/months/years they spent honing their craft, and in retrospect they feel they could have avoided all this if only they had known some optimal way to learn. You are in a partially observable environment. The only way to learn is to explore a bit, consider what you now know, re-explore; rinse, repeat. Pick something, anything, that interests you and jump at it.


I agree with the general sentiment.

But I'm not looking for a general solution. What I'm looking for are some cues and hints.

The most common answer in this thread has been "write more code". When I started collecting resources (mostly links to course material and lecture videos), I got into a read-only mode. I forgot that writing code was probably the one thing I wish I had done more of in college.

I don't know exactly what I was looking for but I think diving into the material is a great tip, thanks! :)


> I think for myself

Then you should think that there may be others who aren't really that concerned about being their own boss and financial freedom (strange as it might sound); others who love learning new perspectives. Other than saying that clojure may not be worth your time, you offer no insight. Thanks for making your preference clear; now can we get back to discussing the talk?


> I just think this point tends to be overlooked when talking about "social" websites, almost like it was not polite to point out where these businesses make their money.

I am not sure that this is overlooked. I think most people consider it so obviously true that it doesn't need restating anymore.


Given the way moot promoted "the Twitter way", I find it hard to believe him. Twitter's multiple-disjointed-throwaway-identities model is completely useless to advertisers, and as such it will never be adopted by commercial websites on the same scale as the G+/FB model (unless you mandate it by law).


Don't pride yourself too much on being "honest". It is far easier to go around always telling the truth than to apply discretion.


Egan looks interesting. Which of his books is a good one to start with?


Depends on what you like. Permutation City and Diaspora are fan favorites. They are heavy on idea porn but not very good yarns, and that goes for most of Egan's books. His short stories are probably his most polished work, because the form's constraints force him to keep everything tight; on the flip side, it means he doesn't have time to develop his ideas in great depth.

If you're a programmer, Permutation City would be my recommendation. If you're more mathematically inclined, Diaspora is a good choice. Just so you know what kind of book you'd be in for, the first chapter of Diaspora has a proof outline of the Gauss-Bonnet theorem. His short story The Planck Dive gives a good flavor of Diaspora: http://gregegan.customer.netspace.net.au/PLANCK/Complete/Pla...


I was a little disappointed by the abstract nature of some of the later chapters. I bought the book the same week that I stumbled on his blog, but after having read the book, I had to go and dive deep into his blog archives, and only then could I connect the dots. I would also recommend reading "Impro" in parallel.


> 2. ubiquitous laziness can lead to some really subtle bugs

After the recent compiler change to prevent accidental head retention of lazy sequences, I don't remember facing any subtle issues with laziness. Can you give me an example of what bit you?


I don't have a concrete example at hand but the pattern is this:

1. program crashes with cryptic stack trace

2. add a println to look at the data

3. program crashes again, but can't print anything because lazy evaluation of something in the data structure triggers a new, different crash

So you've got an error you don't understand, and you can't even look at the data you're working with, so you start the hunt-and-peck procedure of trying to find the last point at which your data was sane so you can work back from there. Add dynamic typing to the mix and things get really ugly. I spent an hour tearing my hair out the other day when I inadvertently reversed the arguments to a function but didn't get an error until three function calls later, because the types were duck-compatible that far.
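This failure shape isn't Clojure-specific; here's a minimal Python sketch (generators standing in for lazy seqs, with made-up data) of how the error surfaces only where the lazy value is finally forced, not where the bad data entered:

```python
import itertools

def parse_records(raws):
    # Lazy "sequence": nothing is parsed until someone consumes it
    for raw in raws:
        yield int(raw)

data = parse_records(["1", "2", "oops", "4"])  # no error here
head = list(itertools.islice(data, 2))         # [1, 2] -- still no error
# The bad element only blows up when something (say, a debug print)
# forces it, far from where the bad data was produced:
try:
    print(next(data))
    crashed = False
except ValueError:
    crashed = True                             # deferred crash surfaces here
```

The usual remedy is the same in both languages: force the suspect part of the structure eagerly (`doall` in Clojure, `list` in Python) close to where it is produced, so the crash points at the real culprit.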


> So it reduces to, "For any given language some things are simple and others complex. For other languages what is simple and complex differ".

Though that might be the case, what doesn't change across languages is what users typically consider simple problems. "Simple problems" are defined by user expectations, not language features, so I think the parent's point still stands.


"what doesn't change with languages is what users typically consider as simple problems.".

If you meant users of the language (vs users of an app built with the language), you are wrong.

Consider text processing (awk vs Java, even though Java has text-processing libraries), distributed computation (Erlang vs C, though you can do it with C), "close to the metal" coding (C vs Scheme), continuation-passing style (again C vs Scheme, with a different "winner"), array processing (J vs Java), and statistical code (R vs C++).

What users consider "simple problems" does change with the language used, contra what you said above.

And just like if you start with "continuation-passing style is easy" as a criterion, Scheme beats C, or "customized memory management" makes C win over Scheme, if you use "ease of memoizing a function" as a selection criterion you'll end up with a Lisp (over Haskell, C, or Java).

Change the criterion and a different language rises to the top. As I said earlier, this isn't saying much.
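To make the memoization criterion concrete, here's a sketch in Python standing in for the Lisp case (`memoize` and `fib` are illustrative names, not from the thread): because functions are first-class and mutable state is unrestricted, memoizing any function is a few-line higher-order wrapper, whereas the pure-Haskell equivalent needs laziness tricks or a state monad.

```python
def memoize(f):
    # Wrap any function with a cache keyed on its positional arguments
    cache = {}
    def wrapped(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapped

calls = 0

@memoize
def fib(n):
    global calls
    calls += 1  # count real computations, not cache hits
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, computed in 31 calls instead of ~2.7 million
```

The same wrapper works on any function of hashable arguments, which is the point: the criterion makes the winner.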


Using this same kind of (Language vs Language) comparison, what are some criteria where Haskell wins?


The point was: to make any language "win", just choose one of its strengths as the deciding criterion and take another language without that feature as a punching bag. So to make Haskell "win", all I have to do is state "monadic programming should be easy", or "effectful programming should be clearly distinguished from the pure functional core by the type system", or "using typeclasses to structure my code should be trivial".

And Haskell "wins" by default over other languages.

That said, fwiw, in my experience the "sweet spot" of Haskell is in large + heavily algorithmic + complex code bases, where its powerful type system shines. It is very fast without sacrificing abstraction. I've built some large machine learning systems with it and I couldn't be happier.

I know other people who are building trading systems in it and they seem delighted too.

As I said, this is just my experience, and I'm making no claim that other languages are somehow losers because they have different goals.


> It is very fast without sacrificing abstraction.

This is what I use it for. My program ends up being as long as Perl, but as fast as C. For me, that's the best of both worlds.

For gluing libraries together it's not the best, but in the last few months it's gotten quite good. There's hardly anything I want to do that would involve reinventing wheels I already have in $other_language.


"I've built some large machine learning systems with it and I couldn't be happier."

Does Haskell have good matrix libraries? That seems to be at the core of a lot of machine learning tasks, and when I briefly checked a few years back, I didn't see any Haskell-native matrix libraries. I think there were LAPACK bindings, maybe.


"Does Haskell have good matrix libraries?"

Last I checked, it didn't. But you can FFI-bind to any C/Fortran library of your choice (which is what I did).




Clojure was the first lisp that I could seriously get into. I have a java background, and familiarity with the JVM was partly helpful when I was learning it. I guess for people who are new to both java and lisp, it may be a bit harder.

The language changing fast wasn't a concern to me then; I was putting a lot of time into keeping up with the changes by lurking on the mailing list and IRC. It was fascinating to watch language design happen in front of my eyes. I think newbies would do fine with clojure (particularly in its current, relatively stable state). I guess it just takes a bit of effort.


