Hacker News | Homunculiheaded's comments

Robert Ash, imho one of the best writers for mathematical self-study, lists "Include Solutions to Exercises" as his #2 piece of advice in 'Remarks on Expository Writing in Mathematics'[0]. His quote is better than any summary I could come up with:

"There is an enormous increase in content when solutions are included. I trust my readers to decide which barriers they will attempt to leap over and which obstacles they will walk around. This often invites the objection that I am spoon-feeding my readers. My reply is that I would love to be spoon-fed class field theory, if only it were possible. Abstract mathematics is difficult enough without introducing gratuitous roadblocks."

[0] http://www.math.uiuc.edu/~r-ash/Remarks.pdf

-----


From the amazing "Design Principles Behind Smalltalk"[0], which is short, beautiful, and something everyone interested in programming and software should read at least once:

"Operating System: An operating system is a collection of things that don't fit into a language. There shouldn't be one."

Syntactically, the Smalltalk language is relatively straightforward, but its power is that the designers envisioned so much more of what a programming language should be. Smalltalk has a "World", which is an incredible concept (this is the OS/IDE combo). Of course, this means that you must abandon every tool you currently use to start working in the language. But in exchange you can click on any window and view the source code for it, see where it fits in the object hierarchy, etc. Every part of your development world can be interacted with and modified. You can open the same world from a USB drive on Windows, Mac and Linux. Unlike Haskell, it's not the language itself that will expand your vision of programming; it's the approach to what it even means to have a programming language.

[0] http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.h...

-----


Have you looked into remote work? I love Reno and have been working remotely here at several different companies for many years. Right now, it's fairly easy to get a good front-end web dev gig remotely that pays much closer to SV salaries than Reno ones.

Reno has really blossomed even in the last few years, but the job market, especially for skilled people, is abysmal. There's a bit of a chicken-and-egg problem as well, since many people with enough talent realize that Reno's salaries are laughable and end up leaving. Right now there's not enough decent-paying work to attract people to the city, but if you were to bring a company here that paid sane wages, you'd have a hard time finding talent. I've known enough amazing UNR grads who migrate to the Bay to know that this city does have the potential. It's just a matter of that right window: a reasonably paying company being here and snatching up enough bright people before they move to the Bay.

But for remote work it's hard to think of a better place. Cost of living is very low, there's no state income tax, and SF is an easy 3.5-hour drive when you miss parts of the big-city experience. Every other major West Coast city is a cheap and quick flight. And there are some really amazing people in this city. If you don't go already, head to Hack Night at the Reno Collective sometime; it's a great group.

-----


Anyone interested in Logic and Probability should take the time to read through (at least) chapters 1 & 2 of Jaynes' Probability Theory: The Logic of Science [0]. Jaynes is the arch-Bayesian, and in these chapters he mathematically develops what is essentially an alternate-universe model of probability which, in his view, arrives as the natural extension of Aristotelian logic. There's no "coin flipping" in these chapters, and when he finally derives the method for calculating probabilities, the fact that his model matches the coin-flipping models is written off almost as a happy accident. If you're familiar with Bayesian analysis but have not read Jaynes, it is very likely that you aren't familiar with quite how (delightfully) extreme his views are.
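
For a taste of what those chapters build toward: starting from a few desiderata about plausible reasoning (plausibilities are real numbers, reasoning must be consistent, and so on), Jaynes derives the familiar product and sum rules, and only then do ordinary "probabilities" appear:

    P(AB|C) = P(A|BC) P(B|C)    (product rule)
    P(A|C) + P(~A|C) = 1        (sum rule)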

Jaynes' fundamental metaphor throughout the book is building a "reasoning robot", so anyone interested in the intersection of logic, probability, and AI will get many interesting insights from this book.

[0] PDF of the preprint: http://bayes.wustl.edu/etj/prob/book.pdf

-----


Many of the examples in this post are great examples of 'Modernist Art' and are decidedly not postmodern. This is roughly the equivalent of writing a post on "Functional programming is Anti-Mind" and then demonstrating that with examples from the Gang of Four Design Patterns book.

The real issue with this is that most postmodern art is incredibly accessible. You don't need an art degree to think that Roy Lichtenstein's paintings "look cool", or that Campbell's Soup cans are "neat". One of the quintessential, textbook postmodern film directors is Quentin Tarantino; there are few directors more adored by the general public. Postmodernism is a descriptive term for artists who mostly reject the Western tradition of 'High Art'. Almost all the difficulty and "unintelligibility" lies in postmodern theory, but not in the art that theorists consider postmodern. And I would argue that this is because theorists themselves are artifacts of Western high culture and are therefore unable to articulate a response to something that is outside this framework.

Almost all examples of "unintelligible" art fall into some subcategory of High Modernism. High Modernist schools of thought are almost always exploring questions within the context of Western high culture (i.e. the "What is art?" questions), and for many of these works you need to have a background in the art in question to really engage with and understand the piece.

If you want to critique postmodernism a good place to start is Fredric Jameson's "Postmodernism or, The Cultural Logic of Late Capitalism".

-----


Was going to say, Pollock falls firmly within Abstract Expressionism -- about as "Modern Art" as you can get.

-----


This is great! Would you say Dadaism and similar rejections are also postmodern? What about Stuckism, which rejects the idea that art is about concept and intention and holds that it should just be about pretty pictures?

Honestly, I've never found a good explanation of what "postmodern" actually means, and your examples are helping.

-----


While I agree with the reply on the whole, many of its examples are pop art, which is a kind of shared space between modernism and postmodernism as far as art is concerned but not quite postmodernism, which explicitly rejects modernism. _But_ postmodernism originated in architecture and is more broadly a kind of 'end of history' movement, expressed not through the singular view and notion of purity of classicism, nor the idealist/utopian/revolutionary views of modernism - but through an embrace and often shocking (e.g. to notions of taste) juxtaposition of incongruent sources (e.g. classical motifs mixed with tiki references).

-----


Yeah, I find this article confusing precisely because of its focus on what seems to be modern and not post-modern art. Post-modernism definitely, as far as I (as a layman, tbh) understand it, rejects a kind of hierarchy and structure, but it's not the art itself that it insists has no structure; it's the artist-viewer relationship. Modern art insists that the artist is telling and the viewer is receiving, which is why an apparent lack of structure can still be "about something", but post-modernism insists that the audience is an active participant in the art.

And that's almost exactly the opposite of being 'anti-mind' to me. It gives the audience a kind of credit that a lot of other theories of art don't.

-----


I've worked remotely for quite a while now at a pretty broad range of companies. For the jobs that have had both remote teams and local offices, I do agree that I'm able to get a lot of communication done quickly when I visit the office.

However, I find that the amount of "heads down" work I get done is greatly diminished when I'm in an office. And, much worse, there's a lot of noise in all that added communication from being in the office. Remote teams, in my experience, have dramatically less office politics.

Office space is great for communicating "big ideas", but these aren't anywhere near the bulk of the communication being had. For most of the communication needs of software, remote works fine (in my experience, better).

I work on quite a few "big idea" projects, and I've found the best solution is to visit the office quarterly, get all the big-idea brainstorming done, then scurry off to my remote office where I'm not distracted by office politics and can just get things done. A little face time goes a long way, and annual or semi-annual all-hands meetups can do wonders at filling in the gaps created on remote teams.

-----


My experience has been that none of the major deep learning libraries (Theano, Torch7, Caffe) offers support for OpenCL, whereas they all make it trivially easy to get models running on a CUDA GPU. On top of that, NVIDIA has a library of deep neural network primitives[0], and I don't believe AMD offers anything similar.

The general consensus I've seen is to just get an NVIDIA card if you're serious about working with deep neural nets on the GPU.

One thing that did surprise me was that there was no mention of using EC2 GPU spot instances for getting your feet wet. If you don't have access to a GPU with CUDA support, you can get a spot instance for about $0.07 an hour to at least verify that you have your GPU code configured correctly (and you will see some performance gains). There are even a couple of AMIs out there with Torch7 and Theano already installed.
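
For example, here's roughly the sanity check I have in mind, modeled on Theano's own "test the GPU" snippet (a sketch; it assumes Theano is on the instance). Run it with THEANO_FLAGS=device=gpu,floatX=float32, then again with device=cpu, and compare the timings:

    # Rough sketch of a GPU sanity check for a fresh spot instance.
    import time
    import numpy
    from theano import function, config, shared, tensor

    vlen = 10 * 30 * 768  # enough elements to make the GPU worthwhile
    rng = numpy.random.RandomState(22)
    x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
    f = function([], tensor.exp(x))  # compiled for whichever device is configured

    t0 = time.time()
    for _ in range(1000):
        f()
    print("Looped 1000 times in %.3f seconds" % (time.time() - t0))
    print(f.maker.fgraph.toposort())  # look for Gpu* ops to confirm GPU use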

[0] https://developer.nvidia.com/cuDNN

-----


Thanks for elaborating; these are exactly the reasons why AMD's GPUs are just not used in deep learning.

AWS is great if you want to use a single GPU or two separate GPUs. However, you cannot use them for multi-GPU computation, as the virtualization cripples the PCIe bandwidth; there are rather complicated hacks that improve the bandwidth, but it is still bad. Everything beyond two GPUs will not work on AWS because their interconnect is way too slow.

-----


I have a running joke with my machine learning friends that I will write a Data Science/ML book titled "A Thousand Ways to Say 'Singular Value Decomposition'". The number of papers and techniques out there that are SVD with a few minor tweaks and a unique philosophical interpretation of SVD is hilarious.

Here are some examples (a small numpy sketch of the PCA case follows the list):

Principal Component Analysis - SVD used for dimensionality reduction, keeping enough components so that some n% of the variance is accounted for.

One layer Autoencoder - SVD done by a neural network

Latent Semantic Analysis - SVD on a tf-idf matrix, where we interrupt the lower dimensions as having semantic importance

Matrix Factorization - SVD, only now we interrupt the lower dimensions as representing latent variables

Collaborative Filtering - SVD where we interrupt the lower dimensions as representing latent variables AND we use a distance measure to determine similarity.
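
And the promised numpy sketch of the PCA case (all names and numbers are just illustrative):

    # PCA as truncated SVD of the centered data matrix.
    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(200, 10)                 # 200 samples, 10 features
    Xc = X - X.mean(axis=0)                # center each column

    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(var_explained, 0.90)) + 1  # components for ~90% of variance
    scores = U[:, :k] * s[:k]              # same as Xc @ Vt[:k].T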

-----


> One layer Autoencoder - SVD done by a neural network

Not necessarily. Any serious user of autoencoders would apply some kind of L1 regularization or other sparsity constraint to the learned coefficients, so that the autoencoder does not learn the principal components of the data but instead learns an analogous sparse decomposition of it (with the assumption that sparse representations have better generalization power).
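
Something like the following, as a rough sketch (assuming a recent Keras; the layer sizes and penalty weight are illustrative, not tuned):

    # One-layer autoencoder with an L1 activity penalty on the hidden code,
    # so it learns a sparse code rather than the principal subspace.
    from keras.layers import Input, Dense
    from keras.models import Model
    from keras import regularizers

    inputs = Input(shape=(784,))
    code = Dense(32, activation='relu',
                 activity_regularizer=regularizers.l1(1e-5))(inputs)
    outputs = Dense(784, activation='sigmoid')(code)

    autoencoder = Model(inputs, outputs)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    # autoencoder.fit(X, X, ...)  # trained to reconstruct its own input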

Also, I don't think any of the techniques you mentioned is being passed off as "not SVD" by its practitioners. People know they're SVD. These names are just used as labels for use cases of SVD, each with their specific (and crucial) bells and whistles. And yes, these labels are useful.

Cognition is fundamentally dimensionality reduction over a space of information, so clearly most ML algorithms are going to be isomorphic to SVD in some way. More interesting to me are the really non-obvious ways in which that happens (e.g. skip-gram word embeddings are actually factorizing a matrix of pairwise mutual information of words over a local context...)

That doesn't make these algorithms any less valuable.

-----


I'd also add that you can bring other variables into the mix, such as Gaussian noise and dropout, which are the basis for a lot of fundamental neural networks. I get the intent, but it's not necessarily the case.

Neural word embeddings are one of the most fun things I work with: word2vec as well as GloVe and paragraph vectors.

There's also the ability to learn varying length windows of phrases via recursive or convolutional methods.

-----


Nota bene, for anyone having trouble parsing Homunculiheaded's description of each algorithm: s/interrupt/interpret

-----


NMF != SVD.

-----


> computers are good at some things, humans are good at others

"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine that will do just that!"

-- J. von Neumann

Computers will continue to get better at human things as we continue to get better at understanding how human things work. Look at the recent advances in deep learning. Using only the crudest approximation of human neurons, we can identify and caption images with astounding results. Google currently claims that anything that can be done in 0.1 of a second by a human, they can do as well.

Fraud detection relies heavily on unsupervised learning, and for all of history up until the last few years, state of the art unsupervised learning was usually SVD + clustering or some variation on that. The current state of the art, things like deep belief networks, are able to achieve markedly superior results.

Additionally, this article seems to imply that they are collecting labeled data from customers, which should help tremendously in modeling fraud. Even if the labels are only a small sample, the recent advances in semi-supervised learning using deep neural nets are even greater than the advances in unsupervised learning.

While I don't disagree that historically it has been wise to include a human element in fraud detection, I don't believe there is any reason to assume that trend will continue indefinitely into the future.

-----


Sorry about this, but I think there are some big flaws in your comment.

> You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine that will do just that!

Know anything about the Halting Problem & NP-hardness? There are things that computers will never, ever be able to do, even if we go quantum.

> Computers will continue to get better at human things as we continue to get better at understanding how human things work

Your argument is basically the symbolist view, while deep learning is one of several connectionist approaches to learning, a whole different world in ML.

> Fraud detection relies heavily on unsupervised learning

No. I have been working in fraud, and no, it is supervised, with a lot of manual feedback.

> unsupervised learning was usually SVD + clustering or some variation on that. The current state of the art, things like deep belief networks, are able to achieve markedly superior results.

Sorry, No Free Lunch for learning algorithms...

> historically it has been wise to include a human element in fraud detection, I don't believe there is any reason to assume that trend will continue indefinitely into the future.

Yes, and it will.

-----


> Google currently claims that anything that can be done in 0.1 of a second by a human, they can do as well.

Okay, on a moonless night, overcast, with no lights, a lot of fog, 200 yards away is .... Bingo, a pretty girl, 5' 4", 34, 19, 34, blond, really sweet, wants to be great as a wife and mommy, about 18! Yup, been doing that for years! Try that Google! My advantage: I have a dedicated, autonomous, peripheral processor just for that task!

-----


You know, this is a really creepy thing to write.

-----


One thing that this article fails to point out is how often librarians were wrong before Google. In 1986 there was a study[0] showing that, across the board, reference librarians were correct only about 55% of the time.

I think most people today would consider a query answering system that had an accuracy of 55% to be an interesting curiosity, but certainly not ready for real-world application.

It's funny how frequently we measure machine learning performance on an "easy for humans" task and fail to compare it to human accuracy on the same data. We just assume humans would do it perfectly. I'm sure there are a few MNIST digits that I would get wrong.

[0] P. Hernon and C. McClure, "Unobtrusive Reference Testing: The 55 Percent Rule," Library Journal 111, April 15, 1986.

-----


It's the fault of the pre-internet sources and government employees, not of reference librarians as a profession. This study covered the US library system only.

Your "Across the Board" = "Government documents and central reference areas of 26 U.S. libraries" Not your academic or local libraries that most of us think of as Librarians.

I was a Systems Librarian, and I contend that librarians are needed now more than ever because of the Internet, but that is a different tale. (One third of the librarians in my state were let go during the economic downturn.)

Reference librarians should be citing their sources, and I bet you a million dollars that in those cases the librarian gave the source and it was the source that was wrong.

-----


But why were people asking reference librarians questions?

When I spoke to librarians in the mid-90s, they'd just point me towards sources that could have an answer. It was up to me, the reader, to ultimately find the answer.

-----


Depends on where you were and who you were asking.

At several large research university libraries, I found the reference librarians could be downright obsessive in tracking down some questions. It would depend on workloads and other factors, I suppose.

-----


This was mostly for pleasure or K-12 assignments. The sort of questions that were very fact based.

"Do snakes poop?"

"How far away is Mars?"

That sort of thing.

-----


I was specifically thinking of uni-level questions.

-----


At the uni level, aren't they more supposed to help you find the appropriate database rather than find the answer for you?

I might ask a librarian how to access back copies of a journal, but I wouldn't expect him or her to do my lit review for me...

-----


You can get pretty far with being right only 51% of the time. With 55% you have an edge.

-----


Suggesting that reference librarianship is "easy for humans" is sort of... ridiculously wrong. There's a reason it requires a master's degree.

-----


How often are less than 55% of the results of a Google search relevant? Count ads.

-----


That's not comparable at all. 55% of the answers were correct; it's not that 50% of the possible material in the library was relevant.

-----


If a search for something like "crane hook design principles" returns a page with "buy crane hook design principles for less" at the top, it is wrong. And it's wronger than a librarian being wrong, because it's not even trying to be correct; it's just thrown out there for the edge case. We just happen to write it off as "that's just Google".

-----


Is that what it shows you? I just searched for that exact thing. The first two results were ISO standards, the third Wikipedia, and the fourth a paper on crane hook stress analysis.

Not as good as a directed search by a professional, sure (and I haven't actually looked at any of those papers, so who knows how relevant they actually are), but my experience seems a lot better than what you're implying.

-----


If you followed either of the links to ISO, you would have found that they are offerings to sell the standards. The technical information is behind a paywall. That's ISO's business model. The link to Wikipedia is about cranes in general.

If a librarian said "the answer will cost you $181 (CHF 178,00)" or "here is a book about cranes", we would probably agree these are not high-quality answers, even if we might disagree over whether they are wrong. Never mind that ISO standards are not necessarily a good set of design principles, because minimum requirements are not the same as design tradeoffs.

-----


I don't think that's entirely fair. Surely the most useful measure of "wrong" is whether the user comes away with the correct answer?

Adverts on a webpage might well make it less likely that a user comes away with that answer, but I don't think they'd invalidate a user's successful result any more than gossiping with the librarian would invalidate their help if they successfully find an answer.

-----


The advertisements are noise. SEO also produces noise. At some point the signal is lost. Google's business model is built on maximizing the amount of noise a user will tolerate, in the hope that the user stops searching and starts shopping. If you search for a book, Amazon will appear before the Wikipedia page.

-----


Nobody contested your argument; specific knowledge of niches is still fairly scarce on the net, at least if you're looking for gratis information. OTOH, a well-sorted library supplies books that cost a fortune, so Google couldn't replace that, at least not yet.

-----
