"In our study of program design, we have seen that expert programmers control the complexity of their designs with the same general techniques used by designers of all complex systems. They combine primitive elements to form compound objects, they abstract compound objects to form higher-level building blocks, and they preserve modularity by adopting appropriate large-scale views of system structure. In illustrating these techniques, we have used Lisp as a language for describing processes and for constructing computational data objects and processes to model complex phenomena in the real world. However, as we confront increasingly complex problems, we will find that Lisp, or indeed any fixed programming language, is not sufficient for our needs. We must constantly turn to new languages in order to express our ideas more effectively."
At a certain level of software design, tying together different programming languages, each the right tool for its job, becomes just another type of programming. I currently do data science work, but even then, in a given week I typically use R, Python, Lua, and Java (and often Scheme in the evenings for fun). Trying to make any one of those languages do something another is much better at is a phenomenal waste of time.
On the system level, once prototyping ends, if there's something that Java does phenomenally better than R, but we need both, that implies you have two parts of the system different enough that they shouldn't be tightly coupled anyway. If you write a deep learning algorithm in Lua but want to do some statistical analysis on its results in R, it's good to force these things to be separated, because if in five years you find a better model for the Lua part (maybe some better algorithm in Julia), you want to be able to swap it out anyway.
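One way to keep that boundary honest is to let the stages communicate only through a serialized file, so either side can be replaced without touching the other. A minimal sketch in Python (the function names and JSON layout here are hypothetical stand-ins for, say, a Lua/Torch training run feeding an R analysis):

```python
import json
import os
import tempfile

def run_model_stage(output_path):
    # Stand-in for the modeling stage (e.g. Lua today, Julia in five years).
    # Its only contract with the rest of the system is the file it writes.
    results = {"model": "demo", "scores": [0.91, 0.87, 0.95]}
    with open(output_path, "w") as f:
        json.dump(results, f)

def run_analysis_stage(input_path):
    # Stand-in for the statistical analysis stage (e.g. an R script).
    # It knows nothing about the modeling code, only the file format.
    with open(input_path) as f:
        results = json.load(f)
    scores = results["scores"]
    return sum(scores) / len(scores)

path = os.path.join(tempfile.gettempdir(), "model_results.json")
run_model_stage(path)
print(run_analysis_stage(path))  # average score from the fake results
```

The design point is that the file format, not either language's API, is the interface; swapping the model stage means reimplementing one function's contract, nothing more.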
Making something explicit makes it clear for everyone, but you could easily argue the rights of black people and women were written into the Constitution from the start, just in less obvious ways.
That's one of the court's jobs, after all: to evaluate complex or difficult situations and laws where it is not immediately clear what the law says. It reminds me of especially tortured and opaque code. You can't see on the surface that foo=bar, but after a long and arduous process of evaluation you discover, lo and behold, that foo=bar all along. Or, even more difficult, you discover that foo=1 implies bar=2.
These amendments were added via a process built into the Constitution to allow changes to be made, because the folks writing it knew they couldn't account for everything.
But furthermore, the specific wording of the Constitution may actually have been intentional, insofar as Jefferson may have wanted it to be clear that all people have equal rights. He obviously couldn't singlehandedly change the minds of everyone at the time, but the idea that he intentionally left the wording vague enough to include everyone is a valid one, I think.
I worked briefly for the federal government and realized that the bureaucratic system has two primary functions:
1. accumulate power
2. diffuse responsibility
Regarding the latter, occasionally the system builds up too much responsibility, which needs to be released to maintain stability. This is solved by picking an individual whose power is equal to the built-up responsibility that needs to be released, and punishing that individual by stripping them of all their individual power. Thus the overall system is restored to balance, and only the individual, easily replaced, suffers.
It happened here, and I've seen it happen from the lowest to the highest levels of power in the government.
Robert Ash, imho one of the best writers for mathematical self-study, lists "Include Solutions to Exercises" as his #2 piece of advice in 'Remarks on Expository Writing in Mathematics'. His quote is better than any summary I could come up with:
"There is an enormous increase in content when solutions are included. I trust my readers to decide which barriers they will attempt to leap over and which obstacles they will walk around. This often invites the objection that I am spoon-feeding my readers. My reply is that I would love to be spoon-fed class field theory, if only it were possible. Abstract mathematics is difficult enough without introducing gratuitous roadblocks."
From the amazing "Design Principles Behind Smalltalk", which is short, beautiful and everyone interested in programming and software should read at least once:
"Operating System: An operating system is a collection of things that don't fit into a language. There shouldn't be one."
Syntactically, the Smalltalk language is relatively straightforward, but the power of Smalltalk is that the designers envisioned so much more of what a programming language should be. Smalltalk has a "World", which is an incredible concept (this is the OS/IDE combo). Of course, this means you must abandon every tool you normally use in order to start working in the language. But in exchange you can click on any window and view the source code for it, see where it fits in the object hierarchy, etc. Every part of your development world can be interacted with and modified. You can open the same world from a USB drive on Windows, Mac, and Linux. Unlike Haskell, it's not the language itself that will expand your vision of programming; it's the approach to what it even means to have a programming language.
Have you looked into remote work? I love Reno and have been remote working here at several different companies for many years. Right now, it's fairly easy to get a good front-end web dev gig remotely that pays much closer to SV salaries than Reno ones.
Reno has really blossomed even in the last few years, but the job market, especially for skilled people, is abysmal. There's a bit of a chicken-and-egg problem as well, since many people with enough talent realize that Reno's salaries are laughable and end up leaving. Right now there's not enough decent-paying work to attract people to the city, but if you were to bring a company here that paid sane wages, you'd have a hard time finding talent. I've known enough amazing UNR grads who migrate to the Bay to know that this city does have the potential. It's just a matter of the right window: a reasonably paying company being here and snatching up enough bright people before they move to the Bay.
But for remote work it's hard to think of a better place. Cost of living is very low, there's no state income tax, and SF is an easy 3 1/2 hour drive when you miss parts of the big-city experience. Every other major West Coast city is a cheap and quick flight. And there are some really amazing people in this city. If you don't go already, head to Hack Night at the Reno Collective some time; it's a great group.
Anyone interested in logic and probability should take the time to read through (at least) chapters 1 & 2 of Jaynes' Probability Theory: The Logic of Science. Jaynes is the arch-Bayesian, and in these chapters he mathematically develops what is essentially an alternate-universe model of probability which, in his view, arrives as the natural extension of Aristotelian logic. There's no "coin flipping" in these chapters, and when he finally derives the method for calculating probabilities, the fact that his model matches the coin-flipping models is written off almost as a happy accident. If you're familiar with Bayesian analysis but have not read Jaynes, it is very likely that you aren't familiar with quite how (delightfully) extreme his views are.
Jaynes' fundamental metaphor throughout the book is building a "reasoning robot", so anyone interested in the intersection of logic, probability, and AI will get many interesting insights from this book.
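The flavor of the "extension of logic" idea can be sketched in a few lines: plausibilities obey the product and sum rules, and Bayes' theorem falls out as the updating rule, with ordinary deductive logic appearing as the limiting case of certainty. The numbers below are illustrative, not from the book:

```python
def bayes_update(prior_H, p_E_given_H, p_E_given_not_H):
    """P(H|E) via the product and sum rules: P(H|E) = P(E|H)P(H) / P(E)."""
    p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
    return p_E_given_H * prior_H / p_E

# Deductive logic as the limiting case: if evidence E is impossible
# unless hypothesis H holds, then observing E makes H certain.
print(bayes_update(0.5, 1.0, 0.0))  # 1.0

# In between the extremes, evidence merely shifts plausibility.
print(bayes_update(0.5, 0.8, 0.2))  # 0.8
```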
Many of the examples in this post are great examples of 'Modernist Art' and are decidedly not postmodern. This is roughly the equivalent of writing a post on "Functional programming is Anti-Mind" and then demonstrating that with examples from the Gang of Four Design Patterns book.
The real issue with this is that most postmodern art is incredibly accessible. You don't need an art degree to think that Roy Lichtenstein's paintings "look cool", or that Campbell's Soup cans are "neat". One of the quintessential, textbook postmodern film directors is Quentin Tarantino; there are few directors more adored by the general public. Postmodernism is a descriptive term for artists who mostly reject the Western tradition of 'High Art'. Almost all the difficulty and "unintelligibility" lies in postmodern theory, not in the art that theorists consider postmodern. And I would argue that this is because theorists themselves are artifacts of Western high culture and are therefore unable to articulate a response to something that is outside this framework.
Almost all examples of "unintelligible" art fall into some subcategory of High Modernism. High Modernist schools of thought are almost always exploring questions within the context of Western high culture (i.e., the "What is art?" questions), and for many of these works you need a background in the art in question to really engage with and understand the piece.
If you want to critique postmodernism a good place to start is Fredric Jameson's "Postmodernism or, The Cultural Logic of Late Capitalism".
This is great! Would you say Dadaism and similar rejections are also postmodern? What about Stuckism, which rejects the idea that art is about concept and intention and should just be about pretty pictures?
Honestly, I've never found a good explanation of what "postmodern" actually means, and your examples are helping.
While I agree with the reply on the whole, many of its examples are pop art, which is a kind of shared space between modernism and postmodernism as far as art is concerned, but not quite postmodernism, which explicitly rejects modernism. _But_ postmodernism originated in architecture and is more broadly a kind of 'end of history' movement, expressed not through the singular view and notion of purity of classicism, nor the idealist/utopian/revolutionary views of modernism, but through an embrace and often shocking (e.g. to notions of taste) juxtaposition of incongruent sources (e.g. classical motifs mixed with tiki references).
Yeah, I find this article confusing precisely because of its focus on what seems to be modern and not post-modern art. Post-modernism definitely, as far as I (as a layman, tbh) understand it, rejects a kind of hierarchy and structure, but it's not the art itself that it insists has no structure but the artist-viewer relationship. Modern art insists that the artist is telling and the viewer is receiving, thus why an apparent lack of structure can still be "about something", but post-modernism insists that the audience is an active participant in the art.
And that's almost exactly the opposite of being 'anti-mind' to me. It gives the audience a kind of credit that a lot of other theories of art don't.
I've worked remotely for quite a while now at a pretty broad range of companies. For the jobs that have had remote teams and local offices, I do agree that I'm able to get a lot of communication done quickly when I visit the office.
However, I find that the amount of "heads down" work I get done is greatly diminished when I'm in an office. And, much worse, there's a lot of noise in the added communication that comes with being in an office. Remote teams, in my experience, have dramatically less "office politics".
Office space is great for communicating "big ideas", but these aren't anywhere near the bulk of the communication being had. For most of the communication needs of software, remote works fine (in my experience, better).
I work on quite a few "big idea" projects, and I've found the best solution is to visit the office quarterly, get all the big-idea brainstorming done, then scurry off to my remote office where I'm not distracted by office politics and can just get things done. A little face time goes a long way, and annual or semi-annual all-hands meetups can do wonders at filling in the gaps created on remote teams.
My experience has been that none of the major deep learning libraries (Theano, Torch7, Caffe) offers support for OpenCL, whereas they all make it trivially easy to get models running on a CUDA GPU. On top of that, NVIDIA has a library of deep neural network primitives (cuDNN), and I don't believe AMD offers anything similar.
The general consensus I've seen is to just get an NVIDIA card if you're serious about working with deep neural nets on the GPU.
One thing that did surprise me was that there was no mention of using EC2 GPU spot instances for getting your feet wet. If you don't have access to a GPU with CUDA support you can get a spot instance for about $0.07 an hour to at least test out that you have your GPU code configured correctly (and you will see some performance gains). There are even a couple of AMIs out there with Torch7 and Theano already installed.
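Once a spot instance (or any box) is up, it's worth a quick sanity check before assuming CUDA is ready. A minimal sketch; this only confirms the NVIDIA driver tooling is visible, and the framework-level device flags (e.g. Theano's `device=gpu`) are a separate, per-library step:

```python
import shutil
import subprocess

def gpu_available():
    """Return True if the NVIDIA driver utility is on the PATH."""
    return shutil.which("nvidia-smi") is not None

if gpu_available():
    # List the visible GPUs; a fresh GPU instance should show at least one.
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    print(out.stdout)
else:
    print("No NVIDIA driver found; fall back to CPU or launch a GPU instance.")
```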
Thanks for elaborating; these are exactly the reasons why AMD GPUs are just not used in deep learning.
AWS is great if you want to use one or two separate GPUs. However, you cannot use them for multi-GPU computation, as the virtualization cripples the PCIe bandwidth; there are rather complicated hacks that improve the bandwidth, but it is still bad. Everything beyond two GPUs will not work on AWS because their interconnect is way too slow.