> you use a combination of engineering, physics, mathematics, computer science - the less you try to - or need to - separate them the better you do.
I disagree that mentally separating math, physics, and computer science is a bad thing. I think a good scientific programmer should understand the science on its own, and then figure out how to model/approximate the science to do something useful on a computer. For instance, if you want to implement Reed-Solomon codes efficiently, you'll realize that understanding the algebra behind the code is more or less an orthogonal skill to designing efficient encoders and decoders.
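To make that orthogonality concrete, here's a minimal sketch (my own illustration, not taken from any particular codec) of multiplication in GF(2^8), the finite field Reed-Solomon codes are usually built over, assuming the commonly used primitive polynomial 0x11D. The algebra tells you what the operation must compute; making it fast is a separate engineering problem (production encoders typically replace this bit-by-bit loop with log/antilog tables or SIMD).

```python
def gf_mul(a, b, poly=0x11D):
    """Multiply two elements of GF(2^8), reducing by the (assumed)
    primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    result = 0
    while b:
        if b & 1:            # low bit of b set: add (XOR) this multiple of a
            result ^= b and a
        b >>= 1
        a <<= 1
        if a & 0x100:        # degree-8 overflow: reduce mod the field polynomial
            a ^= poly
    return result

# In the field, x * x = x^2:
print(gf_mul(2, 2))  # -> 4
```

The point is that nothing in this loop looks like textbook algebra; turning "multiply polynomials mod an irreducible polynomial" into shifts and XORs is exactly the kind of mapping-to-hardware step I'm describing.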
As a personal anecdote, I had much better luck learning about waves and quantum mechanics after I knew about differential equations, orthogonal decompositions, and a fair amount of linear algebra than I did before I knew this math, even though the physics classes included all of the necessary math. I attribute this better understanding to having a cleaner mental map, because I knew which statements were true because of some sort of physical fact, and which statements were just mathematical results.
In line with the generally smiled-upon principle of decomposing problems, I think it's particularly critical that the scientific programmer can decompose these interdisciplinary problems. A scientific programmer should understand the science (at least mostly), understand the hardware, and figure out how to efficiently and cleanly map the scientific problem to something that can be solved by a program.
Middle distance runner here, disagree with most of this. I agree that there's not much point for a beginner runner to worry too much about form. However, simply running a lot does not give most people efficient form (it does prevent them from having horrible form, but doesn't force them to have good form). I have a friend who has been running 60-70 miles/week for a few years, and his form is still quite poor (and this limits his speed for middle distance races).
Also, I don't have a source to reference, but I do not believe your "pro tip" at all. I'm quite sure running efficiency goes down as fatigue increases. Common advice is to focus on your form when you start to fatigue, because this helps you keep efficiency up.
Finally, I doubt any of the "elite" runners referenced in that video have the form they have just by running a lot of miles. Running efficiency is something to think about and to work towards, not to just acquire by running a lot (of course, running a lot helps).
I hugely agree with this. I first learned Java in a year-long high school class. As soon as the class was over, I knew I liked programming and wanted to become a better programmer, but I also had the feeling that there had to be something less painful than Java to program with. This led me to Python, which I used for everything over the next two years. From Python, I also learned web programming with Django and Flask.
Eventually I decided to try to write an IRC bot in Python. During this effort, I realized I knew nothing about networking, concurrency (how does gevent work?), operating systems, and how to do non-trivial IO. Similarly, I also ran into performance and memory problems with some numpy code, and I started to wonder how this stuff works and why numpy is so much faster than anything I can write in pure Python. These were my "C-shaped holes". Around the time I discovered I had these "C-shaped holes" is when I started college, which was a good place for me to take low-level programming classes and fill the holes.
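A tiny example of the kind of gap I mean (my own illustration): the same sum of squares as a pure Python loop versus a single vectorized numpy call. Both give the same answer, but numpy is typically an order of magnitude or more faster on large arrays, because the loop runs in compiled C over a contiguous buffer instead of bouncing through Python objects.

```python
import numpy as np

def py_sum_squares(xs):
    # Pure Python: every iteration boxes/unboxes integer objects
    total = 0
    for x in xs:
        total += x * x
    return total

n = 10_000
xs = list(range(n))
arr = np.arange(n, dtype=np.float64)

# Same result; numpy does the loop in C over a flat array.
assert py_sum_squares(xs) == int(np.dot(arr, arr))
```

Understanding *why* the numpy version wins (memory layout, interpreter overhead) is exactly the C-shaped hole.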
Besides getting a student motivated, maybe the hardest part of education is getting students to realize that they have these holes.
I think the author's argument can be split into two generalizations:
(1) people who know C are better programmers
(2) people who learn Python first never learn C
I think (1) is generally true. I would be interested to hear someone explain why they think having low-level programming knowledge makes you a worse programmer than one without low-level knowledge.
I disagree with (2). I took a high school class on Java, then spent several years programming in Python and becoming relatively proficient. As the author described, I eventually hit the "hard dirt" where I started having to worry about the OS, memory, etc. I then learned C and C++, operating system internals, high performance computing, GPU computing, etc., and all of the low-level ideas that the author thinks those who learn Python first will never know.
However, both statements (1) and (2) are generalizations, and I've only shown that I violate (2). I believe (2) may actually hold for many people, and I agree with the author's idea that if people learned C first we would have fewer programmers but perhaps also better programmers. Of course, "better programmers" is very hard to define and depends a ton on the domain of tasks over which you're evaluating the programmer.
I agree with you on (1). Knowing more is better than knowing less; this is almost tautological. Knowing C -- it doesn't matter whether it's your first language -- will make you a better programmer.
However, I disagree with the assertion from TFA that knowing C is somehow more "intellectually stimulating" than knowing Python. Specifically, I disagree with this:
> "I get the sense that Python-first programmers are missing out on a lot of the fun of programming. There’s a mistaken notion that programming is most enjoyable when you’re doing a diverse array of things using the smallest amount of code, leveraging client libraries and so forth. This is true for people trying to get something done to meet a deadline; but for people motivated by learning — and who aspire to be extremely good programmers — it’s actually more fun to create programs that do less work, but in a more intellectually stimulating way."
I find the above assertion ridiculous, and I know C better than Python. Learning programming with Python is more fun than learning with C. That's not to say C isn't fun; programming is fun as a whole. But Python is not just for people "trying to meet deadlines": it's way less puzzling and lets you get into the "interesting" part of your project faster, with fewer pitfalls and less boilerplate than C. That's the definition of "fun".
People who know C are better programmers. I would say that also people who know assembly language are better programmers. People who know a functional language are better programmers. People who know/have used a variety of languages are better programmers. Monoculture of any language or level of abstraction will limit programmer achievement.
> (1) people who know C are better programmers. I think (1) is generally true. I would be interested to hear someone explain why they think having low-level programming knowledge makes you a worse programmer than one without low-level knowledge.
This isn't related to the argument per se, but it's interesting to observe that website ninjas are making >$150k, whereas most low-level ninjas are making less. The reason is that most people want websites, and all of my SIMD knowledge becomes irrelevant in that case.
"Better programmer" is very hard to quantify. Whereas "makes more money, on average" is a solid metric.
From my limited experience, big companies (Google, Facebook, etc) are a good place to do low-level programming with good pay. Additionally, a lot of the programming in the financial industry (particularly high frequency trading) is very low-level work, and the pay is significantly better than the standard "Bay area web company".
I think the study is assigning health benefits to "being a runner" rather than just "running". It's likely that the selected group of runners who had been running regularly for 6 months had actually been running regularly for much, much longer, perhaps 10+ years. I'm definitely not an expert on the subject, but I don't know how many people are in good enough shape at age 60+ to start running, and I imagine most people running at age 60+ started running no later than their mid-50s.
The real thing you want to control for is the health of the runners and the walkers at the time the runners started running.
I'm not so sure about email not being used to communicate with friends.
I'm currently a senior in college. If asked during high school, I would have said exactly what your daughter said, except with Facebook messages replacing SMS.
At my college, all organization of student groups happens over email. I very quickly went from receiving <1 email/day during high school to receiving ~20 emails/day in college. Exposure to this mailing list culture at college (and during internships in industry) has made me an email person. I typically don't send short (~2-sentence) emails for social messages (I use Facebook for that), but if I need to send a paragraph to someone I do it over email. My group of friends splits our planning of events 50/50 between email and Facebook messages.
I'm not sure if this is just at the college I attend or if it's a more general phenomenon.
And yet, event management in Facebook sucks.
It's just where all the events are posted.
Facebook isn't good at events because of event management. It's just because they aggregate all events.
I consistently get invited to events in New York, Montreal, San Francisco, Buenos Aires, London, and Berlin. This is by choice: if I visit one of those cities, I want to know what to do. But why can't I filter my events view so that I only see events nearby? Why does my events feed have to be clogged just because I want to know what's happening when I travel and don't want to hide invites from certain event producers?
My experience was that organized groups will use email, but only really the leadership. I would also use email to communicate with professors in college, and now with clients (but not coworkers). Basically, email is now only for communications that need to be archived. You don't use it to send a long emotional rant to a friend because once the situation is over neither of you will need it ever again, so messaging apps to the rescue! Scheduling a meeting with a teacher, however, should be done by email, so that you can go back and prove them wrong when they claim they made no such commitment.
I'm probably dating myself a little, but this was true for me as well. Pre-college, everyone used AIM. During college, most social communication happened on email lists, with some 1:1 communication happening through the first version of google chat ('gchat'). As in, the one with the desktop client.
It's like that at MIT too. I also see people do one sentence emails like "Meeting at X location EOM." I never used it in high school but now it's the main form of communication for more than one on one chats.
nVidia's PTX is the most similar thing I can think of (and I know nVidia's GPGPU products pretty well right now). PTX is an ISA for a virtual machine, which makes it similar to HSAIL. PTX doesn't come with a memory model specification though.
However, I don't think either HSA or PTX is very important if Nvidia, AMD, and Intel (w.r.t. Xeon Phis) don't start agreeing on standards. I haven't seen anything to convince me that Nvidia wants to work within standards, and CUDA seems to have most of the market share right now, so I'm not too hopeful for portable heterogeneous computing in the near future.
To my grandparent poster: I don't think working with GPUs is that bad right now if you bind yourself to a vendor. In my experience, drinking the CUDA kool-aid (CU-aid?) isn't all that bad.
Forget the non-free CUDA runtime; there are larger problems with the CUDA toolchain with respect to free software. The actual instructions executed by Nvidia's chips are pretty much undocumented. Nvidia provides great documentation for PTX, which is essentially assembly for a virtual GPU. But all tools to convert PTX to SASS (the actual instructions run on the GPU) are proprietary and undocumented. Nvidia does this because it gives them much more freedom to modify the SASS language as well as the chips themselves. The lack of any SASS specification (and of any open source compiler that can target SASS) makes truly open source GPU computing on Nvidia hardware impossible.
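To sketch where the proprietary wall sits in the toolchain (a rough outline using the stock CUDA toolkit tools; exact flags and architecture names vary by toolkit version, and `kernel.cu` is a hypothetical source file):

```shell
# Step 1: compile CUDA C to PTX -- the virtual ISA, which IS documented.
nvcc -ptx kernel.cu -o kernel.ptx

# Step 2: lower PTX to SASS for a concrete chip. ptxas is closed source,
# and SASS itself has no public specification.
ptxas -arch=sm_35 kernel.ptx -o kernel.cubin

# You can dump the SASS with Nvidia's own tools, but without a spec
# there is no way to write an independent compiler that targets it.
cuobjdump --dump-sass kernel.cubin
```

So an open source stack can get as far as PTX, but the last mile to the hardware always goes through Nvidia's black box.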
Single precision is about 2.5x faster than double precision on current GPUs (for matrix multiplication, which is compute dominated). It really depends on the application, but in my experience, more often than not the only reason you are using GPUs is that you want to squeeze out every last ounce of performance. In these cases, single precision makes a lot of sense (assuming your algorithm doesn't depend heavily on 64 bits of precision).
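As a rough CPU-side illustration of one reason why (not a GPU benchmark, just my own numpy sketch): dropping from float64 to float32 halves the bytes you have to move, and memory traffic is often a large part of what you're paying for.

```python
import numpy as np

n = 512
a64 = np.random.rand(n, n)        # numpy defaults to float64 (8 bytes/element)
a32 = a64.astype(np.float32)      # 4 bytes/element

# Half the memory footprint means half the bytes moved per operation,
# which (alongside higher FLOP rates) drives the single-precision win.
assert a64.nbytes == 2 * a32.nbytes
assert a64.dtype.itemsize == 8 and a32.dtype.itemsize == 4
```

On GPUs the gap is often larger still, since consumer parts also ship far fewer double-precision units than single-precision ones.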
Within the neural nets community, single precision is almost always used (at least on GPUs).