That was my experience with math, and has for the most part been my experience with computers. Even when I was "on strike" in 6th grade and refused to do any schoolwork, I found it very difficult to avoid the math tests or homework. I tried once and found it so boring that on all further math work, if I wanted to protest how stupid the assignment was, I'd just do all the work and write down the wrong answers in the answer column or do it and refuse to turn it in.
When I quit my job to work on my startup, my former boss threw up all sorts of objections like "You have no idea if this'll work, and if it doesn't, you might find yourself writing lots of code for nothing, and that really sucks." And I just kinda looked at him a little funny. What I wanted to say was, "Dude, that's what I do anyway. That's what I do for you, that's what I do when I get home, and that's what I'd continue to do even if I cashed out for millions. There's nothing wrong with writing lots of code for nothing, and on the off chance it succeeds, it means I won't have to work for superficial idiots like you." (I didn't; I just said "Okay. I'm still leaving" and left.)
I wonder about this. In most of the big things I've done, courage seems the limiting factor. I find working on something of my own fairly interesting regardless of the subject matter. Could it be that it's my own conception of myself as capable that's been pulling me through?
Interestingly, our sales guy (who'd been sent to me earlier to get me to stay) saw this immediately. After I explained why I was doing this, he said immediately, "So, even if the Goldman Sachs deal came through tomorrow, and we were rolling in customers, you'd still leave?" I said "Yeah. This is something I have to do, regardless of what the outcome is or what I'm giving up in the process." And he just said, "Interesting. I see where you're coming from. You should go do it; when you're this set on something, you're doing a disservice both to us and to yourself to not pursue it."
There's nothing wrong with writing lots of code for nothing
...is the logical possibility that an assertion can be shown false by an observation or a physical experiment.
It's probably stupid of me to respond to people by saying "wrong", but like everyone else I get weary, and so I have two conflicting feelings:
One feeling, to show why someone is wrong.
A second feeling, to merely assert that he is wrong.
The latter is quite a bit more efficient. I have this feeling almost every time I read one of nostro's comments, but usually suppress the urge. Oh, well. I mean, here's what he's positing: That abilities are based on different capacities for pleasure/pain, or somesuch. It's so silly, so unrealistic, that such a claim really does require actual evidence to even be taken seriously. The whole evolution of man would be nothing but redirecting pleasure and pain, rather than language, and intelligence, and motor skills, (etc, etc, etc) if that was the case. NONSENSE.
Of course, since no evidence exists for such a thing, no evidence was presented. And that is that. It's not easy to prove a negative and so I just say, "Wrong". This rubs people badly. Oh, well.
I'm simplifying somewhat, but...you appear to be saying that, for some definition of "cause", the evolution of man is "caused" by "language, and intelligence, and motor skills, (etc, etc, etc)". It is wrong, you say, to claim that the "cause" could be "nothing but redirecting pleasure and pain" (I'm not sure from where in the original you get that "redirection" aspect, but I'll copy it over).
But your argument is at a different level than that of the OP. Whereas the OP offers a theory, or maybe rather a hypothesis, of what could "cause" the evolution of language, intelligence, motor skills, &c., suggesting pleasure/pain differentiation as a primitive process, you just claim language, intelligence, &c. as somehow "primitive" processes in causing human evolution. You're not even trying to find out what "causes" them, whereas the OP is. You claim "non-falsifiability" for your opponent's claim, which I don't take for granted: it seems possible to check whether there are differences in the pleasure/pain differentiation of individual test subjects, and see whether that is correlated with those individuals reaching their goals. You just seem to want to avoid meeting the argument.
Which is wrong, I think.
No, I said the opposite.
You just seem to want to avoid meeting the argument
I didn't want to argue at all. I wanted to assert just like the parent asserted. We both asserted and now it's over.
Faster, perhaps, but apparently not efficient at getting people to agree with you.
> That abilities are based on different capacities for pleasure/pain
If what you achieve depends on how you think about work, and how you think is controlled by how your brain works, then you can't dismiss the argument out of hand. But I'm no neuroscientist, so don't ask me....
Tip: Google Books leaves out a few pages (in my case, 34 and 35). To read these, I went to Google Books in a different browser and searched for text from the top of the next page I did have (36). Their back button then took me to the missing pages. Note that using a different browser is essential (I suppose they use a cookie).
Google Books is like being strapped to a library and tortured. And I do not have time to dork around with three different browsers.
Could somebody paraphrase this in a blog post and free it from the tyranny of restrictive copyright?
Note to publisher: I see that your 456-page book costs $45. Today I just want to read one article that spans 13 of those pages, without leaving my chair, waiting 2-4 days, losing my train of thought, or spending extra money on other pages that I don't have time to read anyway. If it were technically possible, I would happily pay you 13/456 of the book's price -- $1.28 -- for an electronic copy of the pages in question. I might even pay $2. But it's not technically possible, so I guess I'll just have to hate you today.
You have a valid point - it would be great if you could buy books per page - especially compilations. Maybe there's an opportunity for someone.
I just wanted to register my annoyance at the state of online publishing while it was still fresh.
I'm sure Scribd has considered this model.
(Was about to write "people who hack", but think that hacking itself is a qualitatively different way of writing computer programs than what most people do.)
* Learn how to use their tools effectively
* Aggressively simplify
* Understand math
* Understand fundamental computer science principles
Thank you for also realizing this. It is just mind-boggling to me how many IT people are so utterly inept when it comes to basic mathematics.
I truly believe that the fact that so many IT employees don't know math is leading to this superficial computer-geek culture.
Using functional programming techniques, using macros, writing DSL's come to mind first.
More generally, trying to do what PG is doing with Arc - writing code that does the most using the least number of "thoughts", abstracting patterns as much as possible, the DRY principle, etc.
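To make the "abstracting patterns" point concrete, here's a minimal sketch of my own (not from the thread): the kind of higher-order-function compression the comment is gesturing at, where small reusable predicates are combined into one validator instead of repeating the checks everywhere. The names (`make_validator`, `non_empty`, `no_spaces`) are illustrative, not from any particular codebase.

```python
# Hedged sketch: collapsing a repeated validation pattern with a
# higher-order function, so each rule is written exactly once (DRY).

def make_validator(*checks):
    """Build one validator out of small predicate functions."""
    def validate(value):
        return all(check(value) for check in checks)
    return validate

# Each predicate is a tiny, reusable "thought".
non_empty = lambda s: len(s) > 0
no_spaces = lambda s: " " not in s

valid_username = make_validator(non_empty, no_spaces)

print(valid_username("zed"))       # True
print(valid_username("zed shaw"))  # False
```

Adding a new rule is then one new predicate plus one argument, rather than another copy of an if-chain.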
I do wish this author's prose style was less excellent at concealing this fact.
(see http://news.ycombinator.com/item?id=185533 )
I so disagree. First of all, the article has more than one point; I'm surprised that's not obvious. For another thing, its supporting evidence and reasoning are considerable, relevant, and fascinating. More than that, I personally found it to be rich in subtleties, of the kind that would draw me back to reread and reflect. Does a vitamin C pill get across the entire point of an orange?
I know 35 pages of blah blah blah when I see it. This piece deserves closer attention. Look again.
Geoffrey: One thing I’ve always been fascinated by whenever I’ve talked to you is just your personal process of development. You’re definitely someone who tries to pick the best tools and customize them to make them work as well as you can. You also have something where you keep track of your bug rate and how many bugs you’re writing, and tests that fail, and then you adjust your process. How does that work?
Zed: That’s not recommended for everyone. You have to basically be really, really disciplined. I’m actually not really, really disciplined; I’m doing it on one project, on my U2 project, and I’m trying out basically kind of like a quality control process – statistical quality control. All I do is I track a bunch of metrics that don’t necessarily say how many bugs there are exactly, but they’re indicators of the bugs. I track them over time, and then I use statistics to tell me if I’m starting to suck or if I’m improving.
I’m doing mostly C coding on that project, so a lot of this is I’m running my program under Valgrind with heavy testing. Then I track what my test coverage is, and then basically it’s just a series of numbers that stream across my screen as I code. It’s kind of like autotest – when I compile the thing it runs the tests – and then about every maybe 300 samples I take a break, go in and crunch the numbers, and I see if I did better than last month.
A lot of times what I’ll do is I’ll try a new technique; I’ll try a technique for a while and then I’ll go crunch the numbers and see if I actually had a statistical improvement or not. That’s the biggest thing; I don’t waste my time on stuff that doesn’t actually improve the bug rate – the defect rate.
For example, at first I wasn’t doing code coverage. I wanted to see if code coverage – that is, your tests’ coverage of your code – improved your testing. I wanted to see if that improved quality. So I didn’t do any code coverage. I measured all of my defect rates and figured out what my average defect rate was. I did maybe about 700 or 800 samples.
Then I started doing code coverage and beefing up my code coverage. I spent maybe about a month improving my code coverage. In C code it’s real hard to get really good coverage because so many lines do so much stuff. But I got it up to about 60 percent. Then I went and crunched the numbers again to see if increasing the code coverage in test improved my defect rate.
What happened was it didn’t improve my defect rate; my defect rate was still about the same. What it did improve was when I made changes – like if I had to do refactoring – it reduced the amount of time to get my defects back down. So you make a change, you do your refactoring, your defects go up, and then you have to spend time fixing all that.
With heavier test coverage it made that go down quicker, but it didn’t really improve my defect rate much. There’s some complexity in that. When you have more coverage, you are seeing more of your defects, so that’s part of it, but I found that test coverage doesn’t really produce an improvement in quality initially. It mostly just improves your time to fix later.
But anyway, that’s some weird stats crunching. The process actually comes from the Capability Maturity Model guy’s – Watts Humphrey’s – Personal Software Process. So all you’ve got to do is go get his book and go through what he recommends. The key is: as you code, keep metrics, and then crunch numbers to see if that’s improving things for you, and that’s really all it is.
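The "crunch the numbers" step Zed describes can be sketched very roughly. This is my own minimal illustration, not his actual tooling: compare defects-per-session samples before and after adopting a technique, using a crude Welch t-statistic to decide whether the change is more than noise. The sample data is hypothetical.

```python
# Hedged sketch of before/after defect-rate comparison (not Zed's tool).
# A positive t-statistic well above ~2 suggests the "after" period really
# does have fewer defects per session, not just random variation.
import statistics

def welch_t(a, b):
    """Rough Welch t-statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical defects-per-session logs, before and after a technique change.
before = [5, 7, 6, 8, 5, 7, 6, 9, 6, 7]
after  = [4, 5, 3, 5, 4, 6, 4, 5, 3, 4]

t = welch_t(before, after)
print(f"mean before={statistics.mean(before):.1f}, "
      f"after={statistics.mean(after):.1f}, t={t:.2f}")
if t > 2.0:  # crude threshold; a real analysis would use a proper test
    print("likely improvement")
```

A real version would use a proper significance test (e.g. SciPy's independent t-test), but the point is the same: measure, change one thing, measure again, and only keep techniques the numbers support.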