
Do you think there is still a chance for iPad games that are well made and well advertised on the internet, and that cost perhaps over $10, maybe even $20, to do well?

I see it this way: first, you have the stuff that's top-ranking on the App Store, which is mostly f2p optimization problems. But hardcore gamers should be able to spread the word about an actually good iPad game, right? Or is the notion of an iPad game itself not hardcore enough?


That's right on with "too shy".

The issues of pedagogy, whether in music or programming, are largely sociological.

So why not make being a manic musician attractive and marketable like they do in movies about music?

I guarantee you that the core inspiration that starts (and to a large extent sustains) any involvement in any activity is aesthetic. This makes aesthetics the beginning and end of activities.

But in programming, we just use the manic people as Golden Children to somehow whip the other students into shape. This utterly fails of course, aesthetically and practically and in every other way imaginable. It's an ugly way of doing things and it doesn't actually produce good programmers.


Ok, so instead of reading a Wikipedia page or two, or the most elementary chapter of a probability/statistics textbook, and gaining a bunch more skills (among them the tools needed to reason about whether your MCMC is doing anything close to reproducing the original distribution), I can peruse a huge blog post filled with patronizing language that doesn't even list the Metropolis-Hastings algorithm in pseudocode. Got it; this is what progress looks like.


I feel as though your comment comes very close to "gratuitous negativity" of the type mentioned recently on the YC blog.

http://blog.ycombinator.com/new-hacker-news-guideline

Clearly, a blog post primer is not an alternative to an encyclopedia article or a textbook chapter. Rather, it provides something neither of those fulfills: a brief introduction to a technical subject that relies on very little prior knowledge.

I am sure that Jeremy is well aware of the challenges involved in writing in a style that is straightforward without being "patronizing". Perhaps you don't like the balance he has found, but is that a good enough reason to dismiss his efforts outright?

Lastly, much of the post is in fact a plain-language explanation of the Metropolis-Hastings algorithm, and he adds: "... if demand is popular enough, I could implement the Metropolis-Hastings algorithm in code".
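In the meantime, here's what a minimal random-walk Metropolis-Hastings sketch might look like in Python. To be clear, this is my own toy illustration (assuming numpy; log_p is whatever unnormalized log-density you supply), not code from the post:

    import numpy as np

    def metropolis_hastings(log_p, x0, n_samples, step=1.0, seed=0):
        """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal."""
        rng = np.random.default_rng(seed)
        x = x0
        samples = []
        for _ in range(n_samples):
            proposal = x + rng.normal(scale=step)   # symmetric proposal, so the
            log_alpha = log_p(proposal) - log_p(x)  # Hastings ratio is p(x')/p(x)
            if np.log(rng.uniform()) < log_alpha:   # accept with prob min(1, alpha)
                x = proposal
            samples.append(x)                       # rejected steps repeat x
        return np.array(samples)

    # Sanity check: sample from an (unnormalized) standard normal
    draws = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_samples=10_000)
    print(draws.mean(), draws.std())  # should land roughly near 0 and 1

Comparing the sample mean/std (or a histogram) against a known target is exactly the kind of check the GP was alluding to.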


I was really surprised to view the comments and find the attitude expressed in the GP's post at #1.

If anything, I read this and thought something along the lines of, "I wish some of the subjects I learned in college were explained this way, at least to start."

Now, I fully appreciate understanding fundamentals and not black-boxing things (see the ongoing framework debate posts). However, I didn't read that article looking to be spoon-fed a black box for ML or anything; I was just bored and curious. So it made for good reading.

Perhaps I'm generalizing too far, but it reminded me of when I was presented with the classic Lamport paper, "Time, Clocks, and the Ordering of Events in a Distributed System", in undergrad. This would have been around 2004. I had been working (i.e. coding for money) for some time by then, but back then you could still build a lot with a relatively stock LAMP stack (or similar). MongoDB, Riak, Redis, Cassandra, DynamoDB, etc. were not yet commonplace (much less things you could have a cluster of ready to play with by means of a simple 'vagrant up'). So when I was given the paper, without much context, I had a hard time really "getting it", or seeing how it applied to me. I'm pretty sure I eventually got some commentary on it from the professor in class, while he hurriedly went through PowerPoint slides, probably anxious to get back to his research. Then a test question or two about it later.

Years later, after working more and "distributed systems" becoming more of a tangible thing, I revisited it, and it made more sense. Anyway, my point is that had I gotten some sort of "here's what the paper is talking about, and how it applies to something tangible, in plainspeak", it would have been useful. So we shouldn't groan about an article like this; someone out there is going to understand the subject much better thanks to that post.

[Note to the reader who's very familiar with the vector clocks paper: I realize it's not written with that much jargon, and it DOES do a decent job in the introduction of using plain language, but the purpose was just lost on me. I mean, it makes reference to ARPANET and then goes on to determining event ordering. It just didn't equate to anything in my worldview, and I had done a decent bit of multi-threaded/multi-process programming by then too.]


It would get a better reception without the intro paragraph, and if it were titled "An Introduction to Metropolis-Hastings." The post really doesn't give you a taste of the most common MCMC algorithms and what they are useful for.

For instance, sampling from complicated Bayesian models and Gibbs sampling are the two most important things to introduce in order to roughly cover MCMC, and those are in the paragraph he calls "bullshit" because he doesn't know the words.
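For reference, Gibbs sampling just draws each variable in turn from its conditional distribution given the rest. A toy sketch for a bivariate standard normal with correlation rho (my own illustration, assuming numpy; not from the post):

    import numpy as np

    def gibbs_bivariate_normal(rho, n_samples, seed=0):
        """Each full conditional here is N(rho * other, 1 - rho**2)."""
        rng = np.random.default_rng(seed)
        x = y = 0.0
        out = np.empty((n_samples, 2))
        sd = np.sqrt(1 - rho**2)
        for i in range(n_samples):
            x = rng.normal(rho * y, sd)  # draw x given the current y
            y = rng.normal(rho * x, sd)  # draw y given the new x
            out[i] = (x, y)
        return out

    samples = gibbs_bivariate_normal(rho=0.8, n_samples=5_000)
    print(np.corrcoef(samples.T)[0, 1])  # should come out near 0.8

The appeal of Gibbs is that you never need a proposal or an accept/reject step, only the full conditionals, which is why it's the workhorse for big hierarchical Bayesian models.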


Sometimes I get the idea that people in CS want to "get all this machine learning/data science stuff" without understanding the principles.

Sure, it's trivially easy to build a classifier in R/scikit-learn (or even to code one up from scratch), so it may seem like it's no big deal. Similarly, it's trivially easy to hack up an inefficient SQL query without knowledge of relational algebra or normal forms. But I would argue that some basic knowledge is essential to applying these things.
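To illustrate just how trivially easy: a complete scikit-learn classifier is a handful of lines (standard scikit-learn API; the dataset choice is arbitrary):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.score(X_test, y_test))  # accuracy: easy to get, easy to misread

Which is exactly the problem: the code is the easy part; knowing whether that score means anything is where the principles come in.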


"Sometimes I get the idea that people in CS want to "get all this machine learning/data science stuff" without understanding the principles."

Thought I was the only one.


Not at all. People want easy things. Just look at all the companies that provide "Easy A/B testing" or "Easy lead scoring" via super-complicated-data-science-stuff-we-will-explain-in-1000-words-or-less blog posts.

It doesn't matter that the results are absolute junk. Customers pay money so they can feel good. And it's easy.


Pretty much. Also, as a non-white person I feel their echo chamber effects acutely. Casual racism and race "realism" are especially pronounced in nerd circles.


Arthur Schopenhauer wrote extensively on this, and I feel he said it best.

[Excessive reading] robs the mind of all elasticity; it is like keeping a spring under a continuous, heavy weight. If a man does not want to think, the safest way is to take up a book directly he has a spare moment.

- "Thinking for Oneself"

It contains a great quote from Pope's Dunciad (Book 3):

"For ever reading, never to be read!"

Link:

https://ebooks.adelaide.edu.au/s/schopenhauer/arthur/essays/...


Lewis Carroll also wrote an excellent essay using digestion as a metaphor for reading.

http://harpers.org/sponsor/balvenie/lewis-carroll.1.html


This was an excellent read, thanks for the link.

There's a great deal of nuance in the essay about doing vs. thinking vs. reading/consuming.


Hawthorne effect: "Feels good to track mood."


What was interesting is that the higher-performing group (Alpha) was stuck less often than the lower-performing group (Gamma). Is this the next programming motto after "Don't Repeat Yourself"?

Don't Get Stuck!

Now how does one avoid getting stuck while programming? I still get stuck a lot, but these really helped:

- Test what you write as quickly as you can.

- Hold it in your head! Or you won't have a clue what all your code, together, is doing.

- To get started, ask yourself, what is the simplest thing that could possibly work?


> Hold it in your head! Or you won't have a clue what all your code, together, is doing.

Do you mean to say: if you can't hold it all in your head, you're overcomplicating things? Because if so, I agree.

I just finished giving an introduction-to-programming course using Processing. I tried teaching my students to write structured code from the start, splitting functionality into smaller methods as much as is sensible. That way they never have to juggle too much functionality in their heads at once.

On the first day I had them call built-in methods. Then I introduced the concept of variables and types, then I introduced defining your own methods, and only after that did I move on to booleans, if/else, for-loops and arrays.

It seems to have worked well for getting them to structure their code from the start, using methods more or less the same way you would use chapters, headers, sub-headers and paragraphs to structure a paper.

I know this isn't the best way to program, obviously, but this analogy is (relatively) easy for them to understand and loads better than the ball of muddy spaghetti you often see with people who just start out. It's also probably fairly easy to make the jump to other more advanced forms of code organisation.

I also completely agree with your other two points by the way, I kept repeating similar statements throughout the course:

- By splitting functionality into small methods, it's easy to test whether something does what it is supposed to do (and I could easily correct them: "Hey, this method drawBall() is also doing hit detection. That's not what the method is supposed to do; restructure the code!"). See the sketch after this list.

- "The computer only does what it is literally instructed to do. What do you literally want to do? What are you literally instructing the computer right now?"


When I see students posting on /r/programming, what they usually seem to have a problem with is problem decomposition. That is: what are the steps needed to solve this part of the problem? Call it algorithm design, critical thinking, or whatever you want. How do I get from A to B, basically.

Remembering back to my first year, it seems that some people "get this" and some do not. Which is nothing against them -- everyone is different.


> - "The computer only does what it is literally instructed to do. What do you literally want to do? What are you literally instructing the computer right now?"

I feel like after you really know a system you almost start thinking only in terms of its components. "Hmm I want to do X...somehow I just automatically thought to modify function Y and call it in module Z."


> What was interesting is that the higher-performing group (Alpha) was stuck less often that the lower-performing group (Gamma).

Which may just be correlated with high performance, not necessarily the cause.

If you make "don't get stuck" a motto, you can easily imagine people skipping over difficult problems and not thinking about a good solution.


"don't get stuck" reminds me of the feynman method of problem solving:

1. Write down the problem. 2. Think very hard. 3. Write down the answer.

Rich Hickey's Hammock Driven Development, https://www.youtube.com/watch?v=f84n5oFoZBc, is IMO very good for actually not getting stuck in a bad place problem solving.


Wow, this was a big wake-up call. I am in research, and I always did a lot of theory and problem definition up front for whatever I worked on, but I am now in a "reckless rapid prototyping" phase where, if I get some idea, I have to try it out RIGHT NOW, stepping back be damned. Ironically, it results in getting stuck more often than I would like!

I guess this is the fourth tip?

- Step away from the computer


I share your concern that this will result in a lot of sloppy solutions to lots of little problems, which in turn results in one big mess of sloppy-but-it-works solutions, which can be a big problem (but doesn't have to be). However, even when working out a good solution, I think it's important not to get stuck. Make sure you are making some sort of progress; it's okay that it takes time, but it is not okay if someone is just moving in circles instead of inching closer to the solution. Basically, if something takes "too long" to get working, try a different approach or come back to it later and try again. I quite like the "Don't get stuck" motto if we look at it that way.


"If you make "don't get stuck" a motto, you can easily imagine people skipping over difficult problems and not thinking about a good solution."

It is also important to define what "getting stuck" means. In the research, the Alpha performers were the ones who fulfilled the goal quickest. If you switch the point of view from the teacher's to the learning student's, then instead of fulfilling the requested goal quickest, you might follow an interesting problem that you discovered along the way. Is that getting stuck? No! It is following your own path of learning, which is a very effective method. It does not look best to the teacher, but it can be very good for the learning student.

Asking your own questions and answering them is an important part of a learning process. It also makes you independent, so you don't simply do most efficiently what someone told you to do.


Overcoming getting stuck is the difference between a highly productive dev and an average dev, especially when the problem being worked on is hard. I am a very average dev in terms of coding skills, but due to my background I have learned strategies to "unstick" myself after ending up in the middle of a massive mud pit. This has allowed me to solve problems that better devs could not.


> Test what you write as quickly as you can.

Until you run into that one problem where you can't test between the small steps, because you need the whole thing to be up (to some degree) and working for any test to work.

Operating system kernels would be an example of that: the best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls to do something sensible, and if any of the required parts doesn't work, the whole thing fails.

Another example would be refactoring a complex library into something more manageable. If you keep working in small, testable steps you can move only along the gradient, bordered by "can execute" and "doesn't execute". So you'll be able to reach only a local extremum. Now if you're in the fog and don't know where to go, that's fine. And for most development this is exactly how it happens. But sometimes you can see that summit on the other side of that rift and you know you have to take a leap to get over there.

I spent the past 3 weeks doing exactly that, refactoring a codebase. I knew exactly where I wanted to go, but eventually it meant working for about a week on code without being able to compile it, let alone test it, because everything was moving around and getting reorganized. However, now I'm enjoying the fruits of that week: a much cleaner codebase, easier to work with, and I even managed to eliminate some voodoo code nobody knew why it was there, except that it made things work and things broke if you touched it.

> - Hold it in your head! Or you won't have a clue what all your code, together, is doing.

Or, sometimes it's important to get it out of your head: take a week or two off and look at it again with a fresh mind and from a different angle. Often problems seem hard only because you're approaching them from that one angle, and you're so stuck on wanting to get it done that you don't see the better way.

Instead you should write code in a way that it's easy to get back into it.

> - To get started, ask yourself, what is the simplest thing that could possibly work?

And then wonder: What would it take to make this simple thing break. Make things as simple as necessary but not simpler.


> Operating system kernels would be an example of that: the best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls to do something sensible, and if any of the required parts doesn't work, the whole thing fails.

So start with something simpler. Start by making a kernel that can run /bin/true, that never reclaims memory, that only boots on whichever VM you're using for testing. You absolutely can start with a kernel that's simple enough to write in a week, maybe even a day or hour, and work up from there. See http://www.hokstad.com/compiler for a good example of doing something in small pieces that you might think had to be written all at once before you could test any of it.

> I spent the past 3 weeks doing exactly that, refactoring a codebase. I knew exactly where I wanted to go, but eventually it meant working for about a week on code without being able to compile it, let alone test it, because everything was moving around and getting reorganized. However, now I'm enjoying the fruits of that week: a much cleaner codebase, easier to work with, and I even managed to eliminate some voodoo code nobody knew why it was there, except that it made things work and things broke if you touched it.

Which is great until you put it back together and it doesn't work. Then what do you do? I've watched exactly this happen at a previous job, and been called in to help fix it. It was a painful and terrifying experience that I never want to go through again.

In my experience, with a little more thought you can do these things while keeping it working at the intermediate stages. It might mean writing a bit more code: shims and adapters and scaffolds that you know you're going to delete in a couple of weeks. But it's absolutely worth it.


If you haven't, you really need to see this post.[1] Starting simple and writing a test-driven algorithm is not necessarily bad. However, realize that you are really just turning the search for the optimal solution into a walk through a search space, where you have to assume mostly forward progress at all times. Not a safe assumption. At all.

And, because I love the solution, here is a link to my sudoku solver.[2] I will confess some more tests along the way would have been wise, though I was blessed with a problem I could just try out quickly. (That is, running the entire program is already fast. I'm not sure of the value of running the tiny parts faster.)

[1] http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...

[2] http://taeric.github.io/Sudoku.html


I've seen it, but it's just utterly alien to my experience. Partly the problems I encounter professionally don't look much like Sudoku; partly the things that are important in a large codebase are different from the things that are important in a small example. But mostly I think people realise when they're not getting anywhere, and if they don't, others will point it out. That's partly why TDD tends to be found in the same places as agile with daily standups: you get that daily outside view that stops you just spinning your wheels the way that blogger did.


I have seen exactly this style of thinking happen in a large code base. Some of it was my own, sadly.

The odd thing to me is that you say this style of problem doesn't happen in a large codebase. But to me, this style of problem just happens many times over in a large codebase. That is, large problems are just made up of smaller problems. Have I ever used the DLX algorithm? No. Do I appreciate that it is a good way to look at a problem you are working on? Definitely. I wish I had more time to consider the implications there.

More subtle in your post, to me, is the idea that with the right people the problems don't happen. This leads me to this lovely piece.[1]

[1] http://www.ckwop.me.uk/Meditation-driven-development.html


I think as you get more experienced, what is considered a "small step" changes, as you're able to keep more context in your head (subconsciously, of course). For the complex-library example, that would mean keeping the new vs. the old architecture at the forefront of your mind instead of switching all thought to "const? maybe? what does that mean in this context?" and "how can I get rid of this reinterpret_cast?"

Absolutely agree with "Instead you should write code in a way that it's easy to get back into it" though!


> Operating system kernels would be an example of that: the best test for a *nix OS kernel is whether it can run a shell. You need all the essential syscalls to do something sensible, and if any of the required parts doesn't work, the whole thing fails.

How do you test a kernel? Run a virtualised kernel? I think at least DragonFly BSD supports that.


Kernel debugging always has been kind of a dark art. What helps greatly is if you can tap into the CPU using some hardware debugging (JTAG or similar).

Today you can also exploit the busmaster DMA capabilities of IEEE1394 to peek into system memory.

However, the most widely used method is still pushing debug messages directly into a UART output register, bypassing any driver infrastructure, and hooking up a serial terminal on the other end.


It's also likely that these guys had already programmed before coming to the class. We all tend to get stuck when programming something for the first time. Second and third times are much smoother experiences.


You’ve reminded me of a quote from Chuck Moore:

> Before you can write your own subroutine, you have to know how. This means, to be practical, that you have written it before; which makes it difficult to get started. But give it a try. After writing the same subroutine a dozen times on as many computers and languages, you'll be pretty good at it.

Or, as Fred Brooks put it, "plan to throw one away".


I'd offer two bits of marketable wisdom from this.

- think back. Students who change style do better. This implies that those students are reflecting on their own style and trying out techniques (performing experiments) to find out what works best for them.

- think ahead. Use lookahead to avoid backtracking. Some amount of top-down planning will make your life easier and avoid traps, even if your natural style is bottom-up.


My interpretation of that part of the article was "refactor early and reimplement often".

That captures the concept of switching between planning and experimenting modes, with the end result of not getting stuck.

I may be biased in having learned this stuff a long time ago, so when I get stuck it's from getting hopelessly backed into a corner, at which point the best solution is pretty much to start over differently. First-semester programmers might get trapped a little more easily; they don't even know the common ancient anti-patterns, so they don't know where to start looking when debugging.

Reading between the lines, it almost sounded like the A-level programmers wrote their tests first (which is usually easy), then wrote code to pass the tests (usually not too awful), and then they were done, whereas the lower-grade programmers wrote the code first and then tried to debug (and debugging is always harder than test writing or coding).
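That reading matches plain test-first style: write the test you wish would pass, then the smallest code that passes it. A generic sketch (the function and values are made up, not from the article):

    # 1. Write the test first; it fails until median() exists and is right
    def test_median():
        assert median([3, 1, 2]) == 2
        assert median([1, 2, 3, 4]) == 2.5

    # 2. Then write just enough code to make it pass
    def median(values):
        ordered = sorted(values)
        n = len(ordered)
        mid = n // 2
        if n % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    test_median()  # silence means both assertions passed

Debugging then shrinks to "which small assertion broke" rather than "why is the whole program wrong".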


Each of your three bullet points has a related developer concept:

* Test early, test often, test automatically [0]

* Remember the big picture [0]

* The minimum viable product [1]

    [0] - https://pragprog.com/the-pragmatic-programmer/extracts/tips
    [1] - http://practicetrumpstheory.com/minimum-viable-product/


Yeah, I really reckon that "what is the simplest thing that could possibly work" is the vital question. If you stray from that, getting stuck wastes what could otherwise be productive effort. I do have to wonder, though: why is programming so complicated? Are silicon and neurons so fundamentally different that we're always going to be fighting ourselves, or will interfaces one day make programming far more natural?


> - To get started, ask yourself, what is the simplest thing that could possibly work?

I'm not sure I would go for what appears to be the simplest now; I try to find the implementation that will be the simplest to use/modify/delete in the future.


This is also known as value _creation_ versus value _transference_:

http://halfsigma.typepad.com/half_sigma/2008/03/value-creati...

I do not completely agree with the guy's views; in particular, I do not believe there is anything other than value transference, and value "creation" just means you are closer to physics, transferring value at the physical level, than to politics and sales, transferring value from other people. But it does cut the bullshit.


Aka the importance of mentally being a headless chicken. You're not being "introspective" or otherwise "deep" when anxious. You're just applying a blanket negative bias to all of your observations, and then looking around mostly to confirm the bias. Anxiety is crippling, full stop.


Is the blotchiness due to the uncertainty principle or to sensor imperfections? Maybe both?


That is energy vapour surrounding individual photons.


Poor statistics, I would guess.

When you're shooting that fast, there are few photons per frame.
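You can see the effect with a toy simulation: at a handful of photons per pixel, Poisson shot noise dominates and the frame looks blotchy (my own illustration, assuming numpy):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.full((8, 8), 5.0)  # true intensity: 5 photons per pixel on average
    frame = rng.poisson(signal)    # one short exposure: counts scatter wildly
    print(frame)
    print("relative noise ~ 1/sqrt(N):", 1 / np.sqrt(5))  # about 45% at N=5

Averaging many frames (or exposing longer) brings the relative noise down as 1/sqrt(N).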

