What Problems to Solve (cat-v.org)
574 points by kamaal on July 14, 2014 | 45 comments

This seems to echo what Richard Hamming says in "You and Your Research". He recounts how his fellow scientists were working on unimportant problems in their field. The judgement of "unimportant" came not from Hamming but from the scientists themselves, when he asked them, "What's the most important problem in your field right now?" His follow-up question was, "How come you're not working on it?" He says that didn't earn him many friends.

By "important", I had also mistakenly thought that he meant "grandiose". But in fact, he later defines "important problems" as "problems that you have a reasonable angle of attack to solve".

Some problems just aren't ripe yet; they stay that way until all the pieces needed to solve them come together. That makes "Why now?" a very good question to answer when contemplating which problems are important.

When someone writes about how Silicon Valley is working on worthless problems, like photo-sharing apps, or the Yo app, or Twitter--or, back when Groupon was on the rise, the seeming glut of daily-deal sites--it makes me think of this. I wonder if they take into account that some problems they consider worthwhile simply aren't ripe to be solved yet.

I think it's useful to distinguish between kinds of problems. The problem of understanding friction between two polished surfaces is a different category of problem from those solved by Yo or Groupon. The payoff of the former is the satisfaction of curiosity and, arguably, an irreversible advancement of human knowledge. The payoff of the latter is, at best, a small improvement in utility for a small percentage of humans over a relatively short span of time.

The dirty little secret of the valley is that (most) people don't care about this distinction. There are plenty of people made rich by AOL buyouts and MySpace acquisitions - and they are just as rich even though their creations didn't last. There is a tacit acceptance of the truth that technology is fashion, not science.

Why is this distinction important to your point? Because "ripe to be solved" takes on a very different character in the fashion-technology sense than in the science-technology sense. Indeed, I'm not sure it can even be applied to fashion technology like "Yo", since the elements of success are entirely self-referential, as with all fashionable things. Or do you think "Yo" was really just waiting for TCP/IP and Objective-C and the iPhone etc. before it could be "realized"?

For any product in the marketplace, the "Why now?" question isn't just asking what technological tools and breakthroughs have made it possible. You must also ask what user behaviors and markets now exist that make it possible. For a product to be adopted by many, you need both, regardless of whether it's a "science technology" or a "fashion technology".

Many people focus on the former as innovation while completely disregarding the latter. Some progress in products can only occur once users have changed their behavior and expectations enough for you to leverage it.

We have had the technology to build something like Airbnb since the '90s. But I don't think it would have worked then, because people were still wary of meeting strangers from the internet.

We have had the technology to build Groupon since the '90s. But I don't think it would have worked then, because people were only starting to get used to the idea of paying for things online, much less doing it together with strangers.

This is why, at some points in time, Silicon Valley focuses on problems that can be solved because user behavior has changed, rather than on problems that can be solved because of a tech breakthrough--sometimes there are no new breakthroughs to be leveraged. That's when you get the Twitters, Groupons, and Yos, and that's OK! It's like filling in the gaps and fully exploring the ramifications of a new technology.

It's why I started working on the D programming language. I figured I only had so many productive years left, and what was I going to spend them on? I wanted to do something that mattered.

Is it a good idea to go through life thinking you're going to spend the rest of it doing inconsequential things in just a few short years? Ken Thompson seems to have stayed productive. So has pg. Maybe people stop being productive because they start to buy into the idea that they can't be.

"But let me say why age seems to have the effect it does. In the first place if you do some good work you will find yourself on all kinds of committees and unable to do any more work. You may find yourself as I saw Brattain when he got a Nobel Prize. The day the prize was announced we all assembled in Arnold Auditorium; all three winners got up and made speeches. The third one, Brattain, practically with tears in his eyes, said, "I know about this Nobel-Prize effect and I am not going to let it affect me; I am going to remain good old Walter Brattain." Well I said to myself, "That is nice." But in a few weeks I saw it was affecting him. Now he could only work on great problems.

When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren't good afterwards, but they were superb before they got there and were only good afterwards."

excerpt from "You and Your Research" by Richard Hamming

This could be a simple case of regression to the mean, too.

My dad's health is stopping him.

You can't run forever, that's just how it is.

Edit: that said, his latest contract gig has him more active and motivated than I've seen him in years.

I didn't mean "all people." Of course health problems are going to stop someone. But it seems like most people just give up and transition into other work even though there's nothing stopping them from pursuing something they feel is important.

>"Why now?"

This note might deserve a little more emphasis. It's routine in history that problems are solved only once the tools become available, as a stunning list of coinciding discoveries attests:


Fermat's Last Theorem was an open problem for some 350 years, but was solved (1993) within two decades of the introduction of the Frey curve (1975).

What this stresses is the importance of finding your own problem: choosing your own path instead of waiting to be assigned a problem and subsequently becoming unhappy. That is what Feynman calls out as a 'mistake' on his part: not letting, or even not DEMANDING, that his student choose his own problem.

Ultimately, it's the advice to change your frame of reference from 'the sum of human knowledge' to 'what problem can I solve today, immediately?', and to ask yourself that question first and find an answer.

That's not only smart but about the wisest thing I ever heard someone say regarding work.

There is also the component of being sad and not knowing why, while you work on what you think is interesting and important.

I remember reading how Feynman was unhappy with his work and then chose a seemingly useless but fun and interesting problem to solve: the physics of plate wobbling (if I remember correctly, the relation between the wobble rate and the spin rate of a plate that has been tossed into the air).
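For the curious, the relation can be sketched from standard torque-free rigid-body dynamics (this is the textbook treatment, not necessarily Feynman's own derivation): for a symmetric top with moments of inertia $I_1 = I_2$ and $I_3$ about the symmetry axis, tilted by a small angle $\theta$, the axis precesses at

```latex
% Torque-free precession rate of a symmetric top (the tossed plate)
\dot{\phi} = \frac{I_3\,\omega_3}{I_1 \cos\theta}
% A thin uniform disk has I_3 = 2 I_1, so at small tilt:
\dot{\phi} \approx 2\,\omega_3
```

i.e. the wobble rate is roughly twice the spin rate for a thin plate. (Amusingly, Feynman's memoir states the ratio the other way around, a slip later commentators have pointed out.)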

His colleagues were a bit confused as to why he would do this, but he had fun and his love for physics was rekindled.

Some years later, the mathematics he derived would be used when the first satellites were launched and wobbled as they spun, the wobbling not being desirable. Not bad for useless and fun work!

Not just that: he attributed his Nobel to that research!


It's interesting to look at what may be a homepage for the recipient of the letter:


The letter is from an excellent collection of Feynman letters, "Perfectly Reasonable Deviations from the Beaten Track". There's a thoughtful review of the entire collection (by Freeman Dyson) here:


I asked somebody who can find out what Dr. Mano did with the advice: my dad, who still communicates with some of Tomonaga's students. But he said he would have to ask around.

"It seems that the influence of your teacher has been to give you a false idea of what are worthwhile problems. The worthwhile problems are the ones you can really solve or help solve, the ones you can really contribute something to. A problem is grand in science if it lies before us unsolved and we see some way for us to make some headway into it."

That is very good advice. You could replace 'teacher' with 'corporate manager' and 'science' with 'society', and it becomes relevant to everyone working a normal corporate job or at a start-up.

Adding to the sum of human knowledge, or improving upon it: which is more important?

"Dark pictures, thrones, the stones that pilgrims kiss,

poems that take a thousand years to die

but ape the immortality of this

red label on a little butterfly."

-- Vladimir Nabokov

All his books--including Lolita--should be required reading in high school and college. His writing just flows. I remember a college professor who wanted his class to read Lolita, but was told that if he wanted to reach tenure, he would be wise to just let it go. Maybe things have changed? After college, I don't think I would have ever picked up another book of fiction if I hadn't discovered Vladimir Nabokov.

I wonder if the original uses a word closer to "mark" than "label"? As it stands, it kind of reminds me more of the end of "Do Androids Dream of Electric Sheep (Blade Runner)" by Dick -- where Deckard finds what he thinks is a wild animal, only to turn it over and discover it's branded and is an artefact after all...

I think that is the original.

Isn't the original in Russian?

[edit: Apparently not; I wasn't aware Nabokov wrote in English as well as Russian. Some relevant links:

"On Discovering A Butterfly" by Vladimir Nabokov May 15, 1943:


[ edit3: sadly behind a paywall ]

A (somewhat inaccurate, but interesting) commentary:


edit2: Almost forgot: clearly the original uses "label" intentionally. Funny how the verse in isolation seems to reference nature more directly (along the lines of Blake's "Tyger! Tyger!"), while clearly that's not the (main/only) intended reading. ]

Wow! Subliminal eternal poem indeed!

In a similar vein, look at what Leonard Adleman (the A in RSA; DNA computing, integer factoring, etc.) says in this interview (skip the Chinese paragraphs): http://teacher.scu.edu.cn/ftp_teacher0/chjtang/teach/adleman...

NW = The interviewer

LA = Leonard Adleman


NW: They say the most creative and challenging part of research is finding the right question to ask. Do you agree with that?

LA: I wouldn't characterize it as the most challenging thing, but it's of critical importance. Sometimes it's not hard to find the "right question". For example the mathematics literature is full of very important unanswered questions. In this case, the real issue is: Has that question's time come? Have we reached a point where developments in the appropriate area of science give us some chance of breaking the problem? For example, I worked on a famous centuries-old math problem called "Fermat's Last Theorem". I was not 'strong' enough to solve it, but I find some solace in the fact that my intuition that its 'time had come' was right. The problem was finally solved two years ago by Andrew Wiles of Princeton. It was one of the major events in the history of mathematics.

The other side is to generate new questions. That's a funny process. The way I seek to generate new questions is to start to look at whole new fields, like biology, immunology or physics. Since I come from a different field, mathematics, I bring an unusual point-of-view that sometimes allows me to generate questions different from the classical questions in those areas. Like the question of DNA computing.

For the young scientist, this question of choosing the right question to spend your valuable limited intellectual resources on is critical. I often sit for months and do no productive work that anybody can see, because I don't feel I have a good enough question to work on. Rather than take on some lesser question, I would prefer to read a mystery novel. The point is, sometimes it's important to lie fallow for a time waiting for the 'right question' to appear, rather than to engage in uninspiring work and miss the important opportunity when it comes.

But in the end, the real challenge of science is to make progress - to succeed, to contribute knowledge.

NW: Of course, in an academic setting, there's that drive to publish or perish...

LA: Yes, that's a problem, because you have to feed your family. But I always tell my students and junior faculty that they are better off following their inspiration and their hearts in what research they do, that they should always try to take on the most interesting and important problems, that they should not waste their time on little problems just to make another line on a vitae.

My philosophy is that it's important, in a curious way, for scientists to be courageous. Not physically courageous, but courageous in an intellectual way. I believe that by working on extremely hard problems, by being courageous, you may succeed. But even if you fail, you fail gloriously. And you will have learned immense amounts, you will have extended the envelope of what you can do. As a byproduct of failing on a great problem, I have always found that I could solve some lesser but still interesting problems - which then fill your vitae.


I'm having trouble reconciling Feynman's and Adleman's advice. Adleman would rather read a novel than work on anything but the most important problems whose time has come. Feynman got his mojo back by understanding spinning plates, and famously loved to goof around with physics problems. Hamming would probably agree with Adleman here.

Perhaps they would agree that one should have some ambition and do what's important, but Feynman would have you sneak up on it from behind, while keeping the juices flowing with playfulness. I'll go with that.

Well, there's no reason to reconcile them. They can have differing opinions on how to approach problems. But don't think that Feynman was working nonstop on spinning plates and those diagrams he invented. He liked working in strip clubs, started drawing a bunch, and randomly learned Portuguese when he followed a pretty woman onto a plane to Brazil. There he joined a street band and learned to play instruments while continuing his drawing. This seems pretty comparable to the idea of reading a book for a break.

Regardless, you can find people's ways of focusing to differ and not have to reconcile every smart person's personal way to study and research.

To me, Feynman's example of the plates is something that you do in addition to Adleman/Hamming's advice.

Of all the problems to work on, work on those that offer a reasonable angle of attack (Adleman/Hamming) and that you have the ability to solve (Feynman). In addition, make sure you plant little seeds instead of only working on big problems after you've succeeded (Hamming in "You and Your Research"), which amounts to letting yourself play with spinning-plate problems (Feynman).

> or how to make electroplated metal stick to plastic objects (like radio knobs).

In Surely You're Joking, Mr. Feynman! there is a whole chapter dedicated to his work at an electroplating company.

Beyond the great advice, I'm struck by the supreme kindness and humanity on display in this letter--it is itself great advice on how to treat others, all the more poignant coming from a man with many other wonderful opportunities competing for his time and attention.

> A problem is grand in science if it lies before us unsolved and we see some way for us to make some headway into it. I would advise you to take even simpler, or as you say, humbler, problems until you find some you can really solve easily, no matter how trivial.

Something that I think most novice programmers should take to heart, and something I wish I had known earlier... when you start out, you want to build something big and new, like a video game, or hell, a Rails site that you think will be the next Facebook. Not only is it beyond your ability as a novice, it may not even be a "problem" worth solving, because you don't yet know what's worth solving until you become a bit better at programming. I stopped programming for a while when I couldn't come close to reaching what I thought were my goals... it's been much easier to do it day-to-day by focusing on small steps... and after a while, the big task doesn't seem so hard after all.

Meanwhile, programming has a pretty distinct advantage... even if you spend your time mastering seemingly mundane and trivial things, such as getting better at parsing, function design, or just automating what you've done before, you're not only learning but making yourself more productive at the same time... which is not nothing when you actually begin your grand plan.

Speaking of Feynman and computing and seemingly banal tasks...I've seen only scarce detail of his supervising the "computers" at Los Alamos:


> Richard's interest in computing went back to his days at Los Alamos, where he supervised the "computers," that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970's when his son, Carl, began studying computers at MIT.

It's not something he's famous for, but I wouldn't be surprised if such a task was critical to the success of the researchers...I've gone through both his memoirs and hadn't seen much mention of it though. Anyone else have more details?

Feynman did some interesting work on computers: he used his physics-trained calculus skills to analyze a chip's message-passing router, a discrete system:

> By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

> The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

> Fortunately, he was right. When we put together the chips the machine worked. The first program run on the machine in April of 1985 was Conway's game of Life.
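Hillis's account doesn't give the actual equations, but the flavor of the trick, replacing a discrete random queue with a continuous "average occupancy" obeying a differential equation, can be sketched in a toy model. Everything below is my illustration, not Feynman's router analysis; the parameters and the mean-field closure are invented for the example.

```python
import random

def discrete_queue_sim(arrival_p=0.6, service_p=0.7, steps=20000, seed=1):
    """Simulate an integer-valued queue: each step a message arrives with
    probability arrival_p, and (if the queue is nonempty) one departs with
    probability service_p. Returns the time-averaged queue length."""
    rng = random.Random(seed)
    q, total = 0, 0
    for _ in range(steps):
        if rng.random() < arrival_p:
            q += 1
        if q > 0 and rng.random() < service_p:
            q -= 1
        total += q
    return total / steps

def mean_field_queue(arrival_p=0.6, service_p=0.7, t_end=2000.0, dt=0.01):
    """Continuous caricature of the same queue: treat the occupancy x as a
    differentiable quantity with dx/dt = arrivals - departures, using the
    deliberately crude closure P(queue nonempty) ~ x / (1 + x)."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (arrival_p - service_p * x / (1.0 + x))
    return x  # settles at the fixed point where arrivals balance departures
```

The two models disagree quantitatively (the closure is crude), but both address the qualitative question the router analysis answered: how much buffering does a steady message flow actually need?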


There are several pages on his work on calculating stuff in the James Gleick biography.

He goes into detail about the topic in the chapter "Los Alamos from Below" in Surely You're Joking, Mr. Feynman! The rest of the book is a fantastic read as well. He talks about many interesting experiments he did (even ones involving ants), pranks he's done with safes, how to pick up women, etc.

You might be interested in this lecture that Feynman gave at a meeting called "Idiosyncratic Thinking":


I wonder if the Feynman diagram is the supreme data visualization.

Edward Tufte seems to both specialize in data visualization and be obsessed by Feynman diagrams as art... http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0...

Well, I personally agree, but I don't think everything in this matter is as objective as Feynman wants it to be seen, judging by the tone of his letter. I could definitely imagine the opposite opinions: "you need to listen to more experienced teachers to know which problems to solve and which not to", or "you need to think big", and so on...

Here is a well-illustrated conversation on the respective roles of questions and answers:


What happened to Koichi Mano?

Al Hibbs was my Physics 1 prof!

"No problem is too small or too trivial if we can really do something about it."

This part tells us to keep solving whatever comes our way. In today's world, most successful start-ups didn't succeed with their first idea, but rather with an iteration of it.

Good advice! Thanks!

I hear this a lot, but I'm not convinced that most successful companies pivot. Is there any data to back this up?

Microsoft made a huge pivot fairly early. Apple did a small pivot that was arguably a course correction. Nintendo did a major pivot later on, which I think is more telling.

The odds that your first idea is perfect are small. Recognizing when you have a better option is important.

Like _craft asked, is there data that most successful companies pivot? Mentioning 2-3 successful pivots does give great insight into whether pivoting successfully is common. :)

(But as long as we're listing successful pivots, Intel is one of the most famous pivots out there.)

This is really good. I really needed to read this.

I only clicked this because I misread it as an RMS blog post.
