By "important", I had also mistakenly thought that he meant "grandiose". But in fact, he later defines "important problems" as "problems that you have a reasonable angle of attack to solve".
Some problems just aren't yet ripe until all the pieces needed to solve them come together. Which makes "Why now?" a very good question to ask when contemplating what problem is important.
When someone writes about how Silicon Valley is working on worthless problems, like photo-sharing apps, or the Yo app, or Twitter (or, back when Groupon was on the rise, the seemingly endless daily-deal sites), it makes me think of this. I wonder if they take into account that some problems they consider worthwhile simply aren't ripe to be solved yet.
The dirty little secret of the valley is that (most) people don't care about this distinction. There are plenty of people made rich by AOL buyouts and MySpace acquisitions - and they are just as rich even though their creations didn't last. There is a tacit acceptance of the truth that technology is fashion, not science.
Why is this distinction important to your point? Because "ripe to be solved" takes on a very different character in the fashion technology sense and the science technology sense. Indeed, I'm not sure it can even be applied to fashion technology like "Yo" since the elements of success are entirely self-referential, as with all fashionable things. Or do you think "Yo" was really just waiting for TCP/IP and Objective C and the iPhone etc. before it could be "realized"?
Many people focus on technological breakthroughs as the only form of innovation while completely disregarding shifts in user behavior. Some progression in products can only occur when users have changed their behavior and expectations enough for you to leverage it.
We had the technology to make something like Airbnb since the 90's. But I don't think it would have worked in the 90's, because people were still wary of meeting strangers from the internet.
We had the technology to make Groupon since the 90's. But I don't think it would have worked in the 90's, because people were only starting to get used to the idea of paying for stuff online, much less doing it together with strangers.
This is why at some points in time, Silicon Valley focuses on problems that can be solved because user behavior has changed, rather than on problems that can be solved because there's a tech breakthrough--sometimes there are no new tech breakthroughs to be leveraged. That's when you get the Twitters, Groupons, and Yos, and that's ok! It's like filling in the gaps, and fully exploring the ramifications of a new tech.
"When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. And that isn't the way things go. So that is another reason why you find that when you get early recognition it seems to sterilize you. In fact I will give you my favorite quotation of many years. The Institute for Advanced Study in Princeton, in my opinion, has ruined more good scientists than any institution has created, judged by what they did before they came and judged by what they did after. Not that they weren't good afterwards, but they were superb before they got there and were only good afterwards."
Excerpt from "You and Your Research" by Richard Hamming
You can't run forever; that's just how it is.
Edit: that said, I hadn't seen him as active or motivated in years before his latest contract gig.
This note might deserve a little more emphasis. It's routine in history that problems are solved only once the tools become available, which is supported by a stunning list of coinciding discoveries:
Fermat's Last Theorem was an open problem for some 350 years, but was solved within two decades (1993) of the introduction of the Frey curve (1975).
Ultimately, it's the advice to change your frame of reference from the 'sum of human knowledge' to 'what problem can I solve today, immediately', and to ask yourself that question first and find an answer.
That's not only smart but about the wisest thing I ever heard someone say regarding work.
I remember reading how Feynman was unhappy with his work and then chose a seemingly useless but fun and interesting problem to solve: the physics of plate wobbling (if I remember correctly, the relation between the wobble rate and spin rate of a plate that has been tossed into the air).
His colleagues were a bit confused as to why he would do this, but he had fun and his love for physics was rekindled.
Some years later, the mathematics he derived was used when the first satellites were launched and wobbled as they spun, the wobbling not being desirable. Not bad for useless and fun work!
The letter is from an excellent collection of Feynman letters, "Perfectly Reasonable Deviations from the Beaten Track". There's a thoughtful review of the entire collection (by Freeman Dyson) here:
That is very good advice. You could replace "teacher" with "your corporate manager" and "science" with "society", and it becomes relevant to everyone working a normal corporate job or at a start-up.
"Dark pictures, thrones, the stones that pilgrims kiss,
poems that take a thousand years to die
but ape the immortality of this
red label on a little butterfly."
-- Vladimir Nabokov
[edit: Apparently not; I wasn't aware Nabokov wrote in English as well as Russian. Some relevant links:
"On Discovering A Butterfly"
by Vladimir Nabokov May 15, 1943:
[ edit3: sadly behind a paywall ]
A (somewhat inaccurate, but interesting) commentary:
edit2: Almost forgot: clearly the original uses "label" intentionally. Funny how the verse in isolation seems to reference nature more directly (along the lines of Blake's "Tyger! Tyger!"), while clearly that's not the (main/only) intended reading. ]
NW = The interviewer
LA = Leonard Adleman
NW: They say the most creative and challenging part of research is finding the right question to ask. Do you agree with that?
LA: I wouldn't characterize it as the most challenging thing, but it's of critical importance. Sometimes it's not hard to find the "right question". For example, the mathematics literature is full of very important unanswered questions. In this case, the real issue is: Has that question's time come? Have we reached a point where developments in the appropriate area of science give us some chance of breaking the problem? For example, I worked on a famous centuries-old math problem called "Fermat's Last Theorem". I was not 'strong' enough to solve it, but I find some solace in the fact that my intuition that its 'time had come' was right. The problem was finally solved two years ago by Andrew Wiles of Princeton. It was one of the major events in the history of mathematics.
The other side is to generate new questions. That's a funny process. The way I seek to generate new questions is to start to look at whole new fields, like biology, immunology or physics. Since I come from a different field, mathematics, I bring an unusual point-of-view that sometimes allows me to generate questions different from the classical questions in those areas. Like the question of DNA computing.
For the young scientist, this question of choosing the right question to spend your valuable limited intellectual resources on is critical. I often sit for months and do no productive work that anybody can see, because I don't feel I have a good enough question to work on. Rather than take on some lesser question, I would prefer to read a mystery novel. The point is, sometimes it's important to lie fallow for a time waiting for the 'right question' to appear, rather than to engage in uninspiring work and miss the important opportunity when it comes.
But in the end, the real challenge of science is to make progress - to succeed, to contribute knowledge.
NW: Of course, in an academic setting, there's that drive to publish or perish...
LA: Yes, that's a problem, because you have to feed your family. But I always tell my students and junior faculty that they are better off following their inspiration and their hearts in what research they do, that they should always try to take on the most interesting and important problems, that they should not waste their time on little problems just to make another line on a vitae.
My philosophy is that it's important, in a curious way, for scientists to be courageous. Not physically courageous, but courageous in an intellectual way. I believe that by working on extremely hard problems, by being courageous, you may succeed. But even if you fail, you fail gloriously. And you will have learned immense amounts, you will have extended the envelope of what you can do. As a byproduct of failing on a great problem, I have always found that I could solve some lesser but still interesting problems - which then fill your vitae.
Perhaps they would agree that one should have some ambition and do what's important, but Feynman would have you sneak up on it from behind, while keeping the juices flowing with playfulness. I'll go with that.
Regardless, people's ways of focusing differ, and you don't have to reconcile every smart person's personal approach to study and research.
Of all the problems to work on, work on those that have a reasonable angle of attack (Adleman/Hamming), that you have the ability to solve (Feynman). And in addition, make sure you plant little seeds instead of just working on big problems after you've succeeded (Hamming in "You and Your Research"), which amounts to being able to play with spinning-plate problems (Feynman).
In Surely You're Joking, Mr. Feynman! there is a whole chapter dedicated to his work at an electroplating company.
Something that I think most novice programmers should take to heart, and something I wish I had known earlier...when you start out, you want to build something big and new, like a video game, or hell, a Rails site that you think will be the next Facebook clone. Not only is it beyond your ability as a novice, it may not even be a "problem" worth solving, because you don't yet know what's worth solving until you become a bit better at programming. I stopped programming after a while when I couldn't come close to reaching what I thought were my goals...it's been much easier to do it day-to-day by focusing on the small steps...and after a while, the big task doesn't seem hard after all.
Meanwhile, programming has a pretty distinct advantage...even if you spend your time mastering seemingly benign and trivial things, such as being better at parsing, function design, or just automation of what you've done before, you're not only learning, but making yourself more productive at the same time...something that's not nothing as you actually begin your grand plan.
Speaking of Feynman and computing and seemingly banal tasks...I've seen only scarce detail of his supervising the "computers" at Los Alamos:
> Richard's interest in computing went back to his days at Los Alamos, where he supervised the "computers," that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970's when his son, Carl, began studying computers at MIT.
It's not something he's famous for, but I wouldn't be surprised if such a task was critical to the success of the researchers...I've gone through both his memoirs and haven't seen much mention of it, though. Anyone else have more details?
> By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.
> The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.
> Fortunately, he was right. When we put together the chips the machine worked. The first program run on the machine in April of 1985 was Conway's game of Life.
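The contrast between the two styles of analysis can be sketched with a toy model. To be clear, everything below is invented for illustration (the arrival and forwarding probabilities, the smoothing term, and the numbers); it is not Feynman's actual router equations, which the excerpt doesn't reproduce. A discrete analysis steps a buffer one message at a time, while a "continuous" analysis treats the mean occupancy as a differentiable quantity:

```python
import random

def simulate_buffer(p_arrive=0.3, p_forward=0.5, steps=100_000, seed=1):
    """Discrete view: each cycle a message may arrive and, if the
    buffer is nonempty, one may be forwarded. Returns mean occupancy."""
    rng = random.Random(seed)
    occupancy, total = 0, 0
    for _ in range(steps):
        if rng.random() < p_arrive:
            occupancy += 1
        if occupancy > 0 and rng.random() < p_forward:
            occupancy -= 1
        total += occupancy
    return total / steps

def fluid_estimate(p_arrive=0.3, p_forward=0.5, steps=10_000, dt=1.0):
    """Continuous view: treat mean occupancy x as a differentiable
    quantity with dx/dt = arrivals - departures. The x/(1+x) factor is
    an invented smoothing term: a nearly-empty buffer forwards less."""
    x = 0.0
    for _ in range(steps):
        departures = p_forward * (x / (1.0 + x))
        x += dt * (p_arrive - departures)
    return x  # converges to the fixed point of 0.3 = 0.5 * x/(1+x), i.e. x = 1.5
```

Notice that with this crude smoothing term the continuous fixed point (1.5) overshoots the simulated mean; getting the continuous model to faithfully reflect the discrete dynamics is exactly the hard part, and the surprise in the anecdote is that Feynman's version of it turned out to be the more accurate one.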
This part tells us to keep solving whatever comes our way. In today's world, most successful start-ups didn't succeed with their first idea, but rather with an iteration of it.
Good advice! Thanks!
The odds that your first idea is perfect are small. Recognizing when you have a better option is important.
(But as long as we're listing successful pivots, Intel is one of the most famous pivots out there.)