Linus Torvalds Answers Your Questions (slashdot.org)
228 points by tmhedberg on Oct 11, 2012 | 131 comments



  Linus on git:
  ...it wasn't all that pleasant to use for outsiders early 
  on, and it can still be very strange if you come from some 
  traditional SCM, but it really has made my life *so* much 
  better...
So, here he pretty much acknowledges you have to think like Linus to get git like Linus. I remember getting bashed here on HN for saying so, but I don't care and I'll say it again.

From the user's perspective git is not fun to use, at least not like hg or fossil are. Good programs just get the job done, but great programs are fun to use.


You seem to be conflating "unpleasant to use early on" with git as it is today in order to prove your point.

And to be fair, your point is entirely opinion-based anyway. You don't like git (fair enough), but you might be getting bashed because it is a lot of people's tool of choice.

Personally I absolutely love Git.


I'm going to second this, I love using git.


I didn't use git early on, but it's pretty unpleasant to use today as well. It's fine if you learn the right incantations but it breaks almost every unix rule in the book. Literally:

http://www.catb.org/esr/writings/taoup/html/ch01s06.html

There are only a few guidelines there that git obeys. The most obvious one it obeys is "separation".


Reading through the list of unix rules you linked, I find git reasonably obeys them.

For example "Clarity is better than cleverness": Git provides a clear internal structure (Blob DAG). It does not try to provide clever merge algorithms, since "Buying a small increase in performance with a large increase in the complexity and obscurity of your technique is a bad trade".

Or another example "Design programs to be connected with other programs". In contrast to Mercurial, many git commands are separate programs and it is trivial to implement your own in any language.

Arguably git violates "In interface design, always do the least surprising thing", since it does not try to provide a comfy start for svn exiles.



Git has an annoying feature where commands (like 'reset') will do completely different, only vaguely related things based on flags or arguments, instead of having different commands.


Most tools require you to learn an interface to operate them, not the internals. Git is quite the opposite.

If you are learning a set of 'incantations', git will always seem hard to use. But if you take some time to learn how git works on the inside (which is really not that hard), it gets much easier to understand why and how you need to use a certain command.

Some argue that this is why git is bad, but I'd argue that this in the end is a good thing, because it gives you so much more power.

Learning how to use git is an investment, but in my opinion a worthwhile one.


Which rules do you think it breaks?


Well, from your perspective it is not fun to use. I actually really enjoy using git.


Since we (two people who had been using it personally beforehand) introduced git at work, the 15 others, the kind of folks who swear by the mouse and the GUI, enjoy it so much that they voluntarily dropped all forms of GUI/VS2010 integration to do everything git on the command line, every day.


You are making some pretty big jumps in logic there... he's not saying you have to "think like Linus" to get Git, he's just saying it is different enough from traditional SCMs that there is a learning curve.

I use a traditional SCM everyday at work (Perforce) and it only took me an hour or so of reading to get Git and be productive with it. I'm not sure I'd say Git is "fun" to use, but I'm not sure I'd say that any SCM I've ever used is "fun". Getting the job done is more important than fun to me.


I wouldn't be surprised if many people enjoy using git because of its complexity. That's one reason why I consider it fun to use.


I'm not a software developer by trade, but I do tinker. I thought I understood pointers fine, but maybe I don't. Can someone who does expand on Linus' comment regarding deleting an entry from a singly linked list?

I understood the example he lamented, and probably would have done exactly that. I didn't understand his pointer to a pointer example though.


Instead of pointing to a node, then dereferencing the node's 'next' field, you can point directly to the 'next' field of a node (which is itself a pointer) and update it directly, as Linus shows. This eliminates the special handling of the list head.

  // Trying to remember C syntax
  prev_link ** Node;
  // **prev_link == previous_node->next
  for (prev_link = &list_head; cur_entry = **prev_link; cur_entry != NULL) {
    if (must_remove(cur_entry)) {
        **prev_link = cur_entry->next; // exclude cur_entry
        break;
    }
  }


You have your syntax reversed; it would be Node **prev_link, and cur_entry would be a Node *, not a Node. Also, your loop condition goes in the middle and the per-stage operation goes last.

I still don't get what you are doing here, since prev_link is never updated; it always points to list_head. So you would be changing the first node to point to the node after the one you are removing, which means you remove all nodes after the first up through the to-be-removed node, since the head now points past them.

Here's my stabs at it:

    node **prevNode, *currNode;
    // Doesn't work, curr becomes null and can't reference next
    for(prevNode = &list_head, currNode = *prevNode; 
        currNode != NULL; 
        currNode = *prevNode, prevNode = &currNode->next)
        if(must_remove(*currNode)) {
            *prevNode = currNode->next;
            *prevNode = NULL;
        }
The only other way I could think to remove a singly linked list element is the dumb way:

    node *curr = list_head, *prev = NULL;
    for(; curr != NULL; prev = curr, curr = curr->next) {
        if(must_remove(*curr)) {
            if(prev == NULL)
                list_head = curr->next;
            else
                prev->next = curr->next;
            break;
        }
    }


You are setting currNode and prevNode in the wrong order in your loop incrementer. Also, the variable name "prevNode" is misleading because it is not the previous node, it's the link from the previous node to the current node. Let's change it to "prevLink".

For the first iteration through your loop,

    prevLink = &list_head
    currNode = list_head
Second,

    prevLink = &(list_head->next)
    currNode = list_head  /* (again) */
Third,

    prevLink = &(list_head->next->next)
    currNode = list_head->next
In other words, from the second iteration on, prevLink is actually the link out of the current node, &(currNode->next), rather than the link into it.

Here's my take:

    typedef struct node {
        /* ...some fields... */
        struct node *next;
    } Node;
    
    void foobar(Node *node_list)
    {
        Node **prev_link;
        Node *current_node;
        Node *list_head = node_list;
        
        for (current_node = list_head, prev_link = &list_head;
            current_node != NULL;
            prev_link = &(current_node->next), current_node = *prev_link)
        {
            /* ...do something... */
            if (must_remove(current_node)) {
                *prev_link = current_node->next;
                free(current_node);
            }
        }
    }
This code compiles with no warnings, so it must be perfect. :)

The beauty of it is that by doing it this way, you don't need to special-case the list head at all. You could do something like foobar(&(actual_list_head->next)) and have foobar() run on every element except the first one, and it would just work.


This almost works, but the caller will not know when foobar() has deleted the list head, and you update prev_link to point into current_node even if you have just freed that node.

Did you mean:

    void foobar(Node **prev_link)
    {
        Node *current_node;
        
        while((current_node = *prev_link) != NULL) {
            /* ...do something... */
            if (must_remove(current_node)) {
                *prev_link = current_node->next;
                free(current_node);
            } else {
                prev_link = &(current_node->next);
            }
        }
    }
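For anyone who wants to see it run end to end, here's a minimal self-contained harness (the int value field and the even-number predicate are made up for illustration; foobar() is repeated verbatim from above):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct node {
        int value;               /* made-up payload field for the demo */
        struct node *next;
    } Node;

    static int must_remove(Node *n) { return n->value % 2 == 0; }

    static void foobar(Node **prev_link)  /* same as above */
    {
        Node *current_node;
        while ((current_node = *prev_link) != NULL) {
            if (must_remove(current_node)) {
                *prev_link = current_node->next;
                free(current_node);
            } else {
                prev_link = &(current_node->next);
            }
        }
    }

    int main(void)
    {
        Node *head = NULL;
        for (int v = 1; v <= 5; v++) {     /* build the list 5 -> 4 -> 3 -> 2 -> 1 */
            Node *n = malloc(sizeof *n);
            n->value = v;
            n->next = head;
            head = n;
        }
        foobar(&head);                     /* evens get unlinked and freed */
        for (Node *n = head; n != NULL; n = n->next)
            printf("%d ", n->value);       /* prints: 5 3 1 */
        printf("\n");
        return 0;
    }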


There you go. So for future reference, if anyone wonders how many HN commenters it takes to iterate over a linked list correctly? The answer is four. :P


Hey, I had the naive implementation at the bottom of my post. I had a working one. It just sucked.


....and this is a flaw with Linus's idea. It makes it harder for mortals to build software. Fine if your team is all C experts; otherwise it's not a route to reliable software.


That makes more sense: change the next pointer from the previous node. Thanks!


When you start the list traversal, you would have something like

    Node **link = &list_head;
and as you traverse, you would do

    link = &(entry->next)
Then, when you find the entry that you want to delete, link points to either (a) list_head if you are deleting the first entry or (b) the next pointer of the previous entry. Either way, doing

    *link = entry->next
does the trick.

This way, you save on the conditional branch.


That traversal doesn't traverse. It just skips the list ahead to entry->next and would keep resetting.

You could do something like this instead:

  link = &((*link)->next);


Yes, and then to unlink the entry it would be

  *link = (*link)->next;
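Putting the corrected pieces together, the whole idiom is a short sketch like this (assuming a Node *list_head and the thread's must_remove() predicate; the unlinked node leaks unless you also free it):

  Node **link = &list_head;
  while (*link != NULL) {
      if (must_remove(*link))
          *link = (*link)->next;     /* unlink; the head needs no special case */
      else
          link = &((*link)->next);   /* advance to the next link */
  }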


Actually, I would vote for the simpler one. The branch predictor should take care of the optimization pretty well.


His statements about instruction sets are interesting: basically that people get excited about low-level instructions that exploit details of processor architecture, but what would really improve performance are high-level instructions that allow software to defer to optimal implementations provided by the processor. Makes sense. Very little software can know the details of the processor architecture on which it will run, JITed code being a major exception.


To what extent does something like the JRE JIT code know about processor specific details?

You download a single x86 or x86_64 java binary from Oracle; I guess they could bake in processor specific optimizations for as many processors as they can think of at the time but I think even then the benefits of high level instructions that Linus lays out would be present. At the very least it would simplify what has to happen when a new processor comes out.


To what extent does something like the JRE JIT code know about processor specific details?

I honestly don't know. The most obvious thing I can think of is that it might check the L1 cache size, which I believe can be discovered even if the processor family is a new one that the JRE doesn't recognize.


cat /proc/cpuinfo. That is what the kernel says about the cpu, which is about as much info as you are going to get without some database lookup on specification data given a model number.
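Beyond /proc/cpuinfo, a program can also ask the CPU directly at runtime via the CPUID instruction; as a sketch, newer GCC exposes this through x86-only builtins (the builtins are a GCC detail, not something from the interview):

    #include <stdio.h>

    int main(void)
    {
        /* Ask the CPU itself which features it supports (GCC 4.8+, x86). */
        __builtin_cpu_init();
        printf("sse4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
        printf("avx:    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
        return 0;
    }

This is roughly what a JIT can do to pick an optimized code path without any model-number database.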


Which is funny because that is the reason people choose Java and ML over C: to let the compiler do the optimization. But that doesn't always work.


I don't see why you imply there would be no optimizations with C; a compiler like gcc does many optimizations as well with a flag like -O3. With Java you do have JIT, of course.


I want to really re-iterate how great Junio Hamano has been as a git maintainer, and I haven't had to worry about git development for the last five years or so. Junio has been an exemplary maintainer, and shown great taste

Having listened in on the git developers mailing list for the last few years, occasionally getting involved, I can re-iterate how true this is.

Sure, git development uses some anachronistic-feeling conventions (like mailed patch-sets) but these all have reason behind them and exist because they work for the people who are working on git.

I don't know how many of the processes are held over from before Junio got involved; however, both those processes and how Junio handles the entire thing are a great case study in how open source can be done. Some entry points for interested people:

https://github.com/gitster/git/blob/master/Documentation/how...

https://github.com/gitster/git/blob/master/Documentation/Sub...

https://github.com/gitster/git/blob/master/Documentation/Cod...


I found this interesting: " [...] I think that rotating storage is going the way of the dodo (or the tape). [...] The latencies of rotational storage are horrendous, and I personally refuse to use a machine that has those nasty platters of spinning rust in them. [...] "

Is this about SSD vs. HDD? Does he favor SSD over HDD for Desktops?

Personally, my 99%-of-the-time computer is a MacBook Air with a 256 GB SSD, and I love how fast it is and that I don't have to worry about breaking an HDD. But having so little disk space is definitely a limitation for me. I would have expected that Linus has a huge HDD in his desktop computer, maybe in combination with an SSD for speedup.

Apart from price, isn't lifetime still a huge problem for SSDs? I was expecting HDDs to die out for laptops but to be around for a long time on desktops.


> I would have expected that Linus has a huge HDD in his desktop computer, maybe in combination with an SSD for speedup.

What for? The complete Linux git repository is less than one gig.

    /dev/sda1       141G   93G   42G  70% /
That's me. If you mostly just do coding, you don't need gigs and gigs of space - for most things, at least. 256 GB would be more than enough for the immediate future, so I'll definitely put an SSD in my next computer.


Yeah, that was my line of thinking, too.

Hasn't turned out to be true for me. The 256 GB is quite limiting.

Things that eat up a lot of space on my SSD:

- eMail: dozens of GB, but I want to be able to read them offline

- virtual machine: I run Windows 7 with 50 GB of allocated space

- clumsy programs which eat a lot of disk space, like Visual Studio

- videos that I record from experiments

- recorded data from said experiments (when you record at 1 kHz, space goes up fast)

- mp3 music


So, basically things that gobble space are:

* Audio and video files. That's to be expected.

* Windows.

My guess is that Linus doesn't have much to do with the latter, and probably doesn't use his work computer for the former.


He explicitly said to put media on a NAS disk.


What's a desktop? Something like a server?

;-)


Anybody else notice the date animation and the mouse-over text for it?


Off topic: for Slashdot's 15th anniversary, every day in the month of October there is a different community-submitted logo on Slashdot!

All these interviews are being done for the 15th anniversary as well, with famous active Slashdot users... Today Linus, last week Wozniak: http://apple.slashdot.org/story/12/10/01/1527257/ask-steve-w...


FTA:

> People were apparently surprised by me saying that copyrights had problems too. I don't understand why people were that surprised, but I understand even less why people then thought that "copyrights have problems" would imply "copyright protection should be abolished". The second doesn't follow at all.

> Quite frankly, there are a lot of f*cking morons on the internet.

I have to admit that I think pretty much the same thing whenever I read discussions about patent/copyright law on the Internet.


Btw, it's not just microkernels. Any time you have "one overriding idea", and push your idea as a superior ideology, you're going to be wrong. Microkernels had one such ideology, there have been others. It's all BS. The fact is, reality is complicated, and not amenable to the "one large idea" model of problem solving. The only way that problems get solved in real life is with a lot of hard work on getting the details right. Not by some over-arching ideology that somehow magically makes things work.

Yes, well, Torvalds is here disputing one of the main drivers of science, the motive that brought us Newtonian physics, quantum mechanics, you know, the very things he depends on via his use of microprocessor technology.

If your "one overriding idea" is wrong, then certainly that'll get you into trouble, but as history demonstrates, they aren't always wrong. They are always hard to come up with, and often when you come up with them you face an uphill battle with people who want to maintain the status quo and who can't conceive of "one big idea." But eventually those ideas are the ones that cause tectonic shifts in human progress.

His reference to Edison is apt. It wasn't Edison who brought us the AC motor that literally revolutionized power distribution and the whole industrialized world. It was a man who thought big, who had "one overriding idea": Tesla.


No, even in science he's right. You can have a model that is fundamentally correct, but useless in working out the particulars of a situation.

There's a Feynman quote that "every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics."

Often you find an idea that seems more fundamental than what you had before. General relativity supersedes Newton's model of gravity. But if you want to calculate ballistic motion across the surface of the earth, you're not going to reach for general relativity -- Newton's is the more useful model there.


Physicists' job is to come up with "one overriding idea". But engineers know "one overriding idea" never works except in theory, and that the devil is in the details.

Linus is not saying that microkernels are architecturally bad. He concedes that it is an architecturally superior idea but it is not a panacea when it comes to OS design. It merely shifts inefficiency outside the kernel.


> Linus is not saying that microkernels are architecturally bad. He concedes that it is an architecturally superior idea

This is what he actually said on Slashdot (my emphasis added):

> I think microkernels are stupid. They push the problem space into communication, which is actually a much bigger and fundamental problem than the small problem they are purporting to fix. They also lead to horrible extra complexity as you then have to fight the microkernel model, and make up new ways to avoid the extra communication latencies etc. Hurd is a great example of this kind of suckiness, where people made up whole new memory mapping models just because the normal "just make a quick system call within the same context" model had been broken by the microkernel model.


How is that different? A microkernel can be superior in theory, while still being stupid in practice. In theory you can ignore the realities of communication; in practice you can't.

That looks like the definition of a theoretically superior idea to me.


I really like how Plan 9 solved communication through sockets and files (which is how the OS doesn't care if any part of the system is remote or local). Just sayin, it has been done pretty elegantly.


You and Linus are working off different definitions of "solved". Sure they did it, he never said it can't be done.

But does it run efficiently and quickly?


It isn't slow enough for me to notice while tinkering with it. Of course, I'm running it in a KVM VM and have 8 cores at 3.5 GHz going on. It would be hard for a terminal environment to be slow on that.

I'd argue it's not so slow as to outweigh the benefits of userspace everything, the ability to have a completely distributed system, and the wonderful interfaces with the kernel. I much like how Plan 9 skips the whole RPC and syscall nonsense and does almost straight file I/O and socket communication for everything.

It is like Java vs. C: it isn't so slow that the niceties aren't worth it in some circumstances. High-performance computing might like the Linux kernel, but as a developer who wants working applications that are really easy to interface with OS features, Plan 9 has that nailed.


Terminal environment? You're gauging the performance of the kernel by interactive typing performance?

I don't know where to start.


It isn't slow enough for me to notice while tinkering with it. Of course, I'm running it in a KVM VM and have 8 cores at 3.5 GHz going on. It would be hard for a terminal environment to be slow on that.


That's when you hire a systems engineer!


But engineers know "one overriding idea" never works except in theory

On the contrary, good engineers are not dogmatists. They're open to new ideas and strive for simplicity, which means striving for unified approaches based on empirical reasoning. See Newton.


Your continued comparison of physics to software engineering is horribly flawed.

Software engineering is about creating something for humans while working with other humans. It is a science that is primarily constrained by human nature.

Physics is about describing reality. It is a science that is primarily constrained by reality.

In software engineering, it is always better to choose a flawed technical solution that leads to less friction among humans. Because less friction among humans is the entire point of software engineering.


I am genuinely curious how anything technically superior ever gets a chance in this concocted world of yours. Clearly, you prize "less friction among humans" above all else. So given a choice between a clearly flawed technical solution (say, building an operating system in PHP) and alienating other humans (specifically, an office full of PHP experts), we should build the OS in PHP because that leads to less friction among humans... yeah? Let me know how that goes...

Back on point: in grad school, we read the Tanenbaum-Torvalds debates in OS class to get an idea of the current state of research. Tanenbaum's point, among other things, was that microkernels were an unsolved problem. Granted, they pushed the problem space into communication, and you had to contend with latencies... but that was the whole prize: to solve this brand new problem space. Whereas big bad monolithic OSes were a solved problem. We knew they could be done. As far as a CS researcher was concerned, there was no meat in that space... it was mostly implementation detail and optimizing. Whereas microkernels, despite being a cleaner, more elegant approach to OS design, clearly had tons of issues to be solved, and whoever took a stab at them would push the state of the art forward. That is the point => to find a superior technical solution, not minimizing friction among humans.


> I am genuinely curious how anything technically superior ever gets a chance in this concocted world of yours.

If it actually works, sure. Unfortunately, due to uncontrolled hipsterism most technologies touted as "technically superior" don't actually work. It's very easy to throw up a demo or a benchmark. It's very easy to write a paper that shows how amazing what you wrote happens to be. Integrating something into a production stack that doesn't cause more problems than it solves is really hard. And it very rarely happens.

So yeah, while you're still at the office at midnight trying to track down what exactly in your stack exploded ... I'm sleeping ... because I decided to go with something that's proven, rather than the latest and greatest.

> we should build the OS in PHP

Bonus points for trolling.


> Back on point: in grad school, we read the Tanenbaum-Torvalds debates in OS class to get an idea of the current state of research.

> ... tons of issues to be solved, and whoever took a stab at them would push the state of the art forward. That is the point => to find a superior technical solution, not minimizing friction among humans.

You are talking from the perspective of computer science research, whereas he is talking about software engineering in practice. There's a difference there, imho.


> microkernels were an unsolved problem

Wrong. The "problem" is designing an operating system that is reliable, fast, fault-tolerant, user-friendly, maintainable, etc etc.

"microkernels" are just one example of a proposed solution to solve this problem - a design that despite the fanatical support of rabid academics like Tenenbaum has been shown to be a poor choice due to the complexity involved in implementation.


I'm glad to see someone getting the point, which is that we as scientists strive for simplified, unified explanations and systems, and that's a good thing.

Linux is nice and practical, but try working with it as a systems programmer sometime. It's full of hacks and arbitrary complexity and all manner of nonsense that persists year after year.


I hate all the existing OSes for this reason. Unfortunately, it's easier to crowd source a big pile of dung (sure you can help - just pick up a shovel!) than it is to build something elegant.


The problem is that even the most beautiful and simple solutions end up turning into a big pile of dung to actually be useful in the real world. You get to the 80% point and then all of a sudden you realize that to do the next thing you need one little hack here. Now you get to 85% and you realize you need one more little hack, then at 88% you need another little hack to make it speedy, ...


The problem is that even the most beautiful and simple solutions end up turning into a big pile of dung

Until they don't. You don't know who is going to come up with what next.


Way to hack off the end of my sentence which is the most important part.


The reason for this appears to be that elegant solutions do not survive contact with the real world.

When someone comes to you with an inelegant hack that makes her database benchmark run 20% faster, you can either accept it and sacrifice some of your elegance, or you can reject it on elegance grounds and sacrifice some of your relevance.


Interestingly, PostgreSQL refuses to support query plan hints, a very popular hack in the Oracle world, because they insist that if you can write a better query plan than the planner, you should teach the planner how to do it instead. Elegance in the long term over pragmatism of the moment.


Then write something better.


Great comment, basically a restatement of Christopher Alexander's philosophy of architecture, as applied to software. This to me is where top-down thinking like Design Patterns sometimes goes wrong; the GoF and related literature never really emphasized, or adopted at a fundamental level, Alexander's most essential insight: that the discovery of patterns should be driven from the bottom up, from the essential goal of expressing how human beings live in buildings (software).

I like and use patterns, but rarely according to 'textbook' implementations, and I always felt that the patterns literature never went deep enough into Alexander's fundamental principles, which I think you've paraphrased so well here.

Less friction among humans is the entire point of software engineering.

Should be on a banner over every CS department and software publishing house in the world.


And... Newton was also wrong. His theories ended up not "overarching" at all, but flawed for aspects of reality that he didn't know about/didn't think about/had little data for.

CS has lots of similarities with physics, yes. But the devil is in the differences.

Think of it this way:

If Newton had been trying to solve problems that included objects moving near the speed of light, not caring for what that would imply, he would have failed miserably.

Computer scientists were trying to use microkernels to solve problems that required performance, without caring for performance, and failed miserably.

Microkernel research was useful; Linux actually implements a lot of concepts devised to improve microkernel performance. It just happens that these concepts work even better when applied to a monolithic/modular kernel. Tough luck for microkernelists...


Your continued comparison of physics to software engineering is horribly flawed.

Your contrast of science and engineering is horribly naive.

In science, we use Ockham's Razor when explaining phenomena; in engineering, we use it when implementing requirements. Both follow the same logical, empirical process, and both strive for the same result: as simple as possible, but no simpler.


Occam's Razor says that we should prefer simpler explanations to more complex ones with the same explanatory power. That is, we should prefer explanations which don't raise more questions than they answer.

This has nothing to do with the engineering 'KISS' principle, which desires simpler architectures because they are easier to understand and thereby to debug and maintain. The point is to avoid astronaut architecture, which is what you get when you chase theoretical ideas rather than practical solutions. Linus's whole point is that microkernels are a prime example of astronaut architecture.


There is nothing astronaut-y or theoretical about microkernels. For example, QNX is a fairly successful, practical microkernel; it's what your BlackBerry runs on. If anything, microkernels are one of the most practical ways to approach OS design. You strip the kernel to its bare minimum and do everything else in usermode via IPC. It just so turns out in practice that the performance cost of IPC is much higher than making a direct syscall in a monolithic kernel, but that is just today's state of the art. As IPC performance improves & monolithic kernel KLOCs explode, you will see microkernels making a strong comeback, especially on mobile. Wait and see.


"As IPC performance improves & monolithic kernel KLOCs explode"

1991 called. They told you to keep waiting.

If you have to write way more code to hack around the message passing performance, you haven't simplified anything.


Having to hack up all your code to perform appropriately in the context of a message-passing system is not simpler than a monolithic kernel. That's the rub.

If performance were not a concern, message-passing could be simpler.


If he had only criticized the microkernel and not overarching ideas in general, then I would not have criticized him. But he takes a single example of an allegedly failed general approach, and then takes a wild leap to the conclusion that all such methodologies are hopeless to try.

The irony is that it's Torvalds who is leaping to wild and unjustified generalizations.


I read it as, if you have a methodology/ideology as your goal rather than an actual goal, you'll fall short of people who keep their eyes on the prize. I guess it could be read multiple ways.


If he'd said it like that, I wouldn't have taken exception to his statement. I do agree with him that ideas for their own sake are a bad idea, especially in an engineering context.


Are you a scientist or an engineer? You said "we" for both.

They do not follow the same logical processes. Science begins with observations and uses inductive reasoning to create theories. Engineering starts with formal rules and proven facts, and uses deductive reasoning to prove the correctness of a particular solution.


side comment to the q, not an answer (as the q is not for me): isn't engineering about creating a solution that mostly works, based on mostly formal rules and mostly proven facts, testing it, starting to use it in production, and then MAYBE, if you have the need and/or the resources, actually formally proving that it will keep working in most cases, or discovering the cases where it will not and adding some hacks for them?

(there are actually 2 main approaches to engineering: the science/proof/theory-based one, and the empirical one that also uses science/proofs/theory, but as scaffolding and as a compass so we don't waste resources trying to do something "really" impossible)


Occam's razor is merely a heuristic, not a part of the scientific method. Its usefulness is debated in philosophy of science, and also in a practical sense in the context of machine learning; the conclusion there is that it could very well be false.


Cf. Domingos (1999):

"Occam’s razor has been interpreted in two quite different ways. The first interpretation (simplicity is a goal in itself) is essentially correct, but is at heart a preference for more comprehensible models. The second interpretation (simplicity leads to greater accuracy) is much more problematic. A critical review of the theoretical arguments for and against it shows that it is unfounded as a universal principle, and demonstrably false. A review of empirical evidence shows that it also fails as a practical heuristic."

http://reference.kfupm.edu.sa/content/r/o/the_role_of_occam_...


See Newton

Are you trying to imply that Newton was somehow an effective software engineer?


"One overriding idea" is a type of dogma. Working towards simplicity is a good thing, but being dogmatic about an architectural principle is not.


What would happen if Newton designed your spaceship? It would miss its target, because his simple idea was not precise enough.


I disagree with you here, and I think you've understood his answer differently than I have. To me, it seems that Linus is talking about this "one large idea" as an ideology that's set a priori.

In the case of Newton, he was observing physics, and then created a model around what he observed. He invented calculus to solve a problem. He wasn't trying to espouse an ideology--he was simply trying to understand the world around him. He happened to be brilliant, so he ended up doing a lot of seminal work.

To Linus's point, "reality is complicated." Complex things cannot be abstracted away into incredibly simple things. At some point, you need to solve the complex problem. Regardless of where that problem may be. And that's what I took away from his answer.

I honestly don't understand why you brought up Newton and physics as an antithesis of this idea. Physics (and microprocessors) are immensely complex things that have not been simplified as time has gone on. We've built upon works of our past and improved our understanding and abilities, but it's still remarkably complex.


> At some point, you need to solve the complex problem.

This is a good point that I see ignored many, many times by people trying to push a simple system.

One of the worst examples are the people who think files are bad, and that all state needs to be preserved all the time. They claim that the problem of determining what to save is not worth solving because it doesn't exist in their systems.

Needless to say, they are wrong. I will not redesign my workflow to fit their tools, and they're amazingly presumptuous to even consider asking anyone to. Just because they cannot solve a problem doesn't mean it isn't actually a problem, and the fact they consider differences of opinions to be technical problems is diagnostic of their whole worldview.


I would agree with Linus that we should not have an a priori ideology, but I think his argument carves off quite a bit more than that: he's against unified architectures per se, not only ones derived a priori.


Linus is not a scientist. He belongs to the most hardcore, practical, utilitarian guild of engineers. He takes what visionaries come up with and actually makes it work for all of us. He's the guy Tesla should've hired to bring Wardenclyffe project to completion.


Even among scientists managed complexity is the rule, not single blinding insights. Holding a handful of core physics theories up as examples of the way science and engineering "should be done" ignores a staggering amount of important work (like, the whole field of biochemistry for just one example) which doesn't meet that arbitrary aesthetic requirement.


Yes, certainly Linus is admirable in respects, but he's wrong to dogmatically think that his approach is the "one overriding idea" of how you should approach engineering.


No, scientists come from the opposite direction: observing things, then drawing conclusions and connecting the dots. Pseudosciences like homeopathy are backwards. They pull "universal truths" out of thin air, like the "law of similars", and after that they don't mind even if experiments and reality don't support these theories.


The direction is not the opposite for science. In Popper's philosophy of science, you form a theory beforehand and specify how it can be falsified; only then do you do the experiment. Otherwise you could obtain some data and provide an ad-hoc explanation, without knowing whether it really explains anything. It's related to the Texas sharpshooter fallacy, where you shoot a gun and only afterwards draw the bull's eye around your shots. Of course, practice differs from this ideal: you do get inspired to make certain theories by data (context of discovery), but to make a proper theory you need to make predictions and specify what would disprove the theory (context of justification).


You can't just discard observation-based hypothesis generation as irrelevant. It is a key part of the process.


I don't, I say it's part of the context of discovery (the way you come up with new hypotheses). On the other hand, you can't overlook the problem of induction, pure empiricism doesn't work. You need some theoretical framework before you can try to verify your hypotheses.


A true scientist seeks an integrated understanding. Newton related the objects that fall on Earth to the motion of the planets. One big sweeping idea for a wide range of phenomena. That's science.


That's the science you like. There's plenty of science without sweeping ideas. There will likely always be.


> Torvalds is here disputing one of the main drivers of science

For all I know, he may also be disputing one of the main drivers of cheese making, but that's not what he was asked about. He was commenting on a specific technical and organisational challenge, where he stated that a pragmatic approach trumps ideology. This may be true/false in other domains but he wasn't addressing them.


Science isn't about "one overriding idea". It's about "well, this is the best we have so far".


It's about striving to integrate everything that can be integrated into a unified system -- which is precisely what Torvalds opposes.


Seems to me that you're choosing an interpretation of those words which makes Linus look foolish, when an interpretation that doesn't make him look foolish fits the words just as well.


His remarks are foolish, yes. If you want to construe that as he is foolish, that's your own personal leap.


"His remarks are foolish, yes."

I did not say this. Please don't imply that I did.


A unified science is a nice idea in theory but it's not what happens in practice. No scientist would believe that you can somehow combine results from chemistry and psychology, so in effect you have many separate systems. There's not one big idea behind the various fields of science, merely a common method.


Striving, yes. Rather than shoehorning, which is what Linus appears to oppose.


> If your "one overriding idea" is wrong, then certainly that'll get you into trouble, but as history demonstrates, they aren't always wrong.

I don't think that even Linus is opposed to One Big Idea when it works. Take his Git, for instance. At the core, it's pretty much just this:

1. File data goes in blobs, named by their checksum.

2. Directories are trees that have a bunch of name-checksum pairs for files, and references to other trees. Trees are named by their checksum too.

3. Commits are named by their checksum too. (Notice a theme?) They refer to their toplevel tree and their parent commits: zero for a root commit, one for a normal commit, or more for a merge.

Everything else is built on top of that. Making a distributed VCS on top of that was just adding features to get commits (and the objects at which they point) between machines. Tags are just named pointers to a commit. Branches are just tags that get reassigned when you make a new commit to the tip of a branch.
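As a rough sketch of that data model in C (illustrative types only, not git's actual structs):

    #include <stddef.h>

    /* Every object is named by the checksum of its contents. */
    typedef struct { unsigned char bytes[20]; } Sha1;

    typedef struct {          /* blob: raw file contents */
        size_t size;
        unsigned char *data;
    } Blob;

    typedef struct {          /* one tree entry: a name-checksum pair */
        const char *name;
        Sha1 object;          /* a blob (file) or another tree (subdirectory) */
    } TreeEntry;

    typedef struct {          /* tree: a directory, itself named by checksum */
        size_t n_entries;
        TreeEntry *entries;
    } Tree;

    typedef struct {          /* commit: toplevel tree + parent commits */
        Sha1 tree;
        size_t n_parents;     /* 0 = root, 1 = normal, 2+ = merge */
        Sha1 *parents;
        const char *message;
    } Commit;

    int main(void)
    {
        Commit root = { .tree = {{0}}, .n_parents = 0, .parents = NULL,
                        .message = "initial commit" };
        return (int)root.n_parents;   /* 0: a root commit has no parents */
    }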


I tend to think that Linus's opinions work well in the software world. And with that limitation, he's right. And he's right because we software people haven't yet found our own Unified Theory, if you will, to adhere to. In the meantime, "[t]he only way that problems get solved in real life is with a lot of hard work on getting the details right."


One of the reasons his methodology seems to work is that that's the only one that's allowed to work on that scale. Politics is definitely an issue, and politics is driven by how individual people think on average, and if how they think on average just isn't very good, then only the pragmatic approach can "work". That's not a problem with a system-building approach per se, it's a problem with it given the current state of intellectual development in the culture at large. People prefer a smattering of contradictions to a coherent truth.


The laws of physics don't change (probably) - a good (and correct) model of the universe will stand the test of time.

The solutions engineers devise to solve real-world problems will change as the underlying technology, science and society changes - what was a good solution at one point time may not be a good solution today - what is a good solution for one set of requirements and constraints may be an appallingly bad solution given a different set of requirements and constraints.

Oh, and this attempt to conflate the fields of engineering (devising solutions to problems) with science (devising models to explain observations) is so misguided and plain wrong I can only assume you are deliberately trolling. And given this link points back to Slashdot, I can only guess there was a bit of leakage back in the other direction...


What was Tesla's "one overriding idea" in your opinion?

The major pursuit of his life was wireless power over great distances which was (and is) wildly impractical.


Alternating current, or perhaps more precisely, 'resonance' in the form of AC and radio waves.


It's not about "one overriding idea", it's about having a fundamental, unified understanding. Edison was about randomly trying things and seeing what works; Tesla was about seeking fundamental understanding and predicting what will work.

Picking on Tesla for some error given what he did with the AC motor and other projects is very wrongheaded. You're depending on Tesla right now in many ways. Obviously he was wildly successful; Edison's contributions have all fallen by the wayside.


Edison's contributions have all fallen by the wayside

I know it's kind of trendy right now to deify Tesla and demonize Edison, but... really?


Don't pretend to know that I said what I did because it was a "trend." It's a fact that Edison made no fundamental contribution or discovery. He was a very productive hack. Tesla's contributions are lasting and revolutionary.


You just snuck "fundamental" in there. Show me a great anything that is all foundation. Foundation is where you start, not stop.


>It was a man who thought big, who had "one overriding idea": Tesla.

>It's not about "one overriding idea",

Make up your mind.


So, you're more interested in picking petty nits in phrasing than grasping the overall point. It was Linus's phrase, and he was wrong to phrase what he was actually talking about that way (no one ever uses "one overriding idea" in every sphere whatsoever).


These decisions most often come down to a set of tradeoffs and expectations. I don't think anybody seriously claims microkernels are the best idea for all scenarios. However, there are advantages over monolithic kernels that may be more important than the tradeoffs in certain cases.


I think you're grossly misconstruing his point -- that you shouldn't let architectural ideology or a preconceived notion of what the best solution to your problem is get in the way of shipping good software.

The Hurd project had an important problem to solve -- there was all this nice free software for people, but they needed a kernel to run it on or they wouldn't have a complete free operating system. Their #1, overarching priority should have been to deliver a stable, fast kernel to users, no matter what design it eventually ended up with. However, the project placed too much emphasis on the buzzword of the day, the microkernel, to the point that it got in the way of the project's most important goals. You almost can't fault the project for drinking the microkernel koolaid in its early days, but it likely became clear very quickly to them that it was going to take a significant amount of time and work to get a usable microkernel running. Rather than do the best thing for the free software movement or the users, scrap the microkernel and try to get a high quality "monolithic" kernel out there as fast as possible, the team basically decided to create an inferior product in stubborn attachment to their technological ideology.

That's the kicker right there -- like a whole lot of open source software, the Hurd project wasn't really advancing the state of the art; they were doing a free implementation of an existing product. Their attachment to doing a microkernel shouldn't have been as strong as their need to do a good kernel, period, but apparently it was, and they were bulldozed by Linux and the BSDs for it.

If you follow the Linux kernel's development, you'll know that the contributors and maintainers take pretty much the opposite of an ideology-oriented approach -- you might even say it's pretty well rooted in the scientific method. The project is willing to try just about anything and let the results speak for themselves. Obviously, even if anyone had wanted to, trying out a microkernel approach at any point past the very early days of Linux would be impossible, but there are numerous other cases of the project trying out multiple approaches or multiple implementations of the same idea, and everyone needing to be willing to see their code thrown out if it didn't measure up.

I'd like to think Torvalds can see the value in advancing the state of the art in microkernels for its own sake -- I find the work being done on the L series of microkernels to be very impressive, though Torvalds may stand by his opinion that "microkernels are stupid and make a hard problem harder." Regardless, I think his main message here is not that you shouldn't bother advancing the state of the art or following a different design path for its own sake, but that you shouldn't let yourself get committed to a certain design when your primary goal is to fill a need in the world and any advancing of the state of the art is incidental.


With the continuing research in microkernels, their use in embedded systems (QNX), and Apple's and Microsoft's moves in this direction (sandboxing, Singularity, Midori), Linus might eventually be proven wrong.


In the meantime, his maybe-wrong idea built the computers that people are using to build their maybe-right idea, and the reverse didn't happen.


There are many reasons why Linus's work became successful; being a free (gratis) UNIX clone probably had much more weight than the monolithic versus microkernel architecture issue.


I think you're grossly misconstruing his point -- that you shouldn't let architectural ideology or a preconceived notion of what the best solution to your problem is get in the way of shipping good software.

No, that's not his point. His point is that he doesn't want any fundamental ideas involved in guiding the architecture of an OS, period.


Large overarching ideas are good for explaining aspects of reality. This does not make them good as the basis for solving complicated problems.


If you want One Overriding Idea, you would probably appreciate something like ColorForth.


I think we need more open source projects that are not open to contribution from anyone. This may upset some people but will keep the bar for quality high. Linus' original work has been tarnished by too many eager but unqualified contributors.

If he had just chosen a small team, I think Linux could have been a real contender to BSD in terms of quality. It would have taken time, but they have had a loyal user base (of non-contributors) and demand from early on, due to the legal problems with obtaining BSD, and that, I think, is what has pushed Linux forward.


You're asserting a lot of things without any specifics to back them up. I've been using Linux-based systems for a long time and the quality of the kernel has never posed any sort of practical issue for me.

If Linus had chosen a small team and taken his time to do anything I'm sure the Linux kernel could have ended up like OpenBSD or similar systems. A very high quality OS that takes its time to do everything to the point where users more interested in a practical OS would have looked elsewhere.


Could you define "practical OS"?

What specifically do you want this OS to be able to do? (Note that we've reached a point where no user necessarily needs to restrict themselves to a single OS. They could use different OSes on the same computer for different purposes. This was not always so easy to do. It will continue to get easier.)


The code in the Linux kernel can make adding new features harder than it needs to be. The network stack is one area that could be better designed.


I think we need more open source projects that are not open to contribution from anyone

That was perhaps the best unintentional laugh I had today.


I have done good. =D Laughing is healthy.

I probably shouldn't have phrased the statement as I did. What I meant was projects that are _potentially_ open to anyone, but which are highly selective about who is allowed to contribute.


I got your meaning, but it took a few passes for the intended meaning to parse.




