Hacker News | wtallis's comments

The 106.9% isn't a problem with the mathematics; it just means an average of more than one failure per drive per year, or equivalently an average lifespan of less than one year. It comes from the fact that they've only had those models in service for a very short time, so the failure rate can only be constrained through the same reasoning used to estimate an MTBF of multiple years from less than a year of testing. With zero failures so far, it's obvious why the bottom of the confidence interval is at 0%.
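The arithmetic behind that kind of bound is the standard zero-failure ("rule of three") calculation: if a Poisson failure process produces zero failures over T drive-years, the one-sided 95% upper bound on the annual rate solves exp(-rate·T) = 0.05. A minimal sketch — the 2.8 drive-years input is purely illustrative, not the actual service time behind the article's figure:

```python
import math

def poisson_upper_bound(drive_years, confidence=0.95):
    """One-sided upper bound on the annual failure rate given zero failures.

    With zero failures observed, P(0 failures) = exp(-rate * T), so the
    upper bound solves exp(-rate * T) = 1 - confidence.
    """
    return -math.log(1 - confidence) / drive_years

# Illustrative only: ~2.8 drive-years of service with zero failures still
# only bounds the annualized failure rate below roughly 107%.
print(round(100 * poisson_upper_bound(2.8), 1))  # prints 107.0
```

More drive-years of service shrink the bound proportionally, which is why young drive models show such alarming-looking upper limits.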

> "How many people who live within a city's limits are really going to say they live in the suburbs?"

Tons:

> "Looking only at respondents in the larger principal cities (those with a population greater than 100,000) of larger metropolitan areas (those with a population greater than 500,000), the breakdown was 56 percent urban, 42 percent suburban and 2 percent rural. That means close to half of people who live within city limits describe where they live as suburban."

"City limits" is a really poor indicator of "urban".


It seems like the national consensus is that the threshold for "urban" is low enough to allow for a few more classifications of density, which would probably be tied to things like yards being squeezed out, single-family homes being replaced by high-rises, and the presence of effective mass transit despite American disdain for such things. The term "downtown" would often fill such a role, though with even more variability than "urban". It's worth keeping in mind that our national capital doesn't even have any skyscrapers, and the term "urban sprawl" is widespread and widely understood to describe something that actually happens with some frequency.

US mass transit may be meager but I don't think I agree that Americans are spiteful of mass transit. E.g...

http://m.metro-magazine.com/news/292617/poll-americans-favor...


Sure, half the population is willing to give it lip service, but we're really good at coming up with excuses, barriers, and delays, and really bad at coming up with funding. Getting a new passenger rail line built is about as easy as getting a new nuclear reactor built.

Look at what Xen has gone through: http://wiki.xen.org/wiki/File:XenModes.png

They started with HVM and PV, and have since evolved HVM toward PV by removing legacy support and software emulation, and have now settled (for now?) on doing everything the PV way except where hardware virtualization assistance is faster on modern hardware. Some of this shifting has been due to changes in hardware capabilities, and some of it has been due to earlier efforts being developed from an incomplete understanding of what techniques are faster.


Since you brought up Xen, I have an honest question: why would you consider using Xen given alternatives like vbox/vmware? More importantly, in say 2-3 years, wouldn't something like this edge them out?

VirtualBox is irrelevant.

VMware is closed source. The real Xen alternative is KVM. KVM is better than Xen in pretty much every way. There's a very big cost for big Xen shops to switch to KVM, but if you're not tied to Xen I can't imagine why you'd use it when KVM is better in every way (kernel integration, tooling, performance, etc).


I think the main argument in favor of Xen was that it would have a smaller attack surface for hackers than KVM. After all, Xen is a hypervisor-based solution, whereas with KVM you are running the full Linux kernel plus qemu as your host.

With that being said, there have been exploits in the Xen hypervisor. As more hardware integration gets added, dom0 starts to look a lot more like a traditional kernel.

Personally, I use kvm for all my virtual machines, since I don't want to run everything under dom0.


The option to run qemu in stub domains, or to not run qemu at all if you use PV, is a big advantage.
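For reference, the stub-domain mode is a single knob in an HVM guest's xl config. A fragment (the rest of the guest config is omitted; double-check the key name against your Xen version's xl.cfg man page):

```
# xl.cfg fragment: run the qemu device model in a dedicated stub domain
# instead of as a process in dom0, shrinking dom0's attack surface.
builder = "hvm"
device_model_stubdomain_override = 1
```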

> Personally, I use kvm for all my virtual machines, since I don't want to run everything under dom0

Did you mean Xen?


kvm doesn't have dom0. More generally, you can run kvm on an unmodified SuSE (or other Linux distribution) kernel.
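To illustrate that point (the disk image name below is made up), a KVM host is just a stock kernel with the kvm modules loaded plus a userspace emulator:

```shell
# No dom0: the host runs an unmodified distribution kernel.
lsmod | grep kvm          # expect kvm plus kvm_intel or kvm_amd
ls -l /dev/kvm            # the device node qemu opens for hardware acceleration

# Boot a guest with KVM acceleration (guest.img is an illustrative image path).
qemu-system-x86_64 -enable-kvm -m 1024 -hda guest.img
```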

> VirtualBox is irrelevant.

Except for every single "prepackaged developer's workstation" solution I've seen so far. Seriously it works on all systems more or less the same, so I see it used all over the place.
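For example, Vagrant — a common way those prepackaged workstations get built — uses VirtualBox as its default provider, which is a big part of why VirtualBox is everywhere on developer desktops. A minimal Vagrantfile sketch (the box name is illustrative):

```ruby
# Vagrantfile -- Vagrant's default provider is VirtualBox, so this works
# more or less identically on Windows, macOS, and Linux hosts.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"    # illustrative box name
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024                   # MB of RAM for the guest
  end
end
```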


I believe he is saying that VirtualBox is irrelevant as an alternative to Xen, not that it is irrelevant in general.

Xen is meant for running a potentially large number of server VMs headless. VirtualBox is meant for running desktop VMs. You could make VirtualBox run headless (exposing a pseudo-screen over VRDP) to do what Xen does, but... eww.


Why is it eww? I found it to be a very nice solution to running a legacy OS on new hardware. I could probably use something else, but VBox with vrdp works great!

Xen is also used on the desktop, e.g. Qubes (Type-1) or Bromium uXen (Type-2).

Is bromium out of vaporware stage yet? They make bold claims, but I've never heard of anyone actually getting a hold of their tech. I've certainly not seen any reviews by serious security professionals picking apart their offering.

Qubes is awesome though. It really does not get as much attention as it deserves.


Some companies are listed at http://www.bromium.com/customers.html. No sign of third-party reviews, sadly.

It's also incredibly insecure. One uses it to run a different OS on the same computer, not really for isolating it from the host OS/other VMs.

Just look at the kinds of vulnerabilities regularly found in it. They're mostly run-of-the-mill buffer overflows or missing range checks in emulation. Simple stuff that should have been caught if they were serious about security.

Compare that to xen or kvm, which have of course also had vulnerabilities, but you can see people usually have to get a lot more creative when attacking those.

If you wouldn't run a program on your actual machine, you probably should not run it in a VirtualBox VM either.


You are exactly correct - VirtualBox is a workstation solution, not a back-end server solution. That's the domain of Xen, VMware, and KVM. Xen and KVM are interesting to hosting providers and technology companies like Google; everywhere else in the world it's VMware.

They rushed several products to market, got tons of market share because they were setting performance records, and then they got basically all of the backlash when the bugs in the controllers and firmware started causing lots of drive failures. They also got in a bit of trouble once or twice for swapping out components with cheaper stuff that didn't perform as well. On the other hand, they were doing a great job of pushing the price/performance frontier so all the more reliable and more expensive drives that eventually made it to market still had to at least match OCZ's performance. That led to the very surprising situation where Intel basically withdrew from the consumer SSD market for a few years because they couldn't keep pace without sacrificing their QA standards.

OCZ basically did the beta testing for SandForce, and paid the price with their eventual bankruptcy. But by then they had acquired Indilinx to provide an in-house alternative to SandForce controllers, and that's a significant chunk of what Toshiba was interested in buying. Winning in the SSD market seems to require making your own controller and/or flash, and there's only one good source for buying a good high-performance controller on the open market (Marvell).

Would you care to propose an objective criterion by which Dart is within an order of magnitude of the popularity or prevalence of Python?

That depends on whether you define "mainstream" to mean "lots of users" or "similar to the kinds of things with lots of users".

If I write songs that sound like Taylor Swift but no one knows who I am, do I make "mainstream" music? Personally, I would say yes.


> If I write songs that sound like Taylor Swift but no one knows who I am, do I make "mainstream" music? Personally, I would say yes.

You might make mainstream music but you yourself are not mainstream.


That's a loaded question, in that it's notoriously difficult to measure, and you already know the answer, but here:

http://www.tiobe.com/index.php/content/paperinfo/tpci/index....

Surprised?


All the rules of evidence ultimately stem from two goals: to discourage law enforcement officers from harassing members of the public by violating their freedoms and privacy, and to discourage law enforcement officers from unfairly or prematurely narrowing the field of suspects by focusing on the first piece of suspicious evidence they can get their hands on. (More generally: don't punish the wrong guy, and do punish the right guy.) But once the deterrent has failed and the evidence has been collected, you end up with a known criminal you are unable to prosecute, and that's a bitter pill to swallow, especially if the criminal is very dangerous and the cops' mistake was minor. The deterrence will never be 100% effective no matter how blindly we cling to that tactic.

The ideal course of action would probably be to admit all the evidence and then prosecute both the suspect and the bad cops for their respective crimes, but that's cost-prohibitive and for minor offenses even being prosecuted in the first place is often too much punishment. So as usual we end up with the courts accruing increasingly complex justifications to strike a compromise where one is needed, but the compromise the courts manage to justify isn't always a good or sensible compromise.


The problem in our system is that, when the police break the rules and get caught, the only penalty is that the evidence is excluded. "Parallel reconstruction" lets police evade even that tiny penalty for breaking the rules of evidence.

Really, there should be some personal liability for the police when they do something illegal to catch a criminal. That change isn't going to happen.

Also, "Making and selling certain chemicals." is something that shouldn't be illegal in the first place.


The portage ebuild system for describing package dependencies and build procedures is awesome. The portage program for solving dependencies and building packages is mediocre at best—it's slow and too prone to not finding a solution even when the constraints of source compatibility are looser than binary compatibility. The gentoo portage repository of packages is clearly understaffed and orphaned packages are all too easy to run across.
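For readers unfamiliar with the format, a minimal illustrative ebuild (package name, URLs, and flags are all made up) shows why it's pleasant to write — and also why USE-conditional dependencies make the solver's job hard:

```shell
# foo-1.0.ebuild -- illustrative sketch of the ebuild format
EAPI=5

DESCRIPTION="Example package"
HOMEPAGE="https://example.org/foo"
SRC_URI="https://example.org/foo/${P}.tar.gz"

LICENSE="MIT"
SLOT="0"
KEYWORDS="~amd64"
IUSE="ssl"

# Dependencies vary with USE flags; this flexibility is part of what
# makes dependency resolution expensive.
DEPEND="ssl? ( dev-libs/openssl )"
RDEPEND="${DEPEND}"

src_configure() {
    econf $(use_enable ssl)
}
```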

All three of those things have gotten better over the years, and I have little doubt that given the attention and effort that RedHat and Debian package managers get, portage could be a clear winner. But the portage we have now has too many pitfalls to be the best all-around choice.


Correct, the all-around best choice is Exherbo's package management. It is extremely similar to Gentoo (after all, many of us used to work on Gentoo), but I'd like to think it has fixed all the problems Gentoo had.

Portage is not used.


There's no way that Exherbo's package repos are anywhere near as well maintained and broad as the distros that people have actually heard of. There's no silver bullet for that problem; the only solution is manpower that they don't have.

Exherbo's package repos are incredibly well-maintained. What is provided generally always works, and things are kept very much up to date (KDE/GNOME/Chrome/Firefox updates within 24 hours, usually). Lack of public awareness doesn't always mean the system isn't as well maintained as a system like Gentoo (which often breaks!).

The broadness of the system isn't quite as vast as many distributions but running a desktop / dev workstation I have never encountered a package not available that I needed.


If you haven't encountered packages missing from their repo, then their search must be broken. In just a few minutes of searching, I found that they seem to be missing anything GIS-related, netperf, smokeping, targetcli, any daap server. That's just stuff I've been using my Linux box for in the past month, but it seems like Exherbo would make me do at least as much work as something like MacPorts!

Perhaps my needs are just different from yours. What I did say was "I have never encountered a package not available that I needed". That is not contradicted by your example. Your needs are different, that's cool. What isn't cool is claiming that the search is broken because I haven't found the need to search for those packages.

Exherbo may not be for you. It values users who are willing to be developers as well and to augment the system with the packages they need. If you want others to do the work for you, that's not what Exherbo is about.

Besides, the original discussion concerned what package management system was best, not if it had tool X, Y or Z that you claim is very often needed.


In reply to a comment that listed the quality of the package repos as one of three major areas of concern, you said that "the all-around best choice is Exherbo's package management" and that "I'd like to think it has fixed all the problems Gentoo had".

If you can't be honest about its shortcomings, you won't be able to convince anyone to try out your pet project. It doesn't matter how reliable and trouble-free it is at managing the core of the system if it immediately degrades to "build it yourself" anytime you want to use something that's not popular enough to make the cut for a live CD.


Perhaps I was unclear, and if that's the reason for any confusion I am sorry. I was referring to the majority of the comment which was about portage's shortcomings (though it is also true that ebuild quality is a major problem for gentoo). I specifically was comparing portage/the Gentoo package management infrastructure (NOT the package repos per se) with Exherbo's package management infrastructure (by which I mean the package manager, alternatives handling, repositories). This is what I meant by "Exherbo's package management"; that does not mean the breadth of the repositories.

I like to think my comments were honest: I admitted that the system while technically superior does not have the breadth that larger distributions do, but that for my purposes it was sufficient. You ignored that and found some packages not currently packaged in an attempt to disprove my experiences. Furthermore I admitted that the project may not be for you since you expect different things from a distribution than many of us do. What is dishonest about any of this? I have been incredibly frank.

Besides, one of the nice things about Exherbo is that it handles the nonexistence of a package rather seamlessly. You can compile it by hand, install to a tempdir, and then have the package manager merge it directly while giving you the ability to specify information about the package (metadata, dependencies, etc.). And then of course the package manager can uninstall it when you no longer want it. This makes the problem of "build it yourself" kinda moot.
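The flow described above can be sketched roughly as follows; the `cave import` option names here are from memory and may differ between Paludis versions, and the package name is made up:

```shell
# Build by hand and install into a scratch image directory.
./configure --prefix=/usr
make
make DESTDIR=/tmp/foo-image install

# Then let the package manager take ownership of the result, so it can
# track and later uninstall the files (cave is Exherbo's package manager
# client; check its own help for the exact import options).
cave import --location /tmp/foo-image sys-apps/foo
```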

I'm not going to bother responding to the "make the cut for a live CD" remark, since obviously there are far more packages than would fit on a liveCD or liveDVD.


I've helped teach a few camps myself and have similar opinions. Alice allowed for impressive graphics but not a lot of depth to the programming (at least in the early version we had at the time). Building Pong-level stuff from more primitive capabilities in real Java was just as rewarding and develops skills that are a lot more directly applicable elsewhere. Learning Java by making Minecraft mods was really interesting: the initial learning curve is a bit steep due to the messy working environment, but it very quickly leads to more advanced topics as the students want to enhance the behaviors of existing creatures, add new graphical effects, or do something that requires extending the protocol for communication between the client and server. If Minecraft were open-source and the codebase were cleaned up a bit, it would be an awesome teaching tool.
