
From the article:

"Apple Macintosh computers and servers running OSX use NTP, and Stenn said Apple developers have called him for help on several NTP issues. In the last such incident, he said he delayed a patch to give Apple more time to prepare OS X for it. When they were ready, he applied the patch and asked "whether Apple could send a donation to the Network Time Foundation," Stenn recalled. "They said they would do their best to see that Apple throws some money our way." But it hasn't happened yet."

Surely he needs to say upfront that there is a consultancy fee. I'm sure most big companies can't make donations easily; what they can do is pay for services and products, which they do all day, every day.

-----


It's almost like this (at the Apple store):

Customer: I'd like a new MacBook, I really need to get some work done.

Apple Employee: Here you go, this should solve your problem. (gives him a new MacBook)

Customer: Great

Apple Employee: Could you donate to the Apple foundation?

Customer: Sure, I'll ask my partner what they think. (walks out the shop with the MacBook, without paying for anything).

-----


It's becoming entirely clear to me that the vast bulk of nerds running open source projects do not have the requisite skills to operate their projects as sustainable businesses capable of paying their own bills.

I am, broadly, of the opinion that a non-profit needs to take up (or form) an "infrastructure consultancy" firm with the financial structure and incentives to ensure that projects like LibreSSL, GPG, NTP, etc. are funded and maintained; some of that will involve consulting work for large firms for large piles of money.

Anyway, I don't have a lot of swing in that field, but... it's my conclusion. :-)

-----


I've come to the same conclusion. It needs a set of junior consultants to do the more basic work at a decent rate, leaving the main person/people as the highest cost per day - and only called in if necessary. And someone to set up funding or support structures with companies that need/want it.

-----


You're right, we have a much better standard in HDMI, because every laptop has that now. Oh wait, except the MacBook Air, which has DisplayPort. So HDMI and DisplayPort. Oh wait, USB Type-C. So that's it, just three connectors to replace VGA.

Oh, and I forgot about iPads - 2 different ones for those - and Android phones, with at least 2 MHL connectors plus whatever Google does in their phones. We haven't even got onto mini and micro HDMI.

At least 10 different connectors on recent devices then? Don't you just love standards?

-----


Are you sure you can live in a house worth 0? Most places with low house prices also have no jobs, broken infrastructure (electricity, water, roads, shops etc.) and high crime. For a house to drop to 0, the location would have to be really bad, and you wouldn't, and perhaps couldn't, live there anymore.

-----


No idea why you're getting down-voted. The assumption that houses are some kind of special asset because they always give a dividend of "you're able to live there" ignores an almost infinite number of exceptions to that.

-----


Being a server is not about power; a $5 Digital Ocean VPS is a server, and it has way less power than the MacBook Air you mention as a client. The only thing that makes a server a server is that it serves non-local user(s) - it could be a low-powered ARM-based RPi or a multi-core Intel thing, it really just depends on its workload. The Mac Pro is designed as a client, the same as all current Macs (they don't sell a server anymore), which is how they are typically used.

-----


In terms of how (from memory + fact checks on Wikipedia): Microsoft and Netscape battled over browsers throughout the late 90s, with Netscape starting in the position of the dominant/only browser and IE seen as a joke. That changed quickly: by the time IE5 (1999) was released, Netscape seemed completely mired in technical debt with its product, unable to offer even the most basic CSS support in Netscape 4.x. IE5 was also the release that added support for what is now called AJAX.

IE6 was released in August 2001, at which point it had most of the market; IE also existed for the Mac, and most people I knew at the time thought of Mozilla/Netscape as completely irrelevant as a development target. Opera has basically always been irrelevant in my view. This started an era of IE-only sites, which further damaged the competition.

Microsoft disbanded its IE development team, and it wasn't until a few years later that people realized this had happened (it wasn't announced until 2003) - people seemed to assume Microsoft was working on a new version of IE, which was natural since it was pretty much the only browser in town.

The WHATWG was formed in 2004 so everyone else (except Microsoft) could work on web standards, because that work had basically stopped at that point.

Firefox wasn't released until November 2004, which was the first time it looked like there would be a credible threat to IE (though it had been pretty good for a year before that under the Mozilla name, but still unknown to most).

The Acid2 test was created in 2005, which further highlighted the problems with IE6's rendering: http://en.wikipedia.org/wiki/Acid2.

IE7 was released in October 2006, by which point web developers who had been trying to do more and more with the web were thoroughly frustrated with IE and its rendering bugs. IE7 was a big disappointment because, whilst it fixed some long-standing problems like its box model, it was still a long way off the standards that had been produced since IE6, and it didn't pass the Acid2 test.

In terms of why: I've wondered about that for a long time now. Mostly, I think IE6 was already too good at being a web application platform, and Microsoft was worried (as it had been with Java) that this would make Windows irrelevant. Given that IE was effectively free, they probably assumed there would be no viable competition due to the lack of a business model. By stopping work on IE, Microsoft could keep existing websites working while web apps stayed too clunky to use, so people would keep writing native Win32 apps.

-----


Reasons for "why" not often discussed: 1) Vista consumed the top systems devs at Microsoft for seven years 2) the IE team wasn't "disbanded" so much as loyal to management at Microsoft that was discarded (Brad Silverberg, David Cole) 3) enterprise customers were plenty happy with IE6 4) the dotcom bust quieted the indie developer ecosystem 5) Microsoft honestly thought they could get NT kernels + .NET on small devices and leverage massive developer support and existing tooling in the late 90s.

-----


In the CRT days people would change resolution to get the DPI/text/widget size they wanted; as a result, almost no one I saw was using the highest resolution their monitor supported. It was easy to buy a CRT because you just picked the size/cost you wanted and knew that you could set the resolution to whatever suited you.

LCDs introduced the problem of a fixed native resolution, where you basically have to choose the right resolution at the time you buy the screen. There were 1920x1200 laptops [1] and 4K 22-inch (T221) screens 10 years ago, so it was clear this problem was coming, yet the software never changed to become resolution-scalable.

[1] http://forums.thedigitalfix.com/forums/archive/index.php/t-2...

-----


You can't realistically submit a patch to change the direction that systemd is going in. For example, they won't accept a patch which removes 95% of the code so that a more modular system can be built.

Submitting a patch implies you agree with the general direction but need a bug fixed or a feature added.

-----


Not only would submitting a patch mean agreeing to their goals (Lennart gets to push the Overton window a bit further), but suggesting that we should simply submit patches presupposes that the systemd cabal would ever accept them. Unless a patch is perfectly in agreement with their goals - for the complete software - they probably won't accept it. They don't even accept already-written and tested patches for trivial things like #ifdef-ing a couple of minor fixes so the project can build on a different libc.

Lennart Poettering[1]:

    Humm, I know this will disappoint you, but we are not particularly
    interested in merging patches supporting other libcs, if those are not
    compatible with glibc. We don't want the compatibility kludges in
    systemd, and if a libc which claims to be compatible with glibc actually
    is not, then this should really be fixed in the libc, not worked around
    in systemd.

If they aren't interested in trivial compatibility patches, they certainly aren't going to accept any patch that dares to disrupt their tight integration or questionable design choices.

As for forking the whole thing, remember that when logind was briefly liberated so it could be built as a standalone package, Lennart went and did a big rewrite so that the next version was much more tightly integrated with systemd. When he controls the internal APIs and can change them whenever he wants, a clone would have to be a total replacement right from the start, or it ends up perpetually having to catch up to changes introduced just to cause breakage.

[1] http://lists.freedesktop.org/archives/systemd-devel/2011-Jul...

-----


His response seems perfectly reasonable to me - even more so after reading that whole exchange.

Why should the Systemd team pay the overhead - in terms of complicating their code - to work around incompatibilities in another libc that will also affect portability of a lot of other Linux software?

-----


The same reason most other projects accept trivial patches like that: it's not actually a cost or complication, and helping compatibility and interoperability in the software ecosystem is a good thing.

We're not talking about asking for some new work to be done. We're not talking about any kind of change to how the project works.

This is about trivial changes like #defining a function name, changes that aren't even included in the build unless you are using that libc. It is actually rather surprising behavior to see in a publicly-developed project. This kind of fix is so incredibly common that we've created tools such as "cmake" and "autoconf" to handle the common cases and make the #ifdef-ing easier.
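
To give a purely illustrative example of what such a trivial fix looks like - the function here is my own pick, not one from the actual systemd patches, and HAVE_CANONICALIZE_FILE_NAME stands in for an autoconf/cmake-style capability macro:

    /* canonicalize_file_name() is a GNU/glibc extension; on a libc
     * without it, realpath(path, NULL) is an equivalent POSIX.1-2008
     * call (both return a malloc'd canonical path). */
    #include <stdlib.h>
    #ifndef HAVE_CANONICALIZE_FILE_NAME
    #define canonicalize_file_name(path) realpath((path), NULL)
    #endif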

-----


"Trivial" changes like that contributes substantially to making projects hard to read and understand. When there are no better alternatives, that may be warranted, but in this case there is an obvious alternative: Fix the libc implementations that are incompatible with glibc, and at the same time gain the benefit of helping other applications.

I wish more projects would take this line.

Autoconf is the devil. It's a symptom of how broken Unix-y environments have been, and of how people were willing to impose a massive maintenance cost on countless application code bases instead of either pushing their vendors to get things right, or agreeing on common compatibility layers.

-----


Well, in this specific case the patches would have been subtly broken, i.e. replacing a thread-safe call with one that is not. So it was not just #ifdef-ing (they even suggest some ways to do that better in the patches, e.g. a capability-based #ifdef instead of checking for uClibc or not).

-----


It's a matter of perspective. You could also say that glibc is adding incompatibilities by deviating from standards, and now systemd depends on them. I don't consider it "perfectly reasonable" that Gnome, systemd and the Linux kernel are now starting to depend on each other when previously all of these components could be exchanged for others. It's a mischaracterization to say that the systemd developers "shouldn't have to pay the overhead" of making their code compatible, because they started out by introducing an architecture that promotes this very lock-in to begin with.

-----


glibc is the standard for C libraries to follow on Linux.

In this particular case, mkstemp() is not a viable replacement for mkostemp(). A proper fix is to provide mkostemp() in uClibc, or to compile with a shim that provides it.
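
To make "not a viable replacement" concrete, here's a minimal sketch (my own illustration, not the patch in question) of the naive fallback and why it's subtly broken: mkostemp() applies flags like O_CLOEXEC atomically when the file is created, while an mkstemp()-plus-fcntl() shim leaves a window in which another thread's fork()+exec() can inherit the descriptor:

    /* Hypothetical shim for a libc lacking mkostemp(). */
    #include <fcntl.h>
    #include <stdlib.h>
    static int mkostemp_shim(char *template, int flags)
    {
        int fd = mkstemp(template);         /* descriptor created here...       */
        if (fd >= 0 && (flags & O_CLOEXEC))
            fcntl(fd, F_SETFD, FD_CLOEXEC); /* ...flag applied only here, so a  */
                                            /* fork()+exec() in between leaks   */
                                            /* the open file into the child.    */
        return fd;
    }

Which is why implementing mkostemp() properly in uClibc is the cleaner fix, rather than papering over it in every application.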

Arguing over whether including the shim in Systemd would be acceptable would be a different matter, but parts of the patches as presented were flat out broken.

And the Linux kernel is not starting to depend on systemd or the others. The Linux kernel is moving towards demanding a single cgroups writer, and at the moment Systemd is the main contender in that space.

That Systemd depends on Linux is unsurprising, given that they stated from the outset exactly that: they were unwilling to pay the price of trying to implement generic abstractions rather than taking full advantage of the capabilities Linux offers. You may of course disagree with that decision, but frankly, for a lot of us getting a better init system for the OS we use is more important than getting some idealised variation that the BSDs could use too.

> an architecture that promotes this very lock-in to begin with.

The "architecture that promotes this very lock-in" in this case is "provide functionality that people want so badly they're prepared to introduce dependencies on systemd".

At some point enough is enough, and sub-optimal advances still end up getting adopted because the alternatives are worse. Systemd falls squarely in that category: I agree it'd be nicer if it were presented and introduced in nice small digestible separate chunks with well-defined, standardised APIs, so that people could be confident in their ability to replace the various pieces. But if the alternative is sticking with what we already have? I'll pick Systemd, warts and all.

Looking at posts from the Gnome people, the original intent appears to have been to provide a narrow logind shim exactly to make it easier to replace logind/systemd with something else. If someone feels strongly enough to come up with a viable shim or an alternative API that can talk to both systemd and other systems reliably, then I'd expect Gnome to be all over that exactly because they will otherwise have the headache of how to continue to support other platforms.

The problem is that Gnome has for a long time already depended on expectations of user session management that ConsoleKit on top of other init systems has been unable to properly meet, so Gnome has in many scenarios been subtly broken for a long time.

-----


For better or worse, systemd has adopted the OpenBSD approach to portability. Nothing is stopping you from creating a systemd-portable project, similar to how OpenSSH-portable makes OpenSSH usable on non-OpenBSD platforms.

As to logind, it may have been a better choice for the long term to do a separate implementation of the public and stable logind DBus API instead of trying to run the systemd-logind implementation without systemd as PID1, but supposedly whoever did the latter thought it was the best short-term choice.

-----


You can fork it. You can fork all of Debian. But there are numerous reasons that's going nowhere.

The main one being: this is not the issue the loudest voices say it is.

-----


Forking a distribution will help by giving an alternative, but it's not as easy as just saying it. Maintaining a distribution is a massive, ongoing effort. Building a team of people with enough time to make that happen isn't something you do overnight. The fact that one hasn't magically appeared since this started has more to do with that than anything else. (Followed closely by people generally waiting to see how this shakes out before they pull the trigger.)

Second - while it will provide an alternative that helps frame the debate, this is not a minor undertaking. With every other distribution caving, maintaining a distribution that does not use systemd will require a lot of work to keep all of the software out there working properly with whatever alternative init system it chooses to use.

This alternative distro is also going to have to deal with how to solve the init problem. We had some good options in play, but I don't believe we'd found the best answer to the problem yet when Lennart came bowling through like a bull in a china shop. So any distribution effort is going to have to take on the role of choosing the best-of-breed alternative and making the effort to ensure it continues to develop and improve.

This isn't something you take on lightly.

-----


I'm not sure it really matters that rpm and deb solve the same problem unless you use different distros.

However, npm, gem, pip and composer mostly solve the same problem as each other, but a different one to rpm, in that we want per-project dependencies rather than per-system ones, and we want them to be committable. Also, many projects combine languages, so it would be good if these were combined.

-----


On HTTPS pages where the warning triangle comes up due to mixed content, tell us what the insecure resources were.

-----


Why would you need to do any of that? RAID6 can tolerate 2 drive failures, and Linux will tell you which drive is bad. Just slide the pod out and replace the drive: no data lost, very little downtime.

-----


Three drive failures? My question is how you practically determine which drive to swap - I don't see any labels or anything. Also, I read that the current version supports rails; the one in the article looks bolted to the rack. The article had no date on it, and it sounds like a lot of the issues have been addressed.

-----


Brian from Backblaze here.

> how do you determine which drive to swap

Every two minutes we query every drive (all 45 drives inside the pod) with the built-in Linux smartctl command. We send this information to a kind of monitoring server, so even if the pod dies entirely we know everything about the health of the disks inside it up until 2 minutes earlier. We keep the history for a few weeks (the amount of data is tiny by our standards).

Then, when one of the drives stops responding, several things occur: 1) we put the entire pod into a "read only" mode where no more customer data is written (this lowers the risk of more failures), 2) a friendly web interface informs the datacenter techs which drive to replace, and 3) an email alert is sent out to whoever is on call.

Each drive maps to a name like /dev/sda1, and these drive names are reproducibly in the same location in the pod every time. In addition (before it disappeared), the drive also reported a serial number like "WD-WCAU45178029" as part of the smartctl output, which is ALSO PRINTED ON THE OUTSIDE OF THE DRIVE.
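
For the curious, the query itself is roughly this simple. The sketch below is just illustrative, not our actual monitoring code; the device path and parsing are placeholders:

    /* Illustrative only: ask smartctl for one drive's identity info
     * and print the serial number line. */
    #include <stdio.h>
    #include <string.h>
    int main(void)
    {
        FILE *p = popen("smartctl -i /dev/sda", "r");
        if (!p)
            return 1;
        char line[256];
        while (fgets(line, sizeof line, p))
            if (strncmp(line, "Serial Number:", 14) == 0)
                fputs(line, stdout);   /* e.g. "Serial Number: WD-..." */
        return pclose(p) == 0 ? 0 : 1;
    }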

TL;DR - it's easy to swap the correct drive based on external serial numbers. :-)

-----


Ok, thanks for the info. That doesn't sound too bad.

-----


Wouldn't the SATA controller tell you which port the bad drive was on, and then you'd presumably have some standard mapping from ports to physical locations in the case?

-----


Yes, I'm not saying it's not doable. It just seems error-prone and time-intensive to replace a drive.

-----


I don't understand why they're using RAID6 instead of file-level replication/integrity. They're already running an application on top of it - do the replication there and skip replacing disks...

-----


I don't think they are doing replication at all; I don't see it mentioned. RAID6 is cheaper than replicating the files, which would mean twice as many servers - with RAID6 you only give up two drives per array to parity, versus a 100% storage overhead for full replication.

-----


But you'll need some level of multi-data-center durability (or at least across racks), so you'll want to replicate user content anyway. Otherwise a dead server could prevent a restore.

-----


I see no mention that they do that; clearly they're trying to make this as cheap as possible. If the server is down, people can wait for their recovery; many tape-based backup systems require you to wait 30 minutes or more to get at the backup, which covers most outages. Even waiting a day for a recovery isn't the end of the world, especially given that very few recoveries are being done and big recoveries require shipping a drive, which takes days anyway.

The worst case, of course, is that they actually lose your data, probably as the result of a data center fire/explosion, though people should, in most cases, still have the primary copy of the data on their machines. However, re-backing it all up would take a long time.

-----


You don't run a serious data storage service on just RAID6.

Data centres do fail, have fires etc.

-----


It's backup, not primary storage. If they were going to spend twice the cost, do you think there would be at least a single mention of it somewhere? Anywhere? (I couldn't find one.)

This outage post makes no mention of a backup datacenter and they say backups were halted as a result of this outage: https://www.backblaze.com/blog/data-center-outage/

-----
