
... If incentives were properly aligned to make this the correct strategy, then this is probably what McDonald's would do. Unfortunately not. https://www.today.com/food/trends/why-is-mcdonalds-ice-cream...

Education is a patch. It's very hard to install though.


You monster.


For those that remember Google Notebook, this is also what Evernote did 10 years ago. It's why I've been using Evernote for 10 years and why I'll never use Google Keep.

https://www.cnet.com/news/evernotes-google-notebook-importer...


The original hash function composed with that validation function can be taken together as one larger function. Call that an uberhash. Such an uberhash still takes some number of bits in and produces some smaller number of bits out, so there will unfortunately still be collisions. That trick is an improvement, though, and newer hashing algorithms contain similarly useful improvements to make creating collisions difficult.
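To make the pigeonhole point concrete, here's a toy sketch (tinyHash, validate and uberhash are invented stand-ins, not real algorithms): whatever you compose, the combined function still maps a large input space onto a smaller output space, so some pair of inputs has to collide.

  // Toy sketch only: composing a hash with a validation step is still one function
  // from many possible inputs to fewer possible outputs, so collisions must exist.
  const tinyHash = (s: string): number =>
    Array.from(s).reduce((acc, c) => (acc * 31 + c.charCodeAt(0)) & 0xff, 0); // 8 bits out
  const validate = (h: number): number => (h ^ (h >> 4)) & 0x0f;              // 4 bits out
  const uberhash = (s: string): number => validate(tinyHash(s));

  // Brute-force a small input space until two distinct inputs share an uberhash value.
  const seen = new Map<number, string>();
  for (let i = 0; ; i++) {
    const input = `msg-${i}`;
    const h = uberhash(input);
    if (seen.has(h)) {
      console.log(`collision: "${seen.get(h)}" and "${input}" both map to ${h}`);
      break;
    }
    seen.set(h, input);
  }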


I've had an Orbi for some time now. What sold me was that it doesn't actually create a mesh network. Instead, it sets up a 1.7 gigabit wireless backhaul connection completely separate from the frequencies used by the client devices you connect. This avoids all of the interference (noisy clients disrupting the access points) and latency (hopping from one access point to another) pitfalls you often encounter when using a traditional mesh network or wireless extender.

The Orbi is basically the wireless equivalent of running wires through the walls.


Well it's still a mesh in the sense that the wireless APs are connected to each other wirelessly.

You can have a mesh network without wireless clients at all if you want to. For example, you can connect two locations together by plugging each end into the ethernet port of an AP; the mesh of access points in between then provides multi-path redundancy.


> If that's true why do I see so many developers using Macs?

There are lots of developers who use Apple hardware with Linux. Even Linus (at least as of 2012)! http://www.cultofmac.com/162823/linux-creator-linus-torvalds...

> Desktop Linux is a mess and always has been. The problem is structural. The people making it don't have any profit motive to make it better. They're all either volunteers having fun or subsidised by server products.

It sounds like you are saying there is a direct relationship between the quality ("betterness") of software and a profit motive to create that software and that there is no profit motive to make Desktop Linux better. I see two flaws with that argument:

1. If, as you posit, there is zero profit motive to make it better, then there should be zero quality to the Linux Desktop. Certainly you can't be saying that there exists nothing worse than Desktop Linux, even conceptually.

2. This argument, if I understand it correctly, also implies that it's impossible for an application with no profit motive behind it to be "better" or "good". This would include all of the open source utilities, text editors, programming languages, compilers, games, emulators, and other applications - many of which ship with every single Mac on the market.

Those two reasons make your argument logically inconsistent, I think.

But, even if your argument were logically consistent, I believe that there is a profit motive to make it better. Some people, including myself, derive non-monetary profit from building something useful and we find that the open-source model allows us to collaborate on these things because they are too complex or time consuming to do alone. The fact that there actually exists a profit motive makes your argument factually inconsistent as well.


My argument is not logically inconsistent, but it may be a bit tricky to see why.

Firstly, I did not argue that quality is a pure function of potential profit. I said there's no profit motive to make desktop Linux better than it currently is (my view is that desktop Linux has been at a pretty constant level of quality for the last decade, which I'm sure some will argue with, but that's been my experience). That's a slightly different argument which I'll elaborate on in a moment.

Secondly, the fact that Macs ship with open source software doesn't invalidate my point. The popularity of MacOS X amongst developers can largely be summed up as "it's UNIX but with large parts replaced with proprietary components". What's left is whatever wasn't relevant to the consumer/pro-designer market segment, mostly a collection of command line tools. The other open source stuff (Apple's own code) was very much directed by the profit motive and open sourced secondarily, as a part of a product strategy.

There are two problems with the volunteer/subsidised-developer model that shine through when using desktop Linux. And in case you think I'm a clueless idiot who doesn't know what he's talking about: I've used desktop Linux for many years and even contributed open source work to it about 15 years ago (I had patches in a few well known desktop projects). Eventually I became disillusioned because of these two main flaws:

1) Volunteers avoid boring work, i.e. fixing bugs in other people's software.

2) Subsidised developers and volunteers lack any incentive to discard flawed ideologies and convictions.

The second is the most important. The first problem can be addressed through Red Hat style cross subsidisation. The second results in massive, glaring weaknesses that developers rationalise as strengths rather than swimming against established dogma.

Some examples of crap that is simply not tolerated in desktop operating systems built by people seeking profit, but is/was often rationalised away as a strength in Linux: the audio server situation, a completely messed up software distribution model, lack of support for proprietary kernel drivers, flaky backwards compatibility, general hostility to proprietary userspace apps, a bizarre insistence that anything important be written in C, refusal to work with any kind of content industry that wants DRM ... lots of policies that Linux users have just learned to accept as futile to argue with, and that Apple and Microsoft don't have the same hangups about. And the end result is, guess what, Macs just work a whole lot better.

I'm painting with a broad brush. The profit motive is just an incentive, a nudge, it's not an iron law. Apple and Microsoft have their own weird ideological hangups, especially in recent years, and they could really use a whole lot more competition. The Linux community has overcome entrenched attitudes a few times in the past - GNOME 2 and systemd are good examples of that. But overall the Linux community suffers from far more self-inflicted wounds that it's psychologically incapable of changing, and without any financial incentive to do so, these things just fester for decades.


So long as one executor is executing the recipe/procedure/etc., it makes sense to simplify directions to make them appear synchronous. When I turn my computer on and wait for it to start, I don't literally freeze up and stop breathing. I go do other things and wait for a signal that startup is complete, or I poll the status of the computer from time to time. The synchronous instruction "wait for the computer to start up" is interpreted as "do whatever you want or need to do until the computer has started up".

As soon as two parties are involved, instructions start making sense asynchronously. Try writing a recipe for baking a cake with two bakers, keeping both of them occupied the entire time. You'll see that the instructions must be read asynchronously; otherwise each baker would not be able to continue reading his/her instructions when a "thread" of operation for the other baker has been called out.
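Here's a rough sketch of that two-baker recipe (the baker/oven names are hypothetical, and TypeScript on Node is just for illustration): each await is the "go do something else until signalled" reading of a synchronous instruction, which is what lets the two instruction streams interleave.

  // Each "await oven(...)" is a point where that baker waits on a signal while the
  // other baker's instructions keep being read and executed.
  const oven = (label: string, ms: number): Promise<void> =>
    new Promise<void>(resolve => setTimeout(() => {
      console.log(`${label} done`);
      resolve();
    }, ms));

  async function bakerA(): Promise<void> {
    console.log("A: mix the batter");
    await oven("A: bake the sponge", 300);   // A waits here; B's steps continue
    console.log("A: frost the cake");
  }

  async function bakerB(): Promise<void> {
    console.log("B: whip the cream");
    await oven("B: chill the filling", 100); // B waits here; A's steps continue
    console.log("B: assemble the layers");
  }

  Promise.all([bakerA(), bakerB()]); // both instruction streams are in flight at once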


Async is not parallel. With Node, only one of your bakers would actually be working at a time, since JavaScript does not support multithreading! The only way to achieve this would be to dispatch to other Node processes and handle the coordination yourself. With the JVM, however, you could actually do things in parallel without any such RPC handling.
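A quick sketch of that distinction (hypothetical names, single-threaded Node assumed): awaiting a timer lets the other baker run while one is merely waiting, but the synchronous, CPU-bound kneading holds the only thread, so the two kneads always run one after the other rather than in parallel.

  // Asynchrony interleaves waiting; it does not parallelise CPU-bound work.
  const kneadDough = (name: string): void => {
    let x = 0;
    for (let i = 0; i < 1e8; i++) x += i; // busy work: nothing else runs meanwhile
    console.log(`${name}: kneading finished`);
  };

  async function baker(name: string): Promise<void> {
    kneadDough(name);                                         // blocks the single thread
    await new Promise<void>(r => setTimeout(() => r(), 100)); // waiting: the other baker may run
    console.log(`${name}: done`);
  }

  // Both bakers are "async", yet their kneading runs strictly one after the other.
  Promise.all([baker("A"), baker("B")]);

To actually knead in parallel you'd have to hand the work to separate Node processes, as described above.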


Syncthing doesn't reconcile differences. Instead, a copy of the file is created.

  Syncthing does recognize conflicts. When a file has been modified on two devices simultaneously, one of the files will be renamed to <filename>.sync-conflict-<date>-<time>.<ext>. The device which has the larger value of the first 63 bits for his device ID will have his file marked as the conflicting file. Note that we only create sync-conflict files when the actual content differs.
https://docs.syncthing.net/users/faq.html


Did Google? Google is very much a polyglot organization. https://www.quora.com/Which-programming-languages-does-Googl...

As for Android, Google purchased Android Inc. in 2005, and I'm pretty sure the decision to use Java came before Google's purchase.


It also helps that the Android team has quite a few ex-Sun engineers who used to work on Java, so I imagine that they would rather keep pushing it than anything else.

They are always quite clear that it is Java + C or C++ for native methods, regardless of what developers might wish for.

