The Mozilla Foundation's attitude has changed since then; it is simply a corporation now, not an open-source community, and I really doubt it will come up with anything significant.
Just a few days ago I spent half a day cleaning bloatware from an Acer desktop, a Sony tablet, and a Samsung phone. It is really crazy how every small company wants to catch you with a push-button daemon that will quietly store all your information and sell it later. We desperately need some alternative - at least an open-source phone where you can control all the components. However, I suspect things like voice input or good navigation would be very hard for the open-source community to create.
I sought refuge in BSD land as a consequence of this. I got sick of the whole drama around Gnome, systemd and everything else precluding me from properly using my computer. Everything is perpetually half-broken, bug reports and user demands are ignored if they don't fit their world domination agenda and so on.
I still develop for Linux because they pay me to do it at $work, but it's increasingly unpleasant. Things are still fairly sane in the kernel (barring the occasional rushed device drivers or the whole ARM clusterfuck, but there are limits to how much that can be helped). Userland, on the other hand, is full of 0pointer-branded narcissism and nearly unworkable for anything that has to run for more than three months without constant attention. Great for devops, brain aneurysm-inducing for anything else.
It is possible to avoid that crap. Older - and much saner - software still works. Just ignore the "desktops" and run the specific tools you need in a saner window manager of your choice.
# e16 - from my cold, dead hands
But it's getting pretty hard to avoid the octopuses. I managed to avoid e.g. systemd (not because unix philosophy something something but because they keep doing shit like ) by staying with Gentoo for a while. But the useflags gymnastics required to keep half-baked things out of my system became too time-consuming to be worth it.
Some of the problems do remain (e.g. my desktop looks like it's 1996 and everyone has their own favourite toolkit again because GTK3) but at least I'm not debugging non-booting systems and strange things happening with USB sticks on a daily basis.
I am very disappointed (this is an understatement) at the turn Linux and open source took in, say, the 2010s: a mix of corporations and brats messing up the existing Linux ecosystem for fun or profit; open source treated as a portfolio to get a job (hi GitHub, I am talking to you) and generally as a means for individuals or companies to make money one way or another; free-software building blocks used by companies whose single goal is to lock you in as much as possible in order to extract every penny they can from you (free software used against freedom, that takes the biscuit); Linux and free software reduced to mere tools/commodities, with no spirit left (yeah, I know there have always been different spirits, but the most common and general attitude was far from business-minded).
I would not have minded if they had forked Linux/other projects and done whatever they liked on a new separate platform, even if this had meant a decline of Linux, but their insistence on affecting the existing ecosystem is... dreadful.
I will probably someday move everything I can to BSD and keep one Linux system to run specific programs I would still need, just as in the past we often kept one Windows machine to run software only available on that platform. For me, that will mean the Windowsization of Linux is complete.
Mozilla is a joke. I have somewhere a plot I made that shows Firefox market share side by side with Mozilla's revenue over the years. The curves cross in opposite directions, like an X: market share drops as more money flows in. They managed to build a wonderful and successful complex piece of software (and a mini-ecosystem around it) with very little money; but when hundreds of millions of dollars a year started to flow in, they lost their footing.
That's my 2nd choice. It's simple, lightning-fast, with almost no memory footprint.
That's a perfect example of the problem with the 0pointer monoculture. It works great, until you do something slightly more complicated. Of course, the suggested fix is to update to a newer version of systemd. I guess you're out of luck if that newer version has compatibility issues.
> their own favourite toolkit again
I guess I never cared much about looks. Look and feel are important, but "working" and "not crashing" are far more important.
One of the problems we're seeing is that the concept of the "distro" is largely dead. The damage done by systemd/etc wasn't just the bugs like your . The forced monoculture killed the idea of different distros.
That's actually a boilerplate Debian bug closure message, mail-merged with the bug number, the maintainer's contact information, and the package's changelog. One can find it at the bottom of many Debian bug logs. The suggestion really comes from Debian.
Moreover, the bug was in the "unstable" version of Debian, i.e. a version of the package (229-1) that had not even progressed to the "testing" version of Debian (which is currently at systemd version 228). Debian's policy on bugs in "unstable" can be read on Debian's WWW site at https://wiki.debian.org/DebianUnstable and https://www.debian.org/releases/sid/ .
The Debian message is, as mentioned, simply robotic Debian bug-tracking-system boilerplate. The underlying systemd bug reports have stuff written by actual human beings:
But, as usual, youngsters stated we were just naysayers.
And now GNU/Linux distributions are basically a second coming of the UNIX wars.
I created the service ( https://anonimho.com/privacy.html ) as a statement against exactly this practice.
Edit: (One could argue that those are different teams. But even if so, those failed attempts, and there have been enough already, dilute their brand and send the wrong message. Besides, why spend money on such a project when you can give it to the browser team instead? Now, Rust et co I understand, because it might provide the future of the browser engine with Servo, being secure, parallel, etc. Asm.js and other browser related initiatives are fine too).
Being a browser maker isn't really an advantage unless you leverage that in some other business like a website, web development solutions or (with enough market share) locking people into your ecosystem.
The real competitive advantage goes to Google. They control the most popular search engine and mobile platform. They can make or break just about any app or website. Their browser just fortifies their dominance.
Yeah, the latter.
That said, they should work on their competitive advantage too -- the fact that they are open source and pro-web which matters to a lot of users.
The only real advantage Mozilla had was technical: one of the few popular and mature browser engines, which only their programmers knew well enough to work on. Since WebKit, though, that advantage has disappeared.
You could say their image as a non-profit out to do good is an advantage. Maybe that is something they should work on.
I think it is a mistake for Mozilla to try this. Firstly, someone else already does this - IoTivity. AllJoyn has recently abandoned its own effort and joined IoTivity too, so IoTivity is really the only open IoT protocol in town. That is a good thing - adding another competitor is just going to make people wait (again) until it is clear who is going to win.
Secondly, consumers don't really care that much about openness, and Google and Apple have a huge advantage in this space because they can build support into their mobile OSes. It's not just a matter of bundling a default app either - they can do things like access your wifi passwords for provisioning. I don't think Apple will do that well with HomeKit as long as it is iOS-only, but Weave is cross-platform, sort-of open, fairly complete, and seems pretty well designed.
> A few small players are beginning to enter this gap, though their proposals are still not complete enough to solve the problems we've identified
Yes that's putting it mildly. AllJoyn never supported out-of-the-home control, and IoTivity is very much a work in progress. But I think Mozilla are hugely underestimating the amount of work involved. And given how much work is involved, why not join forces with IoTivity?
So the question is now: how do you want your IoT? A number of companies have already answered: silos, low quality, little-to-no hackability, little-to-no privacy. Mozilla wants the IoT to go in the right direction: open standards, open source, high quality, and the user in control.
Caveat: I'm part of Mozilla's Project Link. And I'm having fun coding it :)
I've also seen rumors that Mozilla is working on a successor to Persona, but absolutely no substantive evidence as of yet.
I mean come on. Firefox OS on embedded devices, the thing could barely run on cheap smartphones.
Living units need an identity. Devices introduced into the living units need to be introduced to that identity to pair with it. Phone-based programs may also need to be paired with the living unit identity. Each pair needs a set of security restrictions.
You want to set this up so that the homeowner can look at the house webcams, but nobody else, including the webcam manufacturer, can. A unified identity and permission system, perhaps built on Persona, lets you set up such connections without every phone having to be paired with every device. Also, with a unified permission system, you can revoke permissions. You might permit a guest access to the house systems but remove that access when they leave, for example.
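A minimal sketch of that grant/revoke model, in Rust. All names here (`HomeIdentity`, `grant`, `revoke_all`) are hypothetical illustrations, not anything from Persona or Project Link; the point is that a central identity lets you revoke a guest's access in one operation instead of re-pairing every device.

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical home identity holding per-user device grants.
struct HomeIdentity {
    // user -> set of device names that user may access
    grants: HashMap<String, HashSet<String>>,
}

impl HomeIdentity {
    fn new() -> Self {
        HomeIdentity { grants: HashMap::new() }
    }

    fn grant(&mut self, user: &str, device: &str) {
        self.grants
            .entry(user.to_string())
            .or_insert_with(HashSet::new)
            .insert(device.to_string());
    }

    // Revoking is a single central operation: no per-device re-pairing.
    fn revoke_all(&mut self, user: &str) {
        self.grants.remove(user);
    }

    fn can_access(&self, user: &str, device: &str) -> bool {
        self.grants
            .get(user)
            .map_or(false, |devices| devices.contains(device))
    }
}

fn main() {
    let mut home = HomeIdentity::new();
    home.grant("owner", "webcam");
    home.grant("guest", "lights");
    assert!(home.can_access("owner", "webcam"));
    // The webcam manufacturer, like any other principal, gets nothing by default.
    assert!(!home.can_access("guest", "webcam"));
    home.revoke_all("guest"); // guest leaves: one call removes every grant
    assert!(!home.can_access("guest", "lights"));
}
```

A real system would of course back this with cryptographic identities rather than strings, but the default-deny, centrally-revocable shape is the same.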
Project Sensor Web: yeah, let's add another solution to a problem that has been solved ten times over.
Project Smart Home: also crowdsourced? After Mozilla's previous handling of hardware (like Firefox OS dev phones that were deprecated in record speed), I can only say: don't.
Project Vasomething: ANOTHER IoT framework that is used and needed by no one?
We are about 2 years past peak Mozilla.
I'm also curious if Mozilla is building in automatic updates in all of this software from the ground-up, seeing as how that appears to be the fundamental weakness of everything related to the IoT.
The current prototypes of Project Link are developed in Rust and tested on a RPi. However, we have not decided on a specific hardware platform, and there is even a chance that this will remain cross-platform – I'm currently running prototypes on my laptop.
In the current state of things, yes, the software is designed to act as a proxy between your connected devices and your access point.
I can't speak for other projects, but we definitely intend to have an update mechanism. This is very much not finalized yet, so any suggestions you have (currently, as a detailed blog post – once we are better organized, as an RFC) would be useful.
How about some really captivating features like:
Things like IPFS, while very promising, are not mature enough to justify the effort of adding them in the core of gecko yet.
Also, I dispute the claim that supporting IPFS would "make Firefox a go-to tool" for the broad mass of internet users, unfortunately.
I certainly understand the hesitancy with IPFS. It is still too new. And they're still ironing out features and implementations. However, what gets me is that it's live right now, and I'm using it for quite a few things.
Right now, I'm getting some VM stuff up and running. My idea is that jor1k Linux in JS can be run on IPFS. Still playing around with it, but it seems very stable and fast.
Also, regarding your 3rd point: I've attributed many network effects to the 85/15 principle. The 15% is your tech user base. They're the ones who drag everyone else onto a platform or technology. The rest (85%) follow because the early adopters knew it was the place to go. Gmail was like this, as was Facebook, as was Firefox, as was Napster, etc. Adding something like IPFS as base support adds "Cloud Storage" where nothing quite like it existed before - well, aside from "cloud" meaning other people's servers.
Maybe I'm completely wrong. Time will certainly tell, but one thing I know, is that I am indeed impressed with what I'm seeing already.
Also, in my experience, Servo is a very good platform for experimental features such as these, so if you feel that they will make the Internet better, you should consider contributing them to Servo.
With regards to Tor/I2P, it would go a long way if the browser could recognize that a .onion or .i2p link was clicked and use an alternate resolver. That would also call for a mode in the browser that sanitizes all user-identifying fields and makes all browsers look exactly the same (to maintain anonymity).
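The dispatch part of that idea is simple to sketch. This is not how Firefox's networking stack is structured; `resolver_for` is a hypothetical helper showing only the "route by pseudo-TLD" decision:

```rust
// Which resolver should handle a given hostname?
#[derive(Debug, PartialEq)]
enum Resolver {
    Tor,     // .onion: hand the request to a Tor SOCKS proxy
    I2p,     // .i2p: hand the request to an I2P proxy
    Default, // everything else: normal DNS
}

// Hypothetical dispatch helper: .onion and .i2p are reserved
// pseudo-TLDs that must never hit the regular DNS resolver
// (leaking them is itself a privacy problem).
fn resolver_for(host: &str) -> Resolver {
    if host.ends_with(".onion") {
        Resolver::Tor
    } else if host.ends_with(".i2p") {
        Resolver::I2p
    } else {
        Resolver::Default
    }
}

fn main() {
    assert_eq!(resolver_for("example.onion"), Resolver::Tor);
    assert_eq!(resolver_for("forum.i2p"), Resolver::I2p);
    assert_eq!(resolver_for("example.com"), Resolver::Default);
}
```

The hard part, as the comment says, is not this routing but the fingerprint-sanitizing mode around it.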
I know that IPFS is actively working on a websockets/client-side JS layer for their system, so that any browser can play along, with no noisome downloads or binaries.
And honestly, I didn't know about Servo. I've already enough on my plate, that I didn't need to know about yet another awesome project :)
From an engineering perspective, implementing lots of proprietary protocols in a single gateway device is a lot of work and will never work 100% fine. Most people who have worked on such solutions will confirm that, and so can I (in a somewhat different domain). You can sink lots of money into these projects, and they still might not work at all for some customers - just like universal remotes, only more complex.
One way forward could be pushing the creation of devices (gateways, sensors, actors) that are based solely on standardized and accessible protocols. I'm not the biggest fan of CoAP (because I think it's hardly possible to implement correctly), but I would still prefer it to reimplementing Z-Wave or Hue protocols. HTTP/2 could also be a good fit. But even if you standardize at least the protocol layer, the possibilities for API design are endless. Standardizing IoT APIs throughout the whole stack in a backwards-compatible fashion could really be something worthwhile, but the effort will be big, and it's questionable whether other parties are actually interested in standardization.
And just a side note: I don't think Rust is the best choice for building a prototype of a mostly event-based system. Event-loop semantics are hard to implement because of ownership/borrowing issues, the necessary dependencies (websocket libraries, etc.) are all in their infancy, and it might also not let you move as fast as you want for a prototype (where, as a tradeoff, you can live with lower performance). Yes - I'm sure the Rust community will disagree with me on the last point ;)
You should avoid sharing state when you're using a message-passing system. Cross-thread communication is handled by the messages; sharing state on top of them just muddles things further. So the borrow checker disallowing that isn't a bad thing.
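A minimal sketch of that style with the standard library's `std::sync::mpsc` channels: the worker thread owns its state outright, and other threads can influence it only by sending messages, so there is nothing for the borrow checker to complain about.

```rust
use std::sync::mpsc;
use std::thread;

// Messages are the only thing that crosses the thread boundary.
enum Msg {
    Add(u64),
    Done,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        // `total` is owned by this thread alone - never shared, never locked.
        let mut total = 0u64;
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Done => break,
            }
        }
        total
    });

    for n in 1..=10 {
        tx.send(Msg::Add(n)).unwrap();
    }
    tx.send(Msg::Done).unwrap();

    let total = worker.join().unwrap();
    assert_eq!(total, 55); // 1 + 2 + ... + 10
}
```

Sending a `Msg` moves ownership of it into the channel, which is exactly the "don't share, transfer" discipline the comment describes.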
> the necessary dependencies (websocket libraries, etc.)
I agree; though the websocket library Servo uses is pretty good (still, not battle-tested, so it's a very valid point).
> it might also not be something that allows to move as much forward in speed as you want for a prototype
Again, with the prototyping thing, sure, it's harder to prototype in Rust, but that's not the whole story -- you spend less time writing tests and debugging.
(Also, I'm not sure how much harder it is; I've never had trouble with doing it and before Rust I was mostly programming in Python/JS)
> Yes - I'm sure the Rust commmunity will disagree with me on the last point
Which sort of indicates that it's probably inaccurate? :)
A lot of the perceived problems with Rust (e.g. fighting the borrow checker) go away after a month or two of actively using the language.
And unfortunately, by experimenting with these things I really got the feeling that these problems don't expose the best side of Rust. Wrapping lots of types into stuff like Rc<RefCell<T>> wasn't the greatest experience, and the problem that this didn't work with trait objects back then did not increase that either (might have changed).
Message passing is a low-level primitive, and it's probably a decent solution for communication between threads. However, there is also a need for asynchronous communication inside a thread (the event-loop thing), for which messages are not really useful. Futures/Promises/Observables are great for both use cases (and imho even greater in a single-threaded environment). These are definitely harder to implement in Rust - and if you don't believe me, just check out how many good implementations are currently available in Rust vs. other languages.
What does the event-loop design in Servo look like? Wasn't that some integration with Spidermonkey?
I agree that in Rust you will spend less time on writing tests and debugging - but this applies also for a lot of other statically typed languages which bring in a better ecosystem for that task.
> this didn't work with trait objects back then did not
> increase that either (might have changed)
I haven't used mio/mioco (the current async I/O solution for Rust: https://github.com/carllerche/mio/, https://github.com/dpc/mioco), but I've heard good things about it. They seem to interact with safety pretty cleanly.
Rust in 2014 was a very different language. It looked the same at the surface, but a lot of the innards (including the fact that it shipped its own async I/O solution!) have changed since then.
> might have changed
It has. Though the explicitness of Rc<RefCell<T>> is still there, since Rust prefers ownership and mutability implications to be explicit. Not much extra typing, and you can always typedef it.
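That typedef is a one-liner. A minimal sketch (the alias name `Shared` is my own, not a std convention):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// One alias hides the Rc<RefCell<T>> nesting everywhere it's used.
type Shared<T> = Rc<RefCell<T>>;

// Small constructor so call sites don't repeat the wrapping either.
fn shared<T>(value: T) -> Shared<T> {
    Rc::new(RefCell::new(value))
}

fn main() {
    let log: Shared<Vec<String>> = shared(Vec::new());

    // Cloning the Rc creates a second owner of the same cell;
    // RefCell moves the borrow check to runtime.
    let log2 = Rc::clone(&log);
    log2.borrow_mut().push("hello".to_string());

    assert_eq!(log.borrow().len(), 1);
    assert_eq!(log.borrow()[0], "hello");
}
```

The explicitness is still there at the definition, but call sites just see `Shared<T>` plus `borrow()`/`borrow_mut()`.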
> a need asynchronous communication inside a thread
Agreed. There are some coroutine libraries in Rust which are pretty nice. There is active effort towards having coroutines inside the language itself (see https://github.com/erickt/stateful for a POC plugin, which should get RFC'd at some point when it's complete).
See also: https://dwrensha.github.io/capnproto-rust/2015/05/25/asynchr... (uses mio)
> What does the event-loop design in Servo look like? Wasn't that some integration with Spidermonkey?
Like I mentioned, I meant the event loop model of communication. We have a lot of message passing between threads, many of which are event loops.
> but this applies also for a lot of other statically typed languages which bring in a better ecosystem for that task.
Sure, but these languages don't have quick prototyping either. Except perhaps Go.
Rust doesn't really end up with an additional burden on prototyping over any other similar statically typed language; the borrow checker is something you rarely tussle with once you get used to it.
Very good question. There is definitely a large intersection between OpenHAB and Project Link. Both projects share one objective: letting DIY users rig together and script their devices. However, beyond that point, our objectives with Project Link diverge. We aim to explore ways to put users in control of their data, including anonymity, authentication, storage, cloud access, web access, and certainly many others that we have yet to discover. It is our impression that OpenHAB's current architecture doesn't match our experimentation objectives.
A second difference is, of course, the architecture. We use Rust, betting that it will let us execute Project Link on devices with little memory, with no supervision, and with very long uptime. In particular, in the case of memory usage, while there are no decisions on this, there are projects that could help us run Rust and Project Link on almost bare metal, should we decide to head in this direction. We felt that OpenHAB's architecture was not adapted to such explorations.
> I think the current adaption of DIY or build-something-through existing frameworks shows that most people don't really care about such solutions, they want something that works out of the box - even if all parts come from a single manufacturer.
We are also exploring out-of-the-box, with other projects. I'm not involved, though, so there isn't much I could say on that topic.
> One way forward could be pushing the creation of devices (gateways, sensors, actors) that are based solely on standardized and accessible protocols.
Yes, we would very much like to do that. But before we can do that, we first need to be actors on the field, with well-used projects and a large community.
> Yes - I'm sure the Rust commmunity will disagree with me on the last point ;)
Well, it works nicely so far :)
Out of curiosity, do you have any more details on what you think is impossible to implement correctly?
I've been doing a lot with CoAP and I'd agree that the RFC isn't as clear as it could be about some behavior, but I haven't come across anything that I'd say is impossible to implement or implement correctly.
I don't recall the exact thing, but there was also something I didn't like about the message IDs, which you somehow need to keep ordered, tracking which ID might be in use at any point in time by which peer. A correct implementation must not send anything while no ID is available, but tracking all of that requires a lot of memory (much more than TCP buffers), and the open-source implementations I looked at all elided that detail.
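The bookkeeping being described looks roughly like this. A very simplified sketch (per RFC 7252, CoAP message IDs are 16-bit and must not be reused toward a peer while an exchange is still alive): a real stack would also need EXCHANGE_LIFETIME timers to release IDs, which is exactly the per-peer memory cost the comment is pointing at; this only shows the allocate/refuse/release part.

```rust
use std::collections::HashSet;

// One of these per peer: which message IDs are still tied up
// in open exchanges.
struct MidAllocator {
    next: u16,
    in_flight: HashSet<u16>,
}

impl MidAllocator {
    fn new() -> Self {
        MidAllocator { next: 0, in_flight: HashSet::new() }
    }

    // Returns None when all 65536 IDs are in flight: a correct stack
    // must then refuse to send rather than silently reuse an ID.
    fn allocate(&mut self) -> Option<u16> {
        if self.in_flight.len() == (u16::MAX as usize) + 1 {
            return None;
        }
        // Skip IDs still owned by open exchanges.
        while self.in_flight.contains(&self.next) {
            self.next = self.next.wrapping_add(1);
        }
        let mid = self.next;
        self.in_flight.insert(mid);
        self.next = self.next.wrapping_add(1);
        Some(mid)
    }

    // Called when the exchange completes (in a real stack: after
    // EXCHANGE_LIFETIME expires, not immediately on ACK).
    fn release(&mut self, mid: u16) {
        self.in_flight.remove(&mid);
    }
}

fn main() {
    let mut mids = MidAllocator::new();
    let a = mids.allocate().unwrap();
    let b = mids.allocate().unwrap();
    assert_ne!(a, b); // no reuse while both exchanges are open
    mids.release(a);
}
```

Multiply this structure (plus timers) by every peer a constrained gateway talks to, and the memory complaint above becomes concrete.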