
Mozilla's Connected Devices Innovation Process: Four Projects Move Forward - cpeterso
https://blog.mozilla.org/futurereleases/2016/03/01/update-on-connected-devices/
======
nshm
I participated in GNOME and Linux kernel development in 2002-2008; that was an
amazing time. In those days, open source systems were successfully competing
with commercial software: Firefox was on the leading edge, the Linux desktop
was really promising, and the Linux kernel was used everywhere. Drivers for
all sorts of devices were created by hackers. I left once I saw that
corporations were taking over. Red Hat and Novell decisions started to set the
direction, and I felt I could not contribute anything.

The Mozilla Foundation's attitude has changed since then; it is simply a
corporation now, not an open source community, and I really doubt it will come
up with something significant.

Just a few days ago I spent half the day cleaning bloatware from an Acer
desktop, a Sony tablet, and a Samsung phone. It is really crazy how every
small company wants to catch you with a push-button daemon that, in parallel,
stores all your information to sell later. We desperately need an alternative
- at the very least an open source phone where you control all the components.
However, I suspect that things like voice input or good navigation would be
very hard for the open source community to build.

~~~
notalaser
> I left once I saw that corporations were taking over. Red Hat and Novell
> decisions started to set the direction, and I felt I could not contribute
> anything.

I sought refuge in BSD land as a consequence of this. I got sick of the whole
drama around Gnome, systemd and everything else precluding me from properly
using my computer. Everything is perpetually half-broken, bug reports and user
demands are ignored if they don't fit their world domination agenda and so on.

I still develop for Linux because they pay me to do it at $work, but it's
increasingly unpleasant. Things are still fairly sane in the kernel (barring
the occasional rushed device drivers or the whole ARM clusterfuck, but there
are limits to how much that can be helped). Userland, on the other hand, is
full of 0pointer-branded narcissism and nearly unworkable for anything that
has to run for more than three months without constant attention. Great for
devops, brain aneurysm-inducing for anything else.

~~~
pdkl95
> 0pointer-branded narcissism

It _is_ possible to avoid that crap. Older - and much saner - software still
works. Just ignore the "desktops" and run the specific tools you need in a
window manager of your choice.

# e16 - from my cold, dead hands

~~~
notalaser
I think the only desktop I didn't stay away from was KDE, back in its 2.x and
3.x days. Then things like Nepomuk happened and sent me back to WindowMaker,
which will be pried from my equally cold, dead hands.

But it's getting pretty hard to avoid the octopuses. I managed to avoid e.g.
systemd (not because of unix-philosophy something-something, but because they
keep doing shit like [1]) by staying with Gentoo for a while. But the USE-flag
gymnastics required to keep half-baked things out of my system became too
time-consuming to be worth it.

Some of the problems do remain (e.g. my desktop looks like it's 1996 and
everyone has their own favourite toolkit again because GTK3) but at least I'm
not debugging non-booting systems and strange things happening with USB sticks
on a daily basis.

1: [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815586](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=815586)

~~~
pdkl95
> WindowMaker

That's my 2nd choice. It's simple, lightning-fast, with almost no memory
footprint.

> bug=815586

That's a perfect example of the problem with the 0pointer monoculture. It
works great until you do something slightly more complicated. Of course, the
suggested fix is to update to a newer version of systemd. I guess you're out
of luck if that newer version has compatibility issues.

> their own favourite toolkit again

I guess I never cared much about looks. Look and feel are important, but
"working" and "not crashing" are far more important.

One of the problems we're seeing is that the concept of the "distro" is
largely dead. The damage done by systemd/etc wasn't just the bugs like your
[1]. The forced monoculture killed the _idea_ of different distros.

~~~
JdeBP
> _Of course, the suggested fix is to update to a newer version of systemd._

That's actually a boilerplate Debian bug closure message, mail-merged with the
bug number, the maintainer's contact information, and the package's changelog.
One can find it at the bottom of many Debian bug logs. The suggestion really
comes from Debian.

Moreover, the bug was in the "unstable" version of Debian, i.e. a version of
the package (229-1) that had not even progressed to the "testing" version of
Debian (which is currently at systemd version 228). Debian's policy on bugs in
"unstable" can be read on Debian's WWW site at
[https://wiki.debian.org/DebianUnstable](https://wiki.debian.org/DebianUnstable)
and
[https://www.debian.org/releases/sid/](https://www.debian.org/releases/sid/) .

The Debian message is, as mentioned, simply robotic Debian bug-tracking-system
boilerplate. The underlying systemd bug reports have stuff written by actual
human beings:

* [https://github.com/systemd/systemd/issues/2572](https://github.com/systemd/systemd/issues/2572)

* [https://github.com/systemd/systemd/issues/1866](https://github.com/systemd/systemd/issues/1866)

------
coldtea
Facepalm. Another doomed attempt from Mozilla, when they should be focused on
making their core competitive advantage, their browser, much better and
faster...

Edit: (One could argue that those are different teams. But even so, these
failed attempts - and there have been enough already - dilute their brand and
send the wrong message. Besides, why spend money on such a project when you
can give it to the browser team instead? Now, Rust et al. I understand,
because it might provide the future of the browser engine with Servo, being
secure, parallel, etc. Asm.js and other browser-related initiatives are fine
too.)

~~~
bobajeff
Competitive advantage? I think you mean core competency.

Being a browser maker isn't really an advantage unless you leverage that in
some other business like a website, web development solutions or (with enough
market share) locking people into your ecosystem.

The real competitive advantage goes to Google. They control the most popular
search engine and mobile platform. They can make or break just about any app
or website. Their browser just fortifies their dominance.

~~~
coldtea
> _Competitive advantage? I think you mean core competency._

Yeah, the latter.

That said, they should work on their competitive advantage too -- the fact
that they are open source and pro-web which matters to a lot of users.

~~~
bobajeff
Mozilla is hardly unique there. Webkit and Chromium are also open source and
Google is pro-Web as well.

The only real advantage Mozilla had was technical: one of the few popular and
mature browser engines, which only their programmers knew well enough to work
on. Since WebKit, though, that advantage has disappeared.

You could say their image as a non-profit out to do good is an advantage.
Maybe that is something they should work on.

------
IshKebab
> A smart home powered by Mozilla would be open and accessible to everyone -
> financially, technically, and creatively. No one else can do all of these.

I think it is a mistake for Mozilla to try this. Firstly, someone else _can_
already do that - IoTivity. AllJoyn has recently abandoned its own efforts and
joined IoTivity too, so it is really the only open IoT protocol in town. That
is a good thing - adding another competitor will just make people wait
(again) until it is clear who is going to win.

Secondly, consumers don't really care that much about openness, and Google and
Apple have a _huge_ advantage in this space because they can build support
into their mobile OSes. It's not just a matter of bundling a default app
either - they can do things like access your wifi passwords for provisioning.
I don't think Apple will do that well with HomeKit as long as it is iOS-only,
but Weave is cross-platform, sort-of open, fairly complete, and seems pretty
well designed.
Finally...

> A few small players are beginning to enter this gap, though their proposals
> are still not complete enough to solve the problems we've identified

Yes that's putting it mildly. AllJoyn never supported out-of-the-home control,
and IoTivity is very much a work in progress. But I think Mozilla are _hugely_
underestimating the amount of work involved. And given how much work is
involved, why not join forces with IoTivity?

~~~
detaro
I just looked at the IoTivity website, and it looks awfully
"enterprise-y"/"design-by-committee". Are you involved with it and can report
otherwise? Any "real-world" examples?

------
Perceptes
It's hard not to feel angry about this. Not only do I not personally want the
things described in this post, I want them specifically _not_ to make things
like this. Until we have more fundamentally secure systems and laws that
actually punish negligent companies for gross violations of security and
privacy, IoT is not a domain to promote. There are so many other important
issues with the web and the Internet right now that Mozilla could help with.
This is such a waste of time and resources. Is there really some target
audience that wants this stuff?

~~~
Yoric
For the moment, it looks like IoT is happening, whether you want it or not. I
live far, far away from Silicon Valley, and last week, when visiting the
supermarket, I realized that they are already selling IoT-enabled devices.

So the question is now: how do you want your IoT? A number of companies have
already answered: silos, low quality, little-to-no hackability, little-to-no
privacy. Mozilla wants the IoT to go in the right direction: open standards,
open source, high quality, and the user in control.

Caveat: I'm part of Mozilla's Project Link. And I'm having fun coding it :)

------
Animats
Huh? They just dropped Firefox OS and Persona, their authentication system,
and now they want to get into the "Internet of Things"? It would have made
more sense if they'd used their authentication technology to allow devices to
link up in a mutually mistrustful way.

~~~
kibwen
This is mistaken. Firefox OS wasn't dropped; it was pivoted from phones to
connected devices: [https://blog.mozilla.org/blog/2015/12/09/firefox-os-pivot-to-connected-devices/](https://blog.mozilla.org/blog/2015/12/09/firefox-os-pivot-to-connected-devices/)

I've also seen rumors that Mozilla is working on a successor to Persona, but
absolutely no substantive evidence as of yet.

~~~
Techonomicon
No. The team was repurposed, not really Firefox OS itself. I've got some
insider knowledge on this one.

I mean, come on: Firefox OS on embedded devices, when the thing could barely
run on cheap smartphones.

~~~
techdragon
Having used one of their special SPARC dev phones, I can say the OS is
actually very, very good. Like every device it had limits, but it just cruised
along like a champion for 90% of tasks and had freakishly good call quality.

------
pette
Project Link: What's that? And what problem does it solve? No one will
understand it, no one will use it.

Project Sensor Web: Yeah, let's add another solution to a problem that has
been solved ten times over.

Project Smart Home: Also crowdsourced? After Mozilla's previous handling of
hardware (like FFOS dev phones that were deprecated in record speed) I can
only say: don't.

Project Vasomething: ANOTHER IoT framework that is used and needed by no one?

We are about 2 years past peak Mozilla.

------
kibwen
I'm excited to see Mozilla's experiments in the IoT space, but I'm a bit
unclear on the exact nature of the projects being undertaken here. For
example, is the software here intended to act as a proxy between connected
devices and your access point, possibly via flashing your router?

I'm also curious whether Mozilla is building automatic updates into all of
this software from the ground up, seeing as that appears to be the fundamental
weakness of everything related to the IoT.

~~~
mparlane
Project Link appears to be designed to run on a Raspberry Pi, and uses Rust.

~~~
Yoric
Note that we are still at a very early stage of exploration, so anything I
write might change a dozen times before there is a first official prototype.

The current prototypes of Project Link are developed in Rust and tested on a
RPi. However, we have not decided on a specific hardware platform, and there
is even a chance that this will remain cross-platform – I'm currently running
prototypes on my laptop.

In the current state of things, yes, the software is designed to act as a
proxy between connected devices and your access point.

I can't speak for other projects, but we definitely intend to have an update
mechanism. This is very much not finalized yet, so any suggestions you have
(currently, as a detailed blog post – once we are better organized, as an RFC)
would be useful.

------
kefka
Why are they working on mostly useless ventures?

How about some really captivating features like:

    Tor integration/awareness
    I2P integration/awareness
    ZeroNet functionality
    IPFS functionality

Start working on hard features that will make the internet better, and also
more distributed. It's one thing to have throw-away garbage that nobody will
use. It's another thing entirely to support new protocols that make Firefox a
go-to tool.

~~~
fabrice_d
There is one MoCo employee helping to get changes from the Tor Browser
upstreamed. It's not happening overnight, but progress is being made.

Things like IPFS, while very promising, are not mature enough yet to justify
the effort of adding them to the core of Gecko.

Also, I dispute that supporting IPFS would "make Firefox a go-to tool" for
internet users at large, unfortunately.

~~~
kefka
I sure hope so, regarding Tor and Mozilla. Aside from my scripts that make
.onion resolution work on a Linux machine, it would be nice to have a simple
and clean interface here.

I certainly understand the hesitancy with IPFS. It is still too new. And
they're still ironing out features and implementations. However, what gets me
is that it's live right now, and I'm using it for quite a few things.

Right now, I'm getting some VM stuff up and running. My idea is that jor1k
(Linux in JS) can be run from IPFS. I'm still playing around with it, but it
seems very stable and fast.

Also, regarding your third point: I've attributed many network effects to the
85/15 principle. The 15% is your tech userbase; they're the ones who drag
everyone else onto a platform or technology. The rest (85%) follow because the
early adopters knew it was the place to go. Gmail was like this, as was
Facebook, as was Firefox, as was Napster, etc. Adding in something like IPFS
as base support adds "cloud storage" where nothing like this really existed
before - well, aside from 'cloud' meaning other people's servers.

Maybe I'm completely wrong. Time will certainly tell, but one thing I know, is
that I am indeed impressed with what I'm seeing already.

------
Matthias247
I really wonder what the difference is supposed to be between the Link and
Smart Home projects and something like OpenHAB, besides being newly started by
Mozilla. I think the current adoption of DIY or build-something-with-existing-
frameworks solutions shows that most people don't really care about them; they
want something that works out of the box - even if all the parts come from a
single manufacturer.

From an engineering perspective, implementing lots of proprietary protocols in
a single gateway device is a lot of work and will never work 100% fine. Most
people who have worked on such solutions will confirm that, and I can too
(from a somewhat different domain). You can sink lots of money into these
projects and they still might not work at all for some customers - just like
universal remotes, only more complex.

One way forward could be pushing the creation of devices (gateways, sensors,
actuators) that are based solely on standardized and accessible protocols. I'm
not the biggest fan of CoAP (because I think it's hardly possible to implement
correctly), but I would still prefer it to reimplementing the Z-Wave or Hue
protocols. HTTP/2 could also be a good fit. But even with the protocol layer
standardized, the possibilities for API design are endless. Standardizing IoT
APIs throughout the whole stack in a backwards-compatible fashion could really
be something worthwhile, but the effort will be big, and it's questionable
whether other parties are actually interested in standardization.

And just a side note: I don't think Rust is the best choice for building a
prototype of a mostly event-based system. Event-loop semantics are hard to
implement because of ownership/borrowing issues, the necessary dependencies
(websocket libraries, etc.) are all in their infancy, and it might also not
let you move as fast as you want for a prototype (where, as a tradeoff, you
can live with lower performance). Yes - I'm sure the Rust community will
disagree with me on the last point ;)

~~~
Manishearth
I would disagree with you more on the first point; Rust is actually perfect
for event-loop-based semantics (we use this in Servo; it's very clean).
Ownership fits perfectly with message passing.

You should _avoid_ sharing state when you're using a message-passing-based
system. The cross-thread communication is handled by the messaging; sharing
the messages themselves just muddles things further. So the borrow checker
disallowing that isn't a bad thing.
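For readers unfamiliar with the pattern being discussed, it can be sketched with the standard library alone. The `Event` enum and the running total below are illustrative inventions, not anything from Servo: each message is _owned_, transfers through the channel, and the receiving loop dispatches on it, so no mutable state is ever shared between threads.

```rust
use std::sync::mpsc;
use std::thread;

// Messages own their data; ownership transfers through the channel,
// so no state is shared between the two threads.
enum Event {
    Ping(u32),
    Quit,
}

fn run_event_loop() -> u32 {
    let (tx, rx) = mpsc::channel();

    // The "event loop": a thread that blocks on the channel and
    // dispatches on each message it receives.
    let worker = thread::spawn(move || {
        let mut total = 0;
        for event in rx {
            match event {
                Event::Ping(n) => total += n,
                Event::Quit => break,
            }
        }
        total
    });

    for i in 1..=3 {
        tx.send(Event::Ping(i)).unwrap();
    }
    tx.send(Event::Quit).unwrap();

    worker.join().unwrap()
}

fn main() {
    println!("total = {}", run_event_loop()); // total = 6
}
```

Because `Event` values are moved into the channel, the borrow checker never has to reason about two threads touching the same data, which is exactly why this style feels clean in Rust.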

> the necessary dependencies (websocket libraries, etc.)

I agree; though the websocket library Servo uses is pretty good (still, not
battle-tested, so it's a very valid point).

> it might also not be something that allows to move as much forward in speed
> as you want for a prototype

Again, with the prototyping thing, sure, it's harder to prototype in Rust, but
that's not the whole story -- you spend less time writing tests and debugging.

(Also, I'm not sure how much harder it is; I've never had trouble with doing
it and before Rust I was mostly programming in Python/JS)

> Yes - I'm sure the Rust commmunity will disagree with me on the last point

Which sort of indicates that it's probably inaccurate? :)

A lot of the perceived problems with Rust (e.g. fighting the borrow checker)
go away after a month or two of actively using the language.

~~~
Matthias247
Just for reference: I build similar systems professionally (and have done so
with most mainstream programming languages on the market), and I was probably
the first to explore async IO and event loops in Rust
([https://github.com/Matthias247/revbio](https://github.com/Matthias247/revbio)
- but I'm not really proud of it) - so I think I'm at least halfway qualified
to talk about this.

And unfortunately, by experimenting with these things I really got the feeling
that these problems don't expose the best side of Rust. Wrapping lots of types
in stuff like Rc<RefCell<T>> wasn't the greatest experience, and the fact that
this didn't work with trait objects back then didn't improve it either (that
might have changed).

Message passing is a low-level primitive, and it's probably a decent solution
for communication _between_ threads. However, there is also a need for
asynchronous communication inside a thread (the event-loop thing), for which
channels are not really useful. Futures/promises/observables are great for
both use cases (and imho even better in a single-threaded environment). These
are definitely harder to implement in Rust - and if you don't believe me, just
check how many good implementations are currently available in Rust vs. other
languages.

What does the event-loop design in Servo look like? Wasn't that some
integration with SpiderMonkey?

I agree that in Rust you will spend less time writing tests and debugging -
but that also applies to a lot of other statically typed languages, which
bring a better ecosystem for the task.

~~~
kibwen

> _this didn't work with trait objects back then did not increase that either
> (might have changed)_

I believe the feature that you're referring to is called "DST coercions",
which has been available in stable Rust since 1.2. See an example here:
[https://play.rust-lang.org/?gist=ff8bed62730d54c329f2&version=stable](https://play.rust-lang.org/?gist=ff8bed62730d54c329f2&version=stable)
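For readers following along, the coercion in question looks roughly like this in current stable Rust; the `Shape`/`Square` types here are made up for illustration and are not from the linked playground:

```rust
use std::cell::RefCell;
use std::rc::Rc;

trait Shape {
    fn area(&self) -> f64;
}

struct Square {
    side: f64,
}

impl Shape for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

fn make_shape() -> Rc<RefCell<dyn Shape>> {
    let concrete: Rc<RefCell<Square>> = Rc::new(RefCell::new(Square { side: 3.0 }));
    // DST coercion: an Rc of a concrete type coerces to an Rc of a
    // trait object, even through the RefCell layer.
    concrete
}

fn main() {
    let shape = make_shape();
    println!("area = {}", shape.borrow().area()); // area = 9
}
```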

~~~
Matthias247
Yes, that was missing back then. But so was the ability to do checked
upcasts/downcasts on the result, i.e. to change between Rc<RefCell<Base>> and
Rc<RefCell<Derived>>. Is that possible now?
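There is still no built-in cast between Rc<RefCell<Base>> and Rc<RefCell<Derived>> for arbitrary traits, but a checked downcast is available when the trait object is `Any`. A minimal sketch (the `recover_string` helper is invented for illustration):

```rust
use std::any::Any;
use std::rc::Rc;

// Checked downcast: Rc<dyn Any>::downcast succeeds only if the erased
// value really has the requested concrete type.
fn recover_string(erased: Rc<dyn Any>) -> Option<Rc<String>> {
    erased.downcast::<String>().ok()
}

fn main() {
    // Upcast: plain DST coercion from Rc<String> to Rc<dyn Any>.
    let erased: Rc<dyn Any> = Rc::new("hello".to_string());
    let back = recover_string(erased).expect("was a String");
    println!("got back: {}", back); // got back: hello

    // A value of a different type fails the checked downcast.
    let not_a_string: Rc<dyn Any> = Rc::new(42u32);
    assert!(recover_string(not_a_string).is_none());
}
```

This only covers the `Any` case; checked conversions between two arbitrary trait objects are a separate, harder problem.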

------
hotcool
I recently uninstalled the Firefox browser on Windows because it loads at
least 10x slower than Chrome, even with no plugins. Mozilla should fix that
first. It would give users (like me) more confidence in their other projects,
like connected devices.

------
shmerl
What happened to Shumway by the way? A few bugs I filed for it are now marked
as referencing "Mozilla graveyard" product. It doesn't sound encouraging...

~~~
ygjb-dupe
[http://www.ghacks.net/2016/02/23/flash-replacement-shumway-is-as-good-as-dead/](http://www.ghacks.net/2016/02/23/flash-replacement-shumway-is-as-good-as-dead/)

~~~
shmerl
Thanks, that's what I've suspected.

