To me, the power here is in using the technology to foster local, human-scale interaction.
Intranets are totally underutilized. How many people do you know who can reliably transfer personal files over a local area network? Not nearly as many as those who know how to use Google or send an email... that's absurd to me, given how ancient an application file sharing is.
It's my opinion that the survival of the internet may very well rest on p2p webs like this.
It's crazy to me that there aren't, AFAIK, any AirDrop clones. That application alone would make this awesome, and it will probably be just one of the big use cases for this.
I'm also really pleased to see Mozilla producing this kind of innovation. I'm a Firefox user at home, and am starting to get really excited about Rust, but it has felt to me like the organization has been flailing for a while.
 https://snapdrop.net + https://github.com/RobinLinus/snapdrop
I'm so used to sharing via cloud services and chat apps that I really don't know who or what I'd use this with. Just curious what situations this would come in handy for you.
Uploading and downloading the file to a cloud service will take a while. Many will re-encode the video and lose quality. I then have to communicate a URL, rather than have my mum just look in a FlyWeb (or AirDrop or whatever) place on her machine.
I read flyweb as being part of Mozilla's IoT effort, to make it easy to convert your phone into a remote control. See for instance this video they made: https://www.youtube.com/watch?v=FJ5DEGvqDb4
Another thing to consider if you compare it with cloud solutions is that you don't have to find a second channel to send the shared link (which may require both people to e.g. log in to their webmail, spell their email addresses to one another, etc).
> Dropbox needs to maintain a connection to the Internet in order to determine when to sync. To take advantage of LAN sync, all computers need to be connected to a LAN and the Internet at the same time.
Stuff like syncthing/btsync is much more radical in that you don't have to assume that the internet as we know it even exists.
One interesting thing to figure out is the combination of local and global. When I have an IoT device and I'm away from home, or when someone is collaborating with me from a different location, the same app needs to fall back to using standard internet-based interfaces. Not sure if that disqualifies it from being a potential use case of this.
The other major players being Google and Apple, they'll almost certainly want to push their proprietary app platforms to increase their market power, instead of open technologies which are in the interest of and benefit the consumer.
And this doesn't need to contradict pushing their proprietary app platforms. An addition to Google Drive that allows people to send files over the office intranet without going through the internet (i.e. faster and more securely) would give Google a small competitive advantage over the Microsoft suite of tools.
Secondly, the FlyWeb server gives you access to a really flexible API for serving just about any content.
It feels like federated content, we just need to question whether it should be locked to the local network.
I think this technology is intriguing, with some real use cases (more peer-to-peer), but the API seems disorganized. I can't tell if it wants to be another web standard or something different.
A part of me wants to dislike this and consider it a distasteful competitor to pre-existing technologies that have learned to survive without "the web". Another part realizes that sandboxing these technologies protects and enables the average user with regard to awesome tech. This certainly won't replace torrents, WebRTC or other existing p2p technology. But I certainly think it's a cute way of opening up the field.
The thing that I worry about, though, is that you're starting a local web server from within your browser. I feel like there are some big security concerns that should be addressed in, like, the opening paragraph.
Also that red background for their website is terrible. :-P
It might be nice if it could be configured to also leverage a DHT like the BitTorrent DHT that WebTorrent uses, but that may be out of scope. (From what I can guess, restricting it to link-local may well be an intended restriction for now.)
Hosting HTTP from JS is nice (and potentially quite useful in p2p/mesh/offline-first worlds), but the real proposed benefit here would be adding a first-class mDNS UX to the web browser. Right now anybody can host a web server and advertise it via mDNS on a local network, but there's no cross-platform, cross-browser way for me to tell you how to pull up that mDNS-advertised web server. On some systems, some of the time, in some of the browsers, you can use mdns-name.local and access that server. But a lot of system/browser combos don't support that (or worse, don't support it reliably). (Not to mention the question of how to access mDNS-advertised names that don't meet URL standards, such as ones full of ':', spaces and Unicode characters.) If FlyWeb could help get something of a standardized web browser interface onto mDNS, that alone would be a whole lot of good for the intranet/link-local/IoT web.
Because yeah, that's where this is going to be used first and foremost.
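For a sense of how little is involved on the wire, here's a minimal sketch of the mDNS/DNS-SD query a browser (or anything else) would multicast to discover such services. The `_flyweb._tcp` service type is my assumption for illustration; FlyWeb's actual registered service type may differ.

```python
import struct

def mdns_ptr_query(service: str) -> bytes:
    """Build a one-question mDNS query packet (DNS wire format, RFC 6762)."""
    # 12-byte DNS header: id=0, flags=0, QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">6H", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, then null-terminated
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    return header + qname + struct.pack(">2H", 12, 1)

packet = mdns_ptr_query("_flyweb._tcp.local")
# Sending `packet` over UDP to 224.0.0.251:5353 and parsing the PTR answers
# is all the "discovery" there is; the hard part is the browser UX on top.
```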
Most people are not intelligent enough to understand how to secure their internet banking, and now we're going to bake in hosting TCP-connectable servers?
These security prompts better have some real clear language and require giving permissions every time.
Now I can see some good things for this too: start a FlyWeb server from your desktop and easily transfer some stuff from your phone, for instance (something that still sucks in 2016).
I just think that most of its use will be malicious.
I would just plug in a USB cable for that...
But this is definitely feeling like a reimplementation of more things that should be served by the OS into the browser, a concept that I am rather opposed to. I'm probably getting old, but I'd rather the browser stay a simple hyperdocument viewer than turn into a crude approximation of an OS.
The first seems useful. The second seems to need a more compelling use case. Also, opening the browser to incoming connections creates a new attack surface in a very complex program.
What is new (and what appears to make Mozilla's implementation incompatible) is that they are adding a layer of UUIDs to ensure each service gets a separate origin. This ensures that, if you switch between LANs, an open tab for one device can't interact with a device on the other LAN with the same name or IP address. Makes a lot of sense for the security of IoT devices.
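The per-service-origin idea can be sketched in a few lines. This is a toy model of the concept as I understand it, not Mozilla's implementation: the key point is that the origin is keyed by the service *instance* (network plus advertised name), so the same name on a different LAN can never map to the same origin.

```python
import uuid

class ServiceOriginMap:
    """Toy sketch: one stable, unguessable origin per discovered service
    instance, keyed by (network, advertised name)."""

    def __init__(self):
        self._origins = {}

    def origin_for(self, network_id: str, service_name: str) -> str:
        key = (network_id, service_name)
        if key not in self._origins:
            # Fresh UUID hostname => fresh origin => no shared cookies/storage
            self._origins[key] = "http://%s.local" % uuid.uuid4()
        return self._origins[key]

m = ServiceOriginMap()
a = m.origin_for("lan-A", "printer.local")
b = m.origin_for("lan-B", "printer.local")
# a != b: same advertised name, different LAN, so an open tab for one
# can't interact with (or read state belonging to) the other.
```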
Longer term, I think we'll need to use a separate URI scheme (e.g. fly://). It turns out upon further investigation that the (http://) scheme relies heavily on TCP semantics. For example, port numbers are an implementation detail in flyweb, not something explicitly exposed via URI, but http demands that port numbers be interpreted (and without a port number, an origin assumes port 80).
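The port-80 default is easy to demonstrate with any standards-following URL parser; here's the origin computation that http:// semantics force on you, using Python's stdlib as the illustration:

```python
from urllib.parse import urlsplit

def http_origin(url: str) -> tuple:
    """Compute the (scheme, host, port) origin tuple for an http:// URL."""
    parts = urlsplit(url)
    # http:// semantics: an absent port is not "no port" -- it means port 80.
    # This is exactly the TCP detail a fly:// scheme could avoid exposing.
    port = parts.port if parts.port is not None else 80
    return (parts.scheme, parts.hostname, port)

http_origin("http://device.local/")       # -> ('http', 'device.local', 80)
http_origin("http://device.local:8080/")  # -> ('http', 'device.local', 8080)
```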
We also want to restrict the underlying wire protocol to be a subset of HTTP, eliminating a number of the purely internet-related functionality, such as redirects and proxies.
We also need to specify different security semantics for interpretation of TLS certificates in the FlyWeb context. Devices are not websites, and they're not identified by internet-DNS names, and the current certificate model is oriented to work with that design.
We're slowly working through resolving all of these issues. The idea is simple, but the execution requires care and attention to detail.
Edits: speling correctoins
Most of Mozilla's dev resources right now are being targeted towards core development: the e10s process isolation work, the quantum graphics and rendering speedup work, general stability and crash rate work, asm.js and webasm work, etc. etc. The vast, vast majority of resources are being allocated to those core concerns, and that's a direction I personally agree with.
FlyWeb is a very low-overhead experimental project with an eye to the long term. There are two people on the team, me and Justin D'Arcangelo.
I've said this before on HN, but there are no guarantees. I personally want this project to succeed, but that success depends on a lot of factors. Implementation work, security work, adoption, interest, market viability and other factors. We're working hard to make a viable path to success for the project.
Could this be a major change in the independence/encapsulation abstraction of web pages, or in other words a break in the user protections that come with the contract of using a web browser instead of applications?
So, please keep security and privacy as first order design considerations with this :)
Along that track, it would be nice to see native DHT support in the browser, for global server-less discovery.
Unfortunately, just using WebRTC is not a great fit for a DHT, because of connection costs. Also it makes more sense to have DHT persist between app sessions.
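For context on what "native DHT support" would build on: Kademlia-style DHTs (including the BitTorrent DHT) organize lookups around an XOR distance metric. A minimal sketch of that metric, purely illustrative and nothing like a full routing implementation:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia's distance metric: XOR of the two IDs, read as an integer.
    A longer shared bit-prefix means a smaller distance."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def closest(nodes, key, k=3):
    """The k node IDs closest to `key` -- the core of every DHT lookup step."""
    return sorted(nodes, key=lambda n: xor_distance(n, key))[:k]

# A lookup repeatedly asks the closest known nodes for nodes even closer
# to the key, converging in O(log n) steps -- which is why persisting the
# routing table between sessions (rather than rebuilding over WebRTC each
# time) matters so much for connection costs.
```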
[edit: ok, so it is cool, but I'm not sure it's secure, and I'm not crazy about web pages from other domains being able to set up local discovery on my network. Seems like a massive security problem. UUIDs sound like obfuscation, not security. ]
[edit: ok, well at least they've started thinking about it: https://wiki.mozilla.org/FlyWeb#Security
Would like to see this fleshed out some more. ]
Meshnetwork torrent trackers with DHT anyone?
I lean towards using Bluetooth as a discovery mechanism rather than WiFi. Google's "Physical Web" I think does something along these lines, though I am not sure whether or not they are thinking about web servers on these local devices. I think that is a key part of the idea.
Of course here devices that host their own server would not be simple beacons, but they could be found the same way.
I'm pretty cynical and jaded, though, so I went looking for and finally found the "Security and privacy considerations" section of the FlyWeb (draft) Specification. I'm quite disappointed by what I see there -- or, rather, what I don't see.
If Mozilla is to pursue this seriously then, in my opinion, they need to follow a process similar to Internet Drafts. Development of the spec should be opened up to the public, other stakeholders (browser vendors and, of course, users) should be involved, and so on.
There was a time when we could remain somewhat confident that a device behind the firewall would not be accessible from the "Internet at large".
That was before UPnP and rebinding attacks. As I said, I can quickly think of several use cases that would be perfect for something like this... but history has clearly proven that the privacy and security implications MUST be considered at every step of the way. This beast should be tamed before it has a chance to get away.
Myself, I'm going to go ahead and add a new lockPref entry for "dom.flyweb.enabled" to my mozilla.cfg in anticipation of the day that this comes to my browser. (Of course, with Mozilla's track record, they'll probably push FlyWeb heavily for the next year or so, then just abruptly announce one day that they're killing it off.)
Rest assured, there has been a lot more thought put towards security and privacy than the draft spec document currently shows. That's not to say that we have a complete security story at this very moment (we are still working on it), but there are many more considerations than what is currently in the document. I hope to try and provide an update in this regard in the coming weeks.
In the desktop FF Nightly, the FlyWeb menu must be added from the customization menu (Menu, Preferences, drag the FlyWeb icon to the toolbar). I think Mozilla forgot to mention this on their page.
Another important bit of information is how to install Nightly alongside the current FF: http://superuser.com/questions/679797/how-to-run-firefox-nig...
My take on this: interesting, especially the server-side part. The server inside the browser, on the other hand, could be at best a way to drain batteries and at worst a security risk because of the increased attack surface. I wonder how locality applies to phones on a mobile operator's network vs on a smaller WiFi network.
Anyway, if we have to rely on browsers to implement the discovery mechanism, I'm afraid that it won't fly (pun intended). I'd be very surprised if Apple, Google and even MS included this in their browsers. I got a feeling that they might want to push their own solutions to sell their own hardware. I hope to be surprised.
Maybe there will be apps autodiscovering those services or other servers acting as bridges to a "normal" DNS based discovery service.
Btw: Mozilla should test their pages a little harder. I had to remove the Roboto font from the CSS to be able to read it. The font was way too thin in all my desktop browsers and FF mobile. Opera mobile was OK, it probably defaulted to Arial.
I understand why they hide the real IP addresses behind UUIDs, but I think there should be an option to also convert it to the real IP/host address. Because often you want to share the address of the embedded device with your coworker, use the address in another tool, and so on.
However, I'm not sold on the idea and current state of the in-browser web server API. It just leaves a lot of questions open. E.g. pages are often reloaded; how will this impact the experience? Or: HTTP request and response bodies are potentially unbounded streams, but the simplified API does not expose this. What attack vectors are enabled through that, and how will it limit use cases?
Apple have hidden it behind flags in Preferences -> Advanced in recent versions, but when enabled, you get a "Bonjour" item in the favourites menu, which will show the internal settings websites of compatible printers etc. that are on the LAN.
Sorry, I didn't mean to be snarky. I worked on a project that has some surface similarities to this (local-only server), but last year when Chrome (and Firefox?) restricted a bunch of features to HTTPS, that pretty much killed the project.
That's not to say there aren't uses without those features. It's just interesting to see Mozilla make this feature that serves pages that can't use the full range of features:
* You want to make a media server but you can't go full screen
* You want to use phones as wiimotes but you can't get device orientation
* You want to speak into the webpage but you can't access the mic
* You want to scan barcodes into the webpage but you can't access the camera
I don't quite understand the reasoning behind the random-UUID-as-hostname design, however. Yes, it protects against a service stealing another service's cookies.
But wouldn't this also result in the same service having a different hostname and origin each time it is discovered? Wouldn't this render cookies, storage and HTTPS(!) unusable for FlyWeb services?
Most easy-to-spin-up servers fall into these categories:
* Static files only
* Tough to fine tune configuration
* Synchronous connections unless you want to ramp the complexity up
* Require a separately installed interpreter
Web workers within the browser solve this.
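For comparison, even the friendliest native path has real boilerplate. A minimal static-response server with Python's stdlib (my own illustrative sketch of the "separately installed interpreter" category, not anything FlyWeb-specific) looks like this:

```python
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the LAN"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console quiet
        pass

def make_server(port: int = 8000) -> ThreadingHTTPServer:
    # Bind to all interfaces so other machines on the LAN can reach it
    return ThreadingHTTPServer(("0.0.0.0", port), Hello)

# make_server().serve_forever()  # then browse to http://<your-lan-ip>:8000/
```

And that still gives you none of the discovery; the other machine has to know your IP and port, which is exactly the gap mDNS (and FlyWeb's UX for it) is meant to close.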
You probably don't want to use this to run a Gmail clone, but a simple media server, maybe.
I see it as great for LANs, and potentially a simpler replacement for SharePoint in SMBs.
Or does no one remember that browsers have a built-in protocol for video conferences (WebRTC) that can be used to exploit systems behind a firewall?
The stuff they've done with using phone apps to play group guessing games is a lot of fun.
That's not all it opens up. "Enabling web pages to host servers"--who thought this was a good idea?
To top it off, later in the page, they tell users how to upgrade Node by running `curl ... | sudo bash -`. Good grief, the anti-patterns!
This FlyWeb site has me seeing red.
Opera Unite was an isolated platform with prebuilt apps by Opera and custom apps you could download from an app store. FlyWeb is an API exposed to any web page.
Opera Unite gave you a public URL that was an Opera server reverse proxying to your local machine so you could share files, chat, etc. with your friends online. FlyWeb just publishes multicast DNS (Bonjour/Avahi/Zeroconf/etc.) service discovery records to your local network.