I agree, but I'm two months behind on implementing important features. While I can take a week here and there to optimize, my time is best spent delivering business value.
Unless you allow performance to genuinely suck (and I don't), slimming down web apps won't help my business make money. You get whatever the currently fashionable JS frameworks and build tools give me.
As long as the vast majority of your customers access your web apps over non-cellular data connections, this makes sense. But as more folks become cell-phone-first, ignoring the size of the payload you expect users to accept will increasingly make for a sub-optimal experience.
Also, page size is not the biggest performance killer. Too many HTTP requests (for example, requests out to CDNs and advertising networks) are a more likely source of problems.
Amazing that it's 194 KB and you think this is very small. What world are we living in, that web development has made us so sloppy and inefficient that megabytes of script downloads are considered slim?
Maybe the carriers are sloppy and inefficient. See, whatever I do, when I access anything web-based on my mobile phone, it is slow. And I live in Paris.
It's funny how the 3G/4G stuff should supposedly be fast but in reality is not. I believe that is mostly because in large areas the access points are overcrowded.
In other words, the technology used to access the internet is saturated, and yet developers should be blamed for websites heavier than 200 KB!
An example for the "why" question -- the "web desktop with apps" style seems to work for NAS control panels (namely QNAP and Synology). As far as I know, they are using Sencha ExtJS for the UI -- which, although pretty complete and mature, doesn't get a lot of developer love.
I couldn't say, really; I can't claim to be a big fan, but it's relevant that multiple big companies are using this kind of interface. I would hope they have some reasons beyond UI flashiness and copying their competitors.
Whatever the reasons, when a customer says "build me one of those" it's useful to have a good library to call upon.
While it looks like a nice exercise in front-end JS, and looks cool, does it solve any problems? Do users actually need more abstraction layers on top of Google Drive / Dropbox, etc.? Either I am getting too sleepy, or they didn't provide any actual use cases on the homepage.
It provides some sort of API ("simple, modularized and flexible JavaScript APIs so you can easily make changes, extend functionality and create applications"), but is it just a new look for single-page applications?
I am not trying to be cynical or anything, just curious :)
1/ Make a modular backoffice for CMS based website. Make an app to manage users, an app to manage content, an app to upload and manage gallery of media etc.
2/ Make a distributed OS. All services can scale and use backend power beyond what a single node can provide. If you need to burst in the cloud due to a heavy process for instance.
3/ Could use it to provide a separation between the OS and the GUI, providing a UX I may like even if the backend server has to be Windows or Centos or ... Or remotely operating a part of my machine at home?
4/ What about reducing the cost of HW it needs to run? Chromebook style.
5/ What about being less tied, in my web app, to the Google Drive API and friends by adding an abstract VFS layer in between, like an OS does? Same for texting, emailing? Sure, it only uses part of this and doesn't need the client, but the architecture is well split between the two on this project.
6/ It is multi-user as its core, so it can be shared across an organization and not only on one computer. You could even create one account per project/team and share a set of common tools / organization / folders that way across online services.
With the rise of web apps, it seems interesting to investigate the possibility of a web OS to administer those apps and make them collaborate more nicely. Or at least the possibility of separating the UX and the execution onto two different nodes for general-purpose systems as well.
I agree wholeheartedly. My most loathed feedback is "What's the use case?".
I have done a lot of odd projects (a JavaScript desktop among them) and have hit this same "Why?", "What's it good for?", etc.
I find it difficult to fathom how people can look at things and see nothing of value. Sure, I can think of how something could be done differently, possibly even better. But there's always some value: practical, aesthetic, inspirational, insightful or even inciteful.
As indicated by the title of the link, I think it allows you to build a "desktop" metaphor in the browser. ExtJS used to have some similar demos but I think their license is pretty restrictive. One use case for this is something that I happen to use regularly - the web interface on the Synology NAS.
Build an ERP application user interface on top of this (or another complex business app)? Different components would be OS.js apps. Users could lay out the screen as they wish, and changes would be persisted to the server.
Too bad browser JS is single-threaded. If you switch windows the background application stops. Kind of limits its ability as a desktop replacement no matter how much effort they put into it.
There's no reason you can't have multiple "apps" running at the same time; you just don't have preemption ( https://en.wikipedia.org/wiki/Preemption_(computing) ), so it's possible for one application to tie up the CPU indefinitely. That's definitely not what's happening in this case, since the UI is still responsive.
Alternatively, to get true isolation and preemptive multitasking, you could have a sort of "window server" that sends UI events to Web Workers and receives drawing commands of some kind (a React virtual DOM?).
You don't need web workers, you just need to host each app in a sandboxed iframe. Then it can render however it wants with normal DOM access and have script and DOM isolation from the OS and other apps. You would need some sort of IPC system on top of postMessage, and probably some way to send theme data down into the frame and get things like content sizes back out.
I don't think same-origin iframed apps are out-of-process in Chrome yet, but I do think Chrome is supposed to get out-of-process frames eventually. That would prevent an app from exploiting a browser bug to take down the whole tab.
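For what it's worth, the IPC-on-top-of-postMessage part can be sketched in a few lines. Everything below is illustrative (`makeIpc` and the message shape are made up, not anything OS.js actually ships): requests and replies are correlated by a generated id, so several calls can be in flight over one port.

```javascript
// Hypothetical IPC layer over a postMessage-style port. Each request gets
// a unique id; the matching reply carries the same id back, so the layer
// can route the result to the right callback.
function makeIpc(port) {
  let nextId = 0;
  const pending = new Map();

  // The port delivers replies shaped like { id, result }.
  port.onmessage = (event) => {
    const { id, result } = event.data;
    const callback = pending.get(id);
    if (callback) {
      pending.delete(id);
      callback(result);
    }
  };

  return {
    // Send a request; cb fires when the matching reply arrives.
    call(method, params, cb) {
      const id = nextId++;
      pending.set(id, cb);
      port.postMessage({ id, method, params });
    },
  };
}
```

In a browser, `port` would wrap `iframe.contentWindow.postMessage(...)` plus an origin check on incoming `message` events; it's kept abstract here so the correlation logic stands on its own.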
What is the reason for requiring the single thread of execution in JavaScript? Trading race conditions for callback hell?
Even the concurrency provided by web workers suffers from the same issues: you communicate with them via messages, and they will not respond to messages if they are looping away somewhere else.
I'd love to have some simple pre-emption in JavaScript. Suspend code execution at one point and pick it up at another. Instead of having threads, have the fundamentals that allow threads. Has this been discussed and rejected by the JavaScript community?
I don't understand the limitation you are implying with web workers and message passing. If a normal thread is also "looping away somewhere else", it will be equally unresponsive as a web worker not picking up messages. The alternative to message passing would be shared data structures and mutexes, which are not very nice.
There are many alternatives to "fire-and-forget message passing" that JavaScript does, that don't require shared data structures or mutexes.
JavaScript can implement some of these alternatives with its postMessage/onmessage combination, but it requires cooperation on both the sender and receiver, which (in my opinion) is a limitation: the supervisor/operating system can expose just a little bit more information, and then the system isn't equally unresponsive anymore. Two that I'm thinking about are:
• Mailboxes. A process can post a message to a remote buffer and also check the fullness of that buffer so it can make other arrangements (e.g. scheduling another worker, letting the user know that the system is busy, etc). JavaScript can simulate this if both sides cooperate by implementing an ack-on-receive. UNIX allows detection of a full-buffer (EWOULDBLOCK). KDB publishes[1] the number of bytes in the output buffers which would also be preferable to what JavaScript does.
• Bulletin Boards. A process can simply publish information in a local buffer. Workers can then connect to pick up tasks. Again, JavaScript can simulate this if both sides cooperate and implement a get/put system with postMessage/onmessage, and while this does represent a shared data structure, it's read-sharing, which doesn't require any mutual exclusion. You can also build it into a network protocol: my cexec[2] does this because load+network latency gives me free scheduling. Many mail servers also use this trick: one process writes to the queue directory, and workers pick up new work whenever they have time.
If you're interested in inter-process communication, Tanenbaum gave good writeup[3] on these methods (which I think are specifically relevant), and some other methods (which are useful in other circumstances).
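The mailbox idea above can be sketched as a bounded buffer whose sender can check fullness before posting. This is purely illustrative (no JavaScript runtime provides a `Mailbox` built-in); the point is the EWOULDBLOCK-style backpressure described above.

```javascript
// Hypothetical bounded mailbox: post() refuses instead of queueing when
// full -- the JS analogue of a send failing with EWOULDBLOCK -- so the
// sender can schedule another worker or tell the user the system is busy.
class Mailbox {
  constructor(capacity) {
    this.capacity = capacity;
    this.queue = [];
  }
  get isFull() {
    return this.queue.length >= this.capacity;
  }
  post(message) {
    if (this.isFull) return false; // caller makes other arrangements
    this.queue.push(message);
    return true;
  }
  take() {
    return this.queue.shift(); // FIFO delivery to the consumer
  }
}
```

A sender that sees `false` back from `post` is exactly the process that "can make other arrangements" above, instead of silently piling messages onto an unbounded queue the way plain postMessage does.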
A pre-emption mechanism would be able to receive the message and respond. At the very least, a response of "I'm busy" is better than no response, which could be for any number of reasons.
Shared data structures would enable this, and while you may not think they are very nice, I would prefer a not-very-nice solution to no solution whatsoever. I have seen discussions on various mailing lists indicating that shared data structures will be implemented in some manner.
var n = 1;
search: while (true) {
    n += 1;
    for (var i = 2; i <= Math.sqrt(n); i += 1)
        if (n % i == 0)
            continue search;
    // found a prime!
    postMessage(n);
}
How would you implement this worker in such a way that the host page could stop it?
Convert the while/for loops and recursion to continuation-passing style and bounce via nextTick/setTimeout to convince yourself that shared data structures still aren't required to fix this: only pre-emption.
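A sketch of that conversion, with `onPrime` standing in for postMessage: the loop does a bounded batch of work, then bounces through setTimeout so pending messages get serviced before the next batch.

```javascript
// Trial-division primality test, factored out of the original loop.
function isPrime(n) {
  for (let i = 2; i <= Math.sqrt(n); i += 1) {
    if (n % i === 0) return false;
  }
  return n > 1;
}

// The prime search restructured cooperatively: do batchSize candidates,
// then yield to the event loop and reschedule ourselves with the state
// (n) carried along explicitly instead of living in a blocked stack frame.
function searchPrimes(onPrime, n = 1, batchSize = 1000) {
  for (let end = n + batchSize; n < end; ) {
    n += 1;
    if (isPrime(n)) onPrime(n); // found a prime!
  }
  setTimeout(() => searchPrimes(onPrime, n, batchSize), 0);
}
```

Note that this is still cooperative, which is the parent comment's point: the worker only stays responsive because it volunteers control between batches; with pre-emption, no restructuring would be needed.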
I'm not sure I follow the bit about convincing myself. Going back to square one with a setTimeout/event loop renders much of the use of workers pointless; you are stuck back at the same restrictions as main-thread JavaScript.
I agree pre-emption would allow this without shared data structures. It just seems that the solution we will be given in the end will be a shared model.
You can get a lot of mileage out of specialised solutions, and I suspect we'll continue to see those instead of a shared model; in an answer to your question about finding prime numbers, simply kill the worker and start another one with a new initial state.
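The kill-and-restart approach can be sketched as a small controller. Here `createWorker` is a hypothetical injected factory (in a browser it might wrap `new Worker('primes.js')` and post the initial state in), so the controller only depends on something exposing `terminate()`.

```javascript
// Sketch of "kill the worker and start another with a new initial state".
// The controller remembers the last checkpoint the worker reported, and
// restart() abandons the running computation and resumes from there.
function makeRestartable(createWorker, initialState) {
  let state = initialState;
  let worker = createWorker(state);
  return {
    // Record progress reported by the worker (e.g. the last prime found).
    checkpoint(newState) {
      state = newState;
    },
    // Terminate the stuck/unwanted computation and spawn a fresh worker.
    restart() {
      worker.terminate();
      worker = createWorker(state);
      return worker;
    },
  };
}
```

This is the "specialised solution": no shared memory and no pre-emption, just coarse-grained control from the outside, at the cost of losing any work done since the last checkpoint.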
The single-thread limitation is caused by the global interpreter lock[0] that's common to most interpreted languages. Along with JS, it also appears in CPython and Ruby MRI, the reference implementations of Python and Ruby respectively.
As I understand it, the GIL is an implementation detail of certain interpreters that prevents threads from running on multiple OS threads (and thus multiple CPUs).
JavaScript has explicitly chosen not to support preemptive multithreading at all (aside from Web Workers which don't share memory), while Ruby and Python do have it, even if it can't fully utilize multicore processors.
No, it's not. MRI, for example, uses OS-level threads from 1.9.x onwards. The GIL prevents multiple of those threads from executing inside the interpreter at any given time, which may sound like it forces a single thread to be running, but in practice most typical code ends up spending a lot of time running in C extensions or waiting on kernel space, so it's much less of a hindrance than you might think (it's still not great, but it's far better than it would be otherwise).
That doesn't eliminate the possibility of pre-emption though does it?
A theoretical version of setTimeout that triggered immediately, suspending the active code and executing the timeout function, then resuming (or potentially not).
That only requires the interpreter to execute one piece of code at a time. The notions of thread safety would apply to the interpreted code, but not to the interpreter itself.
It certainly looks as if each app lives in its own process of sorts - check the 'process viewer'
I'm sure they could use some sort of thread/process implementation with a scheduler to do stuff. Speculation: each program could be compiled to some sort of bytecode, use asm.js to implement the guts of the thing, etc.
When you think about it, a single-core machine is "single threaded" as well; you just need some interleaving code to let more than one thing run at a time. From some casual usage of the demo, it seems they've done exactly that, so there you go.
A single-core machine can multi-thread, but without a timer interrupt it's hard work. What can JavaScript do to provide an interrupt, short of emulating a processor with an interrupt via some form of:
while (true) {
    do an instruction;
    check for interrupt;
}
In a sense it is, and in a way that is the problem with the current model of JavaScript.
That line
do an instruction;
is the smallest unit of code execution. In JavaScript* that is an event. Let's call it a wibblywoop.
JavaScript is effectively designed around the premise that a wibblywoop takes a negligible amount of time. This premise is false. Often it does take a negligible amount of time, but when it doesn't, the impact on responsiveness is severe. This is why everything becomes a callback. The language itself has indicators that this should not be the case: the existence of things like Array.prototype.map shows that there was certainly an intention at some point for a function to be able to do a significant workload and return a result.
* in implementations anyway. The language itself doesn't seem intrinsically bound to the event model.
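The interrupt-emulation loop above can be made concrete with a purely illustrative sketch: run one unit of work at a time and poll a flag between units. The cost is exactly what the thread describes: each "instruction" must return control voluntarily.

```javascript
// Cooperative interrupt emulation: each element of `steps` is one unit
// of work ("do an instruction"), and isInterrupted() is polled between
// units ("check for interrupt"). A unit that never returns still hangs
// everything -- which is the limitation being discussed.
function runWithInterrupts(steps, isInterrupted) {
  let completed = 0;
  for (const step of steps) {
    if (isInterrupted()) break; // check for interrupt
    step();                     // do an instruction
    completed += 1;
  }
  return completed;
}
```

The granularity of responsiveness here is the length of the longest single step, which is the wibblywoop problem in miniature.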
It's been a while since I've checked up on the project. How's everything going? Have you gotten any adoption? Did the backend stuff ever get completely ported over to node from PHP?
Yes, there is a company that will make use of OS.js in a commercial product. More news about that will surface this year, so stay tuned! :)
And the Node backend is pretty much identical to the PHP one now... and PHP will probably be deprecated in the near future (at least from the main repository)
Looks cool! It certainly fits the "why not" category of why it exists, though I can't figure out a way to make it usable; without a proper window manager (which can't be done inside a web browser), using apps inside a window becomes more difficult than simply having a tab for each app. In fact, if every app opened full screen in a new tab, that could almost be handy, depending on the extension API, etc.
Reminds me of something I saw way back in the late '90s called MyWebOS, which looked cool but never caught on.
Also, I see the Internet has unleashed its usual brand of stupidity on the demo. There are files in the shared directory with names like "Got any good porn.txt" and "dicksmoker.odoc".
The French company behind XWiki did something like this, called eXo Platform [1] and WebOS, more than 10 years ago. I can't put my finger on screenshots or videos now, but they must be lying around.
Still impressive, especially with the smaller footprint, but not exactly revolutionary or novel.
There was a time when every company wanted their own portlet framework and implemented windowing in their apps (for better or, most of the time, way worse). A bunch of frameworks allowed you to do that more or less easily, too: ExtJS, GWT, Rialto...
Always cool to see these in action, but multiple "online desktops" have come and gone since the mid-2000s. I personally built one myself in 2010 only to realize it had been tried and failed a few times. My next instinct was that to get attention and attract users I'd need a mobile-style UI. My instincts were wrong, and your gut is right. There wasn't a strong enough use case.
The motivator for these projects is typically something along the lines of, "it would be great if there was just one OS or interface I could use on any device." As engineers it's very easy for us to see the usage pattern of apps, the speed, power and ubiquity of web tooling + JavaScript and say, "aha! The next logical evolution is all of your client logic on the web!"
Alas, if it were only that easy.
"Unfortunately, this ability to see patterns can prove catastrophic as you attempt to build your own company. The more you generalize the solution to a particular pain, the further removed it becomes from that specific pain. While it might end up being able to solve a lot of pains, it won’t be very good at solving any particular one." [1]
A shared, web-based environment is probably where everything is going. I mean we're more than halfway there with most operating systems, cloud backups, tons of SPAs, etc. The problem is there's no consumer-driven need for a fast transition to a JavaScript "OS" environment. Everything is just good enough, and offloading all device logic to the web, for the end user, is barely noticeable (if not a minor detriment as older devices still have rendering issues). It's likely that everything will converge on this sort of environment (but without the desktop metaphor) because engineers want to head towards elegant solutions, but it will take time. (You won't convince people to start switching with software alone if it means they have to figure out how to open Chrome / Safari / Whatever on their phones, first.)
FirefoxOS [2] is probably getting pretty close. The sooner we get to an OS just being a glorified web browser, the more we drive development spending way down and cut costs for the consumer. Then the software will already exist, packaged with the phone. But you need the phone first --- not the software. That's the product. That drives consumer demand. "JavaScript OS" becomes a reality as soon as web-rendering matches native performance on ubiquitous, low-end devices. The software will exist to sell the product, not vice-versa. (We'll also have a Cambrian explosion of offline-first front-end tooling. ;))
... Yeah, I've done a lot of thinking about this. ;) Was my first big pet project.
> The motivator for these projects is typically something along the lines of, "it would be great if there was just one OS or interface I could use on any device."
A lot of people do have this dream, but in practice it's horrible. The UI on my watch has to be very different from the UI on my flat-screen TV.
This notion that things will move to a web based environment because "engineers want to head towards elegant solutions" has two problems.
The first is that "what engineers want" doesn't matter in the least. We saw this with Windows 8: a single OS that tries to scale from tablets through workstations. That's an engineer's design and an engineer's dream, but users hated it, and Microsoft has since backed off.
The second problem is this notion that the web is "elegant." It has some nice properties, but it's layered hacks upon hacks. Demos like this are impressive because they work despite the web's limitations.
> The sooner we get to an OS just being a glorified web browser, the more we drive development spending way down and cut costs for the consumer.
This is frankly delusional. If web apps cost less, it's because they do less, or do it less well.
It seems like you read the first couple of lines of my post and then skimmed through the rest of it. :) You reiterated points I made as though you're arguing with me, I'm a bit confused.
Re: dev spending. We drive development spending down because it costs far less to write software once and deploy everywhere than it does to write it for two, three or more platforms. The "elegant" goal is one compilation and/or deployment target, whether you like it or not doesn't matter, and it seems like JavaScript (or some related cousin) is going to fill that niche. Pretty silly to think that's delusional when it's already happening.
Pretty impressive UI/UX. If that's possible using "web technologies" (JS + HTML + CSS, I gather), shouldn't it be possible to create such things on mobile? All the mobile-optimized web stuff I see still feels sub-optimal.
> All the mobile-optimized web stuff I see still feels sub-optimal.
That's probably because all of the mobile browsers are terribly inefficient compared to desktop browsers. It makes creating a mobile experience that's fluid and fast difficult.
The hard parts of doing this for mobile are implementing things like touch and scrolling. You get these with a webview, but they don't feel like their native equivalents.
Impressive, but everything is very slow for me. I've tried on my late-2015 5K iMac i7 and 2015 MacBook; both take almost a minute to load to the desktop in Firefox. Granted, I have a very slow internet connection at 18 Mbit down / 2 Mbit up, but given that it appears to be quite small in size, something must be awry?
When I worked in a telecom company, there were a lot of servers where a GUI was exposed which mimicked a Gnome 2 environment (through Citrix if I remember correctly). It was painfully slow and really a time waster. I wonder if this is faster than that. If it is, so many companies can use it.
Compare this (and the response to it) to the overall perception of JavaScript 5 years ago. It's interesting to see how decent tooling and some good language changes can move the perspective from largely disparaged to greatly desired.
2016 is less than 48 hours away. Why is curl http:// | sh still a thing? Why is the Windows installer delivered without HTTPS, and does it really have no digital signature, or have I been MITM’d? I don’t know about you, but I wouldn’t touch this with a barge pole, let alone a computer.
Really motivates one to build slimmer web apps.