I wonder what the security implications are.
I just opened a random URL and an arbitrary application was launched inside my Internet OS desktop.
What if someone gives me a link to a malicious password stealer, for example?
I think a permission system would be nice.
I guess that refers to the cloud storage/backend. Client-side apps seem to run as iframes, so they should be subject to normal browser sandboxing (…and actually, I guess, multiprocessing too), with only explicit message-passing.
Honestly I think the developer API could have been better highlighted by this HN post. Puter's actually doing a bit more than the Desktop Environment mockup that's visually apparent, and not featuring that seems like a wasted opportunity to pick up some app developers.
On a technical level it's actually not that impressive. Mozilla got BananaBread/Sauerbraten running with WebGL on netbooks over a decade ago, with much more advanced graphics than Xash3D:
The Half Life "app" above actually just wraps an existing Emscripten port in an IFrame. Puter does provide an abstracted filesystem (and other "OS" features), but I don't think the app above uses that, so it's no different than if you went to `data:text/html,IntralexicalOS!<br><iframe src="https://pixelsuft.github.io/hl/" style="width:50%;height:50%">`.
But actually I think putting together such a simple technical concept that works this well is the impressive part, that hasn't been done before. You can now take any existing web build of any app, and by adding a couple calls to `puter.fs.write()`, deploy it to a familiar virtual workspace where you get multitasking, cloud storage, and interoperation with other apps, for free.
The circle is finally complete! Now of course you want to use a browser within puter as well: Browser in browser OS in OS in virtual machine...
It's mind boggling how far one can torture the concept of markup documents to eventually arrive at something like this... just so users don't have to install software.
Itchy is maybe exaggerating, but as far as I know, lower class clothes were very coarse compared to modern ones.
A lot of natural fibers need a lot of strength to be spun into finer threads and we didn't really have that on a large scale until the 1800s.
Plus, distribution was much harder, so people in many remote parts were stuck with whatever they made locally, which was most likely subpar even for their times.
Cotton tech in the 1800s really revolutionized clothing.
Silk, and in general almost every type of clothing until modern cotton processing. Most traditional clothing was very coarse and one of the most obvious class indicators, across millennia.
Silk was so in demand and so lucrative a product that at one point smuggling silkworms out of China was punishable by death. I think the price of a silk shirt in the Middle Ages was roughly comparable to a luxury car today.
These comparisons always bring to mind the fact that the lowest-class people today have so much better sanitation, like toilets, than kings had back then.
Given what the web is, it makes me exceedingly sad we don't see the page as a browser of many sites more often. That a page mostly just dials home to its own server is radically less than what web architecture could be; it's a mere re-adoption of the past.
Efforts like Tim Berners-Lee's Solid seem like a great first step. There's also a variety of Mastodon clients one can run as PWAs, which are both an example and a bit of a counter-example: the page can dial anywhere! But then that Fediverse server intermediates & connects you out to the world. RSS readers too: dialing home to connect with the world. Instead we could perhaps have a client that reaches out directly. Then that activity of reading & browsing, keeping track of favorites and whatnot, would have to be sent home or otherwise saved.
To me, the Puter platform has huge potential. And as a developer, I already get a lot of value from it.
Using puter.js, I was able to add full cloud storage to my design editor https://studio.polotno.com/ without messing around with auth, a backend, or databases.
I agree, this opens up really interesting possibilities.
Here's my understanding of how it works, based on the puter.js docs [1]:
If I'm developing a frontend app that could benefit from cloud storage, I can load in puter.js as my "backend". I don't need to worry about user auth because puter.js will automatically ask the user to create an account or log in. I also don't need to worry about managing & paying for cloud storage because puter.js will take care of that on a user-by-user basis - including asking the user for payment if they go over their free limits.
I haven't actually used puter.js yet. But if I understand correctly, this could be a really powerful model. As the developer of a niche app whose purpose is not to bring in revenue, puter.js seems like a very reasonable way to pass on cloud storage costs to end users, while also reducing development effort!
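To make that concrete, here's a minimal sketch of the save/load pattern as I understand it from the docs. The file path and data shape are made up, and the in-memory stub exists only so the sketch runs outside a browser; in a real page you'd load `<script src="https://js.puter.com/v2/"></script>` and the genuine `puter` global, and delete the stub.

```javascript
// Stand-in for the real puter.js global so this sketch runs anywhere.
// API shape per docs.puter.com: puter.fs.write(path, data) / puter.fs.read(path).
const store = new Map();
const puter = globalThis.puter ?? {
  fs: {
    write: async (path, data) => { store.set(path, String(data)); },
    // real puter.js returns a Blob; this stub mimics just the .text() method
    read: async (path) => ({ text: async () => store.get(path) }),
  },
};

// puter.js prompts the user to sign in on the first call, so the app
// never handles credentials, servers, or storage billing itself.
async function saveDesign(design) {
  await puter.fs.write("myapp/design.json", JSON.stringify(design));
}

async function loadDesign() {
  const file = await puter.fs.read("myapp/design.json");
  return JSON.parse(await file.text());
}

(async () => {
  await saveDesign({ shapes: 3 });
  console.log(await loadDesign()); // { shapes: 3 }
})();
```

The appeal is that the whole "backend" is those two calls; the auth prompt, quota, and billing all live on the puter.js side.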
As lavrton said in a sibling comment - simpler integration.
I can't speak to Dropbox integration, but every time I've looked at integrating with Google Drive I have felt my development effort growing, not shrinking.
Puter also seems to place a high value on privacy, which I like.
Polotno Studio is a free app, and for a long time it didn't even have the ability to sign up and save created designs to the cloud.
I didn't want to invest my resources into a "cloud saving" feature (as it is a free app): setting up full authorization, a database, servers, and tons of other work to finish the cycle.
https://docs.puter.com/ gives a very simple, yet powerful, client-side JS SDK to enable full cloud saving and loading of data for my users. I spent a couple of days on the integration. Doing everything by myself with full hosting would have taken weeks, if not months.
Sounds good, but it seems the sign-in feature pops open a new window to the puter.com domain. Does the SDK provide any means of avoiding this and keeping everything integrated on your app without pop-up windows?
Also I'm confused about how you pay puter for the service. What if a million people suddenly use your app, and sign-up? As the app owner, do you have access to the users who signed up? Can you export those members in case you wanted to import them somewhere else if you moved your app?
I am not on the Puter team, so I may not know some details.
But.
I don't really care how sign-in is implemented. If it is a popup, but simple for the user, that is OK for me. They will probably change how it works in the future, because the Puter team has been listening to my feedback on the puter.js SDK.
Right now, I don't pay Puter. As I understand their long-term plan, they will eventually monetize the users directly; for example, users will pay for bigger cloud storage.
For now, I don't have access to users. But I already spoke with the Puter team about this, and they told me they will have a full user management dashboard.
Puter should provide more details; documentation seems lacking on their side, given it's been around for a bit.
I'm someone who, like you, doesn't want to mess around with user management and authenticating, but I probably should grow a pair and learn how to do it properly using Amazon services or something.
If someone could combine Supabase + PostgREST or Hasura with an easy visual admin dashboard for them inside this, it could be a complete, easy-to-access, easy-to-deploy platform.
This is insanely cool! Looks really slick too, even on a mobile screen.
jQuery?? I cannot imagine how difficult it is to not break this when you make the slightest change. Hats off for managing with vanilla JavaScript and jQuery! The best thing about React for me is not having to worry about breaking the DOM or messing up event handlers because some jQuery line somewhere obscure is probably doing something funky that is really difficult to track down!
You can easily shoot yourself in the foot with jQuery (or direct DOM manipulation for that matter), but it's not that hard not to. It just requires some discipline, like most things. React is also far from foolproof and not worth the added complexity in most cases, IMO.
That and types. The only framework that's useful to JS is a better static type-checking system (and none of this let's-make-the-whole-damn-runtime-slow-to-support-feature-X, looking at you, TypeScript).
We use a lot of TypeScript with a very opinionated setup on coding style and conventions, but that only goes so far when you're dealing directly with the DOM.
Because the DOM is notoriously hard to work with. The internet is full of blog posts and articles talking about how slow it is, for example, but in reality adding or removing a DOM node is just swapping a pointer, which is extremely fast. What is slow is things like "layout". When JavaScript changes something and hands control back to the browser, it invokes its style recalc, layout, repaint, and finally compositing steps to redraw the screen. The layout algorithm is quite complex and it's synchronous, which makes it stupidly easy to make things slow.
This is why the virtual DOM of React "won". Not because you can't fuck up with it, but because it's much harder to do so.
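The classic failure mode here is interleaving style writes with layout reads, which forces a synchronous reflow on every iteration. This toy sketch (not real DOM code: a counter stands in for the browser's forced reflow, and `setHeight` stands in for a style write) shows why batching reads before writes matters:

```javascript
// Toy model of layout thrashing: reading `offsetHeight` after a style
// write simulates the browser flushing layout synchronously.
let reflows = 0;
function makeEl() {
  return {
    dirty: false,
    setHeight(v) { this.dirty = true; },   // write: invalidates layout
    get offsetHeight() {                   // read: flushes layout if dirty
      if (this.dirty) { reflows += 1; this.dirty = false; }
      return 100;
    },
  };
}

const els = [makeEl(), makeEl(), makeEl()];

// Interleaved write-then-read: one forced "reflow" per element
els.forEach(el => { el.setHeight(100); el.offsetHeight; });
const interleaved = reflows;

// Batched: all reads on a clean layout first, then all writes
reflows = 0;
els.forEach(el => el.offsetHeight);
els.forEach(el => el.setHeight(120));
const batched = reflows;

console.log(interleaved, batched); // 3 0
```

A virtual DOM wins largely by doing this batching for you: it diffs in JavaScript and applies all the real DOM writes in one pass.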
> When JavaScript changes something and hands control back to the browser, it invokes its style recalc, layout, repaint, and finally compositing steps to redraw the screen. The layout algorithm is quite complex and it's synchronous, which makes it stupidly easy to make things slow.
Wait, you're saying it's synchronous but what exactly is being blocked here (since you also said the JS hands back control to the browser first)?
I’m not quite sure what it is that you’re asking. When you want to show something in a browser, you go through this process: JavaScript -> style -> layout -> paint -> composite.
The browser will construct a render tree and then calculate the layout position of each element/node, a process where it's extremely easy to run into performance issues. And since the rest of the render pipeline waits for this to conclude, it'll lead to very poor performance. You can look into layout thrashing if you're curious.
My point is more along the lines of how you can ask a lot of frontend engineers what the render pipeline is and how it works, and many won't be able to answer. Which isn't a huge issue, because almost everyone who does complicated frontends either uses a virtual DOM or exclusively hires people who do know. But for the most part, you won't be fine with just JavaScript for massive UI projects, as the person I was replying to suggested.
Only if you yield to the layout engine (e.g. `await new Promise(resolve => setTimeout(resolve, 0))`) in between. Which, if you know you want to change two things, why would you?
Not enums. But you don't need a runtime or function mutation for that...
Particularly egregious was (is?) async/await. Upgrade your browser/runtime/don't use it you say? Sure, but first two weren't always possible, and the third isn't possible unless you thoroughly vet your dependencies (easier said than done).
"Compiling to javascript" is all well and good if you actually just compile to normal javascript, as soon as you have any code that simulates other features (classes/objects/what-have-you) you are no longer "compiling to javascript". I mean yeah sure as a sort of intermediary assembly language you are but the performance is not the same. You have a new language with a runtime overhead, that now requires you modify the "core" language to bring in new features, which results in the underlying execution engines (browsers/cpus) becoming more complicated, power hungry, etc....
The performance wins for TypeScript likely stem from the runtime's ability to pre-allocate and avoid type checking.
Providing the type checks without using any non-JS features (and possibly providing the runtime some heads up regarding checks to safely drop) is the ideal.
You can disable those fallback implementations if you don't want to use them. Just use the javascript version you have available as the basis for your typescript. The option to look into the future shouldn't be treated as a negative.
And I still don't see how they make "the whole damn runtime" slow. You don't pay the cost for code that isn't using it.
Also I'm pretty sure the class implementation doesn't slow things down. It's a very simple transformation.
> You have a new language with a runtime overhead, that now requires you modify the "core" language to bring in new features, which results in the underlying execution engines (browsers/cpus) becoming more complicated, power hungry, etc....
I think you have this backwards. Typescript doesn't implement new Javascript features until their addition to Javascript itself is imminent.
The only feature Typescript wants to push onto Javascript is a syntax for type annotations, because then you can remove the compilation step entirely. At which point there couldn't even be a runtime overhead.
> non-JS features
To first approximation, there aren't any. The main one is the old enum syntax, which is why I brought them up.
> Typescript doesn't implement new Javascript features until their addition to Javascript itself is imminent.
I guess we want different things from Type systems.
I want rock solid guarantees that code is correct, so that the only thing left to test as much as possible is the business logic. I don't care about the latest programming fads, and I want stable, performant code.
You seem to just want some boilerplate guarantees and backwards compatibility.
If I were writing/creating TypeScript, I would not be implementing new features before JS upgrades, but long after (possibly as support libraries). I understand the goal of "easing" the transition, but IMO those sorts of "upgrades" should be late, not early, in a tool whose primary goal is static type checking, not JS features.
The things you seem to be worried about are configurable in the tsconfig. You can stay as polyfill free as you would like by instructing the Typescript compiler to error out instead of making the glue for you. Aside from the inescapable quirks of runtime JavaScript, Typescript felt pretty intuitive to me when transitioning to a new job from C# previously. Typescript with ESLint is about as solid as you’re going to get with JavaScript. I know that ideally there’d be something better, but in the real world right now this is the best it gets. At some point reality and business constraints are going to slam into ideations and things are going to get a bit dirty.
Aside from that, no matter what you pick, standard Typescript configs are absolutely compiling to JavaScript, not any other step in the interpreting process. It doesn’t matter if it’s taking your async/await and polyfilling it to run on an older browser engine… it’s still producing 100% JavaScript.
It goes Typescript -> JavaScript during the build, and the JS is what gets distributed to clients.
The JavaScript produced by TS is sent to the browser which performs the same JavaScript -> abstract syntax tree -> byte code -> execution, as usual
"standard Typescript configs are absolutely compiling to JavaScript"
You are missing the point. I want zero cost abstractions (for some level of abstraction - I accept e.g. there is a CPU and a browser). I don't care what it is compiling to.
Typescript is not a zero cost abstraction. (Zero cost meaning, here, that any overhead is incurred compile time only).
"The things you seem to be worried about are configurable in the tsconfig"
That's terrible. I want the Z.C.A. to be by-default, not "configure the heck out of the language to make it so".
Oh, wow. When this came up before, I don't know if I looked at the DOM, because I assumed you would do this type of thing by drawing pixels on a Canvas. It's actually made of HTML elements? That's impressive.
I too think it's very impressive but wouldn't it be even more impressive if it was made using canvas? It would mean that you would need to implement your own rendering loop and layout engine. You'd need to reimplement a lot of elements such as input fields or buttons. You get all of this for free when building on top of HTML/CSS.
I think it would likely be more technically involved and complicated if it were made with `canvas`.
But large blobs drawing to `canvas` aren't anything new at this point. The part that impressed me is doing it the simple way, using what the browser already provides, and getting it to work this well.
It’s not performant if you’re using JavaScript APIs. But it’s also possible to write to a canvas with WebGL, which is hardware accelerated and much faster than jQuery. I believe (although I can’t find a source for it now) that xterm.js used this strategy.
I got carried away for ages with this. I was installing extensions in VSCode and got confused when it wouldn't open a link to a repo in a little browserception, because by that point I was fully expecting it to.
Thank you! As a side note, we're open-sourcing the VSCode integration soon. Building an integration with VSCode takes quite a bit of work so hopefully the community will benefit.
I've been using this as a way of quickly putting a UI over an EC2 instance.
Could you advise on how you'd do the equivalent here? Eg., say I want to provide an EC2 machine with various packages installed (python, node,...) -- or, the equivalent docker image -- is there a way of using your UI to provide access to this?
Consider, eg., using an iPad with your UI in a browser. Could you advise a way that this could provide a complete development experience for datasci/seng? (As above, i'm using theia to do this in a quick-and-disposable way).
It would be nice if `~` was mapped to home directory (e.g. `cd ~/Desktop`)
Hard to resize windows. If I want to grab the right edge there is only 1 pixel to work with.
When printing with `cat` from terminal, it would be nice if there was a new line at the end of the text. The prompt shows up on the same line as cat's output.
If a file doesn't have a newline at the end, then `cat` will immediately start writing the prompt after the last line; this is the same behavior you'd expect from sh or bash. We might later improve this by having a "no newline" indicator in the prompt line instead.
Definitely a `window.onbeforeunload = () => "Are you sure you want to close this browser tab"` missing, or override the keyboard input completely, so it doesn't try to close the tab.
Same problem exists for all browser shortcuts inside the OS.
> window.onbeforeunload = () => "Are you sure you want to close this browser tab"
besides the fact that now all browsers just display the uninformative "Your unsaved data may be lost" rather than any custom message, this would be a perfect addition.
When I was young I dreamed of having a USB stick (not yet invented) I could take with me to different kiosks, and have a standard OS load my specific instance thanks to my custom key. This approaches that functionality, and I think it's pure brilliance that you've included such a thorough demo for us to enjoy, one you clearly spent so much time and enthusiastic effort creating and making manifest. So, I applaud you there, and thank you for making it open source; that's super cool, and might inspire someone to make a kiosk that, by default, loads your site.
Nearly 20 years ago when I was in IT, I had something like this using a tool called BartPE which allowed you to make a custom WinPE environment that could be booted from a USB drive. As someone who went on to make one of these "web desktops", maybe things like BartPE were a partial inspiration.
About 30 years ago when I was a programmer and support guy all in one, I had an MS-DOS boot disk and a 300 Megabyte Backpack portable hard drive. It was awesome. I had all of the source code, backups of the customer sites data and my programming environment with me no matter where I went.
These two posts have awoken old memories for me, because I used BartPE and I had a "magic" bootable MSDOS diskette.
I remember I also created a custom Windows NT build for a company I worked for around 1999. They had about 6 different models of Compaq workstation, and a dozen different departments. They had a disk imaging solution rather than implementing automated PXE builds. That's fine, except that nobody in the team had any "craft". Because NT isn't plug and play and each department had different software, we had about 20+ NT images, each with its own personality (i.e. major flaws, like hard-coded WINS servers, being already joined to a domain, old user profiles, broken software installations, old drivers).

The day I joined the team, the phone rang constantly from 8am to closing time. If you walked around the building to do a desk visit, 20 people would shout at you, "hey IT guy, take a look at this PC, will you?". Coming from a hardware support background I had installed MS stuff thousands of times and got my MCSE, but Lotus Notes, SunGard, Bloomberg and their awful VB6 apps (unpackaged collections of DLLs and instructions) had a short learning curve.

After I figured that stuff out, I created a single NT build with everything working perfectly. I cleaned up, defragged, ran sysprep. It used NT's hardware profiles to make the build work on any model of desktop (which just required imaging a new model, creating a profile for it, and installing all the drivers; rinse and repeat 6 times). Then I burned the highly compressed NT image along with ghost.exe onto a CD, and handed copies to the other 2 helpdesk guys. Anyone who called IT over the next few weeks, regardless of their issue, got a rebuild. Result? Immediate reduction in workload. So we proactively worked through the whole company. Things were so tranquil afterwards, we could go around to department heads asking if there was any "real" IT work that needed to be done.
I randomly watched a scene from it on YouTube based on this comment, and in the background was a poster I never saw before that said "Trust Your Technolust", and just like that, I have my new life motto LOL
It's an Austrian LiveCD based on Debian; version 2024.02 was just released. It's not as slick as Knoppix, but it does come with lots of utilities and can start an X Windows desktop.
What I'd really love is a net boot image and remote storage that can be mounted on the fly on any computer. All I'd need is my YubiKey or whatever secure identifier to connect all the pieces, and plenty of internet.
I'm partially there with VDI tools like Parsec, but being able to leverage local hardware would also be neat.
For some reason I'd assumed this was the kind of thing that the original Mac Mini would morph into, but of course I was wrong. Searching for "PC Stick" shows up some interesting options.
I would dearly love to find a use-case for this[0], which with a USB-HDMI might work almost as well
Is Samsung Dex able to boot/load to a standard Linux environment? Initial look at it seems interesting, but I've never really heard/seen much about it.
It depends on what a standard Linux environment is and what you need it for. I mean, Android is some kind of Linux, and with Termux you can get it a bit more Debian-like from a terminal perspective. But I haven't seen anybody start KDE or GNOME with it.
My biggest issue with Dex is that it can manage only one large display. So if you have two, they are just being cloned. Otherwise, I like to use it sometimes when I don't want to start my PC.
If you are referring to the background, indeed the colors change on purpose. It would be cool if that wasn't on purpose. As for browsing issues in Firefox, could you clarify the issue? If it's around using the Browser app, it is just an iframe so many sites will not load in it.
Yeah, I'm in the process of it. Why? Are you interested in trying it? If you are, send me a hello at cris@dosyago.com and I can send you a link when it's up, and get your feedback if you're free :)
Technically you could fake it with an iframe if puter is served with the correct frame-src CSP rules. But for a full browser experience you’d need a backend and a remote browser.
I just want to point out how clean and pleasant to read this codebase is. I'm starting to learn JavaScript, coming from a background of systems programming, and I bookmarked this codebase just as a benchmark example of what good JS code looks like.
This is really cool--I've played with a lot of these online desktops, but this is by far the slickest.
As someone who is doing something similar (https://gridwhale.com), I'd love to know what your goals were. Did you ever try to commercialize it? If not, why not? If yes, what happened?
Inside the OS, there's a game called Danger Cross, which seems very similar to Crossy Road. Did the Puter developer essentially reimplement Crossy Road? Or was there some open-source version already existing that could somehow run on this "OS"? Briefly searching Google for Danger Cross didn't yield any results.
Never played Crossy Road so I don't know the full extent of the gameplay features, but from looking at screenshots, it looks like the gameplay code could be done relatively quickly (some days at most), so it doesn't seem out of the question that it's a full re-implementation.
Crossy Road looks like another clone of Frogger, and clones of that game have been made since the '80s.
And this anuraOS[0] too! I haven't looked too closely at it (it showed up in my GH feed), but it looks like it uses v86[1] to emulate a *nix environment.
This is such a neat idea, and you get the gist of it from just the screenshot. I wonder what kinds of 'integration' you could do (clipboard, opening links, drag-and-drop, etc). I could see this as an educational tool for doing development on a Chromebook, because of the (emulated) terminal + filesystem.
Indeed, the modern Web APIs can do all that and more. One of my favorites is dragging out of the browser onto the desktop. Another is Ctrl+C'ing a file on the real desktop and then Ctrl+V'ing it into the "fake" one. Some superpowers are sadly locked away behind the need for a PWA, but I think they could one day just be part of what any site can do.
Technically it's not developer mode, it's just a regular feature of the OS. "Developer Mode" on ChromeOS is when you remove boot verification for ChromeOS itself.
I love how this is done with jQuery. And it'll be obvious to literally anyone who's ever used jQuery (and is a good designer) how perfectly suitable, and in many ways superior, jQuery is for something like this. But 98% of developers will absolutely balk at this in horror/confusion/wonder, despite the fact that the React/Angular DIYs they'd make would be bloated and outrageously slow.
If anyone was in doubt: jQuery is not dead. For anyone who wants to write minimalistic and efficient vanilla JavaScript, jQuery is still being maintained and used by many.
I've tested Puter on Oculus, seems to work pretty well; however, I think very soon I'm going to do very specific optimizations for XR, it's a new, emerging form factor that deserves its own design.
One of the things I find curious here is that there's no mobile story to speak of. Drag the window narrow, and all that happens is that the taskbar icons look squished.
Is there a mobile mode coming?
This would be dope af if I could get a mobile-esque UI on mobile-like devices, or opt into one on tablet-like devices.
In all honesty, this is perfect for people like me (who are rarely away from a keyboard) but less than ideal for people who live a more 21st century computing lifestyle.
If this thing had a mobile mode it would be revolutionary.
As it stands, it's still definitely revolution-friendly :)
A proper mobile UI is something we're looking towards adding. It's been coming up in conversation more frequently lately and we'll probably announce on Discord when we start developing that.
Be careful. If they're paying attention (and you just buried the needle on HN, so assume they are,) you're about to make some very powerful people very nervous. There will be offers, I predict. Not all of them wholesome.
I was a little surprised to see that it doesn't have the few extra tweaks necessary to work as a PWA in fullscreen mode: a manifest, some tags in the <head>, and CSS to lock down the body to prevent unwanted pinch zooming, scrolling and other gestures (which would also benefit the in-browser use case as well).
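For what it's worth, the tweaks described amount to roughly this (file names and values are illustrative, not taken from Puter's codebase):

```html
<!-- in <head>: minimal PWA hooks -->
<link rel="manifest" href="/manifest.json">
<!-- manifest.json would set e.g.
     { "name": "Puter", "display": "fullscreen", "start_url": "/" } -->
<meta name="viewport"
      content="width=device-width, initial-scale=1, user-scalable=no">
<style>
  /* lock the body down: suppress pinch zoom, scrolling,
     and overscroll/pull-to-refresh gestures */
  html, body {
    height: 100%;
    overflow: hidden;
    overscroll-behavior: none;
    touch-action: manipulation;
  }
</style>
```

As noted, the body lockdown helps in a plain browser tab too, not just in installed-PWA fullscreen mode.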
- there are strong, empirically tested usability reasons why a desktop-style UI is the pits on a mobile device
- generally, the move to mobile devices inclines to a different UI paradigm -- something finger-friendly and un-windowed (think iOS, Android, etc.)
- if there was a way that Puter could look kind of like a phone with some of the de-facto standard UI motifs and paradigms when it's on a phone, you could do cool shit like replace the Android shell with Puter, and have a pure web UI for your phone that runs web apps
- next step would be to add e.g. the ability to run Puter apps when offline, when reception is going in and out, etc. Sort of like Google Apps used to be (in Chrome)
- now you've got a cross-platform one-UI-to-rule-them-all breakout industry moment.
As it stands, Puter is already really cool, but if it turned into something that behaved a bit less like MacOS and a bit more like iOS at narrow resolutions, you'd have something that would make a lot of people very upset (and this is a good thing, they deserve to be upset)
This is one of the cool elements of the Synology operating system. Would be neat to see this extended further into other areas, using this as a base.
I set up a TrueNAS box for my dad recently and he was yearning for some kind of very light desktop environment for simple maintenance tasks. In hindsight I should have gotten him a Synology device.
We also do a desktop in a browser, but some core differences are that you can launch real browsers in our environment and essentially load any web-based app you want collaboratively. It also does screen sharing and has A/V for meetings (we are not open source though, and have a paid product):
Sorry about that. It should hopefully be fixed once we change our A/V stack in a few months. I unfortunately do not have specific details on why the A/V stack only works with Chrome-based browsers.
I believe the original reason was the founder wanted a way to have his guitar teacher share tabs and give A/V feedback on his guitar playing. It was originally a product for teaching with limited sharing features, then the founder realized it could be used for many other use-cases and expanded to be much more collaborative and became essentially a multi-room collaborative virtual desktop-like experience.
We use it daily for things like standups, sprint planning, all hands, operations dashboards, product roadmap discussions, dungeons and dragons sessions, watching videos together and more.
I think this is brilliant, amazing work. I opened it in mobile Firefox with uBlock advanced mode enabled, and everything works first-party. Most modern text-based websites can't do that!
That being said, I initially thought this would compete with ChromeOS, or the ill-fated Firefox OS; however, all the other comments and the FAQ are about other things.
I tried running it locally and connecting to it with Firefox but it just gives me a log in dialog and trying to create an account fails. With Vivaldi creating an account works.
But where is this account created? Ah, just got an email that includes a link to puter.com, so this created an account on a remote server. So it's not quite as local as advertised.
1. Emulating a desktop with windows so smoothly on both desktop and mobile. This webapp is much snappier than most webapps these days.
2. A storage and file explorer with a friendly API for third party apps. Now any app can use a cloud storage synced across devices where the storage costs are paid by the user.
Can't wait until more apps are available! I see some limitations, like VS Code being unable to open git repositories, which seems to be a limitation of the storage API (append-only, cannot seek or download partially)? Hope it gets closer to a native experience in the future.
Looks very cool. Am I correct that this is more like a client and that the persistence (user storage, sessions, etc) is handled by the proprietary non-open source cloud backend? Not a criticism, just trying to understand.
Hi, I maintain that backend (which we call a "kernel"). For now, yes this is correct. Our goal is to open-source a kernel as well. We have a couple hurdles: we need to ensure the code architecture makes productive contributions possible, and we need to make it possible to run without complex cloud infrastructure setup so that it's relatively accessible.
Seeing the reception of the newly-open-sourced desktop environment, releasing the open-source kernel is going to be a major priority. I'm personally very excited about that, and I'm happy to see that's how other people want to use Puter too.
So, the cloud storage isn't a decentralized system where all users contribute disk space; rather you are providing the cloud storage in boxes you control? It seems like you must have some storage limits to prevent abuse, yes? Could you say what they are?
Also, I ran your hosting example and ended up with a static site at https://quiet-morning-9156.puter.site/. How long will that site be there? I assume not forever?
That's right, logically centralized and physically distributed (on our cloud infrastructure). The open-source kernel will likely include a filesystem driver for local disk instead of cloud storage (which is more convenient for self-hosting) with the option to also use cloud storage provided by puter.com.
Storage limits for puter.com start at 500MB for new users, with the ability to gain 1GB of storage by referring users (you and the user referred each get +1GB).
Static sites will be available until you decide to remove them.
Thanks! Let me ask you another question. I'm making a CLI app that could run in wasm (it's written in Rust). Normally, users will run it on their own machine in their own terminal. But I also want them to be able to access it while they're traveling. I'm wondering if there's some way I could make it available in the Puter terminal, such that anyone with a Puter account could run it there when they are away from their own terminal. Would that be possible now with Puter, or is it something that will be possible in the future?
That will be possible in the future. The shell is also open source (https://github.com/HeyPuter/phoenix) so we'd also gladly accept any contributions that add the capability to execute wasm files from the filesystem.
It would be cool to self host the backend. I wouldn't want to put files on some guy's server, both from the eavesdropping angle and from the perspective that it could be abruptly shut down and lost.
Curious how the AGPL would apply to something like this. This seems like a tool to put a nice front end on a complex app, but would that trigger copyleft for the overall backend?
First thing I tried was checking if I could use it to share images. Which would be a nice way to organize what I share in folders. But apparently it can only be opened inside Puter itself, and it asks the user if they want to download it.
In case you wonder about the purpose of the project like I did, here's the explanation from the README:
> It can be used to build remote desktop environments or serve as an interface for cloud storage services, remote servers, web hosting platforms, and more.
Interesting to see that it's written more "low level": vanilla JS and jQuery (nostalgia kicks in). I guess it's analogous to why the Linux/Windows kernels are still written in C.
This is more in line with webOS from Palm, which still lives on in LG TVs. I mean, the front end of it.
The back end of it would be a much better option than Electron apps, by the way. But they were too early to the container age, since the paradigm then was syscall proxies.
Not to detract from the coolness of this, but I wish new OSes dabbled in new GUIs as well.
Experimental operating systems seem to be a dime a dozen by now, but we almost never see experimental GUIs or entirely new "desktop environments".
Just as how almost every "new" programming language is still stuck with semicolons and other C-isms that were ancient back when the Egyptians were laying down the pyramids, we're still stuck with either imitating the macOS GUI or the Windows GUI, or some weird Frankenstein's bastard of the two.
iOS, Android, consoles, and most recently the Vision Pro have proven that eschewing longstanding conventions can be successful — for example the vast majority of people on this planet don't need or care about scrollbars (or even menubars) anymore.
So why aren't the creators of experimental OSes being more experimental with the frontend? Come on guys, none but the nerds among us will be impressed with how it's made behind the scenes. The first impression most people will get is that it's just Yet Another WinMac Lookalike.
As an avid user of i3wm, the "YAWML" that you described isn't my favorite either. The good news is we're working on architectural changes that will make it easier to develop alternative desktop environments on top of the same APIs. I have a daily video chat with the author, and the topic of custom GUIs comes up pretty often.
“ - Why isn't Puter built with React, Angular, Vue, etc.?
For performance reasons, Puter is built with vanilla JavaScript and jQuery. Additionally, we'd like to avoid complex abstractions and to remain in control of the entire stack, as much as possible.
Also partly inspired by some of our favorite projects that are not built with frameworks: VSCode, Photopea, and OnlyOffice.
- Why jQuery?
Puter interacts directly with the DOM and jQuery provides an elegant yet powerful API to manipulate the DOM, handle events, and much more. It's also fast, mature, and battle-tested.”
This just tells me they don't understand why any of those things were created and how to actually use them. It won't get traction and it'll wallow and get stale since it'll be an unmaintainable project. Also, VSCode is built on Electron while Photopea and OnlyOffice are straight up painful to use.
That’s a lot of smack talk for someone who didn’t provide a link to their own, even more impressive project.
This kind of reaction reminds me of people who can’t fathom why anyone would use a database without an ORM. (And meanwhile I’m confused because nearly everything I do with a database would be twice as difficult and ten times slower with an ORM in the way.)
It’s only a prerequisite if you want to belittle people and accuse them of making ignorant technology choices. (If the discussion was ORMs, I’d be delighted to have an excuse to share and promote my public projects. But my earlier aside doesn’t constitute a change of topic for this thread.)
Wrong. It’s the rule, and has been thus for at least 270 years. Breaking it attracts a $500 fine and a gentle slap in the face with a pair of leather driving gloves.
That's not the point. The point is that it doesn't use a framework like React, Vue, etc., it instead directly creates and manipulates DOM elements, somewhat like Puter.
Super slick demo. I'm on mobile and it's impressively fast, never mind functional.
But it is 'just' a DE webapp, right? From 'internet OS' here (which I don't think you actually use in the TFA repo) I expected to be able to boot into it. I guess there is some other solution that would allow that, but not as a package deal?
I suppose I'm just saying be careful/manage expectations with 'OS', but for what it actually is it's really cool.
It's got an app store, and it seems to implement embedding, windowing, and a virtual filesystem for arbitrary apps that have been built for it. I.e., when you open "Polotno" (a graphics editor), or what appears to be VSCode, it opens in a windowed IFrame, but then clicking "Save" pops up a file selection dialog controlled by Puter, which lets you create a "file" that can then be accessed via the "Open" button in any other app.
It looks like there are other integrations between the DE/OS and its apps as well, like applications being able to set their window title dynamically (e.g. based on the open file/tab), and an API allowing robust support for third-party apps. If you open one of the games like Doom, you'll see that it's usually hosted on a third-party site like DOS.Zone but, according to the query string, using a custom build for Puter, which presumably has modifications to integrate with the desktop environment and filesystem. Other apps are hosted on Puter.site.
So if you treat the browser as the "hardware" (and maybe also the backend services hosting a lot of the apps), maybe you could call it an "OS" in that it abstracts and manages a single environment for multiple other programs to run simultaneously and share information with each other.
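To make that "OS managing programs that share information" idea concrete, here is a minimal, hypothetical sketch of how a host page could mediate requests from sandboxed app iframes via message passing. The message names, the in-memory "filesystem", and `handleAppMessage` are all illustrative assumptions, not Puter's actual protocol.

```javascript
// Hypothetical host-side dispatcher for messages from app iframes.
// Apps never touch each other's state directly; the host validates
// each request and exposes only explicit operations.
const files = new Map(); // stand-in for the virtual filesystem

function handleAppMessage(msg) {
  switch (msg.type) {
    case 'fs.write':
      files.set(msg.path, msg.data);
      return { ok: true };
    case 'fs.read':
      return files.has(msg.path)
        ? { ok: true, data: files.get(msg.path) }
        : { ok: false, error: 'ENOENT' };
    default:
      return { ok: false, error: 'unknown message type' };
  }
}

// In a real page this would be wired to
// window.addEventListener('message', ...) with origin checks;
// here we call the handler directly.
const saved = handleAppMessage({ type: 'fs.write', path: '/doc.txt', data: 'hi' });
const loaded = handleAppMessage({ type: 'fs.read', path: '/doc.txt' });
```

The point of the indirection is that the browser's same-origin sandboxing does the isolation, and the host decides policy (permissions, quotas) at one choke point.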
I suppose `puter.js` is what passes for its libc, or the syscall interface:
To OP: The information on publishing third-party apps doesn't seem very discoverable. The only mention I saw is in the popup that shows the first time you launch Dev Center, and after closing that I can neither find it again nor find anything else about it on Google (other than the linked terms, and the app IFrame source, which I presume is from GoogleBot's temporary account):
This reminds me of something I made once in "Macromedia" Flash back when I was 16 in like 2001ish or so.
It wasn't really an actual OS, but I managed to get a working desktop with a file browser, a fake non-functional web browser, a task bar, and a start menu.
My flash application didn't accomplish anything, but I felt immensely proud of having been able to create a mock up of a UI, even though it was extremely kludgy and wasn't even able to read or write files in the fake applications.
...No offense to the creator though. I don't mean to compare my high school project to this one. This project is indeed very cool!
I too dream of one day inventing my own kind of spreadsheet and terminal hackery OS, so I salute this person for creating a proof of concept that looks good.
> Why isn't Puter built with React, Angular, Vue, etc.?
> For performance reasons, Puter is built with vanilla JavaScript and jQuery. Additionally, we'd like to avoid complex abstractions and to remain in control of the entire stack, as much as possible.
Virtual DOM has always been about ease of programming, never about performance. Ever since the model has existed, work has gone into making it faster, but it is still not as fast as direct DOM access.
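As a toy illustration of that overhead (not React's actual reconciler), a virtual DOM rebuilds a tree of plain objects on every update and diffs it against the previous tree before any real DOM call happens:

```javascript
// Toy virtual-DOM node and diff. Names (h, diff) and the patch format
// are illustrative assumptions, not any framework's real API.
function h(tag, text) {
  return { tag, text };
}

// Compares two flat lists of vnodes and returns the patch operations
// a renderer would then apply to the real DOM.
function diff(oldNodes, newNodes) {
  const patches = [];
  const len = Math.max(oldNodes.length, newNodes.length);
  for (let i = 0; i < len; i++) {
    const a = oldNodes[i], b = newNodes[i];
    if (!a) patches.push({ op: 'insert', index: i, node: b });
    else if (!b) patches.push({ op: 'remove', index: i });
    else if (a.tag !== b.tag) patches.push({ op: 'replace', index: i, node: b });
    else if (a.text !== b.text) patches.push({ op: 'setText', index: i, text: b.text });
  }
  return patches;
}

const prev = [h('li', 'apples'), h('li', 'pears')];
const next = [h('li', 'apples'), h('li', 'plums'), h('li', 'figs')];
const patches = diff(prev, next);
// patches: one setText (index 1) and one insert (index 2). Direct DOM
// code would make those two calls without building or diffing any trees.
```

The diff keeps the programming model declarative, but the allocation and comparison work is pure overhead relative to hand-written DOM updates.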
I'm not really writing that to stir up js framework shit, mostly just commenting on the insane dedication to ideals needed to go "fuck it i'll just use jQuery" on a project of this scope. At least, from my brainlet dev perspective, a move like this is 100% boss level.
Yeah, I got that. The reason I asked is that to me it is a genuine question. I would have thought that with all the effort that is going into component frameworks (and into js runtimes too), the performance difference would be small either way. As it has happened with compilers in a sense.
Who's "we"? React is still by far the most popular framework. I personally like Solid better but I'm just one person vs. many who actively write React code.
Not sure why you're downvoted; it's a valid question (the downvote isn't an "I disagree" button). I believe the answer the Puter author came up with is that the VDOM takes away too much control from the author. Frameworks do make life easier, but they have an abstraction cost (mental overhead) and, in some cases, performance issues (execution overhead).
Take it from a guy that regularly posts unpopular truths: people vote based on the way they feel after reading. It has almost nothing to do with the content.
Edit: for example, try posting a personal opinion about a controversial topic and you’ll still have people downvoting to disagree as if it’s possible to tell someone that they are in fact wrong about a statement of what their opinion on a topic is.
It’s a bit sad if you think about what voting like that means, as content is often amplified or suppressed based on votes. Echo chambers seem inevitable
I try to minimize posting (edit: and having) opinions most of the time, they don’t tend to generate interesting discussion (edit: and are highly limited by perspective). Kinda hard to come to a conclusion when the assertions all boil down to “this is my experience”
Ah, Puter! That fascinating project surfaced on HN about a year ago, claiming the top spot for most of the day. I'm delighted to witness its transition to open source, allowing us to glean insights from the creator. Thanks for sharing!
The emergence of such front-end projects provides a glimpse into the maturation of front-end development and showcases the incredible possibilities it offers today.
Another really cool project, somehow related, is DaedalOS [2].
Thanks for the mention for daedalOS! I agree that when done well these projects can help demonstrate the maturity of the web as a platform. For anyone interested in checking out mine, it's on my personal website @ https://dustinbrett.com/
I hate that these guys have to justify using jQuery. Too many of you have been seduced by all these bullshit javascript stacks that just add a seemingly infinite amount of complexity to something that is already almost too complicated.
Does jQuery by itself offer any value these days compared to plain Javascript though?
I was under the impression jQuery was for the IE5 days where not all browsers provided the same javascript APIs and doing an xmlhttprequest was more clunky than doing a fetch() call these days.
At least for my personal Jekyll based blog, I was able to replace 40-45 lines of jQuery with 50 lines of plain Javascript and was able to get rid of 33kb of jquery dependency.
jQuery is still great. Just take this site (which is ironically arguing the opposite viewpoint) https://youmightnotneedjquery.com/ and notice how all the "modern" (tech's vaguest and most meaningless adjective) code is 2-3x as verbose and complicated.
And 33kb? You can probably gain that by changing your JPEG compression level from 80 to 79.
Yes, I've been asked why we use jQuery many times so I had to put it in the FAQ. I think we have solid reasons for it but it is controversial sometimes.
I think there are valuable positions to take in between “let’s manipulate the DOM by hand” and “React Server Components, NextJS and the kitchen sink”.
I personally think there’s great value in the pattern of reactivity where parts of your DOM update based on bits of data. Developing, designing and testing components like this becomes way simpler.
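A minimal sketch of that reactivity pattern, with hypothetical names (`createStore`, `bind`): pieces of UI subscribe to named bits of data, and only those pieces re-render when the data changes. Rendering to strings keeps the sketch DOM-free.

```javascript
// Tiny key-level reactivity: each bound render callback fires once
// initially and again whenever its key is set.
function createStore(initial) {
  const data = { ...initial };
  const subscribers = {}; // key -> list of render callbacks
  return {
    bind(key, render) {
      (subscribers[key] ||= []).push(render);
      render(data[key]); // initial render
    },
    set(key, value) {
      data[key] = value;
      (subscribers[key] || []).forEach(render => render(value));
    },
  };
}

const output = {}; // stand-in for two separate pieces of the DOM
const store = createStore({ count: 0, user: 'anonymous' });
store.bind('count', v => { output.counter = `Count: ${v}`; });
store.bind('user', v => { output.greeting = `Hello, ${v}`; });

store.set('count', 3); // only the counter re-renders
// output is now { counter: 'Count: 3', greeting: 'Hello, anonymous' }
```

This sits in the middle ground described above: one-way data flow and localized updates, without pulling in a full framework.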
Ah, jQuery, that cute little artifact from web development's stone age, where manipulating the DOM directly was the height of innovation. How quaint it seems now, in the shining era of React, where state management is not just a task but, once it gets its hooks in you, evolves to a transcendent experience. To cling to jQuery is like insisting on using a typewriter in a world of voice-to-text AI, but adorably obsolete. React, with its majestic state orchestration, makes jQuery's efforts look like child's play. It's like comparing the glow of a firefly to the brilliance of a laser beam. Truly, we React devotees can only chuckle at the notion of jQuery, for we have tasted the future, and its name has and always will be React.
Despite being asynchronous it contains a single linear schedule and two callbacks. The obvious outcome is two layered constructors, and a single top level API object with two custom events.
You couldn't describe what's going on in an equivalent functional construction of a React version without first heading off on a diatribe about signals/hooks/the state-management disaster du jour, the choice of which would have implications for both event handlers in some way. They likely couldn't be in-place here; they'd have to be passed down from parent context in a giant dance of inversion of control. At runtime the output would likely get parsed twice (once for shadow, once for insertion), if not more times under certain compositions. Here the jQuery is almost incidental and could be quickly replaced by something else. Want to swap in htmx? Just add the markers to the DOM, delete the jQuery lines, and you're done. You'd probably need to spend at least an hour figuring out whether you could fit something independent and platform-native into your React code base; hell, I've seen people burn weeks on it.
The JSX syntax makes React hard to read due to multiple layers of nesting.
IMHO, Vue is superior to React in both maintainability and readability, especially because the CSS, JS, and HTML are largely in separate blocks of the same file.
Arguably they should all be in external files. HTML in one, CSS in another, and Javascript to load and cache them without any embedded strings. That's how I prefer to write. For one thing, it lets me choose which things I want to cache and which I don't. I can compile from SCSS and deploy without changing any JS or HTML files. Usually my rollups are just the JS.
I can understand some of the benefits of JSX, but compared to Vue SFC's, it's a mess to read.
React CSS modules feel so unwieldy compared to Vue SFC's inline `scoped` CSS.
Vue's approach to templating using `v-for` and `v-if` means that you won't end up with a big block of logic mixed in with the template (you can still do it, but Vue's nature steers you away from it).
I'm confused about what you mean by this. There's nothing about Flutter's API that encourages less nesting; in fact, it's the exact opposite. Plenty of layout and styling that takes a number of nested widgets in Flutter can be achieved with a single div in React.
It is not about complex UI per se; it is about complex state. With jQuery, the state is duplicated in the DOM instead of mirrored, and is very hard to keep in sync.
It sounds nice in theory, but in practice it doesn't work out. The linked example is a good one: with a popover, the underlying DOM disappears, but the "state" behind it remains at least a bit longer.
Out of curiosity, what is the internal framework used by VSCode? I would like to take a look at it. According to Erich Gamma, there is no UI framework, just a conventional Model-View-Presenter with JS/TS-constructed HTMLElement components.
There is no such thing as "conventional" MVP, every implementation is slightly different.
It is not an official framework per se, but it is a framework nonetheless, except without comprehensive documentation, developer availability, or much commercial value in investing your time to learn it.
The main point is that if you're arguing against React or any framework "because abstraction bad", you will end up with one anyway. Enjoy:
Well, we will need to agree to disagree. I see some utility libraries in `base/common` for disposables management, eventing and actions that you are terming a "framework". I don't even know how I would use this as an independent framework outside vscode. Even if I could, the rendering and organization logic would be mine not delegated to fundamentally unique framework paradigms like React/Vue/Angular.
> I see some utility libraries in `base/common` for disposables management, eventing and actions that you are terming a "framework".
It is because you didn't look deep enough; VSCode, for all intents and purposes, has a "framework" that it is built on top of.
What is a framework, anyway?
"Frameworks model a specific domain or an important aspect thereof. They represent the domain as an abstract design, consisting of abstract classes (or interfaces). The abstract design is more than a set of classes, because it defines how instances of the classes are allowed to collaborate with each other at runtime. Effectively, it acts as a skeleton, or a scaffolding, that determines how framework objects relate to each other."
If it walks like a duck, and quacks like a duck, it probably also Reacts like a duck.
> the rendering and organization logic would be mine.
This is cute and nice for a weekend project. But if you want to build a commercially viable product or even GA open-source software, "mine" is of no value.
Good documentation, extensive testing, developer availability, on the other hand are far more important.
Clear (mostly one-way) data binding is definitely one of React’s strengths.
It appears that Puter handles templating with a pattern like:
let h = ``
h += `<div>${exampleContent}</div>`
h += `<div>${moreExampleContent}</div>`
$(targetEl).append(h)
And handles events in the traditional way:
$(targetEl).on('click', function () { /* do stuff */ });
Searching for “React” or “jQuery” in this thread, there are several other conversations with thoughtful comments about pros and cons. One curious tidbit that I learned is that Visual Studio Code doesn’t use a framework either and, like Puter, updates the DOM directly.
The main issue is that "$(targetEl).append(h)" is not idempotent.
This might seem like a small issue on the surface, but as your application grows, either this becomes a big problem or you end up reinventing React, Vue, or something similar, but without the extensive testing and developer/community availability. Which is essentially what VSCode does, for example, using some sort of MVP pattern.
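A small sketch of the idempotency problem, using a plain object in place of a DOM node so it runs anywhere (the function names are illustrative, not Puter's code): calling an append-style render twice duplicates content, while a render that replaces its target's contents can be called any number of times.

```javascript
const el = { innerHTML: '' }; // stand-in for a DOM element

function appendRender(items) {
  // jQuery-style $(el).append(h): each call adds more markup
  el.innerHTML += items.map(i => `<li>${i}</li>`).join('');
}

function idempotentRender(items) {
  // replaces the contents wholesale, like a framework's render pass
  el.innerHTML = items.map(i => `<li>${i}</li>`).join('');
}

appendRender(['a']);
appendRender(['a']);
const afterAppend = el.innerHTML; // duplicated: '<li>a</li><li>a</li>'

idempotentRender(['a']);
idempotentRender(['a']);
const afterRender = el.innerHTML; // stable: '<li>a</li>'
```

In a growing app, the append style forces every caller to know whether the target was already rendered; the idempotent style makes "render again" always safe, which is essentially what frameworks guarantee for you.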
It never really went away for me. I still reach for it in every web app. Over time, I've stopped using some parts of it that used to be timesavers... e.g. shifting away from $.ajax calls to fetch, but it's a good base for rolling your own responsive frameworks if that's what you want to do. And it's what I want to do, because I dislike the paradigms for react and vue, and have no interest in relying on those projects.
A slightly unfortunate name for Spanish speakers, I'm afraid: "putero" can mean either "brothel" or "man who maintains sexual relations with prostitutes" [1].
Fun fact: the Mitsubishi Pajero had to be marketed as Montero in Spanish speaking markets as "pajero" is Spanish for "wanker" [2].
Not forgetting the case of the Seat Málaga that had to be marketed in Greece as Seat Gredos because Málaga sounded too similar to "Malaka". I'll let any fellow greek readers explain the meaning ;)
Spanish also delights us with unlucky surnames. E.g., three acquaintances of my father have the family names "Feo", "Bastardo", and "Gay"; which translate, respectively, to "ugly", "bastard", and, of course, "gay".
It is absolutely not bait. Declarative programming for the web is certainly more pleasant, but direct DOM manipulation will always and forever have a performance upper hand over rendering frameworks that sit atop the DOM. Now this performance difference is usually negligible, but for projects like VSCode or this, there’s a very good reason they aren’t written using frameworks.
Depending on how you do rendering, vanilla JS can be extremely fast.
For most web apps, the initial layout and rendering take the biggest portion of the rendering budget. Subsequent changes are minimal and can be rendered very fast. If you can avoid re-rendering the whole page for every little change, the update can be fast.
React's advantage is to track the changes for you. That doesn't mean you can't track the changes yourself. And if you structure your rendering and refreshing pipelines correctly, vanilla JS would do just fine if not better.
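A sketch of what "tracking the changes yourself" might look like, with hypothetical names (`update`, `flush`): mutations only mark keys dirty, and a single flush step re-renders just the dirty parts, similar to batching DOM work into one animation frame.

```javascript
// Manual dirty tracking: state changes are cheap, and the expensive
// rendering is deferred and deduplicated until flush() runs.
const state = { title: 'Home', items: 0 };
const dirty = new Set();
const rendered = {}; // stand-in for the real DOM output
const renderers = {
  title: () => { rendered.title = `<h1>${state.title}</h1>`; },
  items: () => { rendered.items = `<span>${state.items} items</span>`; },
};

function update(key, value) {
  state[key] = value;
  dirty.add(key); // record the change; defer the rendering work
}

function flush() {
  // in a browser this would typically run inside requestAnimationFrame
  for (const key of dirty) renderers[key]();
  dirty.clear();
}

update('items', 5);
update('items', 7); // two mutations, but only one re-render below
flush();
// rendered.items is '<span>7 items</span>'; rendered.title was never touched
```

This is the kind of hand-rolled pipeline the comment describes: more bookkeeping than a framework, but you control exactly what gets re-rendered and when.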
Controlling DOM updates yourself will always allow for best performance.
Virtual DOM still needs to reconcile and apply DOM updates, but since this is handled by your UI framework of choice you have limited ability to optimize these updates.
I would argue that if performance is your top concern you'd still be better off using a framework like SolidJS, but theoretically vanilla JavaScript and jQuery can be better optimized.
Are we seeing the same benchmark? On that site, both Vue and Svelte are extremely close/on-par with Vanilla JS on every benchmark except the transfer size and first paint (which shouldn't matter at all here). jQuery also has an overhead, and I wouldn't be surprised if it's larger on many benchmarks than, say, Svelte's.
Yes, modern frameworks are getting very good. This does not change the fact that abstractions around DOM updates (frameworks) limits your ability to maximally optimize performance. The output of frameworks is still vanilla js.
https://puter.com/app/half-life-c3j01ag3pyd
It looks like it's using a build of the below, hosted on GitHub.io:
https://github.com/Pixelsuft/hl