Essence: A desktop OS built from scratch, for control and simplicity (nakst.gitlab.io)
538 points by luismedel on Sept 27, 2023 | 198 comments



At first glance, I thought this was another Linux distro with a custom window manager, not a new operating system. But it is its own operating system! It's got a custom kernel and everything. Built from scratch, indeed.

I'm always impressed when people build their own operating systems or browsers from scratch. Few of them will attain widespread adoption. That doesn't mean they aren't valuable though. Kudos to the developers.


On that note, I really appreciate distros like ElementaryOS purely because they maintain their own software, specifically modified or created with the intent of making their distro feel really integrated. Other distros put in loads of work too, but it just feels more consistent, like a KDE distro with all the KDE bells and whistles. (KDE is a good example of what I'd like to see: an OS with default apps that all really blend together and feel like part of one OS.)

Funnily enough, you don't get this "unity", or whatever you want to call what I'm describing, with Windows at all. They have random legacy ways of doing things; Windows is starting to look like a hacked-together Linux OS with how inconsistent the main OS apps tend to be.

Edit:

Maybe if Microsoft stopped letting marketing teams dictate what goes in Windows and focused more on polishing the OS, they would have an insanely fantastic offering.


The problem is Windows has tendencies to break in opaque ways that make it extremely frustrating to deal with. The quality of drivers is, on average, lower as well.


I get this, but it's already broken in some areas. As I detail in a prior comment, something as simple as creating a Windows user account is completely broken in the latest Windows 11 if you have the "Home" edition:

https://news.ycombinator.com/item?id=37322167


But this is by design, as they want to push people into a Microsoft account.


Two programs saying to go to each other to add literally any account? Including additional Microsoft accounts? I don't think that's the case.


Well, if adding a Microsoft account had worked, then they would have succeeded in convincing you to use one. Most people just give in before they'd switch to Linux.


It's a shame Elementary imploded, and more a shame their funding model was very publicly debunked. I'm not sure how the ideals for Elementary can exist without a unified, well-funded vision behind them.


> more a shame their funding model was very publicly debunked

It wasn't just that. Both founders were very big socialist/communist people who didn't really understand how to run a business. They still don't. Elementary OS turned down numerous funding opportunities and numerous partnership opportunities. They made a really, really great OS. But the business aspect was just atrocious.

The "pay what you want" model could have worked for their store. But in the beginning they didn't want a store at all, and then they pivoted to having one way too late. There was big internal strife in the eOS community amongst the developers for years, and very little conflict resolution on their part.

I wouldn't say they're an example of a failed funding model. Rather, I'd say they're an example of people who have amazing talent and ideas, but refused business help and refused to admit their failures.


It’s not that they didn’t understand how to run a business… their refusal to accept “business help” was not because of inexperience or incompetence but because they didn’t believe it was compatible with their strongly idealistic perspective on business/capitalism/the world.

I’m not sure they’re wrong.


Sometimes I marvel at what would be possible if just a sliver of open source talent was put to work building independent quality software for their own or an investor's profit. Marketing a product forces you to figure out what it is people actually need and deliver the knowledge and education people need to use a product well. In fact a lot of what is wrong with software from big tech companies is that individual programs are not written to make the program itself great, and the incentives generally do not encourage spending a lot of time making some part of Chrome or Safari more efficient.

Open source will never spend time marketing anything, never spend time educating an actual mass of general users as to its virtues or how to use it well, and suffers as a result. You don't have Desktop Linux that blows everything else out of the water because that would require investors to stake a lot of money doing these things, which they will only do if there is profit. PopOS gets as close as you might to something like that, but is ultimately shackled by the fact that they cannot sell their software. (Enterprise is different, where I guess you can nerf the product to make money on servicing instead.)

Even someone with infinite resources cannot do what a company selling something for a profit can because they are either ultimately captured by and beholden to some other interest other than the product itself, or constitutionally lack the energy to be daring and actually compete. Imagine what someone could do with a Firefox sold for a profit because of its superior functionality and superior efficiency.


> You don't have Desktop Linux that blows everything else out of the water because that would require investors to stake a lot of money doing these things, which they will only do if there is profit.

I think this illustrates well what I believe you get backwards. It's not that getting good-quality, concise software requires traditional investors and a centralised force of vision. It's that doing software in general, but especially desktops, is hard.

I think Linux is not what it is despite the lack of traditional investment; I think it is what it is because of that. Look at Windows and Mac: they show that they listen and develop their products with users in mind only as far as the market and investors let them. They will otherwise push anti-consumer features (like ads in the start menu) without even batting an eye.

This belief that profit drives innovation is just silly. Profit drives profit; innovation and competition are accidents. In a world with Googles and Amazons, Microsofts and Teslas, I am really baffled that it isn't clearer to us in tech that this is exactly how it works.

The state of Linux, and of open source in general, flourishing for decades, is a living testament to that. Open source is not only moral, it is the practice that we know to be the most sustainable and resilient in the long run.

I feel compelled to write, though, that even though I frame profits in this bad light, they are only bad when they are the ultimate goal of your product. How many enterprises are enabled by Linux and OSS initiatives? What vast amounts of money flow only because FOSS and OSS are the way they are?


Thanks. Many thanks.

The problem is that "innovation is driven by profits" is a religious credo, at least in the USA. If you've ever read Atlas Shrugged, by Ayn Rand, you would laugh at how silly it is, how transparently absurd the "story" is.

Then you realize that a lot of billionaires in the USA consider this book a scientific economic treatise. And they have the power to make it true and to brainwash everybody, including politicians, into believing it is true.

As it is the 40th anniversary of the GNU project, it is worth remembering that software has always been created as free and open source. It was how innovation was possible. To the point that, in 1976, a young Bill Gates had to write an "Open Letter to Hobbyists" which could be summarized as "Stop sharing! Please! Give us money and stop sharing stuff among yourselves. We want to be a profitable industry, not an innovation playground."

That highlights how nobody took proprietary software seriously at the time. But people listened. People voted for Reagan (who managed to dismantle antitrust laws, because monopolies are good for making lots of money) and, suddenly, making Bill Gates the richest man in the world became a top priority instead of pushing innovation and cooperation.

As Facebook, Microsoft and Google demonstrate every day, a monopoly never innovates. Every single new "innovation" is from a startup that was bought out of fear of having a competitor in the future. So, today, to become rich, you don't have to build a truly innovative business. You simply have to pretend to be a bunch of geniuses who could create the next monopoly, and be sufficiently good at pretending that an actual monopoly buys you. That's basically what is now taught in every startup incubator (the technical term is "exit", and, as soon as VCs enter the dance, you already talk about your exit plans. Which is easier when one of your VCs is on the board of an existing monopoly; that's how they manage to extract a lot of money from their position).

The system is completely unfair, corrupt, and suboptimal. But we have to keep telling the fiction that it works so people don't demand change.


> Even someone with infinite resources cannot do what a company selling something for a profit can because they are either ultimately captured by and beholden to some other interest other than the product itself, or constitutionally lack the energy to be daring and actually compete. Imagine what someone could do with a Firefox sold for a profit because of its superior functionality and superior efficiency.

*turns on imagination* ... Firefox eventually goes all-in on making profit by selling user data and making advertising deals, after they realize that the vast majority of users are totally fine with the default and other free options and have no interest in paying for your product.


Yeah advertising is a failure mode of what I said.

But if Firefox ever decided to make a lot of money by selling good browsers at a high price to paying users, well I think the result would be quite interesting.


The reason that doesn’t happen is that that business model is not viable in the presence of alternatives. The “quite interesting result” would be that there wouldn’t be a Firefox anymore.

Selling software is not a novel idea.


I pay for my mail client (MailMate). I pay for my search (FoxTrot Search). I pay for my spam filter (SpamSieve). I pay for my notetaking/archival (EagleFiler). I pay for my network monitoring (Little Snitch). There are alternatives to a lot of these; they just aren't very good, in some cases astonishingly bad.

And I would pay an enormous amount of money for a browser that worked well that had features I've always imagined a browser should have. And I don't expect anyone to make that for me without the reward of getting nice stuff for doing so.

No one would bat an eye if Firefox were no more, since there are other browsers more or less just as good, more or less just as bad. It's an unmemorable product, the consumer surplus of which compared to the best alternative is very low.


You're what's called an outlier. As a tech enthusiast you appreciate software.

I pay for FreeBSD and KDE, because I believe in them. But I don't want them to make a profit. Once they do, the people receiving that profit will want to see a rising trend, because business believes that a steady profit is decline: there must be growth at all costs. Once they've reached the limit of what the market will bear, the focus shifts from giving the customer what they need to extracting as much value from them as possible. This is a death spiral, because extracted value can never be infinite. The result is the phenomenon we now call enshittification.

The lack of a profit-driven approach is the only sustainable way to avoid this in the long run. Sooner or later it will always happen. Even if you have intelligent and ethical investors (which are extremely rare) sooner or later some sharks will buy it.

In fact the phase where a product truly has the customers' interests in mind is usually not very profitable but instead a gamble by investors, sacrificing short-term profits with the goal of extracting much more from the customer once they believe in the product and are too locked-in to leave.


If you want to provide “features” commercially, it’s much easier to just use Chrome’s engine. The point of Firefox, at this time, is that we have at least one alternative implementation.

MailMate is closed source donationware with a bus factor of 1, which is just hilarious. As I understand it, the “sell a mail client for a one-time payment” model turned out to be insufficient to support the single developer, leading to the whole “patron of a for-profit company” theme.


Here's the thing: You function in your environment, and you need your environment to function. And currently, some element of capitalism is required to build large-scale undertakings - if for no other reason than that the people you work with have families who'd like to eat, be housed, etc.

You can fundamentally disagree with the system, but that means you opt out of all parts of it. And so they might not have been wrong about their belief system being incompatible with business/capitalism, but the idea that they could build something large without compromising was wrong.

I, too, would like Fully Automated Luxury Space Communism - but you cannot get there from here by willing it to be true. (And it chafes me to say that, but that doesn't change the truth of it)


Lots of successful FOSS projects have no 'elements of capitalism', depending of course on how you define the latter. Same goes for much scientific research, and many other enterprises.


Examples appreciated - and please keep in mind the "large" part. Yes, you can occasionally find a few individuals being in a fortunate enough position that they can work independent of the outside world on a small/medium project, but I'm not aware of any large projects where that is the case.

I'd love to be wrong about that.


What do you mean by 'large'? Mozilla? Linux? The BSDs? Signal?

Also, it depends on how we define the involvement of capitalism; see my other comment:

https://news.ycombinator.com/item?id=37679386


Moz/Linux/BSD probably qualify. None of them would be possible without some kind of corporate support. FreeBSD is probably doing the best, but even they need corporate donors.[1]

Linux absolutely is carried to a large part by companies employing people to work on Linux. Mozilla needs to find income to pay its own employees. All three are slightly different models, but all three make it fairly clear that you can't cut all ties.

As for the "what do you mean" part, "obtaining money via capitalist endeavors related to the project so non-affluent devs can actually afford to work on the project" is probably a good enough definition.

In comparison, Signal is small - and even it can only make it because an extremely affluent person (Brian Acton) carries them. It's not reproducible for larger projects.

[1] https://freebsdfoundation.org/our-donors/donors/


I'm not disagreeing with you (yet at least), but can you name some of these successful FOSS projects with no elements of capitalism so I can better understand? The only ones I can think of definitely have some elements of capitalism, whether it's people using it at or for work, or contributing to it as part of a work dependency/project. But of course my limited recall is not evidence that it doesn't exist, so I'd be really interested to hear about some.


That's the matter of definition I was talking about.

It's a society with lots of capitalism, so of course you'll have people involved who are also involved in capitalism; you'll buy your computers from capitalist companies, etc. That doesn't seem meaningful.

So what do you mean?

I mean that projects don't need to be capitalist enterprises. For example, they can run on donations or grants, with volunteer labor, etc.


I don't think it works as well for end user software. It works great for infrastructure software and libraries, because businesses contribute labor and benefit in turn from labor contributed from other businesses. But with end user software all the work is for a consumer who can't contribute much to the developers of the software.

Of course there are such projects that receive enough in donations from end users to be sustained, but not very many; and in the particular case of Linux desktop environments and distributions, there is a lot of competition for users and donations of this type. The successful ones I can think of off the top of my head, such as LibreOffice and Lichess, are "killer applications" with no serious competition in FOSS.


Firefox? Signal?


Ubuntu Budgie is closest thing to Elementary that I’ve found: https://ubuntubudgie.org

Are there other alternatives to consider?


I am pretty happy with Pop!_OS[1], it's got a fair amount of polish, reasonable choices about snaps etc.

1: https://pop.system76.com/


PopOS rules for me, I enable tiling manager and I am golden...


News to me. Can I read up on this somewhere?



The linked article didn't use Danielle's name (which was well known at the time), giving the impression that the author either didn't do his research or was being intentionally malicious (existing beef? but def. bias). For what it's worth, eOS appears to be alive and well to me[1]. Please let me know if you have any other sources regarding the project's status, and I hope they keep sailing; love their work.

1: https://github.com/elementary/os/commits/master


Thanks


It's likely that the last version of Windows where it all was consistent through and through was Windows 2000. Someone may correct me by bringing up an exe that ships with the currentest Windows and which looks unchanged from Windows NT 4.0.

I remember an article showing the UI of Windows system apps: as you go deeper and deeper into %SYSTEM% or whatever, you get UI from progressively older versions that were sort of left behind. There's an incentive to polish all the immediately user-visible stuff, but there's so much of it anyway that if the deeper-hidden stuff only a handful of people are using still works, nobody will touch it. Look, the WMI console was probably unchanged since Windows 2000 at the time of Windows 10.


There are still some 3.1 dialogs in Windows 11 https://ntdotdev.wordpress.com/2023/01/01/state-of-the-windo...


My point exactly. Since it's the ODBC Data Source selection dialog, I think the actual users of it would be old enough to work with Win3.x/WinNT 3.x back when it was new, so maybe it counts both as penny-saving and a fan service for the greybeards?

On the other hand, how many kinds of first-party file copy dialogs and file open dialogs do you need? :-)


Try the font import dialog box. Last time I checked it was the exact same as Win95.


> At first glance, I thought this was another Linux distro with a custom window manager, not a new operating system. But it is its own operating system!

Indeed, these "reskinned"* Linux distros seem to be so common, it's easy just to make assumptions.

This (Essence) looks pretty impressive - to be built from scratch and have a nice clean (dare I say it... "non-linuxy looking") UI is even more impressive.

* "Reskinned" is probably a bit of a lazy term here - many of them are a lot more than that - but still, the point is that the value prop. is often the nicer more user-friendly aesthetics layered on top of an existing distro. What we're looking at here seems a lot rarer. Snapping at the heels of ChromeOS?


This reminded me a bit of Oberon OS - https://en.wikipedia.org/wiki/Oberon_(operating_system) ...


Funny, I was clicking through ready to rant about the opposite: claiming it's built from scratch when it's just another Linux.


That's one hell of a great landing page. It tells me pretty much everything I want to know.

  - Beautiful screenshot.
  - "Essence will happily run on low-powered hardware.
     It can take less than 30MB of drive space, and boot with even less RAM.
     No tasks run in the background, giving your applications all the space they need."
  - Amazing performance.
  - All the code is made available under the MIT license.
  - Demo video.
I can immediately imagine many different ways this could be popular in areas not covered by the main desktop OSes. The only other thing I guess I want to know is the developer toolchain/experience and "getting started" which is covered by Discord/Patreon links.

I think it would do well to differentiate it from current desktop OSes, e.g. first class support for touch interfaces, etc. Imagine all the IoT devices needing UIs.


Definitely. There are too many '00s distro pages I've seen that hide the screenshots away from the front page, almost as if they're afraid to show off their system.


> developer toolchain/experience

It was covered a bit in the video. Very nice way for a user/designer to theme their own desktop.


This combination of features makes this platform quite interesting for embedded applications. If the window management API is any good, I think this system could make for quite a decent embedded control system.

I bet you could compile this to WASM and get quite a good web UI with its fake windowing system.


It seems they stopped working on it in April 2022, for the most part:

https://gitlab.com/nakst/essence/-/graphs/master?ref_type=he...


Seems like the commits since 2021 have been bugfixes. Maybe this is mostly done rather than stopped?


The website says "Essence is still in development." In large type at the bottom of the page.


https://gitlab.com/nakst/essence/-/issues/20#note_1504444954

"Thank you. I am not working on it at the moment. But I would like to return to the project when I have more free time ;)"

He's busy with something else.


It is still a very inspiring project. Someone with C++ experience could keep working on it.

It would have been nice if it were in Rust. The video gave a lot of nice insight into having a tabbed-window-based UI; it makes it more like a browser. That idea could very much be adapted and used in a project such as Fomos, or even to create a WM on Linux/FreeBSD.


As always. I don’t know why people even pretend that it’s going to be a real thing. It’s like me saying I’m going to make my own jumbo jet.

You can toss together a shitty biplane but at the end of the day you can’t make the engine or the full size fuselage and there are no mechanics or factories for your jet so nobody is ever going to use it.


I like the following structure at the end of the assembler boot block code:

  times (0x1B4 - ($-$$)) nop
  disk_identifier: times 10 db 0
  partition_entry_1: times 16 db 0
  partition_entry_2: times 16 db 0
  partition_entry_3: times 16 db 0
  partition_entry_4: times 16 db 0
  dw 0xAA55

That's elegant, because it shows where and how the disk identifier and the 4 classic MBR partition entries coexist with the boot code inside the same first 512 bytes of the boot block. (It is assumed that after the boot block is copied to disk, the partitions will be set by an outside program (i.e., fdisk or similar) to appropriate values; otherwise the MBR partitions would be zeroed by copying the boot block. But that's a minor sub-point.) It's elegant code -- I've seen a lot of boot block code, and I've never seen that construct before!
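As a quick sanity check (my own arithmetic, not from the Essence source), the field sizes in that block add up to exactly one 512-byte sector, with the partition table landing at the standard MBR offset 0x1BE:

```python
# Sanity check (my own arithmetic, not from the Essence source): the
# fields in the boot block above fill exactly one 512-byte sector.
BOOT_CODE_PAD = 0x1B4        # boot code is nop-padded out to offset 436
DISK_ID_BYTES = 10           # disk_identifier
PARTITION_ENTRIES = 4        # the four classic MBR partition entries
ENTRY_BYTES = 16
SIGNATURE_BYTES = 2          # dw 0xAA55

total = (BOOT_CODE_PAD + DISK_ID_BYTES
         + PARTITION_ENTRIES * ENTRY_BYTES + SIGNATURE_BYTES)
assert total == 512          # one sector, exactly

# Field offsets, matching the labels in the assembly:
disk_identifier_off = BOOT_CODE_PAD                        # 436 = 0x1B4
partition_table_off = disk_identifier_off + DISK_ID_BYTES  # 446 = 0x1BE
signature_off = partition_table_off + PARTITION_ENTRIES * ENTRY_BYTES
assert partition_table_off == 0x1BE  # standard MBR partition table offset
assert signature_off == 510          # 0xAA55 occupies bytes 510-511
```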


Makes me think of all the elegance, simplicity, & performance we are leaving on the table with our limited OS competition.


Quickly checks if it's yet another Justine Tunney stupidly genius project ...


Plugging Justine gets downvoted? What in the what?


I think the use of stupidly could look like you were being demeaning. Personally I wasn't sure how to take it when I first read it; the downside of text communication vs in-person


Ah, makes sense. Funny thing is I only added "stupidly" to make sure it didn't sound sarcastic. Perhaps "stunningly" would've been easier to parse. It's hard to find a superlative appropriate to her work.


I'd love to see an OS like this take hold for people who don't need all the backward compatibility and server goodies in Linux and who don't want to deal with Microsoft or Apple. Something like Chrome OS but not so locked down.


I want to see this, but with a user friendly capabilities-based security design so that downloading an app from the internet can't steal my credentials and documents. That's the biggest issue people face today and yet there seems to be no real interest in trying new OS-level approaches as far as I know.


If you mean restricting filesystem access to random programs, I think that's already possible on macOS (with TCC) and Linux (with Flatpak), but the underlying mechanisms aren't very robust and can be easily bypassed by malicious code.

If you mean a true capability-based OS, there is Fuchsia, which doesn't seem to be used yet, and RedoxOS, which is in development.


Easily bypassed how for Flatpak?


Well, there's macOS, which has a very capability-based design (Documents folder? Pictures folder? Contacts? Desktop? Screen recording?), but most people just see it as annoying.


macOS doesn't do capability-based security; it does permission flags instead.[1] A permission flag is the clunky thing we've all learned to hate from smartphones, instead of the elegance that is capability-based security.[2]

Let's say you want to buy an ice cream cone, and pay for it.

Think of permission flags as signing a "Power of Attorney" letter, and handing that over to Dairy Queen. Every time you get a cone, they just take the money out of your account. But they could also sell your house. You can't limit the side effects of the permission you give, it's all or nothing.

On the other hand, capability based security is like taking a $5 capability out of your wallet (in the US, a piece of paper with Lincoln on it), and paying directly for that transaction, one time, at the time it is needed. The most you can lose is that $5. You're not risking your home, just to get ice cream.

The OS market is deranged. Capability-based security is something that's been required since persistent internet connections became a thing. Clearly, blaming everything else (the users, admins, programmers, compilers, language design) hasn't worked.

[1] https://support.apple.com/en-asia/guide/mac-help/mchl211c911...

[2] https://en.wikipedia.org/wiki/Capability-based_security
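The ice cream analogy above can be sketched in code. This is a toy illustration (all names invented, not any real OS API): holding the account object is like a power of attorney, ambient authority over everything, while the spend capability is the $5 note, a narrow, bounded, one-shot grant.

```python
# Toy illustration of permission flags vs. capabilities (invented names).

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):  # ambient authority: any amount, any time
        self.balance -= amount
        return amount

def make_spend_capability(account, limit):
    """Return a one-shot capability good for at most `limit`."""
    spent = False
    def spend(amount):
        nonlocal spent
        if spent or amount > limit:
            raise PermissionError("capability exhausted or over its limit")
        spent = True
        return account.withdraw(amount)
    return spend

wallet = Account(1000)
pay_for_cone = make_spend_capability(wallet, limit=5)
pay_for_cone(5)        # buys the ice cream
# pay_for_cone(500)    # would raise PermissionError: the $5 note can't sell the house
```

Dairy Queen only ever sees `pay_for_cone`, never `wallet`, so the worst it can do is take the $5 once.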


This is not a useful distinction because it'd be the same UI. If you grant an app a more specific capability, that just means a more specific permissions dialog. If you don't change the dialogs, the capabilities are going to be the same access it is now. I don't think users want to deal with much more fine grained dialogs.

And it does have more specific capabilities called "security-scoped bookmarks".


What would this look like in use? A permission dialog for every interaction? If you bury the user in those, it doesn’t take long before they just mindlessly smash buttons to make the pain stop.

The infosec community never read The Boy Who Cried Wolf.


> What would this look like in use?

It would look almost exactly like what you already see. Instead of "file open" just being a suggestion to the application, the OS would actually enforce the permissions so that the application couldn't do anything else.

The usability of such a system wouldn't really ever take a hit. It certainly wouldn't result in excess permission dialogs. You don't see permission dialogs every time you take cash out of your wallet, or when you turn on a light switch.
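That "file open as capability grant" idea can be sketched roughly like this (a toy model with invented names; in a real system the dialog runs in the OS and the handle crosses a process boundary):

```python
# Toy model of "file open as a capability grant" (invented names).
# The app never sees a path or the filesystem API; it only receives an
# already-open handle to the one file the user chose.
import io

def os_file_dialog(user_choice):
    """Trusted code with full authority; returns only a narrow handle."""
    return io.StringIO("contents of " + user_choice)  # stand-in for open()

def sandboxed_app(file_handle):
    # Can read this one file, and nothing else: no open(), no paths.
    return file_handle.read()

handle = os_file_dialog("report.txt")  # the user picked the file
assert sandboxed_app(handle) == "contents of report.txt"
```

From the user's point of view nothing changes: picking the file in the dialog *is* the permission grant, so no extra prompt is needed.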

The infosec community is weird, and I'm not part of it. I strongly disagree with them these days.


Eh, those are just folders. iOS has proper app sandboxing, but it could be taken a step further with VMs.



Doesn't seem bulletproof. I made a .sh script that touches a file in each of those dirs and ran it. It didn't ask for permission. Privacy settings don't have full disk access or individual file access granted to iTerm2, in case that matters.

Edit: Nothing has full disk access either. I even see bash listed as explicitly not having access.


That's because the Terminal app has an exemption for Full Disk Access, so as to not break anything.

Edit: OK, iTerm2, not sure what's going on there.


Yeah, maybe something is weird about every Mac setup I've used, but I've barely even noticed these restrictions. Pretty sure CLIs and shell scripts in general have full disk access by default. Almost seems like the restrictions require some cooperation from the apps, idk.

Besides disk access, there are all sorts of other ways I don't trust random native apps on my Mac. At least camera access is locked down now (I think).


They don't. But you almost certainly click through the prompt the first time you cd into Downloads without noticing. Prompt blindness is real.


I just reset all folder access perms to "no" and killed both Terminal and iTerm. Tested in Terminal, and it did protect the downloads, desktop, and photos library folders, but not any of the other ones in the home dir (pictures etc) or the Music lib.

Weirdly, iTerm did ask for permission when I cd'd to ~/Desktop, and I said no, but it was still able to cd and edit/view/delete anything inside; the only thing I can't do is ls. BUT in ~/Downloads, I can only mess with files created within iTerm, not pre-existing ones. At this point I double-checked iTerm still doesn't have access to either (or full disk access) in my sysprefs, restarted iTerm, and reproduced this.

So yes it still feels like Terminal is willingly complying while iTerm is not totally, or something is just broken. And even if both were actually enforced fully, the permissions carry over to anything you run in there, and they don't protect very many things to begin with. Like, it can delete my entire Music lib without permissions either way.

Ventura 13.5.2, 2019 Intel MBP


macOS does a lot of automatic tracking of things to try and reduce the impact of the security system. There's a system called "bookmarks" which lets apps have access to things they created even in sandbox-isolated locations, it might be related to that.

I think terminal users aren't really in-scope for macOS security.


So maybe because in the past I granted iTerm access to Desktop, it still has access to everything inside even after I've disabled it. I tried making a new file outside of iTerm just now, and iTerm can still read it, so it seems directory-level.

iTerm is third-party software like anything else. Wonder if it got an exemption. Also, TextEdit evidently has access to everything without asking, so it's not just a terminal thing. Idk what's happening exactly, but I don't trust this sandboxing.


Quite the opposite, TextEdit is sandboxed. The act of using the file open dialog grants it a capability to open the file you selected.


Terminal doesn't come with full disk access; you'll get prompts if you look inside, e.g., app containers. But people tend to approve it the first time that happens.

There are also data vaults, which you cannot get around without turning off SIP.


I have made many ideas relating to my own operating system design, and one of the important features it involves is capability-based security. My project does not currently have a name, though. Note that proxy capabilities are actually useful for many things rather than only for security; however, they are good for security too.


Could I bother you to elaborate? I'm interested in OS design and capabilities too :)


A program can receive capabilities from the kernel and from other programs, and can also make up its own capabilities (which can proxy existing capabilities, so they are called "proxy capabilities"), which can then be sent in messages to other capabilities. All messages are sent to and received from capabilities, and messages can contain both bytes and references to other capabilities (allowing the program that receives them to use those capabilities). (This is a bit similar to SCM_RIGHTS, although there are some differences.)

One thing that can be done with proxy capabilities is to run programs with any instruction set, whether or not it is the instruction set of the computer that it is running on; the programs will just work. This is because an emulator can provide a proxy of the execution capability, detect the instruction set required, and emulate it. (Of course this only works if the program that spawns it is given the proxy capability, although it doesn't know that it is a proxy capability.) (It could also emulate specific instructions, e.g. if you are using x86 without BMI2 extensions and you want to run a program that uses BMI2 extensions.) Another thing that can be done is to run a program on another computer just as though it is local (or vice versa); a program can, when sending/receiving messages through the network service, make IDs for any capabilities it sends/receives and use those to handle passing the messages. So, capabilities can be shared between computers, and a program can use a combination of capabilities from multiple computers. (Multiple locking will be a bit more complicated.) Or, you can use proxy capabilities to test what a program might do ten years from now (by proxying the date/time capability), or to simulate disk errors, etc. Or, if you do not have a camera, you can make a program that expects a camera be given the contents of a video file instead. There are many more possible uses, too.

The kernel need not know what proxy capabilities are used for, in order to work.
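A minimal sketch of the proxy-capability idea in Python (my own illustration, not code from this project): a program is handed "a clock" capability and cannot tell whether it is the real one or a proxy that shifts time forward, which is the "test what a program might do ten years from now" trick described above.

```python
import datetime

class ClockCapability:
    """Stand-in for the 'real' date/time capability a kernel might hand out."""
    def now(self) -> datetime.datetime:
        # Fixed here so the demo is deterministic.
        return datetime.datetime(2023, 9, 27)

class TenYearsLaterProxy:
    """Proxy capability: forwards to the real clock but shifts the result,
    so a program can be tested as if it were running ten years from now."""
    def __init__(self, real_clock):
        self._real = real_clock
    def now(self) -> datetime.datetime:
        t = self._real.now()
        return t.replace(year=t.year + 10)

def program(clock) -> int:
    # The program only knows it was handed 'a clock'; it cannot distinguish
    # the proxy from the real capability.
    return clock.now().year

real = ClockCapability()
print(program(real))                      # 2023
print(program(TenYearsLaterProxy(real)))  # 2033
```

The same shape works for the other examples in the comment (a fake camera fed from a video file, simulated disk errors): the program's interface is just "objects that answer messages", so anything satisfying the interface can be substituted.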

When a program starts, it receives an initial message, which will contain any capabilities it is allowed to use (and, depending on what capabilities they are, might be able to use those capabilities to request further capabilities). (If the initial message contains no capabilities, then the program is immediately terminated (unless a debugger is attached to it), since it would not be able to do any I/O and is therefore worthless.) (Note that there are no command-line arguments, environment variables, etc; only the initial message.)

Files can contain links to other files in their stream as well as bytes, and links to files can also be made into capabilities which can be sent in messages too. A link can be either to the latest version, or to a fixed version (in which case copy-on-write is used if it is accessed and written through a link that is not to a fixed version).

Another feature is that locks and transactions can involve multiple objects at once, instead of having to lock each one individually. (This is helpful for many kinds of synchronizations. For example, a program might write to (or read from) two files and avoid a race condition of another program reading (or writing) the two files and receiving (or sending) inconsistent data.)

I also have many ideas of the high-level design (of stuff other than the kernel); most of the above is about low-level design.

There will be common conventions for formats of messages, including endianness, etc; this way programs written for different instruction sets can communicate with each other without being confused.

One high-level feature is the "common data format", which is a binary structured format, used for most of the system. This can include plain lists, key/value lists, rich text, diagrams, zoned spreadsheets, time series, typed arrays, extensions, and others.
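Purely as illustration (this is not the commenter's actual format), a length-prefixed key/value framing in Python shows how little is needed for a binary structured format with a fixed byte order, so programs on different instruction sets read it identically:

```python
import struct

# Hypothetical framing: little-endian u32 length prefixes for UTF-8 keys
# and byte-string values. The fixed byte order is what lets programs
# compiled for different instruction sets exchange messages safely.

def encode(pairs: dict) -> bytes:
    out = b''
    for key, value in pairs.items():
        k = key.encode('utf-8')
        out += struct.pack('<I', len(k)) + k
        out += struct.pack('<I', len(value)) + value
    return out

def decode(data: bytes) -> dict:
    pairs, pos = {}, 0
    while pos < len(data):
        (klen,) = struct.unpack_from('<I', data, pos); pos += 4
        key = data[pos:pos+klen].decode('utf-8'); pos += klen
        (vlen,) = struct.unpack_from('<I', data, pos); pos += 4
        pairs[key] = data[pos:pos+vlen]; pos += vlen
    return pairs

msg = encode({'title': b'hello', 'size': struct.pack('<I', 42)})
assert decode(msg) == {'title': b'hello', 'size': struct.pack('<I', 42)}
```

A real "common data format" with rich text, typed arrays, and so on would need type tags and nesting on top of this, but the round-trip property is the same.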

The command shell has some ideas similar to Nushell, although using the binary structured format instead of text, using Extended TRON Code instead of Unicode, and others. It can also be used as a programming language, and can be used to control other programs (given access to the appropriate capabilities) (a bit similar to ARexx ports), etc. You can also just as easily move and copy data and objects between the command shell and GUI, so they are designed to work well together rather than independently.

The file system is strange, having no file names and no directory structures (except for a root directory with 256 numbered entries, mainly used for some low-level startup stuff; not all entries need to link to a file). A file can have several numbered forks (not necessarily consecutively numbered; perhaps 32-bit numbers); some of the low numbers have specific conventional uses, while higher numbers can be used for your own purposes. Journaling is needed, including for transactions involving multiple files at once.

A POSIX compatibility library is also possible. This is a user program that a C program can link to, and a set of conventions for messages to use with POSIX (e.g. command-line arguments, environment variables, etc), which can be used to run programs that are designed for POSIX. (However, designing the program for this system instead would make it much better suited for this system in many ways, since it can then take advantage of many of the helpful features of this system.)

My intention is that it is not a single implementation of the operating system, but a specification from which multiple implementations would be possible. Some implementations might run by themselves while others might run inside another operating system (similar to Inferno). A program for this operating system could then run on any of the available implementations. The kernel and the higher-level parts of the operating system need not match (other than stuff such as drivers), although usually it would be expected that they match.


Fuchsia is probably the best chance of getting a real world capability based OS.


Genode seems more mature as a desktop OS at the moment. It has been going on for much longer.


It seems to be aimed more at uber-geeks and people that have to work with secure networks than ordinary people. I think Fuchsia is intended to be a general purpose OS.


I’ve been planning and building this for years. But regular life always takes priority. This could change if there was a viable business model for an operating system that didn’t rely on hardware sales (the Apple model) or on advertising (the Android and increasingly Windows model). One day when I’m independently wealthy maybe.


Try haiku-os.org.


Lord, what I would give to see Haiku gain real traction. BeOS was something truly special and I often wonder what the tech landscape would look like today had Apple chosen to acquire them instead of NeXT.


Or if Microsoft hadn't used anti-competitive practices to ensure a system like BeOS would have trouble getting traction. Bundling agreements with OEMs and whatnot.

>Lord, what I would give to see Haiku gain real traction

I think the only thing we can give right now is our time and money. Porting applications, porting to different hardware platforms, spreading the word, etc.


BeOS started on its own hardware where PC bundling agreements didn't matter. The real problem was that by the time it came out, Windows and PC hardware was good enough.


OEMs were also to blame; they could choose not to take part in those deals. It isn't as if Microsoft gangs were visiting them, talking about "how nice the OEMs are and it would be a shame to see something happen to them".


Indeed. It’s interesting that apparently there was room in the marketplace for another commercial consumer desktop operating system — we can see that it’s true with the rise of chrome OS. We could maybe have had that competition 10 years earlier.


I don't know if there was room back then. Suppose Microsoft made no agreements with OEMs. I guess OEMs would have to agree on some open standards to fairly support a broad range of OSes efficiently, which would've been hard but doable in theory. Even then, apps would be OS-specific, and one OS would win the market, at which point hardly anyone would have a reason to use the others. Cause the apps matter far more than the OS to end users.

And now the web browser matters more than the native apps or OS to many users. And Chromium has dominance there.


Yeah, I guess you're right that the necessary prerequisite for alternative OSs like ChromeOS to be viable for consumers was the Web eating software for most tasks. I think most people doing business or personal computing on a laptop today are doing 100% of those tasks either in a browser, in a first-party app the OS vendor made, or in a simple Electron app that just wraps the webapp (Like Slack and Spotify). Obviously not true 25 years ago. It started coming true right before Chromebooks started coming out.

Arguably it's also the only reason Macs became viable for consumers instead of dying off. Imagine if you couldn't use any webapps on a Mac, nor "apps" that were built with Electron.


I've used a Mac from 2003 onwards, and it was rough in some ways before large web apps became common. Microsoft was seemingly throwing Apple a bone by making Office for Mac, otherwise I think the Mac would've been non-viable. And also Age of Empires II, which later on became Windows-only.


There's a good chance that Apple wouldn't exist today if it went down the BeOS road. Not because BeOS was bad (I liked it and played with it for a while back in the day and with Haiku more recently), but going with NeXT got Apple Steve Jobs back, and that's what really saved them.


What do you think about the BSDs, or 9front, the Plan 9 fork?


You can always easily add the Linux environment to ChromeOS.


That 0.7s boot from bios to desktop is impressive.


Intel had a talk a few years back where they got Linux boot down to 0.3s from when power is applied to a fully interactive UI on an embedded platform. That use case was meant for automotive, where you want things like the infotainment and instrument cluster available shortly after accessory power is enabled.


So why do we have to wait so long for other versions to boot?


On many machines, the firmware is the largest contributor, with user-space services and their dependencies being the rest.

`systemd-analyze` (in particular, `systemd-analyze critical-chain` and `systemd-analyze plot > plot.svg`) are useful to diagnose. On my system you can see that my graphical session waits for systemd-user-session.service, which waits for network.target, which waits for NetworkManager taking its sweet time, which isn't started before network-pre which waits until nftables is done loading rules, etc.

Optimizing the service order and their dependencies does wonders for boot time.


> Optimizing the service order and their dependencies does wonders for boot time.

Why isn't this automatic? I'm pretty sure most of us have neither the expertise nor the enthusiasm for that job.


It is automatic, but systemd obviously is constrained by the dependencies specified inside the unit files (that's kinda the point). The real question would be, why can't NetworkManager handle being started before everything is initialized. In turn the answer is probably "because it's hard"


I mean, it is, within the rule set specified by the config files. Distros just tend to explicitly default to the system being more fully up before presenting an interactive GUI because a lot of users' workflows expect that.


It is a matter of requirements. A login may require LDAP or Kerberos, which requires network connectivity. Pulling up network before the firewall is initialized could be dangerous. Maybe your desktop environment needs to play a jingle, so it by default waits for sound to init.

Service optimization is basically deciding what your system doesn't need in order to function for your usage, and moving things you don't need to a late initialization with no dependents. A distribution dedicated for a particular machine usage and desktop environment could do some tuning on your behalf, but it is not generic.


A lot of it is hardware startup and enumeration, and also software services (because they had to really strip it down to get there).

I'm more curious why my car, which is fairly recent, doesn't start in 0.3s.


A typical car is broken down into many small, dedicated computers.

The powertrain and body control modules, assuredly running a RTOS, certainly do. The PCM for example controls when the spark plugs fire. If it did not boot instantly, the engine wouldn't run. If the BCM didn't boot, the doors wouldn't unlock nor would the key or PTS start the engine.

The user-facing infotainment crap is not necessary for the "car" part to function. You could physically remove it and the car would still be drivable.


There should be a key you can hold down which would mean "skip all enumeration and assume everything is exactly as it was the last time you booted" (with a brief warning to the user about the possible risks introduced by this)


If we're going that route, I'd like a bootloader-supplied kernel option that defaults to enabled (ideally with the GRUB/whatever menu having an option to disable it).


That's fair. I just wanted it to default to safe (as good user design dictates).


If you turn off quiet boot, you'll see an impressive list of things getting started up, many of which the kernel isn't in full control of. DHCP (sometimes), gpus, NTP, disk decryption, TPM, monitor probing, USB hub enumeration, on and on.


But all the stuff that is actually necessary is the same on every boot so no enumeration and probing should be necessary.


This isn't true. All hardware is swappable between boots. I've moved a root disk from one case to an entirely new case before, replacing literally every hardware component except the root volume but keeping the same OS. Not only is enumeration required at boot, but event handlers for hot-swapping need to be available at all times. Users expect to be able to plug in and unplug USB devices, network cables, monitors, keyboards, and have it all just work.

Presumably, most faster boot OSes simply don't support as many devices. Amazon was able to get Firecracker to boot Linux damn near instantly by pretty much removing all hardware support from the kernel since they know for sure it's only ever running as a VM on a hypervisor they strictly control and won't change.

If you know for sure the devices you need, you can compile the kernel yourself and remove all the stuff you don't need and possibly even get rid of udev and just hardcode what it normally detects.


Is it? My laptop is only shutoff when switching from a dock at home to a dock at work, changing the GPU and hubs available. The OS I just netbooted off my NAS has no idea what it's booting on. I can plug and unplug USB devices anytime, including while the system is off. Maybe I swapped USB headsets, maybe I plugged in a fingerprint or smartcard reader, which needs to be initialized before the login manager starts. Maybe my time server is no longer reachable, so NTP needs to find a new one from the pool, because if it doesn't sync timing correctly the certificate for LDAP won't validate and my work laptop won't be able to login. Maybe it needs to initialize the networking stack to handle an NFS or Samba mount in my /etc/fstab which has a nowait=0, since I use it as a boot volume.

This could be a very, very long list of hypothetical changes :)


All that USB stuff ought not to be blocking though, right? since those can also be connected or removed at any time.


PXE boot obviously has some constraints that don't usually apply otherwise.

Presumably the dock can be detached while booted, so the kernel should also be able to handle booting and only detect at some point that the dock has disappeared


You can see what is eating up the time as you boot if you use systemd.

https://wiki.archlinux.org/title/Improving_performance/Boot_...

I haven’t really had any issues with Linux boot times for personal machines lately. I think the people that care most about getting into the sub-second range are the ones doing cloud VM stuff, spinning up lots of micro services or whatever.


My computer takes about 45 seconds to start, which isn't terrible, but when I run a Windows XP VM on a quarter of the CPU cores I'm reminded of how long those 45 seconds are.

Even without the stupid 20 seconds of firmware I can't do anything about, XP shows me an interactive desktop in half the time Linux does. Windows 7 is just as fast, and just as fully-featured as my desktop Linux install. Windows 11 still boots faster (and more reliably) than Linux despite the overhead of Windows Defender and drive signature checking.

I don't need 0.3 seconds of boot time on my desktop, but something a bit faster would be nice. I think my machine is made so slow by the death of a thousand services all starting at boot.


Yeah, that’s way too long. My laptop (intel gen11) spends ~7s on firmware, a while in kernel (can’t tell how much of that is me typing my excessively long LUKS passphrase; last boot was 11s, of which waiting for me to type was probably at least 5s…), and just 2.2s in userspace before reaching graphical.target (i.e. I get a GDM prompt). If my assumption on my typing speed is right, OS takes no longer than firmware + bootloader.

(~0.2s is wasted on a dependency that shouldn’t be there, but I’m not prepared to resolve that)


My laptop beats my PC in boot times, but also has a tendency to freeze/panic when switching to GDM, and when I force reboot it'll freeze a few times in a row and then work again (it's either an Nvidia problem or an Intel problem, but I don't get any stack traces because the display is frozen). 10th gen laptop versus 7th gen desktop seems to make quite a difference.

Linux definitely can be fast, but in my experience it starts slowing down as you install more services and tools, and I'd need to reinstall and reconfigure from a clean slate to get back to normal speeds. I can make it quick again by disabling a ton of services and features, but that just slows me down later in the process.


> Linux definitely can be fast, but in my experience it starts slowing down as you install more services and tools

There's a weird tendency to put things into multi-user.target to have them run at boot, even if they are not actually required for user login. There's no actual need to start openssh before gdm, but that's how distros set things up.


That's true, but I don't think GDM actually cares about OpenSSH.

On my machine (which I didn't alter from Ubuntu defaults) GDM runs after:

    system.slice switcheroo-control.service dbus.socket plymouth-quit.service console-setup.service cloud-config.service gpu-manager.service rc-local.service systemd-journald.socket systemd-user-sessions.service basic.target plymouth-start.service getty@tty1.service fwupd.service sysinit.target
according to systemctl show gdm.service --property=After

cloud-config is the weirdest one, but it has to do with locales and system configurations for automatically deployed systems (and after first install), I believe. getty@tty1.service is also strange, but doesn't even show up in systemd-analyze blame.

In my boot-analyze charts I don't really see that many problems with illogical start orders to be honest. systemd-networkd-wait-online seems to hold up a bunch of networking services but that depends on the rest of my network responding in time so I can't even blame it for that.


Have you tried running

    systemd-analyze
And then

    systemd-analyze critical-chain
To see what is slowing it down?

My system takes 10 seconds to boot apparently, with fairly minimal tuning (I boot from a USB drive so I try not to worry too much about this, hah!). But I don’t use a desktop environment or any of that sort of stuff, so it is probably not a good comparison.

Still, 45 seconds is pretty slow, I wonder if it is waiting for something that idles out; network interfaces or something like that.


20 seconds in firmware, 8 in kernel, 6 seconds waiting for systemd networks to load, a second for Docker and an encrypted LUKS drive are most of my boot process (and a thousand tiny systemd services, of course).

I can disable a few things here and there, but whatever hardware detection takes place in the kernel (I assume Nvidia-related things; it's always Nvidia, it seems) isn't really something I can change. I can try disabling more POST parts, maybe, and disable Docker on boot to take off another second, but to shave everything down I'll probably need to reinstall.


Your original post (which, I assume, was just kind of an off the cuff guess, so I’m not trying to “gotcha” here), had us at around 25 seconds in kernel+user. So the goal of half that is around 12.5 seconds. And you measured 14 seconds. So it seems like it is in the right ballpark? I can save about a second by disabling NetworkManager… I’m sure there’d be some way to defer it, but it is not worth playing with it IMO.

Out of curiosity did you do some tuning or was the boot just faster than you guessed (if it is the latter, let’s all celebrate the fact that our computers have gotten fast enough to confuse us, haha! When XP actually came out I was a teenager with junky hand-me-down hardware, 45 second boots would have been a dream…).


systemd-analyze plot gives me the following numbers:

Startup finished in 17.642s (firmware) + 9.467s (loader) + 20.168s (kernel) + 12.175s (userspace) = 59.453s

graphical.target reached after 12.102s in userspace.

Subtract the slow motherboard firmware (17.642s) and me typing in my password (about 5s, nowhere close to the 9s+20s that the loader+kernel is taking to boot before systemd kicks in) and I get about 45 seconds of "this is what Linux and tools is actually doing before I can log in".

All Windows versions from XP up to 7 were horrifically slow until modern SSDs came along. For years, you could take any Vista (or even XP) era computer, double or quadruple the RAM and insert an SSD, and it would feel like a completely new machine. You can still pick up a second-hand high-end Windows 7 computer for cheap and use it for what you would otherwise spend $500-700 on, after ripping out the hard drive and installing an SSD.

Having used Windows 7 from a HDD for years, I can tell you for sure that Windows 7 sure didn't boot this fast when I first got it :)


Idk your specifics, but modern Windows also essentially hibernates a clean memory state image before login and resumes from it anytime it does a cold boot (it discards it and does a “real” boot when you explicitly Restart). So that seems to be one way it gets a leg up in this contest.


I know, but I've disabled most fast boot tricks because rebooting into Linux became an issue.

It's a cool trick that reminds me of Firecracker. It would be nice to see Linux use that trick as well, but with the existing issues around Linux + hibernation + lockdown mode, I think it'll take a while before that's finished.


OSes not tailored for generic platforms can be built without all that conditional code that executes whenever this or that chipset is found, and without all those delays from not knowing how much time a chipset or peripheral will take to initialize, etc.


Read up on “raspberry pi fast boot” and you’ll learn a lot about the tricks that can be used for fast booting, and also why they aren’t the default.


I've been wondering how an OS draws a high-quality GUI.

I've done bare metal development and drawn simple graphics with a linear frame buffer, but for a UI like this or Windows, I'm wondering how the images that represent the UI components are generated.

I don't see any png files for window borders, but I do see code for setting pixels to the correct color for a theme.

Are UI components in an OS usually programmed and not created in an image editor and then tiled?


> Are UI components in an OS usually programmed and not created in an image editor and then tiled?

These days, it's almost always vector graphics. Used to be simple bitmaps. This OS in particular is using vector graphics.


If starting from scratch, PNG files may actually be more complex, since decoding is needed. On the other hand, once a drawing API is available, it is easier to map some form of vector graphics encoding which calls corresponding drawing instructions. At the same time, this would also solve scaling issues and the vector graphics assets also normally takes far less space.


PNG files are much, much more complex. Although you can slim it down quite a bit, libpng16.so.16.40.0 alone is 229.9KiB. The entire Kernel.esx of Essence is just 843.1KiB.


PNG decoding is little more than an implementation of DEFLATE and handling of the delta encoding. For your own OS you most likely do not have to be feature complete. As an example, picoPNG [1] decodes most .png images, and is only ~500 lines of beautiful C++ code. For most practical purposes, you can probably get away with much less.

[1] https://lodev.org/lodepng/
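To give a sense of scale: ignoring checksum verification, filters 1-4, interlacing, and palettes, the simplest PNG case really is just chunk parsing plus DEFLATE. A rough Python sketch (using only the standard library's zlib; this is my own illustration, not picoPNG):

```python
import struct
import zlib

def decode_png(data: bytes):
    """Minimal PNG decoder sketch: parses chunks, inflates the IDAT stream,
    and undoes only filter type 0 (None), assuming 8-bit grayscale. A real
    decoder must also handle filters 1-4, interlacing, palettes, CRCs, etc."""
    assert data[:8] == b'\x89PNG\r\n\x1a\n', "not a PNG"
    pos, idat, width = 8, b'', 0
    while pos < len(data):
        length, ctype = struct.unpack('>I4s', data[pos:pos+8])
        chunk = data[pos+8:pos+8+length]
        if ctype == b'IHDR':
            width, height, bitdepth, colortype = struct.unpack('>IIBB', chunk[:10])
        elif ctype == b'IDAT':
            idat += chunk
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    raw = zlib.decompress(idat)
    # One filter byte precedes each scanline; 8-bit grayscale -> 1 byte/pixel.
    stride = width + 1
    rows = []
    for i in range(0, len(raw), stride):
        assert raw[i] == 0, "only filter type 0 handled in this sketch"
        rows.append(raw[i+1:i+stride])
    return rows
```

For a hobby OS that controls its own assets, you can simply never emit the features the decoder doesn't handle; that is how picoPNG-style decoders stay small.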


Good observation: the link above specifically calls out:

> The user interface is completely vector-based

So these are all descriptions of how to draw the UI, as opposed to using raster images.


These days it's mostly hardware-accelerated (OpenGL) shaders.


in case you've somehow missed this concept until now:

https://en.wikipedia.org/wiki/Vector_graphics


Related:

Essence: Desktop operating system built from scratch - https://news.ycombinator.com/item?id=29950740 - Jan 2022 (290 comments)


Usually new OS efforts seem like they're too "boil the ocean" impractical. I like this. It immediately sticks out as something with some promise.

Tips:

(1) Add virtualization early. That way it'd be possible to run other OSes and thus familiar apps. In the long term a seamless form of virtualization similar to Parallels Coherence could be explored as a way to run foreign apps on their host OS. This would be a short-medium term solution to the 'no apps' problem that dogs any new OS effort.

(2) Limit hardware support scope by targeting a few target platforms. I'd suggest Linux-oriented laptop vendors like System76 and Framework, the Raspberry Pi, and a few other familiar things with related ethos.

(3) Look at using foreign drivers such as those from Linux or BSD via a compatibility API. This might let you support notoriously painful to support things like wifi cards.


Sorry for the aside, just in case you do not notice this from nakst's (the author of Essence) homepage, at https://nakst.gitlab.io/

> nakst's webpage // This site works best with JavaScript disabled

Great guy ;)


Oh! It's the person who made gf, the gdb frontend: https://github.com/nakst/gf

Small world.


You can see in the gf screenshots Essence as a target - calls of EsSomeFunction() in the assembly...


Looks great. I hope they keep developing it!

https://gitlab.com/nakst/essence/-/issues/20


> I am not working on it at the moment. But I would like to return to the project when I have more free time ;)


Thank you, yes I should have added that sentence for context :)


Impressive work!

And oddly, it would be even more impressive if they built their own Web Browser from scratch ...


SerenityOS is doing exactly that:

https://github.com/SerenityOS/serenity/tree/master/Ladybird

I also like their Jakt programming language:

https://github.com/SerenityOS/jakt

Though I'm more enthusiastic about Redox (doing it in Rust):

https://gitlab.redox-os.org/redox-os/redox/


Yeah, I'm a software developer myself and I've been blown away by the progress in Ladybird. I didn't think it would be possible in the modern day, with the web as complex as it is. Most of it has been built by the lead developer and a handful of others. But maybe HTML5/WHATWG and a better "standards landscape" has actually helped, countering some past hurdles of "Designed for Netscape Navigator". We still have some issues, but you can get very far by only rendering according to the standard. In fact, I think browser sniffing on websites to "adapt" does more harm than good. And you've got to do some pretty wonky stuff, and ought to think twice about your design, if you must do that stuff in 2023.


A chrome monoculture is hardly good for the software ecosystem, and I would like to encourage web-developers to test their webpages, apps and whatnot in ladybird, so that there is an actual libre-alternative!

On the subject, would anyone know if there is any Selenium / WebDriver support for Ladybird?


headless browser drivers are the devil. They are by far the slowest part of any test suite, they usually have to use hacks like setting up timers to wait (doing nothing and wasting precious dev time over and over again) while repeatedly checking for things like whether a dom element has finished setting up, and they run quite non-deterministically, resulting in random test fails, which are literally the worst.

Not only is it too soon for that, it's too evil for that.


I bet that you can use webdrivers badly and or for evil, but then again isn't that true for any non-useless tool?

Wouldn't it be helpful for the ladybird development team, to use to check for regressions?

Apart from that, if Ladybird could be used in a test suite, wouldn't that be massively beneficial for its adoption?

My thinking is that if it became praxis among hard core open source fanatics to test their webpages thoroughly in ladybird, it would after a while more or less become "best practice" to do so, with said fanatics serving as "early adopters".

What also speaks for this argument is that hard core open source fanatics, often go for simple and no-nonsense designs such as the very page we are using to discuss this on? I'm sure HN is perfectly usable in ladybird, even though I haven't checked it out myself!


I think it's a little too early for that?


I believe it has some WebDriver support, but I'm not sure how complete it is.


I've also been impressed by the progress in Ladybird. One point though is that there is an incredibly long tail of work in the major browsers related to OS-specific work, performance, browser-specific features, edge cases, legacy support, etc.

I'm not saying Ladybird "needs to" have all this, just putting some of it into context. And it's no slight to the project either; they've managed to get a lot working with a paucity of person-hours.


Redox is IMO trying to do too much at once. I'm not sure about Essence, but this seems to be a common failing of new OS efforts. A new kernel would be a monumental achievement. Even a new windowing system would be awesome. Reinventing everything else at the same time is madness. Hopefully it's fun, but it's certainly not conducive to success, afaict. Some things might be easier to reimplement than to port, but how many cases are like that? All in all I'd be more excited about a new OS that reuses more existing open source code, as it'd be likely to become usable sooner.


> rect.set_size(width: 640, height: 480)

I've apparently been "ruined" by functional languages because this sort of imperative mutational style of "make this object have these attributes" is just an instant turn-off. an equivalent functional style would look something more like:

> rect = Struct.merge_attr(rect, %{width: 640, height: 480})

(assuming you have the Elixir language feature of being able to re-use the same name, but underneath it's actually pointing to a new value; if not, you'd have to assign to a new name, but at least there'd be no ambiguity, and you could still continue to refer to the old name until it went out of scope)
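For comparison, the same contrast can be sketched in Python (my illustration of the commenter's point, not code from the article): a frozen dataclass forbids in-place mutation, so "resizing" produces a new value while the old one stays valid.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Rect:
    x: int = 0
    y: int = 0
    width: int = 0
    height: int = 0

r1 = Rect(x=10, y=10, width=320, height=240)
# Functional update: returns a new Rect; r1 is untouched.
r2 = replace(r1, width=640, height=480)

print(r1.width, r1.height)  # 320 240
print(r2.width, r2.height)  # 640 480
```

The benefit is the lack of ambiguity the parent mentions: no caller holding `r1` can be surprised by a mutation happening elsewhere, at the cost of allocating a new value per update.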


I’m confused, but I’m very very far from a functional expert. Can you explain a hat is better about that version? It looks longer and less instantly readable to me. I’m sure you functional fans aren’t simply madmen, so I’d just like to hear your perspective.


*a hat = what. Thanks phone.


They did mention the Mandelbrot set was made in Lisp.


This is really interesting!

> It can take less than 30MB of drive space

A modern display driver alone takes disk space/RAM on the order of 30MB, so Essence is really tiny. The probable downside is that it cannot utilise modern hardware.


It needs, at the very least, a vt100 terminal app, a C/C++/Rust compiler (ideally targeting LLVM so this can run on ARM or IA-64 or anything else), a decent code editor, and a basic web browser.


It looks pretty clean and sharp, awesome job! People who build OSes (and other low-level software) always have my respect, first of all because of their dedication and commitment.


This is really, really significant IMO. The idea of system-wide tabs has been in the back of my head for a while, as has moving away from the Linux kernel. I've messed around with it a bit and I'm very impressed.

Starting from scratch is awesome, and I'm firmly in the group that we should not strive for backwards compatibility at this stage, and instead aim to maximize intuitiveness for both developers and users.


This model of desktop can still be immensely useful. All we need to do is support only a few high level protocols.

E.g. twitter-like streams / RSS, simple high-level workflows for generic business purposes, and so on.

Browsers should explicitly not be supported. Layout options should be fixed and totally protocol-specific. (E.g. a twitter-like stream can only be displayed in limited ways.)


The GUI is reminiscent of BeOS https://en.wikipedia.org/wiki/BeOS or Haiku https://www.haiku-os.org/


Very nice! Hope to see more of it in the future. The window has old Chrome vibes, but that's not a bad thing.

> If you're interested in contributing, join our Discord server to discuss ideas with other developers.

Would be nice to have a dedicated forum (Discourse?) instead, but I get that it is too soon for this project.


I love the ideas and this is impressive progress for an indie OS. My only point of discontent is that every window looks like a tiny little Chrome window. Not a GUI look and feel I care for at all. There is, however, no accounting for taste, so I can't really fault the devs for that.


It's amusing to see that UI along with the phrase "An operating system that respects the user."


It has a theme editor. You edit 'res/Theme Source.dat', which gets built into 'res/Theme.dat'.


I would only install it if I absolutely had to install a fresh install of an OS. Fortunately/unfortunately, this happens around every three months when my computer inevitably breaks down, so I'll install this instead of Ubuntu.


Nice and detailed demo. What will take this from toy to a product to daily driver?

I love the idea of all application windows being able to be moved into tab groups. Are there window managers that do similar things?


> I love the idea of all application windows being able to be moved into tab groups. Are there window managers that do similar things?

Haiku Stack&Tile!

https://www.haiku-os.org/docs/userguide/en/gui.html#stack-ti...

A long time ago I implemented something like it (the stack part, not the tile part) in Awesome/Lua, so that floating windows could be tabbed together. Code is lost though. It was a bit too buggy anyway, pushing Awesome to its limits.


Pekwm[1] does this and is still actively developed. One of my favorite floating WMs.

1.:https://github.com/pekwm/pekwm


I think one of the old X11 window managers could mimic that behavior as well, when using the BeOS skin... It might've been KWin around the KDE3 series, but my memory might be failing me.


Stardock Groupy adds this functionality to Windows, and Microsoft briefly toyed with the idea themselves [1].

[1]: https://www.zdnet.com/article/microsoft-windows-10-testers-w...


I've used some third party tool on Windows, too. I've even tried disabling tabs in Firefox so that I could mix and match the “native” ones, but it didn't really work out.

Good times!


I'm not sure that anything like this would ever be suitable for a daily driver due to a lack of graphics drivers and the like. It would be amazing if there was an OS agnostic driver layer, or if we could somehow reuse the driver stack from Linux or Windows. As far as I'm aware nothing like that actually exists though.


It depends what you want to do for a "daily driver".

Basic things like writing documents, viewing images, and (limited) web browsing using a text-based browser would probably be perfectly fine with an unaccelerated framebuffer.

> It would be amazing if there was an OS agnostic driver layer

That's what things like VESA VBE are for.


The *BSDs import the Linux graphics drivers these days. A big chunk of the AMD GPU drivers is autogenerated from hardware description tables.


i3 at least has a tabbed container. I think it's common in tiling window managers?
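Yes — in i3 it's one command away. From i3's stock config (these are the default bindings, configurable in `~/.config/i3/config`):

```
# Turn the focused container into a tabbed container,
# similar in spirit to Essence's system-wide tabs.
bindsym $mod+w layout tabbed

# Go back to a normal split layout.
bindsym $mod+e layout toggle split
```

The difference from Essence is that this is a per-container layout mode rather than a first-class, drag-windows-together tabbing model.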


I vaguely remember fluxbox having such a feature.


Yes, mostly the reason why I switched from openbox to fluxbox back in the day.


fluxbox can do this, too.


The example "Hello World" program source code shows how simple the event-loop style of development looks here. I like it, and it does look like something that could be useful.

Thanks for sharing!
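I haven't dug into Essence's exact API, but the general shape of event-loop GUI code is roughly this. Sketched in Rust with made-up message names — it's an illustration of the pattern, not Essence's actual interface:

```rust
// Generic sketch of the event-loop/callback pattern; NOT Essence's
// actual API, just an illustration of the style.
#[derive(Debug)]
enum Message {
    Create,
    Paint,
    Click { x: i32, y: i32 },
    Destroy,
}

// One handler per window; the system delivers messages to it.
fn handle_message(msg: &Message) -> &'static str {
    match msg {
        Message::Create => "window created",
        Message::Paint => "painted",
        Message::Click { x, y } => {
            let _ = (x, y); // a real handler would use the coordinates
            "clicked"
        }
        Message::Destroy => "window destroyed",
    }
}

fn main() {
    // A real system blocks waiting for the next message;
    // here we just drain a fixed queue.
    let queue = [
        Message::Create,
        Message::Paint,
        Message::Click { x: 10, y: 20 },
        Message::Destroy,
    ];
    for msg in &queue {
        println!("{}", handle_message(msg));
    }
}
```

The appeal is that the whole program is just "receive message, react to it" — very little ceremony.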


It's simple and clean looking now, but with time and success, you will join the dark side.


Kudos to the developers. It looks gorgeous, and I eagerly await further progress. :)


Hmmm. Seems to be Intel-only (or at least I can't find an ARM64 branch)


OK, do yourself a favour and watch the intro video: https://www.youtube.com/watch?v=aGxt-tQ5BtM

I have NEVER seen an OS installation (and boot sequence) as fast as that - just incredible.


Very cool! Would this be portable to a Raspberry Pi?


The user creates windows for applications, rather than applications creating windows for themselves. Like in plan9/rio.


everything has tabs, righteous


Haven’t watched the video yet. What is the kernel philosophy? Is it a mono or microkernel?


I think any new OS should be written in a memory-safe language like Rust. And there are actually some in the works that use Rust, I think; I couldn't name names, but ...


That's too broad a statement.

But yes, if you're starting from scratch, that virtually begs to do something at the language level.

A quick look at the repo suggests this is coded in C++? Too bad - a missed opportunity imho.

Otherwise a very cool project!


Redox does, if you're into that.


Still not memory safe as soon as you do anything involving a JIT or hardware access. You need secure hardware and assembly language as well.
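A small Rust sketch of where that safety boundary sits. The "register" here is simulated with ordinary memory so the snippet is runnable; on real hardware it would be a fixed memory-mapped address:

```rust
// Sketch: hardware access escapes memory safety. A volatile write
// through a raw pointer must go inside `unsafe`, where the compiler's
// guarantees end. A JIT hits the same wall (writable + executable pages).
fn mmio_write(reg: *mut u32, value: u32) {
    // Volatile so the write is neither elided nor reordered away.
    unsafe { core::ptr::write_volatile(reg, value) }
}

fn main() {
    // Simulate a device register with ordinary memory so the example
    // is runnable; a real driver would use a mapped physical address.
    let mut fake_register: u32 = 0;
    mmio_write(&mut fake_register as *mut u32, 0xDEAD_BEEF);
    assert_eq!(fake_register, 0xDEAD_BEEF);
    println!("register = {:#010x}", fake_register);
}
```

Inside that `unsafe` block, it's on the programmer (and the hardware) to uphold the invariants the language can't check.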


Yeah, sure. Still, I think it's better to start with a more modern, safer language, even though it doesn't make everything automatically secure.



