But, one fun thing I could imagine doing is using it as an incredibly portable PirateBox. Or any other use of a file server hiding in plain sight.
Speaking of Beowulf, has there ever been an evolution of the concept? The closest I've seen since has been QNX's QNet, which allows transparent management of and communication between processes on nodes of the cluster. I suppose Hadoop or even Kubernetes can be seen as continuations of the concept?
In some ways this idea came to dominate. If you look at the top500 list:
All of these machines are big clusters running Linux. Mostly on Intel CPUs.
But on the other hand, the idea of using commodity hardware is kind of a thing of the past. It's mostly Xeon CPUs, not desktop processors. And it's specialized network hardware. And more and more you see dedicated compute hardware like Intel Phi and Nvidia Tesla cards.
And thinking of the main network room: with the number of Brocades in there, it's probably more expensive than the main enterprise pod in sheer super-expensive network gear alone.
We're also behind the times in lots of our management. 80% of our servers are bare metal, with limited automation. But we also do "NOC in a box"... many of our use cases wouldn't cleanly work right using tech like docker and kubernetes.
If you look through the archives of the beowulf mailing list, occasionally someone makes the argument you're making, and few people agree with it.
The rest of the non-phi/non-tesla hardware is pretty much off the shelf, but the interconnect is one of the two distinguishing features of a supercomputing-class cluster; the other is high-performance shared storage (which of course requires the interconnect to function).
and now to do machine learning...
I don't know if you'd call it an "evolution of the concept", but there are people who've made "low cost" clusters of Raspberry Pi boards (anywhere from four, to several hundred), not so much for practical purposes, but more for learning how to set up, use, and maintain such a system, without needing the space or power requirements a real system would need.
I also thought FirefoxOS would evolve to maybe get in this space, but I was mega wrong about that and lots of other things so there's that. I'm excited that Asteroid won't meet the same fate, but maybe I'm biased.
Also stumbled upon http://www.openembedded.org/wiki/Main_Page while looking at the repo for Asteroid. Excited to see what comes of this project, and maybe even to contribute in the future.
I know that places like Formlabs use it (source: interned there), and 100% agree with the sibling comment: there's a huge, painful learning curve to get started.
It's a combination of a lot of problems: what expertise level to write tutorials/walkthroughs for; decent documentation (that you think you understand but then realize, oh shit, no, I don't); knowing the ecosystems (man, the sheer F/OSS drama you can discover while searching for something...). All of these were problems I noticed just trying to extend our build system.
When the Raspberry Pi came out and only cost $25, it made me think I could write some relatively resilient/robust software, put it on an SD card, put it in a Pi, add a case, and sell useful hardware for $50. OE seems like a good step toward recovering some of the speed/efficiency losses that even the most lightweight Linux distros would force you into.
I think the end-user problem can be solved with extremely robust client-side installers and amazing instructions. If IKEA can get people to build furniture (even if badly), why can't we get a user who has booted an operating system on a running computer to flash a device? Most cases are the default case anyway - you usually don't have to change a ton of ADB/system settings to connect to most Android devices.
everyone bought a computer from an advertising (google) or fashion (apple) company that only runs in kiosk mode. how does your 90s self feel about that?
Pretty sure there is no GNU component in iOS, which is based on BSD. Also, Android uses only the Linux kernel; it's not even close to a GNU/Linux system.
Anyway, this misses the point of the FSF in insisting on calling it GNU/Linux.
The point of the GNU is not to name every application that runs on your system, but to say that you're running the Linux kernel AND the GNU userland to have a functional system, i.e. that GNU is the second half of a complete system, GNU/Linux.
You don't need Apache to have a functional system. You do need libc etc. to have a functional system and when these are provided by GNU, I think it's fair to call it GNU/Linux.
> The point of the GNU is not to name every application that runs on your system, but to say that you're running the Linux kernel AND the GNU userland to have a functional system, i.e. that GNU is the second half of a complete system, GNU/Linux.
I thought the point was to bring attention to the idea of Free Software as a philosophy, whereas Open Source is more of a marketing tool.
Linux is not a GNU package and hence not "GNU Linux"; however, when a system includes the GNU userland, it runs many GNU packages, hence GNU (userland) / Linux (kernel).
RMS is not taking credit for Linux and does not want anyone to call the kernel GNU/Linux, but rather the whole system IF it is using GNU userland.
In Windows there is lots of software not written by Microsoft. Then there's the NT kernel (which by itself does not make an OS) and the userland (which together with the kernel makes a basic OS that you can install all the other nice pieces onto to have a great experience). But the core is NT + userland.
No, Richard, it's 'Linux', not 'GNU/Linux'. The most important contributions that the FSF made to Linux were the creation of the GPL and the GCC compiler. Those are fine and inspired products. GCC is a monumental achievement and has earned you, RMS, and the Free Software Foundation countless kudos and much appreciation.
Following are some reasons for you to mull over, including some already answered in your FAQ.
One guy, Linus Torvalds, used GCC to make his operating system (yes, Linux is an OS -- more on this later). He named it 'Linux' with a little help from his friends. Why doesn't he call it GNU/Linux? Because he wrote it, with more help from his friends, not you. You named your stuff, I named my stuff -- including the software I wrote using GCC -- and Linus named his stuff. The proper name is Linux because Linus Torvalds says so. Linus has spoken. Accept his authority. To do otherwise is to become a nag. You don't want to be known as a nag, do you?
(An operating system) != (a distribution). Linux is an operating system. By my definition, an operating system is that software which provides and limits access to hardware resources on a computer. That definition applies wherever you see Linux in use. However, Linux is usually distributed with a collection of utilities and applications to make it easily configurable as a desktop system, a server, a development box, or a graphics workstation, or whatever the user needs. In such a configuration, we have a Linux (based) distribution. Therein lies your strongest argument for the unwieldy title 'GNU/Linux' (when said bundled software is largely from the FSF). Go bug the distribution makers on that one. Take your beef to Red Hat, Mandrake, and Slackware. At least there you have an argument. Linux alone is an operating system that can be used in various applications without any GNU software whatsoever. Embedded applications come to mind as an obvious example.
Next, even if we limit the GNU/Linux title to the GNU-based Linux distributions, we run into another obvious problem. XFree86 may well be more important to a particular Linux installation than the sum of all the GNU contributions. More properly, shouldn't the distribution be called XFree86/Linux? Or, at a minimum, XFree86/GNU/Linux? Of course, it would be rather arbitrary to draw the line there when many other fine contributions go unlisted. Yes, I know you've heard this one before. Get used to it. You'll keep hearing it until you can cleanly counter it.
You seem to like the lines-of-code metric. There are many lines of GNU code in a typical Linux distribution. You seem to suggest that (more LOC) == (more important). However, I submit to you that raw LOC numbers do not directly correlate with importance. I would suggest that clock cycles spent on code is a better metric. For example, if my system spends 90% of its time executing XFree86 code, XFree86 is probably the single most important collection of code on my system. Even if I loaded ten times as many lines of useless bloatware on my system and I never executed that bloatware, it certainly isn't more important code than XFree86. Obviously, this metric isn't perfect either, but LOC really, really sucks. Please refrain from using it ever again in supporting any argument.
Last, I'd like to point out that we Linux and GNU users shouldn't be fighting among ourselves over naming other people's software. But what the heck, I'm in a bad mood now. I think I'm feeling sufficiently obnoxious to make the point that GCC is so very famous and, yes, so very useful only because Linux was developed. In a show of proper respect and gratitude, shouldn't you and everyone refer to GCC as 'the Linux compiler'? Or at least, 'Linux GCC'? Seriously, where would your masterpiece be without Linux? Languishing with the HURD?
If there is a moral buried in this rant, maybe it is this:
Be grateful for your abilities and your incredible success and your considerable fame. Continue to use that success and fame for good, not evil. Also, be especially grateful for Linux' huge contribution to that success. You, RMS, the Free Software Foundation, and GNU software have reached their current high profiles largely on the back of Linux. You have changed the world. Now, go forth and don't be a nag.
Thanks for listening.
In Debian, the Linux kernel is just one of many optional packages. Replace it with BSD and you still have Debian the operating system, running on a BSD kernel.
Some people call it a Linux distribution, but that's incorrect. It's a software distribution, similar to how Apple distributes software through the App Store and Microsoft distributes software through the Windows Store. Linking the kernel to the distribution makes sense if the distribution only supports a single kernel, but that's not true any more. Debian is no more a Linux operating system than it is a BSD operating system or a Hurd operating system. Debian is, however, an operating system.
If there is one thing I wish people would do, it is to stop confusing the role of a kernel with the role of an operating system. I don't go to kernel.org and expect to get a full-blown operating system to install on my laptop. I don't tell people to go there when suggesting an alternative to Windows and Mac. Nothing that people use to distinguish which operating system they're currently running involves the kernel, and one does not talk about kernel code when recommending that people switch from one operating system to another.
To say a few more words about Debian: the operating system has targets for multiple architectures, multiple kernels, and multiple platforms/hardware. Some treat them as four different operating systems, i.e. "Debian GNU/Linux", "Debian GNU/Hurd", "Debian GNU/kFreeBSD", and "Debian GNU/NetBSD". It looks silly, and it's the same software in all of them unless you do things very close to the hardware.
The proper name of "Linux" (the kernel) is indeed Linux, because Linus has said so and everyone, including RMS, agrees. No one insists on calling Linux (the kernel) GNU/Linux, as there's no GNU in there and it would be pretty silly to insist otherwise.
Also, GCC is hardly the only critical GNU component that modern GNU/Linux systems rely on. But even so, it's not that anyone wants to name your program GNU/something just because it was compiled with GCC; rather, it's that Linux is the kernel and GNU is the userland. To make a functional system, you need a kernel (Linux) and a userland (i.e. GNU), so if you're using both components and one is called GNU and the other Linux, it's fair to call the result GNU/Linux.
The reason RMS wants people to do this is not to take more credit for himself than is due, but to bring more attention to "free software", (which GNU promotes), as opposed to just "open-source" (which Linus promotes).
i'm saying, shut the fuck up with this pedantic shit and make something that people want. our competitors are multi-billion dollar companies. we can't just promote ideas, we actually have to fight head to head. most people aren't ideologically driven, they just buy whatever seems best/most convenient.
focus on actual measurable things like marketshare. how many people are getting the four freedoms? that's the goal, right? so measure it. free software has benefits, but they aren't being marketed aggressively enough to actually reach consumers. we have all of the pieces, but no vision or marketing strategy.
strong copyleft provides equal protection for IP as proprietary licensing, especially when you consider AGPL. charge for shit. make it sexy. whatever you have to do to make money and spread free software. sue over license infringement. fight, damnit!
RMS a nag? No way, I can't possibly believe that.
GCC became relevant when UNIX vendors, initially Sun, decided to sell the developer tools instead of bundling them for free.
So the '80s hipsters that had largely ignored GCC decided to contribute to its development instead of paying UNIX vendors for their tools.
Long before Linux was even an idea.
GNU contains a lot of non-GNU-written software because, to create a fully free OS, the only portions that needed to be written were the portions that did not have free replacements.
All the “Linux” distributions are actually versions of the GNU system with Linux as the kernel. The purpose of the term “GNU/Linux” is to communicate this point. To develop one new distribution and call that alone “GNU/Linux” would obscure the point we want to make.
As for developing a distribution of GNU/Linux, we already did this once, when we funded the early development of Debian GNU/Linux. To do it again now does not seem useful; it would be a lot of work, and unless the new distribution had substantial practical advantages over other distributions, it would serve no purpose.
Instead we help the developers of 100% free GNU/Linux distributions, such as gNewSense and Ututo.
I would require root for most things I do on a full Linux computer, such as running network diagnostics, using special kernel drivers, and properly debugging programs.
The problem is not that some apps "do not require root"; the problem is that every time the user does need root, they are denied! Because only the advertising company has root access to their pocket (and wrist!) computers.
I remember there were some efforts to port it, first to Intel's compiler and then to Clang - but AFAIK the official stance of upstream Linux is "we use GCC, get over it"?
And yes, iOS is BSD-based. You are right again.
But in the end, your reply is completely off-topic and misses the core issue of my comment :)
My desktop machine is still a desktop machine.
I did have a Master System, but still, I mostly played with my father's MSX and XT computers, which had better games anyway, and where I learned to edit hex values and cheat on my save games, and later on to write simple QBasic games.
Sure, I have a very awesome computer that cost hundreds of dollars but I can't really use it because... somebody said so?
Zsh, bash, make, clang, SSH, Python, vim...
Their package selection is small compared to Debian's, but still quite nice.
But it hurts as a power user.
1 - https://f-droid.org/repository/browse/?fdfilter=termux&fdid=...
It would be great if my phone was completely under my control. But my priorities are getting my communication protocols back, and avoiding losing my desktops and servers. The phone can wait.
It's a knife to the heart when you put it that way.
The Linux fork used on Android and the set of official NDK APIs make it so that Google can, at any Android release, change the kernel for something else, and only OEMs or devs using forbidden APIs will notice.
This type of attitude just discourages people from ever wanting to leave the so-called kiosk mode.
Remembering the "runs on a toaster" shirts I am now curious if NetBSD (or any BSD) will run on it. The thought that I never even considered messing with the watch makes me a bit sad (I've turned into too much of a consumer, not enough tinkerer left :P)
At its most basic, just a notification light that mirrors the one on my phone/tablet.
Ideally, I think a ticker-tape-style circular display around the edge of the (real mechanical) watch face to give notification headings would be awesome.
Weirdly the tap to click stops working after connecting to the MacBook using VNC, even though if you open System Preferences and look at the touchpad options, it believes it is enabled with tap-to-click. They've said it is fixed about 10 times now - not sure if it is. Must retest.
There was a viral Flash(?) game a few years ago involving a frog sticking out its tongue to trap insects; the catch for the game was that it had no help, everything in the UI was discoverable, but barely.
Unless, of course, you used tap to click, which wasn't registered by the game. I spent 5 minutes trying to play before deciding the whole thing must be a hoax.
"AsteroidOS is built upon a rock-solid base system. Qt 5.6 and QML are used for fast and easy app development. OpenEmbedded provides a full GNU/Linux distribution and libhybris allows easy porting to most Android and Android Wear watches."
AFAIK, Hybris isn't just for the GPU; it's used to port various binary Android drivers to Ubuntu Touch, Sailfish, Tizen, LuneOS, etc., as can be seen from the following chart:
No idea why Tizen needs it though. Samsung can afford writing normal drivers for all their hardware.
So there's clearly a market for some sort of wrist-device that makes using your phone easier.
The thing that makes it feel like a stupid fad is when you have to charge it every day and therefore forget to put it on. It hasn't become habitual quite yet.
Which is why I love my smartwatch for having an e-ink display and not an amoled display. So even after more than a year of operation I still only charge it once a week.
Yep, pretty amazing that a quad-core 1.2GHz machine with half a gig of RAM can run more than one thing at a time!
I see it as a way for mobile OEMs to sell more electronics, now that everyone and their dog owns a mobile phone and a tablet and doesn't plan to buy new ones anytime soon.
Being able to glance at my wrist and see if I need to get my phone out is pretty nice. I don't try to be inconspicuous in meetings or anything, but when I'm doing something, or walking down the street, etc., it's nice to be able to decide if I need to stop and handle it (like a call from my wife or the daycare), or can ignore it, like an email from a newsletter.
And with 2FA everywhere, it's nice to have a standalone token generator that I can wear. The one on my Pebble is strictly offline operation (doesn't need the phone connected), which means it's even useful if I break/lose my phone or the battery's dead or something.
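What the parent describes sounds like TOTP (RFC 6238): all the watch needs is a shared secret and a reasonably accurate clock, no radio required. Here's a minimal sketch of the algorithm in Python (the secret below is the RFC's published test value, not anything from a real Pebble app):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step  # which 30-second window we're in
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of the MAC picks 4 bytes out of it
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238/4226 test secret; at T=59s this yields "287082"
print(totp(b"12345678901234567890", 59))
```

As far as I know, most watch-side 2FA apps implement exactly this, differing mainly in how the secret gets provisioned onto the device.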
Music control is pretty cool as well. But mostly for things like when I'm doing the dishes, swimming in the pool/at the beach, or in the shower. My Pebble is fully waterproof, so I only take it off to charge every few days. So I can stream music from my phone to a bluetooth speaker, and control it without needing to handle my phone.
For it to be usable, it should not require sight, and higher typing speeds should be achievable by training. This rules out virtual QWERTY keyboards and predictive suggestions. I think glyph recognition would be usable, but other gesture systems would also work if they don't demand supervision by sight.
Downside is that it requires training to learn the symbols, and of course without sight or other feedback you don't notice mistakes while doing it (same with keyboards though).
edit: I forgot to mention Minuum. It's quite similar, but compresses all the keys into a single row. Consequently, it doesn't seem to be as accurate.
Maybe you could compensate a bit with some haptic feedback.
Although I think a better input method would be voice (like the GoPro cameras) for most use cases. "Watch, send text to Mom...." Morse Code is just too slow and most humans can no longer send or copy it. It should be the input method of last resort.
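For what it's worth, the software side of a Morse input method is trivial; the training burden is entirely on the human. A toy decoder, assuming the watch has already turned taps into dot/dash symbols (letters only; digits and punctuation omitted for brevity):

```python
# ITU Morse table, letters only
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(taps):
    """Decode a tap string: letters separated by spaces, words by ' / '."""
    words = taps.strip().split(" / ")
    return " ".join(
        "".join(MORSE.get(sym, "?") for sym in word.split())
        for word in words
    )

print(decode("... --- ..."))  # SOS
```

The real engineering problem is upstream of this: reliably segmenting wrist taps into dots, dashes, and letter gaps without visual feedback.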
In reality, I think the author just did it for its own sake:
> I ended up with a free LG Watch Urbane ... I realized that smartwatches were just a fad (for me at least), and this was a device I could experiment with.
But an issue is power usage. E.g. Ubuntu runs on a smartphone, but with much shorter battery life than Android. (Though to be fair, I don't know the power efficiency of AsteroidOS.)
One side benefit of non-root Linux (e.g. Termux, Terminal IDE) is retaining battery life.
However, Asteroid OS is open source, which counts for a lot!
Still, the battery isn't like the battery in a traditional watch, so at some point I won't just be able to order a button cell on Amazon and swap it out like I could on any old Timex, etc. That definitely hurts the lifespan, and it's a major reason I bought this watch only because I got it on sale for $100. I wouldn't feel as comfortable buying a $500 watch that would be forever battery-dead in 3-4 years.
The battery lasts for around two days.
I've heard such converters are the hardware equivalent of running the unaccelerated VESA driver, though, due to the low bandwidth. I don't expect it would do 60fps beyond 1152x864, or 30fps beyond 1280x1024.
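Some back-of-the-envelope arithmetic supports that intuition: uncompressed RGB at those modes is on the order of a gigabit per second of pixel payload alone, before blanking intervals and link-encoding overhead, which is a lot to push through a cheap converter:

```python
def raw_video_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed pixel payload only -- real links add blanking/encoding overhead."""
    return width * height * fps * bits_per_pixel / 1e6

for w, h, fps in [(1152, 864, 60), (1280, 1024, 30)]:
    print(f"{w}x{h}@{fps}: {raw_video_mbps(w, h, fps):.0f} Mbit/s")
```

So both of the quoted limits sit just under ~1.5 Gbit/s of raw payload, which is consistent with a single cheap serial link being the bottleneck.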
Thanks for the info, I'd always wondered.
Kinda sad none of them spent the extra effort on building a differential update protocol of some kind - but then the processor inside would probably need to be 250MHz+...
Though I don't think MHL is open source, and I'm probably completely wrong in thinking the BT hardware on that could be used with the Bluetooth host stack.
Four cores, which makes a difference.
> what's the battery life?
It might not be as bad as feared, at least regarding the CPU at idle. Many modern CPUs support turning cores individually on/off (or at least putting them into very low-power sleep states) as needed, and if the OS scheduler is bright enough, taking advantage of this can be a lot more power-efficient than fiddling with variable clock rates. There might be a performance hit for single-threaded tasks, of course, as per-core performance might be low. But at the times when you care (while in active use, interacting with an app) there will be at least three distinct tasks going on: core function management, display management, and at least one user task. While the watch is idle there will be just one active task most of the time, so only one core needs to be powered up (or none most of the time, with device-management tasks and user apps that respond to events/notifications waking only on interrupt).
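On Linux, this per-core power gating is visible (and, with root, controllable) through sysfs. A rough sketch, with the sysfs root parameterised purely so it can be pointed at a test directory; note that cpu0 usually cannot be offlined and so has no `online` file:

```python
from pathlib import Path

def online_cpus(root="/sys/devices/system/cpu"):
    """Map each cpuN directory to whether that core is currently online."""
    state = {}
    for node in sorted(Path(root).glob("cpu[0-9]*")):
        flag = node / "online"  # missing for the boot CPU on most kernels
        state[node.name] = flag.read_text().strip() == "1" if flag.exists() else True
    return state

def set_cpu_online(n, on, root="/sys/devices/system/cpu"):
    # Needs root; the scheduler migrates tasks away before the core powers down.
    (Path(root) / f"cpu{n}" / "online").write_text("1" if on else "0")
```

On a real device, a governor (or the kernel's own cpuidle/hotplug logic) would be making these decisions automatically; poking sysfs by hand is mostly useful for experiments and power measurements.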
Having said that, having to charge it at least once most days is my only major complaint with the MS Band that I wear most of the time. It would be interesting to see how well this manages in that regard.
Sure, the author doesn't confirm this nor expand on it, but I'd say his comment is sufficient.
I think what the original question was getting at was more like "what is the battery life of the watch when running AsteroidOS?" That seems like a pretty fair question which was not addressed.
$ watch --interval 0.1 --no-title date
I'll admit that it's not the design choice I would have made, but then again, I'm not a designer!
Even ignoring technical considerations (the dizzying amount of code and cruft required to run a watch), it goes against one time-honored watch tradition: simple, elegant mechanics.
So, just like every digital watch ever created?
Seriously though - stop getting hung up on the word 'watch'; it's not the real point here. Nobody is arguing that you need a full-blown OS just to tell the time.
> simple, elegant mechanics.
Have you ever looked inside a watch!
But "simple"? It's telling that the term for each feature on a watchface is "complication" :-D
When I last tried hacking my Moto 360 it was possible to get Debian running in a chroot reasonably easily.
The trouble came mostly with video access. The userland graphics libs are all compiled against BIONIC rather than glibc. And they were at the time only available in compiled form. That meant it wasn't really possible to have a clean glibc system.
I guess either something has changed, or they're using a hack, incorporating BIONIC, which is what many people have done on other mobile platforms.
Very neat though, I'm going to have to try this out!
Thanks for the bug report btw. I'll fix it when I'm thinking straight in the morning!
Also yeah Nexus 4 isn't bad, too bad they don't update it anymore :(
edit: there are even more huge div's that have the same background, so there are multiple layers of gradients with hue blend mode.
The guy mentions how "Lennart Poettering would love it!" as the h2, and also describes X11 as "legacy".
With these in mind I feared he was serious about the systemd bit.
I'm really sad X11 is legacy software myself, as an aside. It's a disaster, sure, but now we have one more layer of "uhhhh..." for all the UX-types to get scared away by: it used to be "(WinAPI) vs ((Qt)/(GTK+)/(Xlib/XCB))", which was embarrassing enough; now it's "(WinAPI) vs (X11((Qt)/(GTK+)/(Xlib/XCB))/Wayland((Qt)/(GTK+)/(???)))", which is just plain annoying for low-level graphics hacker wannabes - I can make a WinAPI app in C that opens a window in a few KB, whereas to do that on Linux now I HAVE to support XCB and also write my own tiny UI for Wayland.
Practically speaking it means that most developers will just pick a side^H^H^H^Htoolkit and go with that. It doesn't help that I've never been able to get past Qt's love of background processes vs. GTK's various quirks.
sighs...rant over, situation accepted a bit more.
systemd is still a disaster though. I saw a massive 3Wx5H 1080p video wall in a shop window the other day, displaying... systemd emergency mode.
At least I learned that some video stretchers are smart and will drop the panels they're controlling into standby if they display black for too long. (Only the two panels at the top-left displaying the error were on, the others visibly had their backlights off. Neat.)
Actually, I think this is the situation that led to the growth of webapps, and it probably helped the decline (or failure to rise) of Windows Phone - no one had a clue where MS was going.
Windows Forms is officially dead as communicated at Build a few years ago. It is now playing chess with Carbon.
MFC is officially on life support. Way forward for C++ developers is UWP.
Everything from Win32 that isn't required for UWP support is deprecated, and Project Centennial is the official way to bring Win32 applications into the new shining UWP world.
TIL about this aspect of the bigger picture. I'm a bit behind on where Windows is at in the grand scheme of things nowadays.
It's worse than that: you will quite possibly need features that are implemented not by Wayland but by each different desktop environment, through different APIs, since Wayland ditched many X11 (plus standardised extensions) features.
Whatever we're left with will create quite an interesting ecosystem; here's hoping it's not too much of a political disaster.
For me, that means hoping Qt keeps up at the end of the day; it's been far superior to GTK in every way IMHO for some time.
Wayland doesn't change this. Once Wayland is adopted, the X server will become a Wayland client, and X clients will connect to the X server as usual. You don't have to write a native Wayland application if you don't want to.
But another commentator noted how Wayland doesn't provide X11-standard functionality (https://news.ycombinator.com/item?id=13346877).
I fear that X11 will eventually be installed (and possibly even available) in fewer and fewer environments in the long term.
So 10 years from now it'll be interesting to see where things are at. Hopefully things haven't devolved too far.
Thanks very much for that feedback, I'll keep it in mind.
If I had to use one word to describe systemd's integration and adoption into the Linux ecosystem it would have to be "hostile" - the label has unfortunately been applicable in both directions.
Most of the feathers flew around 2012 when the major Linux distributions adopted systemd as their default init system, irreversibly pulling in all of systemd's system management policies as well, many of which were poorly designed.
Several big names in the Linux community (Linus Torvalds and Greg Kroah-Hartman, to name two) have had heated discussions with Lennart Poettering and other people behind systemd about major bugs, design flaws and policy integration issues, with the systemd response consistently being "the way we're doing it is the right way, no patches will be accepted, go away" even when shown multiple times that something contravenes design best practices or tradition (aka principle of least surprise).
For this reason I dislike systemd's highly bureaucratic "manglement" style, and am very sad that all major distributions have adopted it so widely. systemd uses a very dictatorial approach which makes it very very hard to use any other init system without nontrivial and obscure system reconfiguration.
I understand Lennart also built PulseAudio and got it integrated into pretty much all Linux distributions. PA works well now, but if it's having a bad day and I really need sound working in a pinch, I can just kill it and use ALSA/OSS directly.
systemd categorically isn't like that, because it's (ostensibly) an init system. However, it comes with so many extra "side features" (which an increasing number of things depend on) that temporarily shoving it out of the way became impossible very quickly, and before any real documentation was established. I think it's understandable that a large part of the Linux community has growled and snarled when presented with this set of circumstances.
Nowadays systemd is pretty much part of the woodwork, but the communication and social issues continue.
The first reply to a previous comment I made about systemd was extremely enlightening to read: https://news.ycombinator.com/item?id=12877934
I observe that systemd has a plethora of other subsystems like the ones you mention, including a DHCP server. Yes, a DHCP server.
I do not understand it.
Edit: Yes, I know Fedora 7 was ancient. Just my memories. I think the fact that PA was broken in it got fixed pretty swiftly, from memory. But I was plagued with glitchy audio in releases after this - could be my incredibly lame hardware at the time (though it worked fine with ALSA).
The problem with systemd's NTP and DHCP and whatnot is that they use their own systemd-specific APIs. Not using the APIs means that you don't talk to those components. And the thing is, if you're on a systemd-based system (which you can generally assume* to be the case now), you can 100% depend on those components absolutely, definitely existing, regardless of whatever else is(n't) installed.
(* Unless your users are using Slackware (hi there :D), Devuan or something like that.)
So of course things are beginning to depend on those services' APIs.
Which are exposed via D-Bus. ("Desktop"-Bus. On servers. Facepalm, Inc.)
Now, I do understand that when you use systemd-nspawn or LXC or Docker or whatever else you can generally assume that these components will interoperate and that's why they were implemented. That's the theory.
In practice, things... don't work out so well. This was on here a couple days ago: https://thehftguy.com/2016/11/01/docker-in-production-an-his...
Damn it, they have a web server in there for the sole reason of displaying a QR code for the initial log signing key. A signing system that apparently Poettering's brother came up with as a doctorate thesis, with systemd-journald being the only implementation (that I know of).
BTW, these days you find dbus inside the initramfs. Because systemd needs it to be present during bootstrap. After systemd-pid1 is up, it will kill the initramfs version and fire up the one from the HDD instead.
There are times I wonder if the Fedora maintainers grit their teeth and play along with Poettering and crew because they have the same paymasters.
> Damn it, they have a web server in there for the sole reason of displaying a QR code for the initial log signing key. A signing system that apparently Poettering's brother came up with as a doctorate thesis, with systemd-journald being the only implementation (that I know of).
Okay, that I didn't know.
Actually let me read that backwards...
> log signing key
What on earth? Is the log encrypted?
> QR code
How are QR codes relevant to encryption?
> web server
Why do I need a WEB SERVER to display a QR code?! Uh... I can get displaying a QR code on the screen, sure. But... I get the impression you mean the QR code is served over a web server?
Oh. For headless boxes. But... why display a QR code, again? Why not just serve the log signing key itself? QR codes aren't encryption (just a good week's worth of reading on error-correction).
> BTW, these days you find dbus inside the initramfs. Because systemd needs it to be present during bootstrap. After systemd-pid1 is up, it will kill the initramfs version and fire up the one from the HDD instead.
Mmm. Because all of its APIs are delivered as D-Bus (desktop-bus) services. I totally get that, but... aghhh. Why not even ZeroMQ :(
> There are times i wonder if the Fedora maintainers grit their teeth and play along with Poettering and crew because they have the same paymasters.
Unless things have changed, Linus Torvalds uses Fedora. He's had a lot to say about things.
I would be very very surprised if there wasn't a noteworthy bunch of mental-pitchfork-wielders.
Meaning that the first key is used to sign a new key that signs the journal entry, and the next key signs yet another key and entry, etc. And by having the initial key handy, one can at any time walk through the journal to verify that it has not been tampered with.
The whole QR thing is there to allow a would-be admin to quickly transfer the initial key to their smartphone or similar by scanning the code.
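For the curious, the key-evolution idea can be sketched in a few lines. This is only a toy illustration of the concept, not journald's actual Forward Secure Sealing scheme (which uses fancier cryptography); all function names here are made up:

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    # Derive the next epoch's key one-way; the old key is then discarded,
    # so a later attacker can't re-seal earlier entries.
    return hashlib.sha256(b"evolve" + key).digest()

def seal(key: bytes, entry: bytes) -> bytes:
    return hmac.new(key, entry, hashlib.sha256).digest()

def seal_log(initial_key: bytes, entries):
    key, sealed = initial_key, []
    for e in entries:
        sealed.append((e, seal(key, e)))
        key = evolve(key)  # forward security: move to the next key
    return sealed

def verify_log(initial_key: bytes, sealed):
    # Whoever holds the initial key (e.g. via the QR code on their phone)
    # can replay the key schedule and check every entry.
    key = initial_key
    for e, tag in sealed:
        if not hmac.compare_digest(tag, seal(key, e)):
            return False
        key = evolve(key)
    return True

log = seal_log(b"initial-key", [b"boot", b"login", b"shutdown"])
assert verify_log(b"initial-key", log)
log[1] = (b"tampered", log[1][1])   # alter one entry, keep its old seal
assert not verify_log(b"initial-key", log)
```

The point of the QR-to-smartphone step is exactly that the verification key leaves the machine, so tampering with the on-disk log can't also fix up the seals.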
As for Torvalds being a Fedora user, my impression is that his usage needs are fairly modest these days. He spends most days reading emails via Gmail and approving commits to the kernel code housed on the kernel.org servers.
It's almost sad systemd has some good points. Heh.
I vaguely recall a video that noted where Torvalds was at nowadays; he seems to mostly be in administration/management now, as opposed to low-level hacking. Must be an interesting position to be in.
Historically, it's the difference between monolithic and micro(kernel) design. The Linux kernel is not just a layer between the application and the hardware; it also supports a bunch of extra things which the project wants built in rather than shipped as optional libraries. There is no TCP/IP or AES library, though unsupported library alternatives to both exist.
If you wonder why systemd has a DHCP implementation, ask why the TCP/IP stack doesn't.
They'd probably go along the lines of saying that there are already millions of lines of code in there and adding these types of features would add to the codebase size and permanent maintenance requirements.
But it would be really cool if all of these kinds of high-level features were available, yeah...
Linux is turning into something of a hybrid kernel, or will perhaps emerge as a micro-kernel given time.
And it inevitably always is, since if you're generalizing all system operations onto a single bus, that bus would either need to support some generic form of contextualization hinting, or have some kind of theorem-solver-inspired system to determine which requests have no dependencies. I suspect Minix incorporates neither approach...
The problem I see is the need to put "these are audio frames" in a different queue than "here are filesystem request packets". (Ideally the filesystem queue would itself allow further sharding, since most filesystems are multithreaded now.)
Writing such a generalized queue sounds like a rather fun exercise to me.
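For what it's worth, here's roughly the shape I have in mind, as a toy sketch (the class names and the shard-by-key policy are just my own illustrative assumptions, not anything from a real kernel):

```python
import queue

class ClassifiedQueue:
    """Toy multi-class message queue: each traffic class ("audio",
    "fs", ...) gets its own queue, and a class may be sharded,
    e.g. per-thread filesystem shards."""

    def __init__(self, classes):
        # classes: {class_name: shard_count}
        self.shards = {name: [queue.Queue() for _ in range(n)]
                       for name, n in classes.items()}

    def put(self, cls, msg, key=0):
        # Hash-style placement: the same key always lands in the same
        # shard, so per-file ordering is preserved within a shard.
        shards = self.shards[cls]
        shards[key % len(shards)].put(msg)

    def get(self, cls, shard=0):
        # Raises queue.Empty if the shard has nothing pending.
        return self.shards[cls][shard].get_nowait()

mq = ClassifiedQueue({"audio": 1, "fs": 4})
mq.put("audio", "frame-0")
mq.put("fs", "read(/etc/passwd)", key=7)   # 7 % 4 == shard 3
assert mq.get("audio") == "frame-0"
assert mq.get("fs", shard=3) == "read(/etc/passwd)"
```

The interesting (hard) part is everything this sketch punts on: priorities across classes, backpressure, and who gets woken when.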
That said, if any such implementations are out there or there are any counter-arguments to make to this, I'd love to hear them. I mean, AFAIK Mach is a microkernel, so it's clearly solved some of this.
If you need a queue with particular properties you write one, as its own userspace process (or system of cooperating processes). The kernel dispatcher isn't assumed to be a fully general messaging system.
I am wondering about one thing though.
> there's simply not a lot of computation involved
Wow, 26 instructions.
Here's my worst-case scenario: you have 8 concurrent threads (a current reality on POWER8), and let's say all of them are engaged in fetching large amounts of data from different servers - let's say disk and TCP I/O are both servers.
I'm genuinely curious how well a 26-instruction-but-singlethreaded message passing system would hold up. (I honestly don't know.)
Worst case scenario, the cache and branch predictor would perpetually resemble tic-tac-toe after an earthquake.
I think it would be genuinely interesting to throw some real-world workloads at Minix, Hurd, etc, and see how they hold up.
Now I'm wondering about ways to preprocess gcc's asm output to add runtime high-resolution function timing information that (eg) just writes elapsed clock ticks to a preallocated memory location (within the kernel)... and then a userspace process to periodically read+flush that area...
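Something like the following, though as a userspace Python sketch rather than injected asm - the decorator stands in for the entry/exit stubs you'd splice into the assembly, and the buffer/drain names are all mine:

```python
import time
from collections import deque

TIMINGS = deque()  # stands in for the preallocated memory region

def timed(fn):
    # Stand-in for the injected entry/exit stubs: record
    # (name, elapsed ns) into the shared buffer, minimal work inline.
    def wrapper(*args, **kwargs):
        start = time.perf_counter_ns()
        try:
            return fn(*args, **kwargs)
        finally:
            TIMINGS.append((fn.__name__, time.perf_counter_ns() - start))
    return wrapper

def drain():
    # The "userspace process" that periodically reads and flushes the area.
    out = list(TIMINGS)
    TIMINGS.clear()
    return out

@timed
def busy():
    return sum(range(1000))

busy(); busy()
samples = drain()
assert [name for name, _ in samples] == ["busy", "busy"]
assert all(ns >= 0 for _, ns in samples)
assert drain() == []  # buffer was flushed
```

In C land, gcc's -finstrument-functions gets you the same entry/exit hooks without touching the asm output at all.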
Speculating: if you were passing all the data in messages, terribly. But that's not how you'd handle it. You'd use messages as a control channel instead, similar to DMA or SIMD instructions. E.g. if you're downloading a file to disk, the browser asks to write a file, the filesystem server does its thing to arrange to have a file and gets a DMA channel from the disk driver server. The TCP layer likewise does its thing and gets a DMA channel from the network card driver, and either the browser or a dedicated bulk-transfer server connects them up. The bulk data should never even hit the processor, yet alone the message-passing routines.
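A toy sketch of that control-plane/data-plane split (everything here is invented for illustration; a real DMA channel is hardware, not a Python object):

```python
class BulkChannel:
    """Stands in for a DMA channel: bulk bytes go here directly,
    never through the message-passing layer."""
    def __init__(self):
        self.buf = bytearray()
    def write(self, data):
        self.buf += data

class DiskServer:
    def open_for_write(self, path):
        # Control-plane reply: hand back a channel, not the data itself.
        return BulkChannel()

class NetServer:
    def stream_into(self, channel, payload):
        channel.write(payload)  # data plane: bypasses the control messages

disk, net = DiskServer(), NetServer()
chan = disk.open_for_write("/downloads/file")   # one small control message
net.stream_into(chan, b"x" * 65536)             # 64 KiB of bulk data
assert len(chan.buf) == 65536
```

The control messages stay tiny and rare, so the 26-instruction path only ever carries "set this up" and "tear this down", never the payload.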
> I think it would be genuinely interesting to throw some real-world workloads at Minix, Hurd, etc, and see how they hold up.
Do. Also look at QNX, which is the big commercially successful microkernel.
> Now I'm wondering about ways to preprocess gcc's asm output to add runtime high-resolution function timing information that (eg) just writes elapsed clock ticks to a preallocated memory location (within the kernel)... and then a userspace process to periodically read+flush that area...
I'd look at something along the lines of perf_events ( which I encountered via http://techblog.netflix.com/2015/07/java-in-flames.html ).
One of the targets I've been trying to figure out how to hit is how to make message-passing still work if you're using it in the dumbest way possible, e.g. using the message transport itself to push video frames. I'm slowly reaching the conclusion that while it'll work, it'll just be terrible, like you say.
I mention this because, at the end of the day, most web developers would just blink at you like "DM-what?" if you suggested this idea to them. These techniques are sadly not in widespread use.
In my own case, I'm not actually sure myself how you use DMA as a streaming transport. I know that it's a way to write into memory locations, but I don't know how you actually take advantage of it at higher levels - do you use a certain bit as a read-and-flush clock bit? Do you split the DMA banks into chunks and round-robin write into each chunk so that the other side can operate as a "chaser"? I'm not experienced with how this kind of thing is done.
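From what I've gathered, your "chaser" guess is basically the standard pattern: a ring of chunk slots with producer and consumer indices, much like a NIC's descriptor ring. A toy single-producer/single-consumer sketch (all names invented):

```python
class Ring:
    """The producer (playing the DMA engine) writes chunks and advances
    head; the consumer chases behind, advancing tail. Each index is only
    ever written by one side, which is why the pattern is lock-free for
    one producer and one consumer."""

    def __init__(self, slots):
        self.buf = [None] * slots
        self.head = 0  # next slot the producer writes
        self.tail = 0  # next slot the consumer reads

    def produce(self, chunk):
        if self.head - self.tail == len(self.buf):
            return False  # ring full; real hardware would interrupt or drop
        self.buf[self.head % len(self.buf)] = chunk
        self.head += 1
        return True

    def consume(self):
        if self.tail == self.head:
            return None  # caught up with the producer
        chunk = self.buf[self.tail % len(self.buf)]
        self.tail += 1
        return chunk

r = Ring(4)
for i in range(4):
    assert r.produce(f"chunk{i}")
assert not r.produce("overflow")   # full until the chaser drains a slot
assert r.consume() == "chunk0"
assert r.produce("chunk4")         # space freed by the consumer
```

In the real thing, head/tail live in device registers or shared memory and "ring full"/"ring empty" crossings are what trigger interrupts, but the chunked round-robin-with-a-chaser shape is the same.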
Well, workload-testing microkernel OSes is now on my todo list, buried along with "count to infinity twice" :) (I really will try and get to it one day though, it is genuinely interesting)
Regarding QNX, I actually mentioned that to the other person who replied in this thread (https://news.ycombinator.com/item?id=13346822), and I said a few other words about it a couple months ago - https://news.ycombinator.com/item?id=12777520
I really wish the QNX story had gone ever so slightly differently :'(
Regarding perf_events and the linked blog post, thanks for both - this is really interesting!
I don't know enough to answer this stuff - last message was already second-hand info (or worse). All I can say is, best of luck.
As I have come to understand it, there is one successful such kernel out there: QNX. And while both the OSX/iOS and Windows NT kernels started out as micro designs, both Apple and Microsoft have been moving things in and out of the kernel proper as they try to balance performance and stability (most famously with Windows, the graphics sub-system).
The OS was cautiously courting a "shared source" model where you could agree to a fairly permissive (but not categorically pure-open-source) license and get access to quite a few components' source code.
It was anybody's guess what might develop from that, and an intriguing and hopeful time.
And then BlackBerry came along and bought QNX and killed the shared source initiative. Really mad at BB for deciding to do that.
Nowadays QNX is no longer self-hosting - no more of that cool/characteristic Neutrino GUI anymore :(
In both instances, what the person produces only becomes "stable" after he passes the maintainership to someone else.