softirq's comments

GOPATH actually made me realize that the

~/src/$host/$owner/$repo

organization structure makes a ton of sense for every project and as long as you organize all of your languages into this one tree, everything just works.


Can you expand on this a bit please? What does $host mean? Why would I need that for a purely local project that is only created for my own use? And what about grouping projects? E.g. "personal", "work", etc. And where in the structure are languages? Is that the overarching directory?


> What does $host mean?

In Go parlance, it would be the remote host where the repository is hosted, e.g. github.com, dev.azure.com, golang.org, etc.

> Why would I need that for a purely local project that is only created for my own use?

If nobody else is using your purely local project, and you're sure nobody will ever use it until the end of time, then I guess you could just use "~/src/$HOSTNAME/$USERNAME/$PROJECTNAME". Otherwise, it would be wise to set up a remote repository ahead of time.

Go has a strong opinion that, in this day and age of distributed computing, projects should be online-first, so they can be easily used as dependencies. One of the nice consequences of this opinion is that Go dependencies can just be specified in the import statement - e.g. using grpc dependency is just:

    import "google.golang.org/grpc"
No need for pom.xml, requirements.txt, CMakeLists.txt, or any other kind of dependency configuration. It just works (unless it doesn't, like with private repositories, in which case it requires some exotic configuration in ~/.gitconfig or ~/.netrc, but that's a whole other can of worms - for most public repositories I've used it works flawlessly).

> And what about grouping projects? E.g. "personal", "work", etc.

Assuming you only use one repository hosting service and have one username, all your personal projects would be under "~/src/$PERSONAL_HOSTING_SERVICE/$USERNAME/", and your work would be under "~/src/$WORK_HOSTING_SERVICE/$WORK_ENTITY/" or something like this.
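
For illustration, a hypothetical layout (all of the host, owner, and project names here are made up) might look like:

    ~/src/
        github.com/
            alice/                  # personal projects
                dotfiles/
                side-project/
        gitlab.mycompany.com/
            platform-team/          # work projects
                billing-service/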

> And where in the structure are languages?

They aren't. That's either a bug or a feature. If it's a bug, you could just do the whole thing by language, e.g. "~/src/go/", "~/src/java/", etc.


> No need for pom.xml, requirements.txt, CMakeLists.txt, or any other kind of dependency configuration. It just works.

...until the project decides to switch to another hosting provider. Which has happened more than once in the past; it used to be common to host projects on SourceForge, for a while Google Code was common, now many projects are on GitHub, and it won't surprise me at all when another forge becomes the popular one. Individually, projects might switch between being self-hosted (under their own domain name) and hosted on a shared forge (using the forge's domain name).

IMO, it's a bad design. It forces the location of the project's repository to become the project's official "name"; that is, it mixes up location and naming. It's better to have an indirection layer to map the project name to the project location, like most other languages do.


> ...until the project decides to switch to another hosting provider

Or decides their github username looks better with an upper case letter (https://github.com/sirupsen/logrus/issues/570). Or for people who use their real name as their github name, updating their username after marriage, divorce, gender transition or whatever.


Or they create a github organisation (which might be even worse because the old repo probably still exists but could be stale)


Go already supports an indirection layer, commonly known as vanity URLs. It works by making a request to a domain owned by the project and parsing a meta tag in the response that points to the actual repository location. Of course, the problem is that few projects bother to set this up.
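
For reference, the indirection is just an HTML meta tag named "go-import" that the go tool fetches from the vanity domain; a minimal sketch, with a made-up import path and repository:

    <meta name="go-import" content="example.com/mylib git https://github.com/someuser/mylib">

With that tag served from https://example.com/mylib, code can keep importing "example.com/mylib" even if the repository later moves, as long as the tag is updated to point at the new location.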


Go also provides a replace directive in the go.mod file [0].
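
A minimal sketch of a go.mod using it, with hypothetical module paths and versions:

    module example.com/myapp

    go 1.21

    require github.com/olduser/somelib v1.2.3

    // point the moved dependency at its new home
    replace github.com/olduser/somelib => github.com/newowner/somelib v1.2.3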

[0] https://go.dev/ref/mod#go-mod-file-replace


> It's better to have an indirection layer to map the project name to the project location, like most other languages do.

Who takes care of the indirection layer when the upstream decides to switch to another hosting provider?


The upstream who manages their name::location mapping?


Please refrain from using the awkward question mark, it's condescending and rude.

What do you do if they fail to update their name::location mapping, and the language doesn't provide a way to do it yourself?

At least in Go, when that happens we can just add a `replace` directive in the go.mod file:

    replace example.com/foo/bar/v2 v2.35.0 => example.org/baz/bar/v2 v2.35.0


Why have owner in there? Isn't that clear from the filesystem metadata, which also has the benefit of shared ownership via groups, etc.?


The "owner" could be multiple directory levels deep depending on the hosting service. GitLab lets you have arbitrary sub-levels. The owner of the files also isn't necessarily related to the owner of the repo on GitHub.


Is there then a hierarchy of owners?


GNOME has come a long way, but its stubborn insistence on not having a desktop with a real application launcher remains a huge usability misstep. GNOME's market share is the desktop, and so the initial value proposition of a hybrid UI seems very much like wishful thinking, while the keyboard based workflows it seems to want to enable are better served by tiling WM such as Sway, and do not make sense for the "default" WM that is picked up by casual converts who are used to a point and click system. Overall it's just a confusing mess for new users, which Canonical/System76 rationally get rid of (which is probably a majority of the GNOME user base).

So why does GNOME continue down this path? Is it a fear of being "just like everyone else" by using a tried and true dock/application bar? Is it a desire to not be the front-running WM and be more "niche" to power users? I still don't really understand their decision-making process.


I’m not sure what you mean by application launcher. Is Super not enough?

Super, type 2 or 3 letters of the program I want, enter. Works really well for me.


Vocally seconding this. I wanted to not like Gnome (longtime wmi/wmii/i3 user) but its application launcher (and actually everything to do with the super/three-finger swipe/hot corner system) is superb. I'm amazed how equally highly usable it is via keyboard, mouse, and touch.


Yeah, I was a big DWM user for a long time. I finally bit the bullet and gave Gnome a chance on Fedora a few years ago and it has an amazing keyboard workflow out of the box. Plus the extra mouse gesture goodies. People complain a lot about Gnome but if you engage with Gnome on its own terms the workflow is actually fantastic.


No it's not, for several reasons:

1. Average users primarily use mouse based workflows.

2. Super isn't discoverable.

3. It confuses users coming from other DEs.

4. It actually takes more keypresses than clicking a favorited app on a dock that is always available.

Overall it's less discoverable, less efficient, and not baked into the mind of computer users who are coming from almost any other bistro.


Oh I see, you're arguing for a dock-style mouse-based app launcher. I see no reason there can't be both that and keyboard-based Spotlight-like launcher too. It's not like one impedes the functionality of the other.


> Overall it's less discoverable, less efficient, and not baked into the mind of computer users who are coming from almost any other bistrot.

I see what you did :).


> Super, type 2 or 3 letters of the program I want, enter. Works really well for me.

I don't use GNOME and can't speak for softirq, but what you're describing sounds to me like a command line interface. I can imagine two problems with it:

- Ergonomics. People who usually keep a hand on the mouse would have to move to the keyboard whenever they launch an app, and then move back again. Not a showstopper, but definitely a time waster. (And perhaps just as annoying to some people as it is to me when no keyboard shortcuts are available for common actions.)

- Discoverability. If someone knows what they want to do but doesn't know (or forgot) the names of the apps that can do it, they're left to type in guesses until they find something that works out. Also, if they just want to browse the apps that came preinstalled on their system, an application launcher provides that, while a command line interface does not.


Super does the exact same thing as clicking (or just mousing quickly to) "Activities" in the upper-left, which is one of the few UI elements visible on the desktop.

Docked apps also show up in this view, along with a button to show all apps. (Similar workflow as Start -> Program Files.) Keyboard is unnecessary.

It's somehow faster than every hardcore keyboard-based WM I've used (I was a longtime wmi/wmii/i3 user... all of which needed two keystrokes to open the app launcher, e.g. Super+P) and also every mouse-based WM I've used, and also works fantastically from my touchscreen when I'm feeling particularly lazy.


For mouse ergonomics, there is the hot corner, which is great. You can open the Activities overview by flicking the mouse into the top-left corner. Then click the app you want. I do that when I'm primarily using the mouse. I find it faster than the start menu on Windows. If you really want a persistent bar at the bottom, then almost every distro includes that extension by default.

As far as discoverability, they have Gnome Tour now which opens on first launch that explains all this stuff interactively.


> > Super, type 2 or 3 letters of the program I want, enter. Works really well for me.

> I don't use GNOME and can't speak for softirq, but what you're describing sounds to me like a command line interface. I can imagine two problems with it:

Don't imagine. Get in the lab, see how people are using it and how you are actually using it.

It's not a command line. I run awesome, and <super> r there is a command line, as in "if I don't type the exact name of the application it won't work, if I don't type the exact first letters then tab won't work". Try out GNOME and see how it doesn't behave the same.


> Don't imagine. Get in the lab, see how people are using it and how you are actually using it.

My point was simply that the actions described by the person to whom I replied fail to satisfy some common use cases, and do not refute the original complaint. It doesn't take a research project to see that. I wasn't commenting on whether there might be some other way to satisfy them.

In any case, I have tried GNOME recently, and found that it doesn't suit me. Opinionated UI isn't always bad, but this one is full of opinions that I find counterproductive.


> > Don't imagine. Get in the lab, see how people are using it and how you are actually using it.

> My point was simply that the actions described by the person to whom I replied fail to satisfy some common use cases, and do not refute the original complaint. It doesn't take a research project to see that. I wasn't commenting on whether there might be some other way to satisfy them.

No, your points are simply not grounded in real usage:

> - Discoverability. If someone knows what they want to do but doesn't know (or forgot) the names of the apps that can do it, they're left to type in guesses until they find something that works out.

Absolutely not. The default GNOME launcher is still browsable with the mouse. You only have to scroll, which is way easier than clicking back and forth in Kicker or a tree-based default menu where you have to guess which category the app you are looking for falls into. With GNOME, all the icons/apps are laid out.

They are not left to type in guesses.

Guesses? "Jeez, I wonder which word I should type in to start LibreOffice or Firefox, let me try Internet navigator and word muncher." What weird imaginary use case is that?

> Also, if they just want to browse the apps that came preinstalled on their system, an application launcher provides that, while a command line interface does not.

Also... what? OP wrote "Super, type 2 or 3 letters of the program I want, enter. Works really well for me."

"Program I want". Why are you bringing up discoverability as a counter-argument when it's not what OP is doing ? It's like complaining a terminal is problematic to start an app because there isn't a list of icons to select. This is just moving the goal post from "starting an app" to discoverability.

> I wasn't commenting on whether there might be some other way to satisfy them.

> In any case, I have tried GNOME recently, and found that it doesn't suit me. Opinionated UI isn't always bad, but this one is full of opinions that I find counterproductive.

Okay, I see, this is just pissing on gnome for the sake of pissing on it then.


> Okay, I see,

You don't, but I'm done with your rudeness. Goodbye.


> Super, type 2 or 3 letters of the program I want, enter. Works really well for me.

You've just described basically every modern desktop user interface. Windows Start Menu ("Windows Key"), Cinnamon menu, KDE defaults, XFCE (w/ Whisker Menu), etc. all support the [Super]→[Start typing]→[Enter] launch workflow.


Same. With every OS I use, I always make sure it has this kind of simple app launcher. Super key -> search field -> autocomplete -> enter -> app starts. What more do you need? Most OSes have a version built in nowadays. Seems like a solved problem.


Right? Also much faster than, I guess, scrolling with mouse through some menus


Personal opinion, but to me this is the best way of moving around a computer - to the point where it's all I do in macOS as well. Can't remember the last time I saw a dock there.


That's completely undiscoverable.


Click on "Activities" in the upper-left corner. (One of the few desktop UI elements.) It is exactly the same thing as pushing the Super key. A search bar pops up, start typing to search (EDIT: no 2nd click needed).


TIL you can open the Activities with the Super key. GP's complaint is totally warranted.


They recently (late 2022 maybe) released Gnome Tour which explains this stuff to new users. Admittedly it's too late for people already on Ubuntu or something, but discoverability of these features is getting better.

Even in this article they mention adding a widget that displays shortcuts.


I agree it would be nice if Gnome displayed keyboard shortcuts in tooltips. I don't think there's even a tweak to enable that unfortunately.


Which is terrible UI. Let's force the user to turn a flow they could do at any time by clicking a single button at the bottom of their desktop into a context switch into another window, followed by the same button click. Or they have to know about the magic keyboard shortcut, then type in multiple characters, then press enter. So in any case we're turning a single input into multiple inputs just to open a commonly used app.


What are you comparing to? Windows has worked that way for 30 years (Start menu) and most users seem to figure it out fine. So do smartphones (home button). macOS is the only prominent example I can think of which shows icons of closed apps on the screen at all times by default.


It works identically to macOS Launchpad.


> remains a huge usability misstep

Evidence? The Gnome project has performed UX studies[0] to validate their design, and has continually made changes in response (some of which I disagree with, FWIW).

[0]: https://wiki.gnome.org/Design/Studies


You just linked to studies that directly support my point:

"On the other hand, new users generally got up to speed more quickly with Endless OS, often due to its similarity to Windows. Many of these testers found the bottom panel to be an easy way to switch applications. They also made use of the minimize button. In comparison, both GNOME 3.38 and the prototype generally took more adjustment for these users.

“I really liked that it’s similar to the Windows display that I have.” —Comment on Endless OS by a non-GNOME user"


In my career, I witnessed several software UX changes that elicited massively negative user feedback once released - and every single one of those was backed by a UX study. It seems that if you have really strong opinions about what you want your software to look like, engineering UX studies around that is not difficult.


Most UX studies ignore that users want people to leave things the fuck alone and stop breaking their workflows.


Or,

UX changes always elicit negative opinions and the studies show that once the change is familiar people prefer the new UX.

I’m reminded of the MS ribbon, which was so heavily derided, but a few years in OpenOffice/LibreOffice also had to implement something similar because users significantly preferred it when set side by side.


I don't think that this is always, or even more often, the case. I'll grant you that Ribbon might be an exception.

But the bigger problem is that these days, even when it is the case, by the time you get used to the new UX, it's not new anymore - and now it is on the way out, because the new crop of UX designers have yet another drastically different idea in mind (and they have UX studies to prove that it's better). But change itself carries a usability cost with it, and that is usually not accounted for at all. When it comes to desktop software specifically, quite frankly, what we had 20 years ago was already "good enough".


I loved the ribbon on first sight. I also miss new features like that coming to products - nowadays it's all AI nonsense or the padding and spacing has been changed for the 10th time.


We just deployed RHEL9 and had to quickly revert gnome because users (and me!) had no effing idea how to use it.

For instance, no button to minimize and maximize a window, no taskbar to switch between windows, what the actual...


It took me almost half a year to figure out that Firefox was missing the minimize/maximize buttons because GNOME hides them by default. And I only figured it out after having to install the GNOME Tweak tool because I moved to GNOME temporarily...


Evidence? No other somewhat popular desktop rejects "core desktop ideas" the way gnome does. Both windows and Mac have desktop icons, tray icons, a task bar/dock, minimize and maximize buttons...


It is infuriating that they used systematic approaches to UX and still came up with the current thing. I've said it before: too many implicit gestures that are not discoverable until you google for them.

Last time I tried GNOME was last week, and I gave up after a day.


A lot of times, what you should be doing should probably be relatively obvious anyway. [1] Other times, the people you should be trying to understand are already directly and nearly universally telling you how they feel, and all you actually have to do is just listen. [2]

I'm not saying systematic ways of thinking are universally useless, but the appearance of being "systematic" or "objective" certainly seems to attract some people who use complexity as a means of obfuscating, and of reducing other people to a passive object of study or subject of control. In those cases, "research" isn't a way of finding what's correct. The important thing to them is that they're correct; they already know that they are, and the "studies" are meant to make sure you know it too as they do whatever they already wanted to do anyway.

Such individuals rarely seem to care about "evidence" at the start of their decisions. Only when they're trying to shut down subjective critical opinions, or rationalize the actions they've already taken.

1: https://en.wikipedia.org/wiki/Principle_of_least_astonishmen...

2: https://en.wikipedia.org/w/index.php?title=Controversy_over_...


I did not know that the GNOME 3 thing was such a big controversy. I dropped GNOME during that time for the same reasons, but I was unaware of how big this was.


Oh yeah, no. Multiple entire desktop environments with significant popularity (Cinnamon, MATE) owe their existence today to how universally hated GNOME 3 was, and how obstinate and intolerant the GNOME developers were towards differing opinions that challenged their "vision".

In fact, the same thing is sorta playing out even right now with GTK4 and other GNOME stuff, though I think with somewhat less public spectacle but arguably even larger development efforts behind it:

https://joshuastrobl.com/2021/09/14/building-an-alternative-...

https://www.theregister.com/2021/11/08/system76_developing_n...

https://blog.system76.com/post/closing-in-on-a-cosmic-alpha

https://github.com/BuddiesOfBudgie/budgie-desktop/issues/141

https://medium.com/@fulalas/gnome-mess-is-not-an-accident-4e...


What implicit gestures? Everything in the Gnome desktop you can get to by clicking the "Activities" menu at the top-left: search bar, dock, applications button, minimized windows, and 2nd desktop all then become visible.

Granted, the applications button icon is quite nondescript (9 dots). But it's still just 2 clicks of prominent UI elements away.

Same # of clicks as Windows (Start -> Program Files) and MacOS (Finder -> Applications).


Maximizing a Window. I had to google how to do it.


What am I missing?.. there is a maximize button in the top-right corner of every window.

EDIT: Oh, I guess these are hidden by default? I don't remember enabling them on my setup but I've been using Gnome a few years now. I agree it would be better if they were visible by default.


I only had a cross to close the window. Ubuntu 22.04.


Gnome is a DE designed to be used by distributions.

Gnome doesn’t have an opinion on a desktop application launcher because it expects the distribution to add it.

The only distributions which don’t are GNOME OS, which is intended for developing GNOME, and Fedora, which is intended to be a bleeding-edge distribution to mass-release stuff before it’s included in RHEL.

Turns out, however, that a lot of people actually like the default Gnome look and so are happy with using Fedora.

But in practice this isn’t an issue for anyone because their distribution will come with an application launcher.

And even better you can completely change and/or add an application launcher because they are implemented through extensions.


> Gnome is a DE designed to be used by distributions.

> Gnome doesn’t have an opinion on a desktop application launcher because it expects the distribution to add it.

I don't think this is true at all. Where have you read this? If GNOME intended for distributions to customize it, I can't imagine why they'd keep harping on about how custom theming is awful. See: https://blogs.gnome.org/alatiera/2021/09/18/the-truth-they-a..., and https://blogs.gnome.org/tbernard/2018/10/15/restyling-apps-a...

I'm honestly surprised they still support adding extensions via extensions.gnome.org.


I mean the official position has always been reasonable, if disappointing. GNOME never had any official support for themes, it was just a concept invented by users who patched the CSS. It's OSS so you can obviously do what you want but they aren't going to support it and reserve the right to make changes that break your themes.


I'm not arguing against their position. Rather, I'm citing their position on theming to argue that it would feel inconsistent for them to hold that position on theming, but then also have a position of encouraging downstream distros to customize the whole UI/UX of the DE.


Every DE is designed to be used in a distribution. I think what you are trying to say is that GNOME is designed to be "finished" by the distribution, which is a completely made-up idea. Show me where GNOME says you need to finish the DE yourself during integration. GNOME is designed as a complete DE; the reason Canonical/System76 change it is that it's poorly designed for new users/casuals, who are their user base.


Two things made me switch to XFCE: JavaScript-based extensions affecting performance, and the complete refusal to add back support for window shading (roll-up).


That's why System76 is writing extensions in Rust and running them in separate processes.


Whatever the compiled language; even with my C-related rants, I am happier with the C-written XFCE extensions than the GNOME JavaScript ones.

The separate process is an interesting point. Back in the day, dynamically loading code into the host process was the way to go, due to hardware resources and how demanding it would be to use UNIX IPC for everything.

Now, a couple of years later, with the existing hardware resources and the ongoing stability and security issues of loading code into the process, it turns out a separate process is a good idea after all.


Which extensions are JS-based?

There's no GNOME extension to do roll-up windows?


All of them, since they replaced the C API with the JavaScript based one in GNOME 3.0

"GNOME Shell and extensions are written in GJS, which is JavaScript bindings for GNOME Platform APIs. For an overview of how extensions fit into GNOME Shell and the platform, see the Architecture page."

https://gjs.guide/extensions/

Roll-up was never an extension on UNIX window managers; having to install one is a joke on us.

As for the GNOME developers stance on that, https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/1028

Meanwhile XFCE, KDE, Window Maker, AfterStep, Enlightenment... do it just fine.


The shade issue seems to be an issue with Wayland and whatnot, as far as I can tell.


That is why I wrote "Meanwhile XFCE, KDE, Window Maker, AfterStep, Enlightenment... do it just fine" to make the point that the others handle it without excuses.


I would assume that it's because there are more people like me who use GNOME. I kept using GNOME through the move from GNOME 2 to GNOME 3 because GNOME 3 worked the way I was already using GNOME 2.

I tried using tiling window managers, but they didn't give me the niceties of the out of the box gnome experience. I do have one extension for topicons. (sometimes I use gsconnect, but not recently) I don't need anything else.


I've been enjoying Budgie for this reason. It feels like a "normal" desktop experience with a taskbar at the bottom, something like a start-menu with the search functionality we're used to, and without anything really trying to be fancy or transparent. It just feels "regular".

The criticism some people have of Budgie is that it feels like a customized version of GNOME rather than its own thing - but that's what I'm looking for. I'm not looking for something to ditch the good parts of GNOME. I'm looking for something that will just give me a regular desktop where things feel like what macOS or Windows have been offering for a couple decades.


Out of curiosity, why Budgie over XFCE, MATE, or LXDE? All of those are pretty much just a "normal" desktop experience with a taskbar, something like a start menu with search, and nothing really trying to be fancy or transparent.

Nothing against Budgie, I'm just curious what made it stand out for you.


It took me a while to figure out what you mean by "real application launcher" but I think you mean that hitting the top left corner menu (or the Windows key) launches the fullscreen window overview plus launcher and that's not enough for you? I guess the "Dash to Dock" extension is what you're after but I can't say I've ever missed it. I don't really use the keyboard for it, but every time I use a non-GNOME system I badly miss the ability to just click in the corner to manage programs, whether switching between them or launching new ones. If GNOME got rid of this I would probably stop using GNOME.


The app launcher really only works on a laptop with gestures. It's a pain in the butt with a mouse, especially on a large display.


On desktop my hands are 99% of the time already in the right position for the keyboard shortcuts.

I generally quite like the out-of-the-box gnome3+ experience, it fits my use cases pretty well (when I don't have a highly tweaked custom tiling window manager setup).

It's nice to actually get choice, rather than 5 "choices" that are all chasing exactly the same style, if you prefer the XFCE or KDE experience, then that's what XFCE or KDE are for.


Proper choice would be the ability to customize that kind of thing in the DE. Having to switch the DE completely over one simple thing like this is, frankly, ridiculous.

But GNOME explicitly hates customizability.


> Having to switch the DE completely over one simple thing like this is, frankly, ridiculous.

And who is forcing you to switch DEs over one simple thing? The parent comment is merely suggesting that we use the DE that we like the most (or--I'll add--hate the least).


One simple thing can be such a major productivity blocker as to be a deal-breaker in practice.

And the situation where you have to choose between several options that all suck, and use the one that you hate the least, is exactly the one I'd rather avoid, but also the one that seems to most accurately describe the current state of software - precisely because of increasing lack of customizability and outright hostility to it.


and yet I have no issue using it with a mouse on a 27" screen.

Certainly better than the tiny little window you get on every other OS.


Last time I tried GNOME, it was in an Ubuntu install I think, which had a weird launcher that IMO didn't work particularly well.

I’m not sure what you mean by a launcher here - to me there’s a fairly reasonable ambiguity about what is meant. On Windows, people still seem to use shortcuts on the desktop as a primary launcher (especially given the current start menu abomination - sheesh, talk about destroying one of the most effective bits of UI MS ever produced); on Mac I actually mostly use Spotlight (though that usage is subject to the “secret chord” problem), but most normal users use the dock for common apps and the Applications folder for everything else.

I keep meaning to retry KDE at some point, but I’m waiting for more Asahi work to support my desktop (it's reasonable for them to focus on the laptops, but my laptop is my work machine, so I'm less willing to surrender disk space when LLVM builds already consume half a TB).

It’s weird to me that back when I was primarily a Linux user, GNOME was The(tm) commercial Linux desktop and KDE was the clunky also-ran; I assumed GNOME was destined to win in the long term due to GTK APIs, commercial support, and at the time seemingly more polish and completeness. I guess things can change given a decade or so of development :D


Gnome has a built in extension called “Applications Menu” that is just that.

I don’t really use it that much. I find opening search and typing what I want much faster. Or I use the dock


Because not everyone likes a classic app launcher? I like how reliable GNOME's search is; it replaces the start menu.

I just hit Super, see my workspaces and all of my running apps, and can directly start typing to find the app I'm looking for.

Why would I ever need a start menu? It's soo good that KDE added something similar recently.


Not sure how much this is still true, but as I recall, GNOME used to have a 'start' launcher like Windows, and removed it because of a lawsuit (Gates/Ballmer-era Microsoft). That was the big motivation for the GNOME 2 to 3 transition, and also why Canonical split off Unity for a while.


I know it's not perfect but I use the 'dash to panel' extension


> while the keyboard based workflows it seems to want to enable are better served by tiling WM such as Sway

This is where I think your analysis starts to break down. Tiling and keyboard-oriented are almost orthogonal. There's no a priori reason that a tiling WM has to be keyboard-oriented, nor that floating WMs are inherently less accommodating to keyboard-oriented workflows. As an anecdote, way back in my youth, I had all kinds of keyboard shortcuts for resizing and moving windows by different amounts in my Openbox WM setups. Likewise, I really tried to like the popular tiling WMs (i3, xmonad, AwesomeWM), but I eventually realized that I can't literally be focusing on content from multiple windows simultaneously, and it makes way more sense for me to size and position each window so that I can optimize my interactions with that one when I am focusing on it.

> do not make sense for the "default" WM that is picked up by casual converts who are used to a point and click system. Overall it's just a confusing mess for new users, which Canonical/System76 rationally get rid of (which is probably a majority of the GNOME user base).

Let's be real, though. The year of Linux On The Desktop is not coming--hell, The Desktop is pretty much dying altogether. So, I really don't care if we optimize for the "casual computer user" who just happens to stumble into a chair in front of a Linux desktop, because that person doesn't exist. It may sound selfish, but I'd rather they optimize for users who already exist.

> GNOME has come a long way, but its stubborn insistence on not having a desktop with a real application launcher remains a huge usability misstep.

> [...]

> So why does GNOME continue down this path? Is it a fear of being "just like everyone else" by using a tried and true dock/application bar?

GNOME is actually fairly close to macOS in this regard. Yes, macOS has a dock with an application launcher, but if I didn't already know what the application launcher icon looks like, I'd have no idea how to get to its application launcher: the icon I'm looking at right now on my work Mac is a square icon with a 3x3 grid of colored squares inside it--what the hell does that mean? Is it a color picker app? Some kind of Tetris or Candy Crush game?

GNOME's stupid top-left oval button is just as bad as, but no worse than, macOS's in terms of UX discoverability, IMO.

And I have to wonder how truly "intuitive" the Windows situation is, either. The old Windows versions used to have the word "Start" on the button, which at least gives some kind of hint that my computing journey "begins" there. I think since Vista or 7, it's basically just been the Windows icon. I suspect it's more intuitive than the macOS or GNOME analogs, but probably only a tiny bit if I were to sit someone down who hasn't used a Windows PC in their life.


> So why does GNOME continue down this path?

Because it's great and everyone apart from a tiny but incredibly vocal minority get on just fine with it.


All these comments just want Gnome to be like every other DE in existence. MF'er leave my preferred DE alone, go use KDE if you want a traditional DE.


GNOME seems to be the least buggy DE and it gets out of my way when I'm doing things. A nice bonus is that it also looks and feels good.

I used to use plasma but it was just too buggy for me. Just today a random user realized that installing a theme from the built-in theme selection thingy deleted all his data...

I have some hope for cosmic DE as it seems like they try to emulate GNOME's workflow with some twists.


As someone who had only used GNOME or DEs based on / inspired by it for 18 years, I had to switch over to KDE on my work laptop because GNOME was giving me endless trouble with the external monitor for some reason (to the point where I could choose between either ridiculously impractical resolutions or mouse flickering).

I'm now actually quite satisfied with KDE (although I continue to use GNOME apps too). I guess every one of these DEs has their own share of bugs.


Yeah! GNOME is wonderful. I can’t with the neckbeards.


I'm a neckbeard and I vastly prefer GNOME Shell to any other DE (including proprietary ones like on Windows and macOS). I still don't consider myself a "fan" and I have plenty of complaints about it- I just have way more complaints about all the others... :D


Meanwhile the CEO of Nvidia is telling the world that people don't need to learn to code and we'll all be out of jobs soon.


No-code isn't a new concept, and there's a reason why all past attempts have failed, or why people still pay web developers despite the existence of tools like Squarespace. Nothing about the LLMs of today suggests they have solved the no-code problems or will radically displace coding. They generate bad, oftentimes incorrect code for well-trodden paths, while struggling to solve novel problems or work in private or unique code bases. They do not easily keep up with new trends or tools. They do not offer the type of semantic understanding that is necessary to work in a logic-based field.

LLMs are nothing more than an alternative take on auto-complete, a feature that has been around forever and doesn't radically change programming. It will speed up good programmers to some degree and probably lead to bugs and more bad code from everyone else.

This is yet another hype cycle overselling a modest advancement in technology.


Computer chess kept failing for 30 years, until it didn't. Try winning a Go or chess game against the computer now. There might easily be another architectural find, lateral to LLMs, that will 10x code generation quality.


You can say that about any field. We could invent the elixir of immortality tomorrow, but is that a realistic expectation? The CEO of Nvidia is a smart guy, he's pushing the hype train because his business is riding the wave. But you have to separate hype from an empirical view of what we can actually do today with these tools, versus what hasn't been delivered and is being oversold.


I think you're viewing it from the programming bubble. He's not that vested in AI success for programming. Even if AI code generation completely failed, NVIDIA's business is still more than OK because LLMs have a lot of other uses, killing Google search for example. That's not a small niche.


The post is about the demise of coding. I'm only responding to the topic of the discussion.


My previous response was refuting your statement: "The CEO of Nvidia is a smart guy, he's pushing the hype train because his business is riding the wave"


Chess is a perfect-information game with a finite tree.

A very different problem from programming.


Do chess-bots rely on this? I was under the impression that a full search of the space was infeasible, so our current state-of-the-art approaches use heuristics, bounded search, and learned strategies. In other words, I suspect our current models apply to programming better than we might expect.


Chess is decidable; it may be PSPACE-hard or EXPTIME-hard, but there are reductions.

The Entscheidungsproblem and the halting problem are not decidable in the general case.

While you still have to find reductions, the fact that decidable problems give you access to both yes-instances and no-instances makes them easier to find.


Knowing that we don't approach either by attempting to fully solve them, does that change the architecture, or just the difficulty?


Chess is a bounded, non-moving target. Think about the difference between chess in the 1970s and today, and compare that to the same time period with programming. Chess is a single game whereas programming is a federation of tools, protocols, and standards that are ever evolving. They're not comparable in any sense.


I don't think that's a particularly relevant metric, as we can easily restrict programming to languages like Lisp/Pascal from the 70s, and the landscape doesn't change much.

I'd also suggest that our chess bots have evolved dramatically in that time. Deep Blue works very differently than AlphaZero, for example. Deep Blue might not be suited to code generation, but AlphaCode spawned from AlphaZero.


The genius of eBPF is allowing for pluggable policy in a world where the kernel API is very slow to change and can’t meet everyone’s needs. Whether it’s how the kernel handles packets off the wire, how it controls traffic, how it schedules entities, or how it’s instrumented, eBPF lets you provide logic rather than turn a bunch of knobs or use a bespoke syscall that only handles one case. It also moves the processing logic to the data in the kernel rather than having the kernel do expensive copies to and from userspace.

eBPF isn’t really novel beyond the interfaces it provides. They are just kernel modules that have been vetted and are sandboxed. Inserting executable code has been part of the kernel since forever in module form and kprobes.


> eBPF isn’t really novel beyond the interfaces it provides. They are just kernel modules that have been vetted and are sandboxed. Inserting executable code has been part of the kernel since forever in module form and kprobes.

This should be sung from the mountaintops. This concisely summarizes nearly everything that an uninformed reader should take away from the comment section.


> the kernel API is very slow to change and can’t meet everyone’s needs

Better yet - eBPF provides a stable ABI:) It makes things that were formerly kernel-internal possible to work with from a stable ~userspace interface.


BPF tooling generally provides no stability guarantees when you interact with kernel primitives. See [0], for example. Though things have improved somewhat with CO-RE.

[0] - https://lore.kernel.org/lkml/93a20759600c05b6d9e4359a1517c88...


I'm curious what this guarantee includes - the bytecode? Because the actual in-kernel eBPF API is famously unstable, with eBPF-based applications usually requiring a cutting-edge kernel version (for industry anyway). And of course the eBPF programs themselves rely on accessing structures for which no stability guarantees are made whatsoever.


Most companies completely missed the point of SRE/PE/DevOps and keep them on separate teams doing sysadmin toil work and oncall thrown over the wall by engineers who are only concerned with feature deadlines. They regress them back to sysadmin duties and get none of the value of a true SRE program.

SRE should always be a subtitle for a SWE and not a separate position, and they should always be embedded with SWEs in one team building either products or infrastructure. The shared ownership and toil reduction only work if you have these two things.

All this said, I think the regression is also due to the fact that real SREs are rare. A solid SWE who also has deep systems domain knowledge, understands how to sift through dashboards and live data, and can root-cause complex performance problems is a master of many domains and is hard to find.


The regression is also due to the fact that a real SRE is expensive. It's cheaper to just get some new grads to react to alarms, following a set runbook of what to do when a given alarm triggers.

VERY few companies operate at Google's scale. For 99.99% of companies it makes sense to investigate single-machine issues.


Google SREs also end up investigating single machine issues, fyi.


Yes, but At Scale®

It's a totally different experience when you have the people who technically own the hardware side of the operations taking no responsibility for the well-being of it, and the people who own the software developing elaborate workarounds for bad machines, and the SREs maintaining blacklists of individual nodes.


In my experience it's fun to do that but only worth it when SLOs are on the line (so a significant number of bad machines).


I've had a lot of trouble with commands in Warp going haywire because of how much magic there is in creating visual blocks, and it looks like JetBrains is taking it one step further. The reason terminals are great is that they are dumb, standard interfaces where you can have full access to any system, local or remote, and directly execute binaries and interact with a command language like bash or zsh.

I feel like this terminal is counterproductive. It adds visual niceties at the cost of dumbing down the power of the terminal and removing terminal features, which will be confusing to regular terminal users. It really doesn't have to be this complicated to be useful.


On the other hand, it makes the terminal much more accessible for less regular users. I’ve really appreciated the extra tools of Warp as someone who’s in the terminal infrequently enough to forget most intermediate commands.


Yes it is. Watching a 3D movie on an entire wall in bed next to my spouse is mind blowing.


You'll have to file me under

a) doesn't watch films or use internet gadgets in bed, especially to the exclusion of someone else

b) would at any rate choose a film my partner wanted to watch too

c) ignoring the above, would probably buy the cheapest VR headset that offered a virtual theatre good enough


Often the other people in your house would like to be excluded from the movies you're watching, for instance because they want to sleep and don't want the light leakage.


How many people watch tv in bed, is that a thing?


Yes it certainly is a thing


(My experience with watching movies on VR headsets- haven’t tried newest generation)

Have you ever watched a CAM version of a movie? I feel like this kind of activity peaked in late 2000s…

That’s what it felt like watching a movie in VR to me. (In my experience with non-luxury headsets)

It’s worse picture quality than my phone.

For me, watching a movie in 4K on a TV is very very different than the equivalent of ~< 720p, blurry mess with giant god rays.

If AVP can deliver on what people describe, it’s compelling. Still probably won’t buy one though


This is not true at all. You get full 1080p detail on a Quest, with excellent color and contrast.

The AVP gets you to around 4K detail.

It is nothing at all like CAM. Not in color and not in resolution. It doesn't even make sense to say that because the display specs for these devices are well known.

God rays aren't really a problem watching TV content because they're most prevalent at the edges of the display, but you place your virtual screen in the middle. So they're basically a non-issue. (And TV's can have glare problems of their own if you're not watching them in the dark.)


How can you possibly get full detail on a Quest 3?

The horizontal resolution of a Quest 3 is 2064 pixels. However, this fills the headset's entire ~110 degree horizontal FOV. Also, you are not seeing the edges of the panels, so we need to eliminate some of those pixels you can't technically even see, bringing it down to around, say, 2000 (cut off 32 on each side, which I think is fair).

Now a 1080p video has a horizontal resolution of 1920 pixels. You only have a 2000 pixel canvas that fills up your 110 degree FOV.

Now sure, if you zoom your virtual movie theater screen to fill your entire FOV, then you can say you are seeing the whole 1080p video resolution. But nobody I have ever heard of watches movies at a horizontal FOV of 110 degrees.

Industry standards are around 35-45 degrees. Yes I personally think that is a bit low. I have a 150" projection screen at home, and I sit at about a 53 degree horizontal FOV. I wouldn't want any closer. This represents sitting like 1/3 to the front of a typical movie theater.

However, even at a 55 degree virtual screen, that means the virtual screen is only 1000 pixels across on a Quest 3. This isn't even full 720p resolution, which would need 1280 pixels across, let alone 1080p, which needs 1920 pixels across.

Now the AVP does better obviously. It's 3680 pixels across 100 degree FOV. If we subtract a few due to not seeing the edge and say about 3600 pixels, and if we say the virtual screen is again 55 degree horizontal FOV, then that gives us a virtual theater screen of about 1850. A little shy of the 1920 for 1080p.

So at best, if you make your virtual screen huge, like 60 degree horizontal FOV, then I could concede you get about a 720p virtual screen in a Quest 3 and about a virtual 1080p screen in a AVP.

Last point I will make is that even at this it's not quite equivalent because you lose a bit of resolution too due to your head being slightly askew and the video pixels not being able to line up straight with the physical virtual theater screen pixels in the headset. So the resulting image becomes a bit softer since the pixel mapping isn't 1:1.

I haven't used an AVP, but I have used many other VR headsets including a Quest 3, and the quality of the virtual movie screen looks quite low to me. Nowhere near even my old 1080p projector on my 150" projector screen. Let alone my current 4K projector on the 150" screen.


You're forgetting that the effective pixel width is wider because the two eye displays only overlap about three quarters of the way.

So the 2064 pixels becomes about 2500 in practice. So a screen width of 1920 is perfectly doable.

The image doesn't get "softer", surprisingly, because of the constant resampling at 90 or 120 Hz with tiny constant head movement. Any individual frame might be a little softer, but the actual viewing experience doesn't lose any detail at all.

Yes, the virtual screen is huge. It's like IMAX. But it's not a problem -- it's actually great. It's not a bug, it's a feature. Now when I go to a movie theater, I find the screen annoyingly small.

If you find the quality of the virtual screen on the Quest 3 to be low, first make sure you use an app like Skybox that lets you make the screen as large as desired. And then second, do a live comparison with the same content on your laptop (play a file, not a streaming service that might deliver a different bitrate). You'll find that you really are seeing all the same 1080p detail.


It's nowhere close for me since I can clearly see the individual pixels and aliasing of the Quest 3 screen.

But I cannot see the individual pixels and aliasing on my TVs, computer monitors, and projector screens.

The PPD (pixels per degree) of the Quest 3 is about 25. The average human eye has the vision capability of about 60 PPD+.

Plus after using OLED TVs and monitors I can't go back to using an LCD for video, so the contrast in dark scenes looks poor and washed out to me in the quest.

In this regard the AVP is much better as it's using OLED panels with near-infinite contrast.

Otherwise, at home I am normally used to movies on my 150" 4K native JVC projector setup where I sit about 11ft away from it giving me about a 53 degree horizontal FOV. I don't want it to be any larger of my FOV, and I wouldn't want to in VR either.


> It's nowhere close for me since I can clearly see the individual pixels and aliasing of the Quest 3 screen.

That doesn't make it not 1080p -- which is what you were originally claiming it was less than.

I can absolutely see the individual pixels on my 1080p projector too. It's not a problem. It's inherent to 1080p content. It's just what the content is. You're not losing any detail.

And I'm happy you've got $5,000+ to drop on a 4K projector, with the space for a 150" screen. But 99+% of people aren't comparing their VR headset to that. I mean sure, if I were you, I wouldn't be watching something on a VR headset either.


This has been my experience - I haven't tried any of the newest generation, but have tried many.

“any vr headset that has a virtual theatre” won’t offer a good movie watching experience


I'm not sure exactly what you mean by "virtual theater", but if you have an app that allows you to freely move and resize a virtual screen in a black "void", the movie-watching experience is exceptional, including on a Quest 2 or 3.

I recommend using the popular SkyBox player.


I’m glad it’s gotten better. A few years ago, it was genuinely terrible. Any dark scenes looked awful, screendoor effect was huge. Haven’t tried Quest 2/3


I've tried the Quest 3 and it's still not very good IMO. Quest 3 is still a low contrast LCD and at absolute best gives you close to a 720p virtual theater screen.

I think the AVP is the first headset that can actually provide a good movie viewing experience, coming close to a full 1080p virtual screen and with high OLED contrast.


AVP gets you around 1440p


Maybe with an ungodly large and wide FOV virtual screen.

AVP has a 3680 pixel wide screen across a 100 degree horizontal FOV.

If you make your virtual screen take up a 50 degree FOV (which is still somewhat bigger than most people normally sit/view at home or in a typical theater), then you are still only getting at most 1840 pixels across the virtual screen, which is a bit less than 1080p. Nowhere near 1440p.

That doesn't mean I don't think it can be a great experience. I think it can be, but there is still a lot of room for improvement.


Thanks. I was looking for better numbers.


It doesn't bother you that your spouse can't see it?

Even with an iPad, they are still sharing the space with you because they peripherally see what you are doing, watching, etc. If something particularly interesting was on the screen you could point it out to them, etc. I can completely believe it's mind blowing (I do it with my Quest 3), but I can't see how this isn't something that will ultimately harm your connectedness to the people around you.


The richest company in the world shits out a 2 kilo VR headset so we can watch the blandest Netflix original of the month from our bed between two soulless shifts at work. The future is bright.

At that point I'm genuinely more interested in watching a tomato grow in my garden


3D movies suck, and there's no difference between it being the size of a wall and being a laptop sitting on your lap or chest in bed. Field of view doesn't care about the "size". (I have a Vision Pro)


The standard library being designed to allow freestanding binaries makes Zig a much better language than Rust for systems software such as kernels, EFI applications, etc.

Rust's addition of operator overloading is also a perplexingly bad decision. I don't think there is a single worse design decision in C++ or Rust than allowing someone to redefine what + means on some arbitrary type.


> Rust's addition of operator overloading is also a perplexingly bad decision. I don't think there is a single worse design decision in C++ or Rust than allowing someone to redefine what + means on some arbitrary type.

Why is that? Won’t it be more ergonomic than needing to call, say, `.add`, when you want to do something clearly similar to addition? It’s just sugar, no?

In Java, there is no operator overloading, so if you want to compare 2 strings by contents rather than their memory addresses, you have to use a `.equals` method. Comparing 2 strings by their contents is the most common use case, so it not being the default is a design flaw (similar to how you need to `break` out of a `case` in most languages).

If you can’t overload the addition operator, then why have a whole language construct that can only be used on primitive integers and floats?


> Why is that? Won’t it be more ergonomic than needing to call, say, `.add`, when you want to do something clearly similar to addition? It’s just sugar, no?

Especially since, in a typed language, this function would be desugared at compile time and (probably) aggressively inlined, making it hardly different from the compiler builtins for adding floats and ints. Unless, of course, you're doing something like allocating to concatenate strings.

Really, though, it's more of a developer common-sense issue than a language one.


When writing systems software like a kernel, clarity is far more important than ergonomics. The basic operators aren't functions; they're translated directly into opcodes based on the types of the operands. Operator overloading allows them to be either opcodes or functions, and if you think about the amount of work that goes into a function call - saving registers, creating a stack frame, pushing addresses, multiple branches - versus a single opcode, as a kernel developer you really don't want this to be hidden behavior based on whether something has been overloaded or not. In the best case the compiler will optimistically inline multiple instructions, in the worst case it will call a function before you've even set up a stack.


SIMD intrinsics are also translated directly into opcodes. There's very little reason to have floating point ops represented by special syntax rather than ordinary functions like fadd() fsub() fmul() fdiv() fmuladd(), other than mere legacy. (And they might not even be simple opcodes in a soft-float implementation.) I mention floating point specifically because that's where even something basic like the order of operations can affect the outcome, so the extra precision actually matters.


What if your target CPU doesn't have, say, floating point operations or integer division? Will Zig refuse to compile any code that uses these operations? Will it inline the emulation code? Or (gasp) generate a call out to a library function?


This is an extremely common issue to deal with when creating a kernel, which is, again, why levels of indirection in code and hiding context only make life more difficult for a systems developer. The kernel may need to enable accelerators, switch between ARM and Thumb, enable an FPU, clear cache lines, etc. A lot of decisions will be made by the compiler, but based upon parameters passed in at build time, and a lot of it will be arch-specific code interwoven with assembly. And there are tons of times when a compiler will generate things you don't want, forcing you to add pragmas and so forth.


Thing is, Rust has full support for hygienic macros, so operator overloading could've been added as part of that. You'd just have to write, e.g., int_expr![a + b] or whatever, but that would've made the syntax fully extensible.
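
A toy sketch of what I mean (it only handles a single `+`, and pins the operands to a built-in integer so no user-defined Add impl can sneak in; a real version would need to parse whole expressions):

    // Hypothetical sketch: a macro that rewrites `+` itself.
    macro_rules! int_expr {
        ($a:tt + $b:tt) => {{
            // The helper forces the operands to be plain i64s,
            // so the `+` below is always a machine add.
            fn as_i64(x: i64) -> i64 { x }
            as_i64($a) + as_i64($b)
        }};
    }

    fn main() {
        let x = int_expr![40 + 2];
        assert_eq!(x, 42);
    }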


Macros are easy to spot; the whole point of operator overloading is that it's a trojan horse. It might do simple addition, or it might do a heap allocation and talk to a printer.


Native ints support math operators because math operators are far more readable than method calls. And if you're doing something that requires a custom numeric type, like base-10 floats or fixed-width, method calls don't get any more readable than math operators there either. A language where you can't overload numeric ops is a language with a strong disincentive against using better numerics. And this is without getting into having a single source of semantic equality, whether it be `==` or otherwise; not having that is probably the single worst design decision of Go.

This complaint is always made in the context of some pathological case where a library author tries to do clever things with operator overloading; if any such libraries exist in the first place, you can guarantee that you are not using them by simply not using any libraries with fewer than 10k downloads. The horror stories that fill HN comments pretty much never make it into real code.
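
To make the custom-numerics point concrete, here's a minimal sketch with a made-up fixed-point Cents type, where the operator version reads like the arithmetic it is:

    use std::ops::Add;

    // Hypothetical fixed-point money type: whole cents, no float rounding.
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    struct Cents(i64);

    impl Add for Cents {
        type Output = Cents;
        fn add(self, rhs: Cents) -> Cents {
            Cents(self.0 + rhs.0)
        }
    }

    fn main() {
        // Reads like the arithmetic it is; a `.add(...).add(...)` chain wouldn't.
        let subtotal = Cents(1999) + Cents(499) + Cents(250);
        assert_eq!(subtotal, Cents(2748));
    }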


See my other comment about the necessity of clarity in a systems language. Also, in my opinion, readable code is code that is easily understood with as little context as possible. Operator overloading is the opposite: it hides behavior based on the types of the operands. Instead of knowing that * will turn into an smul, we now have to check for and read the implementation of * for every type. A function call makes the call site and the types of its operands clear, and the name can convey the meaning of what is to be done more clearly than * can.

What you are referring to as readability is actually terseness, which I think is a lousy metric to optimize for, especially for systems software where correctness is important and people will read code a lot more than they will write it.


I get where you're coming from, but I think it doesn't have to be more complicated than: "Are the two values standard Rust number types? If yes, they do simple multiplication on an asm level. Otherwise, check the relevant Mul implementation."


I disagree. Consider a generic function constrained by std::ops::Add: if you want to write generic code against that bound, you have to contend with types whose + might do simple addition or might do allocations with potential side effects.
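
A contrived sketch of that concern, with a made-up Buf type whose + allocates:

    use std::ops::Add;

    // `Buf` is invented for illustration: its `+` concatenates, i.e. allocates.
    #[derive(Clone)]
    struct Buf(Vec<u8>);

    impl Add for Buf {
        type Output = Buf;
        fn add(self, rhs: Buf) -> Buf {
            let mut out = self.0;
            out.extend_from_slice(&rhs.0); // may reallocate
            Buf(out)
        }
    }

    // The same generic body may compile to one add instruction (i32)
    // or to a reallocating copy (Buf); the call site doesn't say which.
    fn twice<T: Add<Output = T> + Clone>(x: T) -> T {
        x.clone() + x
    }

    fn main() {
        let _n = twice(21i32);            // plain integer add
        let _b = twice(Buf(vec![1, 2]));  // heap traffic behind `+`
    }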


Why would you constrain a generic with ops::Add, if you didn't want to specifically allow for generic implementations of +? If you just want to be generic over built-in integers, it would be as easy as a "trait Integer: Add + Sub + Mul + TryFrom<i32> + ... {}" that's implemented by the standard integer types and sealed off from outside implementations.
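
A minimal version of that sealing pattern might look like this (the names and the exact bound list here are illustrative, not a standard trait):

    use std::ops::{Add, Mul, Sub};

    mod sealed {
        pub trait Sealed {}
        impl Sealed for i32 {}
        impl Sealed for i64 {}
        impl Sealed for u32 {}
        impl Sealed for u64 {}
    }

    // Only the primitives listed above can satisfy this bound, so `+` inside
    // a generic constrained by `Integer` is always a plain machine add.
    trait Integer:
        sealed::Sealed + Copy + Add<Output = Self> + Sub<Output = Self> + Mul<Output = Self>
    {
    }

    impl Integer for i32 {}
    impl Integer for i64 {}
    impl Integer for u32 {}
    impl Integer for u64 {}

    fn triple<T: Integer>(x: T) -> T {
        x + x + x
    }

    fn main() {
        assert_eq!(triple(14i32), 42);
    }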


As I understand it, generics are just a particular mechanism for interfacing with multiple types, and C manages without them. For the same program, could a programmer really be more in the dark about which types are parameterized and which specific implementations get called in Rust than in C? I don't have much knowledge of C, admittedly, so this isn't a rhetorical question.


The whole point of generics is that you don’t know and don’t care what the type is.


> I don't think there is a single worse design decision in C++ or Rust than allowing someone to redefine what + means on some arbitrary type.

This has perplexed us for D. Experience with C++'s iostream operator overloading meant running away screaming. Another terrible thing is that people would code up DSLs using operator overloading, such as a regex language. The horror there is that the source code looks like ordinary C++ arithmetic, but it is actually doing regexes.

So, how to allow operator overloading for arithmetic, but not for other porpoises?

1. Only allow overloading of the arithmetic operators and [ ] (i.e. no overloading of unary *)

2. Only allow < to be overloaded, instead of all of < <= > >=. This enforces symmetry (see the comparison sketch after this list)

3. Don't allow overloading of && || ?:

4. A strong Compile Time Function Execution feature which enables DSLs in the form of string literals

5. Develop a culture in which operator overloading is for arithmetic

This has worked well.
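
For comparison, Rust's standard library lands on a similar rule to point 2: you write one partial_cmp and, by default, all four comparison operators fall out of it. A small illustrative example (the Version type is made up):

    use std::cmp::Ordering;

    #[derive(PartialEq)]
    struct Version(u32, u32);

    impl PartialOrd for Version {
        // One comparison function; by default <, <=, > and >= are all
        // derived from it, so they stay mutually consistent.
        fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
            Some((self.0, self.1).cmp(&(other.0, other.1)))
        }
    }

    fn main() {
        assert!(Version(1, 2) < Version(1, 10));
        assert!(Version(2, 0) >= Version(1, 99));
    }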


> Another terrible thing is people would code up DSL's using operator overloading, such as a Regex language.

There's nothing wrong with this if your language has proper hygienic macros. Then you can have math_expr![ … ], float_expr![ … ] (dangerous! order of operations may affect results), regex_expr![ … ], or even stream_concat_expr![ … ] all using the same operators while meaning completely different things, and you preserve complete extensibility. They would even be composable, since each macro invocation would desugar its own operators and leave those in other contained macros unaltered.


Macros form their own hell, hygienic or not. The reason is that these inevitably evolve into one's personal, undocumented, quirky, unmaintainable language.


This sounds great if you’re stuck inside the C-derivative mindset block. I.e. the best you can do.


> Rust's addition of operator overloading is also a perplexingly bad decision.

You know, I don't even have a side in this never-ending debate... but it's perplexing that similarly intelligent people, with similar interests and backgrounds, can make such confident blanket statements as this, one way or the other! IMO that is a pretty good indication that there's no right answer and it's simply a matter of preference... and the fact that people still feel like they're right, and that the people who disagree with them must be making "perplexingly bad decisions", is, for lack of a better word, hilarious.


> Rust's addition of operator overloading is also a perplexingly bad decision.

Yes. The poor man’s custom operators.


Eh, I think operator overloads make a lot of sense in the right context. If you created a type that’s a kind of mathematical object, you want a mathematical syntax for it. However, they are horrible when abused.


D is a real anomaly to me because it should have had the same trajectory as Rust: it vastly improved upon other systems languages at its conception, and the authors evangelized it, including at FAANG. And yet Rust seems to have gained traction everywhere D failed to, even in places where C++ has historically been shunned (the Linux kernel). I now believe that language adoption is just a product of the right news cycles and timing, and perhaps hype around the creators or a certain feature. I am sad we got Rust and not D. D is so much easier to grok as a C++ person, and I think Rust looks incredibly verbose.


I don't find it particularly surprising. D uses a garbage collector while C, C++ and Rust do not. D's GC can be disabled, but that isn't very useful when most D code, including (until just a few years ago) the standard library, was not written with that in mind.

D is much more closely a competitor of C# than it is of C++. D has a few nice features like advanced compile-time programming, but the actual nuts and bolts that staff-level engineering management looks at aren't really solid. D's GC is a design straight out of the 80s. Dmd has good compiler throughput, but the quality of the code it generates isn't very good. Ldc is much better, but compile times are much longer.

Adopting a language at FAANG, beyond a single team just YOLO-deploying it to production, requires integrating dozens of engineering systems for everything from post-mortem debugging to live profiling to authentication. The cost of doing this is on the order of tens of millions of dollars.

D just isn't suitable as a C or C++ replacement in the places that actually require one, and the cost to enable it in large companies isn't worth the incremental improvement it does offer in some areas.


Rust has memory safety without GC, and a from-scratch language design. D is an evolutionary development of C++ (which is also gaining new features of its own) with little to recommend it besides. A comparison with Carbon and cppfront is also instructive; note that neither of those has added GC to the language.


Culture matters. "Culture eats strategy for breakfast." Rust has a safety culture. Yes, it has a bunch of safety technology, but the technology doesn't decide how things are used. It would be perfectly legal for the Rust standard library to implement the IndexMut trait on slices such that it just YOLOs like a C++ index. Rust doesn't do that, not because the technology somehow forbids it - it does not - but because culturally it's anathema to them.


When I heard about D, it was often in connection with issues that seemed rather basic, like multiple mutually exclusive runtime libraries that made D libraries incompatible with each other from the start, or hard version breaks that were trying to solve fundamental issues but also caused projects to lag behind for years. Have you seen how long the Python 2 to 3 migration took? The news cycles didn't do anything to fix that mess either.


What year did you come to that conclusion?

D could have been something, but most people avoided it because of its commercial nature, i.e. not being Free software.


Timing is important. D was like year 2000, Rust 2015 or something? A lot had changed in the meantime.


Community is everything, and D leadership smothered theirs like a bag of kittens in the river.

