Also, by contributing to gedit (probably for free), you help this guy sell gedit on Mac:
If you read this, please don't buy gedit there; there is a free (but older) version here:
Maybe contribute to gedit and try to sell an even newer version for Mac?
Selling free software isn't a bad thing to do.
I wonder how the author of the article makes a living? It seems quite spiteful to try and undermine a fellow developer like that. Isn't anyone who works on GNOME without getting paid effectively working for Red Hat? They were a for-profit company last time I checked...
So what if someone does the labor of porting up-to-date gedit to OS X and wants to get paid for it? If it was that easy to keep it up to date, the free one would be up to date too.
It's absolutely fine to take money to properly maintain and port software. It does feel kind of scummy, though, if they're portraying themselves as the sole developer of said software, and that is the vibe I got from skimming their product page.
The licence allows it so it's ok.
It used to be just around one quarter in the 2.6-2.10 days; then it jumped to above 90% when they scared off the vast majority of the developer community during the TOPAZ (3.0) train wreck.
What was the name of the guy who was running around with the "GNOME UX" thing? Does anybody remember?
He has a point about keeping compatibility (this has not been something that has happened with gedit and plugins), but there has been a large community of third-party Python-based plugins for gedit, and that can only be a good thing.
AFAIK Gedit is GPL; at least, I'm not sure how this guy can sell this software without violating the license at the same time.
It's like people don't understand that there are many different free software licenses as well.
> A business can build a billion dollar company from gedit and not give the original author anything.
A billion dollar company which sells GPL software will have to provide the entire source and its modifications to their clients and make it available as GPL so that the client can exploit it commercially without giving a single cent to that billion dollar company.
Moreover, if you understand that this isn't a violation of the license, what were you saying in your original comment?
Of course, that assumes clients know how to support and, especially, compile the source code (P.S.: this barely happens).
And any modification of that code for free AFAIK.
The GPL isn't as anti-commercial as its detractors make it out to be.
Ever had to pay through the roof because there was a single vendor for something, like the single supermarket in town or the only coffee vendor onboard a train?
Sure, you get "value for money" as opposed to not having the thing at all. Just that value is not a binary, it's a spectrum, and you might get too little value for too much money.
Sure, if your supplier has a monopoly on the essentials of life that's a different matter, but I don't think we are anywhere near that.
Meaning the intersection between people likely to use Linux on the desktop and people who want an editor mostly geared toward the not-tech-savvy.
Not discounting gedit, but it feels like a very narrow niche to me.
I use vim for a lot of stuff, and IDEs like IntelliJ IDEA as well, but for simple stuff (like copy/pasting from websites, or editing Markdown documents or simple text documents) Gedit is really nice to have. It's pretty much what Notepad should have been on Windows by now: simple, straightforward, clean, but capable enough (syntax highlighting, line numbering, etc.).
My girlfriend who isn't a programmer uses it exclusively for text editing. A graphical desktop environment isn't complete without a simple GUI text editor like Gedit.
I guess it must have just been the clean simplicity of it. That was before I knew most of the keyboard shortcuts I use now, so maybe that's all it was.
It's not easy to write generic code.
A simple example would be a PRNG. Specialized version:

    int dice_roll = random(1, 6);

Generic version (using C++'s <random>):

    std::default_random_engine generator;
    std::uniform_int_distribution<int> distribution(1, 6);
    int dice_roll = distribution(generator);
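For completeness, a self-contained sketch of the generic approach (the engine here is just one possible choice; any standard engine could be swapped in, which is the whole point of the generic design):

```cpp
#include <random>

// Generic version: the engine (how random bits are produced) and the
// distribution (how those bits are shaped into a range) are separate,
// exchangeable components.
int roll_die() {
    static std::default_random_engine generator;
    std::uniform_int_distribution<int> distribution(1, 6);
    return distribution(generator);
}
```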
Often generic solutions are the best trade-off in the end, but one should not assume that more generic is automatically better.
I think the reliance of modern programmers on libraries which offer highly generic (and thus almost inevitably sub-optimal) solutions is one of the primary reasons for software bloat.
On the other hand, code can also be "generic" by avoiding decisions, parameters and special-cases. An example of this is Lisp's use of cons cells to represent data: regardless of the contents, and what it's meant to represent, we can always use `car` and `cdr`; they're "generic".
Compare this to a typical OO approach, like a `Customer` object with a `name` and `dateOfBirth`: we can't access those fields without either writing those particular strings (`name` and `dateOfBirth`) in our code; or use some complicated reflection approach to look up the names then use those to look up the data.
Of course, it depends completely on the context whether to use special-purpose datastructures (e.g. custom classes to model a domain) or generic ones (pairs, lists, maps, etc.), but generic doesn't always imply bloat and complexity.
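A rough C++ sketch of the contrast (the `Customer` fields are the ones from above; `std::pair` stands in for a cons cell, with `car`/`cdr` helpers added for the analogy):

```cpp
#include <string>
#include <utility>

// "Generic" representation: any pair can be taken apart with the same
// two accessors, regardless of what it represents (like car/cdr).
using Cell = std::pair<std::string, std::string>;

std::string car(const Cell& c) { return c.first; }
std::string cdr(const Cell& c) { return c.second; }

// "Specific" representation: each field has its own name, so client
// code must spell out exactly which field it wants.
struct Customer {
    std::string name;
    std::string dateOfBirth;
};
```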
One advantage of "specific" (as opposed to "generic") code, like explicit constructor/destructor pairs (e.g. named properties and accessors), is that by forcing client code to choose one particular, specific accessor, we have a lot more static information to use for optimisation; i.e. we've encapsulated the implementation behind an opaque interface, which we're free to implement in whichever way makes most efficient use of the resources we care about.
Furthermore I bet the generic, templated version is as small and fast as a naive implementation. It's all templates so it's all static code generation.
I.e. a 200% increase in code size! Now imagine that throughout the entire code base.
> Is 2 lines not an accepted tradeoff for being able to change the engine, and crucially, to draw numbers from any distribution you want?
It is a complete waste if you don't need that functionality. That is the point. Generic solutions solve problems you don't even have, and that comes at a price.
>Furthermore I bet the generic, templated version is as small and fast as a naive implementation. It's all templates so it's all static code generation.
You kinda missed the point. I did not even specify how random(a,b) was implemented, so making statements about the speed of the compiled code makes no sense here.
C++'s templates are another nice example of the cost of generic code, though. They are one of the primary reasons why compiling C++ is so slow and resource-intensive: they have to be instantiated at compile time again and again, which is a non-trivial process, much slower to compile than a plain non-generic function call. They are also historically infamous for producing hard-to-understand error messages.
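A minimal sketch of why instantiation adds compile-time work (this illustrates the mechanism, not a benchmark):

```cpp
#include <string>

// One generic definition...
template <typename T>
T twice(const T& x) {
    return x + x;
}
// ...but every distinct type it is used with (int, double,
// std::string, ...) forces the compiler to generate and compile a
// separate function, which is where the extra build time goes.
```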
I will install "Ubuntu MATE"; it comes with the GNOME 2 shell built with GTK3.
* Emphasis on undo instead of confirmation dialogs. Though, some applications got the implementation completely wrong. Contacts, I am looking at you: it delays deleting a contact instead of offering a real undo, which means that if you exit early no action is performed at all.
* Different colors for constructive / destructive actions used consistently across applications.
* Keyboard shortcuts window. Though, I don't understand why most applications insist on making it modal, which essentially prohibits looking at shortcuts and trying them out at the same time.
* Header bars. I prefer those over traditional menu bars, especially if the number of different actions to perform is limited. Though, portability suffers, as making them work in environments without client-side decorations requires some custom code; if the developer didn't take this into account, it probably doesn't work.
* In-app notifications (used, for example, for the undo I have already mentioned). Though, AFAIK this doesn't seem to be a built-in part of GTK yet and requires a little more custom code than other widgets.
* Empty placeholders! (Though, they might have been there already?)
Gnome needs solid lightweight email and messaging clients but I'm not sure I have the time to do it.
I find that, now that Gnome has builder, the thing holding me back is lack of documentation on how to get started with gnome dev (and use the numerous Gnome libraries).
Earlier this week there was some talk on HN about google's kubernetes having "documentation for geniuses" but it seems pretty approachable compared to the Gnome stuff.
I think it's mostly because I'm not a strong enough C programmer, and it's a tall order to document things like GObject well for n00bs, but if I knew how it worked I'd be happy to document it for others.
Maybe the lesson on that one is to work more with external apps instead of replacing them.
I'd actually be interested to read a public reply to a potential maintainer for one of their unmaintained projects.
The same misconceptions about the GPL again and again and again and again. The gist of the GPL is to give more freedom by limiting limitations, that is, you can compile a GPL software and sell it for a trillion bucks and nobody -not even RMS himself- will bark at you, provided that if you made any changes to the software you make their source available along with the compiled product.
The GPL is not about preventing users from selling the software, but rather about preventing users from imposing limitations on other users, which is exactly what commercial software licenses do. You can download a Debian image, stick it onto a USB memory and sell it for 100 bucks, but you don't get the right to prevent me from doing the same, or even sell it at half the price. That's the way the GPL promotes both openness and competition.
The rationale is that it uses someone else's free (as in free beer) work and sells it with very little added work.
For most people, it is easier to pay than to compile software.
Agreed, but it depends on what services you add to that piece of code; in any case, good information about the open nature of what is sold can help.
And considering that React Native can barely seem to keep the controls in sync on iOS and Android, trying to throw in a tiny minority platform like Gnome makes no sense at all. Better to build controls for raw X or even an OpenGL UI implementation. It's not like it's impossible to do the latter.
In fact, VS Code already runs mostly everywhere, has gotten a huge amount of community mindshare in an amazingly short period of time, and at this point I couldn't imagine ever using gedit for...well, anything, really. Why would I?
P.S. Not a downvoter.
 A friend is working on a project that's run into a number of "bug on Android, not on iOS" that, once patched, became a "bug on iOS and not on Android" issue. Rinse and repeat.
 https://kivy.org/#home http://www.fltk.org/index.php
I take it you've never used a Java Swing application.
Swing was, in fact, an absolute disaster.
As a dev who has used both React and a couple of native desktop UI libs, I get a fair bit of pushback on the whole Electron / app-in-browser idea. The primary concern seems to be performance, although I'm picking up an undercurrent of fear that the web stack could swallow the native realms, even if it is technically inferior, just because of the sheer inertia behind it.
In a way, Electron is a threat to the culture of the pure-native devs. Everything they know gets subsumed and disappears below the browser, and a horde of outsider devs come rushing into the native app space, with little awareness of the nuances of specific platforms and their communities.
The expansion of the industry means that those with little experience vastly outnumber those with plenty, and are highly incentivised to degrade the value of experience. That's why we see the explosion of new "frameworks": if a framework is only a year old, then someone with one year in the industry has as much "experience" as someone with 30. On paper.
That is why hardware gets faster and more reliable every year but software still gets slower and buggier. The HW side has somehow managed to keep this toxic culture in check, and their engineers, with experience on their side, are actually managing to advance their field, while the SW side keeps reinventing the wheel, a little squarer each time.
Earlier in my career, I worked primarily in a native development context. I concede in a heartbeat that Flash's buggy and insecure browser plugin was a vastly inferior user experience compared to the web standards we have now.
A platform's native tools are generally going to provide the best user experience. Mind you, they don't always provide the best DEVELOPER experience... so we're endlessly experimenting with new approaches and abstractions. But the pendulum always swings back around.
I mean, I'm fine with doing that for my favorite OS, but I'm not going to learn the details of everyone else's OS, not when I can ride above it all on a magic abstraction layer.
What? Sure you can do both.
Crap like systemd gets a ton of corporate funding, while tens of really beautiful apps get slowly neglected.
Not a good direction in general.
Desktop Linux seems stale or abandoned all over the place. There are a lot of updates in some corners - desktop environments like Mate and Budgie and the developer side of things like git, cmake, vim, GCC, Clang... they all get regular updates. But overall I feel like 10 years ago things were more exciting and would change all the time with new features and functionality. I remember being really excited to install the next 6-month release of Ubuntu. Now I run Arch and the only updates I seem to see are to python-setuptools.
Maybe it's the overall slowing down of the PC market. The people who should have grown into maintaining all this are doing something else. Like when the Mule messed up the Seldon Plan, smart phones have wrecked the Free Software plan. Everyone is too busy looking at Instagram or playing Candy Crush to write Free Software nowadays.
As I grow older I learn the value of mature software. Software that knows its place in the grander scheme of things. That does its job quietly and reliably, with clear information when something goes wrong.
Yes, to a young newcomer said software may look boring, may not be utilizing their latest GPU to draw rectangles on screen, but said software has been doing what it does reliably for a decade or more.
Damn it, just look at systemd-resolved. It keeps pulling the trigger on footgun after footgun because someone decided that he could do DNS better than existing, battle-tested software, and lumped it in with the rest of the systemd shoggoth for good measure.
What seems to happen with most of these projects is that the initial developer gets discouraged when he bumps into the remaining 10% of the functionality of what he is replacing. The 10% that is hard, and that has led to the large amount of gnarly code in the old and "stale" project he was gunning to replace in a caffeine-fueled weekend.
But sadly this treadmill that FOSS is saddled with will continue to spin, because as a culture software development has come to value LOC per hour over reliability. And we keep punting the old guard into middle management and replacing them with youngsters who hammer out said LOC rather than stopping to consider what the old code is actually doing.
You can disable this in logind.conf
> Desktop Linux seems stale or abandoned all over the place. … But overall I feel like 10 years ago things were more exciting and would change all the time with new features and functionality.
Sometimes it feels a bit like that. On the other hand a lot of things work a lot better than 10 years ago. Maybe we simply reached a point where you basically have what you need most of the time and new features are either not necessary, minor refinements instead of big jumps or more specialized so you don't notice them. Then the Linux desktop still has this fragmentation problem where a lot of work is duplicated unnecessarily and the problem of breaking libraries/frameworks where applications without the manpower to keep up simply degrade or outright die.
> Maybe it's the overall slowing down of the PC market. The people who should have grown into maintaining all this are doing something else. Like when the Mule messed up the Seldon Plan, smart phones have wrecked the Free Software plan. Everyone is too busy looking at Instagram or playing Candy Crush to write Free Software nowadays.
Yeah, the catastrophic failure of a true free software ecosystem on the smartphone, after such a promising start, may have pushed morale down. The desktop may also feel a bit boring to many now. It basically does the job, and you now have things like the web, smartphones, or incredibly cheap and capable embedded devices, which may seem more interesting to them.
Hell, it seems that only as systemd people started talking up nspawn as a way to do containers did the business side sit up and take notice. Before that it was mostly considered a bikeshedding runaround between nerds.
I also like journald; logging is a pleasure now, even to syslog and other syslog servers.
Well, what I don't get is timedatectl, i.e. systemd-timesyncd, and many more. They are mostly not needed for init itself and already had really good counterparts.
I haven't seen many vulnerabilities in the PID 1 part of systemd that concerned me. The last vuln I recall was the UID parsing issue, which isn't really exploitable.
The systemd-resolved vuln was pretty bad, but that's an optional component that I've never used anyway. (It's not used by default on Debian.)
Yeah yeah, Gnome can work without logind. But doing so is not easy and requires ongoing patches as new versions of Gnome roll out.
And Gnome is THE gorilla in the Linux DE world.
I like Sublime. I enjoy using Sublime. I paid for a Sublime license. Still, the appearance of its impending bitrot made me switch back to Emacs.
Gedit is now 18 years old; you can't expect its developers to go on forever, particularly if they now have actual babies and less free time to spend on Free Software.