"Selected highlights include:
* Support has been added for qcow2 images and external snapshots in vmm(4)/vmd(8).
* "join" has been added for Wi-Fi networks.
* Security enhancements include unveil(2), MAP_STACK, and RETGUARD. Meltdown/Spectre mitigations have been extended further, and SMT is disabled by default.
* rad(8) has replaced rtadvd(8).
* bgpd(8) has undergone numerous improvements, including the
addition of support for BGP Origin Validation (RFC 6811).
smtpd.conf(5) uses a new, more flexible grammar.
* For the first time, there are more than 10,000 (binary) packages (for amd64 and i386)." 
With a little work you can have your own caching DNS server, including domain blocks for tracking sites, and, if you want, a Privoxy or Squid proxy. It's also easy to set up your own root CA and switch to certificate-based authentication for wireless clients, as long as the wireless base station supports RADIUS.
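For the caching-DNS-with-blocks piece, unbound(8) from base can do it; the fragment below is only a sketch, with the addresses and blocked domain as placeholders:

```
# /var/unbound/etc/unbound.conf -- illustrative sketch, not a tested config
server:
	interface: 192.168.1.1
	access-control: 192.168.1.0/24 allow
	# "block" a tracking domain by answering NXDOMAIN locally
	local-zone: "tracker.example" always_nxdomain
```

Each tracking domain you want blocked gets its own local-zone line; lists of such domains can be generated from the usual blocklist sources.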
I haven't published a tech note on it yet because Android still complains about importing self-signed certificates even when you import the root CA.
One thing I wish that OpenBSD devs would change in their philosophy is the --help messages. Many commands simply offer a list of switches, as if that's somehow helpful. Sometimes you need the detail in a man page, but a lot of times you don't and it would save so much time and energy to have a succinct list in the --help message itself.
# syspatch --help
usage: syspatch [-c | -l | -R | -r]
One thing I really dislike about modern UNIXes is their lack of decent manpages, with stand-ins like --help in their place.
I love the BSDs and especially OpenBSD for their attention to manpages. It's the main reason why I don't use Linux anymore unless I have to.
Adding detailed --help messages would take time away from maintaining manpages, and it would also duplicate information. If you want to know what the switches do, read the manpage.
I completely agree. The difference in the quality of manpages between OpenBSD and Linux alone is enough for me to prefer OpenBSD.
With OpenBSD specifically, you can get 90% of the way there with chroots, standard process isolation, and a bit of shell scripting to handle deployment automation.
Yeah, Docker's cool, but it's really not that hard to run multiple applications/services on the same physical machine while keeping them from clobbering each other or the OS in general (step 1 being to make sure each service/daemon is running under its own minimally-privileged user).
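As a rough illustration of that step 1 plus the chroot part, here's a hedged sketch; the _myapp account name and the directory layout are invented for the example, and the privileged commands are left as comments:

```shell
#!/bin/sh
# Sketch of the "bit of shell scripting" approach, not a real recipe.

ROOT=${1:-./myapp-chroot}

# 1. A dedicated, minimally-privileged user (OpenBSD convention: _name).
#    Needs root, so shown as a comment here:
#      useradd -d /var/empty -s /sbin/nologin _myapp

# 2. The minimal filesystem tree the daemon needs inside its chroot.
mkdir -p "$ROOT/etc" "$ROOT/var/run"

# 3. Copy in only the files the service actually reads, e.g.:
#      cp /etc/resolv.conf "$ROOT/etc/"

echo "chroot skeleton created under $ROOT"
```

The hard part, as noted below, is deciding what actually belongs inside the chroot; the scripting itself is trivial.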
This is a classic case of "THAT HackerNews response to Dropbox" .
If it's that easy, why isn't there a prepackaged wrapper with simple switches, rather than leaving developers to fend for themselves among piles of custom hacks?
The problem is not just deployment, the Docker differentiation is simplification of the development pipeline. OpenBSD should seriously look at their story in that area, because it's one of the few where they could still potentially compete (because Docker is still fundamentally a pile of hacks, and pretty insecure too).
Unless, of course, everyone is happy to remain "the little project that could" and crack jokes like the BCHS stack.
I’m pretty sure the OpenBSD developers are completely comfortable with their story. They develop this software for themselves first. If you like it and it’s useful to you, you are welcome to it. If not, look elsewhere. That’s been their working philosophy all along, and if you ask me, that’s what makes it so great to use. Every piece of the system is carefully thought out and organized, so it doesn’t suffer from nearly as much feature creep as other systems.
And as far as setting up chroots and isolating processes: it’s not hard, and sometimes you don’t need someone else to write a script for you when you can do it yourself in 10 steps or less.
They were pretty comfortable with their patching story -- until enough people complained and lo, syspatch(8) appeared.
Beyond the posturing, nobody likes to run a project that nobody else uses; and sometimes even lusers are right.
> Every piece of the system is carefully thought out and organized
I am not saying they should rush out a crap docker clone, but rather a "carefully thought out and organized" docker alternative.
> you can do it yourself in 10 steps or less.
It's still 5x the steps you need with docker. As I said, it's about simple reproducibility rather than just isolation. Even if it were easy to write my own docker-compose (and indeed many people argued the same, when docker first emerged, because it actually was little more than shell scripts), having one well-defined set of tools helps tremendously with kickstarting adoption and to avoid reinventing the wheel every few weeks.
Most of the time on the openbsd email list, when a "user" suggests a feature or asks for a change to something, the reply is something along the lines of "sounds great, where is your patch?"
Edit: And in response to toyg's comment, here is a link to a video of a talk by the developer in question, who mentions in passing that he builds things for OpenBSD that help him put it into production. This is less than a minute into his talk. Please educate yourself about the project before making ridiculous demands on the devs' time and assigning false motives to them. It's unfriendly.
Oh, I am educated enough, don't worry. Which is why I was so surprised to see it finally adopt a solution for a problem that had been pointed out for 20 years, after spending those 20 years replying to everyone that it was just the wrong thing to do in principle.
> It's unfriendly
It's also unfriendly to gaslight away blatant problems, for whatever reason, until they get fixed -- at which point they are admitted as actual problems. Then again, OpenBSD is hardly a friendly project, culturally speaking.
A criticism that seems fair has been presented here in a benign, non-antagonizing manner, and I am very perplexed as to why all comments arguing in favor of that view have been anonymously downvoted wholesale without anything approaching a substantive explanation.
This is not the kind of behavior that (the) HN (community) is respected for.
Let me sum up what I see here.
- Someone argues in favor of Docker and is downvoted by enough people that their comment turned grey. I think this means it's at -5 or -10 or something. So, no explanation, no comments; just downvotes.
- The one reply that goes into a bit detail comes from a traditionalist UNIX standpoint, and is a bit passive-aggressive. (This comment isn't grey.)
- The next reply frames the parent as "THAT HackerNews response to Dropbox", and points out that the implied simplicity, and the sense that there is only one obvious way to solve the problem, doesn't actually hold, and that significant wheel-reinvention must be done (and presumably has been) "on the ground". Docker's simplicity is highlighted along with its insecurity. This comment is grey.
- The next reply further brushes off the stated arguments by (passive-aggressively) noting that the project seems successful enough, and maybe that's because they actually have it figured out. (This comment isn't grey.)
- I read the next reply as a gentle reminder of the importance of remaining relevant going forward - and the fact that this doesn't necessarily mean ground-up reinvention. This comment is also grey.
What is going on here?!
A nontrivial number of comments in this thread, and the other OpenBSD threads I've seen, are basically all chanting about OpenBSD's perfection.
Good customer service, good social skills and forward thinking are some of the most fundamental aspects of commercial success. Does open source think it can get away with "no shirt, no shoes" just because it's free? :(
It required a service set up by the project itself. And this after years (decades?) of explicitly rejecting the concept of automated patching (because it supposedly engendered "a false sense of security"). Come on.
> ...avoid reinventing the wheel every few weeks
This is pretty ironic in this context.
In my experience, "simplification" and "Docker" don't really belong in the same sentence. Yeah, it's (maybe) simpler if you're just plugging things together that are already Dockerized, but if you're writing your own Dockerfiles, none of the actual complexity really goes away. If anything, you're making things even more complex by containerizing things that don't actually need to be containerized.
"why isn't there a prepackage wrapper"
Because nobody's gotten around to writing one yet, or perhaps because nobody's felt the need to do so yet. Not much stopping anyone from doing so. That's where the "bit of shell scripting" comes in. Writing an rc.d script (using rc.subr to do away with the boilerplate normally associated with the sorts of initscripts normally strawmanned by systemd advocates) ain't any harder than writing a Dockerfile (in my opinion it's actually much nicer/simpler). Neither is creating a user under which your app will run. Hardest part will probably be around deciding what needs to go in your chroot.
Hell, if you include OpenBSD packaging as part of that development pipeline, then tada, you're pretty much there. Install the package, run "rcctl enable your-app-name && rcctl start your-app-name", and you're good to go.
So the trick here would then be to extend that to install and run multiple isolated copies of that package, each instance having its own configuration and chroot. Or perhaps using a single package and writing your service/app such that it does the forking/chrooting for each isolated environment (which is what quite a few OpenBSD-focused daemons already do, from what I can tell/observe).
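For reference, the rc.subr boilerplate really is small. A hypothetical /etc/rc.d/myapp (the daemon path, flags, and _myapp user are all placeholders, not a real package) looks roughly like:

```
#!/bin/ksh
#
# /etc/rc.d/myapp -- hypothetical rc.d(8) script sketch

daemon="/usr/local/bin/myapp"
daemon_flags="-c /etc/myapp.conf"
daemon_user="_myapp"

. /etc/rc.d/rc.subr

rc_cmd $1
```

rc.subr supplies the start/stop/restart/check plumbing, so the script itself is just these variable assignments.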
The overall point, though, is that comparing Docker to Dropbox is erroneous. Dropbox actually was simpler/easier than the "solution" posed by that comment. Compared to the OpenBSD way of doing things, Docker is not; if anything, it's more complexity.
Don't get me wrong, sure it has its uses, but to pretend that Docker(TM) is just as big a leap (or as essential) as any of those things is laughable.
Since I believe they don't accept code patches without relevant man page patches that explain them if they alter the behavior or add/modify an option, this seems like a sane way to avoid bikeshedding on what is essentially superfluous information, since the "appropriate" amount of info to include is ambiguous.
That's not the dispute. It's a question of how much you value brevity.
# syspatch --help
usage: syspatch [-c | -l | -R | -r]
looks quite friendly to me, but YMMV.
This is the 45th release of OpenBSD!
I'm quite amazed that there exists hardware where they can test this! Maybe there are some embedded systems still using Motorola chips?
Anyway, OpenBSD is great. I'm running it on my router and it also powers my 96 MB RAM dual Pentium Pro 200 MHz computer from the '90s :) That computer also has a Quantum Fireball 20 GB disk as its main storage, another thing I am amazed still runs..
Donate to this project, it deserves it!
OpenBSD also used to run on Motorola's 88k VME boards, but the mvme88k port was discontinued after 5.5.
If you have any spare parts/systems, I suspect aoyama@ would be interested in hearing from you.
Is this on all architectures or just Intel's Hyper-Threading? I'd imagine that other CPUs with hardware threads (especially the 4- and 8-way SPARC T series) would be quite hobbled in terms of performance by this change.
 - https://marc.info/?l=openbsd-tech&m=153504937925732
When I want new packages, I'll upgrade the operating system. I very much appreciate the stability of packages during a release cycle.
doas pkg_add -u
I don't think I'll run OpenBSD as a desktop OS unless performance drastically improves, but it's staying on my router for the foreseeable future.
(Specifically the part "instruct the boot loader to boot this kernel" because it says to type in the file name during the boot process, which is not exactly easy on a remote machine.)
I have a lot of respect for the OpenBSD devs, given that people don't contribute back much even if they use OpenSSH every day, but a bit more friendliness wouldn't hurt to let people try it out more.
There is an in-between case where you have a console, but do not control the boot loader (so cannot boot bsd.rd) and cannot boot an iso. It sounds like maybe you are in this situation? In this case, you can still follow the upgrade directions as if you do not have a console. Alternately, sometimes when I am in this situation I just download the install kernel (bsd.rd), move it where the boot loader is hard-coded to look (/bsd), and then reboot. The boot loader will boot the install kernel and you can follow the usual / common upgrade procedure on the console.
There is also autoinstall, which can automate the upgrade procedure for you and reduces upgrades to just rebooting into bsd.rd and waiting a bit. There is a bit of effort to create the response file, etc., so this may be overkill for a single instance but is very useful for upgrading fleets of machines quickly.
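To give a sense of scale, the response file is just question/answer pairs mirroring the installer prompts; something along these lines, with the server name a placeholder and the exact question wording varying by release:

```
Location of sets = http
HTTP Server = ftp.example.org
Set name(s) = all
```

Served over HTTP (or placed where autoinstall looks for it), this lets bsd.rd complete the upgrade unattended.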
If so, download the new bsd.rd image and place it in /
Reboot, and at the boot prompt type:
Then follow the prompts.
For example, things that I need that it does not have (last I checked) include: filesystem-neutral nmount(), POSIX RT signals, the "new" 1990s dynamic PTY allocation system, a KDGETMODE ioctl on wscons, waitid(), fexecve(), and ACLs.
And then there are the things that would make life easier to not have to bodge around: const-correct ncurses API (available in ncurses since 1997), const-correct login_cap API, const-correct sysctl(), and no multiple evaluations in EV_SET().
It would be very welcome for it to gain all of these, but until then there is no "it has all that you need" realization on the cards.
I'm tempted to use OpenBSD as my OS, but I need to run things like MS Office. A VM is probably the easiest way to do that.
I cannot comment on whether it is suitable for running VMs at this time.
QEMU is available in packages/ports, though; while almost certainly slow, it's a start. VirtualBox on Linux requires a kernel module, so unless someone manages to port that to OpenBSD (which would translate to adding it to OpenBSD's kernel, which doesn't support loadable kernel modules anymore), that one probably ain't an option.
As far as I know this is not possible. You can always do it with qemu but it wouldn't be practical.
If libreoffice is not appropriate for your needs, you may try googledocs inside the browser.
I have been a user of LibreOffice since it was called StarOffice. It is dog slow, still has an ugly UI, crashes, etc. But for many things it has been, and is, good enough (and keeps getting better). Unfortunately, customers want to use Word-specific files, Excel-specific files with macros, and whatnot that just don't work in Libre. Google does not handle these files correctly either. It is not a fault of these alternatives, but of a world where people don't realize that they are being locked in. They use the features of the tools they have to solve their problems.
(I guess this became a bit off-tangent)
So what I was hoping to do, and the reason for asking about the state of running VMs in OpenBSD, is of course to use it as the main OS, and run the other OSes I need in VMs. Basically as I do today. I would be close to home in terms of userland experience and get good (better) security.
BTW: I found this presentation by Mike Larkin from March 2018 about the state of vmm:
Or did you mean that using a VM is a heavy way to try a system out?
Battery life under OpenBSD is atrocious compared even to Linux.
It's a great server OS but it's not ready for laptop daily driver use unless your laptop just sits plugged in all day.
They've been adding drivers like crazy for a while now, too, so there are many more hardware choices than there were a couple years ago.
OS X and Windows are better GUI, OpenBSD is a better place for a lot of my work... basically anything that I’d use emacs for.
Guess I should give OBSD another chance...
Any growing project will always suffer from consistency problems -- humans just aren't capable of scaling to the point where a single hierarchy can consistently manage a monstrously large system. Linux doesn't have a 'base system' quite like OpenBSD: coreutils, libc, bash, and 20 other similar packages all come from a large variety of differently minded maintainers across the world.
Viewed from a macro scale, OpenBSD's consistency breaks down immediately upon starting to install stuff from the ports collection - sure the base system makes a great firewall and basic HTTP server, but to use anything popular you pretty much immediately end up with ports. And the quality standards there are identical to Linux land, because it's the same code.
sendmsg(2), sendto(2), recvfrom(2) and recvmsg(2) are run without KERNEL_LOCK.
The lock is being removed, slowly with care, piece by piece.
Yes, and goals. OpenBSD is much more conservative with regard to features, and focuses on security, and that results in a more secure system overall. If you need to eke out every last percent of performance, use something else. If you want to worry less about security on a system that is fairly exposed (e.g. a firewall), then it can be an extremely good fit. For example, here's a comparison of security advisories for the main OpenBSD, FreeBSD and Linux projects (that is, not separated and exported items like OpenSSH). While I'm sure these lists have their problems, they are interesting. Specifically, the number of exploits column...
* Electron apps - VSCode, Atom, Slack, almost every universal desktop app that gets released today. Individual ports of apps to FreeBSD exist, but there is no way to automatically build Electron apps for any of the BSDs.
* Good desktop virtualization - KVM, VirtualBox (okay FreeBSD has these in theory), VMware
* First-class Docker and desktop container support (FreeBSD jails exist but there is no container ecosystem like there is on Linux)
You can still run Firefox, mail clients, vim/emacs, Unix utilities and LibreOffice on any of the BSDs just fine. They're lacking other niceties, however. And that's a bad thing in my opinion, although it's mostly not the fault of any of these projects. Some people think BSD is better for lacking those options, but I, for one, can't live without them.
It's unfortunate: Linux and the BSDs used to have more or less the same application support. Anything Unix-y ran on anything Unix-y. There was nothing stopping you from having an OpenBSD desktop almost identical in function as, say, a Xubuntu desktop - one that looked much cleaner on the inside. But once broader commercial interest started happening for Linux, the BSDs were mostly left by the wayside in application support.
Now what exactly created this divide to begin with is a matter of debate but it took place mostly in the 90's and the very early 00's. By the mid 2000's it was clear that BSDs would have a really hard time ever catching up with Linux, especially for non-server applications.
Paradoxically it might be partially why BSDs are so clean and tidy: fewer features, fewer contributors and of course they're making a complete operating system instead of just a kernel or just a distribution.
FreeBSD remains my favourite OS for servers; it's rock solid and a joy to administer. Unfortunately, for the desktop I gave up almost a decade ago; driver support is just too lackluster, especially (as you mention) for proprietary software that can't be easily ported.
The Linux users gave back (often because they had to) and the BSD users often didn't.
So in the long run it is back to the old freeware days, with the lower layer free and everything on top closed, with zero contributions.
Sendmail, Postfix and qmail all had BSD-ish licenses, and those three covered a large majority of all open-source mail serving in those same early BSD-vs-Linux years.
So, nah, it's not the license.
The areas where OpenBSD can compete (in addition to the ones it already fights in, like routing and network-edge roles) are cloud deployments and orchestration. If it were as easy to run OpenBSD for development as it is a containerized Linux, people would pick the more secure choice. They could also make prebuilt appliances for the most sensitive components in a deployment (databases etc).
vmctl create disk1.img -s 10G
vmctl start vm1 -d disk1.img -b /bsd.rd -L -c
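If you want that VM to survive reboots, the same thing can be declared in vm.conf(5) so vmd(8) starts it at boot; a rough sketch, with paths and names as placeholders:

```
# /etc/vm.conf -- illustrative sketch
vm "vm1" {
	memory 512M
	disk "/home/vm/disk1.img"
	interface { switch "uplink" }
}

switch "uplink" {
	interface bridge0
}
```

The vmctl invocations above are then only needed for the one-time install (booting bsd.rd against the fresh disk image).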
Good! On FreeBSD, for now, sanity prevails!
This is one of many reasons not to use Electron. Claims of Electron portability are a sham.
However, when it comes to running scientific applications and squeezing out the last bits of performance, or servers where people expect stuff to "just work, and if it doesn't, apt-get blah", it's Linux that takes the cake.
FreeBSD and ZFS - the best I've worked with for Network Attached Storage. Can't believe it is just free.
ZFS file server: https://aravindh.net/post/zfs_fileserver/
ZFS file server performance: https://aravindh.net/post/zfs_performance/
Linux has become a chaotic place to be, I agree.
Linux is terribly balkanized what with all the competing distros. There are no standards. This is one reason why Linux has not taken off on the desktop outside of tech circles.
systemd. It violates almost every *nix tenet out there, especially "a program should do one thing well". It has a few benefits, but the negatives outweigh them, namely that more and more programs outside of the base system now require systemd. This should never be. I and millions of others agree that an init system should be tweakable as text files. Not happening now. The logs are stored as binaries when they should be plain text. Debugging is more difficult. A program should do one thing well. Full stop. There is always BSD and Slackware, and I don't think Slackware will adopt systemd, as their user base doesn't want it. Slackware is the oldest currently-developed Linux distro out there and the vast majority of users want it to remain true to its roots while still advancing.
I _really_ enjoy being able to apt-get or brew install pretty much any of the applications out there and am a bit worried about how that experience would be on OpenBSD. I guess the best way to find out would be to try it eh? :)
Personally, I’d be more interested in having OpenBSD as the standard for cloud deployments, a place currently inhabited by Ubuntu derivatives. If one could get the declarative goodness of Nix, the popularity of Docker, and the reliability/security of OpenBSD, the world would be a better place.
It basically requires you to copy the firmware to a USB stick and run the tool, and then wireless just works.
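If memory serves, the local-install variant is roughly the following; the device name and mount point are assumptions, and fw_update(1)'s -p flag points it at a local path instead of the network:

```
mount /dev/sd1i /mnt       # the USB stick holding the firmware files
fw_update -p /mnt          # install from the local path
umount /mnt
```

On a machine with working networking, a bare fw_update after first boot does the same thing without the stick.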
I ended up having to switch to Slackware, though; lots of stuff I have to use for work that simply doesn't run on OpenBSD. If vmm gets to the point where I'm able to run Linux desktop apps with reasonable performance (and ideally a reasonable degree of integration with the rest of the system) I might try switching back.
You can search NYC*BUG board, where users submit their dmesg(8) output, e.g. there is a submission for Dell Latitude e7270.
My most recent laptop upgrade ran like this: http://bsdly.blogspot.com/2017/07/openbsd-and-modern-laptop....
OpenBSD is great, of course, but my favorite thing about open source is the cross-collaboration. Dumping literal decades of constant refinement to Linux and its ecosystem just because OpenBSD is much more elegant seems unwise. There are things Linux and the general Linux stacks could learn from OpenBSD (and should).
I don't think the world would be better if OpenBSD were in the position Linux is in today (though I have a hunch the security story around FOSS stacks would be better.) I think there is a lot of good that Linux can do with its more fast-and-loose organizational structure.
Which makes half of the news today. So, yes, the world would be a bit better today if only ..
Of course, a base OpenBSD installation is wonderful to work with and the damage can be minimized through the careful selection of packages.
I'm not sure that's fair.
Linux has snowballed, whereas OpenBSD hasn't. Linux is a much larger system. Is it surprising you find OpenBSD to be tidier?
Companies extend the BSD OSs with proprietary additions, then abandon the work and it gets lost. With Linux, everyone is forced to play nice and release under the GPL, and the work gets to live as long as people value it.
Hence the Linux kernel snowballing and taking over the world, whereas the BSDs have not, despite being at least as strong technically.
The better Linux gets, the more people target it, and the virtuous cycle continues.
Additionally, as it grows bigger and bigger, the more of the proprietary competition it crushes underfoot (sorry Solaris), and the more traction FOSS gets.
1. The BSDs' adoption was severely hurt in the 90s by the AT&T lawsuit; it basically stopped several years of development while the lawsuit's legal status was clarified. Linux and the GNU tooling didn't have that problem. Had the lawsuit never taken place, it is doubtful whether the Linux kernel would have gotten as much interest as it did at the time.
2. If you don't contribute your changes back upstream (whatever the license), eventually you will be left behind and have to maintain your own incompatible version. It is in your interest to send patches upstream.
3. GPL code gets stolen all the time and put into proprietary software. I've worked at loads of places that have just straight-out cut and pasted GPL code into their own product (usually this is done without management's approval). With large projects such as the Linux kernel, companies can't really get away with it. However, a lot of companies don't build software for the masses; most build bespoke software that is only deployed on one or two servers on a company intranet, and the general public will never see it. A lot of developers will just straight up steal code (not caring about the license) from wherever. A surprising number of companies still don't even use source control, let alone bother reviewing code.
4. Companies do contribute back to BSD-licensed projects; however, this is normally financial, not through patches.
I thought that, in this case, it's the company itself who is the "user" of the software and isn't obligated to do anything to/for/about upstream (since there is no stream: they're not redistributing it). In that sense, they're not stealing anything, just using what was, explicitly, free to use.
You would be right if it was PHP / Python or something else that was interpreted.
The real point to take away is that any modifications will never reach upstream.
How is it any different from an individual making changes to GPL software and using it (compiled or interpreted) on a personal computer? Surely that individual isn't obligated to share anything, either.
> The real point to take away is that any modifications will never reach upstream.
The GPL doesn't mention, AFAIK, any such concept. I thought the point was freedom for users of software, not implied benefit to some "upstream" programmer.
IOW, since the GPL is about the user, it's about protecting "downstream" and, without redistribution, there's none to protect.
>The GPL doesn't mention, AFAIK, any such concept. I thought the point was freedom for users of software, not implied benefit to some "upstream" programmer.
The OP specifically said that one of the benefits of the GPL is people had to contribute back because they have to make the code public. As we have discovered they don't.
Indeed, that wasn't at all clear. The comment to which I was responding only used the word "company" (singular and plural), without any modifiers, which I read as describing the same party.
That clears up some of my confusion, since that's an obvious violation (assuming source code wasn't available to those same 3rd parties, which you also didn't explicitly state).
> The OP specifically said that one of the benefits of the GPL is people had to contribute back because they have to make the code public.
Such an assertion (which I see in neither ancestor comments nor the article) still seems mistaken, so perhaps it's a strawman?
The GPL, IIUC, is meant to protect the user, aka downstream, not provide benefits to "upstream". If the binary itself isn't made public, then the source code need not be, either (though I suppose the user/customer in your scenario would have the freedom to choose to make it public, they have no obligation and little, if any, incentive).
> Companies extend the BSD OSs with proprietary additions, then abandon the work and it gets lost. With Linux, everyone is forced to play nice and release under the GPL, and the work gets to live as long as people value it.
'Secret' purely internal use of modified GPL software is not a violation - if the modified software is never distributed publicly, there's no issue.
(The Affero GPL licence is different in that regard, and was developed as a response to the software-as-a-service trend, but we're discussing the plain old GPL.)
Imperfect enforcement is a valid point, but the terms of the GPL are effective at least some of the time. Major technology companies do not want copyright scandals, even if plenty of fly-by-night companies are willing to risk it.
As the parent pointed out, purely-internal isn't what he meant. Distribution can be non-public, which is distribution nonetheless. Such distribution would require availability of source, but that availability wouldn't be public, if the original distribution wasn't public.
The parent seems to be focusing on "theft" (GPL violation) by relatively-unknown companies, which didn't necessarily occur. It's plausible that it did, but, even if the violation were corrected, since that correction doesn't require public release of source code, is likely irrelevant to the overall discussion.
You seem to be focusing only on publically-released software, which may or may not be the majority (by whatever measure).
I have no "side" in this, just trying to understand the points, which I've failed to grasp. Are you talk past each other?
The point that you keep on ignoring is that the OP said "companies have to contribute back". One of my points is that they don't even do it though they legally should.
License arguments weren't the point of my response. The point is that people will abuse goodwill, and pretending that it doesn't happen is naive.
You didn't say so, and, even now, you're only implicitly saying there was a GPL violation. The details are important, in order to further understanding.
> The point that you keep on ignoring is that the OP said "companies have to contribute back".
I'm pretty sure I'm not ignoring it, because it didn't happen. That's likely the source of my confusion. You've certainly said so repeatedly, but I'm missing where anyone else in the conversation has said so (hence my thinking it's a strawman).
> One of my points is that they don't even do it though they legally should.
This does sound like you are, again, saying there are circumstances where contributing "back" is legally required, which is the assertion that prompted my own original response. I don't believe those circumstances ever exist. The only obligation is providing (contributing) source code forward. Only when "forward" is the public at large does that end up being, as a side effect, "back".
> The point is that people will abuse goodwill and pretending that it doesn't happen is naive.
I doubt anyone here is actually naive enough to believe it never happens, but there may be a belief that it's rare or exceptional. Without large-sample-size evidence, this can be short-circuited to the usual cynicism vs. "people are basically good" argument.
NOPE. The context is the original poster's words. We are talking about that, and I am saying that contributing back doesn't happen magically because of the GPL.
I suggest you learn to keep the context of the argument in mind rather than keep focusing on being pedantic.
I am done with this conversation now.
Since those words never mentioned contributing back, I was, understandably, confused.
> I am saying that contributing back doesn't happen magically because of the GPL.
I don't see where anyone was saying otherwise, ergo you're arguing against a strawman.
> keep the context of the argument in mind rather than keep focusing on being pedantic
That only works for making a (counter-)argument, not attempting to understand the argument(s) in the first place.
In the instant case, I'm now convinced that any disagreement was based on a flawed premise, or there was no disagreement at all. My understanding of how the GPL functions (and is intended to function) remains unchanged.
> Yes, you're reading me right, but I skipped over the question of software which is never released publicly - mmt is correct that the GPL does not require public release of works, instead it prevents you from releasing the binaries while withholding the source.
The fact is, it doesn't prevent you from doing that. It only really prevents large companies that people are watching.
Violations happen all the time. They just happen on smaller GPL projects.
> 'Secret' purely internal use of modified GPL software is not a violation - if the modified software is never distributed publicly, there's no issue.
The impression I get is that GPL advocates like yourself seem to think that the unwashed developers that work on proprietary code don't understand the GPL and have to be constantly told how a software license works. You aren't an enlightened individual because you understand a software license. I understand the license and the arguments about it just fine.
There is an issue with fixes not going back to upstream. If a defect fix happens only downstream, but is generic enough that it should benefit everyone, then only downstream benefits; these things don't get contributed back, and upstream never improves.
> Imperfect enforcement is a valid point, but the terms of the GPL are effective at least some of the time. Major technology companies do not want copyright scandals, even if plenty of fly-by-night companies are willing to risk it.
The fly-by-night companies, as you put it, are the majority, not the minority. If it isn't a big project, most companies won't get found out.
Again, the GPL doesn't magically make people contribute back, which was my original disagreement with your comment.
That's my understanding, as well, but that was never asserted, only "play nice", (re-)"release under the GPL", and "gets to live as long as people value it" as you were able to quote upthread.
This seems like contributing forward, not back, or downstream, not upstream.
The eventual effect, for publicly-released software, usually ends up being an upstream contribution, but that's not automatic.
I'm not advocating any particular license, but it does seem like you're responding to a strawman that nobody in this subthread (GPL advocate or not) has argued.
Apparently I am not responding to a strawman, since the OP actually replied and said my assessment of what they said was correct.
This is why I dislike speaking to the FSF crowd. It is much like a religion (as far as I am concerned).
Licenses don't work like you think they do. In fact, they work backwards: the decision whether to release the source or not doesn't depend on the license, it's the license - and thus the choice of existing software to base your work on - that depends on the decision on whether not to release the source.
BSD makes it possible to release your changes if - and when - you see fit. GPL doesn't. That's why companies like Sony or Juniper couldn't base their products on Linux. Sure, Sony doesn't give back - but e.g. Juniper does.
This is Torvalds' idea, not mine (though he doesn't speak to Linux-vs-BSD directly).
> Sure, Sony doesn't give back - but e.g. Juniper does.
But in aggregate, Linux has taken over the world, and BSD hasn't. The 'snowball' effect is real.
I hope they didn't. But if they try, triple check, or better rewrite :)
These systems do things that Linux just can't.
Linux is no RTOS, nor does it have a pure microkernel architecture. It can't be used anywhere latency guarantees are needed (hard realtime), nor for high assurance, nor anything with an actual semblance of security.
I suggest these posts:
There are more than enough technical reasons to ditch Linux for a cleaner design.
I don't see any proprietary Unix seriously competing with Linux any time soon though.
As for competition with proprietary UNIX, probably not against AIX or HP-UX, as they survive on maintenance contracts for big customers like banks and telecommunications companies, but Linux is still no match for high-integrity computing OSes, many of which are microkernels with a POSIX userspace available as a possible API.
Yes, I know - but the software which ends up getting deployed/used/sold will be proprietary forks, right? That's the whole point: the license is friendly to proprietary forks.
> Linux is still no match for high integrity computing OSes
Sure, VxWorks is safe.
That goes beyond strawman.
Try reading: https://microkerneldude.wordpress.com/2018/08/23/microkernel...
For as long as Linux is a monolithic design, the whole kernel is going to be part of the TCB.
And thus, you should be skeptical of any claims of Linux providing security.
It's a strawman when I agree with someone?
> And thus, you should be skeptical of any claims of Linux providing security.
When did I say Linux has perfect security?
I presume the point you're trying to make is that different operating systems, with different architectures, can beat Linux in various regards. Of course this is correct. But for a general-purpose multi-platform Unix-like OS, Linux is king, and will be for the foreseeable future.
Citation needed, or did you only hear that somewhere?
Not even going to poke at the rest of your statements.