The upgrade process works well, but it's much more manual than it is in the Linux world.
When PkgBase becomes a thing, I hope that the process will be closer to the Linux world.
The other thing that hurts the upgrade process is the lack of packaged software. My big pain point is currently PostgreSQL 12 + PostGIS 3.0. That combination just isn't available in FreeBSD: the packaged/ported PostGIS 3.0 is built for PostgreSQL 11. My solution has been to build PostgreSQL 12 from ports (because the packaged version doesn't include XML support, which I also want), then build PostGIS 3.0 from source after installing all its dependencies from packages. I have to rebuild PostGIS after most FreeBSD upgrades, since it links against now-changed libraries. If I were doing this in the OpenSuse world, I would just use the Applications:Geo repository and be able to install the right combination of software very easily.
Because of these issues, I'm slowly moving towards using FreeBSD to host VMs running whatever OS/distribution is most appropriate for the job. Moving PostgreSQL off of FreeBSD will greatly simplify my upgrade process.
It should be easier though.
My personal systems aren't where they should be for CM, but I still can't think of a good reason to update in place rather than rebuild and copy over configuration/data.
Flavors should make it easier to operate with packages instead of ports, but I don't have any idea what's going on there. If I were supporting a large number of systems, I'd want FreeBSD over Linux any day of the week and twice on Sundays because the ports system doesn't make you choose between a stable OS branch and current software.
The current process for upgrading FreeBSD, assuming that you're just using packages and not ports, is roughly:
* freebsd-update upgrade
* freebsd-update install
* pkg upgrade
* (optional) reboot
In comparison, the process for upgrading OpenSuse Tumbleweed, assuming that you're just using packaged software, is:
* zypper dup
If the base system were managed by pkg, the FreeBSD version of the process should probably look a lot like the OpenSuse Tumbleweed version. Condensing upgrades down to a two-step process seems like it will be a significant quality-of-life improvement to me.
For base updates, we'd usually shut down services, install kernel and userland, and reboot. I know this isn't the proper order, but most of the time the new userland is compatible enough to reboot (and you can usually power-cycle if that doesn't work out). For updates where rebooting that way would be risky, we'd do it the right way, but then you need an extra reboot after installing userland, or you have to restart all the services from base. If needed, we'd install the compat packages after the reboot. We would not update ports/pkg/etc. at the same time; old binaries are expected to continue to work on a new system (and generally do). In-house binaries are either compiled on the system they run on, or centrally on a machine (or chroot/jail) running the oldest release in the fleet.
When you need to upgrade software not from base, because of security updates or new features or ennui, you do that isolated from base changes. The only complication here is if you haven't updated base recently enough, pkg may provide binaries you can't run, and ports may provide software you can't compile (this one takes longer).
We would generally only update base for exceptional releases, or when it became hard to update non-base software on older releases. However, new machine installs would get the most recent release we had tested.
At least, I have heard this explanation in the context of OpenBSD, and assume it may also apply to FreeBSD.
To solve this, the updater could create a new boot environment, install everything in that boot environment using as many passes as required, then reboot into that environment. OpenSuse already does a similar thing, though I don't know how well it works.
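On FreeBSD with ZFS-on-root, bectl(8) already provides the building blocks for this. A rough sketch of what such an updater might do — the boot-environment name, the `be_upgrade` function, and the `RUN=echo` dry-run knob are my own inventions, and the actual multi-pass install step is left as a placeholder:

```shell
#!/bin/sh
# Sketch only. Dry run by default: RUN=echo just prints each command.
# Set RUN="" (and run as root on a ZFS-on-root system) to execute for real.
RUN=${RUN:-echo}

be_upgrade() {
    be=${1:?usage: be_upgrade <boot-environment-name>}
    $RUN bectl create "$be"        # clone the current root into a new BE
    $RUN bectl mount "$be" /mnt    # mount it for offline modification
    # ...run the updater against /mnt in as many passes as required...
    $RUN bectl umount "$be"
    $RUN bectl activate "$be"      # next boot lands in the new BE
    $RUN shutdown -r now           # reboot into the upgraded environment
}

be_upgrade "upgrade-demo"
```

The nice property is that the old boot environment stays intact, so a bad upgrade can be rolled back by activating the previous BE from the loader menu.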
Any time a new syscall is wrapped the userland maintainers need to account for what happens when run on an old kernel and typically have some fallback logic.
At this point, I've only got two services (postgresql and mosquitto) which run on bare metal, everything else is in VMs. Once I move those off of FreeBSD the upgrade process will be greatly simplified (well, as simplified as FreeBSD gets anyway).
I'm sure I could work around it, but then I start to ask myself whether I'd rather do FreeBSD software administration or create an OpenSuse VM, install the packaged software (which is already the right version and already has the right config options), and work on my GIS project.
If you tell make.conf that postgres12 is the default, it will compile postgis against that version; same with php/ruby/mysql and so on.
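For anyone unfamiliar, the knob the ports tree reads for this is DEFAULT_VERSIONS. A minimal make.conf fragment — the version numbers here are just examples, adjust to taste:

```
# /etc/make.conf -- hypothetical fragment
DEFAULT_VERSIONS+= pgsql=12
DEFAULT_VERSIONS+= php=7.4 ruby=2.6 mysql=8.0
```

With that set, building PostGIS from ports should pull in PostgreSQL 12 as its dependency instead of the default version.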
Incidentally I wrote a shell script that does this.
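A minimal sketch of such a wrapper, mirroring the four steps listed above — the function name and the `RUN=echo` dry-run default are mine, not the author's actual script:

```shell
#!/bin/sh
# Sketch only. Dry run by default: RUN=echo just prints each command.
# Set RUN="" (and run as root) to actually perform the upgrade.
RUN=${RUN:-echo}

fbsd_upgrade() {
    release=${1:?usage: fbsd_upgrade <target-release>}
    $RUN freebsd-update -r "$release" upgrade  # fetch the new release bits
    $RUN freebsd-update install                # install kernel + userland
    $RUN pkg upgrade -y                        # bring packages in sync
    $RUN shutdown -r now                       # the (optional) reboot
}

fbsd_upgrade "12.1-RELEASE"
```

Note that a real cross-release upgrade needs the `-r` flag on freebsd-update, and in practice `freebsd-update install` gets invoked again after the reboot to finish replacing userland.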
No, not really; just your packages and the dependencies of those.
Well, "BSD is dying" was already being said 15 years ago, BUT the project should be more responsive. An example:
You send in a patch with an updated port, and you wait 3 months until it's in the tree; in the meantime a new version is already out. I think that if the users do the work and send in patches (just with a changed version number and zero or nearly zero dependencies), some porters @ freebsd should act much quicker (some are super fast and helpful, but not that many), otherwise people will change OS or make their own fork of the package tree like I do.
Yes, absolutely, and there's no question that packaging is really hard work. I want to thank everyone who works on my most beloved system, ESPECIALLY the packagers.
Oof, that hurts.
I'm going to add a check right now to PhotoStructure to assert that people's libraries are not stored within the container, to prevent this from happening to my users.
But graphics support has to play a catch-up game with Linux, and in my experience a couple of drivers were missing or not working properly (like wireless stuff, ACPI, etc.).
The performance of the network stack is the best of any OS, and the interface with the OS in general is much better designed than Linux's, given that Linux is more of a bazaar effort, with things stitched on here and there to make some things work.
It's sad that the amount of man-hours on Linux makes it almost unavoidable, and I bet this will be more and more true with time, even compared with Windows and OS X.
Linux has the most mindshare, so most things are built for Linux and don't run (or don't run as well) on FreeBSD. As a result, more people use Linux resulting in increased mindshare, and the process repeats.
It makes me sad because I really like FreeBSD.
My general impression, which stems mostly from the description I once read on the FreeBSD downloads page and from elsewhere, is that if you want to help find bugs in upcoming releases you run the CURRENT branch, and otherwise you run RELEASE. But I expect that there are people running the beta releases as well who make meaningful bug reports that result in fixes before release versions are minted; otherwise I think they would not still be making beta releases.
I did run CURRENT for a while, but eventually found that I personally wasn't making any particularly helpful bug reports based on it anyway, so I went back to RELEASE.
For the VM server that I have been using for many years now for sending and receiving mail, as well as to host some websites of mine, running FreeBSD RELEASE has been working great for years.
Meanwhile, my desktop computer, which is currently running FreeBSD 12.1-RELEASE, is having issues where it freezes and requires a hard reboot. But I had this same kind of issue when I was running various Linux distros on this same desktop computer over the past years. I suspect that it is caused either by a hardware malfunction or possibly by the Nvidia driver for the graphics card, but I don't really have any way of finding out what the actual problem is. Not least because it will often run for a week of being powered on perfectly fine (both with FreeBSD and Linux distros) until suddenly it just completely freezes. Doesn't react to mouse, doesn't react to keyboard, doesn't respond to ping.
The fact that my desktop computer can run fine for about a week at a time makes it hard to figure out what the actual problem is. Say that I were to pull the graphics card out for example, and it ran fine for two weeks. How do I know at that point that I can really blame the graphics card and it wasn't just random chance that made it run for longer than usual this time around?
I think in order to isolate the problem I would have to build something like 8 different physical machines, all with the same hardware. Four of the machines I would initially run configured like my current desktop, and four of them I would pull the graphics card out of, and then have them all running 24x7 for a month, or until at least two of the machines with Nvidia graphics cards crashed, whichever came first. At that point I might conclusively say there is strong evidence of problems with the Nvidia graphics cards or with the drivers for said cards.
But if the machines without the Nvidia graphics cards were also crashing, then I'd need to swap out, for example, all of the motherboards, and run the experiment for another week. And so on.
On the flip side, if all of them ran fine for a month with the initial configuration, I would conclude that probably some of the hardware in my current machine is just broken, and that there is no bigger issue at play. But all of that would cost not just a lot of time but also a lot of money. And as it is, the next thing I can afford to buy is certainly not that amount of hardware; the next thing I am going to buy, when I can afford it, is rather a couple of new hard drives, as the spinning disks that hold most of my data are getting a few years old and I am getting quite nervous about them suddenly failing me.

I've had hard disks fail on me in the past. In particular, I used to have a machine with 4x 2TB hard drives in it to provide me with ZFS storage. Then I moved houses and accidentally dropped my computer from about 1 meter height, and the last time I tried to see if the drives were working, none of them showed up after boot or in the BIOS, neither in the original computer that I had dropped nor in the desktop computer that I have now. Well, anyway, it doesn't matter too much; none of the data was that important, really. I think those drives were making unusual noises too, but I don't quite remember, as it was a while ago now. But that kind of experience, along with having had some external hard drives fail me in the past, does feed into the worry that I have for my current hard drive as it gets a few years old.
But back to what I was saying about trying to debug the issue with my desktop. In theory I could disconnect basically everything but the RAM and the CPU from the motherboard (well, and the fans of course :P), boot a minimal kernel from a USB stick, have it run in RAM, and let it stay powered on for four weeks to see if it crashes. If it doesn't, I could connect the SSD and the HDD and run it for another four weeks, then run the regular RELEASE kernel for four weeks, and after that reconnect the graphics card and see what happens for the next four weeks. But all of those weeks, man. I just want my desktop to work; I don't want to spend weeks with it running on its own without being available for my use.
Anyway, that brings me to something else related to the testing that people do, which is that I find it kind of hard to know what hardware to pick for a FreeBSD desktop. There are things like https://www.freebsd.org/releases/12.1R/hardware.html and https://wiki.freebsd.org/Graphics and https://wiki.freebsd.org/Graphics/AMD-GPU-Matrix, but they all seem kind of out of date.
If I had the money to buy the hardware for it and to pay the electric bills for it I would gladly be running a bunch of machines in different configurations of current consumer hardware and making notes and filing bug reports and hardware compatibility reports to help other users of FreeBSD and Linux. Maybe someday. Until that day I'll just have to accept that every week or so I might have to hit that hard reset button on my desktop, inconvenient as it might be.
That really must be your hardware. I have FreeBSD 12.1-RELEASE and it's rock-stable on five different machines, from 10 years old to 1 year old: two laptops and three workstations.
And many, many VMs, and two really big physical servers.
- I wonder how many other people use Nvidia graphics cards on FreeBSD and on Linux.
- A few years ago I used to mine crypto on my graphics card. I made like $20 from it, so if the graphics card is physically in bad shape and this is what caused it then gg on my part for ruining a ~$390 graphics card in order to make about $20.
- The PSU is probably the oldest component in my computer. In fact I think the store I bought it from has been shut down for many years even; that's how old that PSU is. Could it be supplying my computer with bad quality power? Maybe.
- My RAM is of the non-ECC variant. Could cosmic rays be to blame for the crashes? ¯\_(ツ)_/¯ The BOFH would certainly have said that this was the reason, but even in the real world it could be, maybe?
Many; it's officially supported.
>A few years ago I used to mine crypto on my graphics card
Mining does not damage your card
>My RAM is of the non-ECC variant.
Run an extensive memory check like memtest86+.
>The PSU is probably the oldest component in my computer.
Here is where I would bet your problem is: if the RAM is OK, no CAP has a bubble, and your computer is dust-free, then it's 80% your PSU.
> Mining does not damage your card
Mining certainly does not damage the card; however, I'd check the cooling. Overheating might be a problem, and mining means that the card is working at 100% of its capabilities for long periods (instead of the usual/occasional gaming session).
Could you please tell me what that means? I googled "CAP bubble computer" and the results were unrelated.