Stable Releases or Rolling Releases?
bob at proulx.com
Tue May 3 16:55:11 MDT 2022
Phil Marsh wrote:
> My two cents. The thing I dislike is that I need to completely
> upgrade the OS to get the latest software (e.g. compilers etc...)
> and often libraries sometimes require newer software than the
> versions have.
I felt like this deserved a topic thread of its own.
Stable Releases or Rolling Releases? That has been a question that
has been debated a lot over the years. Most operating systems are
stable releases. That's what most people want. But some are rolling
releases. Because different people want different things.
Stable releases tend to be the most... Stable! They get some testing
of how the collection works together. For me the best release cycle
is about every two years. That's often enough for me to keep up with
major changes without being so long that I have forgotten how to do them.
For businesses though five or ten years is better. Often they deploy
something and actually *never* want to change it. The longer they can
avoid changing *anything* the better.
But desktop users tend to like the latest fluff and glitter. And yea
verily the web browsers actively force that upon us. Web sites and
web browsers conspire to force everyone to upgrade the browsers early
and often. This is an example of might makes right. And the web is
too mighty to be ignored. Forcing stable distributions to be rolling
for web browsers even if for nothing else.
Developers often are attracted to the latest fancy thing. And then
use it in their development. Forcing anyone else who wants to use
their program to also use their choice of libraries. It's a forced
DLL Hell model. Microsoft has always developed Word and Excel that
way. How often have people needed to upgrade those only because
someone else had a newer version and working with them forced an
upgrade? Pretty much every day anyone used it!
> For this, I rely on PPAs but I don't think this is ideal.
The Ubuntu PPA model is that the user assembles a loose collection of
parts that might work together and sees if they do work together. Then
they forget they have done this and change other parts, breaking the
parts they have forgotten about.
> Yea, I get that the Linux folks are reluctant to have brand-new
> software in the interests of stability.
One set of people wants new stuff, so they beg for new stuff
continuously even if it breaks things. Another set of people does not
want anything to break, so please stop changing things. These are
opposing viewpoints impossible to reconcile.
> I wonder if there's a way to have rolling updates instead of
> reinstalling a fresh OS which winds up taking me about 2 weeks to
> set up?
There are many examples of Rolling Release models. Debian Unstable
has always been rolling. So is Arch. Void uses a rolling release of
curated features for stability. It's best to keep upgrading every
week or so, so that one does not fall behind.
A rolling release model does not mean that things don't break. It
rather means that things might break at any time. Things get broken.
Things get fixed. I will say that this is the original development
model. And it was out of using the rolling release model that people
asked themselves, "Couldn't we use a stable branch where only known
good things went into it?" And created the stable release model.
And certainly for production servers I think the stable release model
makes a lot of sense. But for a desktop user I think they can use
anything they feel like using if it only affects them. Want
stability? Use a stable release. Want new stuff? Use a rolling
release. Do whatever brings you the most joy and happiness.
Don't FORCE that joy and happiness upon the rest of us!
> OS upgrades are painful for me because I'm an amateur and also
> because I end up needing to re-configure things like Apache and
> Owncloud servers.
Debian, Ubuntu, Mint, Devuan, and the others keep the current
configuration in place and ask you about conffiles that are modified
but need to be upgraded and possibly merged. I know Red Hat has a
similar but different strategy. (But since Red Hat generally does not
support upgrades across major releases the question is not relevant.)
Therefore upgrades generally work. And should work. If not then it
is a bug and should be reported.
If things change too much though then configuration files must be
manually handled. Apache from 2.2 to 2.4 was a quite different
configuration file syntax and could not be easily automatically
upgraded. It required a human to get involved. That's not the
operating system's fault. It was Apache's. What's an OS to do?
My strategy is to upgrade non-interactively, telling the package
manager to give me the new configuration files and to set the old
configuration files off to the side as *.dpkg-old. Afterward I run
find /etc -name '*.dpkg-*' (and '*.ucf-*') and walk through each of
them, manually merging in any configuration customization I have made
locally. (And having done that a time or two I automate that process
using scripts. I do very little manual edit hacking. But I have many
systems. If you have only one or two then manual merging is easier.)
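That sweep over leftover conffiles can be sketched as a small shell
function. The non-interactive upgrade invocation in the comment and
the function name are my own illustration, not any distribution's
official tooling:

```shell
# A possible non-interactive upgrade, keeping the new conffiles and
# leaving the old ones beside them as *.dpkg-old:
#   apt-get -o Dpkg::Options::=--force-confnew dist-upgrade
#
# sweep_conffiles DIR: list leftover dpkg/ucf conffile copies under
# DIR (default /etc) so each can be diffed against its live
# counterpart and the local customizations merged back by hand.
sweep_conffiles() {
    find "${1:-/etc}" \( -name '*.dpkg-*' -o -name '*.ucf-*' \) | sort
}

# Example review pass, diffing each saved copy against the live file:
#   sweep_conffiles /etc | while read -r f; do
#       diff -u "$f" "${f%.dpkg-*}"
#   done
```

From there the diffs can be merged by hand, or the whole loop wrapped
in a script once the pattern is familiar.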
I have systems I have continued for years and years through many
upgrades. It is a continuous rolling release model of stable
releases.
Having said that though, upgrading in place can mean not taking advantage of
other improvements. If one is running an ext3 file system then an
upgrade will stay ext3 but a fresh install will be ext4 at least or
possibly switched to xfs or other. It's possible to squeeze out these
remaining features and upgrade them in place too. But that is more of
an advanced topic. Not easy for casual users. So sometimes it is nice
to just do a fresh install and get all of the latest goodness.
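As one concrete taste of that advanced in-place path, the commonly
documented ext3-to-ext4 conversion looks roughly like this. The
device name is a placeholder, and RUN defaults to a dry run that only
prints the commands. This is a sketch under those assumptions, not a
recipe; it must be done on an unmounted filesystem with a backup in
hand:

```shell
# Sketch of an in-place ext3 -> ext4 conversion. DEV is a
# placeholder device; RUN=echo makes this a dry run by default.
DEV=${DEV:-/dev/sdXN}
RUN=${RUN:-echo}

# Enable the ext4 on-disk features on the existing filesystem.
$RUN tune2fs -O extents,uninit_bg,dir_index "$DEV"
# A full fsck is required afterward (-f force, -D optimize dirs).
$RUN e2fsck -fD "$DEV"
# Remember to change the fstab entry's type from ext3 to ext4.
```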
> I don't think that just doing the standard Ubuntu upgrade will work
> right here either and I've always installed fresh to upgrade.
I hate to make sweeping general statements but I am not aware of any
huge breaking changes happening right now. Not like Apache 2.2 to 2.4
and similar. I think upgrades should work. And if they don't then I
think that is a bug to be fixed.
Also I rely upon the core part of the OS to upgrade and work. There
will be a kernel running and I will get a command line shell. And
from there if worst comes to worst I can purge something like apache
and install it again and then reconfigure it. Again I think Apache is
okay right now and besides I am not using it anywhere. I am using
Nginx everywhere that I care about web servers.
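The purge-and-reinstall fallback can be sketched like this. The
package name apache2 and the APT variable are my own illustration,
and it defaults to a dry run that only prints what would be done:

```shell
# Last-resort recovery for one broken package: purge it (which also
# removes its conffiles), reinstall it, then reconfigure from backup.
# APT defaults to a dry run; set APT=apt-get (as root) to do it.
APT=${APT:-"echo apt-get"}

$APT purge apache2      # remove the package and its conffiles
$APT install apache2    # reinstall with pristine default configuration
# ...then restore or re-create the local configuration and restart.
```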
If the core part of the OS fails to upgrade that is bad. Especially
on servers. But on a laptop let's say that it does. Then I have
physical access. I can boot the installer ISO and use it as a rescue
mode to rescue the system. The current installers all do a very good
job of being rescue media. They will automatically start up RAID.
They will activate LVM. They will mount up disks. They will
reinstall GRUB. On the times I have needed to rescue systems the
installer as a rescue mode has worked very well.
If one is running a *remote* server then upgrades are more stressful.
If it is a virtual server at a cloud host then I would first make a
snapshot of everything. Then I can return to the previous snapshot if
things go sideways. I would also just spin up a second VM with the
latest install and then configure it off to the side. When
everything is working then I would swap it into place. If that goes
sideways then I can always swap back to the previous server still
running. Then debug and fix the problem. Then try again.
Is there an answer? No. There are only choices.