I’d like to know how other people are handling patching.
Previously I used Pulp 2 to mirror several repositories locally. The nice thing about this was that it stored each package only once, while the package could be referenced by several repos. So, with no additional storage overhead, I had a script that ran monthly, created a new empty repo $YYYY-$MM-$REPO, and copied the entire contents of the source repo into that monthly copy. This meant I could apply the exact same package set in dev, then QA, then prod.
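For anyone curious, a minimal sketch of that monthly snapshot job, assuming Pulp 2's pulp-admin CLI. The repo IDs (baseos, appstream) are placeholders, and this version only echoes the commands (a dry run) so it can be tried safely:

```shell
#!/bin/sh
# Hypothetical reconstruction of the monthly snapshot script described above.
# Dry run: each pulp-admin command is echoed; drop the echo to actually run it.
STAMP=$(date +%Y-%m)                 # e.g. 2024-05
for REPO in baseos appstream; do     # source repo IDs are made up
    SNAP="${STAMP}-${REPO}"
    echo pulp-admin rpm repo create --repo-id "$SNAP"
    echo pulp-admin rpm repo copy rpm --from-repo-id "$REPO" --to-repo-id "$SNAP"
    echo pulp-admin rpm repo publish run --repo-id "$SNAP"
done
```

Because Pulp 2 deduplicated content, the copy step added essentially no storage, only references.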
Here we are years later and Pulp 2 is end-of-life. I'm looking into Pulp 3, but it has been totally rewritten (essentially a new product), which is leading me to re-evaluate my options. As the initial question asks, I'd like to know how other people are patching today. Are you constantly chasing the latest packages from upstream or a local mirror, or are you using a tool like Pulp to snapshot/freeze your repos so you can apply packages methodically across your environment? If so, what are you using to achieve this?
The simplest view is that AlmaLinux has repositories and releases new content when it has built packages (after Red Hat has released the sources of RHEL content). The user is left to run dnf up regularly; packages update only when the repos have new versions of them.
On point releases the older versions are moved from the repos to the vault. For comparison, Rocky Linux has a build system that keeps only the latest version of each package in the repo (and I don't know whether the older versions are stored anywhere).
Mirrors of the AlmaLinux repos are a convenience that spreads network load (particularly off the AlmaLinux servers). One can trivially run a local mirror.
What you had was a local mirror that also kept older versions of itself. That way you have both the latest versions and snapshots from earlier dates. That achieves reproducibility: you can repeat an install with the exact same set of package versions. The downside is that the system will lack all (security and bug) patches released since the creation of the snapshot.
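For illustration, a client could then be pointed at one dated snapshot with a repo file along these lines (the hostname, snapshot date, and path layout are all made up; only the gpgkey path is the stock AlmaLinux 9 one):

```ini
# /etc/yum.repos.d/baseos-snapshot.repo  (hypothetical local mirror layout)
[baseos-snapshot]
name=AlmaLinux BaseOS (snapshot 2024-05)
baseurl=http://mirror.example.com/snapshots/2024-05/almalinux/9/BaseOS/x86_64/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-AlmaLinux-9
```

Moving every host from one snapshot to the next is then just a baseurl change.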
The question is then: which is more important, absolutely homogeneous systems or timely installation of security patches?
One could argue that if one deploys all available updates to all systems, then the systems remain homogeneous (though not identical to what they were initially installed with).
Sidenote: the Alma repos are in a directory tree that has two branches: os and kickstart. The os branches are used by the default config. At the release of a point update the two branches are identical; updates within the point update's life cycle are added to os only. Hence one can (re)install "frozen" versions from kickstart (for as long as the point update exists, roughly six months).
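Concretely, the two branches differ only in the last path component; for example (using 9.4 purely as an illustrative point release):

```
# Tracks updates within the point release (default config)
https://repo.almalinux.org/almalinux/9.4/BaseOS/x86_64/os/
# Frozen at the point release; usable for reproducible (re)installs
https://repo.almalinux.org/almalinux/9.4/BaseOS/x86_64/kickstart/
```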
We really have two issues: (1) what is in the repositories, and (2) what to install from them.
For the latter, the use of config management systems (like Ansible, Chef, or Puppet) has increased. They can certainly dictate which version to have installed (within the limits of what the repos contain).
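As a sketch of the "dictate which version" part, Ansible's ansible.builtin.dnf module accepts an exact version in the package name (the package and version string below are hypothetical, and the install fails cleanly if the repos no longer carry that build):

```yaml
# Pin an exact build rather than "latest"; version string is made up.
- name: Install a specific httpd build
  ansible.builtin.dnf:
    name: httpd-2.4.57-8.el9
    state: present
```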
Personally, I install the available updates rather than pin anything specific.
I'm using reposync, which is in the yum-utils package. I have three sets of repos: synced, which matches upstream (as of the most recent sync); test, which the test servers are pointed at; and production, which the production servers are pointed at.
The contents of synced are updated daily from upstream. Once a week the contents of test are copied to production, immediately followed by the contents of synced being copied to test. The copying is done using rsync and hardlinks to minimise disk usage. It's all handled by scripts run from cron jobs.
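The hardlink trick is easy to demonstrate with plain cp -al (rsync's --link-dest option achieves the same effect during the copy). This toy run uses a throwaway directory, not real repo paths:

```shell
# Promoting synced -> test as a hardlink copy: unchanged files share
# inodes, so the "copy" costs almost no extra disk space.
tmp=$(mktemp -d)
mkdir -p "$tmp/synced"
echo "rpm payload" > "$tmp/synced/foo-1.0.rpm"
cp -al "$tmp/synced" "$tmp/test"              # hardlink copy of the tree
links=$(stat -c %h "$tmp/test/foo-1.0.rpm")   # link count is now 2
```

When a sync later replaces a package file in synced (rather than editing it in place), the link is broken and test keeps the old version, which is why these weekly snapshots stay cheap.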
A cron job on each server tells it to install updates and reboot once a week, and any errors from dnf are emailed to us. It's all set up on the principle that servers get updated unless we intervene, and we generally only think about it if we get emailed dnf errors or something stops working.
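That per-server job can be as small as a cron.d entry along these lines (the schedule, recipient, and reboot delay are placeholders; needs-restarting comes from the dnf-plugins-core/yum-utils tooling and exits non-zero with -r when a reboot is required):

```
# /etc/cron.d/weekly-updates  (hypothetical schedule)
MAILTO=admins@example.com
# Sunday 03:00: apply all updates, then reboot only if one is needed.
0 3 * * 0  root  dnf -y upgrade && { dnf needs-restarting -r >/dev/null || shutdown -r +5; }
```

Cron's MAILTO handles the "errors get emailed to us" part: anything dnf writes on a failure goes to that address.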
One day I might get time to look at Pulp. I like the idea of only the packages we actually use getting mirrored.
The latter, with Pulp 3, but with Katello as the front end. We do cluster computing with lots of out-of-tree kernel modules and other such combinatoric version matching to do before we want to be applying updates everywhere.