Support for Relax and Recover (ReaR) or other recovery solutions

Hi

Has anyone successfully used ReaR on AlmaLinux and recovered a system with it?

I am looking for a solution to make a rescue + backup image that can be used to recover a failed system.

ReaR successfully makes a bootable USB stick and a backup.
However, when recovering, the application is not able to restore the bootloader.
The recovered system works only until the first reboot.
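For reference, my setup is roughly the standard USB workflow; a minimal /etc/rear/local.conf looks something like this (the device name /dev/sdb is an example, and REAR-000 is the label `rear format` writes by default):

    # /etc/rear/local.conf -- minimal USB rescue + backup setup (sketch)
    OUTPUT=USB                                      # write the rescue system to a USB stick
    BACKUP=NETFS                                    # file-level backup with tar
    BACKUP_URL="usb:///dev/disk/by-label/REAR-000"  # store the backup on the same stick

    # Prepare the stick, then create the rescue image plus backup:
    #   rear format /dev/sdb
    #   rear -v mkbackup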

Does anyone have experience with ReaR on Alma?
Or with other solutions that can make a rescue image and backup without interrupting the server, or without having to boot a special image to clone the disk?

There are a few threads over on the Rocky forum about backups if you can't find any here.

I finally settled on a combination of cron jobs and custom bash scripts: create LVM snapshots (a non-standard disk/space config is needed to leave room for them), run ReaR to create a bootable USB image (too complicated to use for anything else, and even there not adequately documented; it is anything BUT "relax"), and then rdiff-backup for reverse-incremental backups of the snapshots. It all works, but it's probably not how I'd do it again. (Of course DBs etc. are another issue.)
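A stripped-down sketch of the snapshot-plus-rdiff-backup step, for illustration (the VG/LV names, snapshot size, and paths are made up, not my actual script):

    #!/bin/bash
    # Snapshot the root LV, back it up with rdiff-backup, then drop the snapshot.
    set -e
    lvcreate --snapshot --size 5G --name rootsnap /dev/vg0/root
    mkdir -p /mnt/rootsnap
    mount -o ro /dev/vg0/rootsnap /mnt/rootsnap
    rdiff-backup /mnt/rootsnap /backup/root   # reverse-incremental repository
    umount /mnt/rootsnap
    lvremove -f /dev/vg0/rootsnap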

There have been changes in the last two or three years in qcow2 image management: I believe you can now do incremental backups of running VMs, so that might be worth looking at (it's on my todo list).
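If it's the libvirt route, the workflow looks roughly like this (untested by me; the domain name vm1 is an example, and it needs libvirt >= 6.0 with qcow2 disks):

    virsh backup-begin vm1       # start a full push backup while vm1 keeps running
    virsh domjobinfo vm1         # watch the backup job's progress
    virsh checkpoint-list vm1    # checkpoints that later incremental backups build on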

Note that rdiff-backup is about to alter its API (after 20+ yrs!), which'll break my scripts (***^%!)
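For anyone else with scripts to fix, the change (as I understand it) is a move to subcommands in rdiff-backup 2.2; the paths here are just examples:

    rdiff-backup /srv/data /backup/data          # old two-argument form my scripts use
    rdiff-backup backup /srv/data /backup/data   # new action-based form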

Also, some how-tos online are wild-goose chases / time-wasters; if a guide doesn't document a full recovery, it probably won't be any use. For example, there are serious issues with Duplicity, as it won't write to a non-empty destination.

Timeshift is something else worth looking at (available in EPEL 7, 8, and 9).
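On Alma it should be a straightforward install from EPEL, something like:

    dnf install epel-release
    dnf install timeshift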

EDIT: In recovery I use the ReaR boot image to reinstate the system disk layout/LVM etc., then switch to manual mode (in ReaR) to reinstate the files from the rdiff-backup store using rsync, and then let ReaR complete. I seem to recall it has steps it doesn't perform until after the file-replacement stage, and you have to follow this sequence to get a working system back. My recovery instructions document is about 70 lines for the first disk alone, covering things like avoiding VG naming conflicts and unwanted auto-mounts; I only attach the backup disk once I've reached manual mode. (I know I tried ReaR's disk remapping at one point and it didn't work.) I also comment out or remove any /etc/fstab or /etc/crypttab entries which may block reboots while making amendments, i.e. where new/replacement disks aren't present or have different UUIDs.
Then there's SELinux: add a '.autorelabel' file to the root of each volume and it'll relabel on reboot.
And then I have to reboot with the AL/RL installer, enter recovery mode, choose "continue", enter the passphrase for the LUKS devices when prompted, and reboot. I don't know why, but otherwise it fails to access sda2.
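Condensed, the sequence looks something like this (the backup paths are illustrative; /mnt/local is where ReaR mounts the recreated system):

    rear recover                           # recreate partitions/LVM/filesystems,
                                           # then pause in ReaR's manual mode
    # ...attach the backup disk only at this point, then pull the files back:
    rsync -aAXH /backup/root/ /mnt/local/  # restore from the rdiff-backup mirror
    # (or: rdiff-backup -r now /backup/root /mnt/local)
    touch /mnt/local/.autorelabel          # force the SELinux relabel on first boot
    # ...then let ReaR finish its post-restore steps and reboot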

That said, I don't know if there is a better way to get a bootable image which reinstates your disk layout, passwords, etc.


When you say "you can now do incremental backups on running VMs", do you mean that the host machine can back up the VM? The only time I've tried, the host detected a change and backed up the whole QCOW each day. I do the backup on the VM and send the resulting data back to the host.

That's what it looks like (I just skimmed it / haven't tried it): "Create online, thin provisioned full and incremental or differential backups of your kvm/qemu virtual machines."
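That description reads like virtnbdbackup's, so, assuming that's the tool in question, usage would be roughly (domain name and paths are examples):

    virtnbdbackup -d vm1 -l full -o /backup/vm1    # initial full backup, VM stays online
    virtnbdbackup -d vm1 -l inc  -o /backup/vm1    # subsequent incrementals, same directory
    virtnbdrestore -i /backup/vm1 -o /restore/vm1  # rebuild the disk images from the chain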

Hi @bob

Thank you for sharing your experience.
Bare-metal recovery is clearly still a bit difficult.

At the moment I am hoping the ReaR team can give me support and help resolve the problem.

But regarding the reply to MarinR: can this be used to recover a bare-metal installation?

I don't use ReaR, and in reply to @rvk: my system does enable bare-metal rebuilds. Indeed, when my old server started throwing disks it got tested a couple of times!

The master script handles mounting the external disk and reading the configuration file. As it works through the file it calls slave scripts on each node, which perform the actual backup (dump, xfsdump, or for btrfs a manipulation of subvolumes) and send the results back to the host for storage. If all else fails, tar can be used, with sqlite3 providing support for per-disk backup dates. There's a prototype VMS command procedure running the backup for my VMS VM, but I've been having issues with VSI's implementation of OpenSSH. I run level 0 dumps on the second Wednesday of the month, level 1 on the other Wednesdays, and level 2 on the other days of the week, all automatically (a sketch of that level-selection logic follows the config example below). When I spin up a new VM the config file simply needs an entry like:

Fedora:
	Announce
	Define remote
	Backup /boot
	Backup / as root
	Backup /home

which handles two BTRFS filesystems and one ext4.
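The level-selection logic mentioned above boils down to something like this (a sketch, not my actual script; the device and paths are examples):

    #!/bin/bash
    # Pick the dump level from the calendar:
    # level 0 on the second Wednesday, level 1 on other Wednesdays, level 2 otherwise.
    dow=$(date +%a)    # day of week, e.g. Wed
    dom=$(date +%-d)   # day of month, 1..31
    if [ "$dow" = "Wed" ]; then
        if [ "$dom" -ge 8 ] && [ "$dom" -le 14 ]; then
            level=0    # the second Wednesday always falls on day 8-14
        else
            level=1
        fi
    else
        level=2
    fi
    dump -"$level" -u -f /backup/root.dump /dev/vg0/root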

Getting back on topic: if QCOW virtual volumes can be backed up incrementally rather than all-or-nothing, it would enable me to access them directly and avoid having to have the VM running. Hence my interest in @bob's mention of the possibility.