From time to time I need to back up data, mainly engineering simulation results, to USB 3.2 sticks. Transferring e.g. a zip archive of up to 1 GB, or a few hundred files, takes seconds (simulations produce numerous files per timestep, and there are many timesteps). But as soon as a single file is larger than about 1 GB, or the number of files exceeds about 1000, the estimated transfer time jumps from seconds to an hour or more, and the progress indicator almost stalls because the transfer speed drops. I am not talking about large amounts of data, just 2-6 GB at a time.
Of course I can copy everything part by part, manually staying under those limits, but it’s annoying to do that just to get a few GB (yesterday 6 GB) onto an almost empty 64 GB stick.
Any idea about how to fix this problem?
Stop using USB sticks; they generally have very cheap flash storage, and even the USB 3 ones can’t saturate a USB 3 interface. You’d be better off with an HDD/SSD in a USB enclosure.
It’s nothing to do with AlmaLinux; I have the same problem on Debian and Windows.
I’ve seen this before, and it’s always a function of how slow the USB stick actually is. Linux caches the writes up to a point and returns immediately; what you’re seeing is the moment you exceed the cache size, when the estimate drops to the stick’s actual transfer speed.
If you have access to Windows, you can run an independent benchmark outside of Linux to confirm the true write speed of your USB stick.
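A rough read benchmark can also be run from Linux directly. A sketch, assuming the stick shows up as /dev/sdb (it may not; check with lsblk first):

```shell
# List block devices to find the stick (the size and mount point are the clues).
lsblk
# Read 1 GiB straight from the device; iflag=direct bypasses the page cache,
# so the reported rate is the stick's, not the kernel's.
sudo dd if=/dev/sdb of=/dev/null bs=4M count=256 iflag=direct status=progress
```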
I formatted the stick, which took 1.5 hours for the 64 GB to complete. After that, read speed is 20 MB/s and write speed is 9.5 MB/s, but writing files exceeding 2 GB is not reliable and tends to stall at about 4.5 GB.
I’ll get a case for an external SSD for my backups.
Sounds like you’ve got a fake drive there; formatting should take seconds unless you encrypted it. Also, if you’re seeing problems around 4.5 GB, are you sure you formatted it with a decent filesystem and not vfat? (vfat/FAT32 caps individual files at 4 GiB.)
In any case, you don’t want to trust backups to something flaky.
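To check whether the stick is a fake-capacity drive, the f3 tools (f3write/f3read) are the proper way; a minimal hand-rolled sketch of the same idea, assuming the stick is mounted at /mnt/usb (adjust the path):

```shell
# Write a few files of known random content to the stick...
for i in 1 2 3 4; do
  dd if=/dev/urandom of=/mnt/usb/probe$i bs=1M count=512 status=none
done
# ...checksum what was written...
( cd /mnt/usb && sha256sum probe1 probe2 probe3 probe4 ) > /tmp/probe.sums
# ...flush and drop caches so the re-read comes from the device, not RAM...
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
# ...and verify. Any mismatch means the stick is lying about its capacity
# or silently corrupting data.
( cd /mnt/usb && sha256sum -c /tmp/probe.sums )
rm /mnt/usb/probe[1-4] /tmp/probe.sums
```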
Use dd without buffering to see real speeds:

dd if=<src> of=<usb> bs=4M status=progress oflag=direct conv=fsync

(oflag=direct bypasses the page cache, and conv=fsync makes dd flush to the device before exiting, so the reported time and rate are honest.)
If you copy with buffering instead, it will appear to hang on large files once everything is in the cache but still being written out.
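You can see the buffering effect for yourself with a small sketch (writing a scratch file under /tmp purely for illustration; against a slow stick, the sync step is where the minutes go):

```shell
# dd returns as soon as the data sits in the page cache...
dd if=/dev/zero of=/tmp/cache-demo bs=1M count=64 status=none
# ...while sync blocks until the kernel has actually written it out.
# On a slow USB target this is the step that takes "forever".
time sync
rm /tmp/cache-demo
```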