High load averages after CentOS 7.9 to AlmaLinux 8 upgrade

Hi,

I have recently had my WHM server on CentOS 7.9 upgraded to AlmaLinux 8, and we are now seeing abnormally high load averages, with processes continuously running and taking up a huge amount of CPU, as below:

/opt/cpanel/ea-php81/root/usr/bin/php-cgi

Before we did the upgrade, the load was rarely above 1 and never over 2.

As soon as the upgrade was done, the load now sits at a minimum of 2 and fluctuates up to 5 and 6 when these processes are running.

The company that did the upgrade can’t find anything causing these issues, but as I run the server daily myself I know these figures are not normal.

Has anyone experienced this, or could you point me (or advise the company) in the direction of where to look to find what is causing it?

Thanks
(My level of knowledge extends to running the front end of the server, not via SSH.)

What’s the output of the following commands:

  1. top -c
  2. iotop
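
If it’s easier than copying from the interactive screens, both can be run in batch mode so the output can be pasted straight into a reply. Something along these lines should work (run as root; the exact flags are only a suggestion):

  top -c -b -n 1 | head -n 25
  iotop -o -b -n 1

Here -b is batch mode, -n 1 takes a single sample, and iotop’s -o only lists processes that are actually doing I/O.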

Hi, thanks for responding.

Here is a copy of the recent top -c output I have just run.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
194317 netzoli+ 20 0 711672 176616 112348 R 98.0 1.1 0:03.12 /opt/cpanel/ea-php81/root/usr/b+
194318 leeksons 20 0 710336 153652 90444 R 77.5 1.0 0:02.34 /opt/cpanel/ea-php81/root/usr/b+
194319 netzoli+ 20 0 671064 103360 77872 R 54.3 0.6 0:01.64 /opt/cpanel/ea-php81/root/usr/b+
194320 leeksons 20 0 669392 76260 52764 R 29.8 0.5 0:00.90 /opt/cpanel/ea-php81/root/usr/b+
1496 mysql 20 0 4860532 992204 16080 S 8.3 6.2 34:55.18 /usr/sbin/mariadbd
194321 netzoli+ 20 0 655356 39776 30076 S 5.0 0.2 0:00.15 /opt/cpanel/ea-php81/root/usr/b+
194286 netzoli+ 20 0 756260 136748 99980 S 1.3 0.8 0:02.61 /opt/cpanel/ea-php81/root/usr/b+
194204 leeksons 20 0 759112 125488 86116 S 1.0 0.8 0:02.29 /opt/cpanel/ea-php81/root/usr/b+
1613 mongod 20 0 1643016 83260 15248 S 0.7 0.5 2:46.58 /usr/local/jetapps/usr/bin/mong+
13 root 20 0 0 0 0 S 0.3 0.0 4:26.96 [ksoftirqd/0]

This is from when I ran iotop, but the load was low at that moment.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
194289 root 20 0 262476 4724 3684 R 0.7 0.0 0:00.81 top c
1 root 20 0 93764 11328 7784 S 0.3 0.1 1:02.10 /usr/lib/systemd/systemd --swit+
1410 root 20 0 759784 60852 9680 S 0.3 0.4 0:57.78 /opt/imunify360/venv/bin/python+
193168 root 20 0 0 0 0 I 0.3 0.0 0:00.42 [kworker/0:2-events]
2 root 20 0 0 0 0 S 0.0 0.0 0:00.07 [kthreadd]
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 [rcu_gp]
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 [rcu_par_gp]
5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 [slub_flushwq]
7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 [kworker/0:0H-events_highpri]
10 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 [mm_percpu_wq]

You didn’t include the following line from the top output:
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
It generally gives a hint as to what is driving the load average.

  • If wa is high, it means disk usage is high and the CPU is waiting on I/O.
  • If si is high, it means software interrupts are taking up CPU.

and so on
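
If you want to watch those fields over time without sitting in top, something like the following gives a log you can paste here (vmstat’s us/sy/id/wa columns report the same values):

  top -b -n 1 | grep '%Cpu'
  vmstat 5 5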

That said, from the current top output it does look like user processes are what is driving your load average up.
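
To see which accounts those php-cgi processes belong to and how much CPU each is using, a quick snapshot (nothing cPanel-specific, just ps) would be something like:

  ps -eo user,pcpu,etime,cmd --sort=-pcpu | grep '[p]hp-cgi' | head -n 10

If the same one or two accounts are always at the top, that narrows down which sites to investigate.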

Hi, thanks for coming back to me.

The processes seem to be related to traffic, but before the upgrade the server handled much more traffic at a lower load figure.

If I access several websites at the same time, the load massively spikes.

It’s as if the server cannot handle as much traffic without high loads since it’s been upgraded.

I’ll keep an eye on it and see if it settles over the next few days.

Thanks