[Progress Communities] [Progress OpenEdge ABL] Forum Post: RE: Network Database Connection Pause using VMWare - vmxnet3 vs. e1000


Paul Koufalis

Guest
Hi Rob,

This sounds suspiciously like a KB entry that was distributed via the PANS late last week. I agree with VMware/PSC that you should not be using E1000 instead of vmxnet3. I suspect that if you do further benchmarking you'll see that the E1000 is significantly slower than the vmxnet3; it certainly has been in our testing.

Your comments regarding 1 socket vs. multiple sockets suggest that this is a NUMA issue within the hypervisor. Do simple ping tests show similar results? I.e., if you run ping for an hour or two on the 32-core VM, do you see any variation in the ping times?

Does the 4-socket/32-core configuration represent the entire capacity of the physical server? Did you configure the hypervisor with vCPU = # of actual processors, or # of hyperthreads? I often see hypervisor configurations where the number of vCPUs = # of hyperthreads = 2 x # of cores. Perhaps the better question is: how many physical CPUs and cores, how many NUMA nodes, and how many vCPUs configured in ESXi?

What I/O scheduler is being used? Example:

$ cat /sys/block/sda/queue/scheduler
noop [deadline] cfq

What is the output of numactl -H on the 32-core VM? numactl also allows you to bind a process and its memory to a single core or set of cores. I would be curious to see if the results are different if you tie the broker, its memory and the single _mprosrv -m1 to a single processor.

How many _mprosrv -m1 processes are running and being used by your tests? I would think just one, but I want to be certain. Are you checking whether the broker and (presumably single) _mprosrv -m1 process are bouncing around cores? The example below shows that my _mprosrv processes are bound to either CPU 0 or 1 (the PSR column).
$ for i in $(pgrep _mprosrv); do ps -mo pid,tid,fname,user,psr -p $i; done
  PID   TID COMMAND  USER PSR
10244     - _mprosrv root   -
    - 10244 -        root   1
  PID   TID COMMAND  USER PSR
10274     - _mprosrv root   -
    - 10274 -        root   0
  PID   TID COMMAND  USER PSR
10278     - _mprosrv root   -
    - 10278 -        root   1
  PID   TID COMMAND  USER PSR
10282     - _mprosrv root   -
    - 10282 -        root   1
  PID   TID COMMAND  USER PSR
10285     - _mprosrv root   -
    - 10285 -        root   0
  PID   TID COMMAND  USER PSR
10313     - _mprosrv root   -
    - 10313 -        root   1
  PID   TID COMMAND  USER PSR
10336     - _mprosrv root   -
    - 10336 -        root   0
  PID   TID COMMAND  USER PSR
10366     - _mprosrv root   -
    - 10366 -        root   1
  PID   TID COMMAND  USER PSR
10369     - _mprosrv root   -
    - 10369 -        root   1
  PID   TID COMMAND  USER PSR
10397     - _mprosrv root   -
    - 10397 -        root   1
  PID   TID COMMAND  USER PSR
10424     - _mprosrv root   -
    - 10424 -        root   0
  PID   TID COMMAND  USER PSR
10937     - _mprosrv root   -
    - 10937 -        root   1
  PID   TID COMMAND  USER PSR
10994     - _mprosrv root   -
    - 10994 -        root   1
  PID   TID COMMAND  USER PSR
16326     - _mprosrv root   -
    - 16326 -        root   1
  PID   TID COMMAND  USER PSR
34287     - _mprosrv root   -
    - 34287 -        root   1
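On the ping question above, here is a minimal sketch of how you could summarize RTT variation from a long-running ping. The host address and the here-doc sample lines are made up for illustration; on a live system you would pipe the real ping output into the awk instead.

```shell
# Sketch: quantify RTT variation in ping output with awk.
# On a live system you would run something like:
#   ping -i 1 <32-core-VM> | awk -F'time=' '...'
# The here-doc below is fabricated sample data, not real measurements.
rtt_summary=$(awk -F'time=' '/time=/ {
    split($2, a, " "); rtt = a[1] + 0          # RTT in ms
    if (n == 0 || rtt < min) min = rtt
    if (rtt > max) max = rtt
    sum += rtt; n++
}
END { printf "samples=%d min=%.3f max=%.3f avg=%.3f spread=%.3f",
      n, min, max, sum / n, max - min }' <<'EOF'
64 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=0.211 ms
64 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=0.198 ms
64 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=4.876 ms
EOF
)
echo "$rtt_summary"
# → samples=3 min=0.198 max=4.876 avg=1.762 spread=4.678
```

A large spread (max far above min) over an hour or two would be consistent with the intermittent pauses you are describing.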

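To check whether processes are bouncing around cores over time, you could tally the PSR column from repeated ps samples. A sketch, assuming output shaped like the example above (the inlined sample here is fabricated for illustration, mimicking `for i in $(pgrep _mprosrv); do ps -mo pid,tid,psr -p $i; done`):

```shell
# Sketch: count how many threads were last seen on each CPU (PSR column).
# Header lines ($1 == "PID") and process lines (PSR == "-") are skipped;
# only the per-thread lines carry a real PSR value.
psr_tally=$(awk '$1 != "PID" && $3 != "-" { count[$3]++ }
    END { for (c in count) print "cpu" c "=" count[c] }' <<'EOF' | sort
  PID   TID PSR
10244     -   -
    - 10244   1
  PID   TID PSR
10274     -   -
    - 10274   0
  PID   TID PSR
10278     -   -
    - 10278   1
EOF
)
echo "$psr_tally"
# prints:
# cpu0=1
# cpu1=2
```

Run the sampling loop every few seconds: if the same PID keeps showing up on different PSR values, the scheduler is migrating it across cores (and possibly across NUMA nodes).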