1. The first level of pinning is NUMA pinning when starting the guest, e.g.:
   $ numactl -c 1 -m 1 qemu-kvm -smp 2 -m 4G  (pinning guest memory and cpus to numa node 1)
2. For a single-instance test, try a one-to-one mapping of vcpu to physical core:
   $ taskset -p 40 $vcpu1  (pinning the vcpu1 thread to physical cpu #6)
   $ taskset -p 80 $vcpu2  (pinning the vcpu2 thread to physical cpu #7)
3. To pin vhost on the host, get the vhost PID and then use taskset to pin it on the same socket:
   $ taskset -p 20 $vhost  (pinning the vhost thread to physical cpu #5)
4. In the guest, pin the IRQ to one core and netperf to another:
   1) make sure irqbalance is off - $ service irqbalance stop
   2) find the interrupts - $ cat /proc/interrupts
   3) find the affinity mask for the interrupt(s) - $ cat /proc/irq/<irq>/smp_affinity
   4) change the value to match the proper core; make sure the value is a cpu mask.
   For the host-to-guest scenario:
   $ echo 01 > /proc/irq/$virtio0-input/smp_affinity
   $ echo 01 > /proc/irq/$virtio0-output/smp_affinity
5. Make sure to run netperf on different cores on the same numa node as the guest:
   $ numactl -m 1 netperf -T 4  (pinning netperf to physical cpu #4)
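The hex values given to taskset and written to smp_affinity are CPU bit masks: bit N set means CPU N. As a minimal sketch (the helper name `cpu_to_mask` is mine, not from the original post), the masks used above can be computed like this:

```shell
# cpu_to_mask: convert a zero-based CPU number into the hex affinity mask
# expected by `taskset -p` and /proc/irq/*/smp_affinity (bit N = CPU N).
cpu_to_mask() {
  printf '%x\n' $((1 << $1))
}

cpu_to_mask 4   # -> 10  (netperf on cpu #4)
cpu_to_mask 5   # -> 20  (vhost on cpu #5)
cpu_to_mask 6   # -> 40  (vcpu1 on cpu #6)
cpu_to_mask 7   # -> 80  (vcpu2 on cpu #7)
```

This is why step 3 passes 20 for cpu #5 and step 2 passes 40 and 80 for cpus #6 and #7.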