Discussion:
i210 interrupts always on CPU 0
Fredrick Klassen
2015-10-05 15:24:27 UTC
Greetings,

I have an appliance with 2 CPU cores and 3 network interfaces. In my case,
eth0/eth1 are on an i210 and eth2 is on an i211. The first two interfaces
(eth0/eth1) are acting as a bridge, and eth2 is a normal standalone interface.
I am using Linux kernel 3.18.20 and igb 5.3.3.2.

I am hunting down performance delays that appear when traffic from eth2 is
passing through the eth0/eth1 bridge. I notice that all interrupts from all
adapters land on CPU 0. I suspect I could get a performance boost if I
distributed the interrupts across both CPU cores (similar to my ixgbe-based
appliances). Is this possible through some configuration or minor driver
changes?

Here is a summary of my interrupts and a sample of my CPU affinity:

----------------- cut ----------------
# cat /proc/interrupts
CPU0 CPU1
0: 50 0 IO-APIC-edge timer
1: 3 0 IO-APIC-edge i8042
4: 2205 0 IO-APIC-edge serial
8: 13 0 IO-APIC-fasteoi rtc0
9: 0 0 IO-APIC-fasteoi acpi
12: 5 0 IO-APIC-edge i8042
18: 0 0 IO-APIC 18-fasteoi i801_smbus
23: 87 0 IO-APIC 23-fasteoi ehci_hcd:usb1
87: 4 0 PCI-MSI-edge i915
88: 5790 0 PCI-MSI-edge 0000:00:13.0
89: 1 0 PCI-MSI-edge eth0
90: 13149 0 PCI-MSI-edge eth0-TxRx-0
91: 8221 0 PCI-MSI-edge eth0-TxRx-1
92: 1 0 PCI-MSI-edge eth1
93: 13379 0 PCI-MSI-edge eth1-TxRx-0
94: 8840 0 PCI-MSI-edge eth1-TxRx-1
95: 674 0 PCI-MSI-edge eth2
96: 11654 0 PCI-MSI-edge eth2-TxRx-0
97: 1674 0 PCI-MSI-edge eth2-TxRx-1
98: 64 0 PCI-MSI-edge iwlwifi
NMI: 3 3 Non-maskable interrupts
LOC: 295147 249655 Local timer interrupts
SPU: 0 0 Spurious interrupts
PMI: 3 3 Performance monitoring interrupts
IWI: 1 0 IRQ work interrupts
RTR: 0 0 APIC ICR read retries
RES: 1916 2172 Rescheduling interrupts
CAL: 116 2390 Function call interrupts
TLB: 1340 1345 TLB shootdowns
TRM: 0 0 Thermal event interrupts
THR: 0 0 Threshold APIC interrupts
MCE: 0 0 Machine check exceptions
MCP: 3 3 Machine check polls
HYP: 0 0 Hypervisor callback interrupts
ERR: 0
MIS: 0
# cat /proc/irq/90/smp_affinity
3
----------------- cut ----------------
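
For reference, a quick way to compare each queue's allowed-CPU mask against its
live counters is a small loop over /proc. This is only a minimal sketch,
assuming a bash shell and the IRQ numbers from the listing above (they may
differ after a reboot):

----------------- cut ----------------
# For each TxRx vector, print its affinity mask and the live counter line.
# IRQ numbers (90/91, 93/94, 96/97) are the ones from the listing above.
for irq in 90 91 93 94 96 97; do
    printf "IRQ %s  allowed-CPU mask: %s\n" "$irq" "$(cat /proc/irq/$irq/smp_affinity)"
    grep "^ *$irq:" /proc/interrupts
done
----------------- cut ----------------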
--
*Fred Klassen*
Fujinaka, Todd
2015-10-06 15:44:31 UTC
The only answer I can give you is "it depends". I'd suggest keeping each RX/TX pair on the same core. It's been a while since I've heard of a system with only two cores. For your experiments, you just have to change the affinity by stopping the irqbalance daemon and doing something like:

echo 1 > /proc/irq/90/smp_affinity
echo 2 > /proc/irq/91/smp_affinity

echo 1 > /proc/irq/93/smp_affinity
echo 2 > /proc/irq/94/smp_affinity

echo 1 > /proc/irq/96/smp_affinity
echo 2 > /proc/irq/97/smp_affinity
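
Roughly, the whole sequence might look like this. This is only a sketch; it assumes the IRQ numbers from your listing and that irqbalance can be stopped with killall or its init script:

# stop irqbalance so it does not rewrite the masks behind your back
killall irqbalance 2>/dev/null    # or: /etc/init.d/irqbalance stop

# queue 0 of each port to CPU0 (mask 0x1), queue 1 to CPU1 (mask 0x2)
for irq in 90 93 96; do echo 1 > /proc/irq/$irq/smp_affinity; done
for irq in 91 94 97; do echo 2 > /proc/irq/$irq/smp_affinity; done

# confirm that the TxRx counters now increment on both CPUs
grep TxRx /proc/interrupts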

Todd Fujinaka
Software Application Engineer
Networking Division (ND)
Intel Corporation
***@intel.com
(503) 712-4565

Fredrick Klassen
2015-10-06 16:40:29 UTC
Thank you Todd.

Your suggestion has helped immensely. Immediately after entering the suggested commands, I saw a 23% drop in bridge jitter. I want to reduce it further, but this gives me a good start.

Thanks, Fred.