Achieving Top Network Performance - Red Hat Summit
Why Bother – A quick teaser

● Check the MTU:

```
# ifconfig eth4
eth4      Link encap:Ethernet  HWaddr 00:02:C9:36:79:80
          inet addr:172.17.200.50  Bcast:172.17.200.255  Mask:255.255.255.0
          inet6 addr: fe80::202:c9ff:fe36:7980/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2634628 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31433648 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:184742056 (176.1 MiB)  TX bytes:47590480340 (44.3 GiB)
```

● Raise the MTU (ifconfig eth0 mtu 9000) and re-run netperf:

```
# ./netperf -l 30 -H 172.17.200.82
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.17.200.82 (172.17.200.82) port 0 AF_INET : spin interval : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

 87380  16384  16384    30.00    23923.65
```

● Changing the MTU: 9 Gbit/sec -> 24 Gbit/sec
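The jump from ~9 to ~24 Gbit/sec comes mostly from per-packet cost: with 9000-byte frames the host handles roughly one sixth as many packets (and interrupts) per gigabyte moved, and wire efficiency improves a little as well. A rough sketch of the efficiency side — the header sizes below are standard Ethernet/IPv4/TCP values, not figures taken from these slides:

```python
# Rough goodput-efficiency estimate for a bulk TCP stream at a given MTU.
# Assumed per-frame overheads (standard values, not from the deck):
#   14 B Ethernet header + 4 B FCS + 20 B preamble/inter-frame gap,
#   20 B IPv4 header + 20 B TCP header, no options.
ETH_OVERHEAD = 14 + 4 + 20
IP_TCP_HEADERS = 20 + 20

def tcp_efficiency(mtu: int) -> float:
    payload = mtu - IP_TCP_HEADERS   # TCP payload per frame (the MSS)
    wire = mtu + ETH_OVERHEAD        # bytes actually occupying the wire
    return payload / wire

print(f"MTU 1500: {tcp_efficiency(1500):.1%}")  # -> MTU 1500: 94.9%
print(f"MTU 9000: {tcp_efficiency(9000):.1%}")  # -> MTU 9000: 99.1%
```

The ~4-point efficiency gain alone does not explain a 2.6x throughput jump; the bigger win is that the sender and receiver CPUs touch far fewer packets per second, which matters when the CPU, not the wire, is the bottleneck.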
Contents (remainder of the deck):

- NUMA - Latency
- PCI Bus - and related issues
- 40 Gbit Gen3 vs 10 Gbit PCI Gen2 latency
- CPU Characteristics - Basics
- CPU - Performance Governors
- CSTATE default - C7 on this config
- NPtcp latency vs cstates - c7 vs c0
- RHEL6 "tuned-adm" profiles
- Kernel Bypass Technologies - Pros and Cons
- Offload - Solarflare OpenOnload
- KVM Network Architecture - Virtio
- KVM Network Architecture - vhost_net
- Latency comparison - RHEL 6
- Host CPU Consumption - virtio vs vhost
- KVM Network Architecture - PCI Device Assignment
- KVM Network Architecture - SR-IOV
- KVM Architecture - Device Assignment
- RHEL6 - new features
- Receive Steering - improved message …
- Tuning Knobs - Overview
- sysctl - View and set /proc/sys settings
- sysctl - TCP related settings
- Why Bother? - Teaser 1
- lspci - details
- Tuning - first pass bottleneck resolution
- Tuning - second pass setup
- Tuning - irqbalance disabled, netperf
- Tuning - second pass (mpstat)
- Tuning - step 3: Try TCP_SENDFILE
- Tuning - checking ethtool -S eth4
- Tuning - step 4: More buffers
- Tuning - sanity check
- Throttling - cgroups in Action
- Cgroup default mount points
- cgroups
- incorrect bindings!
- Throttle with cgroups
- Network Tuning Tips
- For More Information - Other talks
- Stay connected through the Red Hat …
- Configuration Tools - System Level
- sar - some common flags
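Several of the topics above ("sysctl - TCP related settings", "Tuning - step 4: More buffers") revolve around socket-buffer sizing. As an illustrative sketch only — the values below are common starting points for 10GbE-class links, not numbers taken from this deck:

```shell
# /etc/sysctl.conf fragment -- illustrative values, not from the slides.
# Cap on what applications may request via SO_RCVBUF / SO_SNDBUF:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# TCP autotuning limits: min / default / max, in bytes:
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Load the settings with `sysctl -p` and confirm with `sysctl net.ipv4.tcp_rmem`; appropriate maxima depend on the bandwidth-delay product of the path being tuned.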