cpumap-pping 1.0.0 RC 1 #150
thebracket
started this conversation in Show and tell
Replies: 2 comments
-
Hey @thebracket , just curious, with your HyperV test environment, is your shaper a VM running with two NICs bridged? Or routed? Just thinking about a lab env for our stuff; running under VMware, bridging two NICs is not something we can do as I understand it. Is this HyperV magic or have I misunderstood entirely? Cheers!
-
Hey,
My lab setup is:

- 1 VM ("shaper") with 2 virtual NICs, each connected to a "private switch" (each going to test clients), and a 3rd NIC connected to my LAN for easy updating/access via SSH. I had to change some "allow MAC address changing" settings to get the bridge to work at all.
- 1 VM connected to the 1st private switch, set up as an iPerf server. It has a 2nd virtual NIC also connected to my LAN, with masquerade enabled.
- 2 VMs connected to the 2nd private switch, each running iPerf clients and with Internet connectivity through the shaper.

It's worked pretty well so far.
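For anyone building a similar lab, here is a minimal sketch of how the client VMs could drive traffic through the shaper. It assumes iperf3 is installed on the clients and that the iPerf server VM is reachable at 10.0.1.2 (a hypothetical address; adjust for your own topology and test lengths):

```python
# Hypothetical load driver for the client VMs: runs parallel iperf3
# streams against the server VM on the far side of the shaper.
# The server address and test parameters are assumptions, not part of
# the original setup description.
import subprocess

SERVER = "10.0.1.2"   # hypothetical address of the iPerf server VM
DURATION = 30         # seconds per test run
PARALLEL = 4          # parallel TCP streams per client

def run_iperf(reverse: bool = False) -> None:
    """Run one iperf3 test; reverse=True pulls traffic toward the client."""
    cmd = ["iperf3", "-c", SERVER, "-t", str(DURATION), "-P", str(PARALLEL)]
    if reverse:
        cmd.append("-R")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_iperf()              # upload direction
    run_iperf(reverse=True)  # download direction
```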
-
After heavy testing and resolving a few issues under heavy load, cpumap-pping has tagged release 1.0.0 RC1. It should be ready for the v1.3 release of LibreQoS.

What does it do?

`cpumap-pping` merges two projects:

- xdp-cpumap-tc provides some of the heavy lifting behind LibreQoS. It maps IP addresses to Linux `tc` classifiers/qdiscs. I recently added IPv6 and IPv4 subnet matching (e.g. match on 192.168.0.0/24), which is included in `cpumap-pping`. By mapping directly (instead of using filters), the `cpumap` is able to shift traffic-shaping processing to individual CPUs, bypassing the performance limits of the default Linux traffic shaper. Because the BPF programs run in kernel space (in a sandbox), it can sustain very high performance. (A rough sketch of the mapping idea appears below.)
- xdp-pping is an in-kernel BPF version of the excellent Pollere pping by Kathie Nichols. Previous versions of LibreQoS ran the original `pping` to gather TCP round-trip time data, providing accurate Quality of Experience (QoE) metrics to help optimize your ISP and monitor the benefits of the Cake shaper. `pping` is a great tool, but it tended to consume too much CPU time (on a single core) under heavy load. `xdp-pping` can sustain very high loads and still provide accurate RTT information.

Running the two separately was troublesome and duplicated a lot of work: both programs would individually parse Ethernet headers (`cpumap` also parses VLAN headers, `pping` did not), TCP headers, extract addresses, and so on. For LibreQoS, it just made sense to combine them.
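To make the mapping idea concrete, here is a rough, userspace-only Python sketch of what "map an IP address or subnet to a CPU and a tc class" means conceptually. It is not the BPF implementation (the real lookup runs per-packet in XDP using kernel maps), and the prefixes, CPU numbers, and class IDs are invented examples:

```python
# Conceptual illustration only: the real work happens in a BPF/XDP program
# using kernel maps, not in Python. Prefixes, CPU numbers, and class IDs
# below are invented examples.
import ipaddress

# (prefix, cpu, tc classid) entries, analogous to what LibreQoS feeds into
# the cpumap. The longest matching prefix wins.
MAPPINGS = [
    (ipaddress.ip_network("192.168.0.0/24"), 2, "2:5"),
    (ipaddress.ip_network("192.168.0.12/32"), 3, "3:12"),
    (ipaddress.ip_network("2001:db8::/64"), 1, "1:3"),
]

def lookup(addr: str):
    """Return (cpu, classid) for the longest matching prefix, or None."""
    ip = ipaddress.ip_address(addr)
    best = None
    for net, cpu, classid in MAPPINGS:
        if ip.version == net.version and ip in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, cpu, classid)
    return (best[1], best[2]) if best else None

print(lookup("192.168.0.12"))  # -> (3, '3:12'): the /32 beats the /24
print(lookup("192.168.0.99"))  # -> (2, '2:5')
```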
`cpumap-pping` is a drop-in replacement (fully compatible) for `xdp-cpumap-tc` in LibreQoS. Once in place, instead of running `pping` and reading its results, you periodically run `xdp_pping` and retrieve the current snapshot of performance data, already classified to match the queues that LibreQoS is setting up. The results are handed out in a convenient JSON format; a sketch of how that snapshot might be consumed follows below.

Only a subset of TCP data is sampled. Rather than process every packet, the first 58 "ack" sequences are timed for each tc handle. This has two advantages.
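As a rough illustration of consuming that JSON snapshot, here is a small Python sketch that polls `xdp_pping` and indexes the results by tc handle. The binary path, polling interval, and the field names ("tc", "avg", "samples") are assumptions for illustration only; check the repository for the actual output schema:

```python
# Sketch of periodically polling xdp_pping for the RTT snapshot.
# The path to the binary and the JSON field names ("tc", "avg", "samples")
# are assumptions; consult the cpumap-pping docs for the real schema.
import json
import subprocess
import time

XDP_PPING = "./xdp_pping"  # assumed path to the binary

def poll_rtt():
    """Run xdp_pping once and return a dict keyed by tc handle."""
    out = subprocess.run([XDP_PPING], capture_output=True, text=True, check=True)
    entries = json.loads(out.stdout)
    # Each entry is assumed to carry its tc handle plus RTT statistics;
    # entries without a handle are skipped defensively.
    return {e["tc"]: e for e in entries if "tc" in e}

if __name__ == "__main__":
    while True:
        snapshot = poll_rtt()
        for handle, stats in snapshot.items():
            print(handle, stats.get("avg"), stats.get("samples"))
        time.sleep(10)  # LibreQoS polls on its own schedule; 10 s is arbitrary
```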
Performance

With the `pping` component, performance is around 4.21 gbit/s per core. In other words, it's very fast. The sampling adds roughly 0.004 ms to customer ping times.

@rchac has been working hard on connecting this to the graphing system. You can see some great examples of progress in this (now closed) issue.