xdp-forward: added VLANs support #504
Open
enhaut wants to merge 2 commits into xdp-project:main from
Conversation
This commit adds support for VLANs. All packet-forwarding scenarios between interfaces are supported:

* untagged interface -> untagged interface
* tagged interface -> tagged interface
* untagged interface -> tagged interface
* tagged interface -> untagged interface

Unfortunately, this adds roughly 4% performance overhead in all scenarios. Since the kernel patch this is based on hasn't been merged yet, it requires patching the kernel manually. When running this version of xdp-forward on an unpatched kernel, all route lookups (`bpf_fib_lookup`) will return `-EINVAL`, and packets will be passed to the regular kernel stack.
This patch extends the previous one and adds support for 802.1Q VLANs without the kernel patch mentioned above. VLAN interfaces, along with their IDs, are detected via netlink. These data are stored in a BPF map and provided to the XDP program, which can then handle VLAN packets the same way as if the kernel patch were applied. VLAN interface detection happens via netlink only at startup; therefore, packets forwarded to VLAN interfaces added later won't have corresponding entries in the BPF map and will be passed to the regular kernel stack.
Hi! Is there any news about the integration of the VLAN patch? Thx
Member
> Is there any news about the integration of the VLAN patch?
Sadly, no. I got sidetracked before I managed to submit the kernel patch
to add VLAN support to the fib lookup, and this stalled out as a result.
As I recall, that was quite a small patch, so should be possible to
resurrect it.
Alternatively, if someone wants to rebase this and drop the kernel patch
path, we can merge that first and add the other thing once the kernel
patch lands.
This patch adds support for VLAN processing. All the directions are supported:
However, this adds ~4% overhead; therefore it's disabled by default. To enable it,
xdp-forward needs to be recompiled with either `VLANS_USERSPACE` or `VLANS_PATCHED`, based on the desired mode. Two modes are supported:

* `VLANS_USERSPACE`: the userspace part of xdp-forward uses netlink to get all the VLAN interfaces on top of xdp-forward-enabled devices. A map that maps each VLAN ifindex to the underlying physical ifindex + VLAN ID is then passed to the XDP program. (`bpf_fib_lookup` returns the VLAN interface's ifindex when a packet is forwarded to a VLANed network, but VLAN interfaces do not implement an xmit function for XDP; therefore, the packet needs to be sent out of the physical interface.) The limitation of this version is that the mapping map is not automatically updated, so when a VLAN interface changes (is removed, added, or its VLAN ID changes), xdp-forward requires a manual reload.
* `VLANS_PATCHED`: relies on a kernel patch of `bpf_fib_lookup` adding support for physical-device lookup for VLAN interfaces. (@tohojo is working on this patch.)

Performance comparison:
Before this patch, VLAN-tagged traffic was actually slower with xdp-forward enabled, as the program just passed it to the kernel stack while still adding some processing overhead.