How to set up and configure mptcp on Ubuntu

Hey Folks,

In this story I want to share my experience and the steps I took to set up mptcp on two of my Ubuntu servers. My goal was to evaluate the performance of iperf3 using mptcp over two 10Gbit/s links. It was a bumpy road to get it to work, so please read this story as a mix of a short introduction to mptcp and a guide on how to configure it.

As always, I’m focusing more on the practical side. I found a lot of helpful resources for mptcp on the Internet, but a short guide for Ubuntu was missing. So please use this story as inspiration for your own machines to play around with mptcp. Just as a short spoiler: in terms of performance, it was definitely worth the trouble. But enough prologue, let’s get started.

Observation 1: There is no single “mptcp” for Linux

At the beginning of my journey, I thought that there is exactly one mptcp implementation, namely the one from multipath-tcp.org. It’s available for basically any distribution, can be built from source and contains a ton of cool features. However, I read that it only supports specific Linux kernels (4.x.y), which sounded quite suspicious to me. Furthermore, I didn’t want to downgrade my kernel just to enable this tooling. So on my Ubuntu I did some research on how to install it (apart from the things mentioned on the mptcp website) and found out that there is a package for Ubuntu 22.04. I thought: okay nice, let’s just do a release upgrade and use the package. The cool thing is: Ubuntu 22.04 comes with Linux kernel ≥ 5.15, which has a built-in mptcp implementation.

But (and this took me a bit of time to figure out, so let me state it right at the beginning): these two implementations are completely separate. multipath-tcp.org hosts the initial implementation done by the researchers who worked on mptcp, while the implementation in the newer Linux kernels comes from the mptcp upstream project. The funny thing is that multipath-tcp.org has a note saying that for all Linux kernels ≥ 5.6 you should use the mptcp upstream project, which I completely overlooked the whole time… So for the rest of this story, please keep in mind that these two are configured differently.
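If you are unsure which of the two worlds your machine lives in, the quickest check is the running kernel version; everything ≥ 5.6 (Ubuntu 22.04 ships ≥ 5.15) contains the upstream in-kernel implementation:

# Print the running kernel version
uname -r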

Installation

As mentioned before, I decided to go with the upstream mptcp implementation in the kernel. If you’re running Ubuntu 21.10 or 22.04, you’re basically ready to use mptcp. It can be configured with the mptcp path manager via the ip mptcp command from the iproute2 package. You can verify that mptcp is available on your system via sudo dmesg | grep -i mptcp, which should print at least some mptcp-related output. To see if it’s enabled, you can use sudo sysctl net.mptcp.enabled. This value should be set to 1 if you plan to use mptcp.

Side note: You can of course also use the multipath-tcp.org implementation. Once you install the package on your system, it adds a dedicated mptcp kernel to your boot menu, and after configuring your boot manager to boot it (in my tests it was chosen automatically by grub) you can boot into your mptcp kernel. But please be aware that its configuration differs in some nuances from the one I present here.
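To summarize, here are the two checks in one place, plus how you can enable mptcp in case your system reports 0 (make it persistent via /etc/sysctl.conf if needed):

# Check that the kernel knows about mptcp
sudo dmesg | grep -i mptcp

# Check whether mptcp is enabled, should print net.mptcp.enabled = 1
sudo sysctl net.mptcp.enabled

# If it prints 0, enable it
sudo sysctl -w net.mptcp.enabled=1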

How applications can use mptcp

For applications, mptcp aims to provide an interface similar to normal tcp. The application creates a socket and binds it. Now there are basically three ways to use mptcp:

  1. Pass the mptcp protocol number (IPPROTO_MPTCP, 262) to the socket() call to inform the kernel that your socket should use mptcp. This is great if you’re writing your own applications, but changing this in external tools is not really reasonable, because all of these tools would need to be rebuilt from source.
  2. That’s why we come to way number two: enable mptcp automatically for all applications that open normal tcp sockets. This can be done via a systemtap script as shown on the Red Hat site. On my servers this didn’t work (I got some weird errors), but it could be worth a try.
  3. And finally the way that I personally found best: use mptcpize to make a specific application use mptcp instead of normal tcp. It works by hooking into the library calls of the target application and setting the mptcp protocol number. To me it looked like heavy black magic, but it didn’t show any errors, so it seemed fine so far. A short sketch of how to use it follows right after this list.
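For the third way, here is a minimal sketch of how I use mptcpize; on Ubuntu 22.04 it should come with the mptcpd package, and iperf3 and ssh.service are just example targets:

# mptcpize is shipped as part of the mptcpd package
sudo apt install mptcpd

# Run a single command over mptcp instead of plain tcp
mptcpize run iperf3 -s

# Or enable mptcp for a whole systemd service (example unit)
sudo mptcpize enable ssh.service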

So using mptcpize I thought I was ready to run a sample application, in my case iperf3, since I was interested in doing performance measurements over mptcp. The guide from Red Hat shows a command which should output that iperf3 is using mptcp (ss -nti '( dport :5201 )'), which I tried. Without success: it was just showing cubic/cwnd and nothing mptcp related. I thought it didn’t work, but I was wrong.

Observation 2: Use reliable ways to test if mptcp is used

The commands to check whether a socket is using mptcp or not should be used carefully. I spent two hours trying to figure out why mptcp was not working, while it actually was working: the commands simply did not output what I expected, and the connection was just not yet configured to use multiple paths. What I can definitely recommend is using tcpdump (sudo tcpdump -i any, or -i lo if client and server run on the same machine). There you should see mptcp in the tcp options of the packets targeted at the applications that use mptcp.
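For reference, these are the checks I would rely on; ip mptcp monitor and the nstat counters are additional options that should be available with a recent iproute2 (the port is just an example):

# Watch the handshake; mptcp shows up in the tcp options of the packets
sudo tcpdump -i any 'tcp port 5201'

# Print live mptcp events (connections created, subflows established, ...)
sudo ip mptcp monitor

# Dump the kernel's mptcp counters
nstat -a | grep -i mptcp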

Configure Routing for mptcp

My first tries to run iperf3 over mptcp didn’t show any difference compared to iperf3 over normal tcp (around 9.3Gbit/s throughput over one 10Gbit/s link). The reason for this is that I didn’t configure any additional paths. There are some cool examples on how to do this (multipath-tcp.org, Red Hat guide). Let me present how I configured my servers to run mptcp. My setup looks like this:

Server1: eno1: 10.0.0.2, eno2: 192.168.0.2, both links 10Gbit/s, directly connected to Server2
Server2: eno1: 10.0.0.1, eno2: 192.168.0.1, both links 10Gbit/s, directly connected to Server1

So let’s have a look at the configuration on Server1:

# According to https://multipath-tcp.org/pmwiki.php/Users/ConfigureRouting
# This creates two different routing tables that we use based on the source address.
sudo ip rule add from 10.0.0.2 table 1
sudo ip rule add from 192.168.0.2 table 2

# Configure the two different routing tables
sudo ip route add 10.0.0.0/24 dev eno1 scope link table 1
sudo ip route add 192.168.0.0/24 dev eno2 scope link table 2

# Allow more paths
sudo ip mptcp limits set subflow 2 add_addr_accepted 2

# Add additional path
sudo ip mptcp endpoint add 192.168.0.2 dev eno2 signal

And on Server2:

# According to https://multipath-tcp.org/pmwiki.php/Users/ConfigureRouting
# This creates two different routing tables that we use based on the source address.
sudo ip rule add from 10.0.0.1 table 1
sudo ip rule add from 192.168.0.1 table 2

# Configure the two different routing tables
sudo ip route add 10.0.0.0/24 dev eno1 scope link table 1
sudo ip route add 192.168.0.0/24 dev eno2 scope link table 2

# Allow more paths
sudo ip mptcp limits set subflow 2 add_addr_accepted 2

# Add additional path
sudo ip mptcp endpoint add 192.168.0.1 dev eno2 signal
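To double-check the path manager configuration on both servers, the following two commands can be used (the exact output format depends on your iproute2 version):

# Should report: subflow 2 add_addr_accepted 2
sudo ip mptcp limits show

# Should list the 192.168.0.x address as an endpoint with the signal flag
sudo ip mptcp endpoint show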

The iperf3 commands on client and server with respect to the configuration above are the following:

# Server1 (iperf3 server)
mptcpize run iperf3 -s

# Server2 (iperf3 client)
mptcpize run iperf3 -c 10.0.0.2

What we can see here is that iperf3 connects with the initial subflow over the 10.0.0.x addresses and uses the configured endpoint 192.168.0.x as an additional subflow. After configuring the routing, I ran iperf3 again and saw an interesting change: iperf3 now achieves around 13 Gbit/s, evenly distributed between both links. That’s cool, I finally got mptcp to work properly. To further tune the performance, I increased the read and write buffers of tcp and the throughput went up to 17 Gbit/s. That’s it, I’m happy with the results. I think achieving the whole 20 Gbit/s is only a matter of fine tuning.
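For completeness, the buffer tuning mentioned above boils down to raising the tcp read/write buffer limits via sysctl. The values below only illustrate the idea and are not the exact numbers I ended up with:

# Raise the maximum socket buffer sizes (example values, tune them for your links)
sudo sysctl -w net.core.rmem_max=67108864
sudo sysctl -w net.core.wmem_max=67108864

# min / default / max of the tcp read and write buffers
sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 67108864"
sudo sysctl -w net.ipv4.tcp_wmem="4096 131072 67108864"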

As always, please mail me any feedback you have; I really appreciate any kind of comments or additional information, and I will update this article with any helpful input I get. The original article was posted here.

Cheers, Marten