Citing Multipath TCP

A growing number of scientific papers use the Multipath TCP implementation in the Linux kernel to perform experiments, develop new features or compare Multipath TCP with newly proposed techniques. While reading these papers, we often see different ways of citing the Multipath TCP implementation in the Linux kernel. As of this writing, more than twenty developers have contributed to this implementation and the number continues to grow. The full list of contributors is available from http://multipath-tcp.org/mptcp_stats/authors.html

If you write a scientific paper that uses the Multipath TCP implementation in the Linux kernel, we encourage you to cite it by using the following reference:

Christoph Paasch, Sebastien Barre, et al., Multipath TCP implementation in the Linux kernel, available from http://www.multipath-tcp.org

The corresponding BibTeX entry may be found below:

@Misc{MPTCPLinux,
    author =    {Christoph Paasch and Sebastien Barre and others},
    title =     {Multipath TCP implementation in the Linux kernel},
    howpublished = {Available from http://www.multipath-tcp.org}
}

Please also indicate the precise version of the implementation that you used to ease the reproduction of your results. We also strongly encourage you to distribute the software that you used to perform your experiments and the patches that you have written on top of this implementation. This will allow other researchers to reproduce your results.
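
For instance, if you use one of the pre-compiled kernels, the exact release that was used can be recorded with standard tools. The commands below are only a suggestion; the boot message printed by the Multipath TCP kernel may differ between releases.

    # Record the exact kernel release used for the experiments
    uname -r

    # The Multipath TCP kernel usually identifies itself at boot time;
    # the exact wording of this message may vary between releases
    dmesg | grep -i mptcp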

Recommended Multipath TCP configuration

A growing number of researchers and users are downloading the pre-compiled Linux kernels that include the Multipath TCP implementation. Besides the researchers who perform experiments to improve the protocol or its implementation, we see a growing number of users who deploy Multipath TCP on real machines to benefit from its multihoming capabilities. Several of these users have asked on the mptcp-dev mailing list how to configure Multipath TCP in the Linux kernel. Several parts of the Multipath TCP implementation can be tuned.

The first element that can be configured is the path manager, the part of the implementation that controls the establishment of new subflows. The path manager was recently reimplemented with a modular architecture, but as of this writing only two path managers are available: the fullmesh and the ndiffports path managers.
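
On the kernels distributed on multipath-tcp.org, the path manager is selected at run time through a sysctl. The sysctl and module names below are those used by recent modular versions of the implementation and are given as an illustration; older kernels may use different names.

    # Show the path manager that is currently in use
    sysctl net.mptcp.mptcp_path_manager

    # Load the fullmesh path manager and make it the active one
    modprobe mptcp_fullmesh
    sysctl -w net.mptcp.mptcp_path_manager=fullmesh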

The fullmesh path manager is the default one and should be used in most deployments. On a client, it advertises all the IP addresses of the client to the server and learns all the IP addresses that are advertised by the server. It also listens to events from the network interfaces and adds or removes addresses when interfaces go up or down. On a server, it allows the server to automatically learn all the available addresses and announce them to the client. Note that in the current implementation the server never creates subflows, even if it learns additional addresses from the client. The reason is that the client is often behind a NAT or firewall, and creating subflows from the server is not a good idea in this case. The typical use case for the fullmesh path manager is a dual-homed client connected to a single-homed server (e.g. a smartphone connected to a regular server). In this case, two subflows will be established, one over each interface of the dual-homed client. We expect this to be the most popular use case for Multipath TCP. It should be noted that if the client has N addresses and the server M addresses, this path manager will establish N × M subflows, which is probably not optimal in all scenarios.
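
A simple way to verify that the fullmesh path manager behaves as expected is to check that Multipath TCP is enabled and to look at the connections exposed by the kernel. The sysctl and the /proc file below are those provided by recent mptcp.org kernels; their exact format may change between versions.

    # Check that Multipath TCP is enabled on this kernel
    sysctl net.mptcp.mptcp_enabled

    # List the established Multipath TCP connections and their subflows
    cat /proc/net/mptcp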

The ndiffports path manager was designed with a specific use case in mind: exploiting the equal-cost multiple paths that are available in a datacenter. This made it possible to demonstrate impressive performance results with Multipath TCP in the Amazon EC2 datacenter in a paper presented at SIGCOMM 2011. It can also be used to perform some tests between single-homed hosts. However, this path manager does not automatically learn the IP addresses of the client and the server and does not react to interface changes. As with the fullmesh path manager, the server never creates subflows. The ndiffports path manager should not be used in production and should rather be considered as an example of how a path manager can be written inside the Linux kernel.
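
For a quick test between two single-homed hosts, the ndiffports path manager can be selected in the same way. The module name below is again an assumption based on recent mptcp.org kernels and should be checked against your version.

    # Load the ndiffports path manager and select it instead of fullmesh
    modprobe mptcp_ndiffports
    sysctl -w net.mptcp.mptcp_path_manager=ndiffports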

A second important module in the Multipath TCP implementation in the Linux kernel is the packet scheduler. This scheduler is used every time a new packet needs to be sent. When several subflows are active and have an open congestion window, the default scheduler selects the subflow with the smallest round-trip time. The various measurements that have been performed during the last few years with the Multipath TCP implementation in the Linux kernel indicate that this scheduler is the best compromise from a performance viewpoint. Recently, the implementation of the scheduler has been made more modular to enable researchers to experiment with other schedulers. A round-robin scheduler has been implemented and evaluated in a recent paper that shows that the default scheduler remains the best choice. Researchers might later come up with a better scheduler that improves the performance of Multipath TCP under specific circumstances, but as of this writing the default RTT-based scheduler remains the best choice.
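
On kernels that include the modular scheduler, the active scheduler can be changed through a sysctl. The scheduler and module names below (default, roundrobin, mptcp_rr) are those used by recent mptcp.org kernels and are given as an illustration; check the documentation of your kernel.

    # Show the packet scheduler that is currently used
    sysctl net.mptcp.mptcp_scheduler

    # Load the round-robin scheduler and select it (mainly useful for experiments)
    modprobe mptcp_rr
    sysctl -w net.mptcp.mptcp_scheduler=roundrobin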

A third important part of Multipath TCP is the congestion control scheme. The standard congestion control scheme is the Linked Increase Algorithm (LIA) defined in RFC 6356. It provides performance similar to that of the NewReno congestion control algorithm used with single-path TCP. An alternative is the OLIA congestion control algorithm. The paper that proposes this algorithm has shown that it provides some benefits over LIA in several environments. Our experience indicates that both LIA and OLIA can safely be used as a default in deployments. Recently, a delay-based congestion control scheme tuned for Multipath TCP has been added to the Linux implementation. Users who plan to use this congestion control scheme in specific environments should first perform tests before deploying it.
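
The congestion control scheme is selected through the standard Linux interface for TCP congestion control. The module and algorithm names below (olia for OLIA and wvegas for the delay-based scheme) are those used by recent mptcp.org kernels and are given as an illustration.

    # List the congestion control algorithms that are currently available
    sysctl net.ipv4.tcp_available_congestion_control

    # Load the OLIA module and use it as the congestion control scheme
    modprobe mptcp_olia
    sysctl -w net.ipv4.tcp_congestion_control=olia

    # The delay-based scheme tuned for Multipath TCP can be loaded similarly
    modprobe mptcp_wvegas
    sysctl -w net.ipv4.tcp_congestion_control=wvegas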

There are two other configuration parameters that can be tuned to improve the performance of Multipath TCP. First, Multipath TCP tends to consume more buffer space than regular TCP since data is transmitted over paths with different delays. If you experience performance issues with the default buffer sizes, you might try to increase them; see https://fasterdata.es.net/host-tuning/linux/ for additional information. Second, if Multipath TCP is used over paths with different Maximum Segment Sizes, there are scenarios where the performance can be significantly reduced. A patch that solves this problem has been posted recently. If your version of the Multipath TCP kernel does not include this patch, you might want to force the MTU on all the interfaces of the client to the same value (or force a lower MTU on the server to ensure that the clients always use the same MSS).
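
As an illustration, the buffers and the MTU can be adjusted with the standard Linux tools. The values and the interface name below are only examples and should be adapted to the bandwidth-delay product of your paths.

    # Increase the maximum socket buffer sizes (example values)
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # Force the same MTU on the client interfaces (eth0 is only an example)
    ip link set dev eth0 mtu 1400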

Multipath TCP discussed at Blackhat 2014

The interest in Multipath TCP continues to grow. During IETF90, an engineer from Oracle confirmed that they were working on an implementation of Multipath TCP for Solaris. This indicates that companies see a possible benefit in Multipath TCP. Earlier this week, Catherine Pearce and Patrick Thomas from Neohapsis gave a presentation on how the deployment of Multipath TCP could affect enterprises that heavily rely on firewalls and intrusion detection systems in their corporate networks. This first ‘heads up’ for the security community will likely be followed by many other attempts to analyse the security of Multipath TCP and its implications for the security of an enterprise network.

In parallel with their presentation, Catherine and Patrick have released two software packages that could be useful for Multipath TCP users. Both are based on a first implementation of Multipath TCP inside scapy written by Nicolas Maitre during his Master's thesis at UCL.

  • mptcp_scanner is a tool that probes remote hosts to verify whether they support Multipath TCP. It would be interesting to see whether an iPhone is detected as such (probably not, because no servers run on the iPhone). In the long term, we can expect that nmap will offer a similar capability to detect Multipath TCP support.
  • mptcp_fragmenter is a tool that mimics how a Multipath TCP connection could send data over different subflows. Currently, the tool is very simple: five subflows are used and their source port numbers are fixed. Despite this limitation, it is a good starting point to test the support of Multipath TCP on firewalls. We can expect that new features will be added as firewalls add support for Multipath TCP.

NorNet moving to Multipath TCP

The NorNet testbed is a recent and very interesting initiative from the Simula laboratory in Norway. This testbed is composed of two parts:

  • the NorNet Core is a set of servers at different locations in Norway and possibly abroad. Each location has a few servers that are connected to several ISPs. There are currently more than a dozen NorNet Core sites in Norway
  • the NorNet Edge comprises hundreds of nodes that are connected to several cellular network providers

As of this writing, NorNet is the largest experimental platform that gathers nodes that are really multihomed. Recently, NorNet has upgraded its kernels to include Multipath TCP. This will enable large-scale experiments with Multipath TCP in different network environments. We can expect more experimental papers that use Multipath TCP at a large scale.

Additional information about NorNet may be found on the web and in scientific articles such as:

Gran, Ernst Gunnar; Dreibholz, Thomas; and Kvalbein, Amund: NorNet Core - A Multi-Homed Research Testbed. Computer Networks, Special Issue on Future Internet Testbeds, vol. 61, pp. 75-87, DOI 10.1016/j.bjp.2013.12.035, ISSN 1389-1286, March 14, 2014.