mptcptrace demo, experiment four

This is the fourth post of a series of five. Context is presented in the first post. The second post is here. The third post is here.

Fourth experiment

Green at 0s:
  • delay : 10ms
  • bandwidth : 4mbit/s
  • loss : 0%
Green at 5s:
  • delay : 100ms
  • bandwidth : 4mbit/s
  • loss : 10%
Green at 15s:
  • delay : 10ms
  • bandwidth : 4mbit/s
  • loss : 0%
Red at 0s:
  • delay : 40ms
  • bandwidth : 4mbit/s
Red at 15s:
  • delay : 40ms
  • bandwidth : 4mbit/s
  • loss : 1%
Client:
  • Scheduler : default

In this fourth experiment, we change the loss rate of the red path to 1% after 15 seconds.
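
The schedule above can be reproduced with tc/netem on the emulated links. Below is a minimal sketch, assuming the green and red paths are shaped with netem on two interfaces of an intermediate node; the interface names (eth1 for the green path, eth2 for the red path) and the single shaping node are assumptions, not part of the original setup:

    # Sketch: drive the netem schedule of this experiment with tc.
    # Interface names (eth1 = green path, eth2 = red path) are assumptions.
    import subprocess
    import time

    def netem(iface, delay_ms, rate_mbit, loss_pct):
        """(Re)configure delay, rate and loss on one emulated link."""
        subprocess.run(
            ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
             "delay", f"{delay_ms}ms",
             "rate", f"{rate_mbit}mbit",
             "loss", f"{loss_pct}%"],
            check=True)

    netem("eth1", 10, 4, 0)    # green at 0s
    netem("eth2", 40, 4, 0)    # red at 0s
    time.sleep(5)
    netem("eth1", 100, 4, 10)  # green at 5s: 100ms delay, 10% loss
    time.sleep(10)
    netem("eth1", 10, 4, 0)    # green at 15s: back to the initial settings
    netem("eth2", 40, 4, 1)    # red at 15s: 1% loss from now on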

Again we take a look at the evolution of the sequence number.

../../../_images/sequence3.png

In this case, however, we can see a shift after 15 seconds. Because the red link becomes lossy after 15 seconds, the congestion window of the red subflow shrinks, and MPTCP needs to send data on the green subflow again when the congestion window of the red subflow becomes too small to sustain the application rate. Because MPTCP now sends a little bit of traffic on the green subflow, it realises that the green subflow has changed and now has a lower delay and a lower loss rate. As a consequence, the green subflow opens its congestion window again and ends up with a congestion window large enough to sustain the application rate.
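
This back-and-forth is essentially the default scheduler at work: it prefers the subflow with the lowest RTT estimate among those that still have room in their congestion window. The snippet below is a simplified sketch of that decision, not the actual Linux MPTCP code:

    # Simplified sketch of the default MPTCP scheduler decision: pick the
    # lowest-RTT subflow that still has room in its congestion window.
    # This is an illustration, not the real kernel implementation.
    class Subflow:
        def __init__(self, name, rtt_ms, cwnd, in_flight):
            self.name = name
            self.rtt_ms = rtt_ms        # smoothed RTT estimate
            self.cwnd = cwnd            # congestion window, in segments
            self.in_flight = in_flight  # segments sent but not yet acked

        def has_room(self):
            return self.in_flight < self.cwnd

    def pick_subflow(subflows):
        available = [s for s in subflows if s.has_room()]
        if not available:
            return None  # every window is full: the sender has to wait
        return min(available, key=lambda s: s.rtt_ms)

    # Around t = 15s the red subflow sees losses, its window shrinks and fills
    # up, so new segments overflow onto the green subflow, which gets probed
    # again and rediscovers its short RTT.
    red = Subflow("red", rtt_ms=80, cwnd=4, in_flight=4)
    green = Subflow("green", rtt_ms=20, cwnd=10, in_flight=0)
    print(pick_subflow([red, green]).name)  # -> green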

../../../_images/sequence_zoom3.png
../../../_images/sequence_zoom_2.png

If we take a look at the evolution of the goodput, we see the two shifts, at 5 seconds and at 15 seconds.

../../../_images/gput3.png
../../../_images/gput_zoom2.png
../../../_images/gput_zoom_2.png

The evolution of the MPTCP unacked data is shown below:

../../../_images/flight1.png

We can see the change after 15 seconds: because the red link is now lossy, we use less of the receive window. In our case the receive window is big enough anyway, but we would get different results if the window were smaller. We could reduce the window size by reducing the rmem, but that is for another experiment.
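
Shrinking the window would only require lowering the TCP receive buffer limits, for instance through the net.ipv4.tcp_rmem sysctl; the values below are only an illustration, not the settings used in these experiments:

    # Sketch: lower the TCP receive buffer limits (min, default, max in bytes)
    # so that the announced receive window becomes a limiting factor.
    import subprocess

    subprocess.run(["sysctl", "-w", "net.ipv4.tcp_rmem=4096 16384 65536"],
                   check=True)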

Again, we can look at the evolution of the unacked data at the TCP level.

../../../_images/flightpf1.png

Again, we observe the shift after 15 seconds.

mptcptrace demo, experiment three

This is the third post of a series of five. Context is presented in the first post. The second post is here.

Third experiment

Green at 0s:
  • delay : 10ms
  • bandwidth : 4mbit/s
  • loss : 0%
Green at 5s:
  • delay : 100ms
  • bandwidth : 4mbit/s
  • loss : 10%
Green at 15s:
  • delay : 10ms
  • bandwidth : 4mbit/s
  • loss : 0%
Red:
  • delay : 40ms
  • bandwidth : 4mbit/s
Client:
  • Scheduler : default

In this third scenario, the green link becomes lossless again after 15 seconds and its delay comes back to 10ms.

Again let’s see the evolution of the sequence numbers.

../../../_images/sequence2.png

We can observe the same shift from the green path to the red path after 5 seconds as in the previous experiment (see below). However, quite surprisingly, we cannot see any shift after 15 seconds. Due to the many losses and the high delay between 5 and 15 seconds, the green subflow probably has a small congestion window, and it keeps the same RTT estimate because no new traffic has been sent over it. MPTCP has pushed no traffic on the green subflow because the red subflow, even though it has a longer delay, can still sustain the application rate. In conclusion, because MPTCP never probes the green subflow, it has no way to discover that this subflow could now be better.

../../../_images/sequence_zoom2.png

We can also take a look at the goodput. We see the same perturbation at 5 seconds that we saw in the previous experiment, and we do not see any change around 15 seconds. This is expected, because MPTCP never comes back to the green subflow.

../../../_images/gput2.png

Again the zoom shows similarities with the previous experiment.

../../../_images/gput_zoom1.png

We can also take a look at the amount of data that is not yet acked from the perspective of the sender.

../../../_images/flight.png

On this figure, the black line shows the evolution of the receive window announced by the receiver. The cyan line shows the amount of data that is not yet MPTCP-acked, while the blue line shows the sum of the data that is not yet TCP-acked. If the blue line and the cyan line do not stick together, it means that segments arrive out of order (from the MPTCP sequence number viewpoint) at the receiver.
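
The gap between the blue and cyan lines is therefore the amount of data that has been received on some subflow but is still waiting for missing MPTCP sequence numbers. The sketch below shows how the two quantities could be computed at one instant; the field names and values are illustrative, not the actual mptcptrace output format:

    # Sketch: the two "unacked data" curves of the figure, at one instant.
    # - MPTCP level: highest data sequence number sent minus highest data ack.
    # - TCP level  : sum over the subflows of (highest seq sent - highest ack).
    def mptcp_unacked(dsn_sent, dsn_acked):
        """Cyan line: bytes sent but not yet acked at the MPTCP level."""
        return dsn_sent - dsn_acked

    def tcp_unacked(subflows):
        """Blue line: sum of the per-subflow bytes not yet TCP-acked."""
        return sum(sf["seq_sent"] - sf["ack_recv"] for sf in subflows)

    subflows = [
        {"name": "green", "seq_sent": 120_000, "ack_recv": 110_000},
        {"name": "red",   "seq_sent":  80_000, "ack_recv":  50_000},
    ]
    print(tcp_unacked(subflows))            # 40000 bytes unacked at the TCP level
    print(mptcp_unacked(200_000, 150_000))  # 50000 bytes unacked at the MPTCP level
    # When the two values differ, some bytes have been delivered by TCP but are
    # still waiting for missing MPTCP sequence numbers: they arrived out of order.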

We can also look at the evolution of the TCP unacked data per subflow.

../../../_images/flightpf.png

This graph shows the evolution of the unacked data at the TCP layer. We can observe that MPTCP chooses either one path or the other, but hardly ever uses both at the same time. On this graph we can also observe the consequence of the longer delay on the red path: the amount of data in flight is higher.
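
The difference matches a simple bandwidth-delay product estimate. Assuming the configured delays are one-way (so the RTT is roughly 80ms on the red path and 20ms on the green path, which is an assumption about the emulation setup):

    # Bandwidth-delay product: bytes that must be in flight to fill each path.
    # Assumes the configured delay is one-way, i.e. RTT = 2 * delay.
    def bdp_bytes(rate_mbit, rtt_ms):
        return rate_mbit * 1_000_000 / 8 * rtt_ms / 1000

    print(bdp_bytes(4, 80))  # red path  : ~40000 bytes in flight
    print(bdp_bytes(4, 20))  # green path: ~10000 bytes in flight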

mptcptrace demo, experiment two

This is the second post of a series of five. Context is presented in the first post.

Second experiment

Green at 0s:
  • delay : 10ms
  • bandwidth : 4mbit/s
  • loss : 0%
Green at 5s:
  • delay : 100ms
  • bandwidth : 4mbit/s
  • loss : 10%
Red:
  • delay : 40ms
  • bandwidth : 4mbit/s
Client:
  • Scheduler : default

In this scenario, we change the delay and the loss rate of the green link after 5 seconds.

Just like we did for the first experiment, we take a look at the ack/seq graph.

../../../_images/sequence1.png

As expected, we see the shift from the green path to the red path after 5 seconds. We will now take a closer look at what happens around 5 seconds, when the delay and the loss rate rise.

../../../_images/sequence_zoom1.png

Because of the losses, the congestion window of the green subflow starts to shrink. As a consequence, MPTCP starts to send segments over the red subflow because there is not enough space left in the green congestion window to sustain the rate. The spacing between the green segments and the green triangles is different from the spacing between the red segments and the red triangles; this reflects the different RTTs of the green and red subflows. Because the green link is lossy and has a longer delay, we observe duplicate acks on the red link: some segments sent on the green link are missing at the receiver. On the red link, on the other hand, even though the delay is higher, there are few losses. This leads to jumps in the acks during the transition, which produces the stair shape that we can see.

As we did for the previous experiment, we now take a look at the evolution of the goodput.

../../../_images/gput1.png

We see the transition after 5 seconds on the blue line. The effect on the cyan line is however quite small. Let's zoom in a bit on this specific event.

../../../_images/gput_zoom.png

As we can see, after five seconds there is a drop in the “instantaneous” goodput; however, this change does not last long enough to have an impact on the average goodput. If the application is sensitive to goodput variations, this may be an issue; if it is not, the user should barely see the difference.
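
The two curves only differ in the window over which the goodput is computed. Below is a small sketch of both computations from (time, cumulative acked bytes) samples; the samples are illustrative and not taken from the mptcptrace output:

    # Sketch: "instantaneous" versus average goodput, in bytes per second,
    # computed from (time_in_seconds, cumulative_acked_bytes) samples.
    def instantaneous_goodput(samples):
        """Goodput between consecutive samples."""
        return [(t1, (b1 - b0) / (t1 - t0))
                for (t0, b0), (t1, b1) in zip(samples, samples[1:])]

    def average_goodput(samples):
        """Goodput averaged from the start of the transfer."""
        t0, b0 = samples[0]
        return [(t, (b - b0) / (t - t0)) for t, b in samples[1:]]

    samples = [(0.0, 0), (1.0, 500_000), (2.0, 1_000_000), (5.0, 2_500_000),
               (5.5, 2_550_000), (6.0, 2_800_000)]
    print(instantaneous_goodput(samples)[-2])  # short dip around the 5s event
    print(average_goodput(samples)[-1])        # the average barely moves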

With synproxy, the middlebox can be on the server itself

Multipath TCP works by adding the new TCP options defined in RFC 6824 to all TCP segments. A Multipath TCP connection always starts with a SYN segment that contains the MP_CAPABLE option. To benefit from Multipath TCP, both the client and the server must run an operating system that supports Multipath TCP. With such a kernel on the client and the server, Multipath TCP should be used for all connections between the two hosts. This is true provided that there are no middleboxes on the path between the client and the server.
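
For reference, the MP_CAPABLE option that such a middlebox sees in the initial SYN is simply an extra TCP option with kind 30. The sketch below lays out its bytes for version 0 of the protocol; the key and the flag byte are example values:

    # Sketch: the MP_CAPABLE option carried in the initial SYN (RFC 6824).
    # Kind 30 identifies MPTCP, subtype 0x0 is MP_CAPABLE; in the SYN the
    # option is 12 bytes long and carries the sender's 64-bit key.
    import struct

    MPTCP_KIND = 30
    MP_CAPABLE_SUBTYPE = 0x0
    MPTCP_VERSION = 0
    FLAGS = 0x81  # example flags: checksum required + HMAC-SHA1
    sender_key = 0x0123456789ABCDEF  # arbitrary example key

    option = struct.pack(
        "!BBBBQ",
        MPTCP_KIND,                               # option kind
        12,                                       # option length
        (MP_CAPABLE_SUBTYPE << 4) | MPTCP_VERSION,
        FLAGS,
        sender_key)
    print(option.hex())  # 1e0c00810123456789abcdef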

A user of the Multipath TCP implementation in the Linux kernel recently reported problems when using Multipath TCP on a server. During the discussion, it appeared that a possible source of problems could be the synproxy module that is part of recent iptables implementations. synproxy, as described in a RedHat blog post, can be used to mitigate denial of service attacks on TCP by filtering the SYN segments. This module could be installed by default on your server or could have been enabled by the system administrators. If you plan to use Multipath TCP on the server, you need to disable it, because synproxy does not currently support Multipath TCP and will discard the SYN segments that contain the unknown MP_CAPABLE option. In this case, the middlebox that breaks Multipath TCP resides on the Multipath TCP enabled server…
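
A quick way to check whether this situation applies to a given server is to look for SYNPROXY rules in the loaded iptables configuration, for example with the small sketch below; how the rules should then be removed depends on how the firewall is managed on that machine:

    # Sketch: check whether any SYNPROXY rule is currently loaded in iptables.
    # If one is present, SYN segments carrying MP_CAPABLE will not get through.
    import subprocess

    rules = subprocess.run(["iptables-save"], capture_output=True,
                           text=True).stdout
    synproxy_rules = [line for line in rules.splitlines() if "SYNPROXY" in line]

    if synproxy_rules:
        print("synproxy is enabled on this host:")
        for rule in synproxy_rules:
            print("  ", rule)
    else:
        print("no SYNPROXY rule found")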