Xpra: Ticket #619: better TCP_NODELAY handling: only use it when it is useful

Follow-up from #514: at present we enable TCP_NODELAY globally, which is a bit wasteful.

It ensures that packets go out as soon as we queue them, but when the packets contain large-ish binary data this means that the binary data and the actual xpra packet structure are likely to travel in separate TCP-level packets.

It would be better to only enable TCP_NODELAY when aggregating packets is not helping: when we have no more data to send or when the output buffer is full. As per "Is there a way to flush a POSIX socket?" and this answer: "What I do is enable Nagle, write as many bytes (using non-blocking I/O) as I can to the socket (i.e. until I run out of bytes to send, or the send() call returns EWOULDBLOCK, whichever comes first), and then disable Nagle again. This seems to work well (i.e. I get low latency AND full-size packets where possible)."
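A minimal sketch of that technique in Python, assuming a plain connected TCP socket rather than xpra's actual protocol classes:

    import select
    import socket

    def send_all_then_flush(sock: socket.socket, chunks):
        # Leave Nagle enabled (TCP_NODELAY off) so small writes get aggregated:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)
        sock.setblocking(False)
        for chunk in chunks:
            data = memoryview(chunk)
            while data:
                try:
                    sent = sock.send(data)
                    data = data[sent:]
                except BlockingIOError:
                    # EWOULDBLOCK: the output buffer is full, so the kernel
                    # already has full-size packets to send - wait for it to drain
                    select.select([], [sock], [])
        # No more data queued: disable Nagle so the trailing partial packet goes out now
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)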

Good read: The Caveats of TCP_NODELAY



Mon, 15 Aug 2016 09:42:43 GMT - Antoine Martin:

See also: #1211.


Mon, 20 Feb 2017 11:49:36 GMT - Antoine Martin:

See also #639, #999, #401, #540, #417


Thu, 25 Jan 2018 05:18:46 GMT - Antoine Martin: priority, description changed

I'm seeing some weird behaviour with win32 clients while trying to improve #999 and detect late acks.

The network layer's source_has_more function may not be the right place to set and unset NODELAY: lots of small screen updates can take under a millisecond to compress, which is still slower than the time it takes the network layer to send them... Maybe we need to pass the screen update's flush attribute down to the network layer too.
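As an illustration of that idea (names and structure are hypothetical, not the actual xpra network layer): the packet source could pass a "more is coming" hint along with each packet, and the write loop would only enable NODELAY when that hint is false.

    import socket

    class PacketWriter:
        # Hypothetical sketch - not the actual xpra Protocol class
        def __init__(self, sock: socket.socket):
            self.sock = sock
            self.nodelay = None

        def set_nodelay(self, on: bool):
            if self.nodelay != on:
                self.nodelay = on
                self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, int(on))

        def write_packet(self, data: bytes, more: bool):
            # keep Nagle on while the source says more screen updates are coming,
            # flush (NODELAY) as soon as this is the last packet for now
            self.set_nodelay(not more)
            self.sock.sendall(data)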


Thu, 25 Jan 2018 07:15:01 GMT - Antoine Martin: owner changed

Done in r18149 + r18150. Implementation notes:

As of r18151, we can use XPRA_SOCKET_NODELAY to overrule the automatic settings:
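A hedged sketch of how that kind of override typically works (the parsing shown here is illustrative; only the XPRA_SOCKET_NODELAY variable name comes from the ticket): leave it unset to keep the automatic per-packet behaviour, or force it to 0 or 1 to pin the socket option.

    import os
    import socket

    # Illustrative parsing only: "0" forces Nagle, "1" forces NODELAY,
    # leaving XPRA_SOCKET_NODELAY unset keeps the automatic behaviour.
    _NODELAY_ENV = os.environ.get("XPRA_SOCKET_NODELAY", "")
    SOCKET_NODELAY = int(_NODELAY_ENV) if _NODELAY_ENV in ("0", "1") else None

    def apply_nodelay(sock: socket.socket, auto_value: bool):
        # the environment override, when set, always wins over the automatic value
        value = auto_value if SOCKET_NODELAY is None else bool(SOCKET_NODELAY)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, int(value))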

TODO:

@maxmylyn: this ticket is tailor-made for the automated tests - we want to compare before and after to see if this helps, especially under bandwidth-constrained conditions. (The only slight problem is that there is a bug I'm working on which causes congestion detection to kick in too early, and the countermeasures are too aggressive, causing the framerate to drop...)


Thu, 25 Jan 2018 19:28:19 GMT - J. Max Mena:

I did some maintenance this morning on the test box - it's been acting up again. For starters, I've split the TC tests into two different files - one runs just the packet loss/delay tests, the other runs the bandwidth-limited cases. That way, if something goes wrong with either test suite, we don't lose all the data.

However, there's a separate problem that I'm going to look at now - seemingly at random, the tests fail to stop the server properly, at which point all following tests fail consistently because they're unable to spin up a server on the display that's still in use. A simple xpra stop clears the server, so I'll sprinkle that command into the scripts between test suites... but I'd like to figure out why I need to do that.

I'll hold on to this ticket for a few days so we can gather more data.


Sun, 01 Apr 2018 05:45:39 GMT - Antoine Martin: milestone changed


Thu, 03 Jan 2019 18:06:50 GMT - J. Max Mena:

As I mentioned in #1840, my mathematician is unavailable for a little bit, and he was working on the new charts, so I'm going to post the raw data I have so far.


Thu, 03 Jan 2019 18:07:21 GMT - J. Max Mena: attachment set

TCP output data


Fri, 18 Jan 2019 11:15:56 GMT - Antoine Martin:

TCP output data

It's not clear what command lines were used for each run, as there are 3 possible values for XPRA_SOCKET_NODELAY (0, 1, unset)

It also doesn't look like this was being tested with any bandwidth constraints? All encodings are tested in there, which is going to make the data very noisy. I would focus on one setting ("auto" or "rgb") and only one test (one that generates lots of small-ish screen updates - maybe the simulated console user, or gtkperf), and maybe vary only the throttling - if anything.


Mon, 21 Jan 2019 18:22:52 GMT - J. Max Mena:

It's not clear what command lines were used for each run, as there are 3 possible values for XPRA_SOCKET_NODELAY (0, 1, unset)

For these tests, I have it run a series of three test runs. There are two sets - the ones with the prefix nodelay_ have XPRA_SOCKET_NODELAY=1 set, and the ones with the prefix delay_ have XPRA_SOCKET_NODELAY=0 set.

It also doesn't look like this was being tested with any bandwidth constraints?

I'll copy some of the Bash script from when I was running daily tests with bandwidth constraints - 25, 16, and 8 megabits should be a good enough starting point. I've changed the config file for this test run to only use rgb, and it only runs the two console tests and gtkperf. Since I'm not hardware-limited anymore, it doesn't hurt to run extra tests, as I'll usually have a spare machine or two in case I need one for something else.


Wed, 23 Jan 2019 17:36:38 GMT - J. Max Mena: attachment set


Wed, 23 Jan 2019 17:43:21 GMT - J. Max Mena:

I've posted a new set of data. This time I ran tc qdisc add dev lo root netem rate 25mbit between test runs with XPRA_SOCKET_NODELAY set to 0 and 1, with 25mbit, 16mbit, and 8mbit bandwidth constraints.

Unfortunately most of that data is useless. It looks like the IP Tables command didn't work (EDIT: this one's on me - I left USE_IPTABLES disabled in the config), so none of the packet accounting was recorded, and as mentioned in #1840, quite a few of the other columns are missing as well. I'll try to figure out why that's the case, but I'm posting it anyway since all the data is only ~500KB.


Thu, 24 Jan 2019 05:55:41 GMT - Antoine Martin:

.. as mentioned in #1840 quite a few of the other columns are missing as well ..

As mentioned in #1840, the breakage is likely quite recent whereas this ticket is a year old. Why not test with a version that does record the data we want?


Mon, 28 Jan 2019 17:18:00 GMT - Antoine Martin:

r21493 waits until after we have sent the last chunk before enabling NODELAY. I believe that's correct and the kernel will then flush the socket, but it would be much better to verify that with test data.
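A hedged sketch of that ordering (illustrative only, not the actual r21493 code): keep Nagle enabled while the chunks are written, and only set TCP_NODELAY once the last chunk has been handed to the kernel.

    import socket

    def send_chunks(sock: socket.socket, chunks, more_packets_queued: bool):
        # keep Nagle enabled while the intermediate chunks are written
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)
        for chunk in chunks:
            sock.sendall(chunk)
        if not more_packets_queued:
            # only after the last chunk: enable NODELAY so the kernel
            # flushes whatever it has aggregated
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)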


Tue, 29 Jan 2019 14:03:26 GMT - Antoine Martin:

r21495: also disable NODELAY for multiple chunks (doh)


Thu, 31 Jan 2019 12:50:19 GMT - Antoine Martin:

See also #2130


Sat, 09 Feb 2019 03:41:49 GMT - Antoine Martin: owner changed


Thu, 01 Aug 2019 11:57:47 GMT - Smo: owner changed


Fri, 09 Aug 2019 02:26:28 GMT - Smo: attachment set

nodelay charts


Fri, 09 Aug 2019 02:28:43 GMT - Smo: owner changed

Attached some charts and data for this.

I'm not sure if the script for charting took into account the instances I ran with trickle.

I could have included just the network/packet metrics in the charts, but I left all the details in there.


Fri, 09 Aug 2019 12:51:48 GMT - Antoine Martin: owner changed

Please include the SOCKET_CORK option (#2130) in this test data (a short sketch of what corking does follows this list). Some comments on the current charts:

- I think it would make more of a difference if we could compare with and without trickle limits too, but I don't think that the "perf charts" code will let you do that as it is? It would be useful to compare per-bandwidth-limit / per-nodelay / per-cork settings, for example.
- I assume that's why the "max-batch-delay" goes so high with all settings (300 ms!): it would help to see how that varies per-bandwidth-limit. For some metrics, the data from low bandwidth limits may skew the results.
- All these screensaver tests mostly behave the same (full screen updates, high framerate); it would be more useful to have other tests in there: gtkperf, xterm, even x11perf.
- As expected, the "server-number-of-threads" and "server-vsize" go up in "auto" mode since that uses multi-threaded video encoders. But I don't see the benefit of video encoders on framerate ("regions-per-second") or pixels-per-second (though the "encoding-pixels-per-second" does show a very different profile).
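For reference, a minimal sketch of the corking idea (illustrative only, not xpra's actual code; assumes a Linux socket):

    import socket

    def send_corked(sock: socket.socket, header: bytes, payload: bytes):
        # Linux-only: TCP_CORK holds back partial packets while one logical
        # xpra packet is assembled from several writes...
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
        try:
            sock.sendall(header)
            sock.sendall(payload)
        finally:
            # ...and un-corking flushes it in full-size segments
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)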

Some thoughts on what I was expecting to see:


Thu, 15 Aug 2019 02:07:31 GMT - Smo: attachment set

test of nodelay and cork


Thu, 15 Aug 2019 02:08:46 GMT - Smo: owner changed

Attached is a comparison of the combinations of XPRA_SOCKET_NODELAY and XPRA_SOCKET_CORK.

Longer tests this time, and a few different ones.


Thu, 15 Aug 2019 11:52:52 GMT - Antoine Martin: owner changed

Sorry, I forgot to ask you to include the default case with XPRA_SOCKET_NODELAY unset, both with and without CORK for completeness.

Very interesting to have 4 combinations already. Maybe we should combine more test results?

So far:


Mon, 19 Aug 2019 15:01:49 GMT - Smo: attachment set

new test with nodelay unset data


Mon, 19 Aug 2019 15:03:42 GMT - Smo: owner changed

Okay, I have attached the previous data along with the new data you asked for.

I was hoping to make the charts a bit more readable, but after spending some time on it I was not able to. (I'm not the best at JS/HTML.)

Take a look and let me know if this is good. Maybe we can finally close this ticket.


Mon, 19 Aug 2019 16:26:04 GMT - Antoine Martin: owner changed

@smo: there are two sets of NODELAY Unset CORK 1 - what's the difference?


Mon, 19 Aug 2019 16:55:01 GMT - Smo: attachment set

bad label on previous chart


Mon, 19 Aug 2019 16:56:05 GMT - Smo: owner changed

Oops, that was my bad - wrong label for that one. Attached a new tarball.


Tue, 20 Aug 2019 05:09:17 GMT - Antoine Martin: status changed; resolution set

The charts are now available here: https://xpra.org/stats/nodelay-cork/.


Wed, 25 Sep 2019 08:47:09 GMT - Antoine Martin:

This option can now be enabled on a per-socket basis: ticket:2424#comment:1.

See also #2975.


Sat, 23 Jan 2021 05:01:04 GMT - migration script:

this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/619