Xpra: Ticket #2130: TCP_CORK

Related to #619 and ticket:2121#comment:5.

We know when there are more chunks to be written to the socket, so we can use TCP_CORK.
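A minimal sketch of the idea, assuming a plain Python socket on Linux (illustrative only, not the actual code from the commit below):

import socket

def send_chunks(sock, chunks):
    # Hold back partial segments while several chunks are queued,
    # then flush them as fully-packed segments by clearing TCP_CORK.
    # TCP_CORK is Linux-specific (the BSDs have TCP_NOPUSH instead).
    cork = hasattr(socket, "TCP_CORK")
    if cork:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    try:
        for chunk in chunks:
            sock.sendall(chunk)
    finally:
        if cork:
            # un-corking flushes whatever is still buffered
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)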



Thu, 31 Jan 2019 13:00:10 GMT - Antoine Martin: owner changed

Done in r21517. (note: no support for BSD OS variants - patches welcome)

This results in the chunks of large packets (ie: "draw") being aggregated into fewer TCP segments, including over websockets and SSL. As per the example in ticket:2121#comment:5, the PNG pixel data and the xpra packet metadata no longer require an extra TCP frame:

T 127.0.0.1:10000 -> 127.0.0.1:38956 [AP] #12476
  82 7e 00 b5 50 00 00 07    00 00 00 7a 89 50 4e 47    .~..P......z.PNG
  0d 0a 1a 0a 00 00 00 0d    49 48 44 52 00 00 00 0c    ........IHDR....
  00 00 00 0d 08 02 00 00    00 12 4b 18 15 00 00 00    ..........K.....
  41 49 44 41 54 78 5e 63    fc ff ff 3f 03 2a 60 64    AIDATx^c...?.*`d
  64 44 13 61 42 e3 63 e5    42 15 61 ea 46 56 0d 52    dD.aB.c.B.a.FV.R
  84 5f 05 50 01 13 50 05    a6 b3 d0 dd 44 50 05 c8    ._.P..P.....DP..
  24 12 1c 8e 5f 29 f5 4c    42 b7 07 ab 3f 58 a8 e6    $..._).LB...?X..
  3b 00 f7 39 0f 17 4e 12    5b 8c 00 00 00 00 49 45    ;..9..N.[.....IE
  4e 44 ae 42 60 82 50 00    00 00 00 00 00 2b 6c 34    ND.B`.P......+l4
  3a 64 72 61 77 69 31 65    69 31 37 65 69 31 35 65    :drawi1ei17ei15e
  69 31 32 65 69 31 33 65    33 3a 70 6e 67 30 3a 69    i12ei13e3:png0:i
  31 34 65 69 30 65 64 65    65                         14ei0edee
#

@maxmylin: Just like #619, this should result in slightly lower bandwidth usage (better bandwidth utilization) and lower latency, most noticeable on low-bandwidth setups.


Sat, 09 Feb 2019 03:44:54 GMT - Antoine Martin: owner changed


Thu, 01 Aug 2019 12:02:18 GMT - Smo: owner changed


Tue, 13 Aug 2019 18:15:13 GMT - Smo: attachment set

test of cork on and off


Tue, 13 Aug 2019 18:16:19 GMT - Smo: owner changed

What do you think of this as a baseline for testing with bandwidth constraints?

I think I may need longer tests for these; I was only testing with the rgb and auto encodings.


Wed, 14 Aug 2019 05:52:45 GMT - Antoine Martin: owner changed

I don't see the raw test data - what was the bandwidth constraint used? It looks like the gtkperf test failed with CORK=0 (no data). The only surprise so far is that the max-damage-latency is quite a bit higher and the min-quality is lower. It could be that using TCP_CORK makes the network layer push back more aggressively, which is not a bad thing. (This fits with the lower max-batch-delay, which is more coarse-grained than damage-latency.) We would need the round-trip latency figures to verify that.

As per ticket:619#comment:27, it would be useful to combine CORK with NODELAY.
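A sketch of how the two options could be combined, under the same assumptions as the sketch above (illustrative only, not the actual xpra code):

import socket

def configure_and_send(sock, chunks):
    # NODELAY stays on permanently: small, latency-sensitive packets sent
    # outside a corked section are not held back by Nagle's algorithm.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    # CORK is toggled only around multi-chunk writes (Linux only);
    # while set, it takes precedence and packs the chunks into full segments.
    cork = hasattr(socket, "TCP_CORK")
    if cork:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)
    try:
        for chunk in chunks:
            sock.sendall(chunk)
    finally:
        if cork:
            # un-corking flushes the remaining buffered data immediately
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)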


Thu, 15 Aug 2019 11:54:45 GMT - Antoine Martin: status changed; resolution set

As per ticket:619#comment:29, this is better, even without measuring the effect on end-to-end latency, which should also be improved.


Mon, 19 Aug 2019 16:38:20 GMT - Antoine Martin:

Will follow up in #2381.


Tue, 20 Aug 2019 05:12:35 GMT - Antoine Martin:

The charts are now available here: https://xpra.org/stats/nodelay-cork/.


Wed, 25 Sep 2019 08:47:15 GMT - Antoine Martin:

This option can now be enabled on a per-socket basis: ticket:2424#comment:1.

See also #2975.


Sat, 23 Jan 2021 05:43:00 GMT - migration script:

this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/2130