The batch delay increased rapidly because of congestion events and bandwidth limits:
2018-07-07 21:38:09,208 update_batch_delay: bandwidth-limit : 6.77,45.88 {'used': 32245920, 'budget': 5242880}
2018-07-07 21:38:09,209 update_batch_delay: congestion : 1.87,8.74 {}
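For what it's worth, those lines are easy to tabulate when skimming a long log: each one carries a factor name, what looks like a (factor, weight) pair and a details dict. A quick parsing sketch (the field layout is inferred from the lines above, not lifted from the xpra source):

```python
import ast
import re

# Matches lines like:
#   ... update_batch_delay: bandwidth-limit : 6.77,45.88 {'used': 32245920, 'budget': 5242880}
FACTOR_RE = re.compile(
    r"update_batch_delay:\s+(?P<name>[\w-]+)\s+:\s+"
    r"(?P<factor>[\d.]+),(?P<weight>[\d.]+)\s+(?P<details>\{.*\})"
)

def parse_factor(line):
    """Return (name, factor, weight, details) or None if the line does not match."""
    m = FACTOR_RE.search(line)
    if not m:
        return None
    return (
        m.group("name"),
        float(m.group("factor")),
        float(m.group("weight")),
        ast.literal_eval(m.group("details")),   # the details are printed as a Python dict
    )

line = ("2018-07-07 21:38:09,208 update_batch_delay: bandwidth-limit : "
        "6.77,45.88 {'used': 32245920, 'budget': 5242880}")
print(parse_factor(line))
# ('bandwidth-limit', 6.77, 45.88, {'used': 32245920, 'budget': 5242880})
```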
And sometimes also because of client-latency:
2018-07-07 21:38:23,080 update_batch_delay: client-latency : 2.86,0.69 {'target': 8, 'weight_multiplier': 503, 'smoothing': 'sqrt', 'aim': 800, 'aimed_avg': 8178, 'div': 1000, 'avg': 233, 'recent': 446}
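As a rough illustration of what the client-latency details mean (this is a deliberately simplified model, not the actual xpra formula; it ignores target, aim and weight_multiplier, so it will not reproduce the logged 2.86): the factor rises when the recent latency drifts above the running average, dampened by the 'sqrt' smoothing shown in the details:

```python
from math import sqrt

def latency_factor(avg, recent, smoothing=sqrt):
    """Illustrative only: returns a factor > 1 when the recent latency is worse
    than the running average, dampened by the smoothing function
    (sqrt here, matching the 'smoothing': 'sqrt' shown in the log details)."""
    if avg <= 0:
        return 1.0
    return smoothing(recent / avg)

# Values from the log line above: avg=233, recent=446
print(round(latency_factor(233, 446), 2))   # ~1.38: recent latency is drifting up
```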
And, to a lesser extent, because of the client decode speed:
2018-07-07 21:38:27,286 update_batch_delay: client-decode-speed : 2.15,4.59 {'avg': 131, 'recent': 449}
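To see why the bandwidth-limit factor dominates here, note that each factor comes with a weight and the batch delay moves towards their weighted consensus. A toy model of that combination step (not the actual xpra code, which also clamps and smooths the result), fed with the three factor values from the excerpts above:

```python
def combine_factors(factors):
    """factors: iterable of (factor, weight) pairs as seen in the log.
    Returns a single multiplier: a weighted average biased towards
    heavily-weighted factors.  Toy model only."""
    total_weight = sum(w for _, w in factors)
    if total_weight <= 0:
        return 1.0
    return sum(f * w for f, w in factors) / total_weight

# bandwidth-limit, congestion and client-decode-speed from the excerpts above:
samples = [(6.77, 45.88), (1.87, 8.74), (2.15, 4.59)]
print(round(combine_factors(samples), 2))   # ~5.69: the delay goes up sharply
```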
This all points towards a network / CPU performance bottleneck on the client.
You can now turn off bandwidth detection more easily, see #1912.
With this turned off, it should now be impossible for update_batch_delay: congestion to raise the batch delay, since the congestion value should always be zero: it is calculated from the congestion_send_speed list, which is only updated in record_congestion_event, and that method is bypassed when bandwidth detection is turned off.
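To make that dependency concrete, here is a stripped-down model (the method names mirror the ones mentioned above, but the bodies are illustrative only): with bandwidth detection off, record_congestion_event returns early, the congestion_send_speed list stays empty, and the value derived from it can only be zero:

```python
class CongestionModel:
    """Stripped-down model of the dependency described above;
    the names mirror the xpra methods, the bodies are illustrative only."""

    def __init__(self, bandwidth_detection=False):
        self.bandwidth_detection = bandwidth_detection
        self.congestion_send_speed = []     # only ever filled by record_congestion_event

    def record_congestion_event(self, send_speed):
        if not self.bandwidth_detection:
            return                          # bypassed: the list never grows
        self.congestion_send_speed.append(send_speed)

    def congestion_value(self):
        # no recorded events -> no congestion -> the value stays at zero
        if not self.congestion_send_speed:
            return 0
        return sum(self.congestion_send_speed) / len(self.congestion_send_speed)

model = CongestionModel(bandwidth_detection=False)
model.record_congestion_event(100 * 1024)
assert model.congestion_value() == 0        # cannot raise the batch delay
```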
This won't fix the massive jitter you are seeing though:
- this is very wrong (almost 1.3 seconds to get the paint packet ack):
2018-07-08 10:59:12,969 record_latency: took 1294.6 ms round trip, 1294.5 for echo, 14.0 for decoding of 240 pixels, 59 bytes sent over the network in 1280.4 ms, 1280.3 ms for echo
- whereas it goes fine for similar (tiny) packets later:
2018-07-08 10:59:30,733 record_latency: took 14.3 ms round trip, 14.3 for echo, 1.0 for decoding of 44888 pixels, 60 bytes sent over the network in 12.8 ms, 12.7 ms for echo
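If you want to quantify that jitter rather than eyeball it, the round-trip figure can be pulled straight out of the record_latency lines. A quick sketch based on the two samples above:

```python
import re

ROUND_TRIP_RE = re.compile(r"record_latency: took ([\d.]+) ms round trip")

def round_trips(lines):
    """Extract the round-trip time (in ms) from each record_latency line."""
    return [float(m.group(1)) for m in map(ROUND_TRIP_RE.search, lines) if m]

log = [
    "2018-07-08 10:59:12,969 record_latency: took 1294.6 ms round trip, ...",
    "2018-07-08 10:59:30,733 record_latency: took 14.3 ms round trip, ...",
]
values = round_trips(log)
print(min(values), max(values), round(max(values) - min(values), 1))
# 14.3 1294.6 1280.3 -> two orders of magnitude of spread for near-identical packets
```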