
[[BR]]

=== Batch Delay ===
If "xpra info" is not enough, you can dump the changes in "batch delay" to the log file by setting:
{{{
XPRA_DELAY_DEBUG=1
}}}
when starting the server. The delay factors are then dumped to the log every 30 seconds, or every 1000 messages, whichever comes first; this throttling prevents the logging itself from skewing the system and the calculations too badly (it still affects them, but much more sparsely). The log messages look like this:
{{{
update_batch_delay: wid=5, last updated 249.50 ms ago, decay=1.00s, \
change factor=9.8%, delay min=5, avg=5, max=6, cur=6.7, \
w. average=6.0, tot wgt=227.2, hist_w=113.6, new delay=7.4
}}}
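The `w. average` and `tot wgt` fields above come from a time-decayed weighted average over recent delay records. As a rough illustration only (the function name, record layout and decay formula below are assumptions, not xpra's actual code), such an average can be computed like this:
{{{#!python
def weighted_delay(records, now, decay=1.0):
    """Time-decayed weighted average of delay records.

    records: list of (timestamp, weight, delay) tuples; samples older
    than roughly 'decay' seconds contribute progressively less.
    (Illustrative sketch only - not xpra's actual implementation.)
    """
    tot_weight = 0.0
    total = 0.0
    for when, weight, delay in records:
        # scale each sample's weight down with its age
        w = weight / (1.0 + (now - when) / decay)
        tot_weight += w
        total += w * delay
    return total / tot_weight if tot_weight > 0 else 0.0
}}}
With two equally-weighted samples, the more recent one dominates: `weighted_delay([(0.0, 1.0, 10.0), (1.0, 1.0, 20.0)], now=1.0)` returns about 16.7 rather than the plain mean of 15.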
For more details, including the actual factors used, set:
{{{
XPRA_DELAY_DEBUG=2
}}}
Then the output should be much more verbose:
{{{
Factors (change - weight - description):
-14 38 damage processing latency: avg=0.013, recent=0.013, target=0.014, aim=0.800, aimed avg factor=0.729, div=1.000, s=<built-in function sqrt>
+128 15 damage processing ratios 12 - 13 / 5
-25 50 damage send latency: avg=0.014, recent=0.015, target=0.030, aim=0.800, aimed avg factor=0.557, div=1.000, s=<built-in function sqrt>
-65 8 damage network delay: avg delay=0.001 recent delay=0.002
-65 23 client decode speed: avg=32.3, recent=32.3 (MPixels/s)
+0 0 no damage events for 1.2 ms (highest latency is 100.0)
-49 8 client latency: avg=0.002, recent=0.002, target=0.006, aim=0.800, aimed avg factor=0.260, div=1.000, s=<built-in function sqrt>
-40 4 client ping latency: avg=0.003, recent=0.003, target=0.007, aim=0.950, aimed avg factor=0.353, div=1.000, s=<built-in function sqrt>
-54 4 server ping latency: avg=0.003, recent=0.002, target=0.006, aim=0.950, aimed avg factor=0.211, div=1.000, s=<built-in function sqrt>
-100 0 damage packet queue size: avg=0.000, recent=0.000, target=1.000, aim=0.250, aimed avg factor=0.000, div=1.000, s=<built-in function sqrt>
-100 0 damage packet queue pixels: avg=0.000, recent=0.000, target=1.000, aim=0.250, aimed avg factor=0.000, div=90000.000, s=<built-in function sqrt>
-99 20 damage data queue: avg=0.346, recent=0.033, target=1.000, aim=0.250, aimed avg factor=0.019, div=1.000, s=<built-in function logp>
-100 0 damage packet queue window pixels: avg=0.000, recent=0.000, target=1.000, aim=0.250, aimed avg factor=0.000, div=90000.000, s=<built-in function sqrt>
}}}
Note: the change and weight values are shown as percentages in the output (they are floating point numbers in the implementation).
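Each factor thus contributes a change (how much it wants the delay to move) and a weight (how much that factor should count). A minimal sketch of folding such factors into a new delay, assuming changes expressed as multipliers rather than percentages (the function name, multiplier convention and clamping bounds are illustrative, not xpra's actual code):
{{{#!python
def combine_factors(current_delay, factors, min_delay=1.0, max_delay=15000.0):
    """factors: list of (change, weight) pairs, where change is a
    multiplier (1.0 = leave the delay alone) and weight scales how
    much that factor counts.  (Illustrative sketch only.)
    """
    tot_weight = sum(weight for _, weight in factors)
    if tot_weight <= 0:
        return current_delay
    # weighted average of the change multipliers
    change = sum(c * w for c, w in factors) / tot_weight
    # apply it to the current delay, clamped to sane bounds
    return max(min_delay, min(max_delay, current_delay * change))
}}}
Factors that pull in opposite directions with equal weight cancel out: `combine_factors(10.0, [(1.5, 1.0), (0.5, 1.0)])` leaves the delay at 10.0.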

[[BR]]

=== Auto speed and quality ===

To dump the changes to video encoding speed and quality, set:
{{{
XPRA_VIDEO_DEBUG=1
}}}
when starting the server. You will then get messages like these when using fixed quality/speed settings:
{{{
video encoder using fixed speed: 10
video encoder using fixed quality: 10
}}}
Or these when actually using the tuning code:
{{{
video encoder quality factors: wid=5, packets_bl=1.00, batch_q=0.31, \
latency_q=94.44, target=30, new_quality=25
video encoder speed factors: wid=4, low_limit=157684, min_damage_latency=0.03, \
target_damage_latency=0.20, batch.delay=17.69, dam_lat=0.00, dec_lat=0.05, \
target=4.00, new_speed=5.00
}}}
Please refer to the code for accurate information.

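The quality tuning essentially scales the target down whenever congestion indicators (packet backlog, batch delay, client latency) show pressure. A minimal sketch, assuming indicators normalized to [0, 1] where 1.0 means "no pressure" (this normalization and the function below are illustrative assumptions, not xpra's actual code):
{{{#!python
def tune_quality(target, indicators, min_quality=1, max_quality=100):
    """Let the most constrained congestion indicator cap the quality.
    indicators: values in [0, 1], where 1.0 means no pressure.
    (Illustrative heuristic only.)
    """
    q = target * min(indicators) if indicators else target
    return max(min_quality, min(max_quality, int(round(q))))
}}}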
The auto-tuning code:
 * honours the minimum speed and quality settings set via {{{--min-quality}}} and {{{--min-speed}}}
 * tunes the speed so that encoding one frame takes roughly as long as the batch delay (the time spent accumulating updates for the next frame), so that there is only one frame in the pipeline on average. The aim is to always keep the encoder busy, but without any backlog. (Note: this does not necessarily give the best framerate, since a busy CPU may cause the batch delay to increase... which in turn lowers the speed.)
 * client speed: if the client is struggling to decode frames, we use a higher speed (which makes the stream cheaper to decompress) - the target client decoding speed is 8 MPixels/s
 * quality: we try to lower the quality if the client has a backlog of frames to draw/acknowledge (usually a sign of network congestion), or if the measured client latency is higher than normal (also a sign of congestion)
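The "one frame in the pipeline" rule above can be sketched as a simple feedback loop: raise the speed when a frame takes longer to encode than the batch delay, lower it when there is plenty of headroom. The thresholds, step size and names below are illustrative assumptions, not xpra's actual code:
{{{#!python
def tune_speed(encode_time, batch_delay, speed, step=5, min_speed=0, max_speed=100):
    """If encoding a frame takes longer than the batch delay, more than
    one frame queues up, so raise the speed (less compression effort);
    if encoding finishes in under half the batch delay, lower the speed
    and spend the spare time on better compression.  (Illustrative only.)
    """
    if encode_time > batch_delay:
        speed += step
    elif encode_time < batch_delay * 0.5:
        speed -= step
    # honour the configured bounds (cf. --min-speed)
    return max(min_speed, min(max_speed, speed))
}}}
For example, with a 20 ms batch delay, a 50 ms encode pushes the speed up from 50 to 55, while a 5 ms encode pulls it down to 45.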