Opened 4 years ago
Last modified 16 months ago
#1700 assigned defect
faster damage processing - bandwidth constraints handling
| Reported by: | Antoine Martin | Owned by: | Antoine Martin |
| --- | --- | --- | --- |
| Priority: | major | Milestone: | 5.0 |
| Component: | encodings | Version: | trunk |
| Keywords: | | Cc: | |
Description (last modified)
Follow-up to #999. These classes are becoming complicated and slow.
TODO:
- run profiling again
- merge video source? (we never use window source on its own anyway)
- support multiple video regions?
- cythonize, use strongly typed and faster deque / ring buffers in Python/numpy (a rough sketch of such a ring buffer follows after this list)
- pre-calculate more values, like an ECU "engine map"
- more gradual refresh when under bandwidth constraints and at low quality: the jump from lossy to lossless can use up too much bandwidth, maybe refresh first at 80% quality before doing a true lossless pass (see the sketch at the end of this description)
- use more bandwidth? (macos client could use more quality?)
- slowly updating windows should be penalized less
- don't queue more frames for encoding after a congestion event (ok already?)
- maybe keep track of the refresh compressed size?
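As a rough illustration of the ring buffer item above: a fixed-size, strongly typed buffer backed by numpy could replace a generic `collections.deque` for numeric statistics. This is only a sketch, `RingBuffer` is a hypothetical name and not existing xpra code:
```python
# Illustrative sketch only: a fixed-size, strongly typed ring buffer backed by
# numpy, as a possible replacement for a generic collections.deque when storing
# numeric statistics.  "RingBuffer" is a hypothetical name, not xpra code.
import numpy as np

class RingBuffer:
    def __init__(self, size, dtype=np.float64):
        self.data = np.zeros(size, dtype=dtype)
        self.size = size
        self.index = 0          # position of the next write
        self.count = 0          # number of valid entries stored so far

    def append(self, value):
        self.data[self.index] = value
        self.index = (self.index + 1) % self.size
        self.count = min(self.count + 1, self.size)

    def values(self):
        # return the stored values, oldest first
        if self.count < self.size:
            return self.data[:self.count]
        return np.roll(self.data, -self.index)

    def mean(self):
        return float(self.data[:self.count].mean()) if self.count else 0.0

# usage:
buf = RingBuffer(100)
for v in (0.8, 0.9, 0.7):
    buf.append(v)
print(buf.mean())
```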
See also #920: some things could be made faster on the GPU.
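And a minimal sketch of the "more gradual refresh" item from the list above: step the refresh quality up instead of jumping straight from lossy to lossless. The function name and the 80% threshold are arbitrary examples, not actual xpra code:
```python
# Illustrative sketch of a more gradual refresh: step the quality up instead of
# jumping straight from lossy to lossless.  The function name and the 80%
# threshold are arbitrary examples, not actual xpra code.
def next_refresh_quality(current_quality, bandwidth_limited):
    """Return the quality to use for the next refresh pass (100 = true lossless)."""
    if not bandwidth_limited:
        return 100              # no constraint: go straight to lossless
    if current_quality < 80:
        return 80               # intermediate pass to smooth out the bandwidth spike
    return 100                  # then a final lossless pass

# example: a region sent at quality 30 under bandwidth pressure
# gets refreshed at 80 first, then at 100:
q = 30
for _ in range(2):
    q = next_refresh_quality(q, bandwidth_limited=True)
    print(q)                    # prints 80, then 100
```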
Attachments (1)
Change History (11)
comment:1 Changed 4 years ago
| Description: | modified (diff) |
| --- | --- |
| Status: | new → assigned |
| Summary: | faster damage processing → faster damage processing - bandwidth constraints handling |
comment:2 Changed 4 years ago
| Description: | modified (diff) |
| --- | --- |
comment:3 Changed 4 years ago
| Description: | modified (diff) |
| --- | --- |
comment:4 Changed 4 years ago
See also ticket:1769#comment:1: maybe we should round up all screen updates to ensure we can always use color subsampling and video encoders? Or only past a certain size, to limit the cost?
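For illustration, that rounding could look something like the sketch below: grow the update rectangle to aligned dimensions, but only past a size threshold. The function name, the threshold and the alignment value are made-up examples, not xpra API:
```python
# Hypothetical sketch of rounding an update rectangle up so that chroma
# subsampling / video encoders can always be used; the function name, the size
# threshold and the alignment are made-up examples, not xpra API.
def round_up_region(x, y, w, h, window_w, window_h, min_pixels=64*64, align=2):
    if w * h < min_pixels:
        # small update: not worth the extra encoding cost, keep it as-is
        return x, y, w, h
    # move the origin down to the alignment boundary and grow the size accordingly
    rx = x - (x % align)
    ry = y - (y % align)
    rw = w + (x - rx)
    rh = h + (y - ry)
    # then round the size up to the alignment
    rw += -rw % align
    rh += -rh % align
    # finally clamp to the window dimensions
    rw = min(rw, window_w - rx)
    rh = min(rh, window_h - ry)
    return rx, ry, rw, rh

print(round_up_region(3, 5, 101, 75, 1920, 1080))   # -> (2, 4, 102, 76)
```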
comment:5 Changed 4 years ago
After much profiling, it turns out that encoding selection is actually pretty fast already.
So we're better off spending extra time choosing the correct encoding, instead of trying to save time there: r18669.
Other micro-improvements: r18667, r18668.
See also ticket:1299#comment:6: we seem to be processing the damage events fast enough (~0.25ms for do_damage), but maybe we're scheduling things too slowly when we get those damage storms?
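If the scheduling really is the bottleneck during damage storms, one angle is to coalesce each burst into a single processing pass. A toy sketch of that idea follows; this is not xpra's actual batch-delay logic, just an illustration using stdlib threading instead of the real main loop, with an arbitrary 10ms delay:
```python
# Toy sketch of coalescing a burst of damage events into a single processing
# pass.  This is NOT xpra's actual batch-delay logic, just an illustration of
# the idea, using stdlib threading instead of the real main loop; the 10ms
# delay is an arbitrary example.
import threading

class DamageBatcher:
    def __init__(self, process_cb, delay=0.010):
        self.process_cb = process_cb
        self.delay = delay
        self.pending = []
        self.timer = None
        self.lock = threading.Lock()

    def damage(self, x, y, w, h):
        with self.lock:
            self.pending.append((x, y, w, h))
            if self.timer is None:
                # schedule a single flush for the whole burst
                self.timer = threading.Timer(self.delay, self.flush)
                self.timer.start()

    def flush(self):
        with self.lock:
            regions, self.pending = self.pending, []
            self.timer = None
        self.process_cb(regions)

# usage: a storm of 100 damage events ends up as one processing call
batcher = DamageBatcher(lambda regions: print("processing %i regions" % len(regions)))
for i in range(100):
    batcher.damage(0, i, 100, 1)
```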
comment:6 Changed 4 years ago
| Milestone: | 2.3 → 3.0 |
| --- | --- |
For the record, I've used this command to generate the call graphs:
```
python2 ./tests/scripts/pycallgraph -i damage -- start --start-child="xterm -ls" --no-daemon
```
Minor related fix: r18685.
Re-scheduling as the profiling has shown that this is not a huge overhead after all.
comment:7 Changed 4 years ago
| Milestone: | 3.0 → 3.1 |
| --- | --- |
comment:9 Changed 2 years ago
| Milestone: | 4.0 → 5.0 |
| --- | --- |
comment:10 Changed 16 months ago
This ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1700
See also #1761.