#365 closed enhancement (fixed)
don't copy pixmap data to ram: avoid the round-trips and stay on the GPU if we can
Reported by: | Antoine Martin | Owned by: | Antoine Martin |
---|---|---|---|
Priority: | minor | Milestone: | 2.1 |
Component: | server | Version: | |
Keywords: | | Cc: | |
Description
EXT_texture_from_pixmap should allow us to use the pixels on the GPU for doing CSC and/or encoding without first needing to copy them to RAM (via XShmGetImage or XGetImage, as we do now).
XShmGetImage is pretty fast, but then we have to upload the data again to the graphics card (assuming we do the CSC on the GPU, which is the fastest option) and then download the results. That is quite wasteful, especially at high resolutions.
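To make the cost concrete, here is a minimal sketch (not xpra code: the buffer sizes are placeholders and the CSC kernel launch is omitted) of the two bus transfers the current path incurs when the colourspace conversion runs on the GPU:
```python
# Illustration of the current round-trip: XShmGetImage puts BGRX pixels in RAM,
# we upload them to the GPU for CSC, then download the converted frame again.
import numpy
import pycuda.autoinit          # creates a CUDA context
import pycuda.driver as cuda

W, H = 1920, 1080
bgrx_host = numpy.zeros((H, W, 4), dtype=numpy.uint8)   # pixels copied from the X server

# 1) upload to the GPU - the first copy across the bus
bgrx_gpu = cuda.mem_alloc(bgrx_host.nbytes)
cuda.memcpy_htod(bgrx_gpu, bgrx_host)

# 2) run the BGRX -> NV12 CSC kernel here (launch omitted), writing to nv12_gpu
nv12_host = numpy.zeros((H * 3 // 2, W), dtype=numpy.uint8)
nv12_gpu = cuda.mem_alloc(nv12_host.nbytes)

# 3) download the result - the second copy, which keeping the pixels on the GPU
#    (texture-from-pixmap, or NVFBC + NVENC) would let us avoid entirely
cuda.memcpy_dtoh(nv12_host, nv12_gpu)
```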
Change History (16)
comment:1 Changed 9 years ago by
Milestone: | future → 1.0 |
---|---|
Owner: | changed from Antoine Martin to Antoine Martin |
Status: | new → assigned |
comment:3 Changed 8 years ago by
Another API we could potentially use (maybe just on win32?) is NvIFROpenGL, for which there is zero documentation...
The only reference to it is this entry in the 319.49 driver changelog:
Added the NVIDIA OpenGL-based Inband Frame Readback (NvIFROpenGL) library to the Linux driver package. This library provides a high performance, low latency interface to capture and optionally encode an individual OpenGL framebuffer. NvIFROpenGL captures pixels rendered by OpenGL only and is ideally suited to application capture and remoting.
Although DRC seems to think it's not worth it:
I determined that the IFR stuff is not any faster than using PBOs
comment:5 Changed 6 years ago by
Milestone: | 0.17 → 1.0 |
---|---|
comment:8 Changed 6 years ago by
For win32 we now have an API we can use: #1317 "nvidia capture sdk support".
comment:9 Changed 5 years ago by
Milestone: | 2.0 → 2.1 |
---|---|
comment:10 Changed 5 years ago by
Milestone: | 2.1 → 2.2 |
---|---|
comment:12 Changed 5 years ago by
Milestone: | 2.2 → 2.1 |
---|---|
Boom! Done for NVFBC (#1317) + NVENC v8 (#1552) in r16458!
Improved in r16459, made the default in r16476.
Still TODO:
- we could keep the GPU buffer active even after doing the scrolling detection and downloading the pixels to the CPU side, just in case we decide to actually use a video encoder (and we could make the scrolling detection tighter when the GPU buffer is present)
- we could do the scrolling detection via CUDA on the GPU (see the sketch after this list)
- we could do lz4 / zlib / whatever compression on the GPU for small changes
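None of this exists yet; purely as a sketch of what CUDA-based scrolling detection could look like (the kernel and helper names are made up), we could compute one checksum per row on the device so that only a few kilobytes of hashes ever cross the bus, then compare the old and new row hashes on the host to find vertical scroll offsets:
```python
# Hypothetical sketch: per-row checksums computed on the GPU with pycuda.
import numpy
import pycuda.autoinit
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void row_checksums(const unsigned int *pixels, unsigned int *sums,
                              int row_words, int rows)
{
    int y = blockIdx.x * blockDim.x + threadIdx.x;
    if (y >= rows)
        return;
    const unsigned int *row = pixels + (size_t) y * row_words;
    unsigned int acc = 0;
    for (int x = 0; x < row_words; x++)
        acc = acc * 31u + row[x];       // cheap rolling hash of one row
    sums[y] = acc;
}
""")
row_checksums = mod.get_function("row_checksums")

def gpu_row_checksums(frame_gpu, width, height):
    # frame_gpu is a device pointer to BGRX pixels (one 32-bit word per pixel)
    sums = numpy.zeros(height, dtype=numpy.uint32)
    row_checksums(frame_gpu, cuda.Out(sums),
                  numpy.int32(width), numpy.int32(height),
                  block=(256, 1, 1), grid=((height + 255) // 256, 1))
    return sums     # compare against the previous frame's sums to find scrolls
```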
comment:13 Changed 5 years ago by
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
Will follow up in #1597. Closing at last!
comment:14 Changed 5 years ago by
Also done for Linux in r16492: needed new NVENC kernels as Linux uses a different pixel format (XRGB vs BGRX).
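For context, the two formats only differ in where each colour channel sits within the 32-bit pixel, which is enough to require separate CSC kernels; a toy illustration (these unpack helpers are made up, not the actual NVENC kernels):
```python
# Hypothetical illustration: BGRX and XRGB place the channels at different
# byte offsets, so a CSC kernel written for one reads garbage for the other.
def unpack_bgrx(pixel_bytes):
    # byte order in memory: B, G, R, padding
    b, g, r, _ = pixel_bytes
    return r, g, b

def unpack_xrgb(pixel_bytes):
    # byte order in memory: padding, R, G, B
    _, r, g, b = pixel_bytes
    return r, g, b

assert unpack_bgrx(bytes((1, 2, 3, 0))) == (3, 2, 1)
assert unpack_xrgb(bytes((0, 3, 2, 1))) == (3, 2, 1)
```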
comment:16 Changed 17 months ago by
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/365
Here's how I think this can work.
Note: it might be easier to test this using "xpra shadow" and a full display copy, since the "root window" never goes away and we already have code to override behaviour for the root window (see GTKRootWindowModel).
- window.get_image(x, y, w, h) in WindowSource: we can start copying the display pixels to a PBO (maybe even asynchronously!) using glReadPixels or glCopyTexImage2D, and return an image wrapper for the PBO (see the sketch below)
- driver.memcpy_htod in nvenc: we can just skip that part and instead use pycuda's GL functions to access the GL buffer (maybe it can be done as part of the NV12 CSC step anyway - even if we have to copy it to a CUDA-aligned buffer, this is no big deal)
Obviously, we'll also need fallback code for dealing with non-nvenc encoders, and lots of other little details I can't foresee.
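Here is a rough sketch of that PBO + CUDA/GL interop path using PyOpenGL and pycuda. Everything below is illustrative rather than actual xpra code: it assumes a current GL context, a CUDA context created for GL sharing (e.g. via pycuda.gl.make_context), and the exact PyOpenGL calling convention for reading into a bound PBO may need adjusting:
```python
# Sketch only: read the window pixels into a PBO, then hand that buffer to
# CUDA without any host round-trip, using pycuda's GL interop.
import ctypes
from OpenGL.GL import (glGenBuffers, glBindBuffer, glBufferData, glReadPixels,
                       GL_PIXEL_PACK_BUFFER, GL_STREAM_READ,
                       GL_BGRA, GL_UNSIGNED_BYTE)
import pycuda.gl as cuda_gl

def read_window_to_pbo(x, y, w, h):
    # allocate a PBO big enough for the BGRX pixels and start the readback
    pbo = glGenBuffers(1)
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo)
    glBufferData(GL_PIXEL_PACK_BUFFER, w * h * 4, None, GL_STREAM_READ)
    # with a PACK buffer bound, the last argument is an offset into the PBO,
    # so this call can return before the pixels have actually been copied:
    glReadPixels(x, y, w, h, GL_BGRA, GL_UNSIGNED_BYTE, ctypes.c_void_p(0))
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0)
    return pbo

def map_pbo_for_cuda(pbo):
    # register the GL buffer with CUDA and map it to get a device pointer
    # that the NV12 CSC kernel (or the NVENC input copy) could consume directly
    registered = cuda_gl.RegisteredBuffer(int(pbo))
    mapping = registered.map()
    dev_ptr, size = mapping.device_ptr_and_size()
    # the caller must call mapping.unmap() and registered.unregister() when done
    return registered, mapping, dev_ptr, size
```
For the non-nvenc fallback mentioned above, the same PBO could presumably be mapped on the CPU side (glMapBuffer) and handed to the existing image wrapper code.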