The easiest way I found of reproducing this is to shadow my main 5kx2k desktop from an Intel HD laptop; you get:
2015-08-11 18:22:59,856 do_paint_rgb24 error
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/client/window_backing_base.py", line 275, in do_paint_rgb24
    success = (self._backing is not None) and self._do_paint_rgb24(img_data, x, y, width, height, rowstride, options)
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 597, in _do_paint_rgb24
    return self._do_paint_rgb(24, img_data, x, y, width, height, rowstride, options)
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 610, in _do_paint_rgb
    self.gl_init()
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 391, in gl_init
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, self.texture_pixel_format, w, h, 0, self.texture_pixel_format, GL_UNSIGNED_BYTE, None)
  File "latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.__call__ (src/latebind.c:989)
  File "wrapper.pyx", line 318, in OpenGL_accelerate.wrapper.Wrapper.__call__ (src/wrapper.c:6561)
GLError: GLError(
  err = 1281,
  description = 'invalid value',
  baseOperation = glTexImage2D,
  pyArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArguments = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None)
)

Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 692, in gl_paint_planar
    self.gl_init()
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 391, in gl_init
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, self.texture_pixel_format, w, h, 0, self.texture_pixel_format, GL_UNSIGNED_BYTE, None)
  File "latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.__call__ (src/latebind.c:989)
  File "wrapper.pyx", line 318, in OpenGL_accelerate.wrapper.Wrapper.__call__ (src/wrapper.c:6561)
GLError: GLError(
  err = 1281,
  description = 'invalid value',
  baseOperation = glTexImage2D,
  pyArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArguments = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None)
)

Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gtk2/gl_window_backing.py", line 41, in gl_expose_event
    self.gl_init()
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 391, in gl_init
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, self.texture_pixel_format, w, h, 0, self.texture_pixel_format, GL_UNSIGNED_BYTE, None)
  File "latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.__call__ (src/latebind.c:989)
  File "wrapper.pyx", line 318, in OpenGL_accelerate.wrapper.Wrapper.__call__ (src/wrapper.c:6561)
OpenGL.error.GLError: GLError(
  err = 1281,
  description = 'invalid value',
  baseOperation = glTexImage2D,
  pyArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArguments = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 5760, 2160, 0, GL_RGB, GL_UNSIGNED_BYTE, None)
)
client_decode_error: no csc module found for YUV420P(3840x1440) to RGB or RGBX(5760x2160) in {'YUV422P': {'BGR': [codec_spec(swscale)], ...
The problem is that the codec spec has a hard-coded limit of 4k. Now that we run some startup checks on the codecs, we should just probe to find what the real limits are.
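For illustration, the "no csc module found" failure happens when every registered codec spec rejects the requested dimensions. A minimal sketch of that selection logic (the CodecSpec class and find_csc helper here are hypothetical stand-ins, not xpra's actual API):

```python
from dataclasses import dataclass

@dataclass
class CodecSpec:
    codec: str
    # the hard-coded maximum dimensions described above:
    max_w: int = 4096
    max_h: int = 4096

def find_csc(specs, width, height):
    """Return the first spec that can handle width x height, else None."""
    for spec in specs:
        if width <= spec.max_w and height <= spec.max_h:
            return spec
    return None

specs = [CodecSpec("swscale"), CodecSpec("cython")]
# 5760x2160 RGB output exceeds every spec's 4k limit, so no module is found:
assert find_csc(specs, 5760, 2160) is None
assert find_csc(specs, 3840, 1440) is not None
```

With limits like these baked into the specs, any window wider or taller than 4096 pixels falls through the whole list, which is exactly the client_decode_error above.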
Slight improvement in r10258 + r10259, now we get a more helpful message:
Error: cannot initialize RGB texture: 5760x2160 GLError( err=1281, description = 'invalid value', baseOperation = glTexImage2D )
More error logging improvements in r10260, r10262 and r10263. r10264 allows us to use the cython module above 4k (very slow, but it works - the 32-bit case could easily be optimized to access memory 32 bits at a time instead of 8 bits at a time, but until now this was mostly a fallback module, so not a priority). I will try to find the limits of swscale at runtime instead of hard-coding them to 4kx4k.
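One way to find the real limit at runtime is to bisect over a "does this size work?" predicate, where the predicate would attempt an actual swscale conversion (or glTexImage2D call) and report success. A sketch under that assumption, with a stand-in predicate instead of a real probe:

```python
def probe_max_dimension(can_handle, lo=256, hi=65536):
    """Binary-search the largest dimension for which can_handle(size) is True.

    can_handle must be monotonic: once a size fails, all bigger sizes fail too.
    """
    assert can_handle(lo), "even the minimum size failed"
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if can_handle(mid):
            lo = mid        # mid works, the limit is at least mid
        else:
            hi = mid - 1    # mid fails, the limit is below mid
    return lo

# stand-in for a real probe of the scaling library:
fake_limit = 16384
assert probe_max_dimension(lambda s: s <= fake_limit) == 16384
```

About 8 probe attempts cover the whole 256..65536 range, so this is cheap enough to run once at startup alongside the other codec self-tests.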
The remaining problem is the maximum texture size, which can be seen by running the gl_check utility (add the "-v" flag before r10265). This is what I get on an Intel laptop:
* from lspci: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
* from the Xorg log file: Integrated Graphics Chipset: Intel(R) HD Graphics 4000
* from the gl_check renderer info: Mesa DRI Intel(R) Ivybridge Mobile

And on an NVIDIA system:
* from lspci: VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
* from the Xorg log file: NVIDIA GPU GeForce GTX 970 (GM204-A) at PCI:1:0:0 (GPU-0)
* from the gl_check renderer info: GeForce GTX 970/PCIe/SSE2
We need to take this into account.
@afarr, more info:
Tested with 0.16.0 r10380, fedora 21 server, osx & windows 8.1 clients: window[3].size=(4859, 2093), window[3].size=(5942, 2088), and window[3].size=(6353, 645) across two monitors. Paintboxes showed orange, yellow, pink, and blue... no sign of rgb errors (or any others).
A window[3].size=(6353, 645) window created on the set of larger displays - again no sign of any errors, likewise at window[3].size=(4998, 2060).
Note, however, when I switched the connection back from the osx client with the smaller displays to the windows client, the oversized window displayed as window[3].size=(2560, 645), rather than the window[3].size=(6353, 645) that it had been displaying at before being passed back, then forth (no resizing occurred along the way on my part).
If I've understood the issue correctly, it looks like it should be ready to backport/close.
Stretched one past size of 4K monitor, no errors.
Then your graphics card supports textures bigger than 4k and you're not testing this new code. Run the gl check tool to see the card's limits.
It is possible that some intel GPUs support textures bigger than 4k. Mine does not.
But since you're not seeing adverse effects either, I guess those changes don't break anything, backports to 0.15.x: r10339, r10400, r10401.
Running the GL check tool, I get the following:
2015-09-02 15:50:26,420 OpenGL properties:
2015-09-02 15:50:26,420 * GLU extensions : GL_EXT_bgra
2015-09-02 15:50:26,420 * GLU version : 1.2.2.0 Microsoft Corporation
2015-09-02 15:50:26,420 * display_mode : DOUBLE
2015-09-02 15:50:26,420 * gdkgl.version : 6.2
2015-09-02 15:50:26,420 * gdkglext.version : 1.2.0
2015-09-02 15:50:26,420 * gtkglext.version : 1.2.0
2015-09-02 15:50:26,420 * has_alpha : True
2015-09-02 15:50:26,420 * opengl : 4.0
2015-09-02 15:50:26,420 * pygdkglext.version : 1.0.0
2015-09-02 15:50:26,420 * pyopengl : 3.1.0
2015-09-02 15:50:26,420 * renderer : Intel(R) HD Graphics 4000
2015-09-02 15:50:26,420 * rgba : True
2015-09-02 15:50:26,420 * safe : True
2015-09-02 15:50:26,420 * shading language version : 4.00 - Build 10.18.10.3345
2015-09-02 15:50:26,420 * texture-size-limit : 16384
2015-09-02 15:50:26,420 * vendor : Intel
2015-09-02 15:50:26,420 * zerocopy : True
I take it that that texture-size-limit is significantly bigger than 4K?
Running a quick check on my other machine (before trying to swap monitors), I see that it also lists a texture-size-limit of 16384.
I'm not even going to bother to check my old osx 10.6.8 which doesn't support OpenGL at all.
If I find a machine that doesn't support it, I'll run the test (once I get the chance to lug a 4K monitor over to said mythical machine).
Looks like this particular HD 4000 does support 16k.
once I get the chance to lug a 4K monitor over to said mythical machine
I suspect that it is going to be mostly cheap laptops like mine, so you can move the laptop instead of lugging the monitor around!
FWIW: my OSX virtual machines support opengl starting with 10.6.x (just without transparency):
$ /Volumes/Xpra/Xpra.app/Contents/Helpers/OpenGL_check
OpenGL glEnablei is not available, disabling transparency
OpenGL_accelerate module loaded
OpenGL properties:
* GLU extensions :
* GLU version : 1.3 MacOSX
* display_mode : SINGLE
* gdkgl.version : 1.0
* gdkglext.version : 1.2.0
* gtkglext.version : 1.2.0
* has_alpha : True
* opengl : 2.1
* pygdkglext.version : 1.0.0
* pyopengl : 3.1.1a1
* renderer : Apple Software Renderer
* rgba : True
* safe : True
* shading language version : 1.20
* texture-size-limit : 16384
* vendor : Apple Computer, Inc.
* zerocopy : True
r10538 now also takes into account the maximum viewport dimensions and the scaling, if any (#976). The viewport tells us how big we can make the windows (it is always larger than the texture size).
When we use scaling, we can paint the pixels to a texture smaller than the window, since the opengl hardware will scale it up to put it on screen.
So, even my cheap-and-nasty laptop can now render windows up to 16k by using scaling:
$ ./xpra/client/gl/gl_check.py | egrep "texture|viewport"
* max-viewport-dims : (16384, 16384)
* texture-size-limit : 4096
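The arithmetic behind painting to a smaller texture can be sketched as follows (render_size is a hypothetical helper, not xpra's actual code): pick the smallest downscale that brings both dimensions under the texture limit, and let the GPU stretch the result back up when compositing.

```python
import math

def render_size(window_w, window_h, texture_limit):
    """Return the backing texture size for a window: unchanged if the window
    already fits, otherwise downscaled so the largest side equals the limit."""
    scale = max(window_w / texture_limit, window_h / texture_limit, 1.0)
    return math.ceil(window_w / scale), math.ceil(window_h / scale)

assert render_size(2048, 1024, 4096) == (2048, 1024)   # fits: no scaling
assert render_size(16384, 1200, 4096) == (4096, 300)   # 4x downscale
```

So with a 4096 texture limit and a 16384 viewport, a window can be four times wider than the texture before the viewport itself becomes the ceiling.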
@afarr: once you locate a system with low enough texture limits, you can check that scaling allows you to make windows bigger than the texture-size-limit.
Note: it is possible that the crashes reported in #976 were due to windows getting too big when we scale up repeatedly - testing should be able to tell. Compositors often render windows onto an opengl surface before they layer it on screen. If that's the case, then the 8x limit is inadequate and we need to use the "texture-size-limit" instead.
Ok, I finally found a cheap orphaned laptop to test with.
Installed xpra 0.16.0 r11058 on the fedora 21 client, against a 0.16.0 r11031 fedora 21 server.
OpenGL properties:
* GLU extensions : GLU_EXT_nurbs_tessellator GLU_EXT_object_space_tess
* GLU version : 1.3
* display_mode : ALPHA, SINGLE
* gdkgl.version : 1.4
* gdkglext.version : 1.2.0
* gtkglext.version : 1.2.0
* has_alpha : True
* max-viewport-dims : (16384, 16384)
* opengl : 3.0
* pygdkglext.version : 1.1.0
* pyopengl : 3.1.0
* renderer : Mesa DRI Intel(R) Ivybridge Mobile
* rgba : True
* safe : True
* shading language version : 1.30
* texture-size-limit : 4096
* vendor : Intel Open Source Technology Center
* zerocopy : True
Unfortunately, it doesn't seem to be able/willing to handle the 4k monitor as a 4k.
2015-10-28 10:51:05,836 desktop size is 3286x1200 with 1 screen(s):
2015-10-28 10:51:05,836 :0.0 (869x318 mm - DPI: 96x95) workarea: 3286x741 at 0x459
2015-10-28 10:51:05,836 HDMI1 1920x1200 (621x341 mm - DPI: 78x89)
2015-10-28 10:51:05,836 LVDS1 1366x768 at 1920x432 (309x174 mm - DPI: 112x112)
2015-10-28 10:51:05,836 scaled using 1.5 x 1.5 to:
2015-10-28 10:51:05,837 :0.0 (869x318 mm - DPI: 64x63) workarea: 2190x494 at 0x306
2015-10-28 10:51:05,837 HDMI1 1280x800 (621x341 mm - DPI: 52x59)
2015-10-28 10:51:05,837 LVDS1 910x512 at 1280x288 (309x174 mm - DPI: 74x74)
2015-10-28 10:51:06,238 Xpra X11 server version 0.16.0-r11031
And likewise, the server's notion of desktop size:
2015-10-28 10:51:05,809 client root window size is 2190x800 with 1 displays:
2015-10-28 10:51:05,809 :0.0 (869x318 mm - DPI: 64x63) workarea: 2190x494 at 0x306
2015-10-28 10:51:05,809 HDMI1 1280x800 (621x341 mm - DPI: 52x59)
2015-10-28 10:51:05,809 LVDS1 910x512 at 1280x288 (309x174 mm - DPI: 74x74)
2015-10-28 10:51:05,939 server virtual display now set to 3120x1050 (best match for 2190x800)
As a result, stretching an xterm to cover the entire desktop only got it to window[1].size=(2167, 758).
So I went searching and found a patchwork of cables to attach a third display... which pushed me past 4K of desktop real estate, but the third display was just black (well, it had some stylish horizontal stripes too, but those weren't of much use):
2015-10-28 11:16:24,726 desktop size is 4566x1200 with 1 screen(s):
2015-10-28 11:16:24,726 :0.0 (1208x318 mm - DPI: 96x95) workarea: 4566x1173 at 0x27
2015-10-28 11:16:24,726 HDMI1 1920x1200 (621x341 mm - DPI: 78x89)
2015-10-28 11:16:24,726 LVDS1 1366x768 at 1920x432 (309x174 mm - DPI: 112x112)
2015-10-28 11:16:24,726 DVI-1-0 1280x720 at 3286x397 (597x336 mm - DPI: 54x54)
2015-10-28 11:16:24,726 scaled using 1.5 x 1.5 to:
2015-10-28 11:16:24,727 :0.0 (1208x318 mm - DPI: 64x63) workarea: 3044x782 at 0x18
2015-10-28 11:16:24,727 HDMI1 1280x800 (621x341 mm - DPI: 52x59)
2015-10-28 11:16:24,727 LVDS1 910x512 at 1280x288 (309x174 mm - DPI: 74x74)
2015-10-28 11:16:24,727 DVI-1-0 853x480 at 2190x264 (597x336 mm - DPI: 36x36)
xpra info gave this info about desktop size:
client.desktop_size=(3044, 800)
client.desktop_size.unscaled=(4566, 1200)
Stretching one of the xterms clumsily across the monitors as best I could, I managed to get it to window[1].size=(4087, 212) before it refused to be enlarged any further.
No sign of any errors client side, and the only thing I'm seeing server side is a number of Uh-oh sizing messages:
2015-10-28 12:31:16,734 Uh-oh, our size doesn't fit window sizing constraints: 4092x219 vs 4087x212
2015-10-28 12:31:16,735 Uh-oh, our size doesn't fit window sizing constraints: 492x308 vs 487x303
Oddly, stepping away for a little while at this point, it looks like the stretched xterm disappeared (no sign of anything in any logs, so maybe it just got lost on the failing third monitor? Just mentioning to be thorough).
Stretching another xterm to the same size (4087, 212) and then scaling back to 96 DPI, I get these dimensions on the client:
2015-10-28 12:50:05,280 sending updated screen size to server: 4566x1200 with 1 screens
2015-10-28 12:50:05,280 :0.0 (1208x318 mm - DPI: 96x95) workarea: 4566x1173 at 0x27
2015-10-28 12:50:05,280 HDMI1 1920x1200 (621x341 mm - DPI: 78x89)
2015-10-28 12:50:05,281 LVDS1 1366x768 at 1920x432 (309x174 mm - DPI: 112x112)
2015-10-28 12:50:05,281 DVI-1-0 1280x720 at 3286x397 (597x336 mm - DPI: 54x54)
... but the scaling seems to have returned the xterm to a size that fits one of the monitors (luckily not the dysfunctional one), at (1915, 56), with no sign of errors client or server side.
window[2].size=(1915, 56)
window[2].size-constraints.base-size=(19, 4)
window[2].size-constraints.gravity=1
window[2].size-constraints.increment=(6, 13)
window[2].size-constraints.minimum-size=(25, 17)
window[2].size-constraints.size=(499, 316)
Resizing it back to ginormous lengths:
window[2].size=(4093, 56)
window[2].size-constraints.base-size=(19, 4)
window[2].size-constraints.gravity=1
window[2].size-constraints.increment=(6, 13)
window[2].size-constraints.minimum-size=(25, 17)
window[2].size-constraints.size=(499, 316)
Scaling "up" again (shift-alt-minus ... is that up or down?), I see this on server side (still no errors):
2015-10-28 13:00:34,618 server virtual display now set to 8192x4096 (best match for 6849x1800)
2015-10-28 13:00:34,621 received updated display dimensions
2015-10-28 13:00:34,621 client root window size is 6849x1800 with 1 displays:
2015-10-28 13:00:34,622 :0.0 (1208x318 mm - DPI: 144x143) workarea: 6849x1759 at 0x40
2015-10-28 13:00:34,622 HDMI1 2880x1800 (621x341 mm - DPI: 117x134)
2015-10-28 13:00:34,622 LVDS1 2049x1152 at 2880x648 (309x174 mm - DPI: 168x168)
2015-10-28 13:00:34,622 DVI-1-0 1920x1080 at 4929x595 (597x336 mm - DPI: 81x81)
2015-10-28 13:00:34,904 Uh-oh, our size doesn't fit window sizing constraints: 1272x207 vs 1267x199
2015-10-28 13:00:34,909 Uh-oh, our size doesn't fit window sizing constraints: 2046x51 vs 2041x43
2015-10-28 13:00:34,915 DPI set to 144 x 144
2015-10-28 13:00:34,918 sent updated screen size to 1 client: 8192x4096 (max 8192x4096)
Resizing the xterm to ridiculous dimensions again (4087, 43) gives more uh-ohh warnings, but otherwise no errors:
2015-10-28 13:07:10,930 Uh-oh, our size doesn't fit window sizing constraints: 3480x51 vs 3475x43
2015-10-28 13:07:11,032 Uh-oh, our size doesn't fit window sizing constraints: 3696x51 vs 3691x43
2015-10-28 13:07:11,132 Uh-oh, our size doesn't fit window sizing constraints: 3864x51 vs 3859x43
2015-10-28 13:07:11,235 Uh-oh, our size doesn't fit window sizing constraints: 3900x51 vs 3895x43
2015-10-28 13:07:11,577 Uh-oh, our size doesn't fit window sizing constraints: 3918x51 vs 3913x43
2015-10-28 13:07:11,669 Uh-oh, our size doesn't fit window sizing constraints: 3930x51 vs 3925x43
2015-10-28 13:07:11,766 Uh-oh, our size doesn't fit window sizing constraints: 4092x51 vs 4087x43
Scaling again, I do get a message about a Xinerama workaround server side:
2015-10-28 13:09:52,287 temporarily switching to 6400x4096 as a Xinerama workaround
2015-10-28 13:09:52,397 server virtual display now set to 8192x4096 (best match for 8192x2152)
2015-10-28 13:09:52,398 received updated display dimensions
2015-10-28 13:09:52,398 client root window size is 8192x2152 with 1 displays:
2015-10-28 13:09:52,399 :0.0 (1208x318 mm - DPI: 172x171) workarea: 8192x2104 at 0x48
2015-10-28 13:09:52,399 HDMI1 3444x2152 (621x341 mm - DPI: 140x160)
2015-10-28 13:09:52,399 LVDS1 2450x1377 at 3444x775 (309x174 mm - DPI: 201x201)
2015-10-28 13:09:52,400 DVI-1-0 2296x1291 at 5895x712 (597x336 mm - DPI: 97x97)
2015-10-28 13:09:52,614 Uh-oh, our size doesn't fit window sizing constraints: 1266x204 vs 1261x199
2015-10-28 13:09:52,621 Uh-oh, our size doesn't fit window sizing constraints: 2445x41 vs 2443x30
2015-10-28 13:09:52,627 DPI set to 172 x 172
And I get a scaling warning client side:
2015-10-28 13:09:52,481 Warning: cannot scale by 0.444444444444 x 0.444444444444 or lower
2015-10-28 13:09:52,482 the scaled client screen 4566 x 1200 -> 10273 x 2700
2015-10-28 13:09:52,482 would overflow the server's screen: 8192 x 4096
2015-10-28 13:09:52,482 using 0.557373046875 x 0.557373046875 -> 8192 x 2152
2015-10-28 13:09:52,485 sending updated screen size to server: 8192x2152 with 1 screens
2015-10-28 13:09:52,485 :0.0 (1208x318 mm - DPI: 172x171) workarea: 8192x2104 at 0x48
2015-10-28 13:09:52,485 HDMI1 3444x2152 (621x341 mm - DPI: 140x160)
2015-10-28 13:09:52,485 LVDS1 2450x1377 at 3444x775 (309x174 mm - DPI: 201x201)
2015-10-28 13:09:52,486 DVI-1-0 2296x1291 at 5895x712 (597x336 mm - DPI: 97x97)
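The clamping in that warning is simple arithmetic: the server-side screen is the client screen divided by the scale factor, so the scale cannot drop below client_size / server_max in either dimension. A sketch with hypothetical helper names, reproducing the numbers from the log:

```python
def clamp_scale(desired, client_size, server_max):
    """The server-side screen is client_size / scale, so the scale cannot
    go below client_size / server_max in either dimension."""
    cw, ch = client_size
    mw, mh = server_max
    return max(desired, cw / mw, ch / mh)

# the values from the log: 0.444... is rejected,
# 4566/8192 = 0.557373046875 is used instead:
scale = clamp_scale(4 / 9, (4566, 1200), (8192, 4096))
assert scale == 0.557373046875
```

Both 4566 and 8192 divide to an exact binary fraction here, which is why the log shows the long but exact value 0.557373046875.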
Further attempts to scale down are ignored by server/client.
I'll leave it at this, as this laptop is nearly out of juice now. It looks like the code is all working as expected, but let me know if I missed something.
Unfortunately, it doesn't seem to be able/willing to handle the 4k monitor as a 4k.
Could be an OS option? (DPI, etc?)
What OS is this running?
Stretching one of the xterms clumsily across the monitors as best I could, I managed to get it to window[1].size=(4087, 212) before it refused to be enlarged any further.
Good. That's the new limits code doing its thing.
To bypass that for testing, you can set:
XPRA_WIN32_WINDOW_HOOKS=0
XPRA_WIN32_MAX_SIZE_HINT=0
This should allow you to make the windows bigger than they should be. Then who knows what will happen...
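Such XPRA_* switches are plain boolean environment variables; a minimal reader could look like this (xpra ships a similar envbool helper, but treat the exact semantics here as an assumption):

```python
import os

def envbool(name, default=True):
    """Read an XPRA_* style on/off switch from the environment."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() not in ("0", "false", "no", "off")

os.environ["XPRA_WIN32_MAX_SIZE_HINT"] = "0"
assert envbool("XPRA_WIN32_MAX_SIZE_HINT") is False   # hint disabled
assert envbool("XPRA_WIN32_WINDOW_HOOKS") is True     # unset: default on
```

Setting the variable to 0 before launching the client is all it takes; anything other than an explicit "off" value leaves the hooks enabled.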
Note: you don't necessarily need 3 monitors to do that. You can just enlarge the window, move some of it off-screen to the side, resize it some more, move it... until you make it big enough.
Could be an OS option? (DPI, etc?)
What OS is this running?
It's actually fedora 21 client against fedora 21 server... but I think that the HDMI cable I found is failing to send enough data (much like what happens with a DVI cable adapted to an HDMI for a 2560x1440 display which is only supported at a 720x540-ish resolution or so).
In any case, installing 0.15.2 r9769 on both client and server, once I stretch a window to window[2].size=(4489, 316), I begin seeing the errors you posted client side:
2015-10-29 15:53:33,291 do_paint_rgb24 error
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/client/window_backing_base.py", line 274, in do_paint_rgb24
    success = (self._backing is not None) and self._do_paint_rgb24(img_data, x, y, width, height, rowstride, options)
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 558, in _do_paint_rgb24
    return self._do_paint_rgb(24, img_data, x, y, width, height, rowstride, options)
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 593, in _do_paint_rgb
    self.gl_init()
  File "/usr/lib64/python2.7/site-packages/xpra/client/gl/gl_window_backing_base.py", line 394, in gl_init
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, self.texture_pixel_format, w, h, 0, self.texture_pixel_format, GL_UNSIGNED_BYTE, None)
  File "latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.__call__ (src/latebind.c:989)
  File "wrapper.pyx", line 318, in OpenGL_accelerate.wrapper.Wrapper.__call__ (src/wrapper.c:6561)
GLError: GLError(
  err = 1281,
  description = 'invalid value',
  baseOperation = glTexImage2D,
  pyArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 4489, 316, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArgs = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 4489, 316, 0, GL_RGB, GL_UNSIGNED_BYTE, None),
  cArguments = (GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, 4489, 316, 0, GL_RGB, GL_UNSIGNED_BYTE, None)
)
It looks like all is behaving as expected. Did you want me to really go crazy with those windows environment variables and see if I can get a window to stretch large enough to exceed the 16384 texture-size limit? (I suppose I can give it a try, just to see...)
In the meantime, passing this back, again.
That will do.
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/942