Using xpra v0.16.0 build, Ubuntu 14.04 (attached bug zip)
It works perfectly on the machine I have with an Nvidia card. But on the machine with just an Intel graphics card, the window will show, but the contents don't render (see screenshot).
See xpra.log for the error log from the xpra client, and the :100.log for the xpra server.
xpra bug dump
xpra server log
xpra client log
First good catch (not solving this bug though):
failed to import swscale colorspace conversion (csc_swscale) No module named dec_avcodec2.decoder
is fixed in r11146.
Next we have lots of:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/xpra/client/gl/gtk2/gl_window_backing.py", line 41, in gl_expose_event
    self.gl_init()
  File "/usr/lib/python2.7/dist-packages/xpra/client/gl/gl_window_backing_base.py", line 401, in gl_init
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAX_LEVEL, 0)
  File "/usr/lib/python2.7/dist-packages/OpenGL/error.py", line 208, in glCheckError
    baseOperation = baseOperation,
OpenGL.error.GLError: GLError(
    err = 1281,
    description = 'invalid value',
    baseOperation = glTexParameteri,
    cArguments = (GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAX_LEVEL, 0,)
)
And rgb24 / rgb32 paint errors:
2015-11-05 08:10:34,584 do_paint_rgb32 error
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/xpra/client/window_backing_base.py", line 309, in do_paint_rgb32
    success = (self._backing is not None) and self._do_paint_rgb32(img_data, x, y, width, height, rowstride, options)
  File "/usr/lib/python2.7/dist-packages/xpra/client/gl/gl_window_backing_base.py", line 625, in _do_paint_rgb32
    return self._do_paint_rgb(32, img_data, x, y, width, height, rowstride, options)
  File "/usr/lib/python2.7/dist-packages/xpra/client/gl/gl_window_backing_base.py", line 641, in _do_paint_rgb
    self.gl_init()
  File "/usr/lib/python2.7/dist-packages/xpra/client/gl/gl_window_backing_base.py", line 401, in gl_init
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAX_LEVEL, 0)
  File "/usr/lib/python2.7/dist-packages/OpenGL/error.py", line 208, in glCheckError
    baseOperation = baseOperation,
GLError: GLError(
    err = 1281,
    description = 'invalid value',
    baseOperation = glTexParameteri,
    cArguments = (GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAX_LEVEL, 0,)
)
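For reference, the err = 1281 in those tracebacks is 0x0501, i.e. GL_INVALID_VALUE. The standard GL error enums can be decoded in plain Python without a GL context:

```python
# Standard OpenGL error codes (from the GL specification); 1281 == 0x0501.
GL_ERROR_NAMES = {
    0x0500: "GL_INVALID_ENUM",
    0x0501: "GL_INVALID_VALUE",
    0x0502: "GL_INVALID_OPERATION",
    0x0503: "GL_STACK_OVERFLOW",
    0x0504: "GL_STACK_UNDERFLOW",
    0x0505: "GL_OUT_OF_MEMORY",
}

def gl_error_name(err):
    """Map a numeric glGetError / GLError code to its enum name."""
    return GL_ERROR_NAMES.get(err, "unknown error 0x%04x" % err)

print(gl_error_name(1281))   # GL_INVALID_VALUE
```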
Setting GL_TEXTURE_MAX_LEVEL=0 is actually the recommended way of turning off mipmaps in OpenGL, see https://www.opengl.org/wiki/Common_Mistakes#Creating_a_complete_texture.
Sounds similar to this bug: Setting GL_TEXTURE_MAX_LEVEL to 0 fires GL_INVALID_OPERATION.
I'll try to reproduce this, but it looks like a buggy driver to me. I'll take a look when I get a chance; in the meantime, you can help by providing the output of the gl_check.py script.
No problem at all:
➜ /tmp ./gl_check.py
PyOpenGL warning: missing accelerate module
PyOpenGL warning: missing array format handlers: numeric, vbo, vbooffset
OpenGL Version: 3.0 Mesa 10.1.3
OpenGL properties:
* GLU extensions : GLU_EXT_nurbs_tessellator GLU_EXT_object_space_tess
* GLU version : 1.3
* display_mode : ALPHA, SINGLE
* gdkgl.version : 1.4
* gdkglext.version : 1.2.0
* gtkglext.version : 1.2.0
* has_alpha : True
* max-viewport-dims : (16384, 16384)
* opengl : 3.0
* pygdkglext.version : 1.1.0
* pyopengl : 3.0.2
* renderer : Mesa DRI Intel(R) Haswell Mobile
* rgba : True
* safe : True
* shading language version : 1.30
* texture-size-limit : 4096
* vendor : Intel Open Source Technology Center
* zerocopy : False
Not sure if this helps - I dug into which driver I was using:
sudo lshw -c video
[sudo] password for markmandel:
PCI (sysfs)
  *-display
       description: VGA compatible controller
       product: Haswell-ULT Integrated Graphics Controller
       vendor: Intel Corporation
       physical id: 2
       bus info: pci@0000:00:02.0
       version: 0b
       width: 64 bits
       clock: 33MHz
       capabilities: msi pm vga_controller bus_master cap_list rom
       configuration: driver=i915 latency=0
       resources: irq:63 memory:d0000000-d03fffff memory:c0000000-cfffffff ioport:3000(size=64)
I went digging around and found this excerpt; not sure if it's relevant, but I'm hoping it is: http://www.intel.com/content/dam/www/public/us/en/documents/guides/efi-bios-10-3-1-users-guide.pdf
7.6.10.2 OpenGL Use Considerations
Allocation of Mipmaps and Memory Usage
Under normal circumstances the OpenGL driver will allocate all mip levels for a texture at allocation time. This is due to the fact that the OpenGL API allows an application to make use of the mips without first conveying an intention to do so. All mips are therefore available all the time. The IEGD OpenGL driver has a special-case scenario to prevent the allocation of mips when the application can ensure that they will never be populated or used. On some hardware configurations this can save 50% on texture memory usage. To enable this feature, the application should do the following:
* Using glTexParameter*(), set the GL_TEXTURE_MAX_LEVEL parameter to 0 before populating the texture (before any call to glTexImage2D()). This will prevent mips 1-N from being allocated but will not prevent them from being used. If the mips are inadvertently used, the results are undefined.
* set the level parameter on TEXTURE_2D
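To put the memory claim above in perspective: for a square 2D texture, the full mip chain only adds roughly a third on top of level 0, so the 50% figure presumably reflects hardware allocation granularity rather than raw texel counts. A quick back-of-the-envelope check (just the arithmetic, not taken from any driver):

```python
def mip_chain_texels(w, h):
    """Total texels in levels 0..N of a full mipmap chain for a w x h texture."""
    total = 0
    while True:
        total += w * h
        if w == 1 and h == 1:
            break
        w = max(1, w // 2)
        h = max(1, h // 2)
    return total

base = 256 * 256                     # level 0 only (GL_TEXTURE_MAX_LEVEL = 0)
full = mip_chain_texels(256, 256)    # all levels allocated
print(full / float(base))            # ~1.333: the chain costs about 1/3 extra
```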
That's odd: we do set GL_TEXTURE_MAX_LEVEL before using the textures, and that's exactly the call it is failing on. Maybe with this older driver, we shouldn't be doing it?
r11150 now also sets GL_TEXTURE_BASE_LEVEL, maybe it will help.
It's not really clear to me whether the invalid value is GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MAX_LEVEL, or 0.
Although the glTexParameter docs specify that the target (the target texture of the active texture unit) must be either GL_TEXTURE_2D or GL_TEXTURE_CUBE_MAP, we actually use GL_TEXTURE_RECTANGLE_ARB; then again, there are lots of OpenGL examples out there doing the exact same thing.
@markmandel: can you try the latest trunk?
And if that doesn't work, maybe try the patch attached above? Or just remove all those GL_TEXTURE_MAX_LEVEL / GL_TEXTURE_BASE_LEVEL calls.
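If outright removing the calls feels too blunt, another option would be to keep them but tolerate the GL_INVALID_VALUE these drivers raise. A minimal sketch of that idea, with stubbed GL entry points and constants (this is not xpra's actual code):

```python
GL_INVALID_VALUE = 0x0501        # err = 1281 in the tracebacks above
GL_TEXTURE_BASE_LEVEL = 0x813C
GL_TEXTURE_MAX_LEVEL = 0x813D

class GLError(Exception):
    """Stand-in for OpenGL.error.GLError, carrying the numeric error code."""
    def __init__(self, err):
        self.err = err

def set_texture_levels(tex_parameteri, target):
    """Try to pin the texture to mip level 0; tolerate drivers that reject it.

    tex_parameteri is injected (e.g. glTexParameteri) so the logic can be
    shown and tested without a live GL context.
    """
    ok = True
    for pname in (GL_TEXTURE_BASE_LEVEL, GL_TEXTURE_MAX_LEVEL):
        try:
            tex_parameteri(target, pname, 0)
        except GLError as e:
            if e.err != GL_INVALID_VALUE:
                raise          # a different error is still a real problem
            ok = False         # buggy driver: skip the mipmap hint
    return ok
```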
Sure - will take it for a spin!
This isn't anything special, just the normal xserver-xorg-video-intel driver.
If the patches don't work, I'll give xorg-edgers a try, and see how that fares as well.
This could also be due to the outdated version of PyOpenGL found in Ubuntu Trusty; you could try replacing the system one with one you install from upstream: https://pypi.python.org/pypi/PyOpenGL (you can just sudo easy_install PyOpenGL it).
@markmandel: does the new build and/or a new pyopengl help?
The same bug has been reported with Trusty and AMD cards: ticket:1036#comment:16
Not sure how to deal with this now: maybe pyopengl is too old, or maybe we should just not support Trusty at all with 0.16.
As asked in ticket:1043#comment:5, here comes the chipset details:
$ python /usr/lib/python2.7/dist-packages/xpra/client/gl/gl_check.py
PyOpenGL warning: missing accelerate module
PyOpenGL warning: missing array format handlers: numeric, vbo, vbooffset
OpenGL Version: 3.0 Mesa 10.1.3
OpenGL properties:
* GLU extensions : GLU_EXT_nurbs_tessellator GLU_EXT_object_space_tess
* GLU version : 1.3
* display_mode : ALPHA, SINGLE
* gdkgl.version : 1.4
* gdkglext.version : 1.2.0
* gtkglext.version : 1.2.0
* has_alpha : True
* max-viewport-dims : (16384, 16384)
* opengl : 3.0
* pygdkglext.version : 1.1.0
* pyopengl : 3.0.2
* renderer : Gallium 0.4 on AMD CAICOS
* rgba : True
* safe : True
* shading language version : 1.30
* texture-size-limit : 16384
* transparency : True
* vendor : X.Org
* zerocopy : False
$ sudo lshw -c video
SCSI
  *-display
       description: VGA compatible controller
       product: Caicos [Radeon HD 6450/7450/8450 / R5 230 OEM]
       vendor: Advanced Micro Devices, Inc. [AMD/ATI]
       physical id: 0
       bus info: pci@0000:01:00.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
       configuration: driver=radeon latency=0
       resources: irq:42 memory:d0000000-dfffffff memory:fddc0000-fdddffff ioport:ee00(size=256) memory:fdd00000-fdd1ffff
Closing as fixed:
Feel free to re-open if you can reproduce with a recent version on default settings.
See wiki/ClientRendering/OpenGL
this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1024