Xpra: Ticket #1662: server memory leak

It looks like there's a memory leak somewhere. After running firefox for 19 days, xpra was using over 8GB RAM.

$ ps -eo rss,etime,cmd | grep :64
8912696 19-06:49:25 /usr/bin/python /usr/bin/xpra --bind-tcp= --no-daemon --tcp-auth=file:filename=/home/nathan/.winswitch/server/sessions/64/session.pass --systemd-run=no start :64
38588 19-06:49:23 /usr/lib/xorg/Xorg-for-Xpra-:64 -noreset -novtswitch -nolisten tcp +extension GLX +extension RANDR +extension RENDER -auth /home/nathan/.Xauthority -logfile /run/user/1002/xpra/Xorg.:64.log -configdir /home/nathan/.xpra/xorg.conf.d -config /etc/xpra/xorg.conf -depth 24 :64
$ xpra info :64 | grep memory

Ubuntu 17.04 x64, xpra 2.1.2-r16903

Currently running glxgears with XPRA_DETECT_LEAKS=1. Anything else I can do to help track this down?

Tue, 17 Oct 2017 01:40:34 GMT - Antoine Martin: owner changed

Can you try turning off as many features as you can (sound forwarding, etc) to see if that helps? Do you need to have any screen activity to trigger it? Does the window have to be shown? Or does it leak no matter what? Reproducing with glxgears would help. It would also be useful to see if using mmap (local connection) still leaks. Another interesting test would be to run the server with XPRA_SCROLL_ENCODING=0 xpra start ... and see if that helps.

Fri, 03 Nov 2017 12:58:11 GMT - Antoine Martin: owner, status changed

I can reproduce it with gtkperf -a in a loop.

Sat, 04 Nov 2017 17:21:59 GMT - Antoine Martin:

Watching the server memory usage with xpra info | grep server.maxrss=, then running ./tests/xpra/test_apps/simulate_console_user.py in an xterm, the value goes up steadily by roughly 0.2 to 2KB/s. This also happens with mmap. When re-connecting with a new client, the increase only resumes once memory usage has passed the point where it left off when the previous client disconnected.
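The measurement loop described above can be sketched as follows; the display number, the 60s interval and the helper name are assumptions for illustration, not from the ticket:

```shell
# Polling loop (not run here):
#   while true; do xpra info :64 | grep server.maxrss= ; sleep 60; done >> maxrss.log
# Helper to strip the numeric KB value out of one sampled line:
parse_maxrss() {
    sed -n 's/^server\.maxrss=\([0-9][0-9]*\).*$/\1/p'
}
```

Diffing successive samples in the log then gives the leak rate directly.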

First I had to fix the memleak debugging (XPRA_DETECT_MEMLEAKS=1 xpra start ..), which broke with (newer versions of?) numpy: r17300. (r17302 also helps debugging.)

Then I found a leak in the protocol layer, so "xpra info" itself would leak yet more memory while I was trying to find the real leak... fixed in r17299. And then another leak in the window source class, fixed in r17301.

Both of those should be backported. I'll let it run for a few more hours to see if there are more leaks to be found...

Sat, 04 Nov 2017 17:22:52 GMT - Antoine Martin: attachment set

example of patch to enable memleak debugging for the classes that seemed to cause problems

Sun, 05 Nov 2017 12:00:18 GMT - Antoine Martin: priority changed

There are still some small leaks, so:

What makes this particularly difficult is that the leak debugging slows things down dramatically and blocks the main thread, so it can cause things to get backed up so much that they look like leaks when they're not.

Another problem is the "traceback reference cycle problem" (exception leaks in Python 2 and 3).
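The traceback reference cycle problem can be illustrated with a small sketch (the names below are made up for the example): storing a caught exception's traceback keeps every frame on it alive, and those frames in turn reference the stored traceback, so large locals survive far longer than expected.

```python
import sys

leaked = []

def handler():
    big = bytearray(10 ** 6)  # stand-in for expensive per-frame state
    try:
        raise ValueError("boom")
    except ValueError:
        # keeping exc_info() pins the handler frame (and `big`)
        # via the traceback object:
        leaked.append(sys.exc_info())

handler()
tb = leaked[0][2]
assert "big" in tb.tb_frame.f_locals  # `big` is still reachable
# The fix is to drop the traceback reference as soon as possible
# (or call traceback.clear_frames() on it):
leaked.clear()
```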

And more importantly, we're still leaking somewhere as this gets printed every time the leak detection code runs (always exactly the same leak count):

leaks: count : object
      15 :                             cell :   1469 matches
      14 :                            tuple :   4117 matches
      13 :                            frame :   1017 matches
       2 :                             list :   4145 matches

Sun, 05 Nov 2017 18:20:01 GMT - Antoine Martin:

By turning off the ping feature, the leaks are reduced. Generating network traffic (ie: moving the mouse around) also seems to cause more leaking.

I suspect that this comes from the non-blocking socket timeouts, like this one, shown at debug level:

    <bound method SocketConnection.is_active of unix-domain socket:/run/user/1000/xpra/desktop-3>, \
    <bound method SocketConnection.can_retry of unix-domain socket:/run/user/1000/xpra/desktop-3>, \
    <built-in method recv of _socket.socket object at 0x7f6ab0585b90>, \
    (65536,), {}) timed out, retry=socket.timeout
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/xpra/net/bytestreams.py", line 101, in untilConcludes
    return f(*a, **kw)
timeout: timed out
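The retry-wrapper pattern the traceback points at can be sketched like this, loosely modelled on untilConcludes() in xpra/net/bytestreams.py (the signature below is a simplification for illustration, not xpra's actual API):

```python
import socket

def until_concludes(is_active, can_retry, f, *a, **kw):
    # keep retrying f() while the connection is active and the
    # failure is a retryable one (a non-blocking socket timeout)
    while is_active():
        try:
            return f(*a, **kw)
        except socket.timeout:
            if not can_retry():
                raise
```

Each timeout-and-retry iteration allocates exception and traceback objects, which is why frequent socket timeouts show up so prominently in the leak counts.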

Mon, 06 Nov 2017 08:31:05 GMT - Antoine Martin:

Lots of related changes:

The main leak is still there though...

Mon, 06 Nov 2017 13:38:30 GMT - Antoine Martin:

r17328 (+r17330 fixup) fixes a leak caused by logging. The alternative fix would have been to add a kwargs option to avoid tracking loggers when we know we're not going to re-use them.

Dumping all the cell objects (matched by type name, since there does not seem to be a Python type exposed for it), the recurring entries seem to be:

2017-11-06 17:31:17,250 [355] '<cell at 0x7f248ea09fa0: list object at 0x7f2486445518>': '[{\'__setattr__\': <slot wrapper \'__setattr__\ ..  124: <type \'set\'>}, (VideoSubregion(None),)]'
2017-11-06 17:31:17,250 [356] '<cell at 0x7f248ea09ef8: type object at 0x7f24bb122c60>': "<type 'frame'>"
2017-11-06 17:31:17,250 [357] '<cell at 0x7f248ea09e18: tuple object at 0x55c31868b020>': '(<frame object at 0x7f24bb20c790>, <frame objec .. 5c319abbd50>, <frame object at 0x7f2470003610>)'
2017-11-06 17:31:17,250 [358] '<cell at 0x7f248ea09ec0: dict object at 0x7f248e514050>': "{1: <type 'list'>, 4: <type 'cell'>}"

Not sure where they're from yet... could even be the leak debugging code itself.
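Matching cell objects by type name can be done through the gc module, roughly as sketched below (Python 2.7 did not expose a public `cell` type, hence the string match; the function names are illustrative):

```python
import gc

def find_by_type_name(name):
    # walk every gc-tracked object and filter on the type's name
    return [o for o in gc.get_objects() if type(o).__name__ == name]

def dump_cells(limit=10):
    cells = find_by_type_name("cell")
    for i, c in enumerate(cells[:limit]):
        print("[%i] %r" % (i, c))
    return cells
```

One caveat, as noted above: the leak-debugging code itself creates cells and frames, so some of the dumped entries may be self-inflicted.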

Tue, 07 Nov 2017 16:00:42 GMT - Antoine Martin:

Left "xpra info" running in a loop for 4 hours and those leaks are definitely gone. However, gtkperf -a still causes another leak - and a pretty big one. At least now we can measure things without causing further misleading leaks.

Fri, 10 Nov 2017 16:24:55 GMT - Antoine Martin: attachment set

show the lists that leak and their backref (applies to r17356)

Fri, 10 Nov 2017 16:37:40 GMT - Antoine Martin:

More improvements:

This is hard...

Sat, 11 Nov 2017 04:57:13 GMT - Antoine Martin: owner, status changed

Related improvements: r17358 + r17360: avoid churn

I think the leaks are gone (at least the big ones); it just takes a very long time for the maxrss value to settle on its high-water mark, probably because of memory fragmentation.

It would be worth playing with MALLOC_MMAP_THRESHOLD_ to validate this assumption, but I've already spent far too much time on this ticket.
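A sketch of that experiment (the threshold value is an assumption, for illustration): lowering glibc's mmap threshold makes large allocations come from mmap(), so freeing them returns memory to the OS instead of leaving holes in the heap; if maxrss then stops ratcheting up, fragmentation was the culprit.

```shell
export MALLOC_MMAP_THRESHOLD_=131072
# then start the server as usual:
# xpra start :64 ...
```

Note that setting the threshold explicitly also disables glibc's dynamic threshold adjustment, which is itself one of the usual causes of this ratcheting behaviour.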

@nathan-renniewaldock: can I close this?

Fri, 24 Nov 2017 16:06:35 GMT - Antoine Martin: status changed; resolution set

Sat, 23 Jan 2021 05:30:26 GMT - migration script:

this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1662