Xpra: Ticket #1387: mmap-support for --bind-vsock (and --bind-tcp)

I recently discovered the vsock support of xpra and tested it successfully as described here: #983

KVM also provides a shared memory device, but there does not seem to be a way to enable xpra's mmap support when using --bind-vsock (or --bind-tcp).

My xpra version is: xpra-1.0-1.r14502.fc24.x86_64

The Linux distribution I use on the host and in the guest is Fedora 25.

KVM's shared memory device, called ivshmem, was configured as follows:
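A typical ivshmem-doorbell setup looks roughly like this (the socket path, IDs and size are illustrative, not taken from this ticket):

```shell
# On the host: start the ivshmem server, which creates the control
# socket and the shared-memory region (default socket: /tmp/ivshmem_socket)
sudo ivshmem-server -l 256M

# Then add the doorbell device to the qemu command line:
#   -chardev socket,path=/tmp/ivshmem_socket,id=ivshm-sock
#   -device ivshmem-doorbell,chardev=ivshm-sock,vectors=1
```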

It would be nice to use the shared memory device in order to have fast, secure and seamless sandboxes with KVM and xpra. That way, every distribution running KVM would be able to provide a secure context for applications, like Qubes OS does.

That would also probably be one of the best Christmas presents for all the paranoid people out there who also want to use grsecurity with their virtualization setups.

Mon, 26 Dec 2016 08:28:27 GMT - Antoine Martin: owner, description changed

(made some minor edits to the description - also tested with Fedora 25 at both ends)


Sun, 29 Jan 2017 02:42:27 GMT - pingberlin:

Hi Antoine, I did some research. Regarding reading and writing the mmap file:

While in the VM, do _not_ load the uio modules. Instead try the following:
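A sketch of what this could look like in the guest (the PCI address is illustrative; find yours via the ivshmem vendor:device ID 1af4:1110):

```shell
# Locate the ivshmem PCI device in the guest
lspci -d 1af4:1110

# Enable it without loading uio/uio_ivshmem; BAR2 (the shared memory)
# is then directly mmap-able through sysfs:
echo 1 | sudo tee /sys/bus/pci/devices/0000:00:05.0/enable
ls -l /sys/bus/pci/devices/0000:00:05.0/resource2
```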

Does this help or do the problems persist?

Sun, 29 Jan 2017 10:21:17 GMT - Antoine Martin:

I had to rmmod uio and uio_ivshmem so that the "enable" flag showed "0". After that, I enabled it with echo and I can connect with xpra and mmap! (we're going to need a few minor patches to make things work - nothing major)

The only problem that I have is the size of the mmap area: it ended up the right size (256MB) on the host (probably after the client tried to write the token near the 256MB mark) but the guest only sees about 4MB. How do we control the size of this mmap area?

Sun, 29 Jan 2017 11:14:52 GMT - Antoine Martin: attachment set

workarounds for the small ivshmem size

Sun, 29 Jan 2017 11:27:27 GMT - Antoine Martin:

Here's what you need to use ivshmem with xpra:

To use it:
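The exact invocation depends on the attached patch; assuming xpra's --mmap option accepts a filename, usage would look roughly like this (PCI address, port and guest IP are illustrative):

```shell
# In the guest: start the server with mmap backed by the ivshmem BAR
xpra start :10 --bind-tcp=0.0.0.0:10000 \
    --mmap=/sys/bus/pci/devices/0000:00:05.0/resource2

# On the host: attach with mmap backed by the shared backing file
xpra attach tcp:GUEST-IP:10000 --mmap=/dev/shm/ivshmem
```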

FWIW: I tried running the client in the guest instead, but writing to the mmap area failed with an "Input/Output error". Does the host side perhaps need to initiate things?

Sun, 29 Jan 2017 20:28:50 GMT - pingberlin:

Hi Antoine,

Thank you very much! This works great. I can confirm that this works with "--bind-tcp" and also with "--bind-vsock" for host-to-guest connections.
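For reference, a host-to-guest vsock connection (as tested in #983) looks roughly like this; the CID and port are illustrative:

```shell
# Guest: listen on a vsock port
xpra start :10 --bind-vsock=auto:2000

# Host: attach using the guest's context ID (3 is the first guest CID)
xpra attach vsock:3:2000
```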

I'll check on the other issues in a few days.

Mon, 30 Jan 2017 20:20:07 GMT - pingberlin:

Hi Antoine,

  1. Concerning the issue:
    • "running as root: I assume we can just chmod the socket? (not tested)"
    • I have also tested xpra against the ivshmem-plain device, which offers fewer features but also doesn't require a socket. Test as follows:
      • Start your VM:
        sudo /usr/bin/qemu-system-x86_64 -enable-kvm -machine type=pc,accel=kvm -m 4096 \
        -name Fedora-25 -drive file=/var/lib/libvirt/images/fedora25.img,if=virtio,index=0,format=raw \
        -smp 4,sockets=1,cores=2,threads=2 -net nic -net user -cpu host \
        -object memory-backend-file,id=mb1,size=256M,share,mem-path=/dev/shm/ivshmem \
        -device ivshmem-plain,id=ivshm-plain,memdev=mb1
      • Start your xpra-server in your guest as usual with ivshmem.
      • Start your xpra-client on the host as usual with ivshmem.
      • Using "ivshmem-plain" removes the host's need for the "ivshmem-tools" package.
  2. Concerning the issue:
    • "How do we control the size of this mmap area?"
      • The size can be set as follows with ivshmem-doorbell by invoking ivshmem-server:
        sudo ivshmem-server -l 256M
      • The size can be set as follows with ivshmem-plain via the qemu command line listed in the previous issue. Just set the size parameter:
        -object memory-backend-file,id=mb1,size=256M,share,mem-path=/dev/shm/ivshmem
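To sanity-check the size, the backing file on the host should be exactly 256 MiB; the same check can be sketched with a temporary file (path illustrative):

```shell
# 256M as qemu/truncate understand it is 256 * 1024 * 1024 bytes
f=$(mktemp)
truncate -s 256M "$f"
stat -c '%s' "$f"    # prints 268435456
rm -f "$f"
```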

Tue, 31 Jan 2017 04:48:06 GMT - Antoine Martin: status changed; resolution set

Hah, changing the size is easy.

I don't think there's anything left to do on this ticket so I'm closing it, feel free to re-open if I've missed something.

Sat, 23 Jan 2021 05:22:51 GMT - migration script:

this ticket has been moved to: https://github.com/Xpra-org/xpra/issues/1387