
Opened 3 months ago

Closed 8 weeks ago

#1387 closed enhancement (fixed)

mmap-support for --bind-vsock (and --bind-tcp)

Reported by: pingberlin Owned by: pingberlin
Priority: major Milestone:
Component: core Version: trunk
Keywords: kvm mmap vsock tcp Cc:

Description (last modified by Antoine Martin)

I recently discovered the vsock support of xpra and tested it successfully as described here: #983

Kvm also provides a shared memory device, but there does not seem to be a possibility to enable xpra's mmap-support when using --bind-vsock (or --bind-tcp).

My xpra version is: xpra-1.0-1.r14502.fc24.x86_64

The Linux distribution I use on the host and in the guest is Fedora 25.

The shared memory device in kvm called ivshmem was configured as follows:

  • 1. host:
    • install dependencies:
      • sudo dnf install ivshmem-tools
    • start ivshmem-server:
      • sudo ivshmem-server
    • verify the existence of your shared memory dev:
      • ls -la /dev/shm/ivshmem
    • start the vm with:
      • sudo /usr/bin/qemu-system-x86_64 -enable-kvm -machine type=pc,accel=kvm -m 4096 -name Fedora-25 -drive file=/var/lib/libvirt/images/fedora25.img,if=virtio,index=0,format=raw -smp 4,sockets=1,cores=2,threads=2 -net nic -net user -cpu host -chardev socket,path=/tmp/ivshmem_socket,id=nahanni -device ivshmem-doorbell,chardev=nahanni
  • 2. guest:
    • install dependencies:
      • sudo dnf install kernel-devel
      • sudo dnf update kernel
      • sudo reboot
      • git clone https://gitorious.org/nahanni/guest-code.git
      • cd guest-code/kernel_module/uio/
      • make
      • sudo make install
      • sudo modprobe uio
      • sudo insmod /lib/modules/4.8.15-300.fc25.x86_64/kernel/drivers/uio/uio_ivshmem.ko
    • verify the existence of your shared memory dev:
      • ls -la /dev/uio0

It would be nice to use the shared memory device in order to have fast, secure and seamless sandboxes with kvm and xpra.
That way every distribution running kvm would be able to provide a secure context for applications like Qubes OS does.

Also that would probably be one of the best Christmas presents for all the paranoid people out there also wanting to use grsecurity with their virtualization setups.

Attachments (1)

ivshmem-size-workarounds.patch (1.4 KB) - added by Antoine Martin 2 months ago.
workarounds for the small ivshmem size


Change History (8)

comment:1 Changed 3 months ago by Antoine Martin

Description: modified (diff)
Owner: changed from Antoine Martin to pingberlin

(made some minor edits to the description - also tested with Fedora 25 at both ends)

Notes:

  • if the socket /tmp/ivshmem_socket is left behind, the ivshmem-server command will fail with "cannot bind". (run with -v -F to get more details, then just remove the file if unused)
  • running as root: I assume we can just chmod the socket? (not tested)
  • rather than insmod, I used (personal preference / easier) depmod -a;modprobe uio_ivshmem
  • how do I read and write to the mmap file? I have tried variations on https://github.com/henning-schild/ivshmem-guest-code/blob/master/tests/DumpSum/VM/mmap.py (changed to write to the /dev/uio0 file) and I get "Invalid argument" every single time. The only code example that does not fail is "writedump", but this accesses the file as a file, not as shared memory. What am I missing?
Last edited 3 months ago by Antoine Martin (previous) (diff)

comment:2 Changed 2 months ago by pingberlin

Hi Antoine,
I did some research.
Considering reading and writing to the mmap file:

While being in the VM, do _not_ load the uio modules. Instead try the following:

  • Get the pci-id of your shared memory pci device:
    • lspci
  • Output may look like this:
    • 00:04.0 RAM memory: Red Hat, Inc Inter-VM shared memory (rev 01)
  • Check if the device is enabled (should return 0):
    • cat /sys/devices/pci0000\:00/0000\:00\:04.0/enable
  • Enable the device:
    • echo 1 > /sys/devices/pci0000\:00/0000\:00\:04.0/enable
  • Now try the following python code to access the shared memory:
    #!/usr/bin/python

    import sys
    import mmap

    # map the whole PCI resource file ("r+b": mmap needs a writable binary handle)
    with open(sys.argv[1], "r+b") as f:
        m = mmap.mmap(f.fileno(), 0)
        print(m.readline())
        m.close()

  • Use it as follows:
    • testmmap.py /sys/devices/pci0000\:00/0000\:00\:04.0/resource2

Does this help or do the problems persist?

Last edited 2 months ago by Antoine Martin (previous) (diff)

comment:3 Changed 2 months ago by Antoine Martin

I had to rmmod uio and uio_ivshmem so that the "enable" flag showed "0".
After that, I enabled it with echo and I can connect with xpra and mmap!
(we're going to need a few minor patches to make things work - nothing major)

The only problem that I have is the size of the mmap area: it ended up the right size (256MB) on the host (probably after the client tried to write the token near the 256MB mark) but the guest only sees about 4MB.
How do we control the size of this mmap area?

Changed 2 months ago by Antoine Martin

workarounds for the small ivshmem size

comment:4 Changed 2 months ago by Antoine Martin

Here's what you need to use ivshmem with xpra:

  • latest trunk revision: r14883 allows the client to use an existing mmap file, and r14884 lets the server override the path supplied by the client
  • ivshmem-size-workarounds.patch, until we can make the area bigger - just make sure to keep your windows small!


To use it:

  • server in guest:
    xpra start --bind-tcp=0.0.0.0: --no-daemon --start=xterm -d mmap \
        --mmap=/sys/devices/pci0000:00/0000:00:04.0/resource2
    
  • client on host:
    xpra attach tcp:GUEST:14500 --mmap=/dev/shm/ivshmem  -d mmap -d draw
    

FWIW: I tried running the client in the guest instead, but writing to the mmap area failed with "Input / Output error"; does the host side need to initiate things, perhaps?

comment:5 Changed 2 months ago by pingberlin

Hi Antoine,

Thank you very much! This works great.
I can confirm that this works for "--bind-tcp" and also for "--bind-vsock" for host-to-guest connections.

I'll check on the other issues in a few days.

comment:6 Changed 8 weeks ago by pingberlin

Hi Antoine,

  1. Concerning the issue:
    • "running as root: I assume we can just chmod the socket? (not tested)"
    • I have tested xpra also against the ivshmem-plain device, which offers fewer features but also doesn't require a socket. Test as follows:
      • Start your VM:
        sudo /usr/bin/qemu-system-x86_64 -enable-kvm -machine type=pc,accel=kvm -m 4096 \
        -name Fedora-25 -drive file=/var/lib/libvirt/images/fedora25.img,if=virtio,index=0,format=raw \
        -smp 4,sockets=1,cores=2,threads=2 -net nic -net user -cpu host \
        -object memory-backend-file,id=mb1,size=256M,share,mem-path=/dev/shm/ivshmem \
        -device ivshmem-plain,id=ivshm-plain,memdev=mb1
        
      • Start your xpra-server in your guest as usual with ivshmem.
      • Start your xpra-client on the host as usual with ivshmem.
      • Using "ivshmem-plain" removes the host's need for the package "ivshmem-tools".
  2. Concerning the issue:
    • "How do we control the size of this mmap area?"
      • The size can be set as follows with ivshmem-doorbell by invoking ivshmem-server:
        sudo ivshmem-server -l 256M
        
      • With ivshmem-plain, the size can be set on the qemu command line listed in the previous issue; just set the size parameter:
        -object memory-backend-file,id=mb1,size=256M,share,mem-path=/dev/shm/ivshmem
        

comment:7 Changed 8 weeks ago by Antoine Martin

Resolution: fixed
Status: new → closed

Hah, changing the size is easy.

I don't think there's anything left to do on this ticket so I'm closing it, feel free to re-open if I've missed something.
