commit | e80c455483991233c35f8c550fbfbaf9946a9612 | [log] [tgz] |
---|---|---|
author | Jeongik Cha <jeongik@google.com> | Wed Aug 09 05:01:19 2023 +0000 |
committer | Automerger Merge Worker <android-build-automerger-merge-worker@system.gserviceaccount.com> | Wed Aug 09 05:01:19 2023 +0000 |
tree | 5c440884231ba96b418ead9bc627f891fd733e3f | |
parent | a3307d6a5a89a764779bab2f0c53e4cec6f48387 [diff] | |
parent | 965a1e7d0710a300704b74c023df63c88597bfba [diff] |
Import platform/external/rust/crates/vhost-device-vsock am: 965a1e7d07

Original change: https://android-review.googlesource.com/c/platform/external/rust/crates/vhost-device-vsock/+/2691688

Change-Id: I0d2693cebb3889ee510d9a0f7b956a7240c6b466
Signed-off-by: Automerger Merge Worker <android-build-automerger-merge-worker@system.gserviceaccount.com>
The crate introduces a vhost-device-vsock device that enables communication between an application running in the guest (i.e. inside a VM) and an application running on the host (i.e. outside the VM). The application in the guest communicates over VM sockets, i.e. AF_VSOCK sockets, while the application on the host connects to a Unix socket on the host, i.e. communicates over AF_UNIX sockets. The main components of the crate are split into various files as described below:
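Concretely, a host application reaches a guest port with a hybrid-vsock style handshake: it connects to the `uds-path` Unix socket and first writes a `CONNECT <port>` line, after which the stream carries the raw payload. This matches the `nc -U /tmp/vm4.vsock` + `CONNECT 1234` example further below; the helper name and hard-coded path here are illustrative, not part of the crate's API:

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;
use std::path::Path;

/// Handshake line the device expects before payload bytes flow
/// (helper name is ours, not part of the crate's API).
fn connect_command(port: u32) -> String {
    format!("CONNECT {port}\n")
}

fn main() -> std::io::Result<()> {
    let cmd = connect_command(1234);
    // Only try to connect when the device socket actually exists, so this
    // sketch also runs on a machine without vhost-device-vsock running.
    if Path::new("/tmp/vm4.vsock").exists() {
        let mut stream = UnixStream::connect("/tmp/vm4.vsock")?;
        stream.write_all(cmd.as_bytes())?;
        // From here on, reads and writes on `stream` carry the vsock payload.
    }
    println!("{}", cmd.trim_end());
    Ok(())
}
```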
Run the vhost-device-vsock device:
```shell
vhost-device-vsock \
  --guest-cid=<CID assigned to the guest> \
  --socket=<path to the Unix socket to be created to communicate with the VMM via the vhost-user protocol> \
  --uds-path=<path to the Unix socket to communicate with the guest via the virtio-vsock device> \
  [--tx-buffer-size=<size of the buffer used for the TX virtqueue (guest->host packets)>]
```
or
```shell
vhost-device-vsock --vm guest_cid=<CID assigned to the guest>,socket=<path to the Unix socket to be created to communicate with the VMM via the vhost-user protocol>,uds-path=<path to the Unix socket to communicate with the guest via the virtio-vsock device>[,tx-buffer-size=<size of the buffer used for the TX virtqueue (guest->host packets)>]
```
Specify the `--vm` argument multiple times to configure multiple devices:
```shell
vhost-device-vsock \
  --vm guest-cid=3,socket=/tmp/vhost3.socket,uds-path=/tmp/vm3.vsock \
  --vm guest-cid=4,socket=/tmp/vhost4.socket,uds-path=/tmp/vm4.vsock,tx-buffer-size=32768
```
Or use a configuration file:
```shell
vhost-device-vsock --config=<path to the local yaml configuration file>
```
Configuration file example:
```yaml
vms:
  - guest_cid: 3
    socket: /tmp/vhost3.socket
    uds_path: /tmp/vm3.sock
    tx_buffer_size: 65536
  - guest_cid: 4
    socket: /tmp/vhost4.socket
    uds_path: /tmp/vm4.sock
    tx_buffer_size: 32768
```
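Each entry under `vms:` carries the same four settings as one `--vm` CLI argument. A minimal sketch of that shape in Rust, with struct and field names chosen here for illustration (the crate's real configuration types may differ):

```rust
/// One entry under `vms:` (illustrative names, not the crate's API).
#[derive(Debug, PartialEq)]
struct VmConfig {
    guest_cid: u64,      // CID the guest is reachable at
    socket: String,      // vhost-user socket shared with the VMM
    uds_path: String,    // AF_UNIX socket for host applications
    tx_buffer_size: u32, // TX virtqueue buffer (guest->host), in bytes
}

/// The first VM from the example configuration file above.
fn example_vm3() -> VmConfig {
    VmConfig {
        guest_cid: 3,
        socket: "/tmp/vhost3.socket".to_string(),
        uds_path: "/tmp/vm3.sock".to_string(),
        tx_buffer_size: 65536,
    }
}

fn main() {
    let vm = example_vm3();
    println!("cid={} tx_buffer={}", vm.guest_cid, vm.tx_buffer_size);
}
```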
Run VMM (e.g. QEMU):
```shell
qemu-system-x86_64 \
  <normal QEMU options> \
  -object memory-backend-file,share=on,id=mem0,size=<Guest RAM size>,mem-path=<Guest RAM file path> \ # size == -m size
  -machine <machine options>,memory-backend=mem0 \
  -chardev socket,id=char0,reconnect=0,path=<vhost-user socket path> \
  -device vhost-user-vsock-pci,chardev=char0
```
```shell
shell1$ vhost-device-vsock --vm guest-cid=4,uds-path=/tmp/vm4.vsock,socket=/tmp/vhost4.socket
```
or, if you want to configure the TX buffer size:
```shell
shell1$ vhost-device-vsock --vm guest-cid=4,uds-path=/tmp/vm4.vsock,socket=/tmp/vhost4.socket,tx-buffer-size=65536
```
```shell
shell2$ qemu-system-x86_64 \
  -drive file=vm.qcow2,format=qcow2,if=virtio -smp 2 -m 512M -mem-prealloc \
  -object memory-backend-file,share=on,id=mem0,size=512M,mem-path="/dev/hugepages" \
  -machine q35,accel=kvm,memory-backend=mem0 \
  -chardev socket,id=char0,reconnect=0,path=/tmp/vhost4.socket \
  -device vhost-user-vsock-pci,chardev=char0
```
```shell
# https://github.com/stefano-garzarella/iperf-vsock
guest$ iperf3 --vsock -s
host$  iperf3 --vsock -c /tmp/vm4.vsock
```
```shell
guest$ nc --vsock -l 1234
host$  nc -U /tmp/vm4.vsock
CONNECT 1234
```
```shell
# https://github.com/stefano-garzarella/iperf-vsock
host$  iperf3 --vsock -s -B /tmp/vm4.vsock
guest$ iperf3 --vsock -c 2
```
```shell
host$  nc -l -U /tmp/vm4.vsock_1234
guest$ nc --vsock 2 1234
```
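In the guest-initiated direction the naming convention is visible in the example above: a connection from the guest to host port P is forwarded to a Unix socket named `<uds-path>_<P>`, so the host listener binds `/tmp/vm4.vsock_1234` for port 1234. A sketch of that mapping (the helper name is ours):

```rust
/// Unix-socket path the host must listen on to accept a guest-initiated
/// connection to `port` (pattern inferred from the /tmp/vm4.vsock_1234
/// example above; helper name is illustrative).
fn host_listener_path(uds_path: &str, port: u32) -> String {
    format!("{uds_path}_{port}")
}

fn main() {
    // Matches the `nc -l -U /tmp/vm4.vsock_1234` example above.
    println!("{}", host_listener_path("/tmp/vm4.vsock", 1234));
}
```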
If you add multiple VMs, they can communicate with each other. For example, if you have two VMs with CID 3 and 4, you can run the following commands to make them communicate:
```shell
shell1$ vhost-device-vsock \
  --vm guest-cid=3,uds-path=/tmp/vm3.vsock,socket=/tmp/vhost3.socket \
  --vm guest-cid=4,uds-path=/tmp/vm4.vsock,socket=/tmp/vhost4.socket

shell2$ qemu-system-x86_64 \
  -drive file=vm1.qcow2,format=qcow2,if=virtio -smp 2 -m 512M -mem-prealloc \
  -object memory-backend-file,share=on,id=mem0,size=512M,mem-path="/dev/hugepages" \
  -machine q35,accel=kvm,memory-backend=mem0 \
  -chardev socket,id=char0,reconnect=0,path=/tmp/vhost3.socket \
  -device vhost-user-vsock-pci,chardev=char0

shell3$ qemu-system-x86_64 \
  -drive file=vm2.qcow2,format=qcow2,if=virtio -smp 2 -m 512M -mem-prealloc \
  -object memory-backend-file,share=on,id=mem0,size=512M,mem-path="/dev/hugepages2" \
  -machine q35,accel=kvm,memory-backend=mem0 \
  -chardev socket,id=char0,reconnect=0,path=/tmp/vhost4.socket \
  -device vhost-user-vsock-pci,chardev=char0
```
```shell
# nc-vsock patched to set `.svm_flags = VMADDR_FLAG_TO_HOST`
guest_cid3$ nc-vsock -l 1234
guest_cid4$ nc-vsock 3 1234
```
This project is licensed under either of