I have configured two libvirt nodes with the following host names:
Both names are declared in public DNS and resolve correctly (to public IPs).
When I try to migrate a guest from one host to another:
root@mycompany-hv-02:~# virsh migrate prout qemu+ssh://mycompany-hv-01.example.tld/system --offline --persistent
error: internal error: hostname on destination resolved to localhost, but migration requires an FQDN
The error is the same when I try a live migration.
I know this is not exactly the same error, but I tried the tips on this page. My DNS already works, so I also tried forcing the resolution by adding entries to /etc/hosts on both hosts, but that doesn't work either.
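For reference, libvirt raises this error when the destination host resolves its own hostname to a loopback address (often via a Debian-style `127.0.1.1` line). A sketch of what /etc/hosts would need to look like on both nodes, with placeholder addresses:

```
# /etc/hosts on both nodes -- 192.0.2.x are placeholder addresses
127.0.0.1   localhost
# remove any "127.0.1.1 mycompany-hv-01" style line for the host's own name
192.0.2.11  mycompany-hv-01.example.tld  mycompany-hv-01
192.0.2.12  mycompany-hv-02.example.tld  mycompany-hv-02
```

On the destination you can then verify with `getent hosts "$(hostname -f)"` that the name no longer maps to a 127.x address.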
This creates a constant video stream of roughly 400 KB/s. I am deliberately using multicast here so that several hosts on the network can play or record the stream (without requiring the source computer, which is connected via Wi-Fi, to send multiple streams). The KVM guest's host is supposed to analyze the stream and record it whenever there is motion in the video.
Here is the problem: all hosts that are directly connected to the network (i.e. not KVM guests) can receive the UDP traffic, even the KVM host itself. The KVM guest, however, cannot; it only sees very few packets:
sudo timeout 20 tcpdump -i ens3 host 18.104.22.168
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens3, link-type EN10MB (Ethernet), capture size 262144 bytes
20:53:10.361355 IP vogelhaus.internal.example.com.48146 > 22.214.171.124.5000: UDP, length 4096
# omitted 5 lines
20:53:12.081881 IP vogelhaus.internal.example.com.48146 > 126.96.36.199.5000: UDP, length 4096
7 packets captured
16 packets received by filter
9 packets dropped by kernel
These are definitely not enough packets for a 400 KB/s video stream running for 20 seconds. When I do the same on another directly connected host, I get the expected results:
sudo timeout 20 tcpdump -i enp1s0f0 host 188.8.131.52
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp1s0f0, link-type EN10MB (Ethernet), capture size 262144 bytes
20:54:41.264709 IP vogelhaus.internal.example.com > 184.108.40.206: udp
# ... many, many more! ...
20:55:00.912257 IP vogelhaus.internal.example.com > 220.127.116.11: udp
20:55:00.912446 IP vogelhaus.internal.example.com.48146 > 18.104.22.168.5000: UDP, bad length 4096 > 1472
7205 packets captured
7231 packets received by filter
26 packets dropped by kernel
The operating system on the KVM host is Ubuntu 18.04. QEMU version:
QEMU emulator version 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.21)
Any idea what prevents the KVM guest from seeing all the traffic?
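In case it helps with debugging: one thing that commonly breaks multicast delivery to KVM guests is IGMP snooping on the Linux bridge, because the bridge only forwards a multicast group to ports that have joined it, and tcpdump alone does not join a group. A configuration sketch, assuming the guest is attached to a bridge named br0 (adjust the name to your setup):

```shell
# 1 = snooping on; the bridge then filters multicast groups nobody joined
cat /sys/class/net/br0/bridge/multicast_snooping

# For testing, flood all multicast to every bridge port,
# including the guest's tap device:
echo 0 | sudo tee /sys/class/net/br0/bridge/multicast_snooping
```

If the guest sees the full stream with snooping off, the fix is either to leave it off or to make the guest properly join the multicast group.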
I am using Arch Linux (up to date) with QEMU/KVM, libvirt, and virt-manager as the frontend. I have several virtual machines, but so far only one runs at a time. The virtual machine I am trying to get working runs Debian 10, but I also have a Kali and a CentOS 7 guest that show the same problem when I try similar things.
The interface I am trying to use for macvtap is a wireless card (in a ThinkPad T580 laptop) connected to a Wi-Fi access point (WPA2).
I am trying to configure a macvtap interface to connect the wlp4s0 interface on my host to one of my virtual machines. To do that, I am using virt-manager. I tried both bridge and VEPA mode on the macvtap, and tried every interface type (virtual hardware) in the VM, without success: the guest gets no network connection. NAT mode, however, works fine on all virtual machines.
Libvirt puts the device (wlp4s0 on the host) into promiscuous mode, although ip link does not show it (the flag in /sys/devices/… changes, and dmesg reports the device entering promiscuous mode).
When I start Wireshark and ping the gateway (with a fixed IP) from the VM, I see the ARP request on the host both on the macvtap and on wlp4s0, but there is no response.
When using DHCP, dhclient gets no response either.
I can provide more information if necessary. If you have any idea what is causing this, I will gladly hear your suggestions!
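For reference, the interface definition virt-manager generates for this kind of setup looks roughly like the following sketch (device name from my host; the mode attribute can be 'bridge', 'vepa', 'private', or 'passthrough'):

```xml
<interface type='direct'>
  <!-- wlp4s0 is the host's wireless NIC described above -->
  <source dev='wlp4s0' mode='bridge'/>
  <model type='virtio'/>
</interface>
```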
Hi, I have been struggling to get guest networking to work between two virtual machines on one host when using macvtap in VEPA mode. I've spent hours (days) searching Google without luck. Does this network configuration actually work?
I created the macvtap using KVM Manager by adding a NIC, selecting "macvtap" as the network source, VEPA as the source mode, and virtio as the device model.
Not sure if the above will render correctly; in case the diagram did not come through, here is a JPEG.
The host NIC is connected to a Cisco Nexus 9000, which I configured for 802.1Qbg reflective relay.
In vm2-62, when I try to ping 22.214.171.124, I get "Destination Host Unreachable".
When I use tcpdump on the host, I can see the ARP request from vm2-62 looking for the MAC of 126.96.36.199 (vm3-62). I can see the request on macvtap0, on bond1.62, and on bond1, but NOT on macvtap1.
If I manually add the ARP entries on vm3-62 and vm2-62, ping works fine, so I think the reflective relay on the switch is set up correctly.
It seems that either the switch is not reflecting the ARP request back, or I need to do something in Linux to make bond1.62 forward the ARP request to macvtap1.
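For anyone reproducing the trace described above, the per-interface checks can be scripted; a sketch (interface names as in my setup, requires root and tcpdump):

```shell
# Watch for the guest's ARP requests on each hop of the macvtap path
for ifc in macvtap0 bond1.62 bond1 macvtap1; do
    echo "== $ifc =="
    sudo timeout 5 tcpdump -ni "$ifc" -c 5 arp
done
```

The hop where the request stops appearing is where the frame is being dropped.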
I am following this tutorial on GPU passthrough; however, when I get to the 6:43 mark, where you press the button to begin the installation, I get the following error:
Unable to complete install: 'internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-5d739005-01d9-4c7c-9b41-bb3e3486c672' for '/usr/bin/qemu-system-x86_64': No such file or directory'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/create.py", line 2119, in _do_async_install
File "/usr/share/virt-manager/virtinst/installer.py", line 419, in start_install
File "/usr/share/virt-manager/virtinst/installer.py", line 362, in _create_guest
domain = self.conn.createXML(install_xml or final_xml, 0)
File "/usr/lib/python3/dist-packages/libvirt.py", line 3717, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: internal error: Process exited prior to exec: libvirt: error : unable to set AppArmor profile 'libvirt-5d739005-01d9-4c7c-9b41-bb3e3486c672' for '/usr/bin/qemu-system-x86_64': No such file or directory
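For context on the error: libvirt derives the per-VM AppArmor profile name from the domain UUID, and on Ubuntu the per-VM profiles live under /etc/apparmor.d/libvirt/. A small sketch of where to look (UUID copied from the error above):

```shell
# libvirt names the per-VM AppArmor profile "libvirt-<domain UUID>"
uuid=5d739005-01d9-4c7c-9b41-bb3e3486c672
profile="libvirt-${uuid}"
echo "$profile"
# On the affected host you could then check whether the profile exists:
# ls /etc/apparmor.d/libvirt/"$profile"*
```

If the profile file is missing, the "No such file or directory" part of the message starts to make sense.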
I'm experimenting with NetBSD to see if I can get the Fenrir screen reader to run on it. However, I ran into a snag after installation: the console I was using during the installation worked perfectly fine, but it stopped working completely once I completed the installation. For reference, here is the line I used for virt-install:
virt-install --connect qemu:///system -n netbsd-testing \
    --ram 4096 --vcpus=8 \
    --cpu=host \
    --os-type=netbsd --os-variant=netbsd8.0 \
    --disk=pool=devel,size=100,format=qcow2 \
    -w network=default --nographics
When the NetBSD installer asked what type of terminal I was using, I accepted the default value, which was VT200. As I recall, I told it to use the BIOS for booting, and not any of the serial ports. Does anyone have experience running a libvirt virtual machine without graphics, and any pointers on how to get a working console?
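For what it's worth, with --nographics the guest's first serial port serves as its console, and you can attach to it from the host (guest name taken from the command above):

```shell
# Attach to the running guest's serial console; detach with Ctrl+]
virsh --connect qemu:///system console netbsd-testing
```

This only shows output if the guest's kernel and getty are actually directed at the serial port, which is the part that usually needs configuring after installation.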
Is it possible, through some hook or callback mechanism, for my script or executable running on the host to know that libvirt is about to send, or has just sent, a shutdown command to a guest?
I am not trying to detect the case where the guest has decided to shut down on its own. I am trying to detect the case where libvirt has decided to ask a guest to shut down.
I'm trying to do this so that my script or executable can automatically send a shutdown command "on the side" via SSH to a pair of macOS guests that do not respond to ACPI commands and cannot run the libvirt guest agent.
I have found script hooks and API callback mechanisms that will notify me after a guest has shut down, but I cannot find a trick to learn about an attempt to shut down a guest beforehand.
I am running libvirt under Slackware, but an answer for any host platform would be valuable. Thank you!
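For background, the script-hook mechanism mentioned above is the /etc/libvirt/hooks/qemu script, which libvirt invokes with the guest name and an operation argument. The sketch below (a hypothetical handler, not a full hook) illustrates the limitation: the "stopped" and "release" operations fire only after the guest is already down, and there is no pre-shutdown operation.

```shell
#!/bin/sh
# Hypothetical handler sketch for /etc/libvirt/hooks/qemu.
# libvirt invokes the hook as:  qemu <guest_name> <operation> <sub-op> -
handle_hook() {
    guest="$1"; op="$2"
    case "$op" in
        # These fire only AFTER the guest is already down --
        # too late to send a side-channel shutdown over SSH.
        stopped|release) echo "guest $guest reached $op" ;;
        *)               echo "ignoring $op for $guest" ;;
    esac
}

# Example invocation, as libvirt would make after a guest stops:
handle_hook demo-guest stopped
```

So a hook like this confirms the problem rather than solving it; catching the shutdown *request* needs something else (e.g. wrapping whatever issues the virsh shutdown).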