Virt-manager QEMU/KVM guests suddenly have no internet

I’ve been using VMs with virt-manager for a while on my current install with everything working fine.
Now none of my guests have internet access, whether they’re QEMU/KVM user-session guests with usermode networking or QEMU/KVM guests with NAT; local networking/LAN still works fine though.

In the past Portmaster was an issue (for NAT), but that had been resolved (I disabled it to test anyway).
Outside of updates and other system changes, the only other differences I can think of are a router/IP change (192.168.1.xx to 192.168.0.xx) and setting up Pi-hole (which I also disabled as a test).

Following other resolved posts, I made sure I was using legacy iptables as suggested; on top of that I reinstalled virt-manager and qemu-full, but still nothing.
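
For reference, checking which iptables backend is actually in use on Arch should look something like this:

# The version string reports the backend: "(legacy)" or "(nf_tables)"
iptables --version
# Which iptables package variant is installed
pacman -Q | grep -i iptables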

My kernel is 6.7.5-arch1-1; the same issue occurs on the LTS kernel.
CPU is an i7-7800X.

Thanks in advance for any suggestions.

I tried recreating the ‘default’ network for QEMU/KVM and deleted a duplicate virbr0 network, but still no change.
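
Recreating it via virsh would look roughly like this (the XML path is the one libvirt usually ships, so it may differ):

# Stop and remove the existing default network
sudo virsh net-destroy default
sudo virsh net-undefine default
# Re-define it from the stock XML shipped with libvirt, then start it
sudo virsh net-define /usr/share/libvirt/networks/default.xml
sudo virsh net-start default
sudo virsh net-autostart default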

It could be the networking change. If you have WAN access from the host device, then it’s very unlikely to be an issue with Pi-hole. I have more than a dozen VMs running with KVM/QEMU and they’ve never been blocked by my Pi-hole.

How do you have networking set up for the VMs? Do you have DHCP set to auto-assign IP addresses, or do you have static ones set?

Can you ping from any of the VMs?

I used the default settings from the install within virt-manager (usermode for the user session and NAT for the non-user/root? session) - my host PC is set to a static IP in its settings, with the router reserving the same IP.
On the previous router it was also static on the PC’s end, and the third octet of the IP was .1; now it’s .0.

I could ping local addresses fine, but nothing outside my network.
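
If it helps narrow things down, from inside a guest I can compare an IP-only check against a hostname check, something like this (addresses are just examples):

# Gateway on the LAN (local routing only)
ping -c 3 192.168.0.1
# A public IP (routing to the internet, no DNS involved)
ping -c 3 1.1.1.1
# A hostname (only works if DNS resolution is also fine)
ping -c 3 archlinux.org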

Did you change anything, such as setting static IPs for any of the VMs?

Have you checked anything with the network bridges? That’s happened to me a few times: too many network bridges get created and then the VMs lose WAN access.

That right there could be the cause, if you mean that your IP went from, just as an example, 192.168.1.10 to 192.168.0.10. That could be causing a conflict.
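
One quick way to rule that in or out, assuming the guests use the stock libvirt ‘default’ NAT network, is to compare its subnet against the new LAN range:

# Show the NAT network definition, including its <ip address=...> subnet
sudo virsh net-dumpxml default
# Compare against the LAN subnet the host is on now
ip -4 addr show
ip route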

What’s the output of ip a?

If you want to get ahead, my next response after you post that would be about your network config…

Edit: Have you looked at this? Sounds basically like what you’re describing.

I tried the iptables steps from the wiki - no change, and the libvirt rules were already there.
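
Something along these lines should show whether forwarding and the libvirt rules are actually in place (the LIBVIRT_* chain names are just the ones libvirt normally creates, so they may differ):

# IPv4 forwarding has to be enabled on the host for the NAT network to work
sysctl net.ipv4.ip_forward
# List filter and nat rules; libvirt normally creates LIBVIRT_* chains for virbr0
sudo iptables -S | grep -i libvirt
sudo iptables -t nat -S | grep -i libvirt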

For your first question, no.
For the second question: it looks like I have 5 network bridges - two duplicate virbr0 bridges from virt (I did delete the unused one before), docker0, and two br-[string] bridges I’m not sure about, possibly Docker’s.
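
Something like this should show which bridge belongs to what, in case that narrows it down (assuming the br-* ones are Docker’s):

# List all bridges the kernel knows about
ip -br link show type bridge
# Docker-managed networks; the br-<id> names should match these network IDs
docker network ls
# Libvirt-managed networks (virbr0 should come from 'default')
sudo virsh net-list --all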

Yeah, that’s what happened since the new router changed ranges.

ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 10:7b:44:17:0d:56 brd ff:ff:ff:ff:ff:ff
altname enp0s31f6
inet 192.168.0.17/24 brd 192.168.0.255 scope global noprefixroute eno1
valid_lft forever preferred_lft forever
inet6 fe80::ded6:ca12:1938:6943/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 8e:54:75:dc:55:03 brd ff:ff:ff:ff:ff:ff permaddr 3c:95:09:79:07:b9
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:e0:32:ac brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr0
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:3a:1b:c0:36 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
6: br-7a27f7e97ad2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:68:30:50:9a brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-7a27f7e97ad2
valid_lft forever preferred_lft forever
inet6 fe80::42:68ff:fe30:509a/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
7: br-a5da80798f08: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:7d:76:b1:95 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-a5da80798f08
valid_lft forever preferred_lft forever
9: vethdac3e7b@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7a27f7e97ad2 state UP group default
link/ether 56:10:2b:b3:91:56 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::5410:2bff:feb3:9156/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
11: vethc291f7e@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7a27f7e97ad2 state UP group default
link/ether ba:23:d0:23:26:01 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::b823:d0ff:fe23:2601/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
13: veth1ab104c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-7a27f7e97ad2 state UP group default
link/ether fa:89:ef:ee:7a:94 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::f889:efff:feee:7a94/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever

If the issue’s the IP, I might try going non-static temporarily; if it’s Docker, I should get on with shifting it to my Pi like I’ve been planning.
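
If I go that route, and assuming NetworkManager is managing eno1 (the connection name below is a guess), it would be something like:

# Show connection names first
nmcli connection show
# Switch the wired connection to DHCP temporarily, clearing the static settings
nmcli connection modify "eno1" ipv4.method auto ipv4.addresses "" ipv4.gateway "" ipv4.dns ""
nmcli connection up "eno1"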

Thanks, that’s helpful info. Before I figure out more, a bit of clarification: do you have Docker containers and VMs running side by side? Do you have Docker containers running inside the VMs? If you are running Docker, is it Docker that doesn’t have WAN access, or do neither Docker nor the VMs have WAN?

What OS are you running inside the VMs? What’s the output of ip a from within them?

My Docker instance is on my host PC - I’ve only set the Docker container (Tandoor) to be local.
I haven’t tested all my VMs, but on the user session I tested Fedora 35 and a new (now deleted) Mint - the others are BlackArch, FerenOS, FreeBSD, and Mint-XFCE.
On the non-user session it’s 3 Windows 10 VMs (gaming, non-gaming, Tiny10); none of these have network access.

ip a on Fedora:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:88:92:e1 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp1s0
valid_lft 86332sec preferred_lft 86332sec
inet6 fec0::602c:5a4f:603c:4e3a/64 scope site dynamic noprefixroute
valid_lft 86335sec preferred_lft 14335sec
inet6 fe80::2cc9:20be:7020:ecc1/64 scope link noprefixroute
valid_lft forever preferred_lft forever

Mint:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:14:7a:dc brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute enp1s0
valid_lft 86326sec preferred_lft 86326sec
inet6 fec0::f917:a781:2ae4:df7f/64 scope site temporary dynamic
valid_lft 86330sec preferred_lft 14330sec
inet6 fec0::44b5:34a3:fcbc:52e6/64 scope site dynamic mngtmpaddr noprefixroute
valid_lft 86330sec preferred_lft 14330sec
inet6 fe80::502f:528:215c:22f9/64 scope link noprefixroute
valid_lft forever preferred_lft forever
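
Both guests picked up the standard QEMU user-networking address (10.0.2.15), so from inside a guest it might also be worth poking at the slirp gateway and DNS; the defaults below assume stock QEMU user networking:

# 10.0.2.2 is the usual slirp gateway (ICMP may not pass in user-mode networking)
ping -c 3 10.0.2.2
# 10.0.2.3 is slirp's built-in DNS forwarder
nslookup archlinux.org 10.0.2.3
# An end-to-end check that doesn't rely on ICMP
curl -I https://archlinux.org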

Not sure if there’s an equivalent for Windows, but I can do that too if it helps.

Apologies, I’ve been ill. I’ll keep trying to help as soon as I’m able.


No worries, hopefully you feel better soon