VMware network performance

Now that I have decided to seriously consider VMware as a replacement for VirtualBox, I have hit the first obstacle: guest<->host network performance.

Guest: Linux (Debian or EndeavourOS, it doesn't matter), running on VMware Workstation Pro 16.2
Host: EndeavourOS (IP 192.168.132.37)

I saw bad NFS performance when the guest reads data from the host; the write performance is fine. After some testing I started using iperf to measure the raw network speed, and that seems to be the culprit.
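
For reference, the measurements below are with plain iperf (version 2), which defaults to TCP port 5001. The test is essentially just this, with the roles swapped to measure both directions:

# on the side acting as server (here: the guest)
iperf -s

# on the other side, point the client at the server's IP
iperf -c 192.168.132.32 -t 10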

Here is some data. First the good numbers, with the guest in VirtualBox:

VIRTUALBOX
guest as iperf server:

------------------------------------------------------------
Client connecting to 192.168.132.32, TCP port 5001
TCP window size: 2.26 MByte (default)
------------------------------------------------------------
[  3] local 192.168.132.37 port 34128 connected with 192.168.132.32 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  4.27 GBytes  3.66 Gbits/sec

guest as iperf client:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 192.168.132.37 port 5001 connected with 192.168.132.32 port 46620
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  12.0 GBytes  10.3 Gbits/sec

And now VMware:

VMWARE
guest as iperf server:

------------------------------------------------------------
Client connecting to 192.168.132.57, TCP port 5001
TCP window size:  986 KByte (default)
------------------------------------------------------------
[  3] local 192.168.132.37 port 43776 connected with 192.168.132.57 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec

guest as iperf client:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
[  4] local 192.168.132.37 port 5001 connected with 192.168.132.57 port 43776
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  4.56 GBytes  3.92 Gbits/sec

It looks like the VMware network performance is only about a third of what VirtualBox achieves.

This is also reflected in the NFS performance I measured with fio: with the guest in VirtualBox I get read speeds of about 300 MB/s, with the guest in VMware only about 100 MB/s, while both show write speeds of around 300 MB/s.
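
The exact fio job doesn't matter much; a simple sequential read/write test over the NFS mount along these lines shows it (mount point and file size here are just placeholders):

# sequential read from the NFS mount
fio --name=nfs-read --directory=/mnt/nfs --rw=read --bs=1M --size=4G --direct=1 --runtime=30 --time_based --group_reporting

# same for writes
fio --name=nfs-write --directory=/mnt/nfs --rw=write --bs=1M --size=4G --direct=1 --runtime=30 --time_based --group_reporting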

I tried several things in VMware (rough commands are sketched after the list):

  • using the vmxnet3 network adapter
  • changing to MTU 9000
  • testing several different NFS rsize/wsize values
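
Roughly, those attempts looked like this (interface name, export path and the exact rsize/wsize values are just examples from my setup):

# in the VM's .vmx file, with the VM powered off
ethernet0.virtualDev = "vmxnet3"

# verify in the guest that the vmxnet3 driver is actually in use
ethtool -i ens160

# jumbo frames on the guest interface
ip link set ens160 mtu 9000

# NFS mount with explicit rsize/wsize
mount -t nfs -o rsize=1048576,wsize=1048576 192.168.132.37:/srv/share /mnt/nfs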

But I just don't get it. The read speed in VMware is always limited to about 100 MB/s, although the vmxnet3 adapter reports that it supports 10000 Mb/s:

# ethtool ens160
Settings for ens160:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Auto-negotiation: off
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        MDI-X: Unknown
        Supports Wake-on: uag
        Wake-on: d
        Link detected: yes

What is going wrong here? Any idea?

I am still working right now, but I can do some testing later and see what my results look like.

What I find remarkable is that iperf gives different results depending on whether the guest is the iperf server or the client. If the guest is the iperf server (receiving data = reading data), the performance drops. I am confused because I thought the direction of the traffic shouldn't matter for the bandwidth.

Could it be that the host is using the physical NIC to send data to the guest (which is configured as a network bridge)? The physical NIC of the host is certainly limited to 1000 Mb/s, and that is exactly the result iperf shows. But then it only shows that result with VMware, not with VirtualBox. Weird.
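
One way to check this would be to watch the host's interface counters while iperf is running, something like this (the NIC name is just an example):

# on the host, before and after a test run
ip -s link show enp4s0

# if the RX/TX byte counters on the physical NIC grow by roughly the amount
# iperf transferred, the traffic really is leaving over the 1 Gb/s link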

Are you using bridged mode? If so, try switching to NAT and see if that changes performance.

A few notes.

  • All your tests are using different TCP window sizes. I wonder if that is why you get different results (see the example below for pinning the window size).
  • On my side, with no optimization, using vmxnet3 I get ~3.5 Gbits/sec with a 128 KB TCP window.
  • For me, performance does not change between bridged, NAT and host-only networking.
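
To rule out the window size as a factor, it can be pinned on the client side, for example with iperf3 (the IP here is your host's from the earlier tests, the 128K value is just an example):

# guest sending, fixed window
iperf3 -c 192.168.132.37 -w 128K

# host sending (reverse mode), same window
iperf3 -c 192.168.132.37 -w 128K -R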

If I use NAT I cannot access the host from the guest and vice versa. Instead the guest uses the physical NIC on the host to directly access the internet. In that scenario I cannot test with iperf, and the bandwidth limit would be my physical NIC anyway, which is 1000 Mbit/s.

How did you measure that?

I tested with a new host-only interface in the IP range 172.16.82.0/24:

Here 172.16.82.1 is the host and 172.16.82.129 is the guest. I tested with iperf3 in both directions (with and without -R). In this case the faster direction is when the guest is receiving data from the host (172.16.82.1 is sending):

root@nextcloud://root
# iperf3 -c 172.16.82.1 
Connecting to host 172.16.82.1, port 5201
[  5] local 172.16.82.129 port 56464 connected to 172.16.82.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   165 MBytes  1.38 Gbits/sec    0   1.05 MBytes       
[  5]   1.00-2.00   sec   171 MBytes  1.44 Gbits/sec    0   1.05 MBytes       
[  5]   2.00-3.00   sec   182 MBytes  1.53 Gbits/sec    0   1.05 MBytes       
[  5]   3.00-4.00   sec   174 MBytes  1.46 Gbits/sec    0   1.10 MBytes       
[  5]   4.00-5.00   sec   178 MBytes  1.49 Gbits/sec    0   1.10 MBytes       
[  5]   5.00-6.00   sec   170 MBytes  1.43 Gbits/sec    0   1.10 MBytes       
[  5]   6.00-7.00   sec   169 MBytes  1.42 Gbits/sec    0   1.10 MBytes       
[  5]   7.00-8.00   sec   169 MBytes  1.42 Gbits/sec    0   1.10 MBytes       
[  5]   8.00-9.00   sec   168 MBytes  1.41 Gbits/sec    0   1.10 MBytes       
[  5]   9.00-10.00  sec   170 MBytes  1.43 Gbits/sec    0   1.10 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.67 GBytes  1.44 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  1.67 GBytes  1.44 Gbits/sec                  receiver

iperf Done.
                                                                                                                                                                                                                                                                 
Mi 12. Jan 07:27:28 CET 2022
root@nextcloud://root
# iperf3 -c 172.16.82.1 -R
Connecting to host 172.16.82.1, port 5201
Reverse mode, remote host 172.16.82.1 is sending
[  5] local 172.16.82.129 port 56468 connected to 172.16.82.1 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   587 MBytes  4.93 Gbits/sec                  
[  5]   1.00-2.00   sec   580 MBytes  4.86 Gbits/sec                  
[  5]   2.00-3.00   sec   627 MBytes  5.26 Gbits/sec                  
[  5]   3.00-4.00   sec   702 MBytes  5.89 Gbits/sec                  
[  5]   4.00-5.00   sec   612 MBytes  5.13 Gbits/sec                  
[  5]   5.00-6.00   sec   838 MBytes  7.03 Gbits/sec                  
[  5]   6.00-7.00   sec   881 MBytes  7.39 Gbits/sec                  
[  5]   7.00-8.00   sec   647 MBytes  5.42 Gbits/sec                  
[  5]   8.00-9.00   sec   550 MBytes  4.61 Gbits/sec                  
[  5]   9.00-10.00  sec   730 MBytes  6.13 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  6.60 GBytes  5.67 Gbits/sec  10115             sender
[  5]   0.00-10.00  sec  6.60 GBytes  5.67 Gbits/sec                  receiver

iperf Done.

But when I delete the host-only network and use the bridged network device, the result is the opposite. In that case the faster direction is when the guest is sending data to the host:

root@nextcloud://root
# iperf3 -c rakete              
Connecting to host rakete, port 5201
[  5] local 192.168.132.57 port 32934 connected to 192.168.132.37 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   412 MBytes  3.46 Gbits/sec    0    310 KBytes       
[  5]   1.00-2.00   sec   420 MBytes  3.52 Gbits/sec    0    310 KBytes       
[  5]   2.00-3.00   sec   444 MBytes  3.72 Gbits/sec    0    310 KBytes       
[  5]   3.00-4.00   sec   435 MBytes  3.65 Gbits/sec    0    310 KBytes       
[  5]   4.00-5.00   sec   439 MBytes  3.69 Gbits/sec    0    310 KBytes       
[  5]   5.00-6.00   sec   442 MBytes  3.70 Gbits/sec    0    310 KBytes       
[  5]   6.00-7.00   sec   441 MBytes  3.71 Gbits/sec    0    310 KBytes       
[  5]   7.00-8.00   sec   441 MBytes  3.70 Gbits/sec    0    322 KBytes       
[  5]   8.00-9.00   sec   464 MBytes  3.88 Gbits/sec    0    322 KBytes       
[  5]   9.00-10.00  sec   420 MBytes  3.53 Gbits/sec    0    322 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.26 GBytes  3.66 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  4.26 GBytes  3.66 Gbits/sec                  receiver

iperf Done.
                                                                                                                                                                                                                                                                 
Mi 12. Jan 07:34:10 CET 2022
root@nextcloud://root
# iperf3 -c rakete -R
Connecting to host rakete, port 5201
Reverse mode, remote host rakete is sending
[  5] local 192.168.132.57 port 32938 connected to 192.168.132.37 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   110 MBytes   922 Mbits/sec                  
[  5]   1.00-2.00   sec   110 MBytes   921 Mbits/sec                  
[  5]   2.00-3.00   sec   111 MBytes   929 Mbits/sec                  
[  5]   3.00-4.00   sec   112 MBytes   940 Mbits/sec                  
[  5]   4.00-5.00   sec   111 MBytes   930 Mbits/sec                  
[  5]   5.00-6.00   sec   111 MBytes   935 Mbits/sec                  
[  5]   6.00-7.00   sec   111 MBytes   934 Mbits/sec                  
[  5]   7.00-8.00   sec   112 MBytes   937 Mbits/sec                  
[  5]   8.00-9.00   sec   112 MBytes   940 Mbits/sec                  
[  5]   9.00-10.00  sec   112 MBytes   940 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec    0             sender
[  5]   0.00-10.00  sec  1.09 GBytes   933 Mbits/sec                  receiver

iperf Done.

This is really weird.

I have a solution:

I created a second network adapter as "host-only" and set vmxnet3 as the adapter type in the guest as well. This gives me more than 3 Gbit/s in both directions. I am now using the new interface in the 172.x.x.x address range to mount the NFS share from the host, and the read performance is now very good.
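
For anyone who wants to replicate this: after adding the adapter in the VM settings, the .vmx ends up with lines like these, and the NFS share is then simply mounted via the host's host-only address (the export path here is just an example):

ethernet1.present = "TRUE"
ethernet1.connectionType = "hostonly"
ethernet1.virtualDev = "vmxnet3"

# in the guest
mount -t nfs 172.16.82.1:/srv/share /mnt/share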

VMware doesn't restrict this in any way. You can access the guest from the host and vice versa using either the host's real IP or its IP on the VM network. If that isn't working, it is likely being blocked by a firewall on the host.
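
On a Linux host, the NAT network normally shows up as vmnet8 (host-only as vmnet1), so you can look up the address to target from the guest like this:

# on the host
ip addr show vmnet8

# from the guest, iperf3/ping against that vmnet8 address should work,
# unless a firewall on the host is blocking it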

That being said, when using a vmxnet3 adapter I had the same results in NAT, bridged and host-only.

As a side note, when using the default e1000 adapter, I didn't. Host-only and NAT (when targeting the vmnet address on the host) showed better performance than bridged, or than NAT when targeting the real IP on the host. I should also note that using vmxnet3 produced more than twice the performance of the e1000.

I tried to mirror your methodology, so I used iperf. The guest was EndeavourOS and the host was Arch.

And the funny thing is: I don't even need that network performance anymore. The reason I was looking into this was that my Nextcloud installation in VirtualBox had very poor performance with shared folders, so I had to use NFS between the VirtualBox guest and host to get decent transfer speeds. That's why I used NFS with VMware right from the start.

But now I know that this is not necessary. VMware has excellent performance with shared folders, so there is no need for NFS between guest and host.
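
In a Linux guest with open-vm-tools installed, the shared folders can be mounted with vmhgfs-fuse, roughly like this (mount points are just examples):

# mount all shared folders once
vmhgfs-fuse .host:/ /mnt/hgfs -o allow_other

# or permanently via /etc/fstab
.host:/  /mnt/hgfs  fuse.vmhgfs-fuse  allow_other,defaults  0  0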

I was actually wondering why you were using NFS to transfer data between guest and host. :sweat_smile: