Now that I have decided to seriously consider VMware as a replacement for VirtualBox, I have hit my first obstacle: network performance between guest and host.
- guest: Linux (Debian or EndeavourOS, it doesn't matter) on VMware Workstation Pro 16.2
- host: EndeavourOS (IP 192.168.132.37)
I noticed poor NFS performance when the guest reads data from the host; write performance is fine. So after some testing I started using iperf to measure raw network throughput, and that seems to be the culprit.
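For reference, the numbers below come from plain iperf (iperf2; TCP port 5001 in the output is its default). The invocations were roughly like this; the IP is the guest's address from the VMware run:

```shell
# On the receiving side (here: the guest), start an iperf2 TCP server.
# Port 5001 is iperf2's default, matching the output pasted below.
iperf -s

# On the sending side (here: the host), run a 10-second TCP test
# against the receiver's IP.
iperf -c 192.168.132.57 -t 10
```

For the reverse direction, the roles are simply swapped (server on the host, client in the guest).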
Here is some data. First the good numbers, with the guest in VirtualBox:
VIRTUALBOX
guest as iperf server:
------------------------------------------------------------
Client connecting to 192.168.132.32, TCP port 5001
TCP window size: 2.26 MByte (default)
------------------------------------------------------------
[ 3] local 192.168.132.37 port 34128 connected with 192.168.132.32 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 4.27 GBytes 3.66 Gbits/sec
guest as iperf client:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.132.37 port 5001 connected with 192.168.132.32 port 46620
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 12.0 GBytes 10.3 Gbits/sec
And now VMware:
VMWARE
guest as iperf server:
------------------------------------------------------------
Client connecting to 192.168.132.57, TCP port 5001
TCP window size: 986 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.132.37 port 43776 connected with 192.168.132.57 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.10 GBytes 942 Mbits/sec
guest as iperf client:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.132.37 port 5001 connected with 192.168.132.57 port 43776
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 4.56 GBytes 3.92 Gbits/sec
It looks like the VMware network performance is only about a quarter to a third of the VirtualBox performance (942 Mbit/s vs. 3.66 Gbit/s with the guest as server, 3.92 vs. 10.3 Gbit/s with the guest as client). Notably, 942 Mbit/s is almost exactly gigabit line rate, as if traffic towards the guest were capped at 1 Gbit/s.
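Working out the ratios from the iperf numbers above (just a quick sanity check on the "quarter to a third" claim, not part of the measurement):

```python
# iperf results from the runs above, in Gbit/s.
vbox_guest_as_server = 3.66   # host -> guest, VirtualBox
vbox_guest_as_client = 10.3   # guest -> host, VirtualBox
vmw_guest_as_server = 0.942   # host -> guest, VMware (942 Mbit/s)
vmw_guest_as_client = 3.92    # guest -> host, VMware

# VMware throughput relative to VirtualBox, per direction.
print(f"host->guest: {vmw_guest_as_server / vbox_guest_as_server:.0%}")  # 26%
print(f"guest->host: {vmw_guest_as_client / vbox_guest_as_client:.0%}")  # 38%
```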
This is also reflected in the NFS performance I measured with fio: with the guest in VirtualBox I get read speeds of roughly 300 MB/s, with the guest in VMware only about 100 MB/s, while both show write speeds of around 300 MB/s.
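For completeness, the fio read test was along these lines (the mount point, file size, and block size here are placeholders, not the exact values from my runs):

```shell
# Sequential read test against a file on the NFS mount.
# Path and sizes are illustrative.
fio --name=nfs-read --directory=/mnt/nfs --rw=read \
    --bs=1M --size=4G --ioengine=libaio --direct=1
```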
I tried several things in VMware:
- using vmxnet3 network adapter
- changing to mtu 9000
- tested several different nfs rsize/wsize values
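Concretely, those attempts looked roughly like this (interface name, export path, and the exact rsize/wsize values are from my setup and may differ on yours):

```shell
# vmxnet3 is selected in the VM's .vmx file:
#   ethernet0.virtualDev = "vmxnet3"

# Jumbo frames on the guest interface (the host side must match):
ip link set dev ens160 mtu 9000

# NFS mount with explicit rsize/wsize (I tried several values):
mount -t nfs -o rsize=1048576,wsize=1048576 192.168.132.37:/export /mnt/nfs
```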
But I just don't get it: the read speed in VMware is always limited to about 100 MB/s, even though the vmxnet3 adapter reports a 10000 Mbps link:
# ethtool ens160
Settings for ens160:
Supported ports: [ TP ]
Supported link modes: 1000baseT/Full
10000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: 10000Mb/s
Duplex: Full
Auto-negotiation: off
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
MDI-X: Unknown
Supports Wake-on: uag
Wake-on: d
Link detected: yes
What is going wrong here? Any ideas?