I blogged about How to update ESXi via CLI back in 2016. John Nicholson recently published a blog post about how to deal with the new Broadcom Token when updating ESXi with ESXCLI. If you are interested in this topic, read his blog post Updating ESXi using ESXCLI + Broadcom Tokens.
I believe Next Generation Computing is Software Defined Infrastructure on top of robust physical infrastructure. You can ask me anything about enterprise infrastructure (virtualization, compute, storage, network) and we can discuss it deeply on this blog. Don't hesitate to contact me.
Monday, April 14, 2025
Updating ESXi using ESXCLI + Broadcom Tokens
Friday, April 11, 2025
VMware ESXi 8.0 Update 3e Release Notes - VMware ESXi free again?
VMware ESXi 8.0 Update 3e (Build 24674464) was released on 10 April 2025. The release notes are available here.
When I went through these release notes, I saw a very interesting statement ...
Broadcom makes available the VMware vSphere Hypervisor version 8, an entry-level hypervisor. You can download it free of charge from the Broadcom Support Portal - here.

To be honest, I don't know whether the VMware community and home labbers will stay or migrate back to ESXi, and how long Broadcom will provide ESXi for free. It seems that the relationship between Broadcom and the VMware community was broken, and trust is a very important factor when investing time in a technology.
Besides the statement about making ESXi free again, I went through the fixes and categorized the improvements/fixes. The majority of them fell into the STORAGE category, followed by the COMPUTE/Hypervisor category and the NETWORKING category.
Full categorization is ...
- 34 fixes in the storage category
- 12 fixes in the compute category
- 7 fixes in the networking category
- 6 fixes in the manageability category
- 2 fixes in the security category
The prevalence of storage improvements/fixes does not surprise me. Storage is the most critical component of the data center.
Below are the improvements/fixes listed by category and sub-category.
STORAGE (34)
NVMeoF/TCP improvements/fixes (5)
iSCSI improvements/fixes (4)
Fibre Channel improvements/fixes (2)
vSAN improvements/fixes (8)
SCSI improvements/fixes (2)
Raw Device Mapping (RDM) improvements/fixes (1)
VMFS improvements/fixes (2)
VVOLs improvements/fixes (6)
VADP (vSphere Storage APIs - Data Protection) improvements/fixes (2)
UNMAP improvements/fixes (1)
Storage I/O stack improvements/fixes (1)
COMPUTE (12)
Hypervisor improvements/fixes (12)
NETWORKING (7)
vSwitch improvements/fixes (4)
TCP improvements/fixes (1)
NVIDIA nmlx5 NIC driver improvements/fixes (2)
SECURITY (2)
Firewall improvements/fixes (2)
MANAGEABILITY (6)
Logging improvements/fixes (2)
Identity Management improvements/fixes (2)
Monitoring improvements/fixes (2)
Sunday, April 06, 2025
Network throughput and CPU efficiency of FreeBSD 14.2 and Debian 10.2 in VMware - PART 1
I'm a long-time FreeBSD user (since FreeBSD 2.2.8 in 1998), and for all these 27 years I have lived with the impression that FreeBSD has the best TCP/IP network stack in the industry.
Recently, I was blogging about testing the network throughput of a 10 Gb line, where I used a default installation of FreeBSD 14.2 with iperf and realized that I need at least 4, but better 8, vCPUs in a VMware virtual machine to achieve more than 10 Gb of network throughput. A colleague of mine told me that he does not see such huge CPU requirements in Debian, and that information definitely caught my attention. That's the reason I decided to test it.
TCP throughput tests were performed between two VMs on one VMware ESXi host; therefore, the network traffic does not need to traverse the physical network.
The physical server I use for these tests has an Intel Xeon E5-2680 v4 CPU @ 2.40 GHz. This CPU was introduced by Intel in 2016, so it is not the latest CPU technology, but both operating systems have the same conditions.
VMs were provisioned on the VMware ESXi 8.0.3 hypervisor, which is the latest version at the time of writing this article.
The VM hardware used for the iperf tests is listed below (a generic iperf invocation is sketched right after the list):
- 1 vCPU (artificially limited by hypervisor to 2000 MHz)
- 2 GB RAM
- vNIC type is vmxnet3
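For reference, a basic iperf2 run for this kind of two-VM test could look like the sketch below; the IP address is illustrative and the exact options used in the tests may differ.
# On the receiving VM: start the iperf2 server
iperf -s
# On the sending VM: a single TCP stream for 60 seconds, reporting every 10 seconds (IP is illustrative)
iperf -c 192.168.1.12 -t 60 -i 10
# In a second session, watch CPU usage while the test runs
top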
Test results of FreeBSD 14.2
Test results of Debian 12.10
Comparison of default installations
The network throughput of a default installation of Debian 12.10 is 7x better than that of a default installation of FreeBSD 14.2. We can also say that Debian requires 7x fewer CPU cycles per bit/s.
FreeBSD Network tuning
In Debian, open-vm-tools 12.2.0 is automatically installed as part of the default installation.
FreeBSD does not install open-vm-tools automatically, but the vmxnet driver is included in the kernel; therefore, open-vm-tools should not be necessary. Anyway, I installed open-vm-tools and explicitly enabled vmxnet in rc.conf, but there was no improvement in network throughput, which confirms that open-vm-tools is not necessary for optimal vmxnet networking.
So that is not the culprit. What else can we do to improve network throughput?
Network Buffers
We can try increasing the network buffers.
What is the default setting of kern.ipc.maxsockbuf?
root@VM-CUST-0001-192-168-1-11:~ # sysctl -a | grep kern.ipc.maxsockbuf
kern.ipc.maxsockbuf: 2097152
What is the default setting of net.inet.tcp.sendspace?
root@VM-CUST-0001-192-168-1-11:~ # sysctl -a | grep net.inet.tcp.sendspace
net.inet.tcp.sendspace: 32768
What is the default setting of net.inet.tcp.recvspace?
root@VM-CUST-0001-192-168-1-11:~ # sysctl -a | grep net.inet.tcp.recvspace
net.inet.tcp.recvspace: 65536
Let's increase these values in /etc/sysctl.conf
# Increase maximum buffer size
kern.ipc.maxsockbuf=8388608
# Increase send/receive buffer sizes
net.inet.tcp.sendspace=4194304
net.inet.tcp.recvspace=4194304
and reboot the system.
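As a side note, the same values can also be applied at runtime with sysctl for quick experiments; they are lost at reboot unless they are also present in /etc/sysctl.conf.
# Apply the buffer sizes at runtime (temporary until reboot)
sysctl kern.ipc.maxsockbuf=8388608
sysctl net.inet.tcp.sendspace=4194304
sysctl net.inet.tcp.recvspace=4194304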
When I test iperf with these deeper network buffers, I can achieve 1.2 Gb/s, which is even slightly worse than the throughput with default settings (1.34 Gb/s) and far below the Debian throughput (9.5 Gb/s). Thus, tuning the network buffers does not help, and I reverted the settings to defaults.
Jumbo Frames
We can try enabling Jumbo Frames.
I have Jumbo Frames enabled on the physical network, so I can try enabling Jumbo Frames in FreeBSD and test the impact on network throughput.
Jumbo Frames are enabled in FreeBSD by the following command
ifconfig vmx0 mtu 9000
We can test if Jumbo Frames are available between VM01 and VM02.
ping -s 8972 -D [IP-OF-VM02]
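For reference, Linux ping uses a different flag to set the Don't Fragment bit, so the equivalent check from the Debian VM would be (placeholder IP, mirroring the command above):
ping -M do -s 8972 [IP-OF-VM01]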
When I test iperf with Jumbo Frames enabled, I can achieve 5 Gb/s, which is significantly (3.7x) higher than the throughput with default settings (1.34 Gb/s), but it is still less than the Debian throughput (9.5 Gb/s) with default settings (MTU 1,500). It is worth mentioning that Jumbo Frames helped not only with higher throughput but also with lower CPU usage.
I have also tested iperf throughput on Debian with Jumbo Frames enabled and, interestingly enough, I got the same throughput (9.5 Gb/s) as I was able to achieve without Jumbo Frames, so increasing the MTU on Debian did not have any positive impact on network throughput or CPU usage.
I reverted the MTU setting to the default (MTU 1,500) and tried another performance tuning.
Enable TCP Offloading
We can enable TCP Offloading capabilities. TXCSUM, RXCSUM, TSO4, and TSO6 are enabled by default, but LRO (Large Receive Offload) is not enabled.
Let's enable LRO and test the impact on iperf throughput.
ifconfig vmx0 txcsum rxcsum tso4 tso6 lro
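To double-check which offload capabilities are actually active, you can inspect the interface options line; the exact output depends on the driver, but LRO should now be listed.
ifconfig vmx0 | grep options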
When I test iperf with LRO enabled, I can achieve 7.29 Gb/s, which is significantly better than the throughput with default settings (1.34 Gb/s) and even better than the Jumbo Frames impact (5 Gb/s). But it is still less than the Debian throughput (9.5 Gb/s) with default settings.
Combination of TCP Offloading (LRO) and Jumbo Frames
What if the impact of LRO and Jumbo Frames is combined?
ifconfig vmx0 mtu 9000 txcsum rxcsum tso4 tso6 lro
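With both LRO and Jumbo Frames enabled, FreeBSD reaches 8.9 Gb/s (detailed in the conclusion below). Note that these ifconfig changes are runtime-only; to keep them across reboots, they would typically go into /etc/rc.conf. A minimal sketch, assuming the vmx0 interface with DHCP addressing (verify on your system):
# Persist MTU and offload settings for vmx0 in /etc/rc.conf (interface name and DHCP are assumptions)
ifconfig_vmx0="DHCP mtu 9000 txcsum rxcsum tso4 tso6 lro"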
Conclusion
Network throughput
Network throughput within a single VLAN between two VMs with a default installation of Debian 12.10 is almost 10 Gb/s (9.5 Gb/s) with ~50% usage of a single CPU @ 2 GHz.
Network throughput within a single VLAN between two VMs with a default installation of FreeBSD 14.2 is 1.34 Gb/s with ~40% usage of a single CPU @ 2 GHz.
The Debian 12.10 default installation has 7x higher throughput than the default installation of FreeBSD 14.2.
Enabling LRO without Jumbo Frames increases FreeBSD network throughput to 7.29 Gb/s.
Enabling Jumbo Frames on FreeBSD increases throughput to 5 Gb/s. Enabling Jumbo Frames in the Debian configuration does not increase throughput.
The combination of Jumbo Frames and LRO increases FreeBSD network throughput to 8.9 Gb/s, which is close to the 9.5 Gb/s of the default Debian system, but still a lower result than the network throughput on Debian.
CPU usage
In terms of CPU, Debian uses ~50% CPU on the iperf client and ~60% on the iperf server.
FreeBSD with LRO and without Jumbo Frames uses ~20% CPU on the iperf client and ~25% on the iperf server. When LRO is used in combination with Jumbo Frames, it uses ~25% CPU on the iperf client and ~30% on the iperf server, but it can achieve 20% higher throughput.
Which system has the better networking stack?
Debian can achieve higher throughput even without Jumbo Frames (9.5 Gb/s vs 7.29 Gb/s), but at the cost of higher CPU usage (50/60% vs 20/25%). When Jumbo Frames can be enabled, the throughput is similar (9.5 Gb/s vs 8.9 Gb/s), but with significantly higher CPU usage on Debian (50/60% vs 25/30%).
Key findings
Debian has all TCP Offloading capabilities (LRO, TXCSUM, RXCSUM, TSO) enabled in its default installation. LRO being disabled in the default FreeBSD installation is the main reason why FreeBSD has poor VMXNET3 network throughput out of the box. When LRO is enabled, FreeBSD network throughput is pretty decent, but still lower than Debian's.
It is worth saying that LRO is not recommended on security appliances. While beneficial for performance, LRO can interfere with packet inspection, firewalls, or VPNs, which is why it is often disabled in virtual appliances like firewalls (e.g., pfSense, FortiGate, etc.). That's probably the reason why it is disabled by default on FreeBSD.
Jumbo Frames are another help for FreeBSD but do not help Debian at all, which is interesting. I also did more testing in my home lab with and without LRO and MTU 9000, and Debian with a single vCPU delivers throughput between 14.8 and 17.5 Gb/s, where LRO, which is used by default in Debian, improves performance by 8%, and MTU 9000 adds another 8%. Using more parallel iperf threads in the individual tests does not help increase throughput on Debian.
The combination of LRO and Jumbo Frames boosts FreeBSD network performance to 8.9 Gb/s, but Debian can achieve 9.5 Gb/s without Jumbo Frames. I will try to open a discussion about this behavior in FreeBSD and Linux forums to understand further details. I do not understand why enabling Jumbo Frames on Debian does not have a positive impact on network throughput and CPU usage.
Sunday, March 30, 2025
Network benchmark (iperf) of 10Gb Data Center Interconnect
I wanted to test a 10Gb Ethernet link I got as a data center interconnect between two data centers. I generally do not trust anything I have not tested.
If you want to test something, it is important to have a good testing methodology and toolset.
Toolset
OS: FreeBSD 14.2 is IMHO the best x86-64 operating system in terms of networking. Your mileage may vary.
Network benchmark testing tool: iperf (iperf2) is a well-known tool to benchmark network performance and bandwidth.
Hypervisor: VMware ESXi 8.0.3 is the best-in-class hypervisor to test various virtual machines.
Methodology
I have used two Virtual Machines. In the end, I will test network throughput between two VMs, where one VM is at each end of the network link (DC Interconnect). However, before the final test (Test 4) of the DC interconnect throughput, I will test network throughput (Test 1) within the same VM to test localhost throughput, (Test 2) between VMs within a single hypervisor (ESXi) host to avoid using the physical network, and (Test 3) between VMs across two hypervisors (ESXi) within a single VLAN in one datacenter to test local L2 throughput.
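A generic iperf2 invocation for such tests is sketched below; the address and the number of parallel streams (-P) are illustrative and were varied per test.
# On the receiving VM: start the iperf2 server
iperf -s
# On the sending VM: 4 parallel TCP streams for 60 seconds, reporting every 10 seconds (IP is illustrative)
iperf -c 10.0.0.2 -P 4 -t 60 -i 10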
Results
Test 1: Network throughput within the same VM to test localhost throughput
VMware Virtual Machines have the following hardware specification:
- 8 vCPU (INTEL XEON GOLD 6544Y @ 3.6 GHz)
- 8 GB RAM
- 8 GB vDisk
- 1 vNIC (vmxnet)
Test 2: Network throughput between VMs within hypervisor (no physical network)
VMware Virtual Machines have the following hardware specification:
- 8 vCPU (INTEL XEON GOLD 6544Y @ 3.6 GHz)
- 8 GB RAM
- 8 GB vDisk
- 1 vNIC (vmxnet)
Test 3: Network throughput between VMs across two hypervisors within VLAN (25Gb switch ports) in one DC
VMware Virtual Machines have the following hardware specification:
- 8 vCPU (INTEL XEON GOLD 6544Y @ 3.6 GHz)
- 8 GB RAM
- 8 GB vDisk
- 1 vNIC (vmxnet) - connected to 25Gb physical switch ports
Test 4: Network throughput between VMs across two hypervisors across two interconnected VLANs across two DCs
VMware Virtual Machines have the following hardware specification:
- 8 vCPU (INTEL XEON GOLD 6544Y @ 3.6 GHz)
- 8 GB RAM
- 8 GB vDisk
- 1 vNIC (vmxnet)
Conclusion
Network throughput requires CPU cycles; therefore, the number of CPU cores matters.
- A 1 vCPU VM can achieve network traffic up to 5.83 Gb/s. During such network traffic, the CPU is fully used (100% usage), and the maximum single iperf connection throughput of 6.65 Gb/s cannot be achieved due to the CPU constraint.
- A 2 vCPU VM can achieve network traffic up to 6.65 Gb/s. During such network traffic, the CPU is fully used (100% usage).
- A 4 vCPU VM with -P 2 is necessary to achieve network traffic up to 10 Gb/s.
- An 8 vCPU VM with -P 4 is necessary to achieve network traffic over 10 Gb/s. These 8 threads can generate 20 Gb/s, which is good enough to test my 10 Gb/s data center interconnect.
What real network throughput did I measure during this testing exercise?
Thursday, March 20, 2025
VMware PowerCLI (PowerShell) on Linux
VMware PowerCLI is a very handy and flexible automation tool that allows automation of almost all VMware features. It is based on Microsoft PowerShell. I do not have any Microsoft Windows system in my home lab, but I would like to use Microsoft PowerShell. Fortunately, Microsoft PowerShell Core is available for Linux. Here is my latest runbook on how to leverage PowerCLI on a Linux management workstation using Docker application packaging.
Install Docker in your Linux Workstation
This is out of scope of this runbook.
Pull the official and verified Microsoft PowerShell image
sudo docker pull mcr.microsoft.com/powershell:latest
Now you can run the PowerShell container interactively (-i) with an allocated pseudo-TTY (-t). Option --rm stands for "Automatically remove the container when it exits".
List container images
sudo docker image ls
Run powershell container
sudo docker run --rm -it mcr.microsoft.com/powershell
You can also skip the explicit image pull and just run the PowerShell container; the image will be pulled automatically on the first run.
Install PowerCLI in PowerShell
Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Force
Allow Untrusted Certificates
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
Now you can connect to vCenter and list VMs
Connect-VIServer -Server <vcenter-server> -User <username> -Password <password>
Get-VM
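If you prefer to run a PowerCLI script non-interactively instead of an interactive session, one possible approach is to mount a local directory with your scripts into the container. The path and script name below are just placeholders, and because of --rm the script itself would need to install or import VMware.PowerCLI (or you can build a custom image with the module pre-installed).
# Run a hypothetical script mounted from the host (placeholder path and file name)
sudo docker run --rm -v "$PWD/scripts:/scripts" mcr.microsoft.com/powershell pwsh -File /scripts/report-vms.ps1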
Saturday, March 15, 2025
How to update ESXi with unsupported CPU?
I have old, unsupported servers in my lab running ESXi 8.0.3. In such a configuration, you cannot update ESXi using the default GUI procedure.
[Screenshot: vSphere Cluster Update doesn't allow remediation]
[Screenshot: ESXi host shows unsupported CPU]
The solution is to allow legacy CPUs and update ESXi from the shell with esxcli.
Allow legacy CPU
The option allowLegacyCPU is not available in the ESXi GUI (DCUI or vSphere Client). It must be enabled using the ESXi shell or SSH. Below is the command to allow legacy CPU.
esxcli system settings kernel set -s allowLegacyCPU -v TRUE
You can verify it with the following command ...
esxcli system settings kernel list | grep allowLegacyCPU
If the above procedure fails, the other option is to edit the file /bootbank/boot.cfg and add allowLegacyCPU=true to the end of the kernelopt line.
In my case, it looks like ...
kernelopt=autoPartition=FALSE allowLegacyCPU=true
After modifying /bootbank/boot.cfg, the ESXi configuration should be saved to make the change persistent across reboots.
/sbin/auto-backup.sh
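Before rebooting, a quick sanity check that the option really ended up on the kernelopt line can save you a second reboot:
grep kernelopt /bootbank/boot.cfg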
A reboot of ESXi is obviously required to make the kernel option active.
After the reboot, you can follow the standard system update procedure using the ESXCLI method, as documented below.
ESXi update procedure (ESXCLI method)
- Download the appropriate ESXi offline depot. You can find the depot URL in the Release Notes of the particular ESXi version. You will need Broadcom credentials to download it from the Broadcom support site.
- Upload the ESXi offline depot to some datastore (using the Datastore File Browser, scp, WinSCP, etc.)
- in my case /vmfs/volumes/vsanDatastore/TMP
- List profiles in ESXi depot
- esxcli software sources profile list -d /vmfs/volumes/vsanDatastore/TMP/VMware-ESXi-8.0U3d-24585383-depot.zip
- Update ESXi to particular profile with no hardware warning
- esxcli software profile update -d /vmfs/volumes/vsanDatastore/TMP/VMware-ESXi-8.0U3d-24585383-depot.zip -p ESXi-8.0U3d-24585383-no-tools --no-hardware-warning
- Reboot ESXi
- reboot
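After the host comes back up, you can verify which image profile is now installed:
esxcli software profile get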
Hope this helps other folks in their home labs with unsupported CPUs.
Friday, February 07, 2025
Broadcom (VMware) Useful Links for Technical Designer and/or Architect
A lot of URLs have changed after the Broadcom acquisition of VMware. That's the reason I have started to document some of the links that are useful for me.
VMware Product Configuration Maximums - https://configmax.broadcom.com (aka https://configmax.vmware.com)
Network (IP) ports Needed by VMware Products and Solutions - https://ports.broadcom.com/
VMware Compatibility Guide - https://compatibilityguide.broadcom.com/ (aka https://www.vmware.com/go/hcl)
VMware Product Lifecycle - https://support.broadcom.com/group/ecx/productlifecycle (aka https://lifecycle.vmware.com/)
Product Interoperability Matrix - https://interopmatrix.broadcom.com/Interoperability
VMware Hands-On Lab - https://labs.hol.vmware.com/HOL/catalog
Broadcom (VMware) Education / Learning - https://www.broadcom.com/education
VMware Validated Solutions - https://vmware.github.io/validated-solutions-for-cloud-foundation/
If you are an independent consultant and have to open a support ticket related to VMware Education or Certification, you can use the form at https://broadcomcms-software.wolkenservicedesk.com/web-form
VMware Health Analyzer
- Full VHA download: https://docs.broadcom.com/docs/VHA-FULL-OVF10
- Collector VHA download: https://docs.broadcom.com/docs/VHA-COLLECTOR-OVF10
- Full VHA license Register Tool: https://pstoolhub.broadcom.com/
Tuesday, February 04, 2025
How is my Microsoft Windows OS syncing time?
This is a very short post with the procedure for checking the time synchronization of a Microsoft Windows OS in a VMware virtual machine.
There are two options for how time can be synchronized:
- via NTP
- via VMware Tools with ESXi host where VM is running
The command w32tm /query /status shows the current configuration of time sync.
Microsoft Windows [Version 10.0.20348.2582]
(c) Microsoft Corporation. All rights reserved.

C:\Users\david.pasek> w32tm /query /status
Leap Indicator: 0(no warning)
Stratum: 6 (secondary reference - syncd by (S)NTP)
Precision: -23 (119.209ns per tick)
Root Delay: 0.0204520s
Root Dispersion: 0.3495897s
ReferenceId: 0x644D010B (source IP: 10.77.1.11)
Last Successful Sync Time: 2/4/2025 10:14:10 AM
Source: DC02.example.com
Poll Interval: 7 (128s)
If the Windows OS is connected to Active Directory (which is my case), it synchronizes time with AD via NTP by default. This is visible in the output of the command w32tm /query /status.
You are dependent on Active Directory Domain Controllers; therefore, correct time on the Active Directory Domain Controllers is crucial. I was blogging about how to configure time in a virtualized Active Directory Domain Controller back in 2011. It is a very old post, but it should still work.
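If you need to dig deeper, a few additional w32tm commands are useful for checking the time source and forcing a resync against the domain hierarchy (standard w32tm options, run from an elevated prompt):
w32tm /query /source
w32tm /query /peers
w32tm /config /syncfromflags:domhier /update
w32tm /resync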
To check whether VMware Tools is syncing time with the ESXi host, use the following command
C:\>"c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync status
Disabled
VMware Tools time sync is disabled by default, which is the VMware best practice. It is highly recommended not to synchronize time with the underlying ESXi host and to leverage NTP sync over the network with a trusted time provider. This will help you in case someone makes a configuration mistake and time is not configured properly on a particular ESXi host.
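For completeness, the same toolbox command can toggle the setting if you ever need to, although keeping it disabled is the recommended practice described above (assuming the default VMware Tools install path):
"c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync enable
"c:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync disable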
Hope you find this useful.