Sunday, June 15, 2025

Veeam Backup & Replication on Linux v13 [Beta]

I have finally found some spare time and decided to test Veeam Backup & Replication on Linux v13 [Beta] in my home lab. It is a BETA, so it is good to test it and be prepared for the final release, even though anything can change before the final release is available.

It is clearly stated that updating or upgrading the beta to newer versions will not be possible, but I'm really curious how Veeam's transition from Windows to Linux is going.

Anyway, let's test it and get a feel for Veeam's future on Linux-based systems.

Saturday, June 14, 2025

PureStorage has 150TB DirectFlash Modules

I have just realized that PureStorage has 150TB DirectFlash Modules.

That got me thinking. 

Flash capacity is increasing year by year, but what are the performance/capacity ratios?

The reason I'm thinking about it is that a poor Tech Designer (like me) needs some rule-of-thumb numbers for capacity/performance planning and sizing.
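As a back-of-the-envelope sketch, the rule of thumb is simply performance divided by capacity. The figures below are pure placeholders (my assumptions, not Pure Storage specifications):

  # Hypothetical illustration only - replace the placeholders with
  # vendor-quoted or measured numbers for a concrete array.
  ARRAY_IOPS=500000      # assumed array-level IOPS at a given I/O size
  RAW_TB=$((20 * 150))   # assumed 20 x 150TB DirectFlash Modules
  echo "$ARRAY_IOPS $RAW_TB" | awk '{printf "%.0f IOPS per raw TB\n", $1/$2}'

With those placeholders, the ratio works out to roughly 167 IOPS per raw TB, and it is this ratio, rather than raw capacity alone, that matters for sizing.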

Virtual NIC Link Speed - is it really speed?

This will be a quick blog post, prompted by another question I received about VMware virtual NIC link speed. In this blog post I’d like to demonstrate that the virtual link speed shown in operating systems is merely a reported value and not an actual limit on throughput.

I have two Linux Mint (Debian-based) systems, mlin01 and mlin02, virtualized on VMware vSphere 8.0.3. Each system has a VMXNET3 NIC. Both virtual machines are hosted on the same ESXi host, so they are not constrained by the physical network. Let's test network bandwidth between these two systems with iperf.
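A minimal test sketch, assuming iperf is installed on both VMs (iperf3 syntax is essentially the same):

  # On mlin02 - run iperf in server mode
  iperf -s

  # On mlin01 - run a 30-second bandwidth test against mlin02
  # (use the IP address if the hostname does not resolve)
  iperf -c mlin02 -t 30

If the reported bandwidth comfortably exceeds the 10 Gbit/s link speed that VMXNET3 typically reports to the guest, the point is proven: the advertised link speed is not a throughput cap.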

Tuesday, June 03, 2025

How to troubleshoot virtual disk high latencies in VMware Virtual Machine

In VMware vSphere environments, even the most critical business applications are often virtualized. Occasionally, application owners may report high disk latency issues. However, disk I/O latency can be a complex topic because it depends on several factors, such as the size of the I/O operations, the read/write ratio, and, of course, the performance of the underlying storage subsystem.

One of the most challenging aspects of any storage troubleshooting is understanding what size of I/O workload is being generated by the virtual machine. Storage workload I/O size is a significant factor in response time; a 4 KB I/O behaves very differently from a 1 MB I/O. Here are examples from my vSAN ESA performance testing.

  • 32k IO, 100% read, 100% random - Read Latency: 2.03 ms Write Latency: 0.00 ms
  • 32k IO, 100% write, 100% random - Read Latency: 0.00 ms Write Latency: 1.74 ms
  • 32k IO, 70% read - 30% write, 100% random - Read Latency: 1.55 ms Write Latency: 1.99 ms
  • 1024k IO, 100% read, 100% sequential - Read Latency: 6.38 ms Write Latency: 0.00 ms
  • 1024k IO, 100% write, 100% sequential - Read Latency: 0.00 ms Write Latency: 8.30 ms
  • 1024k IO, 70% read - 30% write, 100% sequential - Read Latency: 5.38 ms Write Latency: 8.68 ms

You can see that response times vary based on the storage profile. However, application owners very often do not know the storage profile of their application workload and just complain that the storage is slow.
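If the profile is unknown, it can at least be approximated and replayed with a synthetic I/O generator. Here is a minimal sketch of the 32k, 70% read / 30% write, 100% random profile with fio, assuming fio is available in the guest and /tmp/fio.test is a scratch file on the datastore under test (both the tool and the path are my assumptions, not the setup used for the numbers above):

  # Hypothetical fio job approximating the 32k 70/30 random profile
  fio --name=32k-70r30w --filename=/tmp/fio.test --size=10G \
      --rw=randrw --rwmixread=70 --bs=32k \
      --ioengine=libaio --iodepth=32 --direct=1 \
      --runtime=60 --time_based --group_reporting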

As one storage expert (I think it was Howard Marks [1] [2]) once said, there are only two types of storage performance - good enough and not good enough.

Fortunately, on an ESXi host we have a useful tool called vscsiStats. We need to know on which ESXi host the VM is running and SSH into that particular host.

The vSCSI monitoring procedure is:

  1. List all running virtual machines on the particular ESXi host and identify our virtual machine and its identifiers (worldGroupID and virtual SCSI disk handleID)
  2. Start vSCSI statistics collection on the ESXi host
  3. Collect vSCSI statistics histogram data
  4. Stop vSCSI statistics collection

The procedure is documented in VMware KB - Using vscsiStats to collect IO and Latency stats on Virtual Disks 

Let's test it in the lab.
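A minimal sketch of the four steps, assuming SSH access to the ESXi host (12345 is a placeholder worldGroupID, not a value from my lab):

  # 1. List running VMs with their worldGroupID and virtual SCSI disk handleIDs
  vscsiStats -l

  # 2. Start collection for the VM (replace 12345 with the real worldGroupID;
  #    add -i <handleID> to limit collection to a single virtual disk)
  vscsiStats -s -w 12345

  # 3. Print the collected histograms (I/O length, latency, seek distance, and more)
  vscsiStats -p all -w 12345

  # 4. Stop the collection
  vscsiStats -x -w 12345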