In VMware vSphere environments, even the most critical business applications are often virtualized. Occasionally, application owners report high disk latency. However, disk I/O latency is a complex topic because it depends on several factors: the size of the I/O operations, the mix of reads and writes, and of course the performance of the underlying storage subsystem.
One of the most challenging aspects of storage troubleshooting is understanding what I/O sizes the virtual machine's workload is generating, because I/O size is a significant factor in response time: a 4 KB I/O and a 1 MB I/O complete in very different times. Here are examples from my vSAN ESA performance testing.
- 32k IO, 100% read, 100% random: Read Latency 2.03 ms, Write Latency 0.00 ms
- 32k IO, 100% write, 100% random: Read Latency 0.00 ms, Write Latency 1.74 ms
- 32k IO, 70% read / 30% write, 100% random: Read Latency 1.55 ms, Write Latency 1.99 ms
- 1024k IO, 100% read, 100% sequential: Read Latency 6.38 ms, Write Latency 0.00 ms
- 1024k IO, 100% write, 100% sequential: Read Latency 0.00 ms, Write Latency 8.30 ms
- 1024k IO, 70% read / 30% write, 100% sequential: Read Latency 5.38 ms, Write Latency 8.68 ms
You can see that response times vary with the storage profile. However, application owners very often do not know the storage profile of their application workload and simply complain that the storage is slow.
As one storage expert (I think it was Howard Marks [1] [2]) once said, there are only two types of storage performance: good enough and not good enough. Fortunately, on an ESXi host we have a useful tool called vscsiStats. We need to know which ESXi host the VM is running on and SSH into that particular host.
The vSCSI monitoring procedure is:
- List all running virtual machines on a particular ESXi host and identify our virtual machine and its identifiers (worldGroupID and virtual SCSI disk handleID)
- Start vSCSI statistics collection on the ESXi host
- Collect vSCSI statistics histogram data
- Stop vSCSI statistics collection
The procedure is documented in the VMware KB article "Using vscsiStats to collect IO and Latency stats on Virtual Disks".
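The four steps above can be condensed into one short session. The sketch below uses hypothetical placeholders in angle brackets (substitute the values printed by vscsiStats -l); the commands are only echoed here, since vscsiStats exists only on an ESXi host.

```shell
# Sketch of the whole vscsiStats workflow. <worldGroupID> and <handleID>
# are placeholders; take the real values from 'vscsiStats -l' output.
WGID="<worldGroupID>"        # e.g. the worldGroupID of our VM
HANDLE="<handleID>"          # e.g. the Virtual SCSI Disk handleID
echo "vscsiStats -l"                              # 1) list VMs and disk handles
echo "vscsiStats -s -w ${WGID} -i ${HANDLE}"      # 2) start collection
echo "vscsiStats -p all -w ${WGID} -i ${HANDLE}"  # 3) print all histograms
echo "vscsiStats -x -w ${WGID}"                   # 4) stop collection
```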
Let's test it in the lab.
Step 1 - List running VMs
To list all running virtual machines and their associated virtual disks, use the command:
vscsiStats -l
Here’s an example output from my home lab ESXi host:
[root@esx22:~] vscsiStats -l
Virtual Machine worldGroupID: 30651107, Virtual Machine Display Name: wireguard-client.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/wireguard-client.home.uw.cz/wireguard-client.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 131645519331074141 (scsi0:0)
}
Virtual Machine worldGroupID: 41144908, Virtual Machine Display Name: backup.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/91145d3b-5f74b29b/backup.home.uw.cz/backup.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 176716042846871651 (scsi0:0)
Virtual SCSI Disk handleID: 176716042851065956 (scsi0:1)
}
Virtual Machine worldGroupID: 45401459, Virtual Machine Display Name: XCP-NG-01, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/XCP-NG-01/XCP-NG-01.vmx, {
Virtual SCSI Disk handleID: 194997790185627775 (scsi0:0)
}
Virtual Machine worldGroupID: 46468259, Virtual Machine Display Name: XCP-NG-02, Virtual Machine Config File: /vmfs/volumes/673aff83-c6e32748-2f0b-90b11c142bba/XCP-NG-02/XCP-NG-02.vmx, {
Virtual SCSI Disk handleID: 199579661297000588 (scsi0:0)
}
Virtual Machine worldGroupID: 30833619, Virtual Machine Display Name: freebsd02.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/freebsd02.home.uw.cz/freebsd02.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 132429393812267102 (scsi0:0)
}
Virtual Machine worldGroupID: 30833728, Virtual Machine Display Name: freebsd01.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/freebsd01-home.uw.cz/freebsd01-home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 132429861963702367 (scsi0:0)
}
Virtual Machine worldGroupID: 2533704, Virtual Machine Display Name: mwin01.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/mwin01.home.uw.cz/mwin01.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 10882184407687229 (scsi0:0)
}
Virtual Machine worldGroupID: 2535826, Virtual Machine Display Name: vCLS-4c4c4544-0050-5810-8033-c4c04f48354a, Virtual Machine Config File: /var/run/crx/infra/vCLS-4c4c4544-0050-5810-8033-c4c04f48354a/vCLS-4c4c4544-0050-5810-8033-c4c04f48354a.vmx, {
}
Virtual Machine worldGroupID: 2536112, Virtual Machine Display Name: mwin02.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/mwin02.home.uw.cz/mwin02.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 10892526688936064 (scsi0:0)
}
Virtual Machine worldGroupID: 47892506, Virtual Machine Display Name: ns1-old.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/673aff83-c6e32748-2f0b-90b11c142bba/ns1-new.home.uw.cz/ns1-new.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 205696755583426714 (scsi0:0)
}
Virtual Machine worldGroupID: 43725944, Virtual Machine Display Name: TEST-IPv6-01.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/91145d3b-5f74b29b/TEST-IPv6-01.home.uw.cz/TEST-IPv6-01.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 187801508056670341 (scsi0:0)
}
Virtual Machine worldGroupID: 47988101, Virtual Machine Display Name: ns1.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/673aff83-c6e32748-2f0b-90b11c142bba/ns1.home.uw.cz/ns1.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 206107332982087839 (scsi0:0)
}
Virtual Machine worldGroupID: 48057657, Virtual Machine Display Name: mlin02.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/mlin02.home.uw.cz/mlin02.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 206406082317263017 (scsi0:0)
}
Virtual Machine worldGroupID: 48094357, Virtual Machine Display Name: mlin01.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/mlin01.home.uw.cz/mlin01.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 206563699027091627 (scsi0:0)
}
Virtual Machine worldGroupID: 2099359, Virtual Machine Display Name: pfsense01.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/pfsense01.home.uw.cz/pfsense01.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 9016691132473344 (scsi0:0)
}
Virtual Machine worldGroupID: 2099608, Virtual Machine Display Name: vc01.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/vc01.home.uw.cz/vc01.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 9017756284362755 (scsi0:0)
Virtual SCSI Disk handleID: 9017756288557060 (scsi0:1)
Virtual SCSI Disk handleID: 9017756292751365 (scsi0:2)
Virtual SCSI Disk handleID: 9017756296945670 (scsi0:3)
Virtual SCSI Disk handleID: 9017756301139975 (scsi0:4)
Virtual SCSI Disk handleID: 9017756305334280 (scsi0:5)
Virtual SCSI Disk handleID: 9017756309528585 (scsi0:6)
Virtual SCSI Disk handleID: 9017756313722890 (scsi0:8)
Virtual SCSI Disk handleID: 9017756317917195 (scsi0:9)
Virtual SCSI Disk handleID: 9017756322111500 (scsi0:10)
Virtual SCSI Disk handleID: 9017756326305805 (scsi0:11)
Virtual SCSI Disk handleID: 9017756330500110 (scsi0:12)
Virtual SCSI Disk handleID: 9017756334694415 (scsi0:13)
Virtual SCSI Disk handleID: 9017756338888720 (scsi0:14)
Virtual SCSI Disk handleID: 9017756343083025 (scsi0:15)
Virtual SCSI Disk handleID: 9017756347277330 (scsi1:0)
Virtual SCSI Disk handleID: 9017756351471635 (scsi1:1)
}
Virtual Machine worldGroupID: 44052000, Virtual Machine Display Name: fbsd-test01, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/fbsd-test01/fbsd-test01.vmx, {
Virtual SCSI Disk handleID: 189201912208302199 (scsi0:0)
}
Virtual Machine worldGroupID: 29499678, Virtual Machine Display Name: openvpn-client-c4c.home.uw.cz, Virtual Machine Config File: /vmfs/volumes/6458dc3f-7bee0724-076b-90b11c142bbc/openvpn-client-c4c.home.uw.cz/openvpn-client-c4c.home.uw.cz.vmx, {
Virtual SCSI Disk handleID: 126700160842473557 (scsi0:0)
}
[root@esx22:~]
In our test case, we will monitor the server fbsd-test01, whose identifiers appear in the vscsiStats -l output above (worldGroupID 44052000, handleID 189201912208302199).
On that FreeBSD server, we run fio (the Flexible I/O Tester) to generate disk traffic with a 70/30 read/write ratio, random access, and a 4 KB I/O size for 600 seconds (10 minutes) with 4 jobs (also known as workers or threads). In this particular case we know our storage profile, so we can easily validate how vscsiStats works. Below is the fio command we run in our test virtual machine.
dpasek@fbsd-test01:~ $
fio --name=randrw70 --rw=randrw --rwmixread=70 --bs=4k --size=1G --numjobs=4 --time_based --runtime=600 --iodepth=16 --filename=/tmp/test.file --direct=1
randrw70: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=16
...
fio-3.38
Starting 4 processes
note: both iodepth >= 1 and synchronous I/O engine are selected, queue depth will be capped at 1
note: both iodepth >= 1 and synchronous I/O engine are selected, queue depth will be capped at 1
note: both iodepth >= 1 and synchronous I/O engine are selected, queue depth will be capped at 1
note: both iodepth >= 1 and synchronous I/O engine are selected, queue depth will be capped at 1
Jobs: 4 (f=1): [m(4)][17.7%][r=6586KiB/s,w=2774KiB/s][r=1646,w=693 IOPS][eta 08m:14s]
Step 2 - Start vSCSI statistics collection
Now we can start statistics collection on the ESXi host with the command
vscsiStats -s -w <worldGroupID> -i <vscsi>
[root@esx22:~]
vscsiStats -s -w 44052000 -i 189201912208302199
vscsiStats: Starting Vscsi stats collection for worldGroup 44052000, handleID 189201912208302199 (scsi0:0)
Success.
[root@esx22:~]
Step 3 - Collect vSCSI statistics histogram data
Our disk traffic generation in VM fbsd-test01 runs for 600 seconds (10 minutes), so let it run for a while, then collect the histogram data with the command
vscsiStats -p all -w <worldGroupID> -i <vscsi>
[root@esx22:~]
vscsiStats -p all -w 44052000 -i 189201912208302199
Histogram: IO lengths of commands for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 512 max : 262144 mean : 33276 count : 2006890
{ 10 (<= 512) 0 (<= 1024) 0 (<= 2048) 0 (<= 4095) 115 (<= 4096) 1 (<= 8191) 24 (<= 8192) 9 (<= 16383) 11 (<= 16384) 2000027 (<= 32768) 2 (<= 49152) 3 (<= 65535) 2432 (<= 65536) 0 (<= 81920) 179 (<= 131072) 4077 (<= 262144) 0 (<= 524288) 0 (> 524288) }
}

Histogram: IO lengths of Read commands for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 4096 max : 65536 mean : 32767 count : 1528701
{ 0 (<= 512) 0 (<= 1024) 0 (<= 2048) 0 (<= 4095) 23 (<= 4096) 0 (<= 8191) 4 (<= 8192) 2 (<= 16383) 8 (<= 16384) 1528663 (<= 32768) 0 (<= 49152) 0 (<= 65535) 1 (<= 65536) 0 (<= 81920) 0 (<= 131072) 0 (<= 262144) 0 (<= 524288) 0 (> 524288) }
}

Histogram: IO lengths of Write commands for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 512 max : 262144 mean : 34903 count : 478190
{ 10 (<= 512) 0 (<= 1024) 0 (<= 2048) 0 (<= 4095) 92 (<= 4096) 1 (<= 8191) 20 (<= 8192) 7 (<= 16383) 3 (<= 16384) 471365 (<= 32768) 2 (<= 49152) 3 (<= 65535) 2431 (<= 65536) 0 (<= 81920) 179 (<= 131072) 4077 (<= 262144) 0 (<= 524288) 0 (> 524288) }
}

Histogram: distance (in LBNs) between successive commands for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : -15325439 max : 14868089 mean : -56 count : 2006892
{ 658286 (<= -500000) 79275 (<= -100000) 24275 (<= -50000) 23728 (<= -10000) 3541 (<= -5000) 2761 (<= -1000) 375 (<= -500) 229 (<= -128) 54 (<= -64) 400572 (<= -32) 2 (<= -16) 0 (<= -8) 0 (<= -6) 0 (<= -4) 0 (<= -2) 0 (<= -1) 0 (<= 0) 5308 (<= 1) 0 (<= 2) 0 (<= 4) 0 (<= 6) 0 (<= 8) 0 (<= 16) 0 (<= 32) 0 (<= 64) 845 (<= 128) 3406 (<= 500) 3380 (<= 1000) 10692 (<= 5000) 4376 (<= 10000) 24116 (<= 50000) 24497 (<= 100000) 79550 (<= 500000) 657624 (> 500000) }
}

Histogram: distance (in LBNs) between successive Read commands for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : -12657823 max : 12844393 mean : -53 count : 1528702
{ 633291 (<= -500000) 76659 (<= -100000) 23445 (<= -50000) 22835 (<= -10000) 3411 (<= -5000) 2674 (<= -1000) 357 (<= -500) 221 (<= -128) 52 (<= -64) 1801 (<= -32) 1 (<= -16) 0 (<= -8) 0 (<= -6) 0 (<= -4) 0 (<= -2) 0 (<= -1) 0 (<= 0) 907 (<= 1) 0 (<= 2) 0 (<= 4) 0 (<= 6) 0 (<= 8) 0 (<= 16) 0 (<= 32) 0 (<= 64) 246 (<= 128) 531 (<= 500) 480 (<= 1000) 2857 (<= 5000) 3320 (<= 10000) 22892 (<= 50000) 23547 (<= 100000) 76632 (<= 500000) 632543 (> 500000) }
}

Histogram: distance (in LBNs) between successive Write commands for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : -15325439 max : 14868089 mean : -37 count : 478191
{ 190038 (<= -500000) 22648 (<= -100000) 6907 (<= -50000) 6771 (<= -10000) 1066 (<= -5000) 854 (<= -1000) 110 (<= -500) 52 (<= -128) 20 (<= -64) 340 (<= -32) 0 (<= -16) 0 (<= -8) 0 (<= -6) 0 (<= -4) 0 (<= -2) 0 (<= -1) 0 (<= 0) 4607 (<= 1) 0 (<= 2) 0 (<= 4) 0 (<= 6) 0 (<= 8) 0 (<= 16) 0 (<= 32) 0 (<= 64) 692 (<= 128) 3072 (<= 500) 3056 (<= 1000) 8674 (<= 5000) 1894 (<= 10000) 7126 (<= 50000) 7139 (<= 100000) 23048 (<= 500000) 190077 (> 500000) }
}

Histogram: distance (in LBNs) between each command from the closest of previous 16 for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : -14607603 max : 10217297 mean : -70053 count : 2006894
{ 135752 (<= -500000) 244318 (<= -100000) 151488 (<= -50000) 223683 (<= -10000) 40354 (<= -5000) 35499 (<= -1000) 4698 (<= -500) 2831 (<= -128) 610 (<= -64) 403089 (<= -32) 3 (<= -16) 0 (<= -8) 6 (<= -6) 0 (<= -4) 0 (<= -2) 0 (<= -1) 0 (<= 0) 11343 (<= 1) 0 (<= 2) 0 (<= 4) 0 (<= 6) 0 (<= 8) 1 (<= 16) 0 (<= 32) 1 (<= 64) 1827 (<= 128) 7395 (<= 500) 7909 (<= 1000) 43899 (<= 5000) 41317 (<= 10000) 222922 (<= 50000) 150155 (<= 100000) 205317 (<= 500000) 72477 (> 500000) }
}

Histogram: latency of IOs in Microseconds (us) for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 24 max : 111188 mean : 347 count : 2006893
{ 0 (<= 1) 0 (<= 10) 429376 (<= 100) 1119168 (<= 500) 423287 (<= 1000) 23015 (<= 5000) 11474 (<= 15000) 555 (<= 30000) 10 (<= 50000) 2 (<= 100000) 6 (> 100000) }
}

Histogram: latency of Read IOs in Microseconds (us) for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 37 max : 111188 mean : 353 count : 1528702
{ 0 (<= 1) 0 (<= 10) 5 (<= 100) 1103482 (<= 500) 416570 (<= 1000) 8473 (<= 5000) 161 (<= 15000) 0 (<= 30000) 3 (<= 50000) 2 (<= 100000) 6 (> 100000) }
}

Histogram: latency of Write IOs in Microseconds (us) for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 24 max : 36423 mean : 327 count : 478191
{ 0 (<= 1) 0 (<= 10) 429371 (<= 100) 15686 (<= 500) 6717 (<= 1000) 14542 (<= 5000) 11313 (<= 15000) 555 (<= 30000) 7 (<= 50000) 0 (<= 100000) 0 (> 100000) }
}

Histogram: number of outstanding IOs when a new IO is issued for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 1 max : 255 mean : 2 count : 2006894
{ 1533535 (<= 1) 421554 (<= 2) 12982 (<= 4) 2549 (<= 6) 1908 (<= 8) 3484 (<= 12) 3074 (<= 16) 2761 (<= 20) 2261 (<= 24) 1859 (<= 28) 1418 (<= 32) 5532 (<= 64) 13977 (> 64) }
}

Histogram: number of outstanding Read IOs when a new Read IO is issued for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 1 max : 3 mean : 1 count : 1528703
{ 1528659 (<= 1) 39 (<= 2) 5 (<= 4) 0 (<= 6) 0 (<= 8) 0 (<= 12) 0 (<= 16) 0 (<= 20) 0 (<= 24) 0 (<= 28) 0 (<= 32) 0 (<= 64) 0 (> 64) }
}

Histogram: number of outstanding Write IOs when a new Write IO is issued for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 1 max : 255 mean : 8 count : 478191
{ 422523 (<= 1) 11527 (<= 2) 5884 (<= 4) 2525 (<= 6) 1889 (<= 8) 3446 (<= 12) 3048 (<= 16) 2743 (<= 20) 2231 (<= 24) 1815 (<= 28) 1396 (<= 32) 5202 (<= 64) 13962 (> 64) }
}

Histogram: latency of IO interarrival time in Microseconds (us) for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 5 max : 16197996077626 mean : 8071947 count : 2006894
{ 0 (<= 1) 1747 (<= 10) 471564 (<= 100) 1065349 (<= 500) 443936 (<= 1000) 24016 (<= 5000) 205 (<= 15000) 6 (<= 30000) 4 (<= 50000) 3 (<= 100000) 64 (> 100000) }
}

Histogram: latency of IO interarrival time for Reads in Microseconds (us) for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 20 max : 16198160593358 mean : 10596913 count : 1528704
{ 0 (<= 1) 0 (<= 10) 13 (<= 100) 1051870 (<= 500) 450335 (<= 1000) 26253 (<= 5000) 193 (<= 15000) 10 (<= 30000) 10 (<= 50000) 3 (<= 100000) 17 (> 100000) }
}

Histogram: latency of IO interarrival time for Writes in Microseconds (us) for virtual machine worldGroupID : 44052000, virtual disk handleID : 189201912208302199 (scsi0:0) {
min : 5 max : 16197996077626 mean : 33876651 count : 478192
{ 0 (<= 1) 1357 (<= 10) 53174 (<= 100) 3021 (<= 500) 143772 (<= 1000) 269736 (<= 5000) 7057 (<= 15000) 8 (<= 30000) 4 (<= 50000) 3 (<= 100000) 60 (> 100000) }
}

[root@esx22:~]
Step 4 - Stop vSCSI statistics collection
When we are done, we can stop the vscsiStats collection with the command
vscsiStats -x -w <worldGroupID>
[root@esx22:~]
vscsiStats -x -w 44052000
vscsiStats: Stopping all Vscsi stats collection for worldGroup 44052000, handleID 189201912208302199 (scsi0:0)
Success.
Conclusion
I/O size
The vscsiStats IO lengths histogram is a perfect tool to show what I/O sizes are sent from the Guest OS to the disk subsystem and to really understand what is happening under the covers.
In our test, we generated a known storage workload (70/30 read/write ratio, random access, 4 KB I/O size) within the Guest OS (FreeBSD 14.2), but in the VMware ESXi vSCSI layer we see almost all I/Os at 32 KB (32,768 bytes). Why? Well, I told you that the storage I/O topic is always tricky.
FreeBSD uses the GEOM framework for block devices and the CAM (Common Access Method) subsystem for SCSI I/O. These layers can coalesce small I/O requests (e.g., 4 KB) into larger ones (e.g., 32 KB) before they reach the virtual SCSI driver. So even though fio issues 4 KB operations, the GEOM scheduler or CAM I/O queuing may combine several of them, and these merged I/Os are what VMware sees, hence the 32 KB in vscsiStats.
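A quick back-of-the-envelope check of this coalescing hypothesis: the dominant histogram bucket (<= 32768) and the mean (~33 KB) match eight consecutive 4 KB requests merged into one.

```shell
# If GEOM/CAM merges eight consecutive 4 KiB requests into one larger
# request, the result is exactly the 32 KiB bucket that dominates the
# vscsiStats IO lengths histogram above.
merged=$((8 * 4096))
echo "merged I/O size: ${merged} bytes"   # 32768 bytes = 32 KiB
```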
When we use fio with a 128 KB I/O size, it shows up in the vscsiStats histogram as 128 KB I/O; I tested this behavior in my lab. So it is good to know that every system behaves differently, and it's great that we have such a monitoring tool in the VMware toolbox.
I/O latency
Note: Even though vscsiStats reports Latency, it is actually Response Time. The terms latency and response time are closely related and sometimes used interchangeably, but they have distinct meanings, especially in systems performance, networking, and storage.
In our test, we see that almost all read I/Os have latency below 1 ms.
1103482 (<= 500)
416570 (<= 1000)
and the vast majority of write I/Os (about 93%) have latency below 0.5 ms.
429371 (<= 100)
15686 (<= 500)
This is expected, because I have an NVMe disk in my ESXi host.
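The bucket counts above can be turned into cumulative percentages with a quick calculation. The sketch below uses the counts copied verbatim from the "latency of Read IOs" histogram in Step 3.

```shell
# Cumulative share of read I/Os completing in under 1 ms, using the bucket
# counts from the read-latency histogram (5 <= 100us, 1103482 <= 500us,
# 416570 <= 1000us; total read count 1528702).
under_1ms=$((5 + 1103482 + 416570))
total=1528702
pct=$(awk -v a="$under_1ms" -v t="$total" 'BEGIN { printf "%.1f", 100 * a / t }')
echo "${pct}% of reads completed in under 1 ms"
```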
IOPS and MB/s
vscsiStats histograms do not report IOPS or MB/s. In theory, these could be calculated from how long the statistics collection was running and how many I/Os were collected, but that is not the purpose of this utility. If you want to know IOPS and MB/s, those metrics are available in ESXi and vCenter.
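If you still wanted a rough estimate, it could be sketched like this; the I/O count is taken from the latency histogram in Step 3, while the collection window length is an assumption (vscsiStats itself does not print it).

```shell
# Back-of-the-envelope average IOPS from vscsiStats data (a sketch only;
# the tool reports no rates). 'interval' is a hypothetical collection
# window in seconds; 'count' is the total I/O count from the histogram.
count=2006893
interval=600
iops=$((count / interval))
echo "approximate average IOPS: ${iops}"
```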
But just FYI, in the Guest OS the fio utility reports ... [r=6586KiB/s,w=2774KiB/s][r=1646,w=693 IOPS], so you can see how many IOPS and how much throughput can be achieved with a 4 KB I/O size.
And when I use a 1 MB I/O size, I achieve significantly higher throughput (MB/s) with significantly fewer IOPS ... [r=64.1MiB/s,w=26.0MiB/s][r=64,w=26 IOPS]. That is yet another topic, but it is absolutely expected behavior.
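The two runs are consistent with the basic relation throughput = IOPS x I/O size, which this small check illustrates using the read-side figures fio reported:

```shell
# Throughput and IOPS are two views of the same workload:
#   throughput = IOPS x I/O size
# Read-side figures from the two fio runs shown above.
iops_4k=1646                         # 4 KiB run
kibps_4k=$((iops_4k * 4))            # ~6584 KiB/s, close to fio's 6586 KiB/s
iops_1m=64                           # 1 MiB run
mibps_1m=$((iops_1m * 1024 / 1024))  # 64 MiB/s, matching fio's 64.1 MiB/s
echo "${kibps_4k} KiB/s at 4 KiB I/O, ${mibps_1m} MiB/s at 1 MiB I/O"
```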