I have just answered a typical question received by email. Here is the email I got ...
Working with a customer to validate and tune their environment. We're running IOmeter and pointing at an R720 with local storage as well as an MD3200i. Local storage is 6 x 15k disks in RAID 10. The MD has 2 disk groups with 6 x 15k drives in each group. iSCSI is running through a pair of Dell PC switches that look to be properly configured. Tried the MRU and RR PSPs. The local disks are absolutely blowing away the MD3200i in IOPS, MB/s, and latency across a variety of access specifications.
I haven't had the chance to play with such a well-provisioned local array lately, but I am surprised by the numbers; with a 512k 50%/50% spec we're seeing 22,000 IOPS local and 5,000 IOPS on the MD ...
Maybe I will write information you already know, but I believe it is useful to have the full context.
6 x 15k physical disks can physically give you around 6 x 180 IOPS = 1,080 IOPS.
But ...
1/ each IO is different – the cost of an IO depends on block size and other access specification parameters such as sequential/random ratio, outstanding I/Os (asynchronous I/O that does not wait for the queue acknowledgement), etc.
2/ each architecture is different:
- a local virtual disk (LUN) is connected via a PERC controller that has its own cache
- a SAN virtual disk (LUN) is connected over the SAN, which adds more complexity and latency (NIC/HBA queues, switches, storage controller queues or LUN queues, ...)
- a local RAID controller is designed for a single-server workload, so a single thread can get the full disk performance; when more threads are used, per-thread performance drops
- a shared RAID controller is designed for multiple workloads (servers/threads), so each thread can get only a portion of the full storage performance, but every thread gets the same share. This is a fair policy/algorithm for a shared environment; see the sketch below.
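To make the fairness point concrete, here is a minimal sketch. This is purely my own illustrative model (the 10,000 IOPS budget is an arbitrary example number), not how any particular controller firmware actually schedules IO:

```python
# Illustrative model only: a shared controller with an even fair-share
# policy splits its fixed IOPS budget across all active hosts, while a
# local controller dedicates its whole budget to a single server.

def fair_share_iops(controller_iops, active_hosts):
    """Approximate per-host IOPS under an even fair-share policy."""
    return controller_iops / active_hosts

# One host on a local PERC gets the full budget ...
print(fair_share_iops(10_000, 1))  # 10000.0
# ... while each of 8 hosts on a shared SAN controller gets 1/8 of it.
print(fair_share_iops(10_000, 8))  # 1250.0
```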
The cache and controller-specific IO optimizations can give you significantly better IOPS, and that's why you get 5,000 from the MD and 22,000 from the local disk/PERC. But 22,000 is too high a number to believe the IOs go directly to the disks, so there is definitely cache magic.
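A quick sanity check supports this: throughput is simply IOPS times block size, so 22,000 IOPS at the 512 KB block size from the test spec would mean roughly 11 GB/s, which six 15k spindles cannot physically deliver:

```python
# Sanity check: throughput = IOPS x block size.
iops = 22_000
block_size_bytes = 512 * 1024  # the 512 KB blocks from the access spec

throughput_gb_s = iops * block_size_bytes / 1024**3
print(f"{throughput_gb_s:.1f} GB/s")  # ~10.7 GB/s, far beyond six spindles,
                                      # so the PERC cache must be serving the IOs
```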
Here are widely used average IOPS numbers for different types of disks:
- 15k disk = 180 IOPS
- 10k disk = 150 IOPS
- 7k disk = 80 IOPS
- SSD/MLC = 2500 IOPS
- SSD/SLC = 5000 IOPS
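The table translates directly into a small sizing helper. A minimal sketch, assuming the average numbers above (the dictionary keys are just my own labels):

```python
# Widely used average IOPS per disk type (the sizing numbers from the
# table above).
AVG_IOPS = {
    "15k": 180,
    "10k": 150,
    "7k": 80,
    "ssd_mlc": 2500,
    "ssd_slc": 5000,
}

def raw_iops(disk_type, disk_count):
    """Raw aggregate IOPS of an array, before any RAID write penalty."""
    return AVG_IOPS[disk_type] * disk_count

print(raw_iops("15k", 6))  # 1080 -- the 6 x 15k example from above
```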
Please note that:
- these are average numbers used for sizing. I have seen a SATA/7k disk in a Compellent handle over 200 IOPS, but it was sequential access and the disks were quite overloaded, because the latency was very high!
- SSD numbers differ significantly among manufacturers
All these calculations give you the available IOPS for reads or writes to a non-redundant virtual disk (LUN/volume), meaning a single disk or RAID 0. If you use a redundant RAID level, you have to account for the RAID write penalty:
- RAID 10 = 2
- RAID 5 = 4
- RAID 6 = 6
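Putting the penalty into a formula: for a workload with read ratio r and write ratio w = 1 - r, the frontend IOPS a redundant virtual disk can sustain is roughly raw IOPS / (r + w x penalty). A minimal sketch using the numbers above:

```python
# RAID write penalties from the list above (RAID 0 has no penalty).
WRITE_PENALTY = {0: 1, 10: 2, 5: 4, 6: 6}

def frontend_iops(raw, raid_level, read_ratio):
    """Approximate frontend IOPS: each write costs `penalty` backend IOs."""
    write_ratio = 1 - read_ratio
    return raw / (read_ratio + write_ratio * WRITE_PENALTY[raid_level])

# 6 x 15k disks (1,080 raw IOPS) in RAID 10 with a 50/50 read/write mix:
print(frontend_iops(1080, 10, 0.5))  # 720.0 -- well below the measured
                                     # 22,000, more evidence of cache effects
```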
So you can see this is quite a complex topic, and if you really want to show the customer the truth (who knows what the pure truth is? :-) ), then you have to take all the statements above into account.
Typical issues when measuring with IOmeter without enough experience:
- A small target disk file (the size is entered in 512 B blocks). The target file must be bigger than the cache; I usually use a file between 20,000,000 blocks (approx. 10 GB) and 80,000,000 blocks (approx. 40 GB). See the conversion helper below.
- A small number of threads (workers, in IOmeter terminology).
- Workload generated from a single server. Did you know you can run Dynamo on another computer and connect it to IOmeter over the network? Then you will see more managers (servers) and you can define workers and access specifications from a single GUI.
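Because IOmeter's "Maximum Disk Size" field is given in 512-byte sectors, the conversion is easy to get wrong. A minimal helper for it:

```python
# IOmeter's "Maximum Disk Size" field is entered in 512 B sectors.
SECTOR_BYTES = 512

def iometer_blocks(size_gb):
    """Number of 512 B blocks for a target file of roughly size_gb GB."""
    return int(size_gb * 1024**3) // SECTOR_BYTES

print(iometer_blocks(10))  # 20971520 blocks (close to the 20,000,000 above)
print(iometer_blocks(40))  # 83886080 blocks (close to the 80,000,000 above)
```

For the multi-server point: as far as I remember, Dynamo is started on the remote machine with something like `dynamo -i <iometer_host> -m <local_ip>`, but please check the IOmeter documentation for your version, since the exact switches may differ.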
Hope this helps at least someone, and I would appreciate a deeper discussion on this topic.