Last week a colleague notified me about a long-standing VMware vSphere issue described in VMware KB 2048016. The issue is that vSphere Data Protection restores a thin-provisioned disk as a thick-provisioned disk. This sounds like a relatively big operational impact. However, after reading the VMware KB I explained to my colleague that this is not a typical issue or bug, but rather expected behavior of VMware's CBT (Changed Block Tracking) technology and the VADP (vStorage APIs for Data Protection) framework. It's important to mention that it should behave like this only when you do the initial full backup of a thin-provisioned VM which has never been powered on. In other words, if the VM was ever powered on before the initial backup, you shouldn't experience this issue.
After the above explanation, another logical question appeared:
"How can you convert a thick zeroed virtual disk back to thin?" ... in other words, what do you do when you hit the weird behavior explained above and your originally thin-provisioned VM is restored as a thick VM? The obvious objective is to save storage space again by leveraging VM thin provisioning. My answer was to use Storage vMotion, which allows you to change the vDisk type during migration. But just after my quick answer I realized there can be another potential issue with Storage vMotion. If you use VAAI-capable storage, the Storage vMotion is offloaded to the storage array and it may not reclaim even zeroed vDisk space. This behavior and its resolution are described in VMware KB 2004155, "Storage vMotion to thin disk does not reclaim null blocks". The workaround mentioned in the KB is to use an offline method leveraging vmkfstools. If you want a live storage migration (conversion) without downtime, you would need another datastore with a different block size, which is only possible with the legacy VMFS3 filesystem.
I decided to do a test to prove the real Storage vMotion behavior and find out the truth; everything else would be just speculation. Therefore I tested Storage vMotion behavior for thick-to-thin migration and space reclamation in my lab, where I have vSphere/ESXi 5.5 and EqualLogic storage with VAAI support. To be honest, the result surprised me in a positive way. It seems that svMotion can save the space even when I do svMotion between datastores with the same block size and VAAI is enabled.
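By the way, if you want to double-check that VAAI offload really applies to your devices, the primitive status can be listed from the ESXi shell (just a sanity check, not part of my original test notes):

esxcli storage core device vaai status get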
You can see the thick eager-zeroed 40 GB disk in the screenshot below ...
The provisioned size is 44 GB because the VM has 4 GB RAM and therefore a 4 GB swap file on the VMFS datastore.
Used storage is 40 GB.
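If you prefer the command line over the vSphere Client view, the provisioned versus actually allocated space can also be compared directly on the datastore. A minimal sketch from the ESXi shell, again assuming a hypothetical VM folder /vmfs/volumes/datastore1/MyVM:

# provisioned (logical) size of the flat file
ls -lh /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk
# blocks actually allocated on the VMFS datastore
du -h /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk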
After a live Storage vMotion with conversion to thin, the space was saved.
Used storage is just 22 GB. You can see the result in the screenshot below ...
So I have just verified that svMotion can do what you need without downtime, and I don't even need to migrate between datastores with different block sizes.
It was tested on ESXi 5.5, EqualLogic firmware 6.x, and VMFS5 datastores created on thin-provisioned LUNs on the EqualLogic array. Storage-side thin provisioning is absolutely transparent to vSphere, so it should not have any impact on vSphere thin provisioning.
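If you want to verify the VMFS version and block size of the datastores involved, they can be queried from the ESXi shell (shown here with a hypothetical datastore name):

vmkfstools -Ph /vmfs/volumes/datastore1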
I know this is just a workaround for the problem of a VADP restore of a never-powered-on VMDK, but it works. It converts thick to thin and is able to reclaim unused (zeroed) space inside virtual disks.
Conclusion:
vSphere 5.5 Storage vMotion can convert a thick VM to thin even between datastores with the same block size, at least in the tested configuration. Good to know. If you can do the test in your own environment, just leave a comment. It can be beneficial for others.
2 comments:
This only works if you first use something like SDelete within the guest OS to zero out the free space, so that vSphere will not see it as written and can reclaim it.
Sdelete.exe -c (your drive letter here):
sdelete has its own risks. If it is a file share, disconnect active users and block access while it runs.
First of all, thanks for the comment. You are absolutely right that sdelete on Windows, the shrink option in VMware Tools, defrag /L, the "Guest Reclaim" Fling, or another disk-zeroing tool available for the particular OS has to be run inside the guest when you want to shrink your thin-provisioned disk.
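For Linux guests, which your comment does not cover, a generic way to zero the free space is a simple dd run from /dev/zero. Just a sketch, and be aware it temporarily fills the filesystem:

# fill free space with zeros (dd stops with an error when the filesystem is full), then remove the file
dd if=/dev/zero of=/zerofile bs=1M
rm -f /zerofile
sync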
But this blog post is not about reclaiming dead space (aka Dead Space Reclamation). It is about converting a thick disk to a thin disk after a VDP restore of a never-powered-on VM.
Dead space reclamation is IMHO a different topic, but you are right that it is somewhat relevant, so thanks again for your comment.