Friday, April 22, 2016

VMware Tools 10.0.8 is now GA

VMware Tools 10.0.8 is now GA, live on www.vmware.com, and available to all customers.

Resolved Issues
Virtual machine performance issues after upgrading VMware Tools to version 10.0.x in NSX and VMware vCloud Networking and Security 5.5.x

After upgrading VMware Tools to version 10.0.x in an NSX 6.x or VMware vCloud Networking and Security 5.5.x environment, the performance of the guest operating system in the virtual machine becomes slow and unresponsive. Operations such as logging in and out through an RDP session, responses from an IIS website, and launching applications become slow or unresponsive.
This issue occurred due to a known issue with VMware Tools 10.0.x and is resolved in this release. For more information, see KB 2144236.

Full Release Notes are available here.

Broader context ...
In the past, some customers who did not need the vShield components excluded them from the VMware Tools installation to mitigate such performance and availability risks. The vShield component can be removed from the installation process:

 VMware-tools-9.x.x-yyyy.exe /v /qb-! REINSTALLMODE=vomus ADDLOCAL=All REMOVE=Hgfs,WYSE,Audio,BootCamp,Unity,VShield REBOOT=ReallySuppress  

Please be aware that in newer VMware Tools versions the vShield component was split into two more specific components: FileIntrospection and NetworkIntrospection.

 VMware-tools-9.4.12-2627939-x86_64.exe /v /qb-! REINSTALLMODE=vomus ADDLOCAL=All REMOVE=Audio,BootCamp,FileIntrospection,Hgfs,NetworkIntrospection,Unity REBOOT=R  

For a complete list of VMware Tools component names used in silent installations, see the VMware vSphere documentation here.

Keywords:
vmtools, vshield,  fileintrospection, networkintrospection

Friday, April 15, 2016

PowerCLI - Recent servers file is corrupt

This is just a short post because I experienced the PowerCLI warning "Recent servers file is corrupt" depicted below.

 PS C:\Users\Administrator> C:\Users\Administrator\Documents\scripts\Cluster_hosts_vCPU_pCPU_report.ps1  
 WARNING: Recent servers file is corrupt: C:\Users\Administrator\AppData\Roaming\VMware\PowerCLI\RecentServerList.xml  
 UTC date time: 04/15/2016 12:32:52 Cluster: Cluster ESX name: esx01.home.uw.cz pCPUs: 2 vCPUs: 19 vCPU/pCPU ratio: 9.5  
 UTC date time: 04/15/2016 12:32:52 Cluster: Cluster ESX name: esx02.home.uw.cz pCPUs: 2 vCPUs: 12 vCPU/pCPU ratio: 6  

Google returns just one result here.

The solution to get rid of the warning message is fairly simple.

Just remove the corrupted file C:\Users\Administrator\AppData\Roaming\VMware\PowerCLI\RecentServerList.xml; it will be recreated during the next PowerCLI run.
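If this happens on more than one machine, the cleanup is easy to script. Below is a minimal Python sketch (the helper name is mine; the path layout matches the warning above):

```python
import os
from pathlib import Path

def remove_recent_server_list(appdata_dir):
    """Delete PowerCLI's RecentServerList.xml under the given AppData
    directory; PowerCLI recreates the file on its next run.
    Returns True if a file was actually removed."""
    target = Path(appdata_dir) / "VMware" / "PowerCLI" / "RecentServerList.xml"
    if target.is_file():
        target.unlink()
        return True
    return False

# On Windows you would typically call it with the roaming profile:
# remove_recent_server_list(os.environ["APPDATA"])
```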

Hope this helps some other folks.

Wednesday, April 06, 2016

ESXi host vCPU/pCPU reporting via PowerCLI to LogInsight

Some time ago I discussed with one of my customers how to achieve a 1:1 vCPU/pCPU ratio on their Tier 1 cluster. Unfortunately, there is no out-of-the-box vSphere policy to enforce it. You can try to use vSphere HA cluster admission control with advanced settings to approximate this requirement, but it is based on CPU reservations in MHz, so the settings would be tricky and would carry additional risks, for example after a physical server hardware replacement.

At the end of the day we agreed that the goal can be achieved by a monitoring and capacity planning process. One could probably leverage VMware vRealize Operations Manager (aka vROps) or a similar monitoring platform, but because my customer does not have vROps and I'm not a vROps expert, I realized there is a very simple alternative.

Let's leverage PowerShell/PowerCLI to report the vCPU/pCPU ratio of ESXi hosts.

As you can see in the script below, preparing the PowerCLI report is a pretty easy task; the question is how to visualize it and send alerts when a threshold is exceeded.

And that's another simple idea. Why not leverage vRealize LogInsight?

All my readers most probably know what VMware's LogInsight (LI) is, but just in case: LI is a highly available and scalable syslog server appliance whose main business value is excellent reporting capabilities on unstructured data. I don't want to describe LogInsight in this blog post, but another interesting feature of LI is that besides syslog messages it also accepts JSON messages sent via API. For more details look here.
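The message body for that API is trivial to construct. Here is a minimal Python sketch of the payload shape (the endpoint path and body format are the same ones the PowerCLI script in this post uses; the function name is mine):

```python
import json

def build_li_payload(text):
    """Build the JSON body for the Log Insight ingestion API:
    POST http://<li-host>:9000/api/v1/messages/ingest/<agent-id>
    with Content-Type: application/json."""
    return {"messages": [{"text": text}]}

body = json.dumps(build_li_payload("vCPU/pCPU ratio: 9"))
# body == '{"messages": [{"text": "vCPU/pCPU ratio: 9"}]}'
```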

So the whole solution is conceptually pretty easy. Below is the high-level process.

  1. PowerCLI : Go through each ESXi host and calculate the vCPU/pCPU ratio
  2. PowerCLI : Compose a message including vCPU/pCPU ratio together with additional context information like timestamp, cluster name, ESXi name, number of vCPU and pCPU
  3. PowerCLI : Send the message to LogInsight via REST API
  4. LogInsight : Prepare custom analytics and create Dashboard
  5. LogInsight: Create alert to send e-mail message or trigger web hook when threshold is exceeded    
The latest script version is at GitHub. Below is the complete PowerCLI script ...

 #################################  
 # vCenter Server configuration  
 #  
 $vcenter = “vc01.home.uw.cz“  
 $vcenteruser = “readonly“  
 $vcenterpw = “readonly“  
 $loginsight = "192.168.4.51"  
 #################################  
   
 $o = Add-PSSnapin VMware.VimAutomation.Core  
 $o = Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false  
   
 #################################  
 # Connect to vCenter Server  
 $vc = connect-viserver $vcenter -User $vcenteruser -Password $vcenterpw  
   
 #################################  
 # Send Message to LogInsight  
 function Send-LogInsightMessage ([string]$ip, [string]$message)  
 {  
  $uri = "http://" + $ip + ":9000/api/v1/messages/ingest/1"  
  $content_type = "application/json"  
 $body = '{"messages":[{"text":"'+ $message +'"}]}'  
  $r = Invoke-RestMethod -Uri $uri -ContentType $content_type -Method Post -Body $body  
 }  
   
 #################################  
 # Count vCPU/pCPU Ratio  
 foreach ($esx in (Get-VMHost | Sort-Object Name)) {  
  $pCPUs = $esx.NumCpu  
  $vCPUs = ($esx | get-vm | Measure-Object -Sum NumCPU).Sum  
  $CPU_ratio = $vCPUs / $pCPUs  
  $date = (Get-Date).ToUniversalTime()  
  $cluster_name = get-cluster -VMHost $esx  
   
 $message = "UTC date time: $date Cluster: $cluster_name ESX name: $($esx.Name) pCPUs: $pCPUs vCPUs: $vCPUs vCPU/pCPU ratio: $CPU_ratio"  
  Write-Output $message  
 Send-LogInsightMessage $loginsight $message  
 }  

 disconnect-viserver -Server $vc -Force -Confirm:$false  

The PowerCLI script running in my home lab generates the messages depicted below ...

 PS C:\Users\Administrator\Documents\scripts> .\Cluster_hosts_vCPU_pCPU_report.ps1  
 UTC date time: 04/06/2016 12:49:32 Cluster: Cluster ESX name: esx01.home.uw.cz pCPUs: 2 vCPUs: 18 vCPU/pCPU ratio: 9  
 UTC date time: 04/06/2016 12:49:33 Cluster: Cluster ESX name: esx02.home.uw.cz pCPUs: 2 vCPUs: 12 vCPU/pCPU ratio: 6  

I use scheduled tasks to send these messages periodically to LogInsight. You can see the LogInsight messages in the screenshot below ...

LogInsight Interactive Analysis
It is very simple to create a dashboard from the interactive analysis ...

LogInsight vCPU/pCPU Dashboard

And the last task is to create an alert in LogInsight when the vCPU/pCPU ratio is higher than 1; or, to be informed a little earlier, you can set the alert to trigger when the ratio is higher than 0.8 ...


Pretty easy, right?
Hope this helps broader VMware community.

And as always, any comments and thoughts are very welcome.

Monday, March 21, 2016

How to update ESXi via CLI

If you don't want to use VMware Update Manager (VUM) you can leverage several CLI update alternatives.

First of all, you should download the patch bundle from the VMware Product Patches page available at http://www.vmware.com/go/downloadpatches. It is important to know that patch bundles are cumulative, which means you only need to download and install the latest patch bundle to make ESXi fully patched.

ESXCLI
You can use the esxcli command on each ESXi host.

To list the image profiles provided by the patch bundle, use the following command:
esxcli software sources profile list -d /path/to/.zip
The output will look like this:
[root@esx01:~] esxcli software sources profile list -d /vmfs/volumes/NFS-SYNOLOGY-SATA/ISO/update-from-esxi6.0-6.0_update02.zip
Name                              Vendor        Acceptance Level
--------------------------------  ------------  ----------------
ESXi-6.0.0-20160301001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160302001-standard   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160301001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20160302001-no-tools   VMware, Inc.  PartnerSupported
Now you can update the system with a specific profile:
esxcli software profile update -d /vmfs/volumes/NFS-SYNOLOGY-SATA/ISO/update-from-esxi6.0-6.0_update02.zip -p ESXi-6.0.0-20160302001-no-tools 
The output will look like this:
[root@esx01:~] esxcli software profile update -d /vmfs/volumes/NFS-SYNOLOGY-SATA/ISO/update-from-esxi6.0-6.0_update02.zip -p ESXi-6.0.0-20160302001-no-tools 
Update Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true

The last task is to reboot the ESXi host, as seen in the output above.
[root@esx01:~] reboot 
After the reboot, you can SSH to the ESXi host and verify the current version.
[root@esx01:~] esxcli system version get
   Product: VMware ESXi
   Version: 6.0.0
   Build: Releasebuild-3620759
   Update: 2
   Patch: 34

Note 1: The VMware online software depot is located at https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml, so you can use this online depot instead of a local depot downloaded manually from the VMware download site. To allow outgoing HTTP/HTTPS (TCP ports 80 and 443), you have to enable the httpClient rule in the ESXi firewall.
esxcli network firewall ruleset set -e true -r httpClient

To list profiles ...
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

To update ESXi host into a particular profile ...
esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.0.0-20160302001-no-tools 

You can disable the rule again after the update:
esxcli network firewall ruleset set -e false -r httpClient

Note 2: You can run an ESXCLI vCLI command remotely against a specific host or against a vCenter Server system.


ESXCLI over PowerCLI
The same can be done via PowerCLI. The code below is written for the esxcli version 2 interface (-V2) introduced in PowerCLI 6.3 R1.

#get the esxcli v2 object for a particular host ($vmhost is a VMHost object from Get-VMHost)
$esxcli2 = Get-EsxCli -VMHost $vmhost -V2

#list profiles in patch bundle
$arguments = $esxcli2.software.sources.profile.list.CreateArgs()
$arguments.depot = "vmfs/volumes///update-from-esxi6.0-6.0_update02.zip"
$esxcli2.software.sources.profile.list.Invoke($arguments)

#update to patch bundle profile
$arguments = $esxcli2.software.profile.update.CreateArgs()
$arguments.depot = "vmfs/volumes///update-from-esxi6.0-6.0_update02.zip"
$arguments.profile = "ESXi-6.0.0-20160302001-no-tools"
$esxcli2.software.profile.update.Invoke($arguments)

PowerCLI Install-VMHostPatch
You can also use the special PowerCLI cmdlet Install-VMHostPatch.

  1. Download the Update file “ESXi Offline Bundle” update-from-esxi6.0-6.0_update02.zip
  2. Extract the ZIP file and upload the resulting folder to a datastore on the ESXi host.
  3. Put the host into maintenance mode
  4. Open PowerCLI
  5. Connect-VIServer
  6. Install-VMHostPatch -HostPath /vmfs/volumes/Datastore/update-from-esxi6.0-6.0_update02/metadata.zip
Note: For the Install-VMHostPatch method, the patch bundle must be explicitly unzipped.

References:
  • VMware Product Patches
  • VMware : Are ESXi Patches Cumulative 
  • Andreas Peetz : Are ESXi 5.x patches cumulative?
  • Quickest Way to Patch an ESX/ESXi Using the Command-line
  • Install-VMHostPatch
  • Home Lab Upgrade to 6.0u2
Friday, March 18, 2016

    What's new in PowerCLI 6.3 R1?

    PowerCLI 6.3 R1 introduces the following new features and improvements:

    Get-VM is now faster than ever!
    The Get-VM cmdlet has been optimized and refactored to ensure maximum speed when returning information for larger numbers of virtual machines. This was a request we heard time and time again: when you work in larger environments with thousands of VMs, the most used cmdlet is Get-VM, so making it faster increases the speed of reporting and automation for all scripts using Get-VM. Stay tuned for a future post where we will show some figures from our test environment, but believe me, it's fast!


    New Content Library access
    New in this release we have introduced a new cmdlet for working with Content Library items: the Get-ContentLibraryItem cmdlet will list all content library items from all content libraries available to the connection. This will give you details and set you up for deploying in our next new feature.... 
    The New-VM cmdlet has been updated to allow for the deployment of items located in a Content Library. Use the new -ContentLibraryItem parameter with a content library item to deploy from local and subscribed library items; a quick sample of this can be seen below:

    $CLItem = Get-ContentLibraryItem TTYLinux
    New-VM -Name "NewCLItem" -ContentLibraryItem $CLItem -Datastore datastore1 -VMHost 10.160.74.38
    Or even simpler….
    Get-ContentLibraryItem -Name TTYLinux | New-VM -Datastore datastore1 -VMHost 10.160.74.38

    ESXCLI is now easier to use
    Another great feature has again come from our community and users who told us what is hard about the current version: the Get-EsxCli cmdlet has been updated with a -V2 parameter which supports specifying method arguments by name.
    The original Get-EsxCli cmdlet (without -V2) passes arguments by position, which can cause scripts to break when working with multiple ESXi versions or when using scripts written against specific ESXi versions.

    A simple example of using the previous version is as follows:
    $esxcli = Get-ESXCLI -VMHost (Get-VMhost | Select -first 1)
    $esxcli.network.diag.ping(2,$null,$null,“10.0.0.8”,$null,$null,$null,$null,$null,$null,$null,$null,$null)

    Notice all the $nulls ?  Now check out the V2 version:

    $esxcli2 = Get-ESXCLI -VMHost (Get-VMhost | Select -first 1) -V2
    $arguments = $esxcli2.network.diag.ping.CreateArgs()
    $arguments.count = 2
    $arguments.host = "10.0.0.8"
    $esxcli2.network.diag.ping.Invoke($arguments)

    Get-View, better than ever
    For the more advanced users out there who constantly use the Get-View cmdlet: you will be pleased to know that a small but handy change has been made to the cmdlet to enable auto-completion of all available view objects in the Get-View -ViewType parameter. This eases the use of the cmdlet and enables even faster creation of scripts.

    Updated Support
    As well as the great enhancements listed above, the product has been fully tested and works with Windows 10 and PowerShell v5, enabling the latest versions and features of PowerShell to be used with PowerCLI.
    PowerCLI has also been updated to support vCloud Director 8.0 and vRealize Operations Manager 6.2, ensuring you can also work with the latest VMware products.

    More Information and Download
    For more information on changes made in vSphere PowerCLI 6.3 Release 1, including improvements, security enhancements, and deprecated features, see the vSphere PowerCLI Change Log. For more information on specific product features, see the VMware vSphere PowerCLI 6.3 Release 1 User’s Guide. For more information on specific cmdlets, see the VMware vSphere PowerCLI 6.3 Release 1 Cmdlet Reference.

    You can find the PowerCLI 6.3 Release 1 download HERE. Get it today!

    Wednesday, March 16, 2016

    General recommendations for stretched vSphere HA cluster aka vSphere Metro Storage Cluster (vMSC)

    This is just a brief blog post with general recommendations for VMware vSphere Metro Storage Cluster (aka vMSC). For a more holistic view, please read the white paper "VMware vSphere Metro Storage Cluster Recommended Practices".

    vSphere HA Cluster Recommended Configuration Settings:
    • Set Admission Control - Failover capacity by defining percentage of the cluster (50% for CPU and Memory)
    • Set Host Isolation Response - Power Off and Restart VMs
    • Specify multiple host isolation addresses - Advanced configuration option das.isolationaddressX
    • Disable default gateway as host isolation address - Advanced configuration option das.useDefaultIsolationAddress=false
    • Change the default settings of vSphere HA and configure it to Respect VM to Host affinity rules during failover - Advanced configuration option das.respectVmHostSoftAffinityRules=true
    • The minimum number of heartbeat datastores is two and the maximum is five. VMware recommends increasing the number of heartbeat datastores from two to four in a stretched cluster environment - Advanced configuration option das.heartbeatDsPerHost=4
    • VMware recommends using "Select any of the cluster datastores taking into account my preferences" for heartbeat datastores and choose two datastores (active distributed volumes/LUNs) on each site
    • PDL and APD considerations depend on the stretched cluster mode (uniform/non-uniform). However, VMware recommends configuring PDL/APD responses; therefore VM Component Protection (VMCP) must be enabled and the response should be set to "Power Off and Restart VMs - Conservative". Detailed configuration should be discussed with the particular storage vendor. 
    vSphere DRS Recommended Configuration Settings:
    • DRS mode - Fully automated
    • Use DRS VM/Host rules to set VM per site locality
    • Use DRS "Should Rules" and avoid the use of "Must Rules"
    SIOC/SDRS

    • Based on KB 2042596, SIOC is not supported
    • Based on KB 2042596, SDRS is only supported when the I/O metric function is disabled.

    Distributed (stretched) Storage Recommendations:
    • Always consult your configuration with your storage vendor
    • VMware highly recommends using a storage witness (aka arbitrator, tie-breaker, etc.) in a third site.
    Custom automation for compliance check and / or operational procedures Recommendations:
    • VMware recommends manually defining “sites” by creating a group of hosts that belong to a site and then adding VMs to these sites based on the affinity of the datastore on which they are provisioned. 
    • VMware recommends automating the process of defining site affinity by using tools such as VMware vCenter Orchestrator or VMware vSphere PowerCLI. 
    • If automating the process is not an option, use of a generic naming convention is recommended to simplify the creation of these groups. 
    • VMware recommends that these groups be validated on a regular basis to ensure that all VMs belong to the group with the correct site affinity.
    Other relevant references:

    Friday, March 04, 2016

    How to show vCenter Instance configuration?

    Login to vCenter Server Appliance (VCSA) via ssh.

    Enable BASH access: "shell.set --enabled True"
    Launch BASH: "shell"

    Run the following command to list the vCenter instance configuration.

    vc01:/etc/vmware-vpx # cat /etc/vmware-vpx/instance.cfg 
    applicationDN=dc\=virtualcenter,dc\=vmware,dc\=int
    instanceUuid=b7cc1468-6d27-4117-943f-7b1b4485028b
    ldapPort=389
    ldapInstanceName=VMwareVCMSDS
    ldapStoragePath=/etc/vmware-vpx/

    The vCenter UUID is a very important identifier that uniquely identifies a particular instance in external systems like the VMware Platform Services Controller (PSC), vROps, SRM, etc.

    The UUID in our example is b7cc1468-6d27-4117-943f-7b1b4485028b.
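If you need these values programmatically, note that instance.cfg is a simple key=value format in which '=' characters inside values are escaped as '\='. A hedged Python sketch of a parser (the function name is mine):

```python
def parse_instance_cfg(text):
    """Parse the key=value format of /etc/vmware-vpx/instance.cfg,
    splitting each line on the first '=' (the key never contains one)
    and unescaping '\\=' sequences in values."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        cfg[key] = value.replace("\\=", "=")
    return cfg

sample = "applicationDN=dc\\=virtualcenter,dc\\=vmware,dc\\=int\ninstanceUuid=b7cc1468-6d27-4117-943f-7b1b4485028b"
print(parse_instance_cfg(sample)["instanceUuid"])  # b7cc1468-6d27-4117-943f-7b1b4485028b
```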

    Cisco Virtual Switch Update Manager

    Do you have Cisco Nexus 1000V in your vSphere environment? Then VSUM can be a pretty handy tool for you.

    VSUM is a free virtual appliance from Cisco that integrates into the vSphere Web Client. Once deployed, VSUM allows you to do the following actions from the web client:

    • Deploy Nexus 1000v and Application Virtual Switch (AVS)
    • Upgrade the 1000v and AVS
    • Migrate virtual networking from vSwitch/VDS
    • Monitor your 1000v/AVS environment                              

    In other words, Cisco VSUM is a virtual appliance that is registered as a plug-in to VMware vCenter Server. The Cisco VSUM user interface is an integral part of VMware vSphere Web Client. The Cisco VSUM enables you to install, migrate, monitor, and upgrade the VSMs in high availability (HA) or standalone mode and the VEMs on ESX/ESXi hosts.



    Wednesday, February 17, 2016

    How to identify, from the guest OS, on which vCenter a virtual machine is registered?

    One of my customers asked me how to identify - from the VM guest operating system - in which vCenter Server that particular virtual machine is registered.

    They use VM deployment from VM templates with customization specifications, and they would like to use the vCenter locality information for additional tasks during the VM deployment process.

    I was thinking about several possibilities. Considered options are listed below.

    Considered options:

    • OPTION 1: Define a specific customization profile for each vCenter and have a special guest-OS-specific command in the Customization Specification run after sysprep to save the vCenter identification somewhere in the guest file system.
    • OPTION 2: Use the VM MAC address to identify the vCenter Server instance.
    • OPTION 3: Leverage PowerCLI or vCLI running in the guest OS to communicate with vCenter.
    • OPTION 4: Leverage custom VM guestinfo properties, which can be read inside the guest OS.

    Option 3 is not a good option at all, because you would need network connectivity from production VMs to vCenter (the management network), which has a negative impact on overall security.

    Option 4 is described by William Lam here. It would need special VM templates with a custom VM property like guestinfo.vcenter=VC01, which is visible in the guest info through vmtools. The command in the guest would look like
    vmtoolsd --cmd "info-get guestinfo.vcenter"
    Option 1 is relatively easy; it leverages the fact that a Customization Specification for deploying VM templates can run a script in the guest after template deployment. I think Option 1 is a relatively good option. The only drawback is that the vSphere admin would need to manage more customization specifications and specific scripts to store the vCenter identification somewhere in the guest OS filesystem. This introduces some additional management overhead, but it is acceptable if you ask me.

    Option 2 intrigued me technically, so let's elaborate on it. Option 2 leverages the fact that the "vCenter Server instance ID" is used when generating virtual machine MAC addresses, and a MAC address is a well-known digital identifier that can be found relatively simply in any operating system. So what is this "vCenter Server instance ID"? Each vCenter Server system has one. It is a number between 0 and 63 which is randomly generated at installation time, but it can be reconfigured after installation. The vSphere 6.0 documentation here describes the scheme. According to this scheme, a MAC address has the following format:
    00:50:56:XX:YY:ZZ
    where 00:50:56 represents the VMware OUI,
    XX is calculated as (80 + vCenter Server Instance ID),
    and YY:ZZ is a random number.
    Note 1:
    The formula above (80 + vCenter Server Instance ID) is in hexadecimal; in decimal it is 128 + vCenter Server Instance ID.
    Note 2:
    The vCenter Server unique ID is generated randomly during vCenter installation. It can be changed after installation in the Runtime Settings section of the vCenter Server instance's General settings, followed by a restart. Please be aware that existing virtual machines' MAC addresses are not changed automatically after the ID reconfiguration, so it is a good idea to change the vCenter Server unique ID immediately after vCenter Server installation. There are methods to regenerate VM MAC addresses, but they require VM downtime. For more information, see VMware KB 1024025.
    Below is a PowerShell script example of in-guest calculation of the vCenter Server instance ID.
    $mac_str = Get-CimInstance win32_networkadapterconfiguration | where {$_.ServiceName -eq "vmxnet3ndis6"} | select macaddress | Out-String
    $mac_arr = $mac_str.split(':')
    $XX_hex = $mac_arr[3]
    $XX_dec = [Convert]::ToInt32($XX_hex, 16)
    $VC_instance_ID = $XX_dec - 128
    $VC_instance_ID
    The script above is just an example, written for Windows (Win2012R2) and PowerShell 4.0, to show how to automate the trick described in this blog post. Similar scripts can be prepared for other guest operating systems.
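For non-Windows guests, the same arithmetic can be sketched portably. Below is an illustrative Python version (the function name and the validation checks are my own additions; the constants follow the 00:50:56:XX:YY:ZZ scheme described above):

```python
def vcenter_instance_id(mac):
    """Derive the vCenter Server instance ID from a vCenter-assigned
    VMware MAC address 00:50:56:XX:YY:ZZ, where XX = 0x80 + instance ID
    and the instance ID is in the range 0-63."""
    octets = mac.lower().split(":")
    if len(octets) != 6 or octets[:3] != ["00", "50", "56"]:
        raise ValueError("not a vCenter-assigned VMware MAC: %s" % mac)
    xx = int(octets[3], 16)
    if not 0x80 <= xx <= 0xBF:
        raise ValueError("fourth octet outside the vCenter range: %s" % mac)
    return xx - 0x80

print(vcenter_instance_id("00:50:56:82:3f:a1"))  # 2
```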

    Disclaimer:
    The script above is just an example and it works in my lab environment. You should carefully test whether a script inspired by this blog post works correctly in your particular environment. I don't take any responsibility for the script, and you use it at your own risk. I spent just a few minutes writing this script, and I would definitely recommend investing more time in development and testing if you want to use such a script in a production environment.
    Known caveats of option 2 (vCenter identification based on the VM MAC address):

    • This solution only works for MAC addresses dynamically assigned by vCenter, not for MAC addresses statically configured by an administrator
    • This solution will not work correctly for VMs that were cross-vCenter vMotioned, because they keep the MAC address from the original vCenter
    • I did not test how VMs recovered by VMware SRM (Site Recovery Manager) behave. If recovered VMs keep their original MAC address, this solution will not work for them. Unfortunately, I don't have access to an SRM lab to verify SRM behavior. 

    I would recommend that my customer choose between options (1) and (2).

    Hope this helps the broader IT community, and as always ... your feedback is very welcome, so don't hesitate to use comments, Twitter or e-mail to share your opinions and other solution alternatives.


    Wednesday, January 13, 2016

    Don't use 4K native drives for VMware vSphere ESXi or VSAN

    First of all, let's be absolutely clear: disks with a 4K sector size are not currently supported by VMware. See VMware KB - Support statement for 512e and 4K Native drives for VMware vSphere and VSAN (2091600)

    UPDATE: vSphere 6.5 and VSAN 6.5 introduced 512e support, so 4K native drives with 512-byte emulation (512e) are supported. In other words, 4K native drives without 512e are still not supported. 

    UPDATE 2018-04-18: vSphere 6.7 introduced support for 4K native drives.

    IMPORTANT STATEMENTS FROM KB

    Does current GA version of vSphere and VSAN support 4K Native drives?
    No. 4K Native drives are not supported in current GA releases of vSphere and VSAN.

    Does current GA version of vSphere and VSAN support 512e drives?
    No. 512e drives are not supported with the current versions of vSphere and VSAN due to potential performance issues when using these drives. 

    Therefore, only 512n (native) drives are supported on any ESXi (5.x, 6.x) at the moment.

    This is usually not a big deal on shared storage systems (aka disk arrays), because logical volumes are virtually emulated and the sector size is usually 512 by default or configurable (512 or 4K). 

    I thought it was the same with RAID controllers, because virtual volumes are also emulated by the RAID controller. However, I have recently learned that this is not true, at least not for all RAID controllers. For example, the Dell PERC H730 (the best RAID controller Dell currently offers) doesn't allow you to choose the sector size for a virtual volume. Instead, the sector size is passed from the physical disks to the operating system - the ESXi hypervisor in our case.

    Here is one real customer story with 4K native disks. 

    The customer was not able to create a datastore on some disks. The error message was ...
    Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object "ha-datastoresystem" on ESXi "esxi-test" failed.  
    The error message is depicted on screenshot below.



    Server Model: PowerEdge R530 – System Revision I
    Operating System: VMware ESXi 5.5.0 build-3343343
    BIOS Version: 1.5.4
    Lifecycle Controller Firmware: 2.21.21.21
    RAID Controller: Perc H730Mini 

    RAID1 (SYSTEM)
    Physical Disk 0:1:0 Online 0 278.88 GB Not Capable SAS HDD No
    Physical Disk 0:1:1 Online 1 278.88 GB Not Capable SAS HDD No
    DATASTORE – successfully created during ESXi installation  – OK

    RAID1 (DATA)
    Physical Disk 0:1:2 Online 2 558.38 GB Not Capable SAS HDD No
    Physical Disk 0:1:3 Online 3 558.38 GB Not Capable SAS HDD No
    DATASTORE – datastore cannot be created because of 4K sectors - FATAL ISSUE

    RAID5 (DATA)
    Physical Disk 0:1:4 Online 4 1862.50 GB Not Capable SAS HDD No
    Physical Disk 0:1:5 Online 5 1862.50 GB Not Capable SAS HDD No
    Physical Disk 0:1:6 Online 6 1862.50 GB Not Capable SAS HDD No
    DATASTORE – datastore can be created, but it is not officially supported by VMware because of 512e – RISK

    RAID Controller list of disks

    T17: C0:PD   Flags    State Type Size          S N F P Vendor   Product          Rev  P C ID SAS Addr         Port Phy DevH WU BFw  BRev
    T17: C0:------------------------------------------------------------------------------------------------------------------------------
    T17: C0:0    f1400005 00020 00   22ecb25b      0 0 0 1 TOSHIBA  AL13SXB30EN      DK02 0 0 500003969802a076 03   04  0010   1  NA   NA - 512b (512 Native)
    T17: C0:1    f1400005 00020 00   22ecb25b      0 0 0 1 TOSHIBA  AL13SXB30EN      DK02 0 0 500003969802a062 00   00  000a   1  NA   NA - 512b (512 Native)
    T17: C0:2    f1400005 00020 00   8bba5f5       0 0 0 1 HGST     HUC156060CS4204  EK11 0 0 5000cca059596e59 05   06  000e   1  NA   NA - 4kn (4k Native)
    T17: C0:3    f1400005 00020 00   8bba5f5       0 0 0 1 HGST     HUC156060CS4204  EK11 0 0 5000cca0595aa1cd 02   02  000c   1  NA   NA - 4kn (4k Native)
    T17: C0:4    f1400005 00020 00   e8e088af      0 0 0 1 SEAGATE  ST2000NX0273     NS28 0 0 5000c5008f3a8efd 04   05  000d   1  NA   NA - 512e (512 Emulation)
    T17: C0:5    f1400005 00020 00   e8e088af      0 0 0 1 SEAGATE  ST2000NX0273     NS28 0 0 5000c5008f3ae545 01   01  000b   1  NA   NA - 512e (512 Emulation)
    T17: C0:6    f1400005 00020 00   e8e088af      0 0 0 1 SEAGATE  ST2000NX0273     NS28 0 0 5000c5008f3b0661 06   07  000f   1  NA   NA - 512e (512 Emulation)
    T17: C0:20   01400005 00020 0d   0             0 0 0 0 DP       BP13G+           2.23 0 0 524180704c645200 00   08  0009   0  NA   NA

    T17: C0:100  00400005 00020 03   0             0 0 0 0 LSI      SMP/SGPIO/SEP    4402 0 0                0 00   ff  ffff   0  NA   NA

    CONCLUSION AND LESSONS LEARNED
    It is obvious that VMware will support 4K disks sometime in the future, because the industry is moving in that direction, but if you are planning to use directly attached disks, choose disks with a 512-byte native sector. This is an ESXi limitation at the moment. VMware VSAN is also impacted by this limitation because VSAN relies on ESXi.

    Update 2016-01-21:
    I have just received the following question from one reader ...
    "How can I list physical disks connected to internal PERC8?"
    Unfortunately, I don't have access to any 13G Dell server, but I did some research and there should be four available methods.

    Method 1/ racadm
    If you have a DRAC (Dell Remote Access Card) you can leverage racadm.
    Based on the racadm documentation, it should be possible:
    • Storage.PhysicalDisk.BlockSizeInBytes (Read Only) Description This is readonly attribute. This property indicates the logical block size of the physical drive that this virtual disk belongs to. Legal Values Values: 512 or 4096
    Method 2/ Export PERC Raid Controller Log with Dell Support Live Image Version 2.0

    http://de.community.dell.com/techcenter/support-services/w/wiki/369.export-perc-raid-controller-log-with-dell-support-live-image-version-2-0-englisch

    Method 3/ perccli
    Dell has a utility called perccli. You can check the perccli documentation for all the details, but there is a command for viewing physical drive details for a specified slot on the controller.

    • Syntax: perccli /c0/e32/s4 show all

    The downside of this method is that perccli binaries exist only for Windows and Linux, so you cannot use it directly from ESXi and you have to boot, for example, a Linux live CD.

    Method 4/ megacli (not supported)

    The fourth method leverages the LSI megacli utility. Dell PERC controllers are manufactured by LSI, so it should work. LSI provides a megacli VIB for ESXi, but it is not officially supported by VMware or Dell.
    See details at http://de.community.dell.com/techcenter/support-services/w/wiki/909.how-to-install-megacli-on-esxi-5-x




    Tuesday, January 12, 2016

    Datacenter Infrastructure Architectural Rules

    Reality is always more complex, but in general the following rules apply to any datacenter infrastructure architecture transforming to cloud principles ...

    Compute Rule
    Compute performance is relatively cheap, but CPU context switching is pricey.
    In other words, vCPU/pCPU ratio drives your consolidation.
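    As a trivial numeric illustration of the compute rule (the cluster numbers are made up):

```shell
# vCPU/pCPU consolidation ratio of a hypothetical cluster:
# 320 provisioned vCPUs on 64 physical cores -> 5:1
vcpu_ratio() {
  awk -v vcpu="$1" -v pcpu="$2" 'BEGIN { printf "%.1f\n", vcpu / pcpu }'
}

vcpu_ratio 320 64   # prints 5.0
```

    Whether a 5:1 ratio is acceptable depends on how sensitive the workloads are to CPU ready time; latency-critical workloads usually need a much lower ratio.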

    Storage Rule
    Storage capacity is relatively cheap, but I/O performance and response time is pricey.
    In other words, storage performance drives your consolidation.

    Network Rule
    Network bandwidth is relatively cheap, but latency is pricey.
    In other words, network latency drives your datacenter consolidation, geo clustering, DR architecture and hybrid cloud considerations.

    Operational Rule
    The infrastructure resources (compute/storage/network) are relatively cheap, but human resources are pricey.
    In other words, automate as much as possible and keep it as simple as possible.

    Monday, December 28, 2015

    VMware NSX useful resources

    THIS INFORMATION IS OBSOLETE AS IT COVERS VMWARE NSX-V, LATER REPLACED BY VMWARE NSX-T AND NOW REPLACED BY SIMPLY VMWARE NSX.

    Keeping the page just as a list of internet links to historical NSX-V and NSX-T resources.
    ==================================================================

    I'm trying to deep dive into VMware Network Virtualization (NSX) and I have decided to collect all the useful resources I find during my journey.

    There are two NSX flavors. NSX-v and NSX-T. It is good to read NSX-v vs NSX-T: Comprehensive Comparison to understand differences.

    Fundamentals
    NSX-V Design
    NSX-T Design
    NSX-V Operations
    NSX-T Operations
    NSX Automation
    NSX in Home Lab
    NSX Advanced
    NSX Security
    NSX Dynamic Routing


    Networking 


    NSX Tutorial


    Other lists of resources
    • Rene Van Den Bedem (@vcdx133): NSX Link-O-Rama - a great list of resources gathered by Rene
    Tools
    • ARKIN - network visibility and analytics
    This list will be continuously updated.
    If you know of any other useful NSX resource, don't hesitate to write a comment with a link.

    Thursday, December 17, 2015

    End to End QoS solution for VMware vSphere with NSX on top of Cisco UCS

    I'm engaged in a private cloud project where end-to-end network QoS is required to achieve guarantees for particular network traffic classes. These traffic classes are:
    • FCoE Storage
    • vSphere Management
    • vSphere vMotion
    • VM production
    • VM guest OS agent based backup <== this is the most complex requirement in context of QoS
    Compute and Network Infrastructure is based on
    • CISCO UCS
    • CISCO Nexus 7k and 
    • VMware NSX. 
    More specifically, the following hardware components have to be used:
    • CISCO UCS Mini Chassis 5108 with Fabric Interconnects 6324 
    • CISCO UCS servers B200 M4 with virtual interface card VIC1340 (2x10Gb ports - each port connected to different fabric interconnect)
    • CISCO Nexus 7k
    The customer is also planning to use NSX security (micro-segmentation) and vRealize Automation for automated VM provisioning.

    So now we understand the overall concept and we can consider how to achieve end-to-end network QoS to differentiate the required network traffic classes.

    Generally, we can leverage Layer 2 QoS 802.1p (aka CoS - Class of Service) or Layer 3 QoS (aka DSCP - Differentiated Services Code Point). However, to achieve end-to-end QoS on Cisco UCS we have to use CoS, because it is the only QoS method available inside the Cisco UCS blade system to guarantee bandwidth on shared 10Gb NIC (better to say CNA) ports.

    The most important design decision point is where we will do the CoS marking to differentiate the required traffic classes. The following two options are generally available:
    1. CoS marking only in Cisco UCS (hardware-based marking)
    2. CoS marking on vSphere DVS portgroups (software-based marking)
    Let's deep dive into the available options.

    OPTION 1 - UCS Hardware CoS marking

    Option 1 is depicted on figure below. Please, click on figure to understand where CoS marking and bandwidth management is done.


    The following bullets describe the key ideas of Option 1:
    • Management and vMotion traffic is consolidated on the same pair of 10Gb adapter ports (NIC-A1 and NIC-B1) together with FCoE traffic. Active/Standby teaming is used for the vSphere Management and vMotion portgroups, where each traffic class is by default active on a different UCS fabric.
    • VTEP and backup traffic is consolidated on the same pair of 10Gb adapter ports (NIC-A2 and NIC-B2). Active/Standby teaming is used for the NSX VTEP and backup portgroups. Each traffic class is by default active on a different UCS fabric.
    • Multiple VTEPs and Active/Active teaming for the backup portgroup can be considered for higher network performance if necessary.
    • All vmkernel interfaces should be configured consistently across all ESXi hosts to use the same ethernet fabric in the non-degraded state and achieve optimal east-west traffic.
    Option 1 implications:
    • Two vNICs per virtual machine have to be used, because one is used for production traffic (VXLAN) and the second one for backup traffic (a VLAN-backed portgroup with CoS marking).
    OPTION 2 - vSphere DVS CoS marking:

    Option 2 is depicted on figure below. Please, click on figure to understand where CoS marking and bandwidth management is done.

    The following bullets describe the key ideas of Option 2 and the differences against Option 1:
    • Management and vMotion traffic is also consolidated on the same pair of 10Gb adapter ports (NIC-A1 and NIC-B1) together with FCoE traffic. The difference against Option 1 is the usage of a single CISCO vNIC and CoS marking in DVS portgroups. Active/Standby teaming is used for the vSphere Management and vMotion portgroups, where each traffic class is by default active on a different UCS fabric.
    • VTEP and backup traffic is consolidated on the same pair of 10Gb adapter ports (NIC-A2 and NIC-B2). Active/Standby teaming is used for the NSX VTEP and backup portgroups. The difference against Option 1 is the usage of a single CISCO vNIC and CoS marking in DVS portgroups. Each traffic class (VXLAN, backup) is by default active on a different UCS fabric.
    • Multiple VTEPs and Active/Active teaming for the backup portgroup can be considered for higher network performance if necessary.
    • All vmkernel interfaces should be configured consistently across all ESXi hosts to use the same ethernet fabric in the non-degraded state and achieve optimal east-west traffic.
    • FCoE traffic is marked automatically by UCS for any vHBA. This is the only hardware-based CoS marking in this option.
    Option 2 implications:
    • Two vNICs per virtual machine have to be used, because one is used for production traffic (VXLAN) and the second one for backup traffic (a VLAN-backed portgroup with CoS marking).

    Both options above require two vNICs per VM, which introduces several challenges, some of them listed below:
    1. More IP addresses are required
    2. The default IP gateway is used for the production network, therefore the backup network cannot be routed without static routes inside the guest OS.
    Do we have any possibility to achieve QoS differentiation between VM traffic and in-guest backup traffic with a single vNIC per VM?

    We can enhance Option 2 a little bit. The enhanced solution is depicted in the figure below. Please click on the figure to understand where CoS marking and bandwidth management is done.


    So where is the enhancement? We can leverage conditional CoS marking in each DVS portgroup used as an NSX virtual wire (aka NSX Logical Switch or VXLAN). If the IP traffic is targeted at the backup server, we mark it as CoS 4 (backup); otherwise we mark it as CoS 0 (VM traffic).

    You can argue that VXLAN is L2 over L3, and thus the L2 traffic where we did the conditional CoS marking will be encapsulated into L3 traffic (UDP) at the VTEP interface and we will lose the CoS tag. However, that's not the case. The VXLAN protocol is designed to keep L2 CoS tags by copying the inner Ethernet header into the outer Ethernet header. Therefore the virtual overlay CoS tag is kept even in the physical network underlay, and it can be leveraged in Cisco UCS bandwidth management (aka DCB ETS - Enhanced Transmission Selection) to guarantee bandwidth for particular CoS traffic classes.

    The enhanced Option 2 seems to me to be the best design decision for my particular use case and specific requirements.

    I hope it makes sense and someone else finds it useful. I'm sharing it with the IT community for broader review; any comments or thoughts are very welcome.

    Monday, November 16, 2015

    Creating a Capacity & Performance Management Dashboard in vRealize Operations 6.x

    I'm a long-time proponent of performance SLAs in modern virtual datacenters. A performance SLA is nothing else than a mutual agreement between a service provider and a service consumer. The agreement describes what performance of a particular resource the consumer can expect and the provider should guarantee. A performance SLA is important mainly for shared resources; on dedicated resources the consumer knows exactly what to expect from a performance point of view. In modern virtualized datacenters almost everything is shared, therefore performance SLAs are a must and all service consumers should require them.

    The most important shared resources in virtualized infrastructures with a significant impact on application performance are CPU and disk. The remaining infrastructure resources - memory and network - are important as well, but CPU and disk performance were the typical final root cause of the performance troubleshooting I did over several years. In VMware vSphere we can typically identify CPU contention by the CPU %RDY metric and storage contention by the disk response time of normalized I/Os. We can identify such issues during troubleshooting when infrastructure consumers are complaining about application performance. We call this the reactive approach. A more mature approach is to identify potential performance issues before applications are affected. We call this the proactive approach, and that's where performance SLAs and threshold monitoring come into play.

    An infrastructure performance SLA can look like:

    • CPU RDY is below 3% (notification threshold 2%)
    • If # of vDisk IOPS < 1000 then vDisk response time is below 10ms (notification threshold 7ms)

    Simple, right? These two bullets should be clearly articulated, explained and agreed between the infrastructure service provider and the infrastructure consumers building and providing application services on top of the infrastructure.
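    The two bullets above can be sketched as a trivial compliance check (the function and its name are mine, and I use the SLA limits rather than the notification thresholds):

```shell
# Evaluate the example infrastructure performance SLA:
#   - CPU %RDY must stay below 3
#   - if vDisk IOPS < 1000, vDisk response time must stay below 10 ms
sla_check() {
  rdy=$1; iops=$2; rt_ms=$3
  awk -v rdy="$rdy" -v iops="$iops" -v rt="$rt_ms" 'BEGIN {
    ok = 1
    if (rdy >= 3) ok = 0
    if (iops < 1000 && rt >= 10) ok = 0
    print (ok ? "OK" : "VIOLATED")
  }'
}

sla_check 1.5 800 6    # OK
sla_check 4.0 800 6    # VIOLATED (CPU ready above threshold)
sla_check 2.0 500 12   # VIOLATED (low IOPS but slow response time)
```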

    So now, how do we monitor these performance metrics? I have just found a blog post by Sunny Dua and Iwan Rahabok covering this topic with a step-by-step solution in vRealize Operations 6.x. Sunny and Iwan prepared and shared with the community customized vROps supermetrics, views and dashboards for performance capacity planning. To be honest, I do not have much experience with vROps so far, but it seems to be a very helpful tool for anybody using vRealize Operations as a monitoring platform.

    Let's try to build and provide mature IT with clearly articulated SLAs and with expectations agreed between service providers and service consumers.

    Friday, November 06, 2015

    VMware Tools 10 and "shared productLocker"

    VMware Tools (aka VM tools, vmtools) were always distributed together with the ESXi image; however, this changed with VMware Tools 10. VMware is now shipping VMware Tools also outside of the vSphere releases. For more information look at this blog post.

    Where can I get VMware Tools?

    Option 1/ VMware Tools 10 can be downloaded from my.vmware.com. More specifically from this direct URL. Please be aware, that you must be logged in to my.vmware.com before direct link works.
     
    UPDATE: Direct link to Broadcom is https://support.broadcom.com/group/ecx/productdownloads?subfamily=VMware%20Tools

    Option 2/ VMware Tools can also be downloaded from here without any login credentials required. The latest version (10.0.6 at the moment of writing this blog post) is available here. The benefit of option (2) is that there are binaries which can be used directly within guest operating systems.

    Option 3/ open-vm-tools. This option is available for Linux-based and FreeBSD operating systems. You can read more about it here, on SourceForge or on GitHub. Optimally, open-vm-tools should be available through the standard package manager of your unix-like operating system.

    It is worth mentioning that
    • VMware Tools 10 is backward compatible and independent of the ESXi version.
    • You can share VMware Tools 10 packages with application/OS owners and they can update VMware Tools by themselves during their OS update procedure. But even if your OS owners update VMtools by themselves, it is still worth having VMware Tools available in ESXi to be able to compare VMtools versions from the vSphere point of view.
    VMtools versions

    VMtools report their version as a number. For example, version number 9536 is version 9.10.0. You can map a VMtools version number to a human-readable version by leveraging this file.
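    The mapping can also be computed: the internal number packs the version as major*1024 + minor*32 + patch. This formula matches the 9536 example above, but cross-check the official mapping file for any given build:

```shell
# Decode a VMware Tools internal version number into major.minor.patch.
# Assumed encoding: version = major * 1024 + minor * 32 + patch
vmtools_version() {
  v=$1
  echo "$(( v / 1024 )).$(( (v % 1024) / 32 )).$(( v % 32 ))"
}

vmtools_version 9536   # prints 9.10.0, as mentioned in the text
```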

    Updates

    Ok, so what? If you update your VMware Tools the old way (together with the ESXi image), you - or VMware Update Manager - have to upload VMware Tools to each ESXi host, with the following impacts:
    1. It takes significantly more time, especially in bigger environments.
    2. You can potentially end up with different VMtools versions on different ESXi hosts in your datacenter. This can be reported as outdated VMtools after a vMotion of a particular VM across different ESXi hosts.
    The thing is that VMware Tools 10 and above don't need to be updated automatically with the ESXi update on each ESXi host. You can update ESXi hosts without VMware Tools and later update the VMware Tools bundle in just a single shared place - the shared productLocker location.

    So what actually is the "productLocker"? The "productLocker" is essentially the VMware Tools directory. This directory exists on each ESXi host by default; however, it can be reconfigured to point to a directory on a shared datastore. Such a configuration is called a "shared productLocker" and it enables us to do centralized updates of VMtools. It is worth mentioning that all your ESXi hosts must be reconfigured to use this shared location.

    Reconfiguration has to be done via the ESXi host advanced configuration option UserVars.ProductLockerLocation. It can be changed manually on each host, automatically via a custom PowerCLI script, or, if you have the Enterprise Plus edition, you can leverage Host Profiles. The last option works perfectly for me.

    Below is screenshot showing /productLocker directory structure and content on ESXi 6 host ...

    /productLocker directory structure and content 
    If you use a central location for VMware Tools then you don't need to update ESXi hosts with the full ESXi image, but only with the ESXi image without VMware Tools. See the example of the different profiles in the ESXi 6 Update 2 image below.
    [root@esx01:~] esxcli software sources profile list -d /vmfs/volumes/NFS-SYNOLOGY-SATA/ISO/update-from-esxi6.0-6.0_update02.zip
    Name                              Vendor        Acceptance Level
    --------------------------------  ------------  ----------------
    ESXi-6.0.0-20160301001s-no-tools  VMware, Inc.  PartnerSupported
    ESXi-6.0.0-20160302001-standard   VMware, Inc.  PartnerSupported
    ESXi-6.0.0-20160301001s-standard  VMware, Inc.  PartnerSupported
    ESXi-6.0.0-20160302001-no-tools   VMware, Inc.  PartnerSupported
    Profile names with the no-tools postfix can be used for an ESXi update without updating VMware Tools on each ESXi host. For further details on how to update ESXi hosts with a particular profile see my other post - How to update ESXi via CLI.
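    As a sketch, updating a host with a no-tools profile from the listing above would look like this (the depot path matches my lab NFS datastore; put the host into maintenance mode first):

```shell
# Apply the image profile that excludes VMware Tools, so the shared
# productLocker stays the single source of VMtools:
esxcli software profile update \
  -d /vmfs/volumes/NFS-SYNOLOGY-SATA/ISO/update-from-esxi6.0-6.0_update02.zip \
  -p ESXi-6.0.0-20160302001-no-tools
```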

    The current ESXi host product locker location can be displayed by the esxcli command
    esxcli system settings advanced list -o /UserVars/ProductLockerLocation
    and the output should look like ...
    [root@esx01:~] esxcli system settings advanced list -o /UserVars/ProductLockerLocation   
    Path: /UserVars/ProductLockerLocation   
       Type: string 
       Int Value: 0
       Default Int Value: 0
       Min Value: 0   
       Max Value: 0   
       String Value: /locker/packages/6.0.0   
       Default String Value: /locker/packages/6.0.0   
       Valid Characters: *   
       Description: Path to VMware Tools repository
    To change the location you can use the following esxcli command
    esxcli system settings advanced set -o /UserVars/ProductLockerLocation --string-value "/vmfs/volumes/NFS-SYNOLOGY-SATA/VMtools/latest"
    And you can verify that the setting was changed ...
    [root@esx02:~] esxcli system settings advanced list -o /UserVars/ProductLockerLocation
       Path: /UserVars/ProductLockerLocation
       Type: string
       Int Value: 0
       Default Int Value: 0
       Min Value: 0
       Max Value: 0
       String Value: /vmfs/volumes/NFS-SYNOLOGY-SATA/VMtools/latest   
       Default String Value: /locker/packages/6.0.0   
       Valid Characters: *   
       Description: Path to VMware Tools repository
    The ESXi host has to be rebooted to activate the new product locker location.

    Hope this helps other folks in VMware community to simplify operations with VMware Tools.

    UPDATE 2018-02-05: I have just been told about very nice PowerCLI cmdlets for managing VMtools. The cmdlet Update-VMToolsImageLocation updates the /productLocker link on the ESXi host directly to avoid a host reboot, and the cmdlet Invoke-VMToolsUpgradeInVMs in combination with a shared productLocker is a very nice way to automatically update VMtools.
    All VMtools management cmdlets are available on GitHub here:
    https://github.com/vmware/PowerCLI-Example-Scripts/blob/master/Modules/VMToolsManagement/VMToolsManagement.psm1

    UPDATE 2019-04-04: A new VMware blog post about this topic was published in January 2019: Configuring a VMware Tools Repository in vSphere 6.7U1.
    In the comments there is a link to PowerCLI code leveraging the vSphere API to change the productLocker location. The link to the code is here. The code is clear and simple ...

     # Placeholder names - replace with your ESXi host, datastore and folder
     $esxName = 'MyEsx'  
     $dsName = 'MyDS'  
     $dsFolder = 'ToolsRepo'  
     $esx = Get-VMHost -Name $esxName  
     $ds = Get-Datastore -Name $dsName  
     # Read the current productLocker location via the vSphere API
     $oldLocation = $esx.ExtensionData.QueryProductLockerLocation()  
     # Build the new path from the datastore URL and the repository folder
     $location = "/$($ds.ExtensionData.Info.Url.TrimStart('ds:/'))$dsFolder"  
     $esx.ExtensionData.UpdateProductLockerLocation($location)  
     Write-Host "Tools repository moved from"  
     Write-Host $oldLocation  
     Write-Host "to"  
     Write-Host $location  
    

    References


    Wednesday, October 07, 2015

    How to restore deleted vmdk from VMFS5

    Yesterday I got an e-mail from somebody asking me how to restore a deleted vmdk from VMFS5. They deleted a VM but then realized it contained very important data.

    The typical answer would be - "Restore from backup" - however, they wrote that they don't have a backup.

    Fortunately, I have never had the need to restore a deleted vmdk myself, so I started to do some quick research (aka googling :-) )

    I found VMware KB 1015413 with the following statement ...
    "VMware does not offer data recovery services. This article provides contact information for data recovery service providers.
    Note: VMware does not endorse or recommend any particular third-party utility, nor is this list meant to be exhaustive."
    I was sitting in the VMware office, so I asked colleagues whether they had any practical experience with undeleting a vmdk from VMFS. One colleague of mine suggested the utility "VMFS Recovery" from DiskInternals.com; he had positive experience with this particular tool. His suggestion was to use the trial version, which should help to identify whether recovery is possible, and buy the full version for the actual recovery.

    Warning: Use any third-party recovery tool at your own risk.

    I absolutely understand that if you lose your important data you would like to try anything to recover it; however, here are my general thoughts:

    • Clone or snapshot your raw disk (LUN) with VMFS before any recovery attempt (use storage array capabilities, a third-party imaging tool or the *nix dd command)
    • If your data is very valuable to you, consider engaging expert data recovery services
    • If you do the recovery yourself, I wish you good luck.
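    The first bullet (image the LUN before touching it) can be illustrated with dd. The sketch below uses an ordinary file as a stand-in for the raw LUN device (which would really be something like /dev/sdb or a /vmfs/devices/disks/naa.* path), so only the principle is shown:

```shell
# Stand-in for the raw LUN holding the VMFS volume (hypothetical):
dd if=/dev/urandom of=/tmp/fake_lun.img bs=1M count=4 2>/dev/null

# Take a bit-for-bit image BEFORE any recovery attempt:
dd if=/tmp/fake_lun.img of=/tmp/lun_backup.img bs=1M 2>/dev/null

# Verify the image matches the source before experimenting on it:
cmp /tmp/fake_lun.img /tmp/lun_backup.img && echo "image verified"
```

    All recovery experiments should then run against a copy of the image, never against the original device.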
    If you have some other experience with this topic, please share it with the community in the comments.

    Thursday, October 01, 2015

    VMware VM Template deployment and MS Windows product license activation

    In the past I was informed by some of my customers that the MS Windows Server license was not properly applied and activated during VMware VM template deployment, even though the Product Key was properly entered in the "Customization Specification".

    I don't know if this issue still exists in the latest vSphere version; however, there has always been a pretty easy workaround my customers have been using since then.

    You can use Run Once commands in the "Customization Specification". Below, in the vSphere Web Client screenshot, you can see an example where two MS-DOS commands (dir and echo) are used.


    For the application and activation of the MS Windows license key, the tool slmgr.vbs is leveraged, which stands for "Windows Software Licensing Management Tool".

    The exact Run Once commands are:
    • C:\Windows\System32\slmgr.vbs -ipk H7Y93-12345-54321-ABCDE (just an example, use your own product key)
    • C:\Windows\System32\slmgr.vbs -ato

    The first command installs the product key and the second one activates Windows.

    Wednesday, September 02, 2015

    Storage related problems with LSI 12Gb SAS HBA card

    Our Dell field engineer experienced strange storage problems with SAS storage connected to ESXi hosts having LSI 12Gb SAS HBAs. Datastores were inaccessible after an ESXi reboot, paths were temporarily unavailable, etc. In this particular case it was a DELL Compellent storage with SAS front-end ports, but the problem was not related to that particular storage, and a similar issue can be experienced on other SAS storage systems like the DELL PowerVault MD series.

    He found (thanks, Google) Christopher Glemot's blog post describing other issues with the LSI 12Gb SAS HBA. However, the key point was that the mpt3sas driver should be used, and not the msgpt3 driver which was installed on these particular ESXi hosts.

    The solution is relatively easy:

    • Uninstall the msgpt3 driver - esxcli software vib remove -n lsi-msgpt3
    • Download the latest mpt3sas driver (currently mpt3sas-10.00.00.00-6.0-2803883.zip)
    • Unpack and upload the mpt3sas driver to the ESXi host
    • Install the mpt3sas driver on the ESXi host - esxcli software vib install -d /path/mpt3sas-10.00.00.00-6.0-offline_bundle-2803883.zip
    • Restart the ESXi host
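    After the reboot it is worth verifying the result. A small sketch with standard esxcli commands (output columns differ between ESXi builds):

```shell
# The mpt3sas VIB should now be listed and lsi-msgpt3 should be gone:
esxcli software vib list | grep -i mpt3sas

# The SAS adapter should be claimed by the mpt3sas driver:
esxcli storage core adapter list
```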
    Hope this helps.