vSAN – Check VM Storage Policy & Compliance

As I continue to work with vSAN I discover there’s way more to do than just move some VMs over and you’re on your way. With multiple vSAN clusters, each with different configurations, I needed a way to monitor the current setup and check for changes. While creating a simple script to check which VM Storage Policy is assigned to each VM isn’t very difficult, creating a script to check the storage policy of VMs across multiple vSAN datastores proved to be a little more difficult.

We run multiple PowerCLI scripts to check health and configuration drift (thanks to a special tool created by Nick Farmer) in our environment. In the event that a new vCenter is added or a new vSAN datastore is deployed, we needed a simple script that can be run without any intervention or modification. Now we can be alerted when the proper VM storage policy isn’t assigned or the current policy is out of compliance.

To further complicate things in our setup, we create a new VM Storage Policy that contains the name of the cluster to which it’s assigned (for example, a VM in Cluster01 should use the “Cluster01” storage policy). Due to the potential differences in each vSAN cluster (stripes, failures to tolerate, replication factor, RAID, etc.), having a single storage policy does not work for us. In the event a VM is migrated from one vSAN cluster to another, we need to check that the VM storage policy matches the policy of its new cluster’s vSAN datastore.

What this script does is grab all the clusters in a vCenter that have vSAN enabled. For each vSAN-enabled cluster it finds, it filters only the VMs that live on vSAN storage (on a datastore named “<cluster>-vsan”). Then we get the storage policy-based management configuration (Get-SpbmEntityConfiguration) of those VMs. The script then filters for any VM with a storage policy that doesn’t contain the cluster name OR a compliance status that isn’t compliant.

# Find all clusters with vSAN enabled
$vsanClusters = Get-Cluster | Where-Object {$_.VsanEnabled -eq $true}
foreach ($cluster in $vsanClusters)
{
    # Only look at VMs that live on this cluster's vSAN datastore ("<cluster>-vsan")
    $cluster | Get-VM | Where-Object {($_.ExtensionData.Config.DatastoreUrl | ForEach-Object {$_.Name}) -like "*-vsan*"} |
    Get-SpbmEntityConfiguration |
    # Flag VMs whose policy doesn't contain the cluster name or that aren't compliant
    Where-Object {$_.StoragePolicy -notlike "*$cluster*" -or $_.ComplianceStatus -ne "compliant"} |
    Select-Object Entity, StoragePolicy, ComplianceStatus
}

Once this is run we can see the output below. I’ve obscured the names of the VMs, but we can see that there are still 12 VMs using the default vSAN storage policy instead of the cluster-specific storage policy they should be using. In addition, we see that the compliance status is currently out of date on most of these VMs. These VMs reside on 2 separate clusters, and 2 more VMs were filtered out because they live on local storage in these clusters instead of vSAN.
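A mock-up of that output (the VM names are placeholders, and the default policy name may vary by vSAN version):

Entity      StoragePolicy                        ComplianceStatus
------      -------------                        ----------------
vm-app-01   Virtual SAN Default Storage Policy   outOfDate
vm-app-02   Virtual SAN Default Storage Policy   outOfDate
vm-db-01    Virtual SAN Default Storage Policy   compliant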


Veeam v9 – New Feature Announcements

While the need for backups hasn’t changed, how you use those backups has. Not only that, the speed at which we can recover our data is changing as well. As the cost of downtime continues to grow, having to restore an entire server just to recover one file or a small number of files just won’t cut it. Your backups need to run quickly and restore even faster.

The improvements in Veeam v9 are doing just that. Veeam has been introducing faster and faster ways to back up and restore (while limiting the impact on production virtual machines during backups) for years, and v9 is no exception. There are a few new options I want to touch on that address pain points I’ve experienced in my environments.

1. Backups from SnapMirror/SnapVault destinations
As a former NetApp admin, I love the idea of minimizing the effect of backups on my virtual machines. By enabling backup from SnapMirror destinations, you can get your VMs offsite using built-in software on your NetApp array, and then create off-SAN backups that aren’t limited by your SnapMirror retention schedule and its space constraints.

2. Direct NFS Backup Mode
Direct SAN access has been in Veeam Backup & Replication forever, but backing up VMs on your NFS datastores was a different story: a proxied connection through an ESXi host was required to back up these VMs. In v9, a brand new NFS client was written by the engineers at Veeam to connect directly to your NFS volumes and back up VMs without additional host impact, latency, or speed constraints.

3. Per-VM backup file chain
As the size of your backup job grows, managing that backup file becomes painful. As your backup repository begins to fill up, you’re left having to migrate the entire backup file to a new repository. With a per-VM backup file chain, one job can be created for all of your virtual machines, but each VM gets its own file chain. This feature is especially useful with the next feature I’ll talk about.

4. Scale-out Backup Repository
Backup repository management has always been one of the largest pain points when managing Veeam backup jobs. In my first Veeam setup I was limited to 2TB LUNs on my backup server, and I had to create 8 of them to store my backups. Because backup jobs couldn’t span repositories, I was creating individual jobs tied to repositories and then rebalancing as repositories began to fill. The Scale-out Backup Repository feature allows a virtual backup repository to be created on top of your current physical repositories. Now fewer jobs need to be created, and you’re able to take advantage of all the space in each repository. Thanks to Luca Dell’Oca for clarifying that maintenance mode and evacuation are also supported. This means that if a repository needs to be taken down (due to SAN maintenance, for example), it can be placed in maintenance mode and excluded from the scale-out repository during maintenance operations.

For me, these are the big features I’m happy to see in Veeam v9. There are additional features such as explorers for Oracle, Active Directory (support for AD-integrated DNS and GPO restoration!), SQL Server, and SharePoint. The entire list of new features can be found at the link below.

Click here for all the feature announcements.


Track Datastore Add & Removes With PowerCLI

While working with the data protection team at my job I was asked if there was any way to track new datastores being added to a vSphere cluster. When new LUNs are allocated to our vSphere clusters, the data protection team isn’t always made aware ahead of time. Normally this isn’t a big deal, but in our case we have a product that requires access to specified datastores for backups. In order to maintain access to these virtual machines for backup purposes, we need to be notified when new datastores are added.

As I sat and thought about how I could accomplish this task I came up with a couple of ideas, but figured a scheduled task with PowerCLI/PowerShell would be the easiest to implement. In this script we connect to the vCenter server, get all the datastores in the cluster, write a date-stamped file daily, then compare the current and previous day’s output files and write the result to a third file that displays only the datastores that have been added or removed.

I’ve broken down the script so I can explain each section, making it easy to understand. Before I had any knowledge of PowerShell/PowerCLI, modifying something to fit my environment when I didn’t understand what was happening at each step was time-consuming and frustrating.

1. This is where we define the name of the vCenter instance we’ll be connecting to and the name of the cluster we’re interested in.

$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

2. This is where we define the output location for our datastores and difference file. I chose to drop it into a folder named for the cluster, but that can be removed.

$filePath = "C:\test\" + $Cluster + "\"

3. This is where we connect to vCenter and then immediately wait 15 seconds, which avoids issues with commands running before security warnings are displayed.

Connect-VIserver $vCenter
Start-Sleep -s 15

4. This will gather all the datastores in the cluster and exclude any datastore whose name contains “-local”. The wildcard is important because the local datastores are named “<servername>-local”; if the wildcard wasn’t there, none of the local datastores would be excluded, because no datastore is named exactly “-local”.

$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

5. I prefer the format of 2-digit month, 2-digit day, 2-digit year. This will get the current date of the system running the script, then convert it to that format (051415, for example).

$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

6. This will set the file name and location for the output from 2 days ago; if that file exists, it will be removed. Rather than keeping an output file from every day until I manually remove them, this process seemed better. I chose to delete the file from 2 days ago, as opposed to deleting yesterday’s file right after the comparison runs, so that if we see a huge change in the difference file we can manually compare the two most recent files to try to find the error.

$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

7. This will set the file path and name to the file path defined at the top, plus the cluster name, plus the date, with “.txt” added to the end.

$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

8. Here we are exporting all the datastores from Step 4 by name and outputting to the file name/path defined in Step 7.

$Datastores | Select Name | Out-File $CurrentFile

9. This is where we set the name and path for the difference file that will track the datastore add/remove.

$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

10. This will read the content of today’s and yesterday’s files.

$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

11. Here we are comparing the content we just read in step 10.

$Compare = Compare-Object $YesterdaysContent $CurrentContent

12. The standard “Compare-Object” output shows differences with a side indicator of <= or =>, depending on which input the difference exists in. Rather than remember which file was read first to determine whether a datastore was added or removed, we change the labels. If a datastore existed yesterday but is missing today, it is labeled “Removed”. If a datastore didn’t exist yesterday but does today, it is labeled “Added”.

$compare | foreach {
if ($_.sideindicator -eq '<=')
{$_.sideindicator = "Removed"}

if ($_.sideindicator -eq '=>')
{$_.sideindicator = "Added"}
}

13. This will take the results from step 11, with the formatting from step 12, and change the column names. The list of objects compared is normally named “InputObject”, and the added/removed flag is normally “SideIndicator”. Maybe that’s fine, but I prefer something a little easier to read, so “InputObject” is renamed to “Datastore” plus the current date, and “SideIndicator” becomes “Added or Removed”. Once that is done, we output the result to the path and name defined in Step 9. The reason we include the current date in the “Datastore” column is that we are using “-Append” with the “Out-File” command: each run adds a dated entry of the changes that occurred to the bottom of the existing (or new) output file. Rather than overwriting the same file every day, we just add to it, so if you forget to check the file for a few days you won’t lose that data.

$Compare |
select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
Out-File -Append $DifferenceFile

Now that we know what this thing does, let’s see it in action. I ran the script over 3 days and this is how the output files are displayed. We can see that on 05-14-15 we added Lab-Datastore-10, which didn’t exist on 05-13-15. Then on 05-15-15 we removed Lab-Datastore-03 and added -11 and -12.
[Image: the three dated datastore output files]

When running the script I commented out the removal of the 2-day-old file so we could compare manually. Now we have an output file created (Datastore-Changes.txt) that should show the differences.
[Image: the cluster folder now containing Datastore-Changes.txt]

Inside Datastore-Changes.txt we see that on 5/14 the datastore “Lab-Datastore-10” was added and on 5/15 we lost Lab-Datastore-03, but added 11 and 12.
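Reconstructed from that description, the contents of Datastore-Changes.txt look roughly like this (timestamps are placeholders):

Datastore - 05/14/2015 06:00:00   Added or Removed
-------------------------------   ----------------
Lab-Datastore-10                  Added

Datastore - 05/15/2015 06:00:00   Added or Removed
-------------------------------   ----------------
Lab-Datastore-03                  Removed
Lab-Datastore-11                  Added
Lab-Datastore-12                  Added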


We can delete this file at any time and the next time the script runs we’ll create a brand new file. This means there is no dependency on this file already existing in order for the script to run, and it doesn’t require us to keep a long list of all the datastore adds/removes for all eternity. Now you’ll just need to save the script and schedule it to run using Windows Task Scheduler.
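As a sketch, assuming the script is saved to a hypothetical path of C:\Scripts\Datastore-Changes.ps1, a daily task could be registered like this:

schtasks /Create /TN "Datastore-Changes" /SC DAILY /ST 06:00 /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Datastore-Changes.ps1"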

Below is the full script with comments.

#Define the vCenter Server and Cluster
$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

#Set the path location for the output files
$filePath = "C:\test\" + $Cluster + "\"

#Connect to the vCenter Server and sleep for 15 seconds (necessary for security warnings)
Connect-VIserver $vCenter
Start-Sleep -s 15

#Get a list of all the datastores
$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

#Get the current date in the correct format
$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

#Delete the output from 2 days ago (Remove this section if you want to keep the history)
$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

#Set the filename to include today's date
$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

#Export those datastores to a TXT file
$Datastores | Select Name | Out-File $CurrentFile

#Set file name & path for difference file
$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

#Get the content for yesterday and today's files
$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

#Compare yesterday's and today's files
$Compare = Compare-Object $YesterdaysContent $CurrentContent

#Change the source/target column to "Removed" and "Added"
$compare | foreach { 
      if ($_.sideindicator -eq '<=')
        {$_.sideindicator = "Removed"}

      if ($_.sideindicator -eq '=>')
        {$_.sideindicator = "Added"}
     }

#Change the column name output to "Datastore + Date" and "Added or Removed" then output to file
 $Compare | 
   select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
   Out-File -Append $DifferenceFile

Add NFS Datastore to Cluster via PowerCLI

I have been digging into more and more PowerCLI over the last month or so, trying to explore faster ways to accomplish common tasks. Using the NetApp VSC plugin inside vCenter I can provision a brand new NFS datastore to an entire cluster in just a few clicks, but there is no built-in way to do this for mounting an existing datastore. The script below is just a simple way to mount an NFS datastore to every host in a named cluster.

$ClusterName = "ProdCluster"
$DatastoreName = "VM_Win2003_NA5"
$DatastorePath = "/vol/VM_Win2003_NA5"
$NfsHost = "192.168.1.5"
Get-Cluster $ClusterName | Get-VMHost | New-Datastore -NFS -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost

Or you can replace each variable with the actual values, repeating the line for each datastore when mounting multiple datastores in the same script.

get-cluster "ProdCluster" | get-vmhost | New-Datastore -NFS -Name "VM_Win2003_NA5" -Path "/vol/VM_Win2003_NA5" -NfsHost 192.168.1.5

The next step here will be running this script from vCO and passing the variables directly from vCO. Maybe one day I’ll have the time to figure out just how to do that…
