Provision New Floating IP on Tegile

As I begin the process of reconfiguring my Tegile from a test/lab array into a production array, I thought it would be a great opportunity to document more of the setup and provisioning steps involved in administering the array. In our environment we are using 10GbE without configuring LACP on the switches, letting the Tegile handle network availability itself. Obviously every environment is different; this is just the approach we took for this array.

These steps walk you through provisioning an additional VLAN on the 10GbE interfaces and then creating a floating IP address that is owned by whichever node is running the disk pool.

1. Log in to the non-shared management IP of each HA node
2. Log in as “admin” with the correct password
3. Click on the “Settings” tab and then click “Network”
4. Under “Network Settings” on the left column, click on “Interface”
5. Under “Physical Network Interfaces”, click on one of the 10GbE interfaces (named ixgbe2 and ixgbe3 on this array). Click the “+” to add a VLAN
6. Enter the VLAN name (following the naming convention below) and the VLAN number, then click “OK”.
a. Our naming convention is protocol + interface number + _ + VLAN number. We are using cifs on interface “ixgbe3” and the VLAN is 100, so the name here is “cifs3_100”
7. Click “OK” on the message about saving the config
8. Repeat steps 5-7 for the other 10GbE interface, changing the name to reflect the number of that interface.
9. Click “Save” to bring these new VLANs online
    a. Notice that the state changes to “up” after saving
10. Now we need to assign an IP address to these interfaces. We are not using LACP, so under “IP Groups” click the “Add IP Group” button
11. Click the arrow next to “Network Properties”. Enter the name, check the boxes next to the newly created VLANs we added to each interface, then enter the IP address and netmask for this new subnet. Click “OK”
a. The naming convention is ipmp + _ + protocol + filer node number (here “ipmp_cifs1”). IPMP is “IP Multipathing”, cifs is the protocol, and this is node “A”, which is the first node
b. Click “OK” on the message about saving the config
12. Now we see the IPMP group has been created, but it isn’t up yet.
13. Click the “Save” button at the bottom
14. Click “OK” for confirmation
15. Now we can see that the interface is up
16. Repeat these steps on the other node of the HA pair, changing the IP Group name to “ipmp_cifs2” and choosing a different IP address
17. Back on the primary node, click on “Settings” then “HA”
18. On the active resource group (we only have one, which is “Resource Group A”), click “Add Floating IP”
19. Enter the shared IP address and netmask (this is a unique IP, different from either of the addresses entered earlier), then choose the IP Groups we created on each node. Click “OK”
20. Now we have a new IP address that will be used by whichever node owns the Resource Group

The steps are pretty straightforward, but can be confusing in the beginning. Our local SE from Tegile walked us through this config when we were evaluating, but it was important for me to know how to do these things on my own.


Change IP of vCSA

Changing the IP address of my vCenter Server is not something I’d ever had to do before, but that changed this week. In my quest to separate networks into more logical groupings instead of everything living on the same subnet, I had to change the IP address of my vCenter Server Appliance to place it on a new network along with the hosts it was managing. There is apparently a right way and a wrong way to do this.

I logged into the vCSA web interface (vCenterIP:5480), clicked on the “Network” tab and then on “Address”, and assumed this would be the correct place. So I changed the IP address, clicked “Save Settings”, and rebooted the appliance.


Yeah…that wasn’t right. As I watched the appliance boot from the console I saw a lot of errors being thrown trying to access services running on the old address and failing. Then I decided to shut down (not reboot) the vCSA and try a different method. This is a pretty simple process, but in case you’re looking for the right way of doing it, this is what worked for me.

Once the appliance is powered off, right-click it and choose “Edit Settings”

Click the “Options” tab then choose “Properties” under “vApp Options”

Enter the new IP address, gateway, and any other information that is changing. If you’re moving it to a new portgroup, update that now as well and click “OK”

Once the changes have been made, power on the appliance and you should see the new addresses being referenced during startup.

And now that startup is complete, we see the new IPs listed for managing the appliance, and you should be able to connect on the new address.
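If you’d rather confirm the result from PowerCLI than the console, a quick read-only check of the VM’s vApp properties works too. This is just a sketch: it assumes an existing Connect-VIServer session, and the VM name “vCSA” is a placeholder for whatever your appliance is called in inventory.

```
# Assumes an active PowerCLI session (Connect-VIServer).
# "vCSA" is a hypothetical VM name -- use your appliance's inventory name.
$vm = Get-VM -Name "vCSA"

# List the vApp properties (IP address, gateway, DNS, etc.) that the
# appliance reads at boot, so you can verify the edits took effect.
$vm.ExtensionData.Config.VAppConfig.Property |
    Select-Object Id, Label, Value |
    Format-Table -AutoSize
```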

Like I said, this is a very simple process. Once the vCSA was running, my hosts were notified of the change and were still in their cluster. Nothing bad happened and the lab continued to function as expected.


Add NFS Datastore to Cluster via PowerCLI

I have been digging into more and more PowerCLI over the last month or so, trying to explore faster ways to accomplish common tasks. Using the NetApp VSC plugin inside vCenter I can provision a brand new NFS datastore to an entire cluster in just a few clicks, but there is no built-in way to do this for mounting an existing datastore. The script below is a simple way to mount an NFS datastore to a named cluster.

$ClusterName = "ProdCluster"
$DatastoreName = "VM_Win2003_NA5"
$DatastorePath = "/vol/VM_Win2003_NA5"
$NfsHost = ""    # IP or hostname of the NFS server (left blank here)
Get-Cluster $ClusterName | Get-VMHost | New-Datastore -Nfs -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost

Or you can replace each variable with its actual value, which is handy when mounting multiple datastores in the same script.

Get-Cluster "ProdCluster" | Get-VMHost | New-Datastore -Nfs -Name "VM_Win2003_NA5" -Path "/vol/VM_Win2003_NA5" -NfsHost "<NFS-server-IP>"
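When there are several existing datastores to mount, a small loop keeps the script short. This is only a sketch under assumptions: the second datastore name, the export paths, and the filer address below are placeholders, and it assumes an active Connect-VIServer session.

```
# Placeholder values -- substitute your own datastore names, paths, and filer IP.
$NfsHost = "192.168.1.50"                      # hypothetical NFS filer address
$Datastores = @{
    "VM_Win2003_NA5" = "/vol/VM_Win2003_NA5"
    "VM_Win2008_NA5" = "/vol/VM_Win2008_NA5"   # hypothetical second export
}

# Grab the cluster's hosts once, then mount each export on every host
$vmHosts = Get-Cluster "ProdCluster" | Get-VMHost
foreach ($name in $Datastores.Keys) {
    $vmHosts | New-Datastore -Nfs -Name $name -Path $Datastores[$name] -NfsHost $NfsHost
}
```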

The next step here will be running this script from vCO and passing the variables directly from vCO. Maybe one day I’ll have the time to figure out just how to do that…
