Provision New Floating IP on Tegile

As I begin the process of reconfiguring my Tegile from a test/lab array into a production array, I thought it would be a great opportunity to document more of the setup and provisioning steps involved in administering the array. In our environment we are using 10GbE without configuring LACP on the switches, letting the Tegile handle network availability itself. Obviously, every environment is different; this is just the approach we took for this array.

These steps walk you through the process of provisioning an additional VLAN on the 10GbE interfaces and then creating a floating IP address that is owned by the node running the disk pool.

1. Log in to the non-shared management IP of each HA node
2. Log in as “admin” with the correct password
tegileIP012715-step2
3. Click on the “Settings” tab and then click “Network”
tegileIP012715-step3
4. Under “Network Settings” on the left column, click on “Interface”
tegileIP012715-step4
5. Under “Physical Network Interfaces”, click on one of the 10GbE interfaces (named ixgbe2 and ixgbe3 on this array). Click the “+” to add a VLAN
tegileIP012715-step5
6. Enter the name of the VLAN (following the naming convention below) and the VLAN number, then click “OK”.
a. Our naming convention is protocol + interface number + “_” + VLAN number. We are using cifs on interface “ixgbe3” and the VLAN is 100, which gives a name like “cifs3_100”
tegileIP012715-step6
7. Click “OK” to this message about saving the config
tegileIP012715-step7
8. Repeat steps 5 through 7 for the other 10GbE interface, changing the name to reflect the number of that interface.
tegileIP012715-step8
9. Click “Save” to bring these new VLANs online
tegileIP012715-step9
    a. Notice that the state changes to “up” after saving
tegileIP012715-step9a
10. Now we need to assign an IP address to these interfaces. We are not using LACP, so under “IP Groups” click the “Add IP Group” button
tegileIP012715-step10
11. Click the arrow next to “Network Properties”. Enter the name, check the boxes next to the newly created VLANs we added to each interface, then enter the IP address and netmask for this new subnet. Click “OK”
     a. The naming convention is ipmp + “_” + protocol + filer node number, so this one is “ipmp_cifs1”. IPMP is “IP Multipathing”, cifs is the protocol, and this is node “A”, which is the first node
tegileIP012715-step11a
     b. Click “OK” to this message about saving the config
tegileIP012715-step11b
12. Now we see the IPMP group has been created, but isn’t up.
tegileIP012715-step12
13. Click the “Save” button at the bottom
tegileIP012715-step13
14. Click “OK” for confirmation
tegileIP012715-step14
15. Now we can see that the interface is up
tegileIP012715-step15
16. Repeat these steps on the other node of the HA pair, changing the IP Group name to “ipmp_cifs2” and choosing a different IP address
tegileIP012715-step16
17. Back on the primary node, click on “Settings” then “HA”
tegileIP012715-step17
18. On the active resource group (we only have one, which is “Resource Group A”), click “Add Floating IP”
tegileIP012715-step18
19. Enter the shared IP address and netmask (this is a unique IP, different from either of the IP addresses entered earlier), then choose the IP Groups we created on each node. Click “OK”
tegileIP012715-step19
20. Now we have a new IP address that will be used by whichever node owns the Resource Group
tegileIP012715-step20

The steps are pretty straightforward, but can be confusing in the beginning. Our local SE from Tegile walked us through this config when we were evaluating, but it was important for me to know how to do these things on my own.
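
If you want a quick sanity check from a workstation once the floating IP exists, a few pings confirm that all three addresses answer. The sketch below is plain PowerShell, and the addresses are placeholders for the two per-node IP group addresses and the floating IP (the real values for this array aren’t shown here):

# Placeholder addresses: node A IP group, node B IP group, floating IP
$addresses = "192.168.100.11", "192.168.100.12", "192.168.100.10"
foreach ($ip in $addresses) {
    # -Quiet returns $true/$false instead of the full echo reply objects
    Write-Host "$ip reachable: $(Test-Connection -ComputerName $ip -Count 2 -Quiet)"
}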


Change IP of vCSA

Changing the IP address of my vCenter Server is not something I’d ever had to do before, but that changed this week. In my quest to separate networks into more logical groupings instead of everything living on the same subnet, I had to change the IP address of my vCenter Server Appliance to place it on a new network along with the hosts it was managing. There is apparently a right way and a wrong way to do this.

I logged into the vCSA web interface (vCenterIP:5480), clicked on the “Network” tab, then clicked on “Address”, and assumed this would be the correct place. So I changed the IP address, clicked “Save Settings”, and rebooted the appliance.

changeip012315-step1

Yeah…that wasn’t right. As I watched the appliance boot from the console, I saw a lot of errors being thrown as it tried to access services running on the old address and failed. Then I decided to shut down (not reboot) the vCSA and try a different method. This is a pretty simple process, but in case you’re looking for the right way of doing it, this is what worked for me.

Once the appliance is powered off, right-click it and choose “Edit Settings”
changeip012315-step2

Click the “Options” tab then choose “Properties” under “vApp Options”
changeip012315-step3

Enter the new IP address, gateway, and any other information that is changing. If you’re moving it to a new portgroup, update that now as well and click “OK”
changeip012315-step4
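
For reference, you can also read these same vApp properties from PowerCLI. This is only a sketch: “vcsa” is a placeholder for the appliance’s VM name in your inventory, and since vCenter itself is down at this point you would connect PowerCLI directly to the ESXi host running the appliance:

# Placeholder VM name for the appliance; connect to the ESXi host first with Connect-VIServer
$vm = Get-VM -Name "vcsa"
# List the OVF/vApp properties the appliance reads at boot
$vm.ExtensionData.Config.VAppConfig.Property | Select-Object Id, Label, Value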

Once the changes have been made, power on the appliance and you should see the new addresses being referenced during startup.
changeip012315-step5

And now that startup is complete, we see the new IPs listed for managing the appliance, and you should be able to connect on the new IP.
changeip012315-step6

Like I said, this is a very simple process. Once the vCSA was running, my hosts were notified of the change and were still in their cluster. Nothing bad happened and the lab continued to function as expected.
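
As a final check from PowerCLI, connecting to the new address and listing host connection states confirms everything came back cleanly. The server address below is just a placeholder for the vCSA’s new IP:

# Placeholder address for the relocated vCSA; you'll be prompted for credentials
Connect-VIServer -Server 192.168.50.10
# Hosts should still show as Connected after the IP change
Get-VMHost | Select-Object Name, ConnectionState, PowerState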


Add NFS Datastore to Cluster via PowerCLI

I have been digging into more and more PowerCLI over the last month or so, trying to explore faster ways to accomplish common tasks. Using the NetApp VSC plugin inside vCenter I can provision a brand new NFS datastore to an entire cluster in just a few clicks, but there is no built-in way to do this for mounting an existing datastore. The script below is just a simple way to mount an NFS datastore to a named cluster.

$ClusterName = "ProdCluster"
$DatastoreName = "VM_Win2003_NA5"
$DatastorePath = "/vol/VM_Win2003_NA5"
$NfsHost = "192.168.1.5"
Get-Cluster $ClusterName | Get-VMHost | New-Datastore -Nfs -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost

Or you can replace each variable with the actual value, which is handy when mounting multiple datastores in the same script.

Get-Cluster "ProdCluster" | Get-VMHost | New-Datastore -Nfs -Name "VM_Win2003_NA5" -Path "/vol/VM_Win2003_NA5" -NfsHost 192.168.1.5
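
One tweak worth considering (not part of the original one-liner) is to skip any host that already sees the datastore, so the script can be re-run safely against a cluster that is only partially mounted. A sketch of that idea, using the same placeholder values:

$ClusterName = "ProdCluster"
$DatastoreName = "VM_Win2003_NA5"
$DatastorePath = "/vol/VM_Win2003_NA5"
$NfsHost = "192.168.1.5"
Get-Cluster $ClusterName | Get-VMHost | ForEach-Object {
    # Only mount on hosts that don't already have the datastore
    if (-not ($_ | Get-Datastore -Name $DatastoreName -ErrorAction SilentlyContinue)) {
        $_ | New-Datastore -Nfs -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost
    }
}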

The next step here will be running this script from vCO and passing in the variables directly. Maybe one day I’ll have the time to figure out just how to do that…
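
In the meantime, one way to make the script friendlier to an external caller like vCO (or anything else that can launch PowerShell) is to take the values as parameters instead of hard-coding them. This is only a sketch of that idea, not a vCO workflow:

param(
    [string]$ClusterName,
    [string]$DatastoreName,
    [string]$DatastorePath,
    [string]$NfsHost
)
# Mount the NFS export on every host in the named cluster
Get-Cluster $ClusterName | Get-VMHost |
    New-Datastore -Nfs -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost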
