VSAN – Compliance Status is Out of Date

Occasionally the compliance status of the performance service will change to “out of date.” No alert is thrown anywhere within vCenter, so you have to check this status yourself by logging into the vSphere web client, locating your vCenter server, choosing the cluster, clicking “Manage,” and then choosing “Health and Performance” under “Virtual SAN”

As I have recently fixed this issue, the screenshot above shows the “Compliant” status. Below are the steps to get to that point.

1. In the box for “Performance Service” click “Edit storage policy”

2. If there is a storage policy available in the drop-down, select it and click “OK”. This applies that policy and runs the compliance check.

For the lucky few, that’s all you need to do. If the storage policy list is empty, you’ll need to restart the vsanmgmtd service on each of the hosts.

3. Enable SSH on each of the hosts in the VSAN cluster. Using an SSH client (like PuTTY), connect to a host and run the following command to restart the vsanmgmtd service. This is a non-impactful operation and can safely be performed during production hours.
a. /etc/init.d/vsanmgmtd restart

4. Repeat that command on each of the hosts in the cluster until they have all restarted their services (a scripted version is sketched below)
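If you have more than a couple of hosts, the restarts can also be scripted from an admin workstation rather than opening a separate session to each host. A minimal sketch, assuming SSH is already enabled on every host and that the hostnames below are placeholders for your own:

    for host in esx01 esx02 esx03 esx04; do
      ssh root@$host '/etc/init.d/vsanmgmtd restart'   # restart the VSAN management daemon on each host
    done

You’ll be prompted for the root password on each host unless you have key-based authentication configured.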

5. Wait 5 minutes and then check whether you are able to select a storage policy for the performance service. If not, move on to step 6

6. Now we’ll need to restart the vSphere Profile-Driven Storage service on the vCenter server. This is also non-impactful and can be performed in the middle of the day. If you’re using vCenter on Windows, connect to the Windows server and restart the “VMware vSphere Profile-Driven Storage Service”. If using the VCSA (like this example), you’ll need to SSH to the VCSA and run the command below
a. service vmware-sps restart
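Before moving on, you can confirm the service came back up. As a hedged example, on the SLES-based VCSA used here the init-style check below should work; newer appliance versions expose the same thing through the service-control utility:

    service vmware-sps status
    service-control --status vmware-sps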

7. After the vmware-sps service restarts, log out of the web client and wait 5 minutes while the storage profile service completes its restart.

8. Log back in to the web client, navigate to the vCenter server, click “Manage” then choose the “Storage Providers” tab

9. Click the Synchronize Providers button to resync the state of the environment

10. Wait another 5 minutes while the synchronization completes. After 5 minutes, navigate to the VSAN cluster in the web client. Click on “Manage” then choose “Settings” and locate “Health and Performance” under the “Virtual SAN” section

11. In the Performance Service box, click the “Edit Storage Policy” button

12. From the drop-down list you should be able to select the appropriate VSAN storage policy and then click “OK”

13. After this is selected, the compliance status should change to “Compliant” and you should be all set.

So far these are the only steps that I have needed to follow in order to fix this issue. Let me know if there are any other fixes available.

Deploy VSAN Witness Appliance

The VSAN witness host is a virtual appliance that is deployed into an existing vCenter server. In a 2-node or stretched cluster, the witness appliance acts as a tie breaker to determine which node(s) are still available in the event the nodes lose communication with each other. The witness is deployed just like any other virtual appliance, but it requires access to the management network and the network you’ve designated as your VSAN network. This appliance must run OUTSIDE your VSAN cluster: you cannot add this host as a member of the existing VSAN cluster, and you also should not run it as a virtual appliance inside your existing VSAN cluster.

1. Choose the cluster that will host the appliance. Click on “File” then “Deploy OVF Template”

2. Browse to select the .OVA file and click “Next”

3. Review the details of the appliance and click “Next”

4. Review the license agreement and click “Accept” followed by “Next”

5. Enter the name of the Witness Appliance and its location then click “Next”

6. Choose the appropriate size of the appliance and click “Next”
a. As this is a test, I’m choosing the “Tiny” size. You can ignore the disk component requirements for any size; since this is a virtual appliance, it will deploy appropriately sized drives that will be designated as SSD and spinning disk

7. Choose the provisioning type and click “Next”
a. This appliance is being deployed to a separate VSAN cluster from the one it will be acting as the witness for. The appliance can be deployed on shared storage, local storage, or another VSAN datastore.

8. Choose the appropriate networks for management and witness (VSAN). In this deployment, management lives on the “VM Network” and witness (VSAN) traffic is on the “VM-VSANnetwork”. This network is shared with the vMotion network and just needed an additional VM Portgroup created on each of the hosts in the cluster where this appliance is being deployed. Click “Next”

9. Enter a root password for the appliance. Remember, this is a host you will need to log in to in order to administer it, so if there is a standard root password that you use, it would be a good idea to use it here. Click “Next”

10. Review the deployment settings and then click “Finish”

11. Once deployed, you will need to configure the appliance like any other host. Power on the appliance and open the console, press F2, and log in as root with the password you assigned in step 9

12. Scroll to “Configure Management Network” and press “Enter”

13. Ensure the Network Adapter assigned to your management network is “vmnic0”
a. Set a VLAN (if necessary) for the management network, then assign your IPv4 and/or IPv6 settings for the management network to make it accessible on your network. Assign DNS as needed as well. Press “ESC” and then press “Y” to apply the settings and restart the management network

14. Once the host can communicate on the network, add it as a new host in vCenter.
a. Remember that this host should not be part of your VSAN cluster or any other cluster. It should be a standalone host in your datacenter.

15. Select the host in the vCenter client and configure networking for it. Locate the “witnessSwitch” and click “Properties”

16. Select the “witnessPg” and click “Edit”

17. On the “IP Settings” tab, enter the IP and subnet mask for the VSAN traffic network. Click “OK” at the bottom
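If you’d rather do this from the command line, the same witness VMkernel settings can be applied with esxcli from an SSH session on the witness host. A hedged sketch, assuming the witness port group is bound to vmk1 and that the address and netmask below are placeholders for your own:

    esxcli network ip interface ipv4 set -i vmk1 -I 172.16.10.25 -N 255.255.255.0 -t static

You can then verify VSAN-network reachability from one of the cluster hosts with vmkping (again assuming vmk1 carries VSAN traffic on that host):

    vmkping -I vmk1 172.16.10.25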

18. Once you have confirmed that network settings are successful, login to the vSphere web client and navigate to the VSAN cluster to be configured

19. Click the “Manage” tab, then choose “Fault Domains & Stretched Cluster” under “Virtual SAN”

20. In the “Stretched Cluster” box click “Configure”

21. Name the fault domains and place the hosts into the appropriate fault domain. This is a 2 node cluster with 1 host in each fault domain. Click “Next”

22. Locate the VSAN witness appliance host that was added to this vCenter and click “Next”

23. Choose the flash drive for cache and the HDD for capacity and click “Next”

24. Review the settings and click “Finish”

25. Once completed, you will see the status of the stretched cluster as “Enabled”, along with the preferred fault domain and the designated witness host.
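As an aside, this same workflow is exposed through the Ruby vSphere Console (RVC) on the vCenter server. I configured mine through the web client as shown above, but for reference the RVC command below should accomplish the witness configuration (the cluster, witness host, and fault domain names are placeholders from your own inventory):

    vsan.stretchedcluster.config_witness <cluster> <witness_host> <preferred_fault_domain>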


VSAN – Host Not Contributing Stats

After an upgrade or maintenance on one or more of the nodes in a VSAN cluster, one of the hosts can stop contributing performance stats. This is not a production-down issue, but it should be addressed so you see the most up-to-date stats across all the nodes.

The fix for this is one of three things, but each of them involves turning off performance statistics on the cluster, which will cause all historical performance stats to be removed. My hope is that VMware will fix this issue in an upcoming release, because a loss of historical data is not tolerable in all environments.

1. View the health of the VSAN cluster by logging in to the vCenter web client. Navigate to the appropriate vCenter and cluster, then click the “Monitor” tab, followed by “Virtual SAN”, then click on “Health.” Expand “Performance service” and click the warning for “All hosts contributing stats”

2. At the bottom you will now see the list of hosts that are not contributing stats

3. Now that we’ve identified the problem host, we need to disable the VSAN performance service temporarily. Navigate to the “Manage” tab for this cluster, then click on “Health and Performance” under “Virtual SAN”

4. Click “Turn off” in the “Performance Service” box

a. Click “OK” to confirm stopping the service which will erase all existing performance data

5. Confirm the Performance Service has been disabled by refreshing the page

6. SSH to the affected host we identified in step 2 using PuTTY or a similar SSH client (you may have to enable SSH on the host before you can connect).

7. Run the command below to restart the VSAN management agent. This should have no production impact so it is safe to perform outside of a maintenance window.
a. /etc/init.d/vsanmgmtd restart
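To confirm the daemon actually came back up before returning to the web client, the same init script can report its status (a quick optional check, not required for the fix):

    /etc/init.d/vsanmgmtd status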

8. Once the service has been restarted, go back to the vCenter web client and click the “Edit” button in the Performance Service box

9. Select the appropriate storage policy from the drop-down list, ensure the “Turn ON Virtual SAN performance service” box is checked, and click “OK”

10. Confirm that the performance service is turned on and reporting healthy

11. Navigate back to the “Monitor” tab, then “Virtual SAN”, and click on the “Health” section. Click “Retest” to verify that all hosts are contributing stats.

If this does not fix the issue, you can repeat the process, but this time restart the vsanmgmtd service on all of the nodes in the cluster instead of just the one. Once the services have been restarted across all nodes, re-enable the performance service and all nodes should be contributing stats.

I have also seen a case where restarting the service on all nodes didn’t fix the problem. In that scenario I was able to fix it by putting the problem node into maintenance mode with “full data migration” so all of its data would be evacuated to the other nodes. After that was complete, I rebuilt the host from scratch (including wiping the disks claimed by VSAN) and then moved it back into the cluster. I haven’t heard from VMware of any other ways to fix this issue.


The Beginning of Cloud Natives

Over the last 8 years I have built my career around VMware. I remember the first time I installed VMware Server at one of my jobs just to play around with and imported my first virtual machine. I had no idea what I was doing or how any of it worked, but I felt there was a future for me in this technology. As I moved on to other companies, the VMware implementations just got larger and larger; from 3 hosts all the way up to well over 1000.

Having spent time in these environments and with other users at local VMUG events and VMworld, I’ve seen that the skills required to be a VMware administrator are becoming commoditized. More people know about it than ever before, more blogs exist than ever before, and the necessity of meetings that revolve around VMware specifically seems to have run its course. While VMware remains integral to the datacenter today, there are skills we need to be developing and technologies we need to be exploring to ensure we’re not the ones being replaced when the next generation joins the workforce.

Enter Cloud Natives.

Cloud Natives was the idea of Dominic Rivera and myself as a means to bridge the gap between users and these new technologies. Cloud Natives looks to bring together the leaders in a technology space to present their solutions in one location. Rather than just letting vendors spew marketing material, we take a different approach: vendors are required to provide actual customers to present how their solutions have impacted their jobs and their business. No more outlandish claims, no more vanity numbers that don’t depict actual workloads, just real stories from real users.

We are kicking off 2016 with our first event on July 14th in Portland, OR. This event will be focused on one of the hottest technologies in the datacenter right now: flash storage. We’re bringing together the top players in the flash storage space, and you’ll hear their customers discuss the benefits and challenges they faced when moving away from legacy spinning-disk arrays and even newer hybrid arrays. Our goal is to educate our members one event at a time.

Cloud Natives looks to bring together all the datacenter technologies into one place. Whether it’s a focus on hypervisors, traditional or next-generation storage and infrastructure, cloud providers, DevOps and automation, or anything else that is hot in the datacenter, we will be that go-to resource in the Pacific Northwest. Each event is an opportunity to evaluate multiple vendors from the perspective of the customer. With no overlapping session schedules, you can walk away better informed and get any questions answered in one event.

I encourage everyone in the Portland area to register for this event at the Cloud Natives site. Our goal is to bring a sense of community back to Portland. We want to be a place to meet and network, to encourage, to mentor and to grow in our careers. No matter the stage in our career, we all have knowledge and experience that can help someone else and it’s time we all do our part to give back to the community.


Cohesity – Scale-Out Secondary Storage

Backups are boring. No matter if you’re talking about swapping tapes, configuring backup jobs in your legacy agent-based software, or spending another night restoring snapshots from your storage array; there’s just no way to make backups interesting. Cohesity aims to fix that. No, they won’t make backups sexy, but they are looking to add a bit more flash to the secondary storage market.

So what exactly is “secondary storage?” Secondary storage encompasses our backups, non-prod workloads, fileshares, and the like. The secondary storage market has been gaining visibility recently. With the flood of primary storage vendors, Cohesity could have been another “me-too” primary storage vendor, but they see the value in attacking an under-developed market.

The concept of Cohesity is simple. You can purchase the C2300 or C2500 models, which offer 48TB or 96TB of storage respectively in each 4-node appliance (with a minimum of 3 nodes to start). Additional capacity can be added a single node at a time afterwards, in 12TB or 24TB chunks depending on the model. Each node contains either 800GB or 1.6TB of flash for caching, along with compute and memory. Cohesity claims they are infinitely scalable due to their distributed OASIS (Open Architecture for Scalable Intelligent Storage) architecture, though they’ve only tested up to 32 nodes at the time of this writing. Once your nodes are set up, you just point Cohesity at your vCenter Server and you have visibility of your virtual machines.

Cohesity, leveraging VADP, is able to snapshot your configured VMs and begins ingesting all that data. The changes of these VMs are tracked (using CBT) so you’re not performing new full backups each time. All that is pretty standard in the backup world, so what sets Cohesity apart? That data is not just backed up, but it is available to actually use. Want to spin up one of these backed up VMs for testing? Space-efficient clones are created directly on the Cohesity appliance and are presented back to your ESXi hosts. Searching for a file to restore from one of these VMs? You can locate it right from the web interface and download the file without having to restore the entire VM.

The differentiator for Cohesity is not just how it scales or how simple it makes the backup process, but how it makes your backups useful. Enabling developers to access clones of your production systems to test deployments and hotfixes without impacting your production storage. Integrated QoS preventing your dev/test workloads from consuming all your resources and causing backup performance to suffer. Utilizing the onboard flash combined with global deduplication, performance of these workloads can mimic production without the cost of an all flash array.

An all-inclusive secondary storage appliance that provides visibility into data sprawl adds to the value. Oftentimes, as production systems are backed up and cloned and cloned again, you lose sight of the origin of that data. Migrating data from one storage array to another, you lose that deduplication, and you’re now increasing capacity across systems to accommodate your storage footprint. By providing an all-in-one solution for your backups and dev/test workloads, you’re able to maximize your investment without the need for multiple arrays and storage targets.

The backup market is a crowded one. There are more feature rich backup software providers in the space, but many of them require the purchase of additional storage that doesn’t have the capabilities of what Cohesity provides. Having just released Version 1 in mid-October, Cohesity has a lot of capabilities in their software with what appears to be a great vision for the future. The product is still in need of refinement to simplify the process of searches, reporting, and scheduling, but the foundation of what the Cohesity team has built has me excited to see where they’ll be able to take their product.


Watch all the videos from Cohesity at Storage Field Day 8 here.

Disclaimer: During Storage Field Day 8, my expenses (flight, hotel, etc) were paid for by Tech Field Day. I am under no obligation to write about any of the presented content nor am I compensated by any of the presenting companies for such writing.


Pure Storage – Enterprise Ready, Pure and Simple

Disruption! Disk is dead! Flash forever!

When Pure Storage first came into the public eye they were loud, they were bold and everyone took notice. Whether you liked their marketing campaigns or their even louder marketing team, Pure marked the beginning of the flash revolution. They weren’t the first to do flash, they were just the first to convince us that all flash was right for our datacenter. If IOPs and low latency mattered, Pure was the vendor you needed.

With a recent IPO and a new hardware platform (FlashArray//m), there has been a lot going on with Pure Storage. While the announcement of their latest product was over 6 months ago (June 2015), my expectations for Pure are always high. The Tech Field Day delegates at Storage Field Day 8 got a chance to listen to what Pure Storage has been doing, their focus, and hopefully what was still to come. This time around, however, I was left a little disappointed.

When a company touts disruption as loudly as Pure does, you expect big things. Believe me, it’s not that what they’ve built isn’t impressive. A brand new, 3U dual-controller array built to eliminate single points of failure, maximize performance, and display their orange logo as bright and loud as their messaging has always been is impressive. But that’s old news. This announcement feels like it was forever ago and we’re curious where Pure goes from here. What’s next for Pure?

Sadly, we don’t know. Roadmap and futures were off limits for this newly-public company. The feeling I get is that Pure is focusing on refinement, whether in their products or just in their messaging. Customers want visibility into their arrays. They want non-disruptive upgrades. They want health monitoring and alerting. They want that enterprise feeling companies like NetApp and EMC provide. Pure Storage’s focus right now is the enterprise and everything that Pure1 provides.

Pure1 is SaaS-based management and support of your Pure Storage arrays. Gone are the days of setting up management servers in each of your datacenters to manage and collect metrics from all your arrays. Pure1 allows you to log in to a web interface and view statistics and alerts on all your arrays from a single portal. That’s one less server you’re forced to manage, update, and maintain to hold on to your historical data. Pure Storage arrays phone home all the metrics that matter (every 30 seconds) so you have that single interface. New features can be delivered faster to all customers using the collected data, without the need to perform an update to your software.

Pure1 is also leveraged to open tickets on your behalf. Pure Storage said that roughly 70% of customer tickets are opened proactively by Pure1. Pure has worked at eliminating the “noise” of these alerts as well, focusing on root-cause alerts as opposed to all the alerts that may come from a cable or drive being pulled. These proactive tickets include bug fixes for issues that may come up in your environment based on your current configuration. I can recall a few times in my career where I’ve run into a software bug on my storage array that the vendor was aware of, but never informed me I was susceptible to. This is the kind of information all customers want, but not all vendors provide.

Pure Storage has taken a non-disruptive everything approach to its software releases as well. While legacy vendors have left me reluctant to upgrade my arrays in the past, Pure Storage touted 91% customer adoption of its software within 8 months of release. Pure’s customers are trusting in its development, enough to even perform upgrades in the middle of the day while still serving production workloads. That is confidence and shows the capabilities of these arrays and Pure’s engineers. Non-disruptive no matter the operation.

Is the marketing machine that was Pure Storage dead? My hope and feeling is no. Pure is finding out who they are as $PSTG. This is a period of refinement and maturity for the 6-year-old company. We’ll have to wait and see what they’ll do next and I’m sure we’ll all take notice.


Watch all the videos from Pure Storage at Storage Field Day 8 here.

Disclaimer: During Storage Field Day 8, my expenses (flight, hotel, etc) were paid for by Tech Field Day. I am under no obligation to write about any of the presented content nor am I compensated by any of the presenting companies for such writing.

NexGen Storage – The Future is Hybrid

At Storage Field Day 8, the delegates got a sneak peek at what NexGen Storage was up to. With new product and patent announcements, there was a lot to be excited about for this hybrid array vendor.

Hybrid storage is the future. But with the death of disk and living in an all-flash world, how can that be true? While disk isn’t dead yet, it feels like it’s dying. High IOPS, low latency, and consistent performance are what make flash so desirable. When designing a modern datacenter today, I’d be unlikely to buy spinning disk. So how can hybrid be the future?

As the cost of flash continues to drop the dependency we have on spinning disk drops as well. When it comes to enterprise storage, why pay for a fading technology when flash is becoming more and more affordable? That’s not to say disk doesn’t have a place in the datacenter; it just means the use cases are beginning to diminish. Spinning disk is generally not a place where I want my production applications to run.

NexGen Storage has been in the hybrid array space for some time. After being founded in 2010, being purchased by Fusion-io in 2013 and then eventually being spun out as its own company earlier this year, NexGen Storage has continued to focus on its hybrid arrays. The engineering efforts and customer growth didn’t stop along the way. The NexGen team stayed focused on what it does best; fast storage with predictable performance.

With the rise of flash in the datacenter, why the focus on hybrid? With NexGen, their hybrid arrays are not just flash cache in front of spinning disk. The approach of flash and RAM caching writes and/or reads makes sense, as only your working set “needs” that high-speed/low-latency tier; but when the array fails to predict the next blocks your application will request, the performance of spinning disk sometimes isn’t enough. Their latest model, the N5-1500, is all flash with a hybrid approach.

While the cost of flash is dropping rapidly, it is still expensive. NexGen Storage is utilizing flash over the PCIe bus for its caching tier, which is much more expensive than regular SSD drives. The advantage of this approach is lower latency and higher throughput, at a more reasonable cost since you’re not filling an array with PCIe flash. The N5 all-flash series is available in 15TB to 60TB raw capacity (in 15TB increments), with all sizes coming with 2.6TB of PCIe flash.

Why the same cache tier size? With its now-patented dynamic QoS, NexGen Storage is able to deliver the consistent performance businesses need for their applications. The ability to prioritize workloads and assign pre-configured QoS policies allows you to purchase a do-everything array. In many of the smaller environments I’ve worked in, you don’t have the luxury of separate storage arrays for production and dev/test. Oftentimes, you are simply hoping that your non-critical workloads do not affect your mission-critical workloads. With automated throttling and prioritized placement of your data, you’re able to ensure development never interferes with Tier 0.

The all-flash datacenter is here today and each all-flash vendor has a different approach. This hybrid all-flash approach is what sets NexGen Storage apart along with an all-inclusive software feature set. A fast array isn’t enough anymore; you need an array that has the intelligence to deliver the performance you’re expecting at all times. Combine that with VMware Virtual Volumes support, data reduction (deduplication and compression) along with array-based snapshots and replication (between all-flash and hybrid spinning disk arrays) and you have a solution built for the next generation.

Disclaimer: During Storage Field Day 8, my expenses (flight, hotel, etc) were paid for by Tech Field Day. I am under no obligation to write about any of the presented content nor am I compensated by any of the presenting companies for such writing.
