Pure Storage – Enterprise Ready, Pure and Simple

Disruption! Disk is dead! Flash forever!

When Pure Storage first came into the public eye, they were loud, they were bold, and everyone took notice. Whether or not you liked their marketing campaigns or their even louder marketing team, Pure marked the beginning of the flash revolution. They weren't the first to do flash; they were just the first to convince us that all-flash was right for our datacenter. If IOPS and low latency mattered, Pure was the vendor you needed.

With a recent IPO and a new hardware platform (FlashArray//m), there has been a lot going on at Pure Storage. While the announcement of their latest product was over six months ago (June 2015), my expectations for Pure are always high. The Tech Field Day delegates at Storage Field Day 8 got a chance to listen to what Pure Storage has been doing, where their focus is, and hopefully what was still to come. This time around, however, I was left a little disappointed.

When a company touts disruption as loudly as Pure does, you expect big things. Believe me, it's not that what they've built isn't impressive. A brand new, 3U dual-controller array built to eliminate single points of failure, maximize performance, and display their orange logo as bright and loud as their messaging has always been is impressive. But that's old news. This announcement feels like it was forever ago, and we're curious where Pure goes from here. What's next for Pure?

Sadly, we don’t know. Roadmap and futures were off limits for this newly-public company. The feeling I get is that Pure is focusing on refinement, whether in their products or just in their messaging. Customers want visibility into their arrays. They want non-disruptive upgrades. They want health monitoring and alerting. They want that enterprise feeling companies like NetApp and EMC provide. Pure Storage’s focus right now is the enterprise and everything that Pure1 provides.

Pure1 is SaaS-based management and support for your Pure Storage arrays. Gone are the days of setting up management servers in each of your datacenters to manage and collect metrics from all your arrays. Pure1 lets you log in to a web interface and view statistics and alerts for all your arrays from a single portal. That's one less server you're forced to manage, update, and maintain just to hold on to your historical data. Pure Storage arrays phone home all the metrics that matter (every 30 seconds) so you have that single interface. Because the data is collected centrally, new features can be delivered to all customers faster, without requiring an update to your software.

Pure1 is also being leveraged to open tickets on your behalf. Pure Storage said that roughly 70% of customer tickets are opened proactively by Pure1. Pure has worked at eliminating the "noise" of these alerts as well, focusing on root-cause alerts as opposed to all the alerts that may come from a cable or drive being pulled. These proactive tickets include bug fixes for issues that may come up in your environment based on your current configuration. I can recall a few times in my career when I've run into a software bug on my storage array that the vendor was aware of but never informed me I was susceptible to. This is the kind of information all customers want, but not all vendors provide.

Pure Storage has taken a non-disruptive everything approach to its software releases as well. While legacy vendors have left me reluctant to upgrade my arrays in the past, Pure Storage touted 91% customer adoption of its software within 8 months of release. Pure’s customers are trusting in its development, enough to even perform upgrades in the middle of the day while still serving production workloads. That is confidence and shows the capabilities of these arrays and Pure’s engineers. Non-disruptive no matter the operation.

Is the marketing machine that was Pure Storage dead? My hope and feeling is no. Pure is finding out who they are as $PSTG. This is a period of refinement and maturity for the 6-year-old company. We’ll have to wait and see what they’ll do next and I’m sure we’ll all take notice.

__________

Watch all the videos from Pure Storage at Storage Field Day 8 here.

Disclaimer: During Storage Field Day 8, my expenses (flight, hotel, etc) were paid for by Tech Field Day. I am under no obligation to write about any of the presented content nor am I compensated by any of the presenting companies for such writing.

NexGen Storage – The Future is Hybrid

At Storage Field Day 8, the delegates got a sneak peek at what NexGen Storage was up to. With new product and patent announcements, there was a lot to be excited about for this hybrid array vendor.

Hybrid storage is the future. But with the death of disk and an all-flash world upon us, how can that be true? While disk isn't dead yet, it feels like it's dying. High IOPS, low latency, and consistent performance are what make flash so desirable. When designing a modern datacenter today, I'd be unlikely to buy spinning disk. So how can hybrid be the future?

As the cost of flash continues to drop, our dependency on spinning disk drops as well. When it comes to enterprise storage, why pay for a fading technology when flash is becoming more and more affordable? That's not to say disk doesn't have a place in the datacenter; it just means the use cases are beginning to diminish. Spinning disk is generally not where I want my production applications to run.

NexGen Storage has been in the hybrid array space for some time. After being founded in 2010, purchased by Fusion-io in 2013, and eventually spun out as its own company earlier this year, NexGen Storage has continued to focus on its hybrid arrays. The engineering efforts and customer growth didn't stop along the way. The NexGen team stayed focused on what it does best: fast storage with predictable performance.

With the rise of flash in the datacenter, why the focus on hybrid? NexGen's hybrid arrays are not just flash cache in front of spinning disk. Caching writes and/or reads in flash and RAM makes sense, since only your working set "needs" that high-speed, low-latency tier; but when your array can't predict the next blocks your application will request, the performance of spinning disk sometimes isn't enough. Their latest model, the N5-1500, is all flash with a hybrid approach.

While the cost of flash is dropping rapidly, it is still expensive. NexGen Storage uses flash over the PCIe bus for its caching tier, which is much more expensive than regular SSDs. The advantage of this approach is lower latency and higher throughput at a more reasonable cost, since you're not filling an array with PCIe flash. The N5 all-flash series is available in 15TB to 60TB raw capacity (in 15TB increments), with all sizes including 2.6TB of PCIe flash.

Why the same cache tier size? With its now-patented dynamic QoS, NexGen Storage is able to deliver the consistent performance businesses need for their applications. The ability to prioritize workloads and assign pre-configured QoS policies allows you to purchase a do-everything array. In many of the smaller environments I've worked in, you don't have the luxury of separate storage arrays for production and dev/test. Often, you're just hoping that your non-critical workloads don't affect your mission-critical workloads. With automated throttling and prioritized placement of your data, you're able to ensure development never interferes with Tier 0.

The all-flash datacenter is here today, and each all-flash vendor has a different approach. This hybrid all-flash approach is what sets NexGen Storage apart, along with an all-inclusive software feature set. A fast array isn't enough anymore; you need an array with the intelligence to deliver the performance you're expecting at all times. Combine that with VMware Virtual Volumes support, data reduction (deduplication and compression), and array-based snapshots and replication (between all-flash and hybrid spinning-disk arrays), and you have a solution built for the next generation.

Disclaimer: During Storage Field Day 8, my expenses (flight, hotel, etc) were paid for by Tech Field Day. I am under no obligation to write about any of the presented content nor am I compensated by any of the presenting companies for such writing.


Veeam v9 – New Feature Announcements

While the need for backups hasn't changed, how you use those backups has. Not only that, the speed at which we can recover our data is changing as well. As the cost of downtime continues to grow, having to restore an entire server just to recover one file or a small number of files won't cut it. Your backups need to complete quickly and restore even faster.

The improvements in Veeam v9 are doing just that. Veeam has been introducing faster and faster ways to back up and restore (and to limit the impact on production virtual machines during backups) for years, and v9 is no exception. There are a few new options I want to touch on that address pain points I've experienced in my environments.

1. Backups from SnapMirror/SnapVault Destinations
As a former NetApp admin, I love the idea of minimizing the effect of backups on my virtual machines. By enabling backup from SnapMirror destinations, you can get your VMs offsite using the built-in software on your NetApp array, and then create off-SAN backups that aren't limited by the space constraints of your SnapMirror retention schedule.

2. Direct NFS Backup Mode
Direct SAN access has been in Veeam Backup & Replication forever, but backing up VMs on your NFS datastores was a different story. A proxied connection through an ESXi host was required to back up these VMs. In v9, Veeam's engineers wrote a brand new NFS client that connects directly to your NFS volumes and backs up VMs without additional host impact, latency, or speed constraints.

3. Per-VM Backup File Chain
As the size of your backup job grows, managing that one file gets painful. As your backup repository begins to fill up, you're left having to migrate the entire backup file to a new repository. With per-VM backup file chains, one job can be created for all of your virtual machines, but each VM gets its own file chain. This feature is especially useful with the next feature I'll talk about.

4. Scale-out Backup Repository
Backup repository management has always been one of the largest pain points when managing Veeam backup jobs. In my first Veeam setup I was limited to 2TB LUNs on my backup server, and I had to create eight of them to store my backups. Because backup jobs couldn't span repositories, I was creating individual jobs tied to repositories and then rebalancing as repositories began to fill. The scale-out backup repository feature allows a virtual backup repository to be created on top of your existing physical repositories. Fewer jobs need to be created, and you're able to take advantage of all the space in each repository. Thanks to Luca Dell'Oca for clarifying that maintenance mode and evacuation are also supported. This means that if a repository needs to be taken down (due to SAN maintenance, for example), it can be placed in maintenance mode and excluded from the scale-out repository during maintenance operations.

For me, these are the big features I'm happy to see in Veeam v9. There are additional features as well, such as explorers for Oracle, Active Directory (with support for AD-integrated DNS and GPO restoration!), SQL Server, and SharePoint. The entire list of new features can be found at the link below.

Click here for all the feature announcements.


VeeamON 2015

As exciting as VMworld is each and every year, it appears to have lost its charm. Yes, it’s the largest gathering of virtualization people (my people) each and every year, but that is part of the problem. 20,000+ of my closest friends isn’t quite as intimate as it sounds. Oh, we’re close, but that’s more about proximity than the strength of our friendships.

The value of VMworld is the sessions, labs and networking opportunities. With sessions and labs being available after the conference ends, it’s important to maximize those networking opportunities as best you can. Trying to find someone in a crowd of 20,000 is difficult. If you’re not stalking your favorite virtualization professionals on Twitter you’ll likely miss them. Enter VeeamON.

This is the second year for VeeamON and my first year in attendance. Veeam, everyone's favorite data protection and availability company (or at least mine), holds its annual gathering of data availability experts in Las Vegas (at the Aria hotel this year). What is normally just a function of our jobs is brought to the forefront at this conference. The size of VeeamON, with an expected attendance of 2,500+ people, means many more opportunities to get one-on-one time with those in attendance.

VeeamON focuses on community, much like Veeam itself has over the years. This dedication to educating and enabling the community is what has made Veeam Software successful and respected in the industry. Veeam is bringing out the vBrownBag crew for 19 sessions that will be available live during the conference as well. Its users and the community have helped shape Veeam and built very passionate advocates.

There are vendor-sponsored sessions covering integration with the various Veeam software features (such as Veeam Explorer for Storage Snapshots), as well as customer and user sessions describing how they use Veeam solutions in their environments. And that's just the beginning. Other sessions talk about data protection as a whole and the concept of data availability in our modern datacenters. You don't have to be a Veeam user or customer to walk away with new concepts, ideas, and knowledge. Click here to view the agenda.

We're past the point of just needing backups; we need our data readily available in many different and evolving scenarios. Too often, data availability gets reduced to just having a copy of your data, when what we really need spans backups, disaster recovery, and sandboxes for testing. Veeam is pioneering the data availability movement and has the tools in place to bring data availability to the always-on enterprise. Above all else, Veeam is on your side.


Thoughts in the Airport

Traveling is one of my least favorite things. I have never done well on flights: waiting to take off, sitting still for hours, feeling trapped. That trapped feeling is worse when I'm stuck in the middle or window seat. If I'm not on the aisle, I don't want to fly.

This time, however, it's different. Sure, I've been to tech conferences before. I've been to a handful of VMworlds, Cisco Live, and a few smaller conferences as well. But Storage Field Day? This is my first time being selected as a delegate at a Tech Field Day event. As I sit in the airport, I'm nervous for a completely different reason.

Tech Field Day events are filled with companies presenting their latest and greatest products and solutions. This is an event that skips the marketing and gets into the nitty gritty. The delegates (11 of us this time around) get to ask questions of the people who built these products and have a vast knowledge of their inner workings. Viewers watching the live stream can have their questions relayed to the presenters via Twitter and the #SFD8 hashtag so they gain a better understanding as well.

So why the nerves? I'll be sitting alongside storage experts such as Howard Marks and Ray Lucchesi, who run the GreyBeards on Storage podcast (which I subscribe to), and Scott D. Lowe, an author, blogger, former CIO, and someone who is well known and well respected in the industry. It just so happens that Howard and Scott have done a combined 36 Tech Field Day events. Alex Galbraith, Viper V.K, Jon Klaus, Dan Frith, Mark May, Enrico Signoretti, and Jarett Kulm round out the delegates, all of whom are well known and respected as well. For a first-timer like me, it doesn't get much more intimidating.

And that's just the delegates; I haven't even mentioned the presenters. We'll be on-site at Coho Data, Pure Storage, and Cohesity. With their recent IPO, I'll be curious to see what Pure will be showcasing, and with their first GA release, I'm interested in hearing more about Cohesity and where their product is at.

Violin Memory, Intel, INFINIDAT, Nimble Storage, NexGen, Qumulo, and Primary Data will also be presenting. With so little coming from Violin lately, I'm curious what they've been up to (besides declaring that disk is dead). I'm also interested in where Nimble is at. With most of their competitors offering all-flash solutions, Nimble is one of the last few hybrid-only vendors. Have they thrown a bunch of SSDs into their arrays and called it "All Flash" (a la NetApp), or are they working on something new?

As I sit in the airport at PDX waiting for my flight to arrive from Denver, my nerves are about adding value to this event: asking the questions and offering the perspective of a customer who has been responsible for deploying and administering storage over the last seven years, holding my own alongside these storage industry experts, and not letting myself get intimidated. This is out of my comfort zone, but I'm up for the challenge.


Storage Field Day Here I Come!

Storage has been a component of my job for most of my IT career. It's something I've enjoyed, but it hadn't been something I had the time to focus on. Coming from smaller organizations, I've been responsible for almost everything in the environment, which rarely gave me an opportunity to become an expert in any one technology.

A few years ago, the company I worked for was going through a storage refresh, and I was tasked with evaluating our existing storage platform and determining our needs going forward. I spent time with nearly every major storage vendor there was, going into depth on every aspect I could in order to determine the "best choice." In the end I gained an understanding of storage I never had before, and it became a passion for me.

All that being said, I am both honored and humbled to have been selected as a delegate for Storage Field Day 8! Tech Field Day events have been something I've watched over the last couple of years, and I have become a huge fan. These events give viewers a chance to learn and ask questions about the latest technologies from the presenters. They are about getting past the marketing and into the details, and they're a great opportunity to educate yourself on the different products being presented.

Don’t miss all the presentations for Storage Field Day 8 on October 21-23. I am particularly interested in hearing more about what Coho Data and Cohesity are doing, but I’m looking forward to all the presentations.


Track Datastore Add & Removes With PowerCLI

While working with the data protection team at my job I was asked if there was any way to track new datastores being added to a vSphere cluster. When new LUNs are allocated to our vSphere clusters, the data protection team isn’t always made aware ahead of time. Normally this isn’t a big deal, but in our case we have a product that requires access to specified datastores for backups. In order to maintain access to these virtual machines for backup purposes, we need to be notified when new datastores are added.

As I sat and thought about how I could accomplish this task, I came up with a couple of ideas, but figured a scheduled task with PowerCLI/PowerShell would be the easiest to implement. The script connects to the vCenter server, gets all the datastores in the cluster, writes a date-stamped file daily, then compares the current and previous day's datastore output files and writes the result to a third file that displays only the datastores that have been added or removed.

I've broken down the script so I can explain each section, making it easy to understand. Before I had any knowledge of PowerShell/PowerCLI, modifying something to fit my environment when I didn't understand what was happening at each step was time-consuming and frustrating.

1. This is where we define the name of the vCenter instance we’ll be connecting to and the name of the cluster we’re interested in.

$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

2. This is where we define the output location for our datastores and difference file. I chose to drop it into a folder named for the cluster, but that can be removed.

$filePath = "C:\test\" + $Cluster + "\"

3. This is where we connect to vCenter and then immediately wait 15 seconds, which avoids issues with commands running before security warnings are displayed.

Connect-VIserver $vCenter
Start-Sleep -s 15

4. This will gather all the datastores in the cluster and exclude any datastore whose name contains "-local". The wildcard is important because the local datastores are named "servername + -local"; without the wildcard, all of the local datastores would be included, since no datastore is named exactly "-local".

$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

5. I prefer the format of 2-digit month, 2-digit day, 2-digit year. This will get the current date of the system running the script and convert it to that format (051415, for example), along with the dates for yesterday and two days ago.

$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

6. This will set the file name and location for the output from two days ago. If that file exists, it will be removed. Rather than having an output file from every day saved until I manually remove it, this process seemed better. I chose to delete the file from two days ago, as opposed to deleting yesterday's file after running the comparison, so that if we see a huge change in the difference file we can manually compare the two files to try to find the error.

$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

7. This will set the file path and name for today's and yesterday's files: the file path defined at the top, plus the cluster name, plus the date, with .txt added to the end.

$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

8. Here we are exporting all the datastores from Step 4 by name and outputting to the file name/path defined in Step 7.

$Datastores | Select Name | Out-File $CurrentFile

9. This is where we set the name and path for the difference file that will track the datastore add/remove.

$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

10. This will read the content of today's and yesterday's files.

$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

11. Here we are comparing the content we just read in step 10.

$Compare = Compare-Object $YesterdaysContent $CurrentContent

12. By default, "Compare-Object" shows differences with a side indicator of <= or =>, depending on which input the difference exists in. Rather than having to remember which file was read first to determine whether a datastore was added or removed, we change the indicator values. If a datastore existed yesterday but is missing today, it is labeled "Removed". If a datastore didn't exist yesterday but does today, it is labeled "Added".

$compare | foreach {
if ($_.sideindicator -eq '<=')
{$_.sideindicator = "Removed"}

if ($_.sideindicator -eq '=>')
{$_.sideindicator = "Added"}
}

13. This will take the results from step 11, with the formatting from step 12, and change the column names. The list of objects compared is normally named "InputObject" and the "Added or Removed" column is normally "SideIndicator". Maybe that's fine, but I prefer something a little easier to read. I've renamed "InputObject" to "Datastore" (with the current date added) and changed "SideIndicator" to "Added or Removed". Once that is done, we output to the path and name defined in Step 9. The reason the current date is included in the "Datastore" column is that we use "-Append" with the "Out-File" command. This adds a dated entry of the changes that occurred to the bottom of the existing (or new) output file. That means we aren't overwriting the same file every day, we're just adding to it, so if you forget to check the file for a few days you won't lose that data.

$Compare |
select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
Out-File -Append $DifferenceFile

Now that we know what this thing does, let's see it in action. I ran the script over three days, and this is how the output files look. We can see that on 05-14-15 we added Lab-Datastore-10, which didn't exist on 05-13-15. Then on 05-15-15 we removed Lab-Datastore-03 and added -11 and -12.
[Screenshot: the daily datastore output files]
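
For reference, each day's file is just the default Out-File rendering of the selected Name property, so a single day's output (LabCluster051415.txt, for example) would look roughly like this, with the datastore names here being illustrative:

Name
----
Lab-Datastore-01
Lab-Datastore-02
Lab-Datastore-03
Lab-Datastore-10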

When running the script, I commented out the removal of the two-day-old file so we could compare manually. Now we have an output file created (Datastore-Changes.txt) that should show the differences.
[Screenshot: the output folder containing the daily files and Datastore-Changes.txt]

Inside Datastore-Changes.txt we see that on 5/14 the datastore "Lab-Datastore-10" was added, and on 5/15 we lost Lab-Datastore-03 but added -11 and -12.

[Screenshot: contents of Datastore-Changes.txt]
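
Based on the calculated properties in step 13, the appended difference file ends up looking roughly like this; the exact spacing and date string come from Out-File's default table formatting, so treat this as an approximation:

Datastore - 5/14/2015 6:00:00 AM         Added or Removed
--------------------------------         ----------------
Lab-Datastore-10                         Added

Datastore - 5/15/2015 6:00:00 AM         Added or Removed
--------------------------------         ----------------
Lab-Datastore-03                         Removed
Lab-Datastore-11                         Added
Lab-Datastore-12                         Added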

We can delete this file at any time, and the next time the script runs we'll create a brand new file. This means there is no dependency on the file already existing for the script to run, and it doesn't require us to keep a long list of every datastore add/remove for all eternity. Now you'll just need to save the script and schedule it to run using Windows Task Scheduler.
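
If you'd rather not click through the Task Scheduler GUI, here's a minimal sketch using the ScheduledTasks cmdlets available on Windows 8/Server 2012 and later; the script path, task name, and run time are placeholders you'd adjust for your environment:

#Run the tracking script every morning at 6:00 AM (path and task name are examples only)
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Track-DatastoreChanges.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "Track Datastore Changes" -Action $action -Trigger $trigger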

Below is the full script with comments.

#Define the vCenter Server and Cluster
$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

#Set the path location for the output files
$filePath = "C:\test\" + $Cluster + "\"

#Connect to the vCenter Server and sleep for 15 seconds (necessary for security warnings)
Connect-VIserver $vCenter
Start-Sleep -s 15

#Get a list of all the datastores
$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

#Get the current date in the correct format
$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

#Delete the output from 2 days ago (Remove this section if you want to keep the history)
$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

#Set the filename to include today's date
$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

#Export those datastores to a TXT file
$Datastores | Select Name | Out-File $CurrentFile

#Set file name & path for difference file
$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

#Get the content for yesterday and today's files
$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

#Compare yesterday's and today's files
$Compare = Compare-Object $YesterdaysContent $CurrentContent

#Change the side indicator values to "Removed" and "Added"
$compare | foreach {
      if ($_.sideindicator -eq '<=')
        {$_.sideindicator = "Removed"}

      if ($_.sideindicator -eq '=>')
        {$_.sideindicator = "Added"}
     }

#Change the column name output to "Datastore + Date" and "Added or Removed" then output to file
 $Compare | 
   select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
   Out-File -Append $DifferenceFile