
Software Defined Storage (SDS): A Centrinet Case Study

I recently completed a very interesting client project based on Microsoft Hyper-V and Microsoft Storage Spaces, and wanted to share the details here today.

The clients had been running a Windows Server 2008 R2 Hyper-V CSV cluster with iSCSI storage, which was complex to manage. The iSCSI storage was expensive yet delivered low performance over the 1 GbE iSCSI network. After evaluating alternative solutions, we proposed that they use the local storage in their existing 2U DL380 G7 servers with Hyper-V 2012 R2.

Details and Setup 

We deployed four Hyper-V 2012 R2 hosts, each running local storage tiering with four 450 GB 10K SAS drives and four 400 GB SSDs. Additionally, we enabled Windows data deduplication on the file system. The end result maximized performance (since VMs run locally, with hot data on the SSD tier) while deduplication preserved storage capacity.
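On 2012 R2, the per-host setup above can be sketched with the in-box Storage Spaces and deduplication cmdlets. This is a minimal illustration rather than the exact commands from the project; the pool, tier, and volume names, the tier sizes, and the drive letter are placeholder assumptions:

```powershell
# Pool all poolable local disks (the 4x SAS HDDs and 4x SSDs).
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LocalPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Define an SSD tier and an HDD tier within the pool.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "LocalPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "LocalPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered virtual disk for the VMs (sizes here are illustrative).
New-VirtualDisk -StoragePoolFriendlyName "LocalPool" -FriendlyName "VMStore" `
    -StorageTiers $ssdTier,$hddTier -StorageTierSizes 700GB,1200GB `
    -ResiliencySettingName Simple

# After the volume is formatted (assumed here as V:), enable deduplication.
# 2012 R2 added the HyperV usage type for volumes hosting running VMs.
Enable-DedupVolume -Volume "V:" -UsageType HyperV
```

Storage tiering then moves frequently accessed blocks to the SSD tier automatically on its nightly optimization schedule.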

This setup met the requirements of cost-effective, high-performance storage and simple management, but it lacked the redundancy that shared storage would provide.

To handle this, we implemented Hyper-V Replica to a fifth Hyper-V node: the four production hosts continuously replicate VM changes to it. If any of the Hyper-V hosts experiences a hardware issue, the affected VMs can be brought up on the replica node in minutes. Additionally, we put a robust backup in place that backs up VMs locally and replicates everything offsite.
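A sketch of wiring this up with the in-box Hyper-V Replica cmdlets; the host name, VM name, authentication type, and port are placeholder assumptions, not the project's actual values:

```powershell
# On the fifth node: allow it to receive replication traffic.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true

# On each production host: enable replication for a VM to the replica node,
# keeping additional recovery points (see the recovery-point note below).
Enable-VMReplication -VMName "AppServer01" -ReplicaServerName "replica-node" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -RecoveryHistory 8

# Send the initial full copy of the VM.
Start-VMInitialReplication -VMName "AppServer01"
```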


The clients were thrilled with the results. Their initial backup throughput was 25-35 Mbps on the shared iSCSI SAN; after project completion, it increased to 200-250 Mbps on local storage.

Active VMs now run on SSDs, versus the slow mechanical hard drives of the past. Hyper-V Replica gives clients an ability they've never had before: quickly restoring VMs on the replica node. In less than 5 minutes the client can bring back a failed VM, instead of suffering through the entire restoration process. Additionally, Replica keeps 8 recovery points of each VM within a 24-hour period - now that's reassurance!


This architecture fundamentally changes the way we normally deploy virtualization solutions. Instead of having to acquire expensive blade servers, high-performance SANs, and high-performance storage networks, it gives businesses the ability to start virtualization projects with traditional rack-mount servers. Each server can now potentially support 200-250 users, and the client doesn't need to bleed in order to have an in-house virtualization solution.

This is a perfect example of software-defined storage (SDS) in action. The only added hardware cost to the client is the solid-state drives (SSDs); all of the necessary software features are included with Microsoft Windows Server 2012 R2. And although this is not a silver-bullet solution, it is a great alternative for anyone willing to look outside the box and move IT one step further from being a cost center.



Unique Solutions for Distributed Storage

The primary objective of a distributed storage model is for local storage on each individual server to act as a pool for the cluster. In this model, the virtual machine (and any data) is stored locally for better performance. Because the data is being replicated, redundancy is achieved – opening up alternate network paths and ensuring no single point of failure.

For a distributed storage model to work properly, a high-speed network is a must. I have compiled the following list of vendors based on those who have put in a lot of work to get this model running efficiently. Here are the top four I recommend looking into; each is listed along with a short description of their solution.


Atlantis

Atlantis offers a memory-based, replicated local vSAN solution. It provides very good performance, but the setup is complex.

VMware vSAN

This solution requires a minimum of three servers, each with one SSD and at least one SAS/SATA drive. It's still in beta, but the feedback so far has been great.


Nutanix

Nutanix created a solution where each node has two SSDs and four SAS drives; it likewise requires a minimum of three servers. Each hypervisor node runs a controller VM that presents all local storage to the cluster and handles the storage replication. This solution is definitely one to watch.

Microsoft Hyper-V Replica

Last, but not least, is the unique route Microsoft took with Hyper-V Replica. In my opinion, this is the simplest way to achieve redundancy, for the following reasons:

  1. There is no need to create a storage pool, or server clusters.
  2. If two active Hyper-V nodes host different VMs, a third node can act as the replica for both. Refer to Microsoft’s Distributed File System (DFS) for more information.
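As an illustration of point 2, recovering a VM on the replica node takes only a couple of cmdlets. The VM name here is a placeholder, and this shows an unplanned failover to the most recent recovery point:

```powershell
# On the replica node, after a production host has failed:
# fail the replica VM over and start it.
Start-VMFailover -VMName "AppServer01"
Start-VM -Name "AppServer01"

# Once the original host is repaired, reverse the replication direction
# so changes flow back to it.
Set-VMReplication -VMName "AppServer01" -Reverse
```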

There are currently several vendors with unique solutions for distributed storage, each with its own pros and cons. No matter which of these solutions works best for you, this space is certainly worth watching and researching.