
Software Defined Storage (SDS): A Centrinet Case Study

I recently completed an interesting client project built on Microsoft Hyper-V and Microsoft Storage Spaces, and I wanted to share the details here today.

The client had been running a Windows Server 2008 R2 Hyper-V CSV cluster backed by iSCSI storage, which was complex to manage. The expensive iSCSI storage also delivered poor performance over the 1 GbE iSCSI network. After evaluating alternatives, we proposed using the local storage in their existing 2U DL380 G7 servers with Hyper-V 2012 R2.

Details and Setup 

We deployed four Hyper-V 2012 R2 hosts, each running local storage tiering across four 450 GB 10K SAS drives and four 400 GB SSDs. We also enabled Windows data deduplication on the file system. The end result maximized performance (since VMs run on local storage) while conserving capacity (deduplication).
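The tiering and deduplication setup can be sketched in PowerShell on Windows Server 2012 R2. This is a sketch, not the exact commands from the project; the pool, tier, and volume names and the tier sizes are illustrative, and note that Microsoft officially supported deduplication of running VMs only for VDI workloads at the time:

```powershell
# Pool all eligible local disks (4x 450 GB SAS + 4x 400 GB SSD in our case)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LocalPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Define the SSD and HDD tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "LocalPool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "LocalPool" `
    -FriendlyName "HDDTier" -MediaType HDD

# Carve out a tiered virtual disk (sizes are examples, not our actual layout)
New-VirtualDisk -StoragePoolFriendlyName "LocalPool" -FriendlyName "VMStore" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 700GB, 1200GB `
    -ResiliencySettingName Simple -ProvisioningType Fixed

# After initializing and formatting the disk as volume E:, enable deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV
```

Hot blocks migrate to the SSD tier automatically, which is what lets active VMs run at SSD speed without placing everything on flash.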

This setup met the requirements for cost-effective, high-performance storage and simple management, but without shared storage it offered no redundancy.

To address this, we implemented Hyper-V Replica to a fifth Hyper-V node: the four production hosts continuously replicate VM changes to the fifth node. If any Hyper-V host suffers a hardware failure, its VMs can be brought up on the replica node in minutes. We also put a robust backup solution in place that backs up VMs locally and replicates everything offsite.
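Enabling Hyper-V Replica along these lines takes only a few cmdlets. The host names, VM name, paths, and recovery-point count below are placeholders, not the project's actual values:

```powershell
# On the fifth (replica) node: accept replication over Kerberos/HTTP, port 80
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On each production host: replicate a VM to the replica node,
# keeping several hourly recovery points
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "hv-replica01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos -RecoveryHistory 8

# Kick off the initial copy; replication then continues automatically
Start-VMInitialReplication -VMName "SQL01"
```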


The client was thrilled with the results. Backup throughput, which had been 25-35 Mbps on the shared iSCSI SAN, increased to 200-250 Mbps on local storage after the project was completed.

Active VMs now live on SSDs instead of the slow mechanical hard drives of the past. Hyper-V Replica also gives the client an ability they have never had before: quickly restoring VMs on the replica node. In less than five minutes the client can bring back a failed VM instead of suffering through a full restore from backup. On top of that, Hyper-V Replica keeps eight recovery points of each VM within a 24-hour period, which is real reassurance!


This architecture fundamentally changes the way we normally deploy virtualization solutions. Instead of acquiring expensive blade servers, high-performance SANs, and high-performance storage networks, businesses can start virtualization projects with traditional rack-mount servers. Each server can potentially support 200-250 users, and the client no longer has to bleed money to run a virtualization solution in-house.

This is a perfect example of software-defined storage (SDS) in action. The only added hardware cost to the client was the solid-state drives (SSDs); all of the necessary software features are included with Windows Server 2012 R2. Although this is not a silver-bullet solution, it is a great alternative for anyone willing to look outside the box and move IT one step further away from being a cost center.



Unique Solutions for Distributed Storage

The primary objective of a distributed storage model is for the local storage on each individual server to act as a pool for the cluster. In this model, each virtual machine (and its data) is stored locally for better performance. Because the data is replicated across nodes, redundancy is achieved: alternate network paths open up and there is no single point of failure.

For a distributed storage model to work properly, a high-speed network is a must. The following vendors have put a great deal of work into making this model run efficiently. Here are the top four I recommend looking into, each listed with a short description of its solution.


Atlantis

Atlantis offers a memory-based, replicated local vSAN solution. It provides very good performance, but the setup is complex.

VMware vSAN

This solution requires a minimum of three servers. Each server has one SSD and at least one SAS/SATA drive. vSAN is still in beta, but the feedback so far has been great.


Nutanix

Nutanix created a solution in which each node has two SSDs and four SAS drives. It, too, requires a minimum of three servers. Each hypervisor node runs a controller VM that presents all local storage to the cluster and handles storage replication. This solution is definitely one to watch.

Microsoft Hyper-V Replica

Last, but not least, is the unique route Microsoft has taken with Hyper-V Replica. In my opinion, this is the simplest way to achieve redundancy, for the following reasons:

  1. There is no need to create a storage pool or a server cluster.
  2. If two active Hyper-V nodes host different VMs, a third node can act as the replica for both. Refer to Microsoft's Hyper-V Replica documentation for more information.
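As a sketch of that simplicity, a planned failover of a replicated VM is just a handful of cmdlets, with no cluster resources or storage pools to fail over. The VM name is a placeholder:

```powershell
# On the primary host: stop the VM and prepare the planned failover
Stop-VM -VMName "SQL01"
Start-VMFailover -VMName "SQL01" -Prepare

# On the replica node: fail over, commit, and start the VM
Start-VMFailover -VMName "SQL01"
Complete-VMFailover -VMName "SQL01"
Start-VM -VMName "SQL01"
```

An unplanned failover (primary host dead) is even shorter: run Start-VMFailover on the replica node, optionally choosing one of the retained recovery points.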

There are currently several vendors with unique distributed storage solutions, each with its own pros and cons. No matter which solution works best for you, this is certainly a space worth watching and researching.


Software Defined Storage: A Hot Topic

Why is Software Defined Storage such a hot topic this year? Because it is changing the way people purchase storage. Instead of relying on large storage companies, we now have the ability to build cost-effective storage solutions as needed.

Traditional storage technology has long been considered a very expensive investment, and large storage vendors (such as EMC and NetApp) have somewhat monopolized the market. These companies have historically charged from one thousand to several thousand dollars per drive, for both regular and high-performance disks. With the recent surge in demand for cloud computing, storage vendors are enjoying the accompanying increase in the price of their products.

The market's response to this huge shift in demand is Software Defined Storage (SDS). SDS gives businesses the ability to acquire commodity OEM drives and pool them into storage capacity for a fraction of the cost of comparable capacity from companies like EMC and NetApp. This technology increases the speed and agility of your company, resulting in higher productivity at reduced cost, and it may ultimately force the large vendors to drop their prices.

So what, in short, is Software Defined Storage? It is not just software or hardware, but a way to streamline the management of datacenters. Instead of being bound to any particular software or hardware vendor, you can create elastic storage on demand. Storage is on its way to becoming a commodity, and with that shift, businesses will be able to avoid one of their largest money pits: traditional storage technology.

We would be happy to discuss with you how Software Defined Storage can help you meet your storage needs, save you money, and provide added agility and flexibility to your datacenter.

Hot Topic for 2014: Software Defined Storage

Software Defined Storage (SDS) has been a hot topic so far in 2014. What is software defined storage? It is a storage infrastructure that is managed and automated by intelligent software rather than by the storage hardware itself. Its main benefits over traditional storage are flexibility, automated management, and cost efficiency. Below, I introduce the major players in the software-defined storage space and list the requirements for putting it to use.

First, let's look at traditional server storage. Regular server-based storage normally relies on a RAID controller with a sub-1 GHz CPU and 256 MB to 1 GB of memory. So we can have a server with 20 cores and 256 GB or more of memory, yet disk-subsystem performance is still limited by the throughput and IOPS of the RAID controller card.
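One way to see this bottleneck on a given server is to sample the standard PerfMon disk and CPU counters while the workload runs. A quick PowerShell sketch (counter paths are the built-in Windows performance counter names):

```powershell
# Sample disk throughput, IOPS, and CPU every 5 seconds for one minute.
# High CPU headroom alongside flat-lined disk numbers points at the
# RAID controller, not the server, as the limit.
$counters = "\PhysicalDisk(_Total)\Disk Bytes/sec",
            "\PhysicalDisk(_Total)\Disk Transfers/sec",   # ~IOPS
            "\Processor(_Total)\% Processor Time"

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples | ForEach-Object {
            "{0}: {1:N0}" -f $_.Path, $_.CookedValue
        }
    }
```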
