Software Defined Storage (SDS) has been a hot topic so far in 2014. What is software defined storage? It is storage infrastructure that is managed and automated by intelligent software rather than by the storage hardware itself. The main benefits of software defined storage over traditional storage are flexibility, automated management, and cost efficiency. Below, I introduce the major players in the software defined storage space and list the requirements for moving to this kind of infrastructure.
First, let's look at traditional server storage. Regular server-based storage normally relies on a RAID controller with a sub-1 GHz CPU and 256 MB–1 GB of memory. So we can have a server with 20 cores and 256 GB or more of memory, yet the disk subsystem's performance is limited by the throughput and IOPS of the RAID controller card.
Now, let's look at traditional SAN storage. Vendors such as NetApp and EMC design storage subsystems with very respectable performance, but that performance comes at a premium. You could purchase a 4 GB NetApp flash acceleration card that performs well, but it will cost you an arm and a leg. Fortunately, the solutions discussed below deliver great performance at a lower cost.
Now let me introduce the major players in the software defined storage space and what you will need to move to this infrastructure. Sun Microsystems was one of the earliest players in SDS, having introduced ZFS before Oracle acquired the company. The solution is installed on top of Solaris and can truly utilize a server's CPU and memory. Instead of a hardware RAID controller card, it uses a Host Bus Adapter (HBA), which provides a pass-through function that lets the hard drives be managed directly by the server. Consequently, performance is not limited by a RAID controller. The true beauty of ZFS is its ability to utilize RAM, off-the-shelf (OEM) SSDs, and OEM SAS and SATA drives. As a result, not only is the performance better, but the cost of a ZFS solution is very low compared to NetApp/EMC offerings.
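To make this concrete, here is a sketch of how such a ZFS layout might be built on Solaris. The device names are hypothetical placeholders, and the exact mix of mirrors, cache, and log devices would depend on your hardware; treat this as an illustrative config fragment, not a definitive recipe:

```shell
# Create a pool from SATA drives attached via the HBA
# (device names c0t0d0 ... are hypothetical Solaris identifiers)
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# Add an SSD as an L2ARC read cache, extending the RAM-based ARC
zpool add tank cache c0t4d0

# Add a mirrored SSD pair as the intent log (ZIL) to absorb synchronous writes
zpool add tank log mirror c0t5d0 c0t6d0

# Verify the resulting layout
zpool status tank
```

This is how ZFS blends RAM (ARC), SSDs (L2ARC/ZIL), and spinning disks into one tiered subsystem without any RAID controller in the data path.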
Microsoft is, of course, another major player in this space. Storage tiering in Windows Server 2012 R2 is a very strong offering to consider. The beauty of Microsoft's offering is that it is extremely easy to set up and maintain, and who doesn't love easy setup and maintenance? Better still, we can easily reuse older disk arrays by adding some SSDs. With that, 2012 R2 storage tiering, combined with SMB 3.0, can certainly compete head to head with the latest offerings from NetApp/EMC.
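As a sketch of how simple the setup is, tiered Storage Spaces can be configured in 2012 R2 with a handful of PowerShell commands. The pool name and tier sizes below are assumptions for illustration; adjust them to your own disks:

```powershell
# Gather all poolable physical disks (SSDs and HDDs)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "TieredPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Define an SSD tier and an HDD tier within the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "TieredPool" -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored, tiered virtual disk: hot blocks migrate to SSD, cold blocks to HDD
# (the 100 GB / 900 GB split is illustrative)
New-VirtualDisk -StoragePoolFriendlyName "TieredPool" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB -ResiliencySettingName Mirror
```

Once the virtual disk is formatted, Storage Spaces transparently moves frequently accessed data to the SSD tier on a schedule, which is what lets commodity disks plus a few SSDs behave like a much more expensive array.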
There are also quite a few virtual-appliance-based solutions to choose from. These normally require that the hypervisor support direct pass-through so that the appliance can use the HBA to manage the disks.
If you are interested in moving to software defined storage, here is a list of requirements:
- A server with a decent CPU and plenty of memory
- A Host Bus Adapter (HBA)
- A JBOD enclosure
- SAS/SATA drives
- A NIC with RDMA support
Finally, take a look at the screenshot below as an example of efficient software defined storage. It shows a 2012 R2 server with 4 x SSD and 4 x SATA drives using storage tiering. As you can see, the total I/Os per second is 222,215, which is in line with Whiptail. Pay close attention to the CPU utilization percentage as well: it shows why many traditional server storage systems struggle with 4K random IOPS. Not only the disks but also the CPU plays a major role here, and there is no way a hardware RAID controller can keep up.