Converged Storage - Some Available Options - Part 1

Converged storage options for the various hypervisors appear to be increasing.  The coverage in the press is also on the up, causing customers to review this technology as their data centre storage requirements grow.  This type of storage solution does not suit every set of requirements but it can offer some compelling advantages:

  • Potential use of commoditised hardware to lower costs
  • Incremental expansion and, therefore, expenditure
  • Data replication to distinct hosts aiding data resiliency
  • A software layer that abstracts an underlying distributed file system and presents storage as, for example, NFS
  • The ability to utilise various types of storage media and even memory to optimise performance
  • Further storage enhancement such as deduplication and compression
  • Local storage of VM virtual disks providing potential for high IOPS
  • Scale-out option without immediate degradation in performance due to linear-like IOPS increases (always up to a point - not infinitely)
  • Centralised tools to aid in management of large amounts of data
  • Self-managing features that reduce the human effort required for mundane system maintenance tasks such as data levelling
  • Seamless integration with hypervisor management tools, providing SAN-based advanced features such as VM live migration, live storage migration, HA, etc.
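The replication idea above can be sketched in a few lines: hash a block's ID to pick a starting host, then place copies on distinct hosts. This is purely an illustrative placement scheme (the host names and the hashing choice are invented for the example), not any vendor's actual algorithm:

```python
import hashlib

def place_replicas(block_id: str, hosts: list[str], copies: int = 2) -> list[str]:
    """Pick `copies` distinct hosts for a data block, spreading load
    by hashing the block ID (hypothetical placement scheme)."""
    if copies > len(hosts):
        raise ValueError("need at least as many hosts as copies")
    start = int(hashlib.md5(block_id.encode()).hexdigest(), 16) % len(hosts)
    # Walk the host "ring" from the hashed start point; each replica
    # lands on a different physical host.
    return [hosts[(start + i) % len(hosts)] for i in range(copies)]

hosts = ["esx01", "esx02", "esx03", "esx04"]   # hypothetical host names
replicas = place_replicas("vmdk-block-42", hosts, copies=2)
assert len(set(replicas)) == 2   # two copies on two distinct hosts
```

Because copies always land on distinct hosts, the loss of any single node leaves at least one replica reachable - the core resiliency property these products build on.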

The following is a brief summary of the leading options available in the marketplace today.  More detailed information on each SDS option will follow in due course.  This list will undoubtedly change in the coming months.  It is by no means a complete list.

Keep in mind that these solutions are not simply Virtual Storage Appliances.  Nutanix, for example, is a complete hardware and software VDI solution.  Each of these products is different in certain aspects but all offer the ability to 'converge storage', that is, to potentially remove expensive SAN hardware and replace it with a resilient, software-based virtual storage layer that utilises server-local disks in order to achieve SAN/NAS-like functionality.

vSphere VSAN

  • Released this month (March 2014) as part of vSphere 5.5 U1
  • A reported 12,000 beta testers
  • Implemented as part of hypervisor kernel as opposed to a VSA
  • Narrow supported hardware list: enterprise vendors such as HP, Dell, IBM, EMC and Cisco
  • Minimum 3 nodes in a cluster.  Scales to 32 nodes, 4.5TB
  • 10GbE recommended (logical, IMO)
  • SSDs used as write buffer and read cache, HDDs used for persistent storage.  Disk read cache is actively managed
  • 100 VM limit per node
  • Policy based storage management used to setup data resiliency
  • Targeted use cases are VDI, dev and test environments
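The policy-based resiliency model can be illustrated with a small calculation. Assuming VSAN's 'failures to tolerate' rule - n + 1 replicas of each object and a 2n + 1 host minimum - a sketch might look like this (a simplified model; real witness component counts can vary per object):

```python
def vsan_requirements(failures_to_tolerate: int) -> dict:
    """Simplified view of the 'failures to tolerate' (FTT) policy:
    with FTT = n, an object keeps n + 1 data replicas plus witness
    components, so a majority survives n host failures."""
    n = failures_to_tolerate
    return {"replicas": n + 1, "witnesses": n, "min_hosts": 2 * n + 1}

# FTT = 1 matches the 3-node minimum cluster mentioned above.
assert vsan_requirements(1) == {"replicas": 2, "witnesses": 1, "min_hosts": 3}
```

This is why the minimum cluster size is three nodes: two replicas plus a witness must each sit on a different host to break ties after a failure.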

EMC ScaleIO

  • EMC owned product
  • A VSA (an OVF template) that runs on vSphere; on Xen, the software is installed directly in Dom0
  • The VMware appliance exposes an iSCSI target to the hypervisor's iSCSI initiator
  • Three software components - Metadata Manager (MDM), ScaleIO Data Server (SDS), ScaleIO Data Client (SDC)
  • Capable of leveraging host-local PCIe flash cards, SSDs and HDDs
  • Tiered storage (flash/SSD and HDDs).  Cache contents in SSDs are managed by the software
  • Has two main storage concepts - Protection Domains and Storage Pools
  • Protection Domains - groups of SDS
  • Storage Pools - groups of physical storage devices inside a Protection Domain
  • Node and disk changes cause automatic rebalancing within the system.  Admin intervention is not required
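That automatic rebalancing behaviour can be approximated with a greedy sketch: when a node joins (or a disk changes), blocks migrate from the busiest node to the least busy until loads are even. This is an invented illustration of the concept, not ScaleIO's actual rebalancing algorithm:

```python
from collections import defaultdict

def rebalance(placement: dict[str, str], nodes: list[str]) -> dict[str, str]:
    """Greedy rebalance: move blocks from the busiest node to the
    least busy until block counts differ by at most one.
    `placement` maps block ID -> node; `nodes` is the current node set."""
    load = defaultdict(list)
    for block, node in placement.items():
        load[node].append(block)
    for node in nodes:
        load.setdefault(node, [])   # newly joined nodes start empty
    while True:
        busiest = max(load, key=lambda n: len(load[n]))
        idlest = min(load, key=lambda n: len(load[n]))
        if len(load[busiest]) - len(load[idlest]) <= 1:
            break                   # as balanced as it can get
        load[idlest].append(load[busiest].pop())
    return {b: n for n, blocks in load.items() for b in blocks}

# Adding a third node spreads six blocks evenly with no admin input.
placement = {f"b{i}": ("n1" if i < 4 else "n2") for i in range(6)}
balanced = rebalance(placement, ["n1", "n2", "n3"])
```

Real systems also weight by capacity and throttle migration traffic, but the self-managing principle - no admin intervention - is the same.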

Atlantis ILIO USX

  • Newly released SDS solution
  • Very flexible software solution
  • Installs as VM on hypervisor (i.e. a VSA)
  • Two storage pool concepts - memory and capacity
  • Capacity can be DAS, NAS and SAN
  • Memory can be Flash and RAM
  • Virtualises pooled storage as 'Application Defined Storage Volumes'
  • Advanced optimisations such as inline de-duplication and compression
  • A recommended use case is the optimisation of already existing storage
  • Can scale up to 256 nodes
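Inline de-duplication of the kind mentioned above generally boils down to content addressing: hash each incoming block and store identical blocks only once. A minimal sketch (a hypothetical class, not Atlantis's implementation) might look like:

```python
import hashlib

class DedupStore:
    """Content-addressed block store: identical blocks share storage."""
    def __init__(self):
        self.blocks = {}      # fingerprint -> actual data (stored once)
        self.refcount = {}    # fingerprint -> number of logical writes

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:
            self.blocks[fp] = data            # first copy: store it
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp                             # logical block address

store = DedupStore()
a = store.write(b"OS image block")
b = store.write(b"OS image block")   # duplicate write: no new storage used
assert a == b and len(store.blocks) == 1 and store.refcount[a] == 2
```

Doing this inline - before the block hits disk - is what makes it attractive for VDI workloads, where hundreds of near-identical OS images generate mostly duplicate blocks.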

Maxta SDS

  • Virtual machine based SDS
  • Presents pooled storage over NFS
  • Initially VMware support (vSphere 5.0 Update 1 and higher).  KVM and Hyper-V soon.  No mention of XenServer at the moment, although the stated intention is to be hypervisor agnostic
  • Uses Flash as cache, HDDs for persistent storage
  • Management integrated into VM management tools.  I.e. no separate 'storage management'
  • Offers snapshots, cloning (zero copy), thin-provisioning
  • Data integrity via storage checksums
  • Provides resiliency, local-remote data replication, high availability
  • Maximises storage utilisation with inline compression and de-dupe
  • MxSP uses SSDs for read and write-back caching
  • Maxta compatibility is determined by HV HCL
  • Does VM-level snapshots without the performance degradation associated with VMFS snapshots
  • Split-brain resolution means that more than two nodes are required
  • There is a function to build 'local copies' of data, meaning data is also replicated locally.  In the event of a local disk failure, a rebuild over the network from replicated data is avoided
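The checksum-based data integrity idea is worth a quick sketch: store a checksum alongside each block and verify it on every read, so silent corruption is detected and a replica on another node can be used instead. This example uses CRC32 for brevity and is generic, not Maxta's actual mechanism:

```python
import zlib

def write_block(data: bytes) -> tuple[bytes, int]:
    # Store the block together with a CRC32 checksum of its contents.
    return data, zlib.crc32(data)

def read_block(data: bytes, checksum: int) -> bytes:
    # Verify on read: a mismatch signals silent corruption (bit rot,
    # a bad sector), and the caller would fall back to a replica.
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: block corrupted")
    return data
```

Without read-time verification, a corrupted block would be served - and eventually replicated - as if it were good data; the checksum turns silent corruption into a recoverable error.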

StorMagic SvSAN

  • The VSA is a Linux appliance running as a VM
  • VMware vSphere 4+ and Hyper-V 3 now, KVM and Xen on the roadmap
  • Minimum of two VSAs; a third can be used as a quorum witness, offering no disk
  • Nodes are active-active - i.e. either node can fail and VMs can come back up
  • Presents storage as iSCSI target to hypervisor
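The quorum arrangement above is a standard majority-vote scheme: with two storage VSAs plus a diskless witness, any network split leaves exactly one side holding a majority, so only one side keeps serving storage. A minimal sketch of the rule (illustrative, not StorMagic's code):

```python
def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    # A partition may continue serving storage only if it can see a
    # strict majority of votes; with 2 VSAs plus a diskless quorum
    # node (3 votes total), exactly one side of any split wins.
    return reachable_votes > total_votes // 2

assert has_quorum(2, 3) is True    # VSA + quorum witness: keeps serving
assert has_quorum(1, 3) is False   # isolated VSA: stands down
```

With only two nodes and no witness (2 votes total), a split gives each side one vote, neither has a majority, and both would have to stop - which is exactly why the third, diskless node is offered.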

Part 2 Will Contain

  • Parallels Storage Server
  • Nexenta Connect
  • Nutanix
  • SimpliVity OmniCube

Click here to view part 2.