This blog post was originally written by Siddharth Joshi from the Windows Engineering Team.

Storage Spaces is Microsoft's software-defined storage technology that enables virtualized storage by grouping industry-standard disks into storage pools and then creating virtual disks, called storage spaces, from the available capacity in those pools. Storage Spaces was introduced in Windows Server 2012 and continues in Windows Server 2016. Windows Server 2016 also adds another software-defined storage technology called Storage Spaces Direct, which eliminates the requirement for shared storage directly attached to the nodes of the cluster. To learn more about Storage Spaces Direct, see the link below:

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview

Nano Server is a new installation option available with Windows Server 2016. It is a headless, remotely administered server operating system with a minimal footprint. For more about Nano Server, see the link below:

https://technet.microsoft.com/windows-server-docs/get-started/getting-started-with-nano-server

In this post, we will create a Storage Spaces pool and virtual disks on a Nano Server cluster. We used a Dell Storage MD1420, a JBOD enclosure connected to the Nano Server nodes (Dell PowerEdge R530 servers) with 12 Gbps SAS cables. For more about the MD1420 direct-attached storage enclosure, refer to the link below:

http://www.dell.com/in/business/p/storage-md1420/pd

Steps to create a Storage Spaces pool

  1. Physically connect the Dell MD1420 JBOD enclosure to the Nano Servers.
  2. Connect to a Nano Server using remote PowerShell.
  3. To check the firmware version of the JBOD enclosure, run the following cmdlet:

Get-PnpDevice -Class System | Where-Object { $_.FriendlyName -like "*scsi*" } | Select-Object HardwareID

The output of this cmdlet shows the hardware IDs for the JBOD; in the example below, 1.02 is the current firmware version of the enclosure.

HardwareID                  : {SCSI\EnclosureDELL____MD1420__________1.02, SCSI\EnclosureDELL____MD1420__________,SCSI\EnclosureDELL____, SCSI\DELL____MD1420__________1...}
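As noted in step 2, each of these cmdlets is run in a remote PowerShell session to a Nano Server node. A minimal sketch of opening that session is shown below; the computer name and credential are placeholders, and if the node is not yet domain joined it may also need to be added to the local TrustedHosts list first.

# Placeholders: replace "Nano1" with the actual Nano Server name or IP address
Set-Item WSMan:\localhost\Client\TrustedHosts "Nano1" -Concatenate -Force
$cred = Get-Credential
Enter-PSSession -ComputerName "Nano1" -Credential $cred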

4. If the enclosure is not already running the latest firmware, update it to the latest version available from the Dell support site (ensure it is version X.X or higher):

http://www.dell.com/support/home/us/en/19/product-support/product/storage-md1420/drivers?os=w12r2

5. Once the firmware is updated, reboot the enclosure. After the reboot, the enclosure should be re-enumerated on the server. Run Get-StorageEnclosure to confirm that the MD1420 enclosure is listed.
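For example, a quick health check of the enclosure might look like this (a sketch; the property selection is illustrative):

Get-StorageEnclosure | Select-Object FriendlyName, HealthStatus, NumberOfSlots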

6. Run the Get-Disk cmdlet to confirm that the disks from the JBOD are listed.
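For example, to list just the SAS disks presented by the JBOD (a sketch; the property selection is illustrative):

Get-Disk | Where-Object { $_.BusType -eq "SAS" } | Select-Object Number, FriendlyName, Size, OperationalStatus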

7. By default, disks from the JBOD will be offline, so the following cmdlets can be used to bring them online and initialize them:

Get-Disk | Where-Object { $_.OperationalStatus -eq "Offline" } | Set-Disk -IsOffline $false
Get-Disk | Where-Object { $_.PartitionStyle -eq "RAW" } | Initialize-Disk

8. Next, create the Storage Spaces cluster and add the Nano Server nodes:

  • Add the Nano Servers to an Active Directory domain (all the nodes to be included in the Storage Spaces cluster). Nano Server supports only the offline method to join a domain, which uses djoin.exe; a sketch follows this list, and the Nano Server documentation linked above covers the details.
  • Install the Failover Clustering feature on each Nano Server. On Nano Server this feature is delivered as a package rather than through Install-WindowsFeature: include it when building the Nano Server image (for example, with the -Clustering switch of New-NanoServerImage) or add the Microsoft-NanoServer-FailoverCluster-Package to an existing image.
  • Test whether the physical disks support SCSI PR (Persistent Reservation). This is required only for a Storage Spaces cluster, not for standalone Storage Spaces configured on a single server. To test SCSI PR support, run the Test-Cluster cmdlet with the -Disk parameter against all the nodes that will participate in the cluster.
  • Create the cluster by running the New-Cluster cmdlet. A static IP address for the cluster can be provided if desired.
  • Confirm the cluster is working and its resources are online by running Get-ClusterResource.
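A sketch of the offline domain join mentioned in the first bullet (the domain name, machine name, and file path are placeholders):

# On a domain-joined machine, provision the computer account and save the blob (names are placeholders)
djoin.exe /provision /domain contoso.com /machine Nano1 /savefile C:\odjblob

# Copy the blob to the Nano Server, then in a remote session on that node request the join
djoin.exe /requestodj /loadfile C:\odjblob /windowspath C:\Windows /localos

# Reboot the Nano Server to complete the domain join
Restart-Computer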

Note: Nano Server does not include the PowerShell module for failover clustering, so cluster-related cmdlets cannot be run inside a remote PowerShell session to a Nano Server. Run them locally on a management station instead (a Windows Server 2016 machine in the same Active Directory domain with the Failover Clustering PowerShell module installed, which is part of the Failover Clustering feature), pointing them at the Nano Server nodes or the cluster.
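Run from the management station, those cluster steps might look roughly like this (node names, the cluster name, and the static IP address are placeholders):

# Validate the nodes; the storage tests check SCSI persistent reservation support on the shared disks
# (the -Disk parameter can be added to target specific disks)
Test-Cluster -Node "Nano1","Nano2"

# Create the cluster with an optional static IP address
New-Cluster -Name SSCluster -Node "Nano1","Nano2" -StaticAddress 192.168.1.50

# Confirm the cluster resources are online
Get-ClusterResource -Cluster SSCluster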

 9. Create a new Storage Pool

  • Get the list of physical disks that can be part of the storage pool by running $pdisks = Get-PhysicalDisk -CanPool $True | Where-Object BusType -EQ "SAS"
  • Run Get-StorageSubSystem to verify that the Clustered Windows Storage subsystem is listed and healthy.

PS C:\> Get-StorageSubSystem

FriendlyName
------------
Windows Storage on R530-SS
Clustered Windows Storage on SSCluster

Here, R530-SS is the host name of the Nano Server node where this cmdlet was executed, and SSCluster is the name of the failover cluster.

  • Create a Storage Pool named “TestPool” by running New-StoragePool -FriendlyName TestPool -PhysicalDisks $pdisks -ProvisioningTypeDefault Fixed -StorageSubSystemFriendlyName "Clustered Windows Storage on SSCluster"

  10. Run Get-StoragePool to confirm that the newly created storage pool is listed.
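For example (a sketch; the property selection is illustrative):

Get-StoragePool -FriendlyName TestPool | Select-Object FriendlyName, OperationalStatus, HealthStatus, Size, AllocatedSize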

  11. Virtual disks can be created on this storage pool by running the New-VirtualDisk cmdlet. Below are some examples.

Create a Simple Virtual Disk of size 100GB named Vdisk1-Simple

New-VirtualDisk -StoragePoolFriendlyName TestPool -Size 100GB -FriendlyName Vdisk1-Simple -ResiliencySettingName Simple

Create a Mirror Virtual Disk of size 100 GB named Vdisk2-Mirror

New-VirtualDisk -StoragePoolFriendlyName TestPool -Size 100GB -FriendlyName Vdisk2-Mirror -ResiliencySettingName Mirror

Create a Parity Virtual Disk of size 100GB named Vdisk3-Parity

New-VirtualDisk -StoragePoolFriendlyName TestPool -Size 100GB -FriendlyName Vdisk3-Parity -ResiliencySettingName Parity

  12. The list of virtual disks can be seen by running Get-VirtualDisk.

After initializing these virtual disks and creating volumes formatted with either NTFS or ReFS, they can be used for any purpose. A Scale-Out File Server role can also be added to the cluster so that SMB shares created on these disks can be presented to other servers on the network as storage. (For example, a Hyper-V host or cluster can store virtual machine data on the SMB shares exposed by the Storage Spaces cluster, or a SQL Server instance can keep its databases on these shares.)
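A minimal sketch of preparing one of the new virtual disks for use (the virtual disk name, file system, and volume label below are just illustrative choices):

# Initialize the disk backing the virtual disk, create a partition, and format it
# (Vdisk2-Mirror, ReFS, and the "VMStorage" label are illustrative)
Get-VirtualDisk -FriendlyName Vdisk2-Mirror | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "VMStorage"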