Storage Spaces Direct Requirements and Components
For Storage Spaces Direct (S2D) to work properly, several software and hardware requirements must be met. From the Windows Server edition and the hardware specifications to the network configuration, everything must be up to standard to achieve optimal performance and high availability.
S2D-Supported Windows Server Editions
Storage Spaces Direct (S2D) is available only in Windows Server Datacenter Edition; the feature is not included in Windows Server Standard Edition or other editions. Here is a list of the Windows Server editions that support S2D:
- Windows Server 2016 Datacenter Edition – The first version to support S2D with basic features.
- Windows Server 2019 Datacenter Edition – Improves performance, storage efficiency, and monitoring features.
- Windows Server 2022 Datacenter Edition – The latest version with improved security, faster performance, and better storage management.
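If you are not sure which edition a node is running, the installed edition can be checked quickly with PowerShell. This is a generic verification step, run locally on each server:

```powershell
# Query the installed Windows edition; S2D requires a Datacenter edition.
# Requires an elevated prompt (the DISM module reads the online image).
$edition = (Get-WindowsEdition -Online).Edition
if ($edition -like "*Datacenter*") {
    Write-Output "Edition '$edition' supports Storage Spaces Direct."
} else {
    Write-Warning "Edition '$edition' does not support Storage Spaces Direct."
}
```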
Hardware Required for Storage Spaces Direct
1. Minimum of 2 Nodes (3-16 Nodes Recommended) for High Availability
S2D runs on a failover cluster, which requires at least 2 nodes to function. However, to achieve high availability, it is recommended to use between 3 and 16 nodes.
| Number of Nodes | Advantage |
| --- | --- |
| 2 nodes | Usable, but quorum requires a witness (cloud or file share), and resiliency is limited to two-way mirroring. |
| 3 nodes | Supports automatic failover; safer against a single server failure. |
| 4-16 nodes | High scalability, maximum performance, and better fault tolerance. |
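As a rough sketch of how the cluster itself is stood up (the node and cluster names, IP address, and witness path below are placeholders, not prescribed values):

```powershell
# Validate the candidate nodes, including the S2D-specific tests.
Test-Cluster -Node "Node1","Node2","Node3" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without adding storage; S2D claims the drives later.
New-Cluster -Name "S2D-Cluster" -Node "Node1","Node2","Node3" `
    -NoStorage -StaticAddress "192.168.1.100"

# Especially for 2-node clusters, add a witness so quorum survives the
# loss of one node; a cloud witness works just as well as a file share.
Set-ClusterQuorum -Cluster "S2D-Cluster" -FileShareWitness "\\FS01\Witness"
```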
2. Supported Drive Types
S2D supports a variety of Direct-Attached Storage (DAS) device types; the drives must be connected directly to each server (for example, through an HBA in pass-through mode) without a RAID controller in between. Types of storage that can be used include:
- NVMe (Non-Volatile Memory Express) – The fastest option, making it ideal for the caching layer.
- SSD (Solid State Drive) – Offers high performance with low latency, recommended for clusters that require high I/O.
- HDD (Hard Disk Drive) – Provides large capacity at a lower cost, ideal for cold storage or infrequently accessed data.
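Before enabling S2D, it is worth confirming that the drives are visible as direct-attached and poolable. A minimal check on any node:

```powershell
# S2D can only claim drives that are unpartitioned (CanPool = True) and
# attached without RAID (BusType such as SATA, SAS, or NVMe).
Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, BusType, CanPool, Size |
    Sort-Object MediaType |
    Format-Table -AutoSize
```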
3. Network Requirements: RDMA via iWARP or RoCE for Best Performance
S2D requires a fast, low-latency network to keep communication between the nodes in the cluster efficient. Microsoft recommends Remote Direct Memory Access (RDMA) technology, which increases throughput and reduces CPU load.
RDMA technologies available:
- RDMA over iWARP – Runs over TCP/IP-based protocols, so it is easier to configure and behaves predictably in an enterprise network environment.
- RDMA over RoCE (RDMA over Converged Ethernet) – Offers lower latency, but requires DCB (Data Center Bridging) on the NICs and switches to prevent packet loss.
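To verify that the adapters actually expose RDMA, and to sketch the DCB settings RoCE needs (the adapter names, priority value 3, and the 50% bandwidth share below follow common practice and should be adjusted to your fabric):

```powershell
# Check which network adapters report RDMA capability.
Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled

# RoCE only: reserve a lossless traffic class for SMB Direct via DCB.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "NIC1","NIC2"
```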
4. Up-to-Date Drivers & Firmware
To avoid system errors and compatibility issues, all drivers and firmware used should be kept up to date. Components that must be checked include:
- Network adapter drivers (RDMA support) – Make sure they support iWARP or RoCE.
- NVMe/SSD/HDD firmware – Update firmware for optimal data-access stability and speed.
- Server BIOS & firmware – Make sure to use the latest version to improve system compatibility and security.
- Windows Server updates – Always patch and update Windows Server to get the latest bug fixes and features.
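A few PowerShell checks can help with this housekeeping; note that drive firmware reporting depends on hardware support, so Get-StorageFirmwareInformation may return nothing on some devices:

```powershell
# Report the firmware revision of each physical drive (where supported).
Get-PhysicalDisk | Get-StorageFirmwareInformation

# List driver versions and dates for the network adapters.
Get-NetAdapter | Select-Object Name, InterfaceDescription, DriverVersionString, DriverDate

# Confirm the OS build and patch level on each node.
Get-ComputerInfo -Property OsName, OsVersion, OsBuildNumber
```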
Example Hardware Configuration for S2D
Here’s an example of a hardware configuration for an S2D environment with 3 nodes:
Servers: 3 server units with Intel Xeon or AMD EPYC processors.
Drives per Node:
- 2 x NVMe (1.6 TB) for caching.
- 4 x SSD (3.84 TB) for tiering.
- 8 x HDD (10 TB) for storage capacity.
Networking: 2 x 10 GbE NIC with RDMA (RoCE or iWARP) support.
Switch: A 10 GbE switch that supports RDMA.
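With hardware like this in place and the cluster created, enabling S2D and carving out the first volume is a short sequence (the cluster name, volume name, and size here are illustrative):

```powershell
# Enable S2D on the cluster; eligible drives are claimed automatically and
# the fastest media (here, the NVMe devices) becomes the caching tier.
Enable-ClusterStorageSpacesDirect -CimSession "S2D-Cluster"

# Create a resilient, cluster-shared volume on the auto-created pool.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS -Size 2TB
```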