Before deployment, make sure all requirements and prerequisites have been met.
Step 1: Deploy Windows Server
Step 1.1: Install the operating system
- Storage Spaces Direct requires Windows Server 2016 Datacenter Edition.
- You can use the Server Core installation option or Server with Desktop Experience (a quick edition check follows below).
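To confirm that each server is running the required edition before continuing, the following is a minimal check; it only reports the installed OS caption and version, and assumes you run it locally or in a remote session on each node:
# Optional check: report the installed edition and build on this node
Get-CimInstance Win32_OperatingSystem | Select-Object Caption, Version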
Step 1.2: Connect to the servers (remotely)
If you're deploying and managing the servers remotely from a separate management system, it needs:
- Windows Server 2016 with the same updates as the servers it’s managing
- Network connectivity to the servers it’s managing
- Joined to the same domain or a fully trusted domain
- Remote Server Administration Tools (RSAT) and PowerShell modules for Hyper-V and Failover Clustering (see the sketch below for installing these).
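If the tools aren't already present on the management system, a minimal sketch for installing them, assuming the management system runs Windows Server (where these feature names apply):
# Install the PowerShell modules for Failover Clustering and Hyper-V on the management system
Install-WindowsFeature -Name RSAT-Clustering-PowerShell, Hyper-V-PowerShell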
How to:
Use Windows PowerShell to add each node server to the Trusted Hosts list on your management computer, then open a remote session to it:
Set-Item WSMAN:\Localhost\Client\TrustedHosts -Value MachineName1 -Force
Enter-PSSession -ComputerName <MachineName1> -Credential LocalHost\Administrator
Step 1.3: Join the domain and add domain accounts.
- Join the servers to a domain:
Add-Computer -NewName "MachineName1" -DomainName "Domain.com" -Credential "Domain\User" -Restart -Force
- If your storage administrator account isn't a member of the Domain Admins group, add it to the local Administrators group on each node:
Net localgroup Administrators <Domain\Account> /add
Step 1.4: Install roles and features
- Failover Clustering
- Hyper-V
- File Server (if you want to host any file shares, such as for a converged deployment)
- Data-Center-Bridging (if you’re using RoCEv2 instead of iWARP network adapters)
- RSAT-Clustering-PowerShell
- Hyper-V-PowerShell
$ServerList = "MachineName1", "MachineName2"
$FeatureList = "Hyper-V", "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell", "FS-FileServer"
Invoke-Command ($ServerList) {
    # $Using: pulls the feature list defined on the management system into the remote session
    Install-WindowsFeature -Name $Using:FeatureList
}
Step 2: Configure the network
Storage Spaces Direct requires high-bandwidth, low-latency networking between servers in the cluster.
- At least 10 GbE networking is required.
- Remote direct memory access (RDMA) is recommended.
For supported configurations, see the Storage Spaces Direct networking documentation. A quick adapter check follows.
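Before moving on, it can help to confirm that the adapters you plan to use report RDMA capability. A minimal check (adapter names and speeds will vary with your hardware):
# List adapters and their link speed, then check which report RDMA support
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed
Get-NetAdapterRdma | Format-Table Name, Enabled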
Step 3: Configure Storage Spaces Direct
Step 3.1: Clean drives
Before you enable Storage Spaces Direct, ensure your drives are empty: no old partitions or other data.
Run the following script:
$ServerList = "MachineName1", "MachineName2"
Invoke-Command ($ServerList) {
    Update-StorageProviderCache
    Get-StoragePool | ? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
    Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue
    Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -ne RAW | % {
        $_ | Set-Disk -isoffline:$false
        $_ | Set-Disk -isreadonly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
        $_ | Set-Disk -isreadonly:$true
        $_ | Set-Disk -isoffline:$true
    }
    Get-Disk | Where Number -Ne $Null | Where IsBoot -Ne $True | Where IsSystem -Ne $True | Where PartitionStyle -Eq RAW | Group -NoElement -Property FriendlyName
} | Sort -Property PsComputerName, Count
Step 3.2: Validate the cluster
Run the cluster validation tool to ensure that the server nodes are configured correctly to create a cluster using Storage Spaces Direct:
Test-Cluster -Node <MachineName1, MachineName2> -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
Step 3.3: Create the cluster
Create a cluster with the nodes that you validated in the preceding step by using the following PowerShell cmdlet:
New-Cluster -Name <ClusterName> -Node <MachineName1,MachineName2> -NoStorage
- It's recommended that you configure a file share witness or cloud witness after creating the cluster.
- If the cluster name doesn't resolve, in most cases you can use the machine name of a node that is an active member of the cluster instead of the cluster name.
Step 3.4: Configure a cluster witness
It is recommended that you configure a witness for the cluster, so that a cluster with three or more servers can withstand two servers failing or being offline.
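As a sketch, either witness type can be configured with Set-ClusterQuorum; the share path, storage account name, and access key below are placeholders you'd replace with your own values:
# File share witness (the share must already exist and be accessible to all nodes)
Set-ClusterQuorum -Cluster <ClusterName> -FileShareWitness \\FileServer\Witness

# Or a cloud witness, using an Azure storage account
Set-ClusterQuorum -Cluster <ClusterName> -CloudWitness -AccountName <StorageAccountName> -AccessKey <AccessKey>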
Step 3.5: Enable Storage Spaces Direct
To enable Storage Spaces Direct, create a pool, and configure the cache and tiers, run the following command:
Enable-ClusterStorageSpacesDirect -CimSession <ClusterName>
The command automatically does the following:
- Creates a pool: Creates a single large pool with a name like “S2D on Cluster1”.
- Configures the Storage Spaces Direct cache: If more than one media (drive) type is available, the fastest is used for cache devices (read and write in most cases).
- Creates tiers: Creates two default tiers, one called “Capacity” and the other called “Performance”.
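To optionally verify what the command created, you can inspect the pool and tiers from the management system (a sketch; <ClusterName> is your cluster’s name):
# Show the pool created by Enable-ClusterStorageSpacesDirect
Get-StoragePool -CimSession <ClusterName> | Where-Object IsPrimordial -eq $false | Format-Table FriendlyName, Size, HealthStatus
# Show the default tiers
Get-StorageTier -CimSession <ClusterName> | Format-Table FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy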
Step 3.6: Create volumes
- Using Failover Cluster Manager
- Using PowerShell
Using Failover Cluster Manager
There are three major steps:
Step 1: Create virtual disk
- In Failover Cluster Manager, navigate to Storage -> Pools.
- Select New Virtual Disk from the Actions pane on the right, or right-click the pool and select New Virtual Disk.
- Select the storage pool and click OK. The New Virtual Disk Wizard (Storage Spaces Direct) will open.
- Use the wizard to name the virtual disk and specify its size.
- Review your selections and click Create.
- Before closing, be sure to check the box labeled Create a volume when this wizard closes.
Step 2: Create volume
The New Volume Wizard will open.
- Select the virtual disk you just created and click Next.
- Specify the volume’s size (default: the same size as the virtual disk) and click Next.
- Assign the volume to a drive letter or choose Don’t assign to a drive letter and click Next.
- Specify the filesystem to use, leave the allocation unit size as Default, name the volume, and click Next.
- Review your selections and click Create, then Close.
Step 3: Add to cluster shared volumes
- In Failover Cluster Manager, navigate to Storage -> Disks.
- Select the virtual disk you just created and select Add to Cluster Shared Volumes from the Actions pane on the right, or right-click the virtual disk and select Add to Cluster Shared Volumes.
You’re done! Repeat as needed to create more than one volume.
Using PowerShell
The New-Volume cmdlet has four parameters you’ll always need to provide, plus one more when you have four or more servers:
- FriendlyName: Any string you want, for example “Volume1”
- FileSystem: Either CSVFS_ReFS (recommended) or CSVFS_NTFS
- StoragePoolFriendlyName: The name of your storage pool, for example “S2D on ClusterName”
- Size: The size of the volume, for example “10TB”
- ResiliencySettingName: Either Mirror or Parity (used with four or more servers, as in the example below)
With 2 or 3 servers
New-Volume -FriendlyName "Volume1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB
With 4+ servers
New-Volume -FriendlyName "Volume2" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB -ResiliencySettingName Mirror
Using storage tiers
Get-StorageTier | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy
New-Volume -FriendlyName "Volume4" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 300GB, 700GB
Step 3.7: Optionally enable the CSV cache
To set the size of the CSV cache, open a PowerShell session on the management system with an account that has administrator permissions on the storage cluster, and then run the following:
$ClusterName = "StorageSpacesDirect1"
$CSVCacheSize = 2048 # Size in MB
# Set the CSV block cache size on the cluster
(Get-Cluster $ClusterName).BlockCacheSize = $CSVCacheSize
# Read the value back to confirm it was applied
$CSVCurrentCacheSize = (Get-Cluster $ClusterName).BlockCacheSize
Step 3.8: Deploy virtual machines for hyper-converged deployment
The virtual machine’s files should be stored on the system’s CSV namespace (example: c:\ClusterStorage\Volume1), just like clustered VMs on failover clusters.
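As a sketch of what that looks like, the following creates a VM whose configuration and virtual disk live on the CSV path and then makes it highly available; the VM name, sizes, and switch name are placeholders to adjust for your environment:
# Create the VM with its files under the cluster shared volume (run on a cluster node)
New-VM -Name "VM01" -MemoryStartupBytes 4GB -Generation 2 `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "vSwitch01"

# Make the VM a clustered role so it can fail over between nodes
Add-ClusterVirtualMachineRole -VMName "VM01" -Cluster <ClusterName>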
Step 4: Deploy Scale-Out File Server for converged solutions
Step 4.1: Create the Scale-Out File Server role
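A minimal sketch for this step, assuming you pick a name for the Scale-Out File Server role (<SOFSName> and <ClusterName> are placeholders):
# Create the Scale-Out File Server clustered role
Add-ClusterScaleOutFileServerRole -Name <SOFSName> -Cluster <ClusterName>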
Step 4.2: Create file shares
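A sketch of creating an SMB share on one of the cluster volumes and scoping it to the Scale-Out File Server; the folder path, share name, scope name, and account are placeholders:
# Create a folder on the CSV and share it through the Scale-Out File Server
New-Item -Path "C:\ClusterStorage\Volume1\Shares\Share01" -ItemType Directory
New-SmbShare -Name "Share01" -Path "C:\ClusterStorage\Volume1\Shares\Share01" -ScopeName <SOFSName> -FullAccess "Domain\HyperVAdmins"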
Step 4.3: Enable Kerberos constrained delegation