#008 Build a Windows Server Failover Cluster - Part 4
We're getting closer to finishing the cluster build. Last week you added a third network and configured it to isolate cluster traffic (aka heartbeat).
This week's challenge focuses on setting up the shared iSCSI disks and presenting them to both cluster nodes. We'll use the 10.0.1.0/24 network to establish the connection between the iSCSI initiators (CLUSTER1SRV1 and CLUSTER1SRV2) and the iSCSI target (STORSRV1).
This challenge has four objectives.
- Configure a new disk for STORSRV1 (you'll do this in the VM settings).
- Create 4 virtual iSCSI disks and place them on the disk you created in Step 1.
- Connect the initiators, CLUSTER1SRV1 and CLUSTER1SRV2, to the iSCSI target.
- Initialize, partition, and format each of the 4 disks. Assign drive letters to each.
Ready?
Estimated time to complete: Less than 2 hours.
Step 1: Add a disk to STORSRV1
Let's avoid placing the iSCSI virtual disks on the C: drive. This step depends on which hypervisor you're using; in my example, I used VMware Workstation Pro.
- Shut down STORSRV1.
- Add a 100 GB virtual disk. You can adjust this size to suit your environment. If you're using VMware Workstation Pro, leave "Allocate all disk space now" unchecked; space will be allocated as needed.
- Start the VM, then initialize, partition, and format the drive using ReFS. I used the drive letter E:. You can use Server Manager - File and Storage Services - Volumes - Disks to format the disk (see the second screenshot for an example), or use the PowerShell sketch below. Set the drive label to iSCSI.
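If you'd rather script this than click through Server Manager, here's a minimal PowerShell sketch. It assumes the new 100 GB disk is the only uninitialized (RAW) disk on STORSRV1; verify with Get-Disk first.

```powershell
# Run on STORSRV1 after adding the virtual disk in the hypervisor.
# Assumes the new disk is the only RAW (uninitialized) disk.
$disk = Get-Disk | Where-Object PartitionStyle -eq 'RAW'
Initialize-Disk -Number $disk.Number -PartitionStyle GPT
New-Partition -DiskNumber $disk.Number -DriveLetter E -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel 'iSCSI'
```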
Step 2: Create 4 virtual disks
Next, you'll create the 4 iSCSI virtual disks using Server Manager - File and Storage Services - iSCSI. Use the iSCSI Virtual Disk Wizard to create each disk, and place them on the E: drive you created in Step 1.
Once the wizard starts, configure each disk accordingly (adjust to match your environment).
Disk witness - You're building a two-node cluster with each node having the same number of votes (1). To create and maintain quorum, a tie-breaking vote must be introduced; this disk will serve as the cluster's third vote. You have other options for the quorum configuration, such as a cloud witness or a file share witness (see here for additional information on configuring the quorum). As long as the cluster holds 2 of the 3 votes, it maintains quorum and stays online. See here for an overview of cluster quorum.
- Name - cluster1_disk_witness
- Size - 2 GB
- Dynamically expanding
- Target Name and Access - STORSRV1
- Access Servers - Add CLUSTER1SRV1 and CLUSTER1SRV2 (you'll need to add these one at a time)
- Enable Authentication - We're going to enable CHAP. Why?
Security becomes a critical factor when setting up iSCSI connectivity between your SQL Server cluster nodes and storage. One way to secure the iSCSI connections is the Challenge-Handshake Authentication Protocol (CHAP), which ensures the connection between the initiators (the SQL Server nodes) and the target (storage) is authenticated. Without CHAP, the iSCSI connections are more vulnerable to unauthorized access, potentially exposing your storage and SQL Server data. Enter a username and password (note that the Microsoft iSCSI implementation requires the CHAP secret to be 12 to 16 characters). Remember these, as you'll need them in Step 3.
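For reference, here's roughly what the wizard does, expressed with the iSCSI Target cmdlets. The disk path and initiator IQNs below are placeholders (the IQNs follow the default Microsoft format); substitute your nodes' actual IQNs, which you can find on each node's iSCSI Initiator Configuration tab.

```powershell
# Run on STORSRV1. A sketch of the witness disk and target setup;
# the path and IQNs are placeholders, names match the wizard values above.
# New-IscsiVirtualDisk creates a dynamically expanding VHDX by default.
New-IscsiVirtualDisk -Path 'E:\iSCSIVirtualDisks\cluster1_disk_witness.vhdx' -SizeBytes 2GB

# Create the target and allow both cluster nodes as initiators.
New-IscsiServerTarget -TargetName 'STORSRV1' -InitiatorIds `
    'IQN:iqn.1991-05.com.microsoft:cluster1srv1.yourdomain.local',
    'IQN:iqn.1991-05.com.microsoft:cluster1srv2.yourdomain.local'

# Map the disk to the target and enable one-way CHAP.
Add-IscsiVirtualDiskTargetMapping -TargetName 'STORSRV1' -Path 'E:\iSCSIVirtualDisks\cluster1_disk_witness.vhdx'
$chap = Get-Credential -Message 'CHAP username and 12-16 character secret'
Set-IscsiServerTarget -TargetName 'STORSRV1' -EnableChap $true -Chap $chap
```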
Repeat the process for the following disks.
Tempdb
- Name - cluster1-tempdb
- Size - 10 GB
- Dynamically expanding
- Target Name and Access - Existing target - STORSRV1
Data
- Name - cluster1-data
- Size - 60 GB
- Dynamically expanding
- Target Name and Access - Existing target - STORSRV1
Log
- Name - cluster1-log
- Size - 20 GB
- Dynamically expanding
- Target Name and Access - Existing target - STORSRV1
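And the scripted equivalent for the three remaining disks, assuming the same folder used for the witness disk above (a placeholder path; adjust to wherever your wizard put the first disk):

```powershell
# Run on STORSRV1. Creates the remaining disks (dynamically expanding
# by default) and maps each one to the existing STORSRV1 target.
$disks = @{ 'cluster1-tempdb' = 10GB; 'cluster1-data' = 60GB; 'cluster1-log' = 20GB }
foreach ($name in $disks.Keys) {
    $path = "E:\iSCSIVirtualDisks\$name.vhdx"
    New-IscsiVirtualDisk -Path $path -SizeBytes $disks[$name]
    Add-IscsiVirtualDiskTargetMapping -TargetName 'STORSRV1' -Path $path
}
```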
When you're finished, your setup should match the screenshot below.
Go ahead and take a look at E: using File Explorer. You should see your four virtual disks.
Step 3: Connect the initiators
Next, connect the initiators to the target and disks. Since you have multiple networks, make sure you're using the 10.0.1.0/24 network.
Log on to CLUSTER1SRV1 and CLUSTER1SRV2 and open Server Manager. Then, in the Tools menu, select iSCSI Initiator. If you're prompted to start the service, click OK, then select iSCSI Initiator again from the Tools menu.
Enter "STORSRV1" in the box and click Quick Connect. The connection will fail, but that's okay. You enabled CHAP and will need to configure the connection to use the username and password set on the target.
Select the target in the box shown below, and click Connect.
Click Advanced.
You'll need to make two changes here.
- Change the Target portal IP to 10.0.1.1 / 3260 (3260 is the default iSCSI port).
- Check the box next to "Enable CHAP Logon." Delete the current value in the Name box and enter the username and password you set on the target.
Here's how my settings look. Don't forget to repeat these steps on both servers.
Click OK. Verify the status shows as Connected.
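If you'd rather script the initiator side, here's a sketch using the built-in iSCSI initiator cmdlets; the CHAP values are placeholders for whatever you set on the target. Run it on both nodes.

```powershell
# Run on CLUSTER1SRV1 and CLUSTER1SRV2. CHAP values are placeholders.
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Registering only the 10.0.1.1 portal keeps traffic on the iSCSI network.
New-IscsiTargetPortal -TargetPortalAddress '10.0.1.1'
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
    -AuthenticationType ONEWAYCHAP `
    -ChapUsername 'yourChapUser' -ChapSecret 'yourChapSecret16'
```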
Step 4: Initialize, partition, and format each of the 4 disks. Assign drive letters as well.
This is the last step for this challenge. You only need to do it on one node, CLUSTER1SRV1 or CLUSTER1SRV2, since the disks are shared.
To perform these steps, you can use Server Manager - File and Storage Services - Volumes - Disks.
Configure the 2 GB disk as:
- Drive Letter - Q:
- File System - ReFS
- Allocation Unit Size - Default
- Volume Label - quorum
Configure the 10 GB disk as:
- Drive Letter - T:
- File System - ReFS
- Allocation Unit Size - 64K
- Volume Label - tempdb
Configure the 60 GB disk as:
- Drive Letter - E:
- File System - ReFS
- Allocation Unit Size - 64K
- Volume Label - data
Configure the 20 GB disk as:
- Drive Letter - F:
- File System - ReFS
- Allocation Unit Size - 64K
- Volume Label - log
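Here's a PowerShell sketch of the same work. It matches each disk by size, which assumes the four iSCSI LUNs are the only uninitialized disks on the node; check Get-Disk before running it.

```powershell
# Run on ONE node only (the disks are shared). Matches disks by size;
# verify with Get-Disk that the iSCSI LUNs are the only RAW disks.
$layout = @(
    @{ Size = 2GB;  Letter = 'Q'; Label = 'quorum'; Aus = $null },   # default AUS
    @{ Size = 10GB; Letter = 'T'; Label = 'tempdb'; Aus = 65536 },
    @{ Size = 60GB; Letter = 'E'; Label = 'data';   Aus = 65536 },
    @{ Size = 20GB; Letter = 'F'; Label = 'log';    Aus = 65536 }
)
foreach ($d in $layout) {
    $disk = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' -and $_.Size -eq $d.Size } |
        Select-Object -First 1
    Set-Disk -Number $disk.Number -IsOffline $false
    Initialize-Disk -Number $disk.Number -PartitionStyle GPT
    $fmt = @{ FileSystem = 'ReFS'; NewFileSystemLabel = $d.Label }
    if ($d.Aus) { $fmt.AllocationUnitSize = $d.Aus }
    New-Partition -DiskNumber $disk.Number -DriveLetter $d.Letter -UseMaximumSize |
        Format-Volume @fmt
}
```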
When finished, open "This PC" and verify all disks are shown.
FYI, you won't see these on the other node just yet. But you will next week when we complete the WSFC build.
Helping Others and Sharing Your Results
That's it for this week.
If you have tips other readers can learn from, please share them in the comments. You can message me on LinkedIn or post about it and tag me with the #dbachallenges hashtag.
Feedback
If there's a DBA Challenge you'd like to see, let me know by replying to this email.
P.S. If you're a DBA managing tens or hundreds of instances and feeling overwhelmed, I'd love your input on a few quick questions:
1. What’s your biggest frustration with managing SQL Server?
2. What concerns you most about automation?
3. Where do you need the most support in automating SQL Server management?
4. What would the ideal solution be if you could solve these issues?
I'm currently building Ansible for SQL Server DBAs, a step-by-step program that helps overworked DBAs automate and manage large-scale environments without working extra hours.
Ansible for SQL Server DBAs: Level 1
Feel free to reply to this email and share your thoughts anytime—I’m all ears!
Good luck and I'm looking forward to seeing your results!
Luke