Clustering and Sharing SCALE Volumes with TrueCommand
Last Modified 2023-11-30 10:15 EST

TrueCommand-managed clusters are an experimental feature and must not be used for production or critical data management. They are intended for early testing and research purposes only.
TrueNAS SCALE SMB clustering combines the benefits of the self-healing OpenZFS file system with the open-source Gluster scalable network file system.
TrueNAS SCALE SMB clustering requires a minimum of three TrueNAS SCALE nodes (systems), but you can scale it to a substantially higher number of physical nodes. A properly configured Active Directory environment is also required for SMB or Gluster clustering. Gluster data consists of volumes, which can have multiple SMB shares. Volumes are stored across bricks, the basic unit of storage in the Gluster file system, on the individual servers.
- Three to twenty TrueNAS SCALE systems, all running the same release (22.12.0 or later)
- A TrueCommand instance (cloud or on-premises) running 2.3.0 or later
- An Active Directory (AD) environment with domain service roles, DNS roles, and reverse lookup zones configured.
Each TrueNAS SCALE system must have two network interfaces:
- One network interface for SMB, AD, and TrueCommand traffic (static IP/DHCP reservation recommended)
- One network interface for the node-to-node cluster traffic using static IP addresses (private network recommended)
Each TrueNAS SCALE system must also have:
- A third IP address, outside of the DHCP range, for the cluster VIP that users access clustered shares through
- One or more preconfigured storage pools with appropriate performance and parity
SMB clusters created in TrueNAS SCALE Bluefin are not available for cluster expansion. TrueNAS SCALE Cobia plans to implement a method to enable new volumes for SMB cluster expansion.
Clustering is considered experimental and should not be used in a production environment or for handling critical data!
Clustering is a back-end feature in TrueNAS SCALE. You should only configure clustering using the TrueCommand web interface. Configuring or managing clusters within the TrueNAS SCALE UI or Shell can result in cluster failures and permanent data loss.
Using the clustering feature on a SCALE system adds some restrictions to that system:
- Activating clustering disables individual SMB share creation on cluster member TrueNAS systems.
- Systems can belong to only one cluster at a time.
- Removing or migrating systems from a cluster requires deleting the entire cluster.
Cluster nodes (systems) must be on the same release of SCALE.
Supported cluster types are replicated, distributed, distributed replicated, and dispersed. Distributed dispersed clustering is not currently supported.
Configuring the cluster feature is a multi-step process that spans multiple systems.
When the SCALE, AD, and TrueCommand environments are ready, log into TrueCommand to configure the cluster of SCALE systems.
Click the Clusters icon in the upper left. Click CREATE CLUSTER to see the cluster creation options.
Enter a unique name for the cluster, and then select the systems to include from the dropdown list. A list of SCALE systems displays.
Open the Network Address dropdown for each system and choose the static IP address from the previously configured subnet dedicated to cluster traffic.
Click NEXT, verify the settings, then click CREATE.
TrueCommand might take a while to create the cluster.
After creating the cluster, TrueCommand opens another sidebar to configure it for AD connectivity and SMB sharing.
For each system:
Choose the IP address on the primary subnet (typically the IP address you use to connect the SCALE system to TrueCommand).
Click NEXT.
For each system:
Select the interfaces to associate with the VIPs. You should select the interface configured for the SCALE system IP address.
Click Next.
Enter the Active Directory user for the cluster:
Enter the Microsoft Active Directory credentials.
Click NEXT.
SMB service does not start if the cluster systems (nodes) are incorrectly configured!
Verify the connection details are correct.
Click CONFIRM to configure the cluster, or click BACK to adjust the settings.
Creating a cluster has no visible effect on each SCALE web interface. To verify the cluster is created and active, open the SCALE Shell and enter `gluster peer status`. The command returns the list of SCALE IP addresses and current connection status.
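For reference, a healthy three-node cluster produces output roughly like the following. The peer addresses and UUIDs shown here are hypothetical placeholders; each node lists the other nodes as its peers.

```shell
# Run from the SCALE Shell of any cluster node (verification only --
# do not use the Shell to configure or manage the cluster).
gluster peer status

# Example output on a healthy 3-node cluster (placeholder values):
# Number of Peers: 2
#
# Hostname: 10.10.10.2
# Uuid: 9b1b4c3a-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# State: Peer in Cluster (Connected)
#
# Hostname: 10.10.10.3
# Uuid: 5d2e7f1c-xxxx-xxxx-xxxx-xxxxxxxxxxxx
# State: Peer in Cluster (Connected)
```

Any peer reporting a state other than `Peer in Cluster (Connected)` indicates a node-to-node connectivity problem on the dedicated cluster subnet.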
In the TrueCommand Clusters screen, find the cluster to use and click CREATE VOLUME.
Enter a unique name for the volume and select a Type.
After selecting an option in Type, enter a value in Brick Size based on the available storage from the clustered pools and your storage requirements.
Review the pools for each SCALE system in the cluster. If any system does not show the desired pool for this cluster volume, select it from the Pools dropdown.
Click NEXT.
Review the settings for the new volume and click CREATE.
TrueCommand adds new cluster volumes to the individual cluster cards on the Clusters screen.
The web interface for the individual SCALE systems does not show any datasets created for cluster volumes. To verify the volume was created, go to the Shell and enter `gluster volume info all`.
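The output lists each cluster volume with its type, status, and the bricks backing it. The volume, pool, and address values below are hypothetical placeholders:

```shell
# Run from the SCALE Shell of any cluster node (verification only).
gluster volume info all

# Example output for one replicated volume (placeholder values):
# Volume Name: clustervol1
# Type: Replicate
# Status: Started
# Number of Bricks: 1 x 3 = 3
# Transport-type: tcp
# Bricks:
# Brick1: 10.10.10.1:/mnt/tank/.glusterfs/clustervol1/brick0
# Brick2: 10.10.10.2:/mnt/tank/.glusterfs/clustervol1/brick0
# Brick3: 10.10.10.3:/mnt/tank/.glusterfs/clustervol1/brick0
```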
To share a cluster volume, go to the TrueCommand Clusters screen, find the cluster card, and click the desired cluster volume. Click CREATE SHARE.
Enter a unique name for the share.
Select the ACL type to apply to the share from the ACL dropdown list.
(Optional) Select Readonly to prevent users from changing the cluster volume contents.
Click CONFIRM to create the SMB share and make it immediately active.
The SMB share is added to the Shares > SMB section on each system in the cluster. Managing the share from the SCALE UI is not recommended.
There are several ways to access an SMB share, but this article demonstrates using Windows 10 File Explorer. From a Windows 10 system connected to the same network as the clustering environment, open File Explorer.
Clear the contents of the navigation bar and enter `\\` followed by the IP address or host name of one of the clustered SCALE systems, then press Enter. Enter the user name and password for an Active Directory user account when prompted. Enter the Active Directory system name followed by the user account name (for example: AD01\sampuser).
Browse to the cluster volume folder to view or modify files.
A node is a single TrueNAS storage system in a cluster.
Cluster node replacement only works if you are using TrueCommand 2.3 or later and SCALE 22.12.0 or later.
New replacement nodes must have the same hardware as the old node you are replacing. You must also have a safe, accessible configuration backup from the old node.
The method you use to replace a cluster node differs depending on whether or not the node has access to the data on the brick.
If replacing a node that still has access to the data on the brick, you must first install the same SCALE version on the replacement system (node).
After installing SCALE on the new system, log into the SCALE web UI and go to System Settings > General. Click Manage Configuration, then select Upload Config. Select the configuration file from the node you are replacing and click Upload.
After applying the configuration, the system reboots and uses the same configuration as the node you are replacing. The new system automatically joins the cluster and heals damaged data before returning to a healthy state.
If the node you are replacing does not have access to the data on the brick, you must first install the same SCALE version on the replacement system (node).
After installing SCALE on the new system, access the SCALE web UI and go to Storage. Create a pool with the same name as the pool on the node you are replacing.
Go to System Settings > Shell and enter `midclt call gluster.peer.initiate_as_replacement poolname clustervolumename`
Where:
- poolname is the name of the pool you created.
- clustervolumename is the name of the cluster volume you are currently using.
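For example, on a replacement node whose recreated pool is named `tank` and whose cluster volume is named `clustervol1` (both names are placeholders; substitute the values from your own environment):

```shell
# Run from System Settings > Shell on the replacement node.
# 'tank' and 'clustervol1' are placeholder names -- use the pool name
# you recreated and the cluster volume name from your environment.
midclt call gluster.peer.initiate_as_replacement tank clustervol1
```

Wait for the command to return successfully before uploading the configuration backup in the next step.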
After the command succeeds, go to System Settings > General. Click Manage Configuration, then select Upload Config. Select the configuration file from the node you are replacing and click Upload.
After applying the configuration, the system reboots and uses the same configuration as the node you are replacing. The new system automatically joins the cluster and heals damaged data before returning to a healthy state.