Assign storage

After system setup, you must configure storage by creating pools and assigning storage to them. Ensure that at least one pool exists before you assign storage. In the management GUI, select Pools > Actions > Add Storage.

A non-distributed array can contain 2 - 16 drives; several arrays together provide the capacity for a pool. For redundancy, spare drives ("hot spares") are allocated to assume read/write operations if any of the other drives fail. The rest of the time, the spare drives are idle and do not process requests for the system. When a member drive fails in the array, the data can be recovered onto the spare only as fast as that single drive can write it. Because of this bottleneck, rebuilding the data can take many hours while the system balances host and rebuild workload. As a result, the load on the remaining member drives increases significantly, and the latency of I/O to the rebuilding array is affected for this entire time. Because volume data is striped across MDisks, all volumes in the pool are affected while the drive rebuilds.
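
To make the bottleneck concrete, the following back-of-the-envelope sketch (in Python, using assumed drive capacity, write bandwidth, and host/rebuild balancing figures rather than measured system values) estimates how long a rebuild takes when a single spare drive must absorb all of the rebuild writes.

```python
# Back-of-the-envelope model of a traditional (non-distributed) RAID rebuild.
# All figures are illustrative assumptions, not measured system values.

def traditional_rebuild_hours(drive_capacity_gb: float,
                              spare_write_mbps: float,
                              rebuild_share: float = 0.3) -> float:
    """Estimate rebuild time when recovery is limited by a single spare drive.

    rebuild_share is the assumed fraction of the spare drive's write bandwidth
    that is left for the rebuild after the system balances host I/O against it.
    """
    effective_mbps = spare_write_mbps * rebuild_share
    seconds = (drive_capacity_gb * 1024) / effective_mbps
    return seconds / 3600


# Example: an assumed 8 TB drive and 180 MB/s of sustained spare write bandwidth.
print(f"{traditional_rebuild_hours(8192, 180):.1f} hours")  # roughly 43 hours
```

With these assumed numbers the estimate lands in the tens of hours, which is consistent with the "many hours" described above.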

Distributed RAID arrays solve these problems because rebuild areas are distributed across all the drives in the array. A distributed array can contain 4 - 128 drives. Distributed arrays remove the need for separate drives that sit idle until a failure occurs: instead of allocating one or more drives as spares, the spare capacity is distributed over rebuild areas that are spread across all the member drives. When a drive fails, the rebuild write workload is shared by all the remaining drives rather than a single spare drive, so data can be copied to the rebuild areas faster and redundancy is restored much more rapidly. As the rebuild progresses, the performance of the pool is also more uniform because all of the available drives are used for every volume extent. After the failed drive is replaced, data is copied back to it from the distributed spare capacity. Unlike hot-spare drives, the parts of each member drive that are not reserved as rebuild areas continue to process read/write requests.

The number of rebuild areas is based on the width of the array and determines how many drive failures the distributed array can recover from without becoming degraded. For example, a distributed array that uses RAID 6 can tolerate two concurrent drive failures. After the failed drives are rebuilt into the rebuild areas, the array can tolerate another two drive failures. If all of the rebuild areas are used to recover data, the array becomes degraded on the next drive failure.
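
Continuing the same rough model, the sketch below spreads the rebuild write workload across all surviving member drives, which is why a distributed array restores redundancy so much faster. The figures are again illustrative assumptions, not system measurements.

```python
# Same assumed figures, but the rebuild writes are spread across every
# surviving member drive instead of a single dedicated spare.

def distributed_rebuild_hours(drive_capacity_gb: float,
                              per_drive_write_mbps: float,
                              member_drives: int,
                              rebuild_share: float = 0.3) -> float:
    """Estimate rebuild time when all surviving members absorb rebuild writes."""
    effective_mbps = per_drive_write_mbps * rebuild_share * (member_drives - 1)
    return (drive_capacity_gb * 1024) / effective_mbps / 3600


# Example: the same assumed 8 TB drive, now in a 48-drive distributed array.
print(f"{distributed_rebuild_hours(8192, 180, 48):.1f} hours")  # roughly 0.9 hours
```

With the same assumed drive, spreading the writes across 47 surviving members brings the estimate from tens of hours down to under an hour, which is the effect the distributed rebuild areas are designed to produce.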

The management GUI automatically defaults to the recommended number of rebuild areas, based on the current width and other settings for the distributed array. The width of the array includes both the physical drives and the rebuild areas that are needed to ensure redundancy if drives fail. The system can recommend a maximum of 4 rebuild areas for a distributed array. Other settings, such as drive class, are also used to determine the best configuration for arrays. For example, if you are adding storage to an existing pool, the management GUI disables any drive classes that are incompatible with the drives already in the pool, and then analyzes the current array configuration in the pool to recommend stripe widths and drive counts.
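
As a rough illustration only, the sketch below shows how a rebuild-area recommendation might scale with array width and cap at 4. The thresholds are hypothetical and do not reproduce the management GUI's actual policy, which also weighs drive class and the arrays already in the pool.

```python
# Hypothetical illustration of scaling rebuild areas with array width.
# The thresholds are invented for this example and do not reproduce the
# management GUI's actual recommendation policy.

def recommended_rebuild_areas(drive_count: int) -> int:
    if not 4 <= drive_count <= 128:
        raise ValueError("distributed arrays contain 4 - 128 drives")
    # Illustrative rule: roughly one rebuild area per 32 drives, capped at 4.
    return min(4, max(1, (drive_count + 31) // 32))


for width in (6, 40, 80, 128):
    print(width, "drives ->", recommended_rebuild_areas(width), "rebuild area(s)")
```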
