NVIDIA's NCCL 2.24 Enhances Networking Reliability and Observability

Joerg Hiller   Mar 14, 2025 02:22


The NVIDIA Collective Communications Library (NCCL) has released its latest version, 2.24, bringing significant advancements in networking reliability and observability for multi-GPU, multinode (MGMN) communication. As reported by the NVIDIA Developer Blog, the library is optimized specifically for NVIDIA GPUs and networking, making this release an essential component for multi-GPU deep learning training.

NCCL 2.24 New Features

The update includes several new features aimed at enhancing performance and reliability:

  • Reliability, Availability, and Serviceability (RAS) subsystem
  • User Buffer (UB) registration for multinode collectives
  • NIC Fusion
  • Optional receive completions
  • FP8 support
  • Strict enforcement of NCCL_ALGO and NCCL_PROTO

The RAS Subsystem

The RAS subsystem is one of the standout additions in NCCL 2.24. It is designed to assist users in diagnosing application issues like crashes and hangs, particularly in large-scale deployments. This low-overhead infrastructure offers a global view of running applications, enabling the detection of anomalies such as unresponsive nodes or lagging processes. It operates by creating a network of threads across NCCL processes that monitor each other's health through regular keep-alive messages.
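As a minimal sketch of how RAS might be configured in a job script: the `NCCL_RAS_ENABLE` and `NCCL_RAS_ADDR` variables below are assumptions based on the 2.24 release, not details stated in this article.

```shell
# Hedged sketch: both variables are assumptions from the 2.24 release,
# not named in the article above.
# RAS is assumed to run by default; setting NCCL_RAS_ENABLE=0 would turn
# the subsystem off entirely.
export NCCL_RAS_ENABLE=1
# Address on which the RAS threads are assumed to listen for status
# queries from a client tool.
export NCCL_RAS_ADDR="localhost:28028"
echo "RAS enabled, listening at ${NCCL_RAS_ADDR}"
```

Because the monitoring threads exchange keep-alive messages out of band, a query against this address can report unresponsive nodes even when the application itself is hung.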

Enhancements in User Buffer Registration

NCCL 2.24 introduces user buffer (UB) registration for multinode collectives, enabling more efficient data transfers and reduced GPU resource consumption. The library now supports UB registration for collective networking with multiple ranks per node, as well as for standard peer-to-peer networks, offering significant performance gains, particularly for operations like AllGather and Broadcast.
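As a minimal sketch, buffers can be registered through NCCL's existing `ncclCommRegister`/`ncclCommDeregister` API so that registered-buffer paths, including the multinode UB support added in 2.24, can apply to collectives on those buffers. Communicator and stream setup are assumed to happen elsewhere, and error handling is abbreviated; this is an illustration, not the article's own code.

```c
#include <nccl.h>
#include <cuda_runtime.h>

// Hedged sketch: register GPU buffers with a communicator, run an
// AllGather (one of the operations the article says benefits from UB
// registration), then deregister. Registration is per-communicator.
void allgather_with_registered_buffers(ncclComm_t comm, cudaStream_t stream,
                                       float *sendbuf, float *recvbuf,
                                       size_t count, int nranks) {
  void *send_handle, *recv_handle;

  // Register both buffers once, up front, so NCCL can use them directly
  // instead of staging through internal buffers.
  ncclCommRegister(comm, sendbuf, count * sizeof(float), &send_handle);
  ncclCommRegister(comm, recvbuf, (size_t)nranks * count * sizeof(float),
                   &recv_handle);

  ncclAllGather(sendbuf, recvbuf, count, ncclFloat, comm, stream);
  cudaStreamSynchronize(stream);

  // Deregister once the buffers are no longer used with this communicator.
  ncclCommDeregister(comm, send_handle);
  ncclCommDeregister(comm, recv_handle);
}
```

Registering once and reusing the buffers across iterations is the intended pattern; registration has a one-time cost that amortizes over repeated collectives.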

NIC Fusion

With the expansion of many-NIC systems, NCCL has adapted to optimize network communication. The new NIC Fusion feature allows the logical merging of multiple NICs into a single entity, ensuring efficient use of network resources. This capability is particularly beneficial for systems with more than one NIC per GPU, addressing issues such as crashes and inefficient resource allocation.
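As an illustrative sketch of how fusion might be steered: the `NCCL_NET_MERGE_LEVEL` and `NCCL_NET_FORCE_MERGE` variables below are assumptions based on the 2.24 release, not names given in this article.

```shell
# Hedged sketch: both variables are assumptions from the 2.24 release,
# not named in the article above.
# Assumed knob: fuse NICs that sit within the given topology distance of
# one another (e.g. PORT = ports of the same physical NIC).
export NCCL_NET_MERGE_LEVEL=PORT
# Assumed override: force a named set of NICs to be fused into a single
# logical device regardless of topology (device names are examples).
export NCCL_NET_FORCE_MERGE="mlx5_0,mlx5_1"
echo "NIC fusion: level=${NCCL_NET_MERGE_LEVEL} force=${NCCL_NET_FORCE_MERGE}"
```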

Additional Features and Fixes

The update also introduces optional receive completions for LL and LL128 protocols, allowing for reduced overhead and congestion. NCCL 2.24 supports native FP8 reductions on NVIDIA Hopper and newer architectures, enhancing processing capabilities. Additionally, stricter enforcement of NCCL_ALGO and NCCL_PROTO is implemented, ensuring more precise tuning and error handling for users.
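With strict enforcement, pinning the algorithm and protocol now fails fast on an invalid value instead of being silently ignored. A minimal sketch:

```shell
# NCCL_ALGO and NCCL_PROTO are long-standing NCCL tuning variables.
# Under 2.24's stricter enforcement, a misspelled value (e.g.
# NCCL_ALGO=Rng) is reported as an error at initialization rather than
# silently falling back to NCCL's own choice.
export NCCL_ALGO=Ring
export NCCL_PROTO=Simple
echo "forcing NCCL_ALGO=${NCCL_ALGO}, NCCL_PROTO=${NCCL_PROTO}"
```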

This update also includes various bug fixes and minor improvements, such as adjustments to PAT tuning and refinements to the memory-allocation functions, improving the overall robustness and efficiency of the NCCL library.
