NVIDIA Introduces Skip Softmax for Enhanced LLM Inference Efficiency

Timothy Morano   Dec 16, 2025 21:26


NVIDIA has unveiled a new technique called Skip Softmax, integrated into its TensorRT-LLM library, that promises to accelerate long-context inference. The development responds to the increasingly demanding computational requirements of deploying large language models (LLMs) at scale, according to NVIDIA.

Understanding Skip Softmax

Skip Softmax is a hardware-friendly, drop-in sparse attention method designed to enhance inference speed without necessitating retraining of models. It achieves up to 1.4x faster time-to-first-token (TTFT) and time-per-output-token (TPOT), making it a significant innovation for machine learning engineers working with long-form content generation and other complex AI workflows.

The core principle of Skip Softmax is to dynamically prune attention blocks by exploiting the mathematical properties of the Softmax function: because Softmax exponentiates scores relative to the running maximum, blocks whose scores fall far below that maximum receive near-zero weight. Such blocks can be detected early and skipped, since their contribution to the final output is negligible, reducing computational overhead.
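To make the idea concrete, here is a toy, single-query NumPy sketch of block pruning based on the Softmax max-trick. It illustrates the general principle only, not the TensorRT-LLM kernel, and the block size and threshold are arbitrary illustrative values.

```python
import numpy as np

def block_sparse_attention(q, K, V, block_size=64, threshold=1e-3):
    """Single-query attention that skips key/value blocks whose softmax
    weight is provably tiny relative to the global maximum score."""
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)                      # (seq_len,)
    n_blocks = -(-len(scores) // block_size)         # ceil division
    block_max = np.array([scores[i * block_size:(i + 1) * block_size].max()
                          for i in range(n_blocks)])
    global_max = block_max.max()

    # Softmax weights are exp(score - global_max); a block whose best score
    # sits far below the global max contributes almost nothing, so skip it.
    keep = np.exp(block_max - global_max) >= threshold

    out = np.zeros(V.shape[-1])
    denom = 0.0
    for b in np.flatnonzero(keep):
        s = scores[b * block_size:(b + 1) * block_size]
        w = np.exp(s - global_max)                   # numerically stable
        out += w @ V[b * block_size:(b + 1) * block_size]
        denom += w.sum()
    return out / denom

# Example: 16K keys, head dimension 128
rng = np.random.default_rng(0)
q = rng.standard_normal(128)
K = rng.standard_normal((16_384, 128))
V = rng.standard_normal((16_384, 64))
approx = block_sparse_attention(q, K, V)
full = np.exp(K @ q / np.sqrt(128) - (K @ q / np.sqrt(128)).max())
exact = (full / full.sum()) @ V
print(np.max(np.abs(approx - exact)))                # small approximation error
```

The skipped blocks are exactly those whose softmax weights would all be below the threshold, which is why the pruned result stays close to the dense one.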

Benefits and Implementation

Skip Softmax is designed for compatibility with existing pretrained models using standard attention mechanisms. It's optimized for NVIDIA's Hopper and Blackwell GPU architectures, providing a seamless integration that enhances speed and efficiency. Notably, it can be combined with other optimization methods, such as using XAttention during prefill and Skip Softmax during decoding, to achieve substantial speed improvements.
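As a purely conceptual illustration of that combination (the keys and values below are assumptions, not TensorRT-LLM configuration options), a deployment might select a different sparse-attention algorithm per inference phase:

```python
# Conceptual only: these keys/values are illustrative assumptions, not
# confirmed TensorRT-LLM options.
sparse_attention_by_phase = {
    "prefill": "xattention",    # block-sparse attention while ingesting the long prompt
    "decode": "skip_softmax",   # Skip Softmax pruning while generating output tokens
}
```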

Performance tests show that Skip Softmax significantly reduces memory-bandwidth and compute demands during both the prefill and decoding phases. On the Llama 3.3 70B model, for instance, NVIDIA reports a projected 1.36x speedup during decoding and a 1.4x speedup during prefill at a 128K context length.

Accuracy and Sparsity Trade-offs

Skip Softmax's efficiency gains come with a 'safe zone' of sparsity within which accuracy is preserved. Benchmark tests indicate that sparsity ratios up to about 50% remain near-lossless, while pushing beyond 60% can cause noticeable accuracy drops. Within that range, the method stays on par with dense attention, even for tasks that require long output generation.

Getting Started with Skip Softmax

Skip Softmax is integrated into NVIDIA TensorRT-LLM and is accessible through the LLM API, where users can configure the sparse attention settings to match their workload. The feature is supported on NVIDIA's Hopper and Blackwell data center GPUs, enabling further acceleration of attention computation.
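The snippet below sketches what enabling the feature through the LLM API might look like. The `sparse_attention_config` argument and its contents are assumptions made for illustration; consult the TensorRT-LLM documentation and the linked blog post for the actual option names.

```python
# Hypothetical sketch: the sparse_attention_config argument and its keys are
# assumptions, not confirmed TensorRT-LLM API; check the official docs for
# the exact interface.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",
    # Assumed option enabling Skip Softmax, keeping sparsity in the
    # near-lossless "safe zone" described above.
    sparse_attention_config={"algorithm": "skip_softmax"},
)

prompts = ["Summarize the key findings of the attached 100K-token report: ..."]
outputs = llm.generate(prompts, SamplingParams(max_tokens=512))
for output in outputs:
    print(output.outputs[0].text)
```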

For more technical details and to start using Skip Softmax, developers can refer to the [official NVIDIA source](https://developer.nvidia.com/blog/accelerating-long-context-inference-with-skip-softmax-in-nvidia-tensorrt-llm/).

