NVIDIA TensorRT

NVIDIA TensorRT is a high-performance deep learning inference optimizer and runtime. It compresses and accelerates trained models through techniques such as reduced-precision quantization (FP16/INT8), structured-sparsity pruning, and layer fusion, for deploying AI models on NVIDIA GPUs.
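To make the quantization technique concrete: TensorRT's INT8 mode maps floating-point tensors to 8-bit integers with a per-tensor scale derived during calibration. The sketch below illustrates that core idea in plain Python; it does not use the TensorRT API, and the simple max-magnitude calibration shown is an illustrative assumption (TensorRT also supports entropy-based calibrators).

```python
# Standalone sketch of symmetric INT8 quantization, the idea behind
# TensorRT's INT8 mode. Function names here are illustrative, not the
# TensorRT API.

def quantize_int8(values):
    """Map floats to INT8 codes using a symmetric scale from the max magnitude."""
    amax = max(abs(v) for v in values) or 1.0  # calibration: observed dynamic range
    scale = amax / 127.0                       # one scale for the whole tensor
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from INT8 codes."""
    return [qi * scale for qi in q]

weights = [0.5, -1.27, 0.02, 1.27]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

Because the scale is shared across the tensor, storage drops from 32 bits to 8 bits per value at the cost of a small rounding error, which is why calibration data that captures the real dynamic range matters.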



Similar Tools in Model Compression

Qualcomm Neural Processing SDK
Qualcomm's Neural Processing SDK provides tools for model compression through quantization and pruning, optimized for...
TensorFlow Model Optimization Toolkit
TensorFlow's Model Optimization Toolkit offers APIs for pruning, quantization, and clustering to reduce model size an...
ONNX Runtime
ONNX Runtime optimizes ONNX models with quantization, pruning support, and hardware acceleration for cross-platform d...
PyTorch Quantization Tools
PyTorch's built-in quantization module enables post-training and quantization-aware training for INT8 and FP16 to com...