
TensorRT-LLM

Category: Frameworks
Overall score: 87

NVIDIA's inference optimization toolkit for LLMs, tuned for maximum performance.

Metrics

GitHub stars: 9,500
Radar status: Trial
Framework type: Inference
Weekly downloads: 180,000

Score Breakdown

Adoption: 80
Ecosystem: 86
Performance: 95

Scoring Methodology

Performance (35% weight)

Execution speed and resource efficiency

Source: Benchmark comparisons, throughput measurements

Adoption (35% weight)

Community size and industry usage

Source: GitHub stars, PyPI downloads, job postings

Ecosystem (30% weight)

Integrations, plugins, and extension availability

Source: Integration count, ThoughtWorks Radar status
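The overall score appears to be a weighted average of the three criteria. A minimal sketch, assuming the weights listed above are applied directly to the per-criterion scores and the result is rounded to the nearest integer (the rounding step is an assumption, not documented here):

```python
# Composite score as a weighted average of the three criteria.
# Weights and per-criterion scores are taken from the breakdown above;
# rounding to the nearest integer is an assumed final step.
weights = {"performance": 0.35, "adoption": 0.35, "ecosystem": 0.30}
scores = {"performance": 95, "adoption": 80, "ecosystem": 86}

composite = sum(weights[k] * scores[k] for k in weights)
print(round(composite))  # 87, matching the overall score shown above
```

Under these assumptions the weighted sum is 0.35·95 + 0.35·80 + 0.30·86 = 87.05, which rounds to the displayed score of 87.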

Data Sources

Last updated: December 24, 2025