NVIDIA Jetson AGX Thor: powerful supercomputer for humanoid robotics and AI
NVIDIA Jetson AGX Thor delivers 2070 TFLOPS of AI performance with 128GB memory in a 130W package, enabling humanoid robots and physical AI systems to run large language models and vision transformers at the edge.
yippy overview
- Jetson AGX Thor features 2070 FP4 TFLOPS of AI compute powered by NVIDIA Blackwell GPU architecture, delivering 7.5x higher performance and 3.5x better energy efficiency compared to Jetson AGX Orin
- Developer kit priced at $3,499 includes T5000 module with 128GB LPDDR5X memory, reference carrier board, and connectivity for high-speed sensor fusion through 4x 25GbE networking
- Platform targets humanoid robotics, autonomous vehicles, and industrial AI applications, with early adopters including Amazon Robotics, Boston Dynamics, Figure, Agility Robotics, and Meta
The emergence of humanoid robotics and generalist AI systems demands computing platforms capable of running foundation models locally.
Traditional embedded systems lack the memory capacity and computational throughput required for real-time multimodal processing. NVIDIA Jetson AGX Thor addresses this challenge by bringing server-class AI capabilities to edge robotics applications, enabling autonomous systems to reason, perceive, and act without cloud dependency.
For a comprehensive overview of the NVIDIA Jetson platform family, see our NVIDIA Jetson: powerful edge AI platform for professional robotics and embedded computing profile.
Architecture and hardware specifications
Released on August 25, 2025, Jetson AGX Thor represents NVIDIA's most powerful edge computing module. The platform integrates a Blackwell GPU with 96 Tensor Cores alongside a 14-core ARM Neoverse V3AE CPU, optimized for parallel AI inference and real-time control tasks. This combination enables simultaneous execution of multiple foundation models while maintaining sub-10 millisecond latency for robotics applications.
NVIDIA positions Jetson Thor as the ultimate platform for physical AI, providing a supercomputer for generative reasoning and multimodal, multisensor processing. Jetson Thor can be integrated into next-generation robots to accelerate foundation models, allowing flexibility for challenges like object manipulation, navigation, and instruction following.
T5000 and T4000 module variants
The T5000 module configuration provides 2070 FP4 TFLOPS with power configurable between 40W and 130W. Memory architecture implements 128GB LPDDR5X with 273GB/s bandwidth, sufficient for loading large language models with up to 120 billion parameters. Real-world benchmarks demonstrate 20-30 tokens per second inference for GPT-class models using quantized FP4 weights, consuming approximately 65GB RAM.
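The relationship between parameter count, quantization precision, and memory footprint can be sketched with a quick back-of-the-envelope calculation. The figures below follow directly from the bit width; the runtime overhead noted in the comment is an assumption, not a measured value:

```python
def model_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight storage for a quantized model, in GB (weights only)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 120B parameters at FP4 (4 bits per weight) -> 60 GB of weights.
# Runtime overhead (KV cache, activations, CUDA buffers) pushes real
# usage higher, consistent with the ~65GB figure reported above.
print(model_memory_gb(120, 4))  # → 60.0
```

The same function shows why FP4 matters: the identical model at FP16 would need roughly 240GB of weights alone, far beyond even Thor's 128GB.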
NVIDIA offers a lower-cost T4000 variant targeting applications with reduced computational requirements. The T4000 delivers 1200 TFLOPS with 64GB LPDDR5X memory, maintaining compatibility with the same software stack and carrier board design. Both modules support JetPack 7 with CUDA 13.0 and implement Server Base System Architecture for standardized deployment.
Multi-instance GPU technology
Thor introduces Blackwell Multi-Instance GPU capability, partitioning the physical GPU into up to seven hardware-isolated instances. Each instance receives dedicated computing units, memory allocation, and cache resources. This architecture enables simultaneous execution of fast-reaction control loops alongside computationally intensive reasoning tasks, critical for safe operation of autonomous systems in dynamic environments.
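MIG partitioning itself is configured at the system level with NVIDIA tooling, but the workload pattern it enables can be sketched in plain Python: a fixed-rate control loop that must never be starved, running in a separate process from a heavy, bursty reasoning task. This is an illustrative sketch of the concurrency split, not MIG configuration code:

```python
import multiprocessing as mp
import time

def control_loop(stop, period_s=0.01):
    """Fast-reaction loop (100 Hz) that must keep its deadline."""
    while not stop.is_set():
        deadline = time.monotonic() + period_s
        # read sensors, update actuators (placeholder work)
        time.sleep(max(0.0, deadline - time.monotonic()))

def reasoning_task(result):
    """Heavy, bursty work standing in for a foundation-model query."""
    result.put(sum(i * i for i in range(10**6)))

if __name__ == "__main__":
    stop, result = mp.Event(), mp.Queue()
    ctrl = mp.Process(target=control_loop, args=(stop,))
    ctrl.start()
    reasoning_task(result)   # runs concurrently with the control loop
    print(result.get())
    stop.set()
    ctrl.join()
```

On Thor, hardware isolation replaces the soft isolation shown here: each MIG instance has dedicated compute and cache, so the reasoning workload cannot steal resources from the control path.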
Connectivity and sensor integration
High-bandwidth networking infrastructure supports real-time sensor fusion requirements of modern robotics. The developer kit features a QSFP28 interface providing 4x 25GbE connectivity, addressing applications with multiple high-resolution cameras and LiDAR sensors. A standard RJ45 port delivers 5GbE for auxiliary network connectivity. This networking capacity represents a significant upgrade from previous Jetson generations designed primarily for lower-bandwidth sensor protocols.
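A rough bandwidth budget makes the sizing concrete. The 16 bits/pixel figure below (e.g. YUV 4:2:2) is an illustrative assumption; actual per-camera bandwidth depends on the sensor format:

```python
def stream_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed video bandwidth in Gb/s (payload only, no protocol overhead)."""
    return width * height * bits_per_pixel * fps / 1e9

# Assumed example: 4Kp60 cameras at 16 bits/pixel.
per_camera = stream_gbps(3840, 2160, 60, 16)
aggregate = 4 * 25  # 4x 25GbE via the QSFP28 interface, in Gb/s
print(round(per_camera, 2), int(aggregate // per_camera))  # → 7.96 12
```

Under these assumptions, the 100Gb/s aggregate link carries on the order of a dozen uncompressed 4Kp60 streams, which is why the QSFP28 interface matters for multi-camera sensor fusion.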
Holoscan sensor bridge
Thor implements NVIDIA's Holoscan Sensor Bridge for camera connectivity, diverging from traditional MIPI CSI interfaces. The sensor bridge receives uncompressed video frames via Ethernet, enabling distributed camera systems with centralized processing. This approach reduces glass-to-glass latency below 10 milliseconds while simplifying cable management in complex robotic installations.
The developer kit reference design includes two camera expansion ports with standardized 50-pin interfaces, each supporting dual 4-lane CSI connections operating at 2.5Gbps per lane. Third-party carrier boards expose MIPI CSI for applications requiring direct camera integration. Video processing capabilities include encoding up to 6x 4Kp60 streams in H.265/H.264 and decoding 4x 8Kp30 content.
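The per-port CSI capacity follows directly from the lane counts above; a minimal sketch of the arithmetic:

```python
lanes_per_connection = 4       # 4-lane CSI connection
gbps_per_lane = 2.5            # 2.5Gbps per lane
connections_per_port = 2       # dual connections per 50-pin port
ports = 2                      # two camera expansion ports

per_connection = lanes_per_connection * gbps_per_lane              # 10 Gb/s
total = per_connection * connections_per_port * ports              # 40 Gb/s
print(per_connection, total)  # → 10.0 40.0
```

So each 50-pin port offers up to 20Gb/s of raw CSI capacity, and the two ports together 40Gb/s, before any sensor-side overhead.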
Software ecosystem and AI frameworks
To deliver a seamless cloud-to-edge experience, Jetson Thor runs the NVIDIA AI software stack for physical AI applications, including NVIDIA Isaac for robotics, NVIDIA Metropolis for visual agentic AI, and NVIDIA Holoscan for sensor processing.
JetPack 7 provides the complete development environment for Thor-based systems. The software stack implements CUDA 13.0 with full support for NVIDIA's tensor operations and accelerated libraries. Adoption of Server Base System Architecture eliminates previous Tegra-specific constraints, enabling developers to deploy the same code across datacenter GPUs and edge devices without modification.
Isaac platform for robotics
NVIDIA Isaac accelerates robotics development through simulation environments, pre-trained foundation models, and deployment tools. The Isaac GR00T workflow enables generalist robots capable of adapting to multiple tasks through vision-language-action models. Thor's computational capacity supports running VLA models with billions of parameters locally, eliminating latency from cloud inference.
Developers access synthetic data generation through Omniverse Replicator for training scenarios difficult to capture in physical environments. The Isaac ROS framework provides optimized perception algorithms and motion planning capabilities deployable on Thor hardware. This software-hardware integration reduces time from concept to production deployment.
Foundation model deployment
Thor's memory capacity and computational throughput enable deployment of contemporary foundation models without extensive optimization. The platform runs popular frameworks including HuggingFace Transformers, Ollama, llama.cpp, vLLM, and TensorRT-LLM. Support for FP4 precision through Blackwell Tensor Cores reduces memory requirements while maintaining inference accuracy for language and vision tasks.
Real-world deployments demonstrate stable operation of 8-billion-parameter LLMs for natural language interaction, vision transformers for scene understanding, and vision-language models for multimodal reasoning. The multi-instance GPU capability allows concurrent execution of specialized models for perception, planning, and control within a single device.
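As a minimal sketch of what local inference looks like in practice, the snippet below queries an Ollama server over its default local HTTP API using only the standard library. The model name `llama3` is an example; any model pulled onto the device (e.g. via `ollama pull`) works the same way:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> bytes:
    """JSON body for a single non-streaming completion request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    req = request.Request(OLLAMA_URL, data=build_request(model, prompt),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(build_request("llama3", "hello").decode())
```

Because the server runs on-device, the round trip involves no network egress, which is the latency and privacy argument for edge deployment.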
Industry adoption and applications
Major robotics companies have integrated Jetson AGX Thor into next-generation platforms. Agility Robotics incorporates Thor into the sixth generation of its Digit humanoid robot, leveraging the compute capacity for whole-body control and environmental awareness. Boston Dynamics implements Thor in the electric Atlas humanoid, enabling advanced manipulation and navigation capabilities.
As one early adopter puts it: "The development of capable humanoid robots hinges on our ability to run powerful AI models directly on the robot, enabling real-time learning and interaction. NVIDIA Jetson Thor's server-class performance, delivered within a compact and power-efficient design, allows us to deploy the large-scale generative AI models necessary for our humanoids to perceive, reason, and act in complex, unstructured environments."
Warehouse and logistics automation
Amazon Robotics deploys Thor for autonomous warehouse systems requiring real-time decision-making in high-density environments. The platform's sensor fusion capabilities enable simultaneous tracking of inventory, navigation obstacles, and human workers. Multi-camera processing at 4K resolution supports precise manipulation and quality inspection tasks.
Figure AI utilizes Thor's computational capacity for humanoid robot development focused on manufacturing applications. The platform enables vision-language interaction allowing operators to instruct robots through natural language rather than programmed routines. This flexibility reduces deployment time for new tasks in dynamic production environments.
Medical and industrial systems
Healthcare robotics applications leverage Thor for surgical assistance and patient mobility support. The platform's reliability requirements align with medical device standards, while computational capacity supports real-time analysis of high-resolution medical imaging. Medtronic explores Thor integration for next-generation surgical robotics requiring sub-millimeter precision and adaptive control.
Industrial inspection systems from Caterpillar and Hexagon implement Thor for autonomous quality control. The platform processes multiple camera streams simultaneously, applying defect detection models trained on synthetic and real-world data. Edge deployment eliminates network latency concerns critical for real-time manufacturing decisions.
Developer kit specifications
The Jetson AGX Thor Developer Kit includes a T5000 module with integrated heat sink, reference carrier board, 140W power supply, WiFi 6E module, and 1TB NVMe SSD storage. Physical dimensions accommodate desktop development while thermal management supports sustained compute workloads. The kit ships with pre-configured software enabling immediate development without extensive system configuration.
Power and thermal management
Power architecture supports configurable profiles between 40W and 130W maximum consumption. Dynamic voltage and frequency scaling optimizes energy efficiency based on workload characteristics. Thermal solution combines a transfer plate with active cooling, maintaining safe operating temperatures during sustained AI inference tasks. Embedded systems can implement passive cooling for the 40W profile in temperature-controlled environments.
The carrier board implements advanced power management across multiple voltage domains. A Micro-Fit connector accepts 9V to 28V input for integration into robotic platforms with standardized power distribution. USB-C power delivery provides convenient desktop operation, with the system prioritizing the first connected supply when multiple sources are available.
Technical comparison with previous generation
Thor delivers 7.5x higher AI compute compared to Jetson AGX Orin while improving energy efficiency by 3.5x. Memory capacity doubles from 64GB to 128GB, enabling deployment of larger foundation models without external memory management. The Blackwell GPU architecture provides native FP4 precision support, reducing model size and inference latency for transformer-based architectures.
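The headline ratios can be cross-checked with publicly listed Jetson AGX Orin figures (275 sparse INT8 TOPS at up to 60W), which are assumptions drawn from Orin's spec sheet rather than this article:

```python
orin_tops, thor_tflops = 275, 2070  # Orin sparse INT8 TOPS vs Thor FP4 TFLOPS
orin_w, thor_w = 60, 130            # maximum power profiles, in watts

compute_ratio = thor_tflops / orin_tops
efficiency_ratio = compute_ratio / (thor_w / orin_w)
print(round(compute_ratio, 1), round(efficiency_ratio, 1))  # → 7.5 3.5
```

The 7.5x compute and 3.5x efficiency claims are mutually consistent under these assumptions: Thor draws roughly 2.2x the peak power for 7.5x the throughput.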
Networking bandwidth increases by an order of magnitude through the QSFP28 interface, addressing bottlenecks in multi-camera systems. PCIe Gen5 support accelerates storage access and peripheral connectivity. The transition to Server Base System Architecture ensures long-term software compatibility as NVIDIA's AI stack evolves.
Physical form factor maintains compatibility with existing carrier board dimensions despite larger silicon area. The reference design removes the PCIe expansion slot and GPIO header present in Orin developer kits due to space constraints from increased chip size and thermal solution requirements. Custom carrier boards can expose additional interfaces based on specific application requirements.
What applications in humanoid robotics or edge AI would benefit most from Thor's combination of high memory capacity and real-time processing? Share your implementation experiences with the yippy community.