Features
NVIDIA® GB300 Grace Blackwell Ultra Desktop Superchip
Unleash Superior AI Power on Your Desktop
The ASUS ExpertCenter Pro ET900N G3 is part of a new class of deskside AI supercomputers. Designed from the ground up to build and run AI, this revolutionary system pairs the NVIDIA® GB300 Grace Blackwell Ultra Desktop Superchip with up to a massive 784 GB of coherent memory. Capable of large-scale AI training and inference workloads, this desktop system uses the NVIDIA® AI Software Stack to put an unprecedented amount of compute performance at the fingertips of AI development teams.
NVIDIA® GB300 Grace Blackwell Ultra Desktop Superchip
Up to 784 GB of Coherent Memory
Up to 20 PFLOPS AI Performance
NVIDIA AI Software Stack
NVIDIA NVLink™-C2C
NVIDIA NVLink-C2C extends the industry-leading NVLink technology to a chip-to-chip interconnect between the GPU and CPU, enabling high-bandwidth coherent data transfers between processors and accelerators.
Optimized Cooling Design
NVIDIA ConnectX®-8 SuperNIC
Significantly enhances system performance for AI factories.
*Preliminary specifications, subject to change.
Features
Features an NVIDIA Blackwell Ultra GPU, which comes with the latest-generation NVIDIA CUDA® cores and fifth-generation Tensor Cores, connected to a high-performance NVIDIA Grace CPU via the NVIDIA® NVLink®-C2C interconnect, delivering best-in-class system communication and performance.
Powered by the latest NVIDIA Blackwell-generation Tensor Cores, enabling 4-bit floating-point (FP4) AI. FP4 boosts performance and increases the size of next-generation models that memory can support, while maintaining high accuracy.
The NVIDIA ConnectX®-8 SuperNIC is optimized to supercharge hyperscale AI computing workloads. Delivering up to 800 gigabits per second (Gb/s), the NVIDIA ConnectX®-8 SuperNIC provides extremely fast, efficient network connectivity, significantly enhancing system performance for AI factories.
NVIDIA DGX OS provides a stable, fully qualified operating system for running AI, machine learning, and analytics applications on the NVIDIA DGX platform. It includes system-specific configurations, drivers, and diagnostic and monitoring tools, and scales easily across multiple NVIDIA DGX Station systems, NVIDIA DGX Cloud, or other accelerated data center or cloud infrastructure.
AI models continue to grow in scale and complexity. NVIDIA Grace Blackwell Ultra’s large coherent memory allows massive-scale models to be trained and run efficiently within one memory pool, thanks to the C2C superchip interconnect that bypasses the bottlenecks of traditional CPU-and-GPU systems.
A full-stack solution for AI workloads, including fine-tuning, inference, and data science. NVIDIA’s AI software stack lets you work locally and deploy easily to the cloud or data center, using the same tools, libraries, frameworks, and pretrained models from desktop to cloud.
NVIDIA DGX Stations take advantage of AI-based system optimizations that intelligently shift power based on the currently active workload, continually maximizing performance and efficiency.
DGX Station supports NVIDIA Multi-Instance GPU (MIG) technology to partition the GPU into as many as seven instances for local development with multiple users, each fully isolated with its own high-bandwidth memory, cache, and compute cores.
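As a rough illustration, MIG partitioning like that described above is typically driven through the `nvidia-smi mig` command line. The sketch below is a dry run that only prints the commands it would issue; the GPU index and the "1g.10gb" profile name are illustrative assumptions, not taken from this product's documentation:

```shell
# Dry-run sketch of MIG partitioning via nvidia-smi (illustrative only).
# run() prints each command instead of executing it, since real MIG
# changes require root privileges and a MIG-capable GPU.
run() { echo "+ $*"; }

# Enable MIG mode on GPU 0 (takes effect after a GPU reset).
run sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports.
run sudo nvidia-smi mig -lgip

# Create GPU instances and matching compute instances (-C).
# The "1g.10gb" profile is hypothetical; actual profiles vary by GPU model.
run sudo nvidia-smi mig -cgi 1g.10gb,1g.10gb -C

# Confirm the MIG devices are visible as isolated instances.
run nvidia-smi -L
```

Each MIG instance then appears as its own device, so different users or containers can be pinned to separate instances without contending for memory bandwidth or compute.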
Specifications and product images are subject to change.