This isn’t just a workstation—it’s a deskside AI powerhouse built to replace your dependence on cloud and data centers. Powered by the NVIDIA GB300 Grace Blackwell Ultra Superchip, the XpertStation WS300 delivers extreme AI compute, massive unified memory, and ultra-fast networking—all in a compact form factor designed for serious AI workloads.
With 748GB of coherent memory and dual 400GbE connectivity, you’re no longer limited by infrastructure. Train larger models, process bigger datasets, and deploy faster—all from your desk.
At its core, a 72-core NVIDIA Grace CPU works seamlessly with the Blackwell Ultra GPU, delivering extreme AI performance within a power envelope of up to 1,400W. This tightly integrated architecture eliminates bottlenecks and unlocks true high-efficiency AI acceleration.
And the memory? A staggering combination of 496GB LPDDR5X + 252GB HBM3e—giving you cluster-level memory capacity in a single system. What typically requires racks of servers now fits under your desk.
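To make the 748GB figure concrete, here is a quick back-of-the-envelope check of which model sizes could fit entirely in coherent memory. This is an illustrative sketch, not a vendor sizing guide: the 1 byte/parameter (FP8 weights) and 20% overhead figures are assumptions chosen for the example.

```python
# Back-of-the-envelope: does a large model fit in 748GB of coherent memory?
# Assumptions (illustrative, not vendor figures): weights stored in FP8
# (1 byte per parameter) plus ~20% overhead for KV cache and activations.

def fits_in_memory(params_billions: float, bytes_per_param: float = 1.0,
                   overhead: float = 1.2, capacity_gb: float = 748.0) -> bool:
    """Return True if the estimated footprint fits in the given capacity."""
    # 1 billion params at 1 byte each is roughly 1GB.
    footprint_gb = params_billions * bytes_per_param * overhead
    return footprint_gb <= capacity_gb

for size in (70, 180, 405, 700):
    print(f"{size}B params: {'fits' if fits_in_memory(size) else 'does not fit'}")
```

Under these assumptions, even a 405B-parameter model fits with room to spare, which is the kind of capacity that would otherwise require multiple discrete GPUs.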
The NVIDIA GB300 Superchip unites the Grace CPU and Blackwell Ultra GPU via NVLink-C2C for high-density, efficient computing.
NVIDIA ConnectX-8 SuperNIC delivers up to 800Gb/s ultra-fast, low-latency connectivity for scalable AI workloads.
748GB of unified memory (HBM3e + LPDDR5X) over NVLink-C2C enables faster data access and efficient AI training.
NVIDIA AI Software Stack enables seamless AI development—from fine-tuning to deployment, desktop to data center.
MSI’s XpertStation WS300, built on the NVIDIA DGX Station™ architecture, brings data-center-class AI performance to the desktop, enabling model development, data science, and autonomous AI agents with NVIDIA OpenShell.
Accelerate deep learning and machine learning training for AI applications ranging from predictive maintenance and medical imaging analysis to natural language processing.
Accelerate end-to-end data science workflows to enable faster data ingestion, analysis, and insight generation across massive datasets.
Accelerate local inference for large and complex AI models, delivering high-speed performance for LLM token generation, data analysis, content creation, and AI chatbots.
Run advanced AI models on local data and serve as a centralized compute node for team-based fine-tuning and on-demand deployment.
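For the local LLM inference use case above, a useful rule of thumb is that single-stream token generation is memory-bandwidth-bound: each decoded token must stream the full weight set from memory. The sketch below estimates a theoretical throughput ceiling from that rule; the bandwidth constant is a hypothetical placeholder, not a published specification for this system.

```python
# Rough upper bound on LLM decode throughput when memory-bandwidth-bound:
# tokens/s <= memory bandwidth (bytes/s) / bytes read per token (~model size).
# The bandwidth value is an assumed placeholder, not a vendor-published spec.

ASSUMED_HBM_BANDWIDTH_TBPS = 8.0  # hypothetical HBM3e bandwidth, TB/s

def max_tokens_per_second(model_size_gb: float,
                          bandwidth_tbps: float = ASSUMED_HBM_BANDWIDTH_TBPS) -> float:
    """Theoretical single-stream ceiling: bandwidth / bytes per token."""
    return (bandwidth_tbps * 1e12) / (model_size_gb * 1e9)

for size_gb in (8, 70, 180):
    print(f"{size_gb}GB model: <= {max_tokens_per_second(size_gb):.0f} tokens/s (single stream)")
```

Real-world throughput is lower (kernel overheads, batching behavior, KV-cache traffic), but the estimate shows why high-bandwidth unified memory matters for interactive token generation.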
Yes—for AI workloads. It offers significantly higher memory and compute.
Partially. It can handle many workloads locally but not hyperscale operations.
No. It is optimized for AI, not media workflows.
Multiple users can access via network or virtualization.
Yes, via 400GbE networking.
Highly—thanks to PCIe Gen6 and Blackwell architecture.
AI startups, research labs, healthcare, finance, defense, and tech enterprises.
