With the release of the ASUS Ascent GX10, ASUS is bringing a new kind of supercomputer to the market – compact, powerful, and cost-effective. Many customers and interested parties have questions about the system's features, possible applications, and limitations. In this article, we answer the most important FAQs about the ASUS Ascent GX10 and highlight the scenarios for which the supercomputer is particularly well-suited.
What is the ASUS Ascent GX10 and what is it used for?
The ASUS Ascent GX10 (NVIDIA DGX Spark) is a compact desktop system that delivers supercomputing performance specifically for AI workloads. Typical use cases include training, fine-tuning, and inference of LLMs, research experiments, prototyping, and running data-sensitive applications on-premises.
What distinguishes the ASUS Ascent GX10 from conventional workstations or servers?
The GX10 isn't a standard workstation: It's built on an integrated GB10 Grace-Blackwell superchip that tightly couples the CPU and GPU and is optimized for AI workloads. This enables shorter data paths, greater efficiency, and out-of-the-box deployment for AI frameworks—all while maintaining a very compact desktop form factor.
What advantages does a supercomputer like the ASUS GX10 offer for AI projects?
- On-premises data sovereignty (no cloud dependency).
- Performance for training & fine-tuning open LLMs.
- Cost transparency for recurring workloads.
- Easy integration through preconfigured software stacks (DGX Base OS, PyTorch, TensorFlow, Hugging Face); a short smoke test is shown below.
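Since the software stack comes preinstalled, a quick smoke test is usually all that is needed to confirm the environment is ready. The snippet below is a minimal, non-authoritative sketch; it assumes PyTorch and the Hugging Face transformers package are present, as they typically are on DGX Base OS:

```python
# Minimal environment smoke test (sketch).
# Assumes PyTorch and the Hugging Face `transformers` package are preinstalled.
import torch
import transformers

print("PyTorch:     ", torch.__version__)
print("Transformers:", transformers.__version__)
print("CUDA visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:         ", torch.cuda.get_device_name(0))
```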
What hardware is included in the ASUS Ascent GX10? (GPU, CPU, memory, network)
| Component | Specification |
| --- | --- |
| Superchip | NVIDIA GB10 Grace Blackwell |
| CPU | 20-core Arm v9.2 (10× Cortex-X925 + 10× Cortex-A725) |
| GPU | NVIDIA Blackwell (integrated) |
| System RAM | 128 GB LPDDR5x (coherent unified memory) |
| Storage (configurable) | 1 TB / 2 TB / 4 TB NVMe SSD (PCIe Gen5 x4) |
| Network | 10G LAN, Wi-Fi 7, Bluetooth 5, ConnectX CX-7 SmartNIC |
| Cooling | Advanced thermal design (optimized for continuous operation) |
| Form factor / dimensions | Desktop format, 150 × 150 × 51 mm |
| OS | NVIDIA DGX Base OS (Ubuntu-based) |
| I/O (rear) | 3× USB-C (20 Gbps), 1× USB-C PD (180 W), HDMI 2.1, 10G LAN, CX-7 |
Here is the official ASUS datasheet: ASUS Ascent GX10 datasheet (PDF)
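If you want to cross-check the GPU-related figures on a running system, PyTorch can report basic device properties. This is only an illustrative sketch; it assumes a CUDA-enabled PyTorch build and installed NVIDIA drivers, and on a unified-memory system it simply reports whatever the driver exposes to the GPU:

```python
# Illustrative sketch: read back basic GPU properties as reported by PyTorch.
# Assumes an installed NVIDIA driver and a CUDA-enabled PyTorch build.
import torch

props = torch.cuda.get_device_properties(0)
print("Device name:       ", props.name)
print("Total memory (GiB):", round(props.total_memory / 2**30, 1))
print("Multiprocessors:   ", props.multi_processor_count)
```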
How much memory does the ASUS Ascent GX10 have and what bandwidth is available?
The ASUS Ascent GX10 features 128 GB of LPDDR5x unified memory directly connected to the NVIDIA GB10 Grace Blackwell superchip.
This shared memory (unified memory) ensures that the CPU and GPU access the same memory area, which speeds up data transfers and increases overall efficiency.
With a memory bandwidth of up to 500 GB/s, the system offers a high data rate for demanding AI workloads such as training, fine-tuning, and inference of medium-sized Large Language Models (LLMs).
This makes the GX10 well suited for open models such as LLaMA 3, Mistral 7B, Falcon 180B, or OpenHermes, even in continuous production operation.
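As a rough rule of thumb, a model's weight memory is its parameter count times the bytes per parameter. The sketch below illustrates that back-of-the-envelope estimate; it deliberately ignores KV cache, activations, and framework overhead, which come on top:

```python
# Back-of-the-envelope estimate of model weight memory (no KV cache or activations).
def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9  # decimal GB

for name, params in [("Mistral 7B", 7), ("LLaMA 3 70B", 70), ("Falcon 180B", 180)]:
    for bits in (16, 4):
        print(f"{name:12s} @ {bits:2d}-bit ≈ {weight_memory_gb(params, bits):6.1f} GB")
```

The output shows why 128 GB of unified memory comfortably covers medium-sized models in 16-bit precision and large open models once they are quantized.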
Can the memory of the ASUS Ascent GX10 be expanded to run models like DeepSeek?
In short: no. The fast, coherent system memory (128 GB LPDDR5x) is integrated into the GB10 platform and cannot be expanded after purchase the way it can in traditional servers. You can increase SSD capacity (1, 2, or 4 TB), but additional storage is no substitute for the high-bandwidth memory close to the GPU that extremely large models require. For models that need significantly more memory or bandwidth (for example, the full DeepSeek-V3/R1 models with 671 billion parameters) or extremely high token rates, larger specialized systems or cloud instances are the better choice.
Does the GX10 support training and inference of LLMs?
Yes. The GX10 is specifically designed for training, fine-tuning, and inference on open LLM architectures and is ideal for research and development tasks as well as productive inference runs of medium to large models (see possible model sizes below).
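To give a feel for a typical fine-tuning workflow, here is a minimal, hypothetical LoRA sketch using the Hugging Face transformers and peft libraries. The model ID and LoRA hyperparameters are placeholders, not a tuned configuration for the GX10:

```python
# Minimal LoRA fine-tuning sketch (hypothetical model ID and hyperparameters).
# Assumes `transformers`, `peft`, and `accelerate` (for device_map) are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "mistralai/Mistral-7B-v0.1"  # placeholder: any open causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach small trainable LoRA adapters instead of updating all base weights.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# From here, training proceeds with a standard Trainer or training loop on your dataset.
```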
Which LLM models can be operated/fine-tuned on the GX10?
Open, locally executable models can be run and fine-tuned on the GX10, for example LLaMA 2/3, Mistral/Mixtral, Falcon 180B, OpenHermes, OpenChat, and comparable open-source models or community releases.
Closed, proprietary models such as GPT-4 cannot be run locally because their parameters are not publicly accessible.
How large can models be — is the statement “up to 400B parameters” correct?
Yes: when two GX10 systems are coupled (clustered), models with up to roughly 400 billion parameters become practically manageable. A single unit is aimed at small to large open models (NVIDIA quotes up to around 200 billion parameters per unit); very large, distributed training across dozens of GPUs remains the domain of larger DGX clusters or the cloud.
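That figure is easy to sanity-check with the same weight-memory rule of thumb used above: at 4-bit precision, roughly 400 billion parameters correspond to about 200 GB of weights, which fits into the combined 256 GB of two coupled systems with room left for KV cache and runtime overhead. A minimal sketch of that arithmetic:

```python
# Sanity check: do ~400B parameters at 4-bit fit into two coupled 128 GB systems?
params = 400e9
bits = 4
weights_gb = params * bits / 8 / 1e9   # ≈ 200 GB of raw weights
total_gb = 2 * 128                     # combined unified memory of two GX10 units
print(f"Weights: {weights_gb:.0f} GB of {total_gb} GB "
      f"-> {total_gb - weights_gb:.0f} GB left for KV cache and runtime")
```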
What is the NVIDIA GB10 Grace Blackwell Superchip?
The NVIDIA GB10 Grace Blackwell superchip is at the heart of the ASUS Ascent GX10 – a highly integrated combination of ARM v9.2 Grace CPU and Blackwell GPU, specifically designed for AI workloads.
Unlike traditional CPU/GPU systems, where data must be transferred between multiple chips, the GB10 combines both processing units in a shared memory space. This creates extremely short data paths, significantly improving efficiency, latency, and power consumption.
The GB10 achieves up to 1 PFLOP of AI computing power (FP4, sparse), making it ideal for training, fine-tuning, and inference of open Large Language Models (LLMs) such as LLaMA 3, Mistral 7B/Mixtral, Falcon 180B, or OpenHermes.
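To put the 1 PFLOP figure in perspective, here is a deliberately rough, compute-only ceiling estimate. It assumes a dense model and ignores the fact that token generation is usually limited by memory bandwidth rather than compute, so real-world token rates are far lower:

```python
# Very rough, hedged upper bound: a dense LLM forward pass costs ~2 * N FLOPs per token.
# Peak FP4 (sparse) throughput is a marketing ceiling; actual decoding is usually
# memory-bandwidth-bound, so treat this only as an order-of-magnitude check.
peak_flops = 1e15            # 1 PFLOP/s (FP4, sparse), as quoted for the GB10
params = 70e9                # example: a 70B-parameter dense model
flops_per_token = 2 * params
print(f"Compute-bound ceiling: ~{peak_flops / flops_per_token:,.0f} tokens/s")
```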
Further advantages of the Grace-Blackwell architecture:
- High energy efficiency combined with massive computing power
- Coherent memory access between CPU and GPU
- Optimized for on-premises AI systems and scientific computing
- Future-proof architecture for next-generation generative AI models
The GB10 thus forms the basis for the ASUS Ascent GX10 to offer supercomputer performance in desktop format – specifically tailored for modern AI development.
How is the GX10 cooled and is it suitable for continuous operation?
The ASUS Ascent GX10 uses what the official datasheet calls an "Advanced Thermal Design." The cooling system is engineered for stable 24/7 operation under high load, so the supercomputer runs reliably even during intensive AI workloads.
Can multiple GX10s be paired/scaled?
Yes. Two systems can be coupled, allowing for scaling for larger models and parallel workloads. However, for clusters with a large number of GPUs, larger DGX systems or cloud solutions are more suitable.
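How the coupling is used depends on the framework. As one illustrative, hedged example, PyTorch's built-in distributed package can span two machines over the network; the addresses, port, and torchrun launch line below are placeholders for whatever your actual setup uses:

```python
# Illustrative two-node sketch using PyTorch's distributed package.
# Launch on both machines with torchrun, e.g. (placeholder addresses/ports):
#   torchrun --nnodes=2 --nproc_per_node=1 --node_rank=<0|1> \
#            --master_addr=<ip-of-first-gx10> --master_port=29500 demo.py
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")   # NCCL for GPU-to-GPU communication
rank, world = dist.get_rank(), dist.get_world_size()

t = torch.ones(1, device="cuda") * (rank + 1)
dist.all_reduce(t)                        # sums across both nodes: 1 + 2 = 3
print(f"rank {rank}/{world}: all_reduce result = {t.item()}")
dist.destroy_process_group()
```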
What is NVIDIA DGX Base OS / DGX OS — does the GX10 need it?
The GX10 runs NVIDIA DGX Base OS (Ubuntu-based) and is optimized for use with the NVIDIA AI stack. This facilitates the use of common frameworks (PyTorch, TensorFlow, HuggingFace) and reduces setup time for developers.
Can you host your own AI models like ChatGPT with the GX10?
You can deploy chatbot functionality locally with open LLMs (e.g., LLaMA-based chat solutions). OpenAI's ChatGPT/GPT-4, however, is proprietary and cannot be hosted locally. If you need GPT-like chat functionality, open models plus fine-tuning on the GX10 are a proven alternative.
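As a rough illustration of what "GPT-like chat, but local" can look like, the sketch below uses the Hugging Face text-generation pipeline with an open chat model. The model ID is only a placeholder example, and the snippet assumes the transformers package, the accelerate package (for device_map), and a locally downloaded model:

```python
# Minimal local chat sketch with an open model (placeholder model ID).
# Assumes `transformers`, `accelerate`, and a CUDA-enabled PyTorch build.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder: any open chat model
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what unified memory means in two sentences."}]
reply = chat(messages, max_new_tokens=128)
print(reply[0]["generated_text"][-1]["content"])  # last turn is the assistant's answer
```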
Who is the GX10 particularly suitable for — and what are its limitations?
Suitable for: Research institutions, universities, companies with sensitive data, start-ups, and development teams that want to train, fine-tune, or run LLMs locally.
Limitations: Very large, distributed training (dozens of GPUs) or extremely high token rates/real-time inference for millions of parallel users are areas where larger DGX clusters or cloud instances can be more economical/performant.
How much does the ASUS Ascent GX10 cost?
The price depends on the selected features and configuration. You can find a current overview directly in the BuyZero Shop. We also offer personalized advice on the right configuration.
Where can I find the official ASUS Ascent GX10 datasheet?
The complete data sheet with all technical details is available directly from ASUS:
ASUS Ascent GX10 datasheet (PDF)
Does ASUS have repair service in Germany?
Yes – ASUS also offers a repair and warranty service in Germany through its Business Support. BuyZero customers also benefit from a direct point of contact for support and advice.
Where does ASUS manufacture?
ASUS is a Taiwanese company with production facilities worldwide, primarily in Taiwan and China. The GX10 is manufactured in collaboration with NVIDIA specifically for the international enterprise market.
Ready for your AI projects with the ASUS Ascent GX10?
The ASUS Ascent GX10 supercomputer combines performance, efficiency, and compactness – perfect for businesses, research, and anyone who wants to run AI applications on-premises.
Discover all details and configurations directly in the BuyZero shop, or get personal advice. Together we'll find the right solution for your project!
Learn more about the ASUS Ascent GX10
If you want to delve even deeper: In our first article, we present the ASUS Ascent GX10 supercomputer in detail – from the key technical specifications to the most important application possibilities.
Click here for a detailed overview of the ASUS Ascent GX10