ASUS Unveils AI POD: Powered by NVIDIA GB200 NVL72 for AI Training and Real-Time Inference
ASUS unveils its new AI POD, a complete rack solution powered by NVIDIA GB200 NVL72 AI servers, optimized for AI training and real-time inference with advanced cooling solutions.
At SC24, ASUS introduced the ASUS AI POD, a complete rack solution for power-hungry AI workloads. Built around NVIDIA's GB200 NVL72 platform, the system is designed for both training and real-time inference of AI models.
The AI POD combines NVIDIA's Grace Blackwell Superchip with NVIDIA NVLink, allowing GPUs, CPUs, and switches to communicate seamlessly. That interconnect makes the system well suited to training trillion-parameter large language models (LLMs), addressing the growing demand for compute in AI research.
The ASUS AI POD offers both liquid-to-air and liquid-to-liquid cooling options to improve performance and efficiency. These systems can remove up to 95 percent of the heat generated, which is critical for data center thermal management.
Alongside the AI POD, ASUS also showed off its full server lineup, including the ESC8000A-E13P, an HPC server that fully supports NVIDIA's MGX modular design. The server accommodates high-density GPU configurations, making it well suited to large-scale AI applications.
In partnership with Ubilink, ASUS has also completed a world-class supercomputing center in Taiwan delivering 45.82 PFLOPS of computing power. The facility runs on green energy and offers AI computing services, including cloud rentals and paid subscriptions.
With these offerings, ASUS stays ahead of its competitors in delivering modern AI solutions, providing development platforms ready for future AI workloads.