Nvidia GB200 NVL72 Is Not Yet Ready For Training Advanced AI Models
13:07, 26.08.2025
Analytics firm SemiAnalysis has published an analysis of server solutions for training artificial intelligence and concluded that Nvidia's H100 and H200 accelerators, as well as Google's TPUs, are currently better suited to training advanced models. GB200 NVL72 server racks built around Nvidia's latest GPUs suffer from problems with the copper NVLink backplane and from immature diagnostics and debugging tools, both of which lead to downtime.
Why Training Is Not Yet Possible
In theory, the failure of a single chip is not critical: the recommended NVL72 configuration is to train on 64 of the rack's 72 GB200 GPUs and keep the remaining 8 in reserve. In practice, however, swapping in a spare requires quickly locating the faulty chip, which is difficult with today's limited diagnostic tools. As a result, training halts, work is rolled back to the last checkpoint, and repairs drag on.
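The cost of that rollback can be sketched with a toy simulation. This is not Nvidia's or SemiAnalysis's code, just an illustrative model of the scheme described above: 64 active GPUs, 8 hot spares, periodic checkpoints, and a rollback to the last checkpoint whenever a failure occurs (all names and numbers here are assumptions for illustration).

```python
ACTIVE, SPARES = 64, 8  # 64 training GPUs, 8 held in reserve (per the article)

def train(total_steps, checkpoint_every, failures):
    """Simulate fault-tolerant training.

    `failures` maps a step number to the id of the GPU that fails there.
    Returns (rollbacks, spares_left, steps_executed).
    """
    active = list(range(ACTIVE))
    spares = list(range(ACTIVE, ACTIVE + SPARES))
    failures = dict(failures)  # consume each failure once
    step = last_ckpt = 0
    rollbacks = executed = 0
    while step < total_steps:
        step += 1
        executed += 1  # total work done, including replayed steps
        if step % checkpoint_every == 0:
            last_ckpt = step  # save a checkpoint
        if step in failures:
            bad = failures.pop(step)
            if bad in active and spares:
                # locating `bad` is the hard part in reality;
                # here we just swap in a spare instantly
                active[active.index(bad)] = spares.pop(0)
            rollbacks += 1
            step = last_ckpt  # all work since the checkpoint is lost
    return rollbacks, len(spares), executed

# One failure at step 25 forces a rollback to the step-20 checkpoint:
print(train(100, 10, {25: 3}))  # → (1, 7, 105)
```

The `executed` count exceeds `total_steps` by exactly the replayed work, which is why slow fault localization (a larger gap between checkpoint and repair) directly translates into lost GPU-hours.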
SemiAnalysis notes that there are currently no known examples of advanced model training completed on GB200 NVL72.
Analyst Recommendations and Nvidia's Focus
For now, analysts advise using GB200 NVL72 primarily for inference, i.e. running already-trained models. Nvidia's own recent materials also emphasize inference, although early announcements positioned the platform for both training and running models.
Future Outlook and Economic Considerations
SemiAnalysis predicts that Nvidia will be able to resolve the NVLink and software issues by the end of the year. However, the total cost of ownership per GB200 GPU is 1.6–1.7 times that of an H100, so to justify the investment the new accelerators must deliver at least 1.6 times the performance at comparable downtime.
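The break-even logic behind that last figure can be written out as a back-of-the-envelope check. The function below is a simple illustration, not a formula from the SemiAnalysis report: if cost per unit of useful work is TCO divided by (performance × uptime), equating the two sides gives the speedup the GB200 must deliver.

```python
def required_speedup(tco_ratio, gb200_uptime=1.0, h100_uptime=1.0):
    """Performance multiple the GB200 needs so its cost per unit of
    useful work matches the H100's.

    cost_per_work = tco / (perf * uptime); setting both sides equal
    and solving for the GB200's relative perf gives the ratio below.
    """
    return tco_ratio * h100_uptime / gb200_uptime

print(required_speedup(1.6))        # equal downtime → 1.6x, as in the article
print(required_speedup(1.7, 0.9))  # extra GB200 downtime raises the bar further
```

This also shows why the "with similar downtime" caveat matters: at 1.7x the cost and only 90% uptime, the required speedup climbs to roughly 1.9x.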