Super Micro Computer is demonstrating a selection of GPU server platforms that support NVIDIA Tesla V100 PCI-E and V100 SXM2 GPU accelerators at the GPU Technology Conference in San Jose through March 29. Supermicro's 4U system with next-generation NVIDIA NVLink interconnect technology is optimized for maximum performance, delivering acceleration for highly parallel applications such as artificial intelligence (AI), deep learning, self-driving cars, smart cities, health care, big data, HPC, and virtual reality.
The SuperServer 4029GP-TVRT supports eight NVIDIA Tesla V100 32GB SXM2 GPU accelerators with maximum GPU-to-GPU bandwidth for cluster and hyper-scale applications. Incorporating the latest NVIDIA NVLink technology, which offers over five times the bandwidth of PCI-E 3.0, the system also features independent GPU and CPU thermal zones to ensure uncompromised performance and stability under the most demanding workloads.
The company is also demonstrating the performance-optimized 4U SuperServer 4029GR-TRT2 system that can support up to 10 PCI-E NVIDIA Tesla V100 accelerators with Supermicro's innovative and GPU-optimized single root complex PCI-E design, which dramatically improves GPU peer-to-peer communication performance. For even greater density, the SuperServer 1029GQ-TRT supports up to four NVIDIA Tesla V100 PCI-E GPU accelerators in only 1U of rack space and the new SuperServer 1029GQ-TVRT supports four NVIDIA Tesla V100 SXM2 32GB GPU accelerators in 1U.
With the convergence of big data analytics, the latest NVIDIA GPU architectures, and improved machine learning algorithms, deep learning applications require the processing power of multiple GPUs that communicate efficiently and effectively as the GPU network scales out. Supermicro's single-root GPU system allows multiple NVIDIA GPUs to communicate directly, minimizing latency and maximizing throughput as measured by the NCCL P2PBandwidthTest.
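As an illustration of how such peer-to-peer bandwidth claims are typically verified, the commands below sketch two common measurement tools: the p2pBandwidthLatencyTest utility from the NVIDIA CUDA samples and the all_reduce_perf benchmark from NVIDIA's open-source nccl-tests suite. The paths and flags are illustrative and assume both tools are already built on a node with eight visible GPUs.

```shell
# Sketch: measuring GPU peer-to-peer bandwidth on a multi-GPU node.
# Paths are illustrative; both tools must be built beforehand.

# CUDA samples utility: prints a GPU-to-GPU matrix of unidirectional and
# bidirectional P2P bandwidth, with and without peer access enabled.
./p2pBandwidthLatencyTest

# nccl-tests benchmark: measures all-reduce bus bandwidth across 8 GPUs,
# sweeping message sizes from 8 bytes to 128 MB, doubling each step (-f 2).
./build/all_reduce_perf -b 8 -e 128M -f 2 -g 8
```

On a single-root PCIe topology or an NVLink-connected SXM2 system, the enabled-peer-access figures in the first test and the bus bandwidth column in the second are the numbers that reflect the GPU-to-GPU communication performance described above.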
For more comprehensive information, check out the Supermicro NVIDIA GPU system product lines.