General Micro Systems’ 6U, dual-CPU OpenVPX server blade pairs two of Intel’s highest-end Xeon processors with the company’s Phoenix VPX450 OpenVPX motherboard in a rugged, air-cooled chassis. The single-blade server includes 44 cores with 88 threads (enough to host up to 88 virtual machines), 1 TB of fast ECC DDR4 DRAM, 80 lanes of PCIe Gen 3 serial interconnect, dual 40 Gigabit Ethernet, plus storage and I/O.
- Server Engine—Dual-socket 2.2 GHz Intel® Xeon® E5 v4 processors, each with 22 cores, total 44 cores and 88 threads on one blade, plus 1 TB of 2,133 MT/s DDR4 with ECC. The CPUs are reliably cooled by GMS’s patented RuggedCool heatsinks and CPU retainers for maximum thermal transfer without CPU throttling.
- Interconnect Fabric—80 PCIe Gen 3 lanes at 8 Gbps move data between on-card subsystems, and 68 PCIe Gen 3 lanes run to the OpenVPX backplane. The industry’s fastest, they assure 544 Gbps of bandwidth between the Phoenix server and the OpenVPX backplane switch matrix or compute nodes. Eight native SATA III lanes connect across the backplane to mass storage card(s).
- Networking—Dual front-panel QSFP+ sockets accept Ethernet inserts for 10 Gb and 40 Gb operation, in either copper or fiber. Among the fastest IEEE networking standards deployed in the commercial market, 40 Gb Ethernet is available in this single-blade server. In the typical use case, dual 40 Gb Ethernet fiber connections provide long-haul communication to distant sensors or intelligent nodes. Two local Ethernet ports (1 GbE and 100BASE-T) provide service connections for “low speed” networking.
- Flexible Add-in Storage and I/O—The unique VPX450 can add up to four different types of plug-on modules. Dual SAM I/O PCIe-Mini sites are typically used for MIL-STD-1553 and legacy military I/O; these sites also accept mSATA SSDs for server data storage. An XMC front-panel module provides plug-in I/O such as a video frame grabber or software-defined radio. Lastly, GMS provides an XMC carrier equipped with an M.2 site, used for either storage (OS boot, for example) or additional add-in I/O.
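The backplane bandwidth figure quoted above can be checked with simple arithmetic. The sketch below uses only numbers taken from the text (68 lanes, 8 Gbps per PCIe Gen 3 lane); the 128b/130b adjustment reflects PCIe Gen 3 line coding and is an effective-throughput estimate, not a figure from the datasheet.

```python
# Back-of-envelope check of the quoted OpenVPX backplane bandwidth.
# Figures come from the text above, not from measurement.

PCIE_GEN3_RAW_GBPS = 8.0   # PCIe Gen 3 signals at 8 GT/s per lane
BACKPLANE_LANES = 68       # lanes routed to the OpenVPX backplane

raw_gbps = BACKPLANE_LANES * PCIE_GEN3_RAW_GBPS
print(f"raw backplane bandwidth: {raw_gbps:.0f} Gbps")  # 544 Gbps, matching the text

# PCIe Gen 3 uses 128b/130b line coding, so usable payload bandwidth
# is slightly below the raw signaling rate:
effective_gbps = raw_gbps * 128 / 130
print(f"effective (128b/130b):   {effective_gbps:.1f} Gbps")
```

The raw figure reproduces the 544 Gbps claimed in the bullet above; real-world throughput is further reduced by packet headers and flow-control overhead.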
Besides acting as a traditional OpenVPX “slot 1 controller,” the VPX450 server blade can be used as part of a compute cluster system, with each Phoenix blade providing a PassMark score of 34,330. Inter-card communication via the 68 PCIe backplane lanes can be used to create a high-performance computing cluster (HPCC) via symmetric multiprocessing (SMP) for data mining, augmented/virtual reality, or blockchain computation.
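To give a feel for cluster sizing, the sketch below scales the per-blade PassMark score from the text to a multi-blade system. The blade count and the scaling-efficiency factor are illustrative assumptions (real scaling depends on workload and interconnect overhead), not GMS figures.

```python
# Rough aggregate-performance estimate for a hypothetical multi-blade
# VPX450 cluster. Only the per-blade PassMark score (34,330) comes
# from the text; blade count and efficiency are assumptions.

PASSMARK_PER_BLADE = 34_330

def cluster_passmark(blades: int, scaling_efficiency: float = 0.9) -> float:
    """Estimate aggregate PassMark across a cluster, discounting for
    assumed SMP/interconnect overhead via a scaling-efficiency factor."""
    return blades * PASSMARK_PER_BLADE * scaling_efficiency

# Example: a hypothetical 4-blade chassis at an assumed 90% scaling.
print(f"{cluster_passmark(4):,.0f}")  # 123,588
```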
For more information, peruse the VPX450 datasheet.