Nvidia GPUs in Cloudera: “Customers are screaming for this”

Nvidia GPUs in the Cloudera Data Platform (CDP) are now generally available for enterprise customers, meeting the timeline first announced in April and bringing GPU acceleration to enterprise data centers.

The initial target of the offer is large on-premises data center customers who will be moved over to CDP Private Cloud, said Sushil Thomas, vice president of machine learning at Cloudera, in a virtual news conference with reporters.

Nvidia’s contribution is its GPUs, such as the A30 and A100, which run inside servers from Cisco, Dell, HPE, Lenovo, and others.

The Nvidia-Cloudera collaboration will “democratize this space for avant-garde companies with huge IT staffs,” said Scott McClellan, senior director of the data science group at Nvidia. “We see this as a really big driver for mainstream enterprises.”

Cloudera already runs on 400,000 servers at 2,000 customers, Thomas said. Customers with machine learning and artificial intelligence work “are screaming for this” Nvidia-Cloudera approach, he added.

The Internal Revenue Service has just begun using CDP with Nvidia GPUs, integrated with the RAPIDS Accelerator for Apache Spark 3.0. When the integration was announced in April, the IRS said it was implementing the system for data engineering and data science workflows, including fraud detection and other mission-critical needs, and was seeing 3x speed improvements.
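For context, the RAPIDS Accelerator works as a Spark plugin that is switched on through configuration, which is what lets existing Spark SQL and DataFrame jobs run on GPUs without code changes. The sketch below is a minimal, hypothetical PySpark setup, not the IRS’s or Cloudera’s documented configuration; the jar deployment, resource amounts, and data path are assumptions.

```python
from pyspark.sql import SparkSession

# Hypothetical sketch of enabling the RAPIDS Accelerator for Apache Spark 3.x
# purely through configuration (the rapids-4-spark jar must already be on the
# driver/executor classpath). Not the IRS's or Cloudera's actual setup.
spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")
    # Load the RAPIDS SQL plugin so supported operators are planned onto the GPU
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    # Assumed resource layout: one GPU per executor, shared by four tasks
    .config("spark.executor.resource.gpu.amount", "1")
    .config("spark.task.resource.gpu.amount", "0.25")
    .getOrCreate()
)

# The DataFrame code itself is unchanged; the path and columns are illustrative
df = spark.read.parquet("/data/transactions")
df.groupBy("account_id").sum("amount").show()
```

Operations the plugin does not support simply fall back to the CPU, which is why this configuration-only approach lines up with the “without code changes” claim made later in the piece.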

In the news conference on Tuesday, the IRS was quoted as saying the Cloudera/Nvidia integration has more recently delivered 10x speed improvements in those workflows at half the cost. The IRS statement was supplied by Joe Ansaldi, a technical branch chief for research, applied analytics and statistics at the IRS.

McClellan said other companies are seeing 5x full-stack acceleration in data science workflows without code changes, at only a 30% to 40% incremental cost over CPU-only systems.

In one comparison Nvidia provided, a modern CPU-only four-node cluster cost $160,000, compared with $247,000 for the same configuration with two Nvidia A30 GPUs. “You’re adding very little cost and exceptional speedups,” McClellan said.
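As a rough back-of-the-envelope check on that claim, applying the roughly 5x full-stack speedup McClellan cited to the A30 comparison above (an assumption, since the speedup figure is a general one rather than specific to this cluster) puts the cost per unit of work well below the CPU-only baseline:

```python
# Back-of-the-envelope price/performance math on the figures quoted above.
# Applying the ~5x speedup to this specific A30 cluster is an assumption.
cpu_cluster_cost = 160_000   # four-node CPU-only cluster
gpu_cluster_cost = 247_000   # same configuration with A30 GPUs
speedup = 5                  # McClellan's cited full-stack acceleration

hardware_premium = gpu_cluster_cost / cpu_cluster_cost - 1
cost_per_unit_work = (gpu_cluster_cost / speedup) / cpu_cluster_cost

print(f"hardware premium: {hardware_premium:.0%}")                     # ~54%
print(f"cost per unit of work vs CPU-only: {cost_per_unit_work:.0%}")  # ~31%
```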

Another example pitted a modern CPU-only four-node cluster against the same configuration with a single Nvidia A100 GPU, with the former costing $223,000 and the latter priced at $292,000.

The CDS-GPU system runs on CDP Private Cloud Base, and Cloudera offers an annual license for GPU usage, which lists at $7,500 per GPU for the latest generation of Nvidia Ampere GPUs, Nvidia said.

Enterprises face a common set of data science problems, with jobs that are time-consuming, costly and frustrating, and that is what Nvidia and Cloudera want to address together, the two companies said.

RELATED: Nvidia partners with VMware for ease in managing AI infrastructure