AI

Akash is an Airbnb for GPUs: owners of idle chips can rent out compute for cheap

As LLMs become an even bigger driver of AI, developers face a squeeze to find GPUs, especially Nvidia's hot H100s, to handle their compute tasks.

Buying Nvidia GPUs seems out of the question for researchers and smaller dev shops, with a single chip running anywhere from $30,000 to $80,000, based on interviews with buyers and analysts. And that's if the chips are available for purchase at all. Renting time on a cloud server to run generative AI can suffice, but it is still expensive: about $2 per chip per hour, based on recent average prices posted on the web, with some rates running as high as $9 per chip per hour.

Against that backdrop, one open cloud platform built on blockchain, startup Akash Network, is offering access to H100s and A6000s, often at $1.49 per hour and $0.49 per hour, respectively. What sets Akash apart from many other networks offering GPU leasing is that it lets owners of idle chip capacity rent out compute on the Akash Network. As a result, prices vary somewhat, depending on the supply at any given time.

On Monday night, April 1, Akash's website showed an hourly rental price of $1.58 for a single H100 with 80 GB of vRAM and a PCIe interface (49 were available at that price). By comparison, an H100 with the same vRAM and an SXM5 interface was renting for $0.41, with up to eight available.

An A100 with 80 GB and PCIe could also be rented for $0.82, well below another offer for the same chip at $1.46. On top of the total rental price for a chip over any number of hours, there's an additional 20% take fee, split between provider and payor, explained Greg Osuri, founder of Akash.
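The pricing arithmetic above can be sketched in a few lines. This is a rough illustration only: the article doesn't spell out exactly how the 20% take fee is applied or how the provider/payor split works, so the assumption here is a flat fee on top of the base rental total.

```python
# Rough sketch of the rental cost math described in the article.
# Assumption (not a confirmed detail of Akash's protocol): the 20%
# take fee is applied on top of the base hourly rental total.

def rental_cost(hourly_rate: float, hours: float, take_fee: float = 0.20) -> dict:
    """Estimate the total cost of renting one GPU for a given duration,
    with a flat take fee added to the base rental price."""
    base = hourly_rate * hours
    fee = base * take_fee
    return {
        "base": round(base, 2),
        "fee": round(fee, 2),
        "total": round(base + fee, 2),
    }

# Example: an H100 at $1.49/hour for 24 hours
print(rental_cost(1.49, 24))  # base 35.76, fee 7.15, total 42.91
```

At the cited $1.49/hour rate, a full day on a single H100 would come to roughly $43 with the fee included, versus $48 per day at the $2/hour average cloud rate before any fees.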

Osuri told Fierce Electronics that the Akash concept is like an Airbnb for GPUs. "Anyone with a cloud-capable computer can install Akash software," he said. "Our software does multi-cloud Kubernetes. Think of this as an exchange."

About 480 companies and individuals are contributors, while Akash employs just 20 people. Osuri said revenues are $1 million annually, but have been doubling every two months.

One discovery in running Akash has been how many companies with GPUs would like to offer them up as rentals. "People are starting to find out that lots of places have a lot of chips" they want to rent, Osuri said. The concept is working well, with a 98% closing rate, he added.

Akash was founded in 2018 by Overclock Labs, a software development company formed by Osuri and Adam Bozanich. It gets financial backing from Sequoia Scout and Galaxy. Every aspect of it is fully open-sourced, Osuri said. "There's a cyberpunk aspect to Akash," he said. "You can get to places where closed source software cannot."

The name 'akash' is interesting in itself: it means "sky" in traditional Indian cosmology. Osuri laughed when he pointed out that the name has been interpreted by some tricksters as Akash-net, transliterated to "Skynet," the fictional artificial neural network-based conscious group mind AGI that was the antagonist in the Terminator film franchise.

While Akash Network clearly doesn't have anything approaching AGI capabilities, it does raise an interesting question about the international trade legality of leasing H100s. The US government has restricted sales of H100s to China out of national security concerns, while Nvidia has responded with its own variants sold into China.

“It’s legal to lease [H100] chips from the US,” Osuri noted, quickly adding that Akash has “no presence in China at all,” with most users in the US and Europe.

Akash has questioned the ability of big tech cloud providers to succeed with their own custom accelerator chips, partly because of the apparent abundance of GPUs available for lease.

And Osuri's not worried about losing sleep over his job, at least not from operating Akash Network as interest in the Airbnb-for-GPUs concept blossoms. "I get to sleep quite a lot because of the long-term vision [where] we don't need funding."

RELATED: Blackwell platform puts Nvidia in higher realm for cost and energy

RELATED: Is this Kubernetes-based supercloud the future of open source?