How IBM's new supercomputer is making AI foundation models more enterprise-budget friendly

Click for: original source

Foundation models are changing the way artificial intelligence (AI) and machine learning (ML) can be used. All that power comes at a cost, though: building AI foundation models is a resource-intensive task. By Sean Michael Kerner.

IBM announced that it has built its own AI supercomputer to serve as the literal foundation for its foundation-model training research and development. Named Vela, it is designed as a cloud-native system built from industry-standard hardware, including x86 silicon, Nvidia GPUs and Ethernet-based networking.

IBM is no stranger to the world of high-performance computing (HPC) and supercomputers. One of the fastest supercomputers on the planet today is Summit, built by IBM and deployed at Oak Ridge National Laboratory.

The Vela system, however, isn't like the other supercomputers IBM has built to date. For starters, Vela is optimized for AI and uses commodity x86 hardware, as opposed to the more exotic (and expensive) equipment typically found in HPC systems. Interesting read!
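To make the commodity-hardware idea concrete, here is a minimal sketch of multi-node, data-parallel training with PyTorch's DistributedDataParallel, using the NCCL backend that commonly runs over Ethernet on clusters like the one described. The model, batch sizes and launch details are placeholders for illustration, not details of IBM's actual Vela software stack.

```python
# Minimal sketch of data-parallel training on a commodity GPU cluster.
# Assumes one process per GPU, launched with torchrun; the model and data
# here are placeholders, not IBM's Vela training stack.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE; NCCL carries the
    # gradient all-reduce over the cluster's network fabric.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real foundation model would be a large transformer.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)

    for step in range(100):
        # Synthetic batch; each rank trains on its own shard of the data.
        x = torch.randn(32, 1024, device=local_rank)
        loss = ddp_model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are averaged across all nodes here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A script like this would be launched on every node with standard tooling such as `torchrun --nnodes=<N> --nproc_per_node=<GPUs> train.py`, which is exactly the kind of off-the-shelf, cloud-friendly workflow that x86/GPU/Ethernet clusters are built to run.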

[Read More]

Tags ibm cloud management