IBM Research data loader enhances AI model training for open-source community

How do you overcome bottlenecks when you're training AI models on massive quantities of data? At this year's PyTorch conference, IBM Research showcased a groundbreaking data loader for large-scale LLM training. The tool, now available to PyTorch users, aims to simplify large-scale training for as broad an audience as possible.

The origins of the research

The idea for the high-throughput data loader stemmed from practical issues research scientists encountered during model training: their work required a tool that could process large amounts of data across multiple devices, all while keeping pace with increasingly efficient GPUs. As IBM Research notes in its blog about the release, "It's all thanks to a team of researchers who were simply building the tools they needed to get a job done."

Davis Wertheimer of IBM Research explains some of the challenges that can emerge during large-scale training: "There's something of an 80/20 rule when it comes to large-scale training. Eighty percent of all the published literature is looking at algorithmic tradeoffs between GPU memory and communication and computation. But when you actually try to build something, 80% of the time, you can depend on a very long tail of all these other practical issues because the pipeline runs at the speed of the narrowest bottleneck."

As the IBM team developed their training platform, they continued encountering bottlenecks. "As we get better and better at using our GPUs, more and more often the bottleneck is the data loader," observes Wertheimer.

This realization led to a dual development process. "There's been a parallel journey of, on the one hand, evolving our training platform, and, on the other hand, constantly evolving our data loader to keep up with the speed demands from our training platform to avoid bottlenecking it," he explains.

Key features of the world-class data loader

IBM Research's Linsong Chu outlines the essential features of the data loader:

Stateful and checkpointable: "Whenever you save a model, your data loader state is also saved, and whenever you recover from a checkpoint, both the model state and data loader states need to be recovered at the same time," says Chu.
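
For readers unfamiliar with the pattern, here is a minimal sketch of what stateful, checkpointable data loading can look like in PyTorch. The class and its fields are illustrative assumptions for this article, not the IBM loader's actual API; the point is that the loader exposes state_dict() and load_state_dict() just like the model, so both can be checkpointed and restored together.

```python
import torch
from torch.utils.data import IterableDataset

class CheckpointableDataset(IterableDataset):
    """Illustrative stateful dataset (hypothetical, not IBM's API):
    it tracks how far it has read, so iteration can resume exactly
    where it left off after a restart."""

    def __init__(self, shard_files):
        self.shard_files = shard_files
        self.shard_idx = 0  # which shard we are reading
        self.offset = 0     # samples already consumed in that shard

    def __iter__(self):
        for i in range(self.shard_idx, len(self.shard_files)):
            samples = torch.load(self.shard_files[i])  # assumed shard format
            start = self.offset if i == self.shard_idx else 0
            for j in range(start, len(samples)):
                self.shard_idx, self.offset = i, j + 1
                yield samples[j]

    def state_dict(self):
        return {"shard_idx": self.shard_idx, "offset": self.offset}

    def load_state_dict(self, state):
        self.shard_idx, self.offset = state["shard_idx"], state["offset"]

# Save and restore model state and loader state together:
# torch.save({"model": model.state_dict(), "loader": ds.state_dict()}, "ckpt.pt")
# ckpt = torch.load("ckpt.pt")
# model.load_state_dict(ckpt["model"]); ds.load_state_dict(ckpt["loader"])
```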

Auto-rescaling of checkpoints: The data loader automatically adjusts to workload changes during extended training sessions. "Training could easily take weeks or months, and there are tons of reasons why you might have to rescale your workload in the middle," notes Chu.
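
One way to picture the rescaling problem: each rank's saved state records which portions of the data it still owes the model. The sketch below redistributes that work when the world size changes; the "pending_shards" state layout is a hypothetical simplification, not the loader's real format.

```python
def rescale_loader_states(old_states, new_world_size):
    """Illustrative re-sharding of saved loader states: pool every
    shard that any old rank had left to consume, then deal them out
    round-robin to the new ranks, so nothing is dropped or seen
    twice after the rescale."""
    remaining = [s for state in old_states for s in state["pending_shards"]]
    new_states = [{"pending_shards": []} for _ in range(new_world_size)]
    for i, shard in enumerate(remaining):
        new_states[i % new_world_size]["pending_shards"].append(shard)
    return new_states

# Example: a job checkpointed on 4 GPUs resumes on 2:
# new_states = rescale_loader_states(saved_states, new_world_size=2)
```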

Efficient data streaming: The system supports data streaming with zero build overhead for shuffling.
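
A shuffle buffer is one standard way to get this property, and a plausible reading of the claim: samples are shuffled on the fly as they stream past, so training starts immediately, with no up-front pass over the dataset to build a shuffled index. A minimal sketch, not IBM's implementation:

```python
import random

def shuffled_stream(sample_iter, buffer_size=10_000, seed=0):
    """Illustrative shuffle buffer: approximates shuffling a stream
    without materializing or indexing the full dataset. Each incoming
    sample evicts a randomly chosen buffered one, so memory stays
    fixed at buffer_size regardless of dataset size."""
    rng = random.Random(seed)
    buffer = []
    for sample in sample_iter:
        if len(buffer) < buffer_size:
            buffer.append(sample)
            continue
        idx = rng.randrange(buffer_size)
        buffer[idx], sample = sample, buffer[idx]
        yield sample
    rng.shuffle(buffer)  # drain whatever remains at the end
    yield from buffer
```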

Asynchronous distributed operation: "We want the data loader to be non-blocking," Chu explains. "While saving the data loader state, we want the saving to be distributed in a form where zero communication is involved."
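
The key idea is that each rank can persist its own slice of loader state independently, so no collective communication is needed and training never stalls on the save. A hedged sketch, with a hypothetical file layout:

```python
import threading
import torch
import torch.distributed as dist

def save_loader_state_async(loader_state, ckpt_dir):
    """Illustrative non-blocking, zero-communication checkpoint:
    each rank writes only its own state to its own file, in a
    background thread. No collectives, no waiting on peers."""
    rank = dist.get_rank() if dist.is_initialized() else 0
    path = f"{ckpt_dir}/loader_state_rank{rank}.pt"
    t = threading.Thread(target=torch.save, args=(loader_state, path))
    t.start()
    return t  # join() before the next checkpoint to avoid overlapping writes
```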

Dynamic data mixing: The data loader can adapt to different data mixing ratios, which is useful for evolving training needs.
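
In practice, data mixing usually means drawing each next sample from one of several sub-corpora with a given probability. Here is a sketch of that idea with made-up stream names; the real loader's mechanism may differ.

```python
import random

def mixed_stream(streams, weights, seed=0):
    """Illustrative dynamic data mixing: pick the source of each
    sample with probability proportional to its weight. Adjusting
    `weights` changes the mixing ratio without rebuilding any data."""
    rng = random.Random(seed)
    streams = dict(streams)  # don't mutate the caller's dict
    while streams:
        names = list(streams)
        name = rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
        try:
            yield next(streams[name])
        except StopIteration:
            del streams[name]  # drop exhausted sources

# Usage sketch (dataset names are hypothetical):
# stream = mixed_stream({"web": iter(web_ds), "code": iter(code_ds)},
#                       weights={"web": 0.7, "code": 0.3})
```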

Efficient global shuffling: The tool addresses memory bottlenecks when handling large datasets, making shuffling efficient even as data grows.
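
A common way to make global shuffling scale, consistent with the description here, is to shuffle at two levels: permute the order of data shards each epoch (cheaply and deterministically, so every rank agrees without communication), then rely on a bounded in-memory buffer, like the one sketched earlier, for fine-grained mixing. The function below is illustrative only.

```python
import random

def shards_for_rank(num_shards, world_size, rank, epoch, seed=0):
    """Illustrative coarse level of a two-level global shuffle:
    every rank computes the same epoch-dependent permutation of the
    shard order, then takes its own strided slice. Memory cost is
    O(num_shards), not O(num_samples), however large the data grows."""
    order = list(range(num_shards))
    random.Random(seed + epoch).shuffle(order)
    return order[rank::world_size]

# Rank 1 of 4 in epoch 0, over 1,000 shards:
# my_shards = shards_for_rank(1000, world_size=4, rank=1, epoch=0)
```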

PyTorch native, modular and extensible: Designed for adaptability and scalability, the data loader is built to accommodate future growth. "What if next year we have to deal with 30 trillion, 50 trillion or 100 trillion tokens?" asks Chu. "The world is changing fast, so we need to build the data loader so it can not only survive today, but also survive for tomorrow."

Real-world performance

The IBM Research team rigorously tested their data loader over several months, running hundreds of small and large jobs, and observed stable, smooth performance throughout. Moreover, the entire data loader operates asynchronously and is non-blocking.

"We leveraged a lot of built-in PyTorch capabilities in order to make all this happen," says Wertheimer. "That's why we're contributing, contributing it back."
