Decentralized Training of Foundation Models in Heterogeneous Environments

Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner. State-of-the-art schemes for model parallel foundation model training, such as Megatron, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates different computational “tasklets” in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments that represent different scenarios for learning over geo-distributed devices simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8X faster than prior state-of-the-art training systems (Megatron).

https://www.together.xyz/research/releasing-v1-of-gpt-jt-powered-by-open-source-ai
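To make the tasklet-allocation idea concrete, here is a toy sketch, not the paper's actual cost model or algorithm: a simple evolutionary search that assigns pipeline "tasklets" to devices connected by a heterogeneous bandwidth matrix and keeps the assignments with the lowest estimated communication time. The device count, bandwidths, and activation sizes are made-up numbers for illustration.

```python
"""Toy evolutionary search for assigning pipeline tasklets to heterogeneous
devices. All numbers and the cost model below are illustrative assumptions,
not taken from the paper."""
import random

NUM_STAGES = 8   # pipeline tasklets, one per device in this toy setup
ACT_GB = 2.0     # GB of activations sent between consecutive stages

random.seed(0)

def make_bandwidth(n, regions=2):
    """Hypothetical pairwise bandwidth matrix (GB/s): fast within a 'region',
    slow across regions -- a stand-in for real network measurements."""
    region = [i * regions // n for i in range(n)]
    bw = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                bw[i][j] = 10.0 if region[i] == region[j] else 0.1
    return bw

BW = make_bandwidth(NUM_STAGES)

def comm_cost(perm):
    """Seconds to push activations stage-to-stage for one assignment
    (perm[s] = device running stage s)."""
    return sum(ACT_GB / BW[perm[s]][perm[s + 1]] for s in range(NUM_STAGES - 1))

def mutate(perm):
    """Swap the devices of two stages."""
    child = perm[:]
    i, j = random.sample(range(NUM_STAGES), 2)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(pop_size=32, generations=200):
    """Keep the cheaper half of the population each generation and refill it
    with mutated copies of survivors."""
    pop = [random.sample(range(NUM_STAGES), NUM_STAGES) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=comm_cost)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    best = min(pop, key=comm_cost)
    return best, comm_cost(best)

if __name__ == "__main__":
    best, cost = evolve()
    print("stage -> device:", best, "| estimated comm time per step: %.1f s" % cost)
```

With this toy cost model the search quickly learns to keep the pipeline within one "region" as long as possible and pay the slow cross-region hop only once, which is the flavor of placement decision the paper's scheduler automates.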

So instead of renting a machine in a single location (or multiple machines interconnected at the same data center), they rented machines from multiple locations?

Seems like a neat concept, but I’m missing the practical point.


Instead of having a bunch of computers physically in the same data center with high-bandwidth connections between them, they used multiple computers connected over the internet. This is potentially cheaper because you can use more generally available hardware for large-scale training.

I use single GPUs from Vast.ai, which are much cheaper than renting from Google or other big cloud providers. However, the Vast.ai GPUs only have a generic internet connection, so you don't have as much bandwidth to transfer data between them.
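For a sense of scale, here's a rough back-of-envelope, not a measurement, of how long one gradient sync could take over a data-center-class link versus a generic internet connection. The model size, precision, worker count, and link speeds are all assumed numbers.

```python
# Rough back-of-envelope: time for a naive ring all-reduce of gradients at
# data-center vs. generic-internet bandwidth. Numbers below are assumptions.

PARAMS = 6.7e9            # e.g. a ~6.7B-parameter model
BYTES_PER_PARAM = 2       # fp16 gradients
GRAD_BYTES = PARAMS * BYTES_PER_PARAM
WORKERS = 8

# A ring all-reduce moves roughly 2*(N-1)/N of the gradient bytes per worker.
traffic_bytes = 2 * (WORKERS - 1) / WORKERS * GRAD_BYTES

for name, gbit_per_s in [("InfiniBand-class link", 400), ("generic internet", 1)]:
    seconds = traffic_bytes / (gbit_per_s * 1e9 / 8)
    print(f"{name:>22}: ~{seconds:.1f} s per gradient sync")
```

The gap of a couple of orders of magnitude per sync is why naive data parallelism over the internet is painful, and why the paper's scheduling of which tasklets talk over which links matters so much.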


Ok, that makes sense. Neat!
