Parallelizing and Training Large Language Models

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism

This paper demonstrates a strategy for model parallelism: it splits the layers of the Transformer across different GPUs, with each device computing a shard of each layer, so that a model with billions of parameters can be trained even when it does not fit on a single GPU.
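The core trick can be shown in a few lines. Below is a minimal single-process sketch (my own illustration, not the paper's code) of how Megatron partitions a Transformer MLP block Y = GeLU(XA)B: the first weight matrix is split column-wise and the second row-wise, so the nonlinearity applies locally on each device and only one all-reduce is needed at the end. The two "GPUs" here are just NumPy array slices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))    # activations: (batch, hidden)
A = rng.standard_normal((8, 16))   # first MLP weight
B = rng.standard_normal((16, 8))   # second MLP weight

def gelu(x):
    # tanh approximation of GeLU, applied elementwise
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

# Reference: the unpartitioned computation.
Y_ref = gelu(X @ A) @ B

# Split A column-wise and B row-wise across two simulated GPUs.
A_shards = np.split(A, 2, axis=1)
B_shards = np.split(B, 2, axis=0)

# Each "GPU" computes its partial output with no communication,
# because GeLU is elementwise and acts on whole columns.
partials = [gelu(X @ A_i) @ B_i for A_i, B_i in zip(A_shards, B_shards)]

# A single all-reduce (here: a plain sum) recovers the full result.
Y_par = sum(partials)

assert np.allclose(Y_ref, Y_par)
```

In a real implementation the `sum(partials)` is a collective all-reduce across devices; the point of the column/row split order is that it is the only communication the block needs.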

Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models

This paper trains multiple full language models in parallel on different types of data, for example legal text, then merges the parameters of the different language models together. This parallelizes training across domains, but it doesn't by itself make the model larger.

We present Branch-Train-Merge (BTM), a communication-efficient algorithm for embarrassingly parallel training of large language models (LLMs). We show it is possible to independently train subparts of a new class of LLMs on different subsets of the data, eliminating the massive multi-node synchronization currently required to train LLMs. BTM learns a set of independent expert LMs (ELMs), each specialized to a different textual domain, such as scientific or legal text. These ELMs can be added and removed to update data coverage, ensembled to generalize to new domains, or averaged to collapse back to a single LM for efficient inference. New ELMs are learned by branching from (mixtures of) ELMs in the current set, further training the parameters on data for the new domain, and then merging the resulting model back into the set for future use. Experiments show that BTM improves in- and out-of-domain perplexities as compared to GPT-style Transformer LMs, when controlling for training cost. Through extensive analysis, we show that these results are robust to different ELM initialization schemes, but require expert domain specialization; LM ensembles with random data splits do not perform well. We also present a study of scaling BTM into a new corpus of 64 domains (192B whitespace-separated tokens in total); the resulting LM (22.4B total parameters) performs as well as a Transformer LM trained with 2.5 times more compute. These gains grow with the number of domains, suggesting more aggressive parallelism could be used to efficiently train larger models in future work.
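The "averaged to collapse back to a single LM" step from the abstract amounts to a weighted mean over matching parameters. Here is a toy sketch, assuming each expert LM (ELM) is reduced to a dict of same-shaped parameter arrays; the names (`merge`, `legal_elm`, `sci_elm`) are illustrative, not the paper's actual API.

```python
import numpy as np

def merge(elms, weights=None):
    """Average the parameters of several ELMs into one model.

    `elms` is a list of dicts mapping parameter name -> array;
    all dicts must share the same keys and shapes.
    """
    weights = weights or [1.0 / len(elms)] * len(elms)
    return {
        k: sum(w * e[k] for w, e in zip(weights, elms))
        for k in elms[0].keys()
    }

# Two domain experts branched from the same seed LM (toy parameters).
legal_elm = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
sci_elm   = {"w": np.array([3.0, 4.0]), "b": np.array([1.5])}

single_lm = merge([legal_elm, sci_elm])
# single_lm["w"] -> [2.0, 3.0], single_lm["b"] -> [1.0]
```

The alternative the abstract mentions, ensembling, instead keeps all ELMs and combines their output distributions at inference time, trading efficiency for accuracy.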

Aran Komatsuzaki on bottlenecks in LLM training

Memory isn’t really a big bottleneck for mainstream LLMs nowadays, as we can use some of the tricks shown in the paper. The big bottleneck is compute, i.e., getting enough GPU-hours. I’ve heard that going to the trillion-parameter regime brings instability problems, but I could be wrong.