Zero Bubble Pipeline Parallelism

Published on

January 10, 2025



Zero Bubble Pipeline Parallelism is a novel pipeline parallelism algorithm that reduces the pipeline bubble to almost zero while preserving synchronous training semantics.

Check out our paper at:

Try out our Megatron-based implementation at https://github.com/sail-sg/zero-bubble-pipeline-parallelism

Play with our scheduler on HuggingFace Space

Experiments show that zero bubble pipeline parallelism can accelerate training by up to 30% with similar memory consumption. A detailed table of experiments is coming soon.

Zero Bubble Schedules

The key to achieving zero bubble is splitting the backward pass into a B pass, which computes the gradient with respect to the input, and a W pass, which computes the gradient with respect to the weights. B on one stage depends only on B of its next stage, whereas in 1F1B each stage depends on the combined B and W of its next stage.
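
To illustrate the split, here is a minimal PyTorch sketch (illustrative only, not the Megatron implementation) for a single linear layer: the B pass produces the input gradient needed by the previous stage, while the W pass producing the weight gradient can be deferred and scheduled later to fill bubbles.

```python
import torch

# Minimal sketch of splitting the backward of y = x @ weight.T into a B pass
# and a W pass that can be scheduled independently (illustrative names).

def backward_B(grad_output, weight):
    # B pass: gradient w.r.t. the input -- this is all the previous stage needs.
    return grad_output @ weight

def backward_W(grad_output, x):
    # W pass: gradient w.r.t. the weight -- can be deferred to fill bubbles.
    return grad_output.t() @ x

x = torch.randn(4, 16)            # activations saved from the forward pass
weight = torch.randn(8, 16)
grad_output = torch.randn(4, 8)   # gradient arriving from the next stage

grad_input = backward_B(grad_output, weight)   # unblocks the previous stage
grad_weight = backward_W(grad_output, x)       # can run whenever convenient
```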


Comparison of Schedules

  • 1F1B (schedule diagram)
  • ZB1P (schedule diagram)
  • ZB2P (schedule diagram)
  • ZBV (schedule diagram) - Each device is assigned exactly 2 chunks (virtual stages); in the diagram, white text marks the first chunk and black text marks the second chunk. The dependencies among model chunks follow a "V"-shaped pattern for both the forward and backward passes (a small sketch of this chunk-to-device assignment follows this list).
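
As a rough illustration of the "V"-shaped assignment (a sketch with assumed helper names, not code from our repository), each of the 2p model chunks maps to a device as follows:

```python
def zbv_chunk_to_rank(p):
    # For p pipeline ranks with 2 chunks per rank, map model chunk index
    # (0 .. 2p-1, ordered from input to output) to the rank holding it.
    # The "V" shape: ranks 0, 1, ..., p-1 for the first chunk, then
    # p-1, ..., 1, 0 for the second chunk.
    return list(range(p)) + list(range(p - 1, -1, -1))

print(zbv_chunk_to_rank(4))  # [0, 1, 2, 3, 3, 2, 1, 0]
```
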
Comparison assuming T_F = T_B = T_W (p is the number of pipeline stages, m the number of microbatches):

|  | 1F1B | ZB1P | ZB2P | ZBV (Recommended) |
|---|---|---|---|---|
| Bubble Rate | (p-1)/(m+p-1) | (p-1)/3(m+p-1) | 0 | 0 |
| Activation Memory (Compared to 1F1B) | 1x | 1x | 2x | 1x |
| Pipeline Communication Volume (Compared to 1F1B) | 1x | 1x | 1x | 2x |
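
As a quick sanity check of the bubble-rate formulas above, the following snippet (purely illustrative) evaluates them for an example configuration, still assuming T_F = T_B = T_W:

```python
def bubble_rate_1f1b(p, m):
    # p: number of pipeline stages, m: number of microbatches
    return (p - 1) / (m + p - 1)

def bubble_rate_zb1p(p, m):
    return (p - 1) / (3 * (m + p - 1))

p, m = 8, 32
print(f"1F1B: {bubble_rate_1f1b(p, m):.1%}")  # ~17.9%
print(f"ZB1P: {bubble_rate_zb1p(p, m):.1%}")  # ~6.0%
# ZB2P and ZBV are bubble-free under the same assumption.
```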

Optimizer Post Validation

In most PP practices there is an all-reduce across all pipeline stages before the optimizer step for numerical robustness, e.g. computing the global gradient norm for gradient clipping, or the INF/NAN check for mixed precision training. This all-reduce breaks the parallelogram shape of the schedule and makes zero bubble impossible. Based on the observation that during stable training both gradient clipping and INF/NAN are rarely triggered, we replace the beforehand synchronizations with a post-update validation.
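
For reference, the conventional beforehand synchronization looks roughly like this (a simplified sketch built on torch.distributed; structure and names are illustrative):

```python
import torch
import torch.distributed as dist

def pre_step_sync_then_step(optimizer, grads, max_norm=1.0):
    # Conventional approach: a global all-reduce *before* the optimizer step.
    # Every pipeline stage blocks here, which breaks the zero-bubble schedule.
    device = grads[0].device
    sq_norm = sum(g.float().pow(2).sum() for g in grads)
    found_inf = torch.tensor(
        float(any(not torch.isfinite(g).all() for g in grads)), device=device)

    stats = torch.stack([sq_norm, found_inf])
    dist.all_reduce(stats)                        # cross-stage synchronization
    global_norm, has_inf = stats[0].sqrt().item(), stats[1].item() > 0

    if not has_inf:
        clip = min(1.0, max_norm / (global_norm + 1e-6))
        for g in grads:
            g.mul_(clip)
        optimizer.step()                          # step only after the barrier
```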


We eagerly step the optimizer, assuming the gradient clipping and INF/NAN conditions are not triggered. In the rare case that an amendment to the gradient is required, a rollback is issued and the optimizer step is redone based on the fully reduced global state.
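
A minimal sketch of this rollback idea (illustrative structure and names, not the actual Megatron-based implementation, which also restores optimizer states such as Adam moments and overlaps the validation with the pipeline):

```python
import torch
import torch.distributed as dist

def step_with_post_validation(optimizer, params, max_norm=1.0):
    # Keep a cheap parameter backup so the optimistic step can be undone.
    backup = [p.detach().clone() for p in params]

    optimizer.step()  # eager step: no cross-stage synchronization beforehand

    # Post-update validation: reduce grad norm and INF/NAN flag across stages.
    device = params[0].grad.device
    sq_norm = sum(p.grad.float().pow(2).sum() for p in params)
    found_inf = torch.tensor(
        float(any(not torch.isfinite(p.grad).all() for p in params)), device=device)
    stats = torch.stack([sq_norm, found_inf])
    dist.all_reduce(stats)
    global_norm, has_inf = stats[0].sqrt().item(), stats[1].item() > 0

    if has_inf or global_norm > max_norm:
        # Rare path: roll back, amend the gradients, and redo the step
        # based on the fully reduced global state.
        with torch.no_grad():
            for p, saved in zip(params, backup):
                p.copy_(saved)
        if not has_inf:
            for p in params:
                p.grad.mul_(max_norm / (global_norm + 1e-6))
            optimizer.step()
```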