Getting Started with Habana: DeepSpeed Optimization on Large Models

On Demand Webinar

As models grow larger, libraries and techniques are needed to reduce memory usage so that models fit into device memory. In this webinar, you’ll learn the basic steps needed to enable DeepSpeed on Gaudi and see how the ZeRO-1 and ZeRO-2 memory optimizers and activation checkpointing can reduce the memory footprint of a large model. We’ll also show you how to use Gaudi’s APIs to measure the peak memory usage of any model and offer guidance on when to use these techniques. A live Q&A session follows the presentation.
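To make the ZeRO and activation-checkpointing discussion concrete, here is a minimal sketch of a DeepSpeed setup in Python. It uses the public DeepSpeed config schema; the batch size, optimizer settings, and toy model are placeholder assumptions, and the Gaudi-specific launch flow covered in the webinar may differ.

```python
# Minimal sketch: a DeepSpeed config with the ZeRO-2 optimizer and activation
# checkpointing enabled. Config keys follow the public DeepSpeed schema; the
# batch size, Adam settings, and toy model are placeholder assumptions.
import torch
import deepspeed

ds_config = {
    "train_batch_size": 64,
    "bf16": {"enabled": True},                 # BF16 mixed precision, commonly used on Gaudi
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 2},         # stage 1 shards optimizer states; stage 2 also shards gradients
    "activation_checkpointing": {
        "partition_activations": True,         # recompute activations in backward to cut memory
        "contiguous_memory_optimization": False,
    },
}

model = torch.nn.Linear(4096, 4096)            # stand-in for a large model
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Switching `"stage"` from 2 to 1 keeps optimizer-state sharding but leaves gradients unsharded, trading some memory savings for less communication.

For the peak-memory portion, the sketch below assumes the `habana_frameworks.torch.hpu` memory utilities follow the torch.cuda-style naming used in Intel Gaudi’s documentation (`reset_peak_memory_stats`, `max_memory_allocated`, `memory_stats`); verify the exact names against the release you are running.

```python
# Illustrative sketch: measuring peak device memory around one training step.
# Assumes torch.cuda-style memory APIs in habana_frameworks.torch.hpu; check
# the Intel Gaudi documentation for the exact names in your release.
import torch
import habana_frameworks.torch.core as htcore
import habana_frameworks.torch.hpu as hthpu

device = torch.device("hpu")
model = torch.nn.Linear(4096, 4096).to(device)   # stand-in for a large model
batch = torch.randn(32, 4096, device=device)

hthpu.reset_peak_memory_stats()                  # clear any previously recorded peak

loss = model(batch).sum()
loss.backward()
htcore.mark_step()                               # flush lazy-mode execution so memory is actually allocated

peak_bytes = hthpu.max_memory_allocated()        # peak bytes allocated on the HPU
print(f"Peak HPU memory: {peak_bytes / 2**30:.2f} GiB")
print(hthpu.memory_stats())                      # full allocator statistics, if needed
```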


Presenters:

Greg Serochi

Developer Advocate and Applications Engineering