
Intel® Gaudi® AI Accelerators Blog

In training workloads, certain scenarios can trigger graph re-compilations. Repeated re-compilation adds latency and slows the overall training process across iterations. This blog focuses on detecting these graph re-compilations.
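As a rough illustration of what detection can look like, here is a minimal sketch that counts graph compilations around a few training steps using the metric APIs in habana_frameworks.torch; the exact metric names and API surface depend on your SynapseAI/PyTorch release, so treat the identifiers below as assumptions and check the linked post and the Intel Gaudi documentation for the authoritative recipe.

```python
# Minimal sketch: counting graph compilations during a few training steps on Gaudi.
# Assumes the "graph_compilation" metric and metric_localcontext are available
# in your habana_frameworks.torch build (verify against your release's docs).
import torch
import habana_frameworks.torch.core as htcore
from habana_frameworks.torch.hpu.metrics import metric_localcontext

device = torch.device("hpu")
model = torch.nn.Linear(128, 64).to(device)

with metric_localcontext("graph_compilation") as gc_metric:
    for step in range(3):
        x = torch.randn(32, 128, device=device)
        loss = model(x).sum()
        loss.backward()
        htcore.mark_step()  # flush the accumulated ops as a graph for execution

# Stats report how many compilations happened and how long they took;
# compilations that keep recurring after warm-up usually point to dynamic shapes.
print(gc_metric.stats())
```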
In this post, we show you how to run Habana's DeepSpeed-enabled BERT 1.5B model from our Model-References repository.
In this post, we will learn how to run PyTorch Stable Diffusion inference on the Habana Gaudi processor, which is purpose-built to efficiently accelerate AI deep learning models.
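For orientation, a short sketch of Gaudi-accelerated Stable Diffusion inference via the Optimum Habana library is shown below; the checkpoint name, gaudi_config id, and pipeline arguments are illustrative assumptions, and the full post covers the exact setup.

```python
# Minimal sketch: Stable Diffusion inference on Gaudi with Optimum Habana.
# The model id and gaudi_config below are assumptions for illustration only.
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
scheduler = GaudiDDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_id,
    scheduler=scheduler,
    use_habana=True,       # run on the HPU device
    use_hpu_graphs=True,   # cache HPU graphs to avoid per-step recompilation
    gaudi_config="Habana/stable-diffusion",
)

images = pipeline(
    prompt="an astronaut riding a horse on mars",
    num_inference_steps=50,
).images
images[0].save("astronaut.png")
```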