We’re excited to introduce the release of Habana® SynapseAI® Software version 1.13.0

This release brings numerous enhancements and updates for an improved user experience.

As part of our focus on GenAI, we have improved performance and support for the LLaMA model family. This includes enabling FP8 inference for LLaMA 2-7B and pretraining for LLaMA 2-70B. In addition, we released our latest MLPerf™ 3.1 Training models: GPT3 and Stable Diffusion v2. For more information, check out Habana’s model performance page.

As demonstrated in our latest MLPerf™ 3.1 GPT3 training submission on Gaudi2, we have added FP8 data type support for Gaudi2 training. See FP8 Training with Habana Transformer Engine.
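
To give a flavor of the API, here is a minimal FP8 training sketch modeled on the Transformer Engine guide. The module path, the te.Linear layer, and the DelayedScaling recipe below follow that guide, but exact names and defaults may vary between releases, so treat this as an illustration rather than reference code:

```python
import torch
import habana_frameworks.torch.hpex.experimental.transformer_engine as te
from habana_frameworks.torch.hpex.experimental.transformer_engine import recipe

# A single FP8-capable linear layer; tensors live on the Gaudi ('hpu') device.
model = te.Linear(768, 3072, bias=True)
inp = torch.randn(2048, 768, device="hpu")

# Delayed-scaling recipe: amax/scale bookkeeping parameters per the guide.
fp8_recipe = recipe.DelayedScaling(margin=0, interval=1)

# Run the forward pass with FP8 autocasting enabled.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

# Backward uses the scales tracked by the recipe.
out.sum().backward()
```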

This release also introduces a new dimension of parallelism: support for DeepSpeed Model Sequence Parallelism for training. See the DeepSpeed User Guide for Training.
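
To sketch what this looks like at the attention layer: DeepSpeed 0.10.3 ships a DistributedAttention wrapper (DeepSpeed-Ulysses) that shards the sequence dimension across ranks. The tensor layout and helper function below are illustrative assumptions, so consult the user guide for the supported configuration:

```python
import torch
import torch.distributed as dist
from deepspeed.sequence.layer import DistributedAttention

class LocalAttention(torch.nn.Module):
    """Ordinary attention over a [seq, batch, heads, head_dim] layout."""
    def forward(self, q, k, v):
        q, k, v = (t.permute(1, 2, 0, 3) for t in (q, k, v))  # -> [b, h, s, d]
        out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
        return out.permute(2, 0, 1, 3)  # back to [s, b, h, d]

def build_sequence_parallel_attention(seq_parallel_group: dist.ProcessGroup):
    # Each rank feeds q/k/v sharded along the sequence dimension;
    # DistributedAttention all-to-alls them so every rank attends over
    # the full sequence for a subset of heads, then scatters back.
    return DistributedAttention(LocalAttention(), seq_parallel_group)
```

Because the shards are exchanged via all-to-all (a sequence shard traded for a head shard), the attention math itself is unchanged; only the data placement differs.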

As we roll out support for PyTorch eager and graph (torch.compile) modes, we have also released several examples, including ResNet50 training for PyTorch and PyTorch Lightning, BERT pretraining Phase 1, and BERT fine-tuning. You can find all of these references in our Model References GitHub repository. Note that torch.compile support is at an early stage: some models may not work, and performance may be affected.
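
As a quick taste of graph mode, the sketch below compiles a toy model using the hpu_backend string from the Gaudi PyTorch documentation. The model itself is hypothetical, and per the docs, eager and torch.compile modes are selected by running with the environment variable PT_HPU_LAZY_MODE=0:

```python
import torch
import habana_frameworks.torch.core  # registers the 'hpu' device with PyTorch

# A hypothetical toy model, just to exercise the compile path.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).to("hpu")

# 'hpu_backend' is the Gaudi backend name documented for torch.compile.
compiled = torch.compile(model, backend="hpu_backend")

x = torch.randn(64, 128, device="hpu")
out = compiled(x)  # first call triggers graph compilation; later calls reuse it
```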

In this release, we’ve also upgraded several key libraries: PyTorch 2.1.0, DeepSpeed 0.10.3, PyTorch Lightning 2.1.0, and TensorFlow 2.13.1.

Lastly, a reminder that support for Ubuntu 20.04 will be deprecated and replaced with Ubuntu 22.04 starting with SynapseAI 1.14.0. You can find more information on SynapseAI 1.13.0 on Habana’s release notes page.
