Welcome to Habana’s developer site.

Here you will find the content, guidance, tools, and support you need to build new AI models or migrate existing ones, and to optimize their performance to meet your AI requirements. You can also access the latest Gaudi software to build or update your infrastructure.

Get Started

Get access to Habana’s programmable Tensor Processor Core and SynapseAI® software stack with support for TensorFlow and PyTorch frameworks, along with our model garden, libraries, containers and tools that enable you to build popular AI models. Now supporting the new Gaudi®2 Processor!

Sign up for the latest Habana developer news, events, training, and updates.

Recent Posts and Tutorials

Check out these latest posts from our blog. 
One of the main challenges in training Large Language Models (LLMs) is that they are often too large to fit on a single node, or, even when they do fit, training is too slow. To address this, training can be parallelized across multiple Gaudi accelerators (HPUs).

If you want to train a large model using Megatron-DeepSpeed, but the model you want is not included in the implementation, you can port it to the Megatron-DeepSpeed package. Assuming your model is transformer-based, you can easily add your implementation, basing it on existing code.

We have optimized additional Large Language Models on Hugging Face using the Optimum Habana library.

In this release, we've upgraded versions of several libraries, including DeepSpeed 0.9.4, PyTorch Lightning 2.0.4, and TensorFlow 2.12.1.
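The first post above mentions parallelizing LLM training across multiple accelerators. The simplest form of this is data parallelism: each worker computes gradients on its own shard of the batch, and the gradients are then averaged across workers (an all-reduce) so every replica applies the same update. The sketch below is a framework-free, conceptual illustration of that idea; the function names are ours and are not part of any Habana or DeepSpeed API.

```python
# Conceptual sketch of data-parallel training (no real accelerators involved).
# shard_batch and all_reduce_mean are illustrative helpers, not a Habana API.

def shard_batch(batch, num_workers):
    """Split a batch into roughly equal shards, one per worker."""
    base, remainder = divmod(len(batch), num_workers)
    shards, start = [], 0
    for i in range(num_workers):
        end = start + base + (1 if i < remainder else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def all_reduce_mean(per_worker_grads):
    """Average gradients element-wise across workers (a simulated all-reduce)."""
    n = len(per_worker_grads)
    return [sum(g) / n for g in zip(*per_worker_grads)]

# Each worker would compute gradients on its own shard; after the averaging
# step, every replica holds identical gradients and stays in sync.
shards = shard_batch(list(range(10)), 3)        # e.g. [[0,1,2,3],[4,5,6],[7,8,9]]
avg = all_reduce_mean([[1.0, 2.0], [3.0, 4.0]])  # [2.0, 3.0]
```

In practice, frameworks such as PyTorch distributed training or DeepSpeed handle the sharding and the all-reduce for you; for very large models, pipeline and tensor parallelism (as in Megatron-DeepSpeed) split the model itself as well.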