One of the main challenges in training Large Language Models (LLMs) is that they are often too large to fit on a single node; even when they do fit, training may be too slow. To address this, training can be parallelized across multiple Gaudi accelerators (HPUs).
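As a flavor of what multi-HPU data-parallel training looks like, here is a minimal sketch using PyTorch's DistributedDataParallel with the HCCL backend; the tiny `torch.nn.Linear` model, the dummy loss, and the launch assumption (one process per Gaudi card, e.g. via mpirun or torchrun) are placeholders for illustration, not the setup from the post:

```python
import torch
import torch.distributed as dist
import habana_frameworks.torch.core as htcore        # registers the "hpu" device
import habana_frameworks.torch.distributed.hccl      # noqa: F401 -- registers the "hccl" backend

def main():
    dist.init_process_group(backend="hccl")          # one process per Gaudi card
    model = torch.nn.Linear(512, 512).to("hpu")      # stand-in for a real LLM
    model = torch.nn.parallel.DistributedDataParallel(model)
    optim = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(10):
        x = torch.randn(32, 512, device="hpu")
        loss = model(x).pow(2).mean()                # dummy loss for the sketch
        loss.backward()                              # gradients are all-reduced via HCCL
        optim.step()
        optim.zero_grad()
        htcore.mark_step()                           # flush the lazy-mode graph to the HPU

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```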
If you want to train a large model using Megatron-DeepSpeed, but the model you want is not included in the implementation, you can port it to the Megatron-DeepSpeed package. Assuming your model is transformer-based, you can add your implementation easily, basing it on existing code.
In this release, we’ve upgraded versions of several libraries, including DeepSpeed 0.9.4, PyTorch Lightning 2.0.4 and TensorFlow 2.12.1.
Habana Labs, an Intel company, and Genesis Cloud are collaborating to deliver a new class of cloud instances with Habana® Gaudi®2 accelerators to enable high-performance, high-efficiency deep learning training and inference workloads in the cloud.
In the 1.10 release, we’ve upgraded versions of several libraries, including PyTorch 2.0.1, PyTorch Lightning 2.0.0 and TensorFlow 2.12.0. We have added support for EKS 1.25 and OpenShift 4.12.
We’re excited to participate in this year’s ISC High Performance 2023 event in Hamburg, Germany. This year our team will demonstrate the capabilities of our Habana Gaudi2® processors, which deliver high-performance, high-efficiency deep learning training and inference.
Equus and Habana have teamed up to simplify the process of testing, implementing and deploying AI infrastructure based on Habana Gaudi2 processors.
In training workloads, there are scenarios in which graph re-compilations occur. Repeated iterations of graph compilation add latency and slow down the overall training process. This blog focuses on detecting these graph re-compilations.
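As a rough sketch of what detection can look like, recent SynapseAI releases expose a PyTorch metrics API; assuming that API is available in your version, the `graph_compilation` counter can be sampled around a training step (the model and input below are placeholders):

```python
import torch
import habana_frameworks.torch.core as htcore
from habana_frameworks.torch.hpu.metrics import metric_localcontext

model = torch.nn.Linear(128, 128).to("hpu")          # placeholder model
x = torch.randn(64, 128, device="hpu")

# Collect graph-compilation metrics only for the work inside this context
with metric_localcontext("graph_compilation") as gc_metric:
    y = model(x)
    htcore.mark_step()                               # force graph execution

# stats() reports counters such as the number of compilations and time spent
print(gc_metric.stats())
```

If the compilation count keeps growing across steady-state training iterations, that is a sign of re-compilations worth investigating (e.g., dynamic input shapes).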
In the 1.9 release, we’ve upgraded versions of several libraries, including PyTorch Lightning 1.9.4, DeepSpeed 0.7.7, fairseq 0.12.3, and Horovod v0.27.0.
In this article, you'll learn how to easily deploy multi-billion parameter language models on Habana Gaudi2 and get a view into Hugging Face's performance evaluation of Gaudi2 versus A100 on BLOOMZ.
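To give a taste of how simple such a deployment can be, here is a minimal, hedged sketch of running generation with a BLOOMZ checkpoint on a single HPU using plain transformers; the small `bigscience/bloomz-560m` checkpoint stands in for the multi-billion-parameter models the article covers, which also rely on optimizations not shown here:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import habana_frameworks.torch.core as htcore        # registers the "hpu" device

name = "bigscience/bloomz-560m"                      # small stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
model = model.eval().to("hpu")

inputs = tokenizer("Translate to French: Hello, world!",
                   return_tensors="pt").to("hpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```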
AWS and Habana collaborated to enable EFA Peer Direct support on the Gaudi-based AWS DL1 instances, offering users significant improvement in multi-instance model training performance.
AI is becoming increasingly important for retail use cases. It can provide retailers with advanced capabilities to personalize customer experiences, optimize operations, and increase sales. Habana has published a new Retail use case showing an ...
Our blog today features a Riken white paper, initially prepared and published by the Intel Japan team in collaboration with Kei Taneishi, research scientist with Riken’s Institute of Physical and Chemical Research. […]
We have upgraded versions of several libraries with SynapseAI 1.8.0, including PyTorch 1.13.1, PyTorch Lightning 1.8.6 and TensorFlow 2.11.0 & 2.8.4.
Habana’s Gaudi2 delivers amazing deep learning performance and price advantage for both training and large-scale deployments, but to capture these advantages developers need easy, nimble software and the support of […]
This tutorial provides example training scripts that demonstrate different DeepSpeed optimization technologies on HPU, focusing on memory optimizations such as the Zero Redundancy Optimizer (ZeRO) and activation checkpointing.
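For orientation, a DeepSpeed configuration enabling both techniques can be passed as a Python dict to deepspeed.initialize. This is a minimal sketch with illustrative values, not the tutorial's actual scripts; it is meant to be run under the deepspeed launcher, and the placeholder model would need to call DeepSpeed's checkpointing API for activation checkpointing to take effect:

```python
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)                  # placeholder model

ds_config = {
    "train_batch_size": 8,
    "bf16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    # ZeRO stage 1 partitions optimizer states across data-parallel workers
    "zero_optimization": {"stage": 1},
    # Configures recomputing activations in the backward pass instead of
    # storing them; the model code must invoke deepspeed.checkpointing
    "activation_checkpointing": {"partition_activations": True},
}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

Raising the ZeRO stage (2 partitions gradients as well, 3 also partitions parameters) trades communication for further memory savings.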