Intel® Gaudi® AI Accelerators Blog

Developer Blog
Our blog today features a RIKEN white paper, initially prepared and published by the Intel Japan team in collaboration with Kei Taneishi, a research scientist at RIKEN, Japan's Institute of Physical and Chemical Research. […]
We have upgraded versions of several libraries with SynapseAI 1.8.0, including PyTorch 1.13.1, PyTorch Lightning 1.8.6 and TensorFlow 2.11.0 & 2.8.4.
In this paper, we'll show how transfer learning can efficiently retrain an existing model on a new, unique dataset with equivalent accuracy and significantly less training time.
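As a taste of the approach, here is a minimal PyTorch sketch of the transfer-learning pattern: load a pretrained backbone, freeze its weights, and train only a new classification head on the new dataset. The ResNet-50 backbone and the ten-class head are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torchvision

# Load a pretrained backbone (torchvision's ResNet-50 as an illustrative choice).
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")

# Freeze the pretrained weights so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the new dataset
# (num_classes is a placeholder for the new task's label count).
num_classes = 10
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are handed to the optimizer, which is
# why training converges with far fewer updates than training from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```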
In this post, we show you how to run Habana's DeepSpeed-enabled BERT 1.5B model from our Model-References repository.
Habana's Gaudi2 delivers amazing deep learning performance and price advantage for both training and large-scale deployment, but to capture these advantages developers need easy, nimble software and the support of […]
This tutorial provides example training scripts that demonstrate DeepSpeed optimization technologies on HPU, focusing on the memory optimizations: the Zero Redundancy Optimizer (ZeRO) and activation checkpointing.
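For context, here is a minimal sketch of the kind of configuration the tutorial covers, combining ZeRO stage 2 with activation checkpointing. The toy model and every value below are illustrative assumptions, not the tutorial's actual settings.

```python
import torch
import deepspeed

# Toy model standing in for a real network (illustrative only).
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
)

# ZeRO stage 2 partitions optimizer state and gradients across workers;
# activation checkpointing recomputes activations in the backward pass
# instead of storing them, trading compute for memory.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 2,
        "contiguous_gradients": True,
        "overlap_comm": True,
    },
    "activation_checkpointing": {
        "partition_activations": True,
        "contiguous_memory_optimization": True,
    },
    "bf16": {"enabled": True},  # bfloat16 is the preferred dtype on Gaudi
}

# Normally launched via the deepspeed launcher, which sets up distributed state.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```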
The SDSC Voyager supercomputer is an innovative AI system designed specifically for science and engineering research at scale.
In this post, we will learn how to run PyTorch Stable Diffusion inference on the Habana Gaudi processor, which is purpose-built to accelerate deep learning models efficiently.
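Here is a minimal sketch of one way to do this with the Hugging Face optimum-habana library's Stable Diffusion pipeline; the model checkpoint, prompt, and argument values are illustrative and may differ from the post's exact code.

```python
import torch
from optimum.habana.diffusers import GaudiStableDiffusionPipeline

# Load the pipeline on HPU; the Gaudi config hosted on the Hub supplies
# device-specific settings such as mixed-precision ops.
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.bfloat16,
    use_habana=True,
    use_hpu_graphs=True,  # capture HPU graphs to cut host-side overhead
    gaudi_config="Habana/stable-diffusion",
)

images = pipeline(
    prompt="an astronaut riding a horse on the moon",
    num_inference_steps=50,
).images
images[0].save("astronaut.png")
```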
In this tutorial, we will learn how to write code that automatically detects which type of AI accelerator is installed on the machine (Gaudi, GPU, or CPU) and makes the changes needed to run smoothly on it.
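A minimal sketch of such detection logic, assuming the Habana PyTorch bridge (habana_frameworks.torch) is only importable on Gaudi machines:

```python
import torch

def get_device() -> torch.device:
    """Return the best available device: Gaudi HPU, then CUDA GPU, then CPU."""
    try:
        # The Habana PyTorch bridge registers the "hpu" device with PyTorch;
        # the import fails on machines without the Gaudi software stack.
        import habana_frameworks.torch.hpu as hthpu
        if hthpu.is_available():
            return torch.device("hpu")
    except ImportError:
        pass
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = get_device()
model = torch.nn.Linear(16, 4).to(device)  # the same code runs on any backend
print(f"Running on: {device}")
```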
The Habana® team is excited to be at re:Invent 2022, November 28 – December 1. We’re proud that Amazon EC2 DL1 instances featuring Habana Labs Gaudi deep learning accelerators are providing an […]
We have upgraded versions of several libraries with SynapseAI 1.7.0, including DeepSpeed 0.7.0, PyTorch Lightning 1.7.7, TensorFlow 2.10.0 & 2.8.3, Horovod 0.25.0, libfabric 1.16.1, EKS 1.23, and OpenShift 4.11.
The Habana® team is excited to be in Dallas at SuperComputing 2022. We look forward to sharing the latest performance advances for Gaudi®2, expanded software support and partner solutions from Supermicro, Inspur […]
Today MLCommons® published industry results for its MLPerf™ Training v2.1 AI benchmark, which drew an impressive number of submissions.
In this tutorial, we will demonstrate fine-tuning a GPT-2 model on Habana Gaudi AI processors using the Hugging Face optimum-habana library with DeepSpeed.
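A condensed sketch in the spirit of that tutorial is below; the gpt2 checkpoint, the wikitext dataset, and the ds_config.json path are stand-in assumptions, not the tutorial's exact choices.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
)
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# wikitext-2 stands in for whatever corpus you are fine-tuning on.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = GaudiTrainingArguments(
    output_dir="./gpt2-finetuned",
    use_habana=True,                  # run on HPU
    use_lazy_mode=True,               # Gaudi's lazy execution mode
    gaudi_config_name="Habana/gpt2",  # Hub-hosted Gaudi configuration
    deepspeed="ds_config.json",       # path to a DeepSpeed config (assumed to exist)
    per_device_train_batch_size=4,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```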
Three new Gaudi2 server solutions feature the Habana Gaudi2 deep learning processor, which demonstrated leading deep learning time-to-train results in the June 2022 MLPerf benchmark.
One of the key challenges in Large Language Model (LLM) training is reducing the memory requirements needed for training without sacrificing compute/communication efficiency and model accuracy.
AI is transforming enterprises with valuable business insights, increased operational efficiency, and enhanced user experiences through innovative applications that can fuel growth. However, enterprise customers face the daily challenge of balancing business growth with developing, testing, and managing AI and deep learning models, workloads, and systems, from initial development through application deployment.
Stay Informed: Register for the latest Intel Gaudi AI Accelerator developer news, events, training, and updates.