FAQs

The Intel® Developer Cloud console lets you access a node with eight Intel® Gaudi® 2 accelerators on an hourly basis at a reasonable cost. Please see the Get Access section for details on getting access.

Use the GPU Migration Toolkit. It converts Python code containing CUDA and other GPU-specific commands into code that Gaudi can understand. The conversion happens at runtime during execution; the original code is not modified. You simply add the GPU Migration library import to the model script. The GPU Migration Toolkit user guide walks you through the steps to ensure the model is functional on Intel® Gaudi® accelerators.
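In practice, enabling the toolkit amounts to a single import at the top of an otherwise unmodified script. The sketch below assumes the Intel Gaudi software stack is installed (it will not run without it); the module path follows the GPU Migration Toolkit guide, and `MyModel` is a hypothetical placeholder:

```python
# Add this one import at the top of an unmodified, CUDA-targeted script.
# It intercepts CUDA-specific calls at runtime; no other source changes are needed.
import habana_frameworks.torch.gpu_migration  # requires the Intel Gaudi software stack
import torch

model = MyModel()         # hypothetical model class, standing in for your own
model = model.to("cuda")  # "cuda" calls are transparently mapped to the Gaudi device
```

Because the mapping is applied in memory at execution time, the source file on disk remains plain CUDA-style PyTorch.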

There are two main places to find fully documented and optimized models: the Intel Gaudi Model References and the Optimum-Habana library from Hugging Face. These two GitHub repositories contain fully optimized and documented models with all the instructions needed to download the datasets and run the models. Please see the Getting Started page for more information.

Specifically, the Getting Started page provides information on how to migrate models to the Intel® Gaudi® processor, videos to watch, and direct links to our detailed documentation. First-time users of Intel Gaudi processors are encouraged to start with our Quick Start Guide.

Start with the Optimum-Habana library. This is a dedicated library that allows Hugging Face Transformers- and Diffusers-based models to run on Intel® Gaudi® accelerators. You can go to the Hugging Face page to get started.
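The library mirrors the familiar Transformers Trainer API, so existing fine-tuning scripts usually need only a couple of class swaps. A minimal sketch (class names from the Optimum-Habana documentation; the model, dataset, and Gaudi config name are placeholders, and running it requires Gaudi hardware):

```python
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# GaudiTrainingArguments extends TrainingArguments with Gaudi-specific flags.
training_args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,     # run on Intel Gaudi accelerators
    use_lazy_mode=True,  # use lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # example config from the Hugging Face Hub
)

# GaudiTrainer is a drop-in replacement for the Transformers Trainer.
trainer = GaudiTrainer(
    model=model,                  # placeholder: any supported Transformers model
    args=training_args,
    train_dataset=train_dataset,  # placeholder dataset
)
trainer.train()
```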

We recommend that most users run the Intel Gaudi PyTorch Docker image, as it contains all the Intel Gaudi software, drivers, and libraries needed to run models successfully. Please refer to the Docker installation instructions for how to pull and run the Intel Gaudi Docker images.
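As a sketch, pulling and launching a Gaudi PyTorch container looks like the following. The `<version>`, `<os>`, and `<tag>` segments below are placeholders; copy the exact image name from the Docker installation instructions:

```shell
# Pull an Intel Gaudi PyTorch image from the Habana vault registry
# (placeholder path; see the install guide for the current image name).
docker pull vault.habana.ai/gaudi-docker/<version>/<os>/habanalabs/pytorch-installer:<tag>

# Run it with the Habana container runtime, exposing all Gaudi devices.
docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all --net=host \
  vault.habana.ai/gaudi-docker/<version>/<os>/habanalabs/pytorch-installer:<tag>
```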

The performance numbers for training and inference are on the developer website here. This includes the latest MLPerf performance numbers.

Please go to the Intel Gaudi Developers Forum, https://forum.habana.ai to post questions and see responses from the Intel Gaudi team and the broader community.

Go to the Software Verification section of the Installation Guide to confirm the current version of the Intel Gaudi software on your platform.

Yes, you can use the Gaudi TPC Kernel SDK. Its kernels are written in TPC-C, so any custom CUDA kernel would have to be rewritten as a Gaudi TPC kernel. See here: TPC Programming — Gaudi Documentation 1.14.0 documentation (habana.ai)

If you still have questions, post them on the Intel Gaudi forum for developers. For specific questions about using the PyTorch models on GitHub, you can post in the Issues section of the Model References GitHub page.