
Using Hugging Face

Hugging Face is an open-source repository of transformer- and diffuser-based models and datasets for generative AI and LLMs. It includes training (including DeepSpeed) and inference examples organized by task. The Intel Gaudi accelerators team has partnered with Hugging Face to create the Optimum Habana library, which allows Hugging Face models to run on the Intel Gaudi AI accelerator. The library replaces generic Hugging Face code with Gaudi-specific configurations and is pre-configured for many popular training and inference tasks. The tasks and model examples are fully validated and documented, so you can start training and running inference in just a few minutes.

Follow the video below for a step-by-step walkthrough of how to get started using Hugging Face on Intel® Gaudi® accelerators.

Video: Get started using Hugging Face on Intel® Gaudi® AI accelerators.

Loading Hugging Face Models

Installing Optimum Habana and running examples can be done in these easy steps:

  1. Get access to a Gaudi node and run the Intel Gaudi PyTorch Docker image, using Intel Gaudi software version 1.15.0.*
  2. Install the Optimum Habana library:
    pip install optimum[habana]==1.11.0
  3. Clone the examples repository:
    cd ~
    git clone -b v1.11.0 https://github.com/huggingface/optimum-habana
  4. Select a Hugging Face task from the location below and follow the directions in its README (see the example run after this list):
    cd ~/optimum-habana/examples
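
For example, once inside the examples directory you can launch one of the validated tasks. The exact arguments come from each task's README and can differ between releases; the run below is an illustrative sketch for the text-classification task, assuming the model name, Gaudi configuration, and hyperparameters shown here (check the README in your checked-out tag for the exact command):

    cd ~/optimum-habana/examples/text-classification
    pip install -r requirements.txt    # task-specific dependencies
    python run_glue.py \
      --model_name_or_path bert-large-uncased-whole-word-masking \
      --gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
      --task_name mrpc \
      --do_train \
      --do_eval \
      --per_device_train_batch_size 32 \
      --learning_rate 3e-5 \
      --num_train_epochs 3 \
      --max_seq_length 128 \
      --use_habana \
      --use_lazy_mode \
      --output_dir ./output/mrpc/

The --use_habana, --use_lazy_mode, and --gaudi_config_name arguments are the Gaudi-specific additions; the remaining arguments are the standard Hugging Face run_glue.py options.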

* If you are unsure which version of the Intel Gaudi software you are running, refer to the Software Verification page and the Support Matrix in the documentation.
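
As a quick check from inside the container, assuming the Intel Gaudi driver stack and the Habana Python packages are installed, the following commands report the installed versions (exact package names may vary by release):

    hl-smi                    # the output header reports the driver and hl-smi versions
    pip list | grep habana    # lists the installed habana-* Python packages and their versions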

In most cases you will want to use the stable release, as shown above. This installs the most recent release of the optimum-habana library from PyPI and checks out the matching tag on GitHub for the Optimum Habana model examples.

If you need the latest working version, install the optimum-habana library from source (pip install git+https://github.com/huggingface/optimum-habana.git) and use the main branch from GitHub for all model examples.
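
A minimal sketch of that setup, assuming you want the library and the examples to track the main branch together:

    pip install git+https://github.com/huggingface/optimum-habana.git    # library built from source
    cd ~
    git clone https://github.com/huggingface/optimum-habana              # main is the default branch
    cd ~/optimum-habana/examples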
