Using LocalGPT with Llama2
Published:
Using LocalGPT on Intel® Gaudi®2 AI accelerators with the Llama2 model to chat with your local documentation
Llama2 Fine-Tuning with Low-Rank Adaptations (LoRA) on Intel® Gaudi®2 AI Accelerator
Published:
Fine-tune Llama2 more efficiently with Low-Rank Adaptations (LoRA) on Intel Gaudi2 AI accelerators
Profiling and Optimization
Published:
How to use the Gaudi Profiler tool and TensorBoard plug-in to modify any model for better performance.
Detecting Dynamic Shapes
Published:
How to find dynamic data and ops in your models, and ways to help reduce these for better performance
Large Model usage with minGPT
Published:
This tutorial provides example training scripts to demonstrate different DeepSpeed optimization technologies on HPU.
Getting started with AWS DL1 and PyTorch
Published:
Set up an Amazon EC2 DL1 instance and start training a PyTorch model on Gaudi
Accelerate Transformer training with Optimum Habana
Published:
Migrate a Hugging Face model to Habana Gaudi
Getting Started with Hugging Face Transformers
Published:
Set up a Habana Gaudi instance on Amazon Web Services, and fine-tune a BERT model …
Finetune Transformers Models with PyTorch Lightning
Published:
An adaptation of the Finetune Transformers Models with PyTorch Lightning tutorial using Habana Gaudi AI processors. This notebook …
Introduction To PyTorch Lightning
Published:
An adaptation of the Introduction to PyTorch Lightning tutorial using Habana Gaudi AI processors.
In this tutorial, we’ll go over the basics of Lightning by preparing models to train on the MNIST Handwritten Digits dataset
PyTorch Mixed Precision
Published:
Overview: Mixed precision is the use of both 16-bit and 32-bit floating-point types in a …
Training a Classifier
Published:
An adaptation of the Training a Classifier tutorial using Habana Gaudi AI processors. In this tutorial, we …
Quick Start
Published:
An adaptation of the PyTorch Quickstart tutorial using Habana Gaudi AI processors