Learn how to use TGI-Gaudi and LangChain to build and deploy a RAG application
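As a rough illustration of what such a pipeline can look like, here is a minimal sketch that points LangChain's generic Text Generation Inference client at a TGI-Gaudi server and wires it into a retrieval chain. The endpoint URL, example documents, and embedding model are assumptions for illustration, not taken from the article.

```python
# Minimal RAG sketch against a TGI-Gaudi endpoint via LangChain.
from langchain_community.llms import HuggingFaceTextGenInference
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# TGI-Gaudi exposes the standard Text Generation Inference HTTP API,
# so LangChain's generic TGI client can talk to it.
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080",  # assumed local TGI-Gaudi server
    max_new_tokens=256,
    temperature=0.1,
)

# Build a small in-memory vector store from a few example documents.
docs = ["Gaudi2 is a deep learning accelerator.", "TGI serves LLMs over HTTP."]
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_texts(docs, embeddings)

# Retrieval-augmented QA: fetch relevant chunks, then generate with the TGI-served LLM.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.invoke({"query": "What does TGI do?"}))
```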
Learn how to scale model training with Fully Sharded Data Parallel (FSDP) using PyTorch and Intel Gaudi accelerators
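The sketch below shows the basic shape of an FSDP training loop. The Gaudi-specific pieces (the "hpu" device, the "hccl" process-group backend, and the habana_frameworks import) are assumptions based on the standard Intel Gaudi PyTorch integration; the model and data are placeholders.

```python
# Minimal FSDP training sketch for Gaudi; launch with torchrun, one process per card.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

import habana_frameworks.torch.core as htcore  # registers the "hpu" device


def main():
    dist.init_process_group(backend="hccl")  # torchrun provides RANK/WORLD_SIZE
    device = torch.device("hpu")

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
    ).to(device)

    # Wrap the model so parameters, gradients, and optimizer state are
    # sharded across ranks instead of fully replicated.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(8, 1024, device=device)
        y = torch.randint(0, 10, (8,), device=device)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        htcore.mark_step()  # flush the lazy-mode graph on Gaudi

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

A run command such as `torchrun --nproc_per_node=8 fsdp_train.py` would spread the shards across eight Gaudi cards.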
We are excited to see Meta release Llama 2, which helps further democratize access to LLMs. Making such models more widely available will facilitate efforts across the AI community to benefit the world at large.
Habana Labs, an Intel company, and Genesis Cloud are collaborating to deliver a new class of cloud instances with Habana® Gaudi®2 accelerators to enable high-performance, high-efficiency deep learning training and inference workloads in the cloud.
We’re excited to participate in this year’s ISC High Performance Compute 2023 event in Hamburg, Germany. This year our team will demonstrate the capabilities of our Habana Gaudi2® processors, which deliver high-performance, high-efficiency deep learning training and inference.
Announcing a new end-to-end use case: training a semantic segmentation model for autonomous driving
In this article, you'll learn how to easily deploy multi-billion parameter language models on Habana Gaudi2 and get a view into the Hugging Face performance evaluation of Gaudi2 and A100 on BLOOMZ.
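As a hedged sketch of the deployment side, the snippet below runs text generation for a BLOOMZ checkpoint on a single Gaudi2 card. The `adapt_transformers_to_gaudi()` call and the "hpu" device are assumptions based on the usual Optimum Habana integration; the article's BLOOMZ benchmarks additionally rely on multi-card DeepSpeed inference, which is not shown here.

```python
# Rough single-card generation sketch for a BLOOMZ model on Gaudi2.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.habana.transformers.modeling_utils import adapt_transformers_to_gaudi

adapt_transformers_to_gaudi()  # patch Transformers with Gaudi-optimized model code

model_name = "bigscience/bloomz-7b1"  # smaller BLOOMZ variant that fits one card
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model = model.to("hpu").eval()

inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt").to("hpu")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```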
AI is becoming increasingly important for retail use cases. It can provide retailers with advanced capabilities to personalize customer experiences, optimize operations, and increase sales. Habana has published a new Retail use case showing an ...
In this article, you will learn how to use Habana® Gaudi®2 to accelerate model training and inference, and train bigger models with 🤗 Optimum Habana.
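For a feel of the workflow, here is a minimal fine-tuning sketch with 🤗 Optimum Habana's drop-in Trainer replacement. The dataset, checkpoint, and Gaudi configuration name are placeholder assumptions; the article walks through the full training and inference setup in more depth.

```python
# Minimal fine-tuning sketch on Gaudi with Optimum Habana's GaudiTrainer.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Small text-classification dataset, tokenized for the model.
dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

args = GaudiTrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    use_habana=True,      # run on HPU instead of CPU/GPU
    use_lazy_mode=True,   # Gaudi lazy execution mode
)

# GaudiConfig selects Habana-specific options such as fused optimizers.
gaudi_config = GaudiConfig.from_pretrained("Habana/bert-base-uncased")

trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```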