Intel® Gaudi® AI Accelerators Blog: BERT
01/31/2023
Pre-Training the BERT 1.5B model with DeepSpeed
In this post, we show you how to run Habana's DeepSpeed-enabled BERT 1.5B model from our Model-References repository; a minimal configuration sketch follows below.
Tags: BERT, DeepSpeed, Gaudi, Gaudi2, pytorch, synapseai
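The full walkthrough is in the linked post. As a rough orientation only, the snippet below sketches the general pattern the post covers: building a BERT-scale model and handing it to DeepSpeed for pre-training. The model dimensions, ZeRO stage, batch size, and optimizer settings are illustrative assumptions, not values taken from the post or from the Model-References scripts; running on Gaudi additionally requires Habana's DeepSpeed fork and the SynapseAI software stack.

```python
# Minimal sketch (assumed values): wrapping a ~1.5B-parameter BERT model
# with DeepSpeed for pre-training. This is NOT the Model-References script;
# it only illustrates where DeepSpeed enters the training setup.
import deepspeed
from transformers import BertConfig, BertForPreTraining

# Approximate ~1.5B-parameter BERT configuration (assumed for illustration).
config = BertConfig(
    hidden_size=1600,
    num_hidden_layers=48,
    num_attention_heads=25,
    intermediate_size=6400,
)
model = BertForPreTraining(config)

# Illustrative DeepSpeed config: ZeRO stage 1 shards optimizer states,
# bf16 is a common mixed-precision choice on Gaudi.
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "zero_optimization": {"stage": 1},
    "bf16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# deepspeed.initialize returns an engine that handles the distributed
# optimizer, gradient accumulation, and checkpointing.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```

In practice the Model-References repository ships ready-made launch scripts and DeepSpeed JSON configs for this model, so the sketch above is only meant to show the shape of the setup, not replace those scripts.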