Fine-Tune LLMs and Optimize Inference with Intel® Gaudi® Accelerators
Overview
This workshop builds hands-on proficiency with Intel® Gaudi® accelerators, focusing on fine-tuning large language models (LLMs) and optimizing inference.
During the hands-on stage of the workshop, you fine-tune state-of-the-art models by taking advantage of Intel Gaudi accelerator capabilities.
The workshop covers these Intel® Gaudi® technology topics:
- Discover effective techniques for fine-tuning LLMs.
- Learn about parameters that are specific to the Intel Gaudi accelerator for training and inference.
- Learn the best practices for model development.
- Explore supporting APIs and libraries, including Microsoft DeepSpeed*, Optimum for Intel® Gaudi® AI accelerators, and PyTorch*.
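As a taste of the hands-on stage, the snippet below sketches the typical fine-tuning setup with the Optimum library for Intel Gaudi accelerators (the `optimum-habana` package), which wraps the Hugging Face `Trainer` API. This is an illustrative configuration only, not workshop material: it assumes Gaudi hardware with the Habana software stack installed, and the model name, output directory, and `gaudi_config_name` are placeholder choices.

```python
# Sketch: fine-tuning on Intel Gaudi with Optimum Habana (assumes Gaudi
# hardware and the optimum-habana package; names below are placeholders).
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")      # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# GaudiTrainingArguments extends TrainingArguments with Gaudi-specific
# parameters such as use_habana and use_lazy_mode.
training_args = GaudiTrainingArguments(
    output_dir="./gaudi-finetune",       # placeholder path
    use_habana=True,                     # run on the Gaudi device (HPU)
    use_lazy_mode=True,                  # lazy-mode graph execution on Gaudi
    gaudi_config_name="Habana/gpt2",     # Gaudi config from the Hugging Face Hub
    per_device_train_batch_size=8,
    num_train_epochs=1,
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=None,  # supply a tokenized dataset here
    tokenizer=tokenizer,
)
# trainer.train()  # launches fine-tuning on the Gaudi accelerator
```

The same `GaudiTrainer` object also drives evaluation and inference, and DeepSpeed can be enabled through the standard `deepspeed` argument of the training arguments when scaling across multiple Gaudi cards.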