Build Advanced RAG Applications with Intel® CPUs and GPUs
Overview
In this immersive, hands-on session, delve into the world of local large language models (LLMs) and discover how to harness their power for retrieval augmented generation (RAG)-based AI applications. This session equips you with the skills and knowledge to design and implement RAG-based AI systems using local LLMs, eliminating the need for cloud-based services and keeping your data private and secure.
Developers can follow along with the examples and code on Intel® Tiber™ AI Cloud.
Topics in this workshop include:
- Understand the fundamentals of local LLMs and RAG-based AI.
- Gain hands-on experience with integrating local LLMs with RAG-based AI systems.
- Deploy local LLMs using popular frameworks and tools, including Hugging Face* Transformers and PyTorch* (see the first sketch after this list).
- Run models on your own hardware to maximize security and data privacy.
- Discover how to integrate local LLMs with RAG-based AI systems: retrieve relevant information, augment the prompt with that context, and generate natural-sounding text (see the second sketch after this list).
- Explore real-world use cases and case studies of local LLM-powered RAG-based AI applications.
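As a taste of the deployment step, here is a minimal sketch of loading and running a local causal LLM with Hugging Face Transformers and PyTorch. The checkpoint name is a placeholder assumption; any locally downloaded causal-LM checkpoint can be substituted.

```python
# Minimal sketch: load a local LLM with Hugging Face Transformers + PyTorch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-1B-Instruct"  # assumption: swap in your local checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # smaller memory footprint on CPU or GPU
)
model.eval()

prompt = "Explain retrieval augmented generation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```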
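And here is a minimal retrieve-augment-generate sketch of the RAG flow itself. The embedding model, the tiny in-memory document list, and the generation model are illustrative assumptions, not the workshop's exact stack.

```python
# Minimal RAG sketch: retrieve a relevant chunk, augment the prompt, generate.
from sentence_transformers import SentenceTransformer, util  # assumption: any local embedder works
from transformers import pipeline

documents = [
    "Intel Tiber AI Cloud provides managed access to Intel CPUs and GPUs.",
    "Retrieval augmented generation grounds LLM answers in retrieved documents.",
    "Local LLMs keep prompts and data on hardware you control.",
]

# 1. Retrieve: embed the corpus and the question, keep the closest chunk.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

question = "Why run a local LLM for RAG?"
query_embedding = embedder.encode(question, convert_to_tensor=True)
best_idx = util.cos_sim(query_embedding, doc_embeddings).argmax().item()
context = documents[best_idx]

# 2. Augment: prepend the retrieved context to the user question.
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"

# 3. Generate: run a locally hosted causal LM on the augmented prompt.
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")  # placeholder model
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```

In a production system the in-memory list would be replaced by a vector store, but the three-step structure stays the same.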