Tech.Decoded Library
Here you’ll find a continuously growing library of knowledge curated to help you get the most out of modern hardware, bolster your competitive edge, and get to market faster.
Artificial Intelligence (AI)
Learn about Contrastive Language Image Pretraining (CLIP) architecture to train embedded models with Intel® Gaudi® 2 accelerators.
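The contrastive objective at the heart of CLIP can be sketched in a few lines of NumPy. This is an illustrative example under our own naming (it is not the Gaudi-accelerated training code the article describes): matching image/text embedding rows are treated as positive pairs, and a symmetric cross-entropy pulls them together.

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used in CLIP-style pretraining.

    img_emb, txt_emb: (batch, dim) arrays where matching rows are
    positive image/text pairs.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature      # (batch, batch) similarity matrix
    labels = np.arange(logits.shape[0])     # diagonal entries are the true pairs

    def xent(l):
        # cross-entropy of each row against its diagonal label
        l = l - l.max(axis=1, keepdims=True)             # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
aligned = clip_contrastive_loss(emb, emb)   # perfectly paired embeddings
random_pair = clip_contrastive_loss(emb, rng.normal(size=(4, 8)))
```

Training drives the loss toward the aligned case, pulling matching image and text embeddings together while pushing mismatched pairs apart.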
Learn how Intel® AMX, the built-in AI accelerator in 4th Gen Intel® Xeon® processors, together with Intel-optimized PyTorch, accelerates training and inference.
Learn how to quickly train LLMs on Intel® processors, and then train and fine-tune a custom chatbot using open models and readily available hardware.
This article shows how to fine-tune a multimodal large language model (MLLM), Meta Llama-3.2-11B-Vision-Instruct, on an image-captioning dataset.
This article provides detailed documentation on how to install and use JAX.
oneDNN Graph API, supported in PyTorch 2.0, leverages aggressive fusion patterns to accelerate inference and generate efficient code on AI hardware.
This article demonstrates how to run an AI upscaling model on Intel's AI Boost neural processing unit (NPU).
This article explains how to create a RAG-based browser extension that uses the OpenVINO™ toolkit to efficiently summarize content from web pages or PDF files.
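The retrieval step of such a RAG pipeline can be sketched with a toy bag-of-words similarity. A real extension would use a neural embedding model (e.g. one accelerated with the OpenVINO™ toolkit); all function names and sample chunks below are illustrative, not from the article.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real RAG pipeline would use a
    neural embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank document chunks by similarity to the query; the top-k chunks
    are what gets passed to the LLM as context for summarization."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "OpenVINO optimizes deep learning inference on Intel hardware.",
    "The weather today is sunny with a light breeze.",
    "RAG augments a language model with retrieved context.",
]
top = retrieve("How does RAG use retrieved context?", chunks, k=1)
```

Only the retrieved chunks, not the whole document, are sent to the LLM, which keeps prompts short and summaries grounded in the source text.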
This article shows how to develop an AI travel agent that answers travel- and tourism-related queries on AI PCs.
This in-depth solution demonstrates how to train a model to perform language identification using Intel® Extension for PyTorch. Includes code samples.
This article shows how to create an AI avatar chatbot on Intel® Xeon® Scalable processors and Intel® Gaudi® AI accelerators with PyTorch and OPEA.
Get an intro to the scikit-learn machine-learning library, plus Intel's extension for it, performance benefits, and a step-by-step code walkthrough.
Get a primer on LLM optimization techniques on Intel® CPUs, then learn about (and try) Q8-Chat, a ChatGPT-like experience from Hugging Face and Intel.
UC Davis accelerates prompt-driven GenAI for data visualization using Intel® Extension for PyTorch* on Intel® GPUs.
An Intel® processor and an ASUS AI server improved accuracy in detecting hand joint erosion, a symptom of rheumatoid arthritis.
Guide GenAI models to make more accurate predictions by ensuring GenAI systems are built on solid, data-driven foundations to reach their potential.
A developer’s guide to getting started with generative AI using Intel AI technologies.
Discover proven methods of dealing with LLM hallucinations in your enterprise GenAI applications and increasing their reliability.
Get practical tips for developing AI applications in the cloud.
This article demonstrates how to boost PyTorch Inductor performance on Windows for CPU devices with the Intel® oneAPI DPC++/C++ Compiler.
Explore and download the final kits from Intel and Accenture* built to simplify AI development for key industry use cases—energy and utilities, retail, manufacturing, financial services, and more.
Expand your skills in AI training and inference performance, including finding and fixing bottlenecks using Intel-optimized AI tools and libraries.
Introducing Intel® Tiber™ AI Cloud, built on the backbone of Intel® Tiber™ Developer Cloud and designed for production-scale AI deployments.
Use the OpenVINO™ toolkit to optimize and deploy generative AI models on Intel® Core™ Ultra processors, the backbone of AI PCs from Intel.
Implement a genetic algorithm to perform an offload computation to a GPU using numba-dpex for Intel® Distribution for Python*.
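The structure of such a genetic algorithm can be sketched in plain Python. In the article, the data-parallel fitness evaluation is the part offloaded to the GPU as a numba-dpex kernel; this host-only sketch (objective and names are ours, purely illustrative) just shows the select/crossover/mutate loop.

```python
import random

def fitness(x):
    # Toy objective to maximize: a single peak at x = 3
    return -(x - 3.0) ** 2

def evolve(pop_size=50, generations=100, seed=42):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness evaluation is the data-parallel step a numba-dpex
        # kernel would offload to the GPU.
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                     # crossover: average two parents
            child += rng.gauss(0, 0.1)              # mutation: small Gaussian nudge
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # converges near the optimum at x = 3
```

Because every individual's fitness is computed independently, this inner evaluation maps naturally onto a GPU kernel, which is what makes the offload worthwhile for large populations.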
Supercharge your generative AI solutions with this guide's top tips and tricks for LLM fine-tuning and inference.
This article shows the initial performance results for Llama 3.2 on Intel's AI product portfolio, including Intel® Gaudi® AI accelerators, Intel® Xeon® processors, and AI PCs.
This article guides you through the process of upscaling images generated by Stable Diffusion using the StableDiffusionUpscalePipeline from the diffusers library.
Intel is proud to be one of the founding members of the Unified Acceleration Foundation (UXL), an open unified parallel compute ecosystem for Edge, AI, HPC, IoT & more. It all started one year ago!
Build, optimize, and deploy AI apps on AI PCs with ONNX and OpenVINO™ toolkit across diverse environments.
The untapped opportunities offered by AI PCs are largely due to the integration of CPU, GPU, and NPU resources.
We are sharing our initial performance results of Llama 3 models on the Intel AI product portfolio using open-source software.
Using Roboflow and Intel® Xeon® processors, Blue Eco Line created a computer vision system capable of identifying and monitoring pollution.
The SYCLomatic tool from the Intel® oneAPI Base Toolkit achieved a 2.0x speedup on an Intel GPU without manual tuning.
A guide to performing INT8 and INT4 quantization on an LLM (the Intel/neural-chat-7b model) with the Weight-Only Quantization (WOQ) technique.
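The core idea of weight-only quantization can be sketched in NumPy. This illustrative example (not the guide's code, which uses Intel's tooling on real checkpoints) shows per-row symmetric INT8 quantization and the dequantization applied at inference time.

```python
import numpy as np

def quantize_int8(w):
    """Per-row symmetric INT8 weight-only quantization.

    Returns INT8 weights plus the per-row scales needed to dequantize.
    (Sketch only; production libraries also implement INT4 and
    group-wise scales for LLM checkpoints.)
    """
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0   # one scale per row
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Weights are stored as INT8 and expanded back to float at inference time.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)   # stand-in for an LLM weight tile
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = float(np.abs(w - w_hat).max())          # bounded by half a quantization step
```

Because only the weights are quantized (activations stay in floating point), memory footprint drops roughly 4x versus FP32 while the rounding error per weight stays within half a quantization step.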
Overcome limitations in Python with the Intel® Distribution for Python*, which enables developers to achieve near-native performance for multithreaded apps.
Learn how to build a practical GenAI solution by exploring examples from ChatQnA and Microsoft Copilot, powered by Intel® Gaudi® AI Accelerators.
Eliminate slow, inefficient AI with optimization techniques that deliver stunning performance and scalability in the data center.
Hugging Face uses hardware acceleration, small language models, and quantization to run state-of-the-art open source LLMs on a typical PC.
Get in-depth performance insights for your OpenVINO™ toolkit deep learning model-based applications targeting CPU, GPU, and NPU.
Learn the basics of deploying AI applications on AI PCs and get expert tips and resources.
See how this platform uses Intel® Gaudi® 2 AI accelerators to ensure data privacy and security without sacrificing accuracy and scalability.
Get a comprehensive evaluation of Intel CPU and GPU performance within the cutting-edge context of federated learning and an ASUS healthcare solution.
Intel® oneAPI Deep Neural Network Library (oneDNN) increases deep learning performance on various hardware architectures.
Profile data-parallel Python with Intel® VTune™ Profiler to analyze and speed up NumPy, Numba, Python, and PyTorch applications.
Learn the best practices and tools for building high-performance generative AI applications on Intel’s budget-friendly GPUs.
Learn how to convert a PyTorch model to the GPT-Generated Unified Format (GGUF), a binary file format that optimizes LLM storage and processing.
Get the steps for running an open source Stable Diffusion model on Intel® Gaudi® AI accelerators to create your own unique piece of art.
Introducing the low-precision quantized open LLM leaderboard, a new tool for finding high-quality models that can be deployed on a given client.
Explore how small form-factor AI PCs can deftly run the Llama 3 70B parameter model locally and at lower cost than a workstation.
The newly developed SYCL backend in llama.cpp—a light, open source LLM framework—enables developers to deploy on the full spectrum of Intel GPUs.
Explore expert techniques for programming with SYCL to develop optimized multi-GPU applications on Intel® Tiber™ Developer Cloud.