OpenVINO™ toolkit: an open-source AI toolkit that makes "write once, deploy anywhere" easy.
Latest Features
Easier Model Access and Conversion
| Product | Details |
|---|---|
| New Model Support | New models supported: Phi-4-Mini, jina-clip-v1, and bce-embedding-base-v1. |
| OpenVINO™ Model Server Updates | Now supports vision language models (VLM), including Qwen2-VL, Phi-3.5-vision, and InternVL2. |
GenAI and LLM Enhancements
Expanded model support and accelerated inference.
| Feature | Details |
|---|---|
| CPU Plug-in Optimizations | Reduced the binary size through optimization of the CPU plug-in and removal of the General Matrix Multiplication (GEMM) kernel. |
| Kernel Optimization | New optimized kernels for the GPU plug-in significantly boost the performance of long short-term memory (LSTM) models used in many applications, including speech recognition, language modeling, and time series forecasting. |
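For orientation, an LSTM step combines four gates (input, forget, candidate, output) to update a cell state and hidden state. The following is a minimal pure-Python sketch of one LSTM cell step; it only illustrates the computation that the optimized GPU kernels accelerate, and is not OpenVINO code — all names here are illustrative, and real kernels fuse and vectorize these loops.

```python
import math

def lstm_cell_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step (illustrative, unoptimized).

    x, h_prev, c_prev: lists of floats.
    W, U: dicts mapping gate name ("i", "f", "g", "o") to a weight matrix
    (list of rows); b: dict mapping gate name to a bias vector.
    """
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

    def affine(gate):
        # W[gate] @ x + U[gate] @ h_prev + b[gate]
        return [
            sum(wij * xj for wij, xj in zip(W[gate][row], x))
            + sum(uij * hj for uij, hj in zip(U[gate][row], h_prev))
            + b[gate][row]
            for row in range(len(b[gate]))
        ]

    i = [sigmoid(z) for z in affine("i")]    # input gate
    f = [sigmoid(z) for z in affine("f")]    # forget gate
    g = [math.tanh(z) for z in affine("g")]  # candidate cell state
    o = [sigmoid(z) for z in affine("o")]    # output gate

    # New cell state: keep part of the old state, add part of the candidate.
    c = [fi * ci + ii * gi for fi, ci, ii, gi in zip(f, c_prev, i, g)]
    # New hidden state: gated view of the cell state.
    h = [oi * math.tanh(ci) for oi, ci in zip(o, c)]
    return h, c
```

Because each step depends on the previous hidden and cell state, the recurrence is inherently sequential, which is why per-step kernel efficiency matters so much for LSTM throughput.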
More Portability and Performance
Develop once, deploy anywhere. OpenVINO toolkit enables developers to run AI at the edge, in the cloud, or locally.
| Product | Details |
|---|---|
| Intel® Hardware Support | |
| GenAI API Enhancements | |
| Back End Integration | Preview: The new OpenVINO toolkit back end for ExecuTorch enables accelerated inference and improved performance on Intel hardware, including CPUs, GPUs, and NPUs. |
| Paged Attention and Continuous Batching Updates | Enhanced LLM performance and efficient resource use with the implementation of paged attention and continuous batching by default in the GPU plug-in. |
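To make the paged attention and continuous batching row concrete: paged attention stores each sequence's key/value history in fixed-size blocks drawn from a shared pool, so KV-cache memory grows with tokens actually generated rather than a padded maximum, and continuous batching lets new requests join the decode loop between steps instead of waiting for a whole batch to finish. The sketch below is a conceptual pure-Python illustration of those two ideas, assuming a toy block allocator; none of these class or method names are OpenVINO APIs.

```python
class PagedKVCache:
    """Toy paged KV-cache allocator (conceptual sketch, not OpenVINO code)."""

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}  # seq_id -> list of physical block ids
        self.lengths = {}       # seq_id -> number of tokens stored

    def append_token(self, seq_id):
        """Reserve cache space for one newly generated token."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.lengths.get(seq_id, 0)
        if length % self.block_size == 0:  # current block full, or none yet
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted; a sequence must be preempted")
            table.append(self.free_blocks.pop())  # grab one block on demand
        self.lengths[seq_id] = length + 1

    def free_sequence(self, seq_id):
        """Return a finished sequence's blocks to the shared pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)


def continuous_batch_step(cache, active_seq_ids):
    """One decode step: every active sequence emits exactly one token.
    New requests can be added to active_seq_ids between steps rather
    than waiting for the current batch to drain."""
    for seq_id in active_seq_ids:
        cache.append_token(seq_id)
```

The design point the sketch shows is that block tables decouple a sequence's logical token order from physical memory layout, which is what lets differently sized sequences share one pool without fragmentation.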
Sign Up for Exclusive News, Tips, and Releases
Be among the first to learn about everything new with the Intel® Distribution of OpenVINO™ toolkit. By signing up, you get early access to product updates and releases, exclusive invitations to webinars and events, training and tutorial resources, contest announcements, and other breaking news.
Resources
Community and Support
Explore ways to get involved, and stay up to date with the latest announcements.