AI PC Development
ONNX* (Open Neural Network Exchange) Model and ONNX Runtime
ONNX* (Open Neural Network Exchange) is an open format for machine learning models, and ONNX Runtime is a cross-platform accelerator for machine learning inference and training. ONNX Runtime works with Intel® platforms and lets developers improve model performance while easily targeting multiple platforms. ONNX Runtime Execution Providers (EPs) delegate model execution to hardware-specific acceleration libraries. Intel platforms have two optimized EPs: the OpenVINO™ Execution Provider and the DirectML Execution Provider.
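As a rough illustration of how EPs are selected, the sketch below builds an ONNX Runtime inference session with an ordered provider list. The provider names (OpenVINO, DirectML, CPU) are the identifiers ONNX Runtime registers for those EPs; the model path "model.onnx" is a placeholder.

```python
# Minimal sketch: selecting an Intel-optimized Execution Provider (EP)
# in ONNX Runtime. "model.onnx" is a hypothetical model file.

def pick_providers(available):
    """Prefer the OpenVINO EP, then the DirectML EP, then the default
    CPU provider. ONNX Runtime tries EPs in the order given and assigns
    each graph node to the first EP that supports it."""
    preferred = [
        "OpenVINOExecutionProvider",  # OpenVINO EP (Intel CPU/GPU/NPU)
        "DmlExecutionProvider",       # DirectML EP (Windows)
        "CPUExecutionProvider",       # always-available fallback
    ]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

def create_session(model_path):
    """Build an InferenceSession using the best available EP.
    Returns None if onnxruntime is not installed."""
    try:
        import onnxruntime as ort
    except ImportError:
        return None
    providers = pick_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)
```

Listing a fallback provider after the accelerated EP matters: any operator the accelerated EP cannot handle still runs on the CPU provider instead of failing the whole session.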
Installation Guides
Use the following guides to get started with ONNX:
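For a quick start before following the full guides, the Intel-optimized EPs ship as separate pip packages; which one you install determines the EPs available at runtime. Package names below are the published PyPI names, but check the guides for the version matching your hardware and OS.

```shell
# OpenVINO Execution Provider build of ONNX Runtime (Intel CPU/GPU/NPU)
pip install onnxruntime-openvino

# Or: DirectML Execution Provider build (Windows)
pip install onnxruntime-directml
```

Install only one of these builds into a given environment; they each provide the `onnxruntime` module.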
Demos and Resources
Accelerate AI on Windows* with Intel® NPUs
See how Microsoft* and Intel collaborate on NPU technology.
Support for Intel® AI Boost
Learn about the developer preview of DirectML with ONNX Runtime, which enables support for Intel® Core™ Ultra processors with Intel® AI Boost.