TensorFlow* and Intel® oneAPI Deep Neural Network Library
Rapid growth in AI and machine learning innovation and workloads demands constant advances in both software and hardware infrastructure. Developers of TensorFlow* (Google's* end-to-end, open source machine learning framework) and the Intel® oneAPI Deep Neural Network Library (oneDNN) have been collaborating closely to let users take full advantage of new hardware features and accelerators, with a focus on x86 architecture. This talk covers recent projects, such as int8† and bfloat16‡ vectorization support that brings custom oneDNN operations to stock TensorFlow, as well as the upcoming Intel® XPU device plug-in for TensorFlow.
†Intel® Advanced Vector Extensions 512 (Intel® AVX-512) with Vector Neural Network Instructions (VNNI)
‡Intel® AVX-512 with bfloat16 support
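As a minimal sketch of how a user would exercise the custom oneDNN operations mentioned above: oneDNN support in stock TensorFlow is gated by the `TF_ENABLE_ONEDNN_OPTS` environment variable, which must be set before TensorFlow is imported (on recent x86 builds it is enabled by default; the exact default behavior depends on the TensorFlow version). The commented lines show the bfloat16 mixed-precision policy that maps onto the AVX-512 BF16 instructions.

```python
import os

# Opt in to oneDNN custom operations. This must happen before
# "import tensorflow", because the flag is read at library load time.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# The following requires TensorFlow to be installed; shown here as a sketch.
# import tensorflow as tf
# # Run compute in bfloat16 (vectorized via Intel AVX-512 BF16 where available)
# # while keeping variables in float32:
# tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
```

When the flag is active, TensorFlow logs a startup message noting that oneDNN custom operations are on and that numerical results may differ slightly due to floating-point round-off from different computation orders.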
Speaker
Penporn Koanantakool is a senior software engineer at Google. She leads the TensorFlow performance optimization collaboration with Intel. Penporn holds a PhD in computer science from the University of California, Berkeley, and a bachelor of engineering degree in computer engineering from Kasetsart University, Thailand.
Product and Performance Information
Performance varies by use, configuration, and other factors. Learn more at www.Intel.cn/PerformanceIndex.