Ditch the Complexity: Supercharge Inference with the Intel Deep Learning Deployment Toolkit

The easiest way to get the OpenVINO runtime is via pip, though for the full Model Optimizer you should download the complete OpenVINO toolkit (formerly the Intel Deep Learning Deployment Toolkit):

pip install openvino

Assume you have an ONNX export of your PyTorch model. Convert it to OpenVINO's Intermediate Representation (IR) with the Model Optimizer:

mo --input_model my_model.onnx --output_dir ./optimized_model

Here is a Python snippet to run your newly minted IR model:
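A minimal sketch using the OpenVINO runtime Python API (openvino >= 2022). The IR path matches the mo output directory above; the input shape (1, 3, 224, 224) and the preprocessing steps are assumptions for a typical image model, so adjust them to your network:

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Turn an HWC uint8 image into an NCHW float32 batch scaled to [0, 1]."""
    x = image.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dimension -> NCHW

def run_ir(image: np.ndarray,
           xml_path: str = "optimized_model/my_model.xml") -> np.ndarray:
    """Load the IR produced by mo and run one inference on CPU."""
    # Imported lazily so the sketch loads even without openvino installed.
    from openvino.runtime import Core

    core = Core()                                      # discovers devices
    compiled = core.compile_model(core.read_model(xml_path), "CPU")
    result = compiled([preprocess(image)])             # dict keyed by output port
    return next(iter(result.values()))
```

Call it with any HWC image array, e.g. `run_ir(np.zeros((224, 224, 3), dtype=np.uint8))`. The IR pairs an .xml topology file with a .bin weights file; `read_model` finds the .bin automatically when it sits next to the .xml.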