This example demonstrates how to perform inference using YOLOv8 in C++ with ONNX Runtime and OpenCV's API.
https://github.com/iamstarlee/YOLOv8-ONNXRuntime-CPP.git
A Transpose op is added to the YOLOv8 model so that v8 and v5 share the same output shape, which means you can run inference with YOLOv5/v7/v8 via this project.

To export YOLOv8 models, use the following Python script:
```python
from ultralytics import YOLO

# Load a YOLOv8 model
model = YOLO("yolov8n.pt")

# Export the model
model.export(format="onnx", opset=12, simplify=True, dynamic=False, imgsz=640)
```
Alternatively, you can use the following command to export the model in the terminal:

```bash
yolo export model=yolov8n.pt opset=12 simplify=True dynamic=False format=onnx imgsz=640,640
```
To export an FP16 model, convert the exported FP32 ONNX model with the onnxconverter-common package:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load(R"YOUR_ONNX_PATH")
model_fp16 = float16.convert_float_to_float16(model)
onnx.save(model_fp16, R"YOUR_FP16_ONNX_PATH")
```
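The inference code behaves differently for fp32 and fp16 models, so it can be useful to confirm which precision an exported model actually expects. The following is only a minimal sketch using the ONNX Runtime C++ API; it assumes the model's image input is at index 0, and on Windows the model path would need to be a wide string:

```c++
#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "precision-check");
    Ort::SessionOptions options;
    Ort::Session session(env, "yolov8n.onnx", options);

    // Inspect the element type of the first input tensor
    auto typeInfo = session.GetInputTypeInfo(0);
    auto tensorInfo = typeInfo.GetTensorTypeAndShapeInfo();
    auto elementType = tensorInfo.GetElementType();

    if (elementType == ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16)
        std::cout << "Model expects fp16 input" << std::endl;
    else if (elementType == ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT)
        std::cout << "Model expects fp32 input" << std::endl;
    else
        std::cout << "Unexpected input element type" << std::endl;
    return 0;
}
```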
In order to run the example, you also need to download coco.yaml, which provides the COCO class names. You can download the file manually from the Ultralytics repository.
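Only the class names are read from coco.yaml at runtime. This is not the project's actual loader, just a rough sketch of how the `names:` section (entries of the form `index: name`) could be parsed:

```c++
#include <fstream>
#include <string>
#include <vector>

// Sketch: collect the class names listed under the "names:" key of coco.yaml.
std::vector<std::string> ReadCocoClassNames(const std::string& path) {
    std::vector<std::string> names;
    std::ifstream file(path);
    std::string line;
    bool inNames = false;
    while (std::getline(file, line)) {
        if (line.rfind("names:", 0) == 0) {  // start of the names section
            inNames = true;
            continue;
        }
        if (!inNames)
            continue;
        if (line.empty() || line[0] != ' ')  // a non-indented line ends the section
            break;
        auto colon = line.find(':');
        if (colon != std::string::npos) {
            std::string name = line.substr(colon + 1);
            name.erase(0, name.find_first_not_of(" \t"));  // trim leading whitespace
            names.push_back(name);
        }
    }
    return names;
}
```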
| Dependency | Version |
|---|---|
| ONNX Runtime (Linux, Windows, macOS) | >=1.14.1 |
| OpenCV | >=4.0.0 |
| C++ standard | >=17 |
| CMake | >=3.5 |
| CUDA (optional) | >=11.4 \<12.0 |
| cuDNN (CUDA required) | =8 |
Note: Because of ONNX Runtime, we need to use CUDA 11 and cuDNN 8. Keep in mind that this requirement might change in the future.
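When `params.cudaEnable` is set (see the usage snippet below), the session must have the CUDA execution provider registered. A minimal sketch of what that registration looks like with the ONNX Runtime C++ API follows; the CUDA 11 and cuDNN 8 libraries have to be discoverable at runtime, and the model path type differs on Windows:

```c++
#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "yolov8");
    Ort::SessionOptions sessionOptions;

    // Register the CUDA execution provider before the session is created.
    OrtCUDAProviderOptions cudaOptions;
    cudaOptions.device_id = 0;  // choose which GPU to run on
    sessionOptions.AppendExecutionProvider_CUDA(cudaOptions);

    // Supported ops now run on the GPU; the rest fall back to the CPU.
    Ort::Session session(env, "yolov8n.onnx", sessionOptions);
    return 0;
}
```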
Create a build directory, run CMake to configure the build, and compile the project:

```bash
mkdir build && cd build
cmake ..
make
```
The built executable should now be located in the build directory.

To use the detector in your own code, fill in the initialization parameters and create a session:

```c++
// Change the parameters as you like
// Pay attention to your device and the ONNX model type (fp32 or fp16)
DL_INIT_PARAM params;
params.rectConfidenceThreshold = 0.1;
params.iouThreshold = 0.5;
params.modelPath = "yolov8n.onnx";
params.imgSize = { 640, 640 };
params.cudaEnable = true;
params.modelType = YOLO_DETECT_V8;

// yoloDetector is a pointer to the detector class defined in this project
yoloDetector->CreateSession(params);
Detector(yoloDetector);
```
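Once the session has been created, a single image can be processed and the detections drawn back onto it. The sketch below is only an illustration: it assumes the project's inference.h header, a `YOLO_V8*` detector named `yoloDetector` as above, and a `RunSession` method that fills a vector of `DL_RESULT` entries with `box`, `classId`, and `confidence` fields; check the headers of the version you build against, as these names may differ.

```c++
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

#include "inference.h"  // assumed project header providing YOLO_V8 and DL_RESULT

void DetectOneImage(YOLO_V8* yoloDetector) {
    cv::Mat img = cv::imread("bus.jpg");
    std::vector<DL_RESULT> results;

    // Run pre-processing, inference, and post-processing on one frame
    yoloDetector->RunSession(img, results);

    for (const auto& r : results) {
        // Draw the bounding box and label it with "classId confidence"
        cv::rectangle(img, r.box, cv::Scalar(0, 255, 0), 2);
        std::string label = std::to_string(r.classId) + " " + std::to_string(r.confidence);
        cv::putText(img, label, cv::Point(r.box.x, r.box.y - 5),
                    cv::FONT_HERSHEY_SIMPLEX, 0.6, cv::Scalar(0, 255, 0), 2);
    }
    cv::imwrite("result.jpg", img);
}
```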