Change ONNX model at runtime

TobiasUhmann
Posts: 9
Joined: Mon Nov 27, 2023 8:11 am

Change ONNX model at runtime

Postby TobiasUhmann » Mon Nov 27, 2023 10:12 am

Hi everyone,

In our project, we have so far been using TensorFlow Lite for Microcontrollers to run AI models on the ESP32. Now, however, we want to run ONNX models that cannot be converted to TensorFlow Lite. Luckily, we found ESP-DL, which even has a tutorial on running ONNX models: https://docs.espressif.com/projects/esp ... h-tvm.html

Unfortunately, however, I cannot find a way to change my ONNX model at runtime. The tutorial mentioned above describes how to generate C++ code for one specific ONNX model, and another tutorial shows how to write that code yourself. But there doesn't seem to be an API that simply takes an ONNX binary and dynamically builds the required objects on the heap. I need this because we update the model at runtime and don't know in advance what layers it consists of (the ONNX model may be generated from sklearn, TensorFlow, etc.). In TensorFlow Lite, I can load an arbitrary model binary like this:

Code: Select all

// Parse an arbitrary flatbuffer model binary at runtime
const tflite::Model *model = tflite::GetModel(this_model->binary);
Is there an equivalent in ESP-DL?

Thanks a lot in advance!

BlueSkyB
Posts: 4
Joined: Tue Nov 28, 2023 3:01 am

Re: Change ONNX model at runtime

Postby BlueSkyB » Tue Nov 28, 2023 6:28 am

If you use the TVM approach, the TVM architecture does not support changing the model at runtime.
ESP-DL does not support this either. First, ESP-DL has no graph-parsing functionality, so there is no interface for importing ONNX models. Second, there is no mechanism for swapping a model at runtime.
You may be able to implement this yourself by parsing the model and building it layer by layer.

TobiasUhmann
Posts: 9
Joined: Mon Nov 27, 2023 8:11 am

Re: Change ONNX model at runtime

Postby TobiasUhmann » Tue Nov 28, 2023 7:48 am

Thanks a lot for the confirmation.
