In our project, we have so far been using TensorFlow Lite for Microcontrollers to run AI models on the ESP32. Now, however, we want to run ONNX models that cannot be converted to TensorFlow Lite. Luckily, we found ESP-DL, which even has a tutorial on running ONNX models: https://docs.espressif.com/projects/esp ... h-tvm.html
Unfortunately, I cannot find a way to swap the ONNX model at runtime. The tutorial above describes how to generate C++ code for one specific ONNX model, and another tutorial shows how to write that code by hand, but there does not seem to be an API that simply takes an ONNX binary and dynamically builds the required objects in heap. We need this because we update the model at runtime and do not know in advance which layers it consists of (the ONNX models are exported from sklearn, TensorFlow, etc.). In TensorFlow Lite I can load an arbitrary model like this:
Code:
const tflite::Model *model = tflite::GetModel(this_model->binary);
Thanks a lot in advance!