esp-dl model conversion issues: current model not supported by esp-dl

ketavery
Posts: 2
Joined: Thu May 30, 2024 9:28 pm


Postby ketavery » Tue Jun 04, 2024 10:35 pm

I have been trying to deploy a basic CNN onto my ESP32-S3, but no matter how I configure the network, a MatMul node is always left in the model after optimizing the ONNX graph. I have followed a variety of guides, this one in particular: https://blog.espressif.com/hand-gesture ... 6d7e13fd37. Even after retracing its steps with the exact custom model from the post, I hit the same issue.

When generating the quantization table I get:

Generating the quantization table:
MatMul is not supported on esp-dl yet
Traceback (most recent call last):
  File "handexample.py", line 44, in <module>
    calib.generate_quantization_table(model_proto,calib_dataset, pickle_file_path)
  File "calibrator.py", line 344, in calibrator.Calibrator.generate_quantization_table
  File "calibrator.py", line 231, in calibrator.Calibrator.generate_output_model
  File "calibrator.py", line 226, in calibrator.Calibrator.check_model
ValueError: current model is not supported by esp-dl
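For reference, here is the quick check I used to see exactly which op types remain in the optimized graph. The SUPPORTED set below is a partial, assumed list; the authoritative list is in the esp-dl operator support documentation for your version:

```python
# Partial set of ONNX ops the esp-dl calibrator accepts.
# NOTE: this set is an assumption for illustration; check the esp-dl
# operator support list for the version you are using.
SUPPORTED = {"Conv", "Gemm", "Relu", "MaxPool", "AveragePool",
             "GlobalAveragePool", "Reshape", "Flatten", "Add",
             "Softmax", "Transpose"}

def unsupported_ops(op_types, supported=SUPPORTED):
    """Return the op types in the graph that the calibrator would reject."""
    return sorted(set(op_types) - supported)

def check(path):
    # onnx is an optional dependency here: pip install onnx
    import onnx
    model = onnx.load(path)
    return unsupported_ops(n.op_type for n in model.graph.node)
```

Running check() on my optimized model reports 'MatMul' as the only leftover op.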
Does anybody have any tips on why optimize_fp_model() consistently returns optimized models that still contain unsupported layers? Has anybody else run into this?
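One thing I am currently experimenting with: tf2onnx often exports a Keras Dense layer as separate MatMul + Add nodes rather than a single Gemm, and a fusion pass before esp-dl's optimizer might collapse them. The sketch below uses onnxoptimizer's built-in pass for this; the pass name and whether the resulting Gemm is accepted by the calibrator are assumptions to verify against your versions:

```python
def fuse_dense_to_gemm(in_path, out_path):
    # Optional dependencies: pip install onnx onnxoptimizer
    import onnx
    import onnxoptimizer

    model = onnx.load(in_path)
    # Rewrite MatMul followed by Add into a single Gemm node.
    # (Pass name taken from onnxoptimizer's built-in pass list;
    # verify it exists in your installed version.)
    fused = onnxoptimizer.optimize(model,
                                   ["fuse_matmul_add_bias_into_gemm"])
    onnx.save(fused, out_path)
```

If that fuses the leftover MatMul into a Gemm, the quantization table generation might get past the check_model step, but I have not confirmed this end to end.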
