Being new to this forum, let me briefly introduce myself.
I am a retired electrical engineer who spent the last 15 years of his career doing software engineering for other people. Now I have found the time to do it for myself, on the ESP32 and a few other platforms. Depending on the project at hand I switch between two development environments:
either ESP-IDF, running under Eclipse on Ubuntu 18.04 in a VirtualBox under Windows 10
or Arduino-IDE running under Windows 10
My current question arose in ESP-IDF. Starting from the example program gpio_example_main.c, I modified the ISR to this:
Code:
static void IRAM_ATTR gpio_isr_handler_v(void* arg)
{
    if (gpio_get_level(GPIO_V_IN)) {
        // rising edge
        gpio_set_level(GPIO_V_LO, 0);
        gpio_set_level(GPIO_V_HI, 1);
    } else {
        // falling edge
        gpio_set_level(GPIO_V_HI, 0);
        gpio_set_level(GPIO_V_LO, 1);
    }
}
Looking at the signals on an oscilloscope, I found a fairly stable time skew of 5.6 microseconds between GPIO_V_IN and GPIO_V_HI. Given a clock frequency of 40 MHz (?) and such a simple ISR, I expected much less. My application needs a skew of less than 1 microsecond. Is there anything I can do to get close to that?
There does indeed seem to be lower-level interrupt handling, declared in ...esp-idf\components\esp32\include\esp_intr_alloc.h, but no usage examples are given - or at least I could not find any. Moreover, higher-priority interrupts (above level 3) apparently require coding in assembler, which would mean another steep learning curve.
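For completeness, this is how I *guess* those declarations are meant to be used, pieced together from the header alone. It is only a sketch, not code I have run: the interrupt source constant, the flag combination, and the handler/function names are all my assumptions, so please correct me if I am misreading the API:

```c
#include "esp_attr.h"        /* IRAM_ATTR */
#include "esp_err.h"         /* ESP_ERROR_CHECK */
#include "esp_intr_alloc.h"
#include "soc/soc.h"         /* ETS_GPIO_INTR_SOURCE (my assumption) */

/* Hypothetical raw handler; with ESP_INTR_FLAG_IRAM it must live in IRAM.
   Unlike the gpio driver's per-pin callbacks, this one would have to read
   and clear the GPIO interrupt status registers itself. */
static void IRAM_ATTR my_raw_gpio_isr(void *arg)
{
    /* read/clear GPIO interrupt status here, then toggle the outputs */
}

static intr_handle_t s_handle;

void install_raw_isr(void)
{
    /* Level 3 seems to be the highest priority still usable from C;
       levels 4 and 5 apparently need assembly handlers. */
    ESP_ERROR_CHECK(esp_intr_alloc(ETS_GPIO_INTR_SOURCE,
                                   ESP_INTR_FLAG_LEVEL3 | ESP_INTR_FLAG_IRAM,
                                   my_raw_gpio_isr, NULL, &s_handle));
}
```

Is this roughly the intended usage, and would it actually buy me a meaningfully lower latency than the gpio_install_isr_service path used in the example?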
Thanks for any help!
Peter