Managing RTC clock drift in deep sleep?

I'm using the internal oscillator (136kHz on the ESP32-C3) as it's the lowest-power option for deep sleep. I don't have an external 32kHz crystal.
I wake the unit on the hour, do some work for a few seconds, then deep sleep until the next hour, using the internal 136kHz oscillator to keep the RTC running during sleep. I see the wake-up time can drift by up to +/-20s per hour. This seems strongly temperature-dependent, so even calibrating on every wake (rtc_slow_cal) doesn't help much.
How do others manage this? (Without resorting to a higher-accuracy clock source)
Re: Managing RTC clock drift in deep sleep?
A couple of months ago, I made a short video about this (https://www.youtube.com/watch?v=fZAR8WTKiSg). I tested it on ESP32-S3, but it should be valid for the ESP32-C3 as well.
As you noted, too, the clock drift is strongly temperature-dependent. This is compounded by the fact that the internal oscillator is calibrated when the chip is active (as opposed to in deep sleep), which increases the chip's temperature.
Before using discrete oscillators or crystals or RTC chips, you can improve the behavior in a couple of ways:
(1) if you regularly connect to WiFi, you can get the current NTP time and calculate the mean internal RC oscillator frequency between syncs (a rough sketch of this follows below the list). If your temperature is somewhat stable or close to periodic between successive timestamps, this will improve accuracy a lot.
(2) if you are not limited by battery lifetime, you can switch to the so-called internal fast RC oscillator, which is more accurate (I show the accuracy of it near the end of the video)
(3) if you feel adventurous and get a lot of time-temperature samples for your chip, you can numerically estimate the temperature behavior and compensate for it (I think I showed it in the second half of the video). Note that there's a bug in ESP-IDF that crashes an ESP32-S3 when sampling the internal temperature and using WiFi at the same time, and the ULP has issues reading the internal temperature sensor on the S3 as well – I don't know if the ESP32-C3 is affected by these issues.
Algorithmically, these are more or less all options.
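To make option (1) concrete, here is a minimal sketch, assuming the device sets its system time from SNTP on some wakes; the variable names, the RTC-memory bookkeeping and the helper functions are mine, not anything from ESP-IDF beyond the calls shown. The idea is to remember the time of the last sync, work out how fast the local (RC-driven) clock ran compared to true time, and scale the next sleep request by that factor.
Code:
#include <time.h>
#include <stdint.h>
#include "esp_attr.h"
#include "esp_sleep.h"

// Survives deep sleep in RTC slow memory.
RTC_DATA_ATTR static time_t s_last_true;    // true (NTP) time at the last sync
RTC_DATA_ATTR static double s_speed = 1.0;  // local seconds elapsed per true second

// Call once per NTP sync: 'local_now' read via time(NULL) just before SNTP
// corrected the system clock, 'true_now' read just after.
static void update_clock_speed(time_t local_now, time_t true_now)
{
    if (s_last_true != 0) {
        double true_elapsed  = difftime(true_now, s_last_true);
        double local_elapsed = difftime(local_now, s_last_true); // clocks agreed at the last sync
        if (true_elapsed > 0) {
            s_speed = local_elapsed / true_elapsed;  // >1.0 means the RC-driven clock runs fast
        }
    }
    s_last_true = true_now;
}

// Sleep for 'true_seconds' of real time: the wakeup timer counts "local" time,
// so a fast local clock needs a proportionally longer request.
static void deep_sleep_corrected(double true_seconds)
{
    esp_deep_sleep((uint64_t)(true_seconds * s_speed * 1e6));
}
The factor converges after two syncs, and because it is an average over the whole interval it also absorbs part of the wake/sleep frequency shift discussed further down the thread.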
Re: Managing RTC clock drift in deep sleep?
Thanks, I'll check out that video and if I get to revisit this I'll consider these options. Hearing someone else's thoughts largely reaffirms my understanding of the behaviour and what options we might have.

mattersofintrigue wrote: ↑Tue Nov 14, 2023 7:38 pm
A couple of months ago, I made a short video about this (https://www.youtube.com/watch?v=fZAR8WTKiSg). I tested it on ESP32-S3, but it should be valid for the ESP32-C3 as well.
...
Algorithmically, these are more or less all options.
In my experience the higher-frequency osc didn't have any better accuracy, so I might not have set that up right or maybe that's just how the C3 is.
In my case, we also control the wifi AP that these units connect to, so the option we're looking into is whether we can encode a timestamp into the AP's beacon and have the ESP32-C3 grab that timestamp without actually connecting to the AP, merely by scanning. We'll need to determine experimentally whether this is better than simply attempting to blindly connect to the last AP without a scan; if successful, that might be faster, albeit at higher power consumption, than waiting a few seconds for a scan to complete.
Temperature effects were definitely noticeable if the ESP32-C3 had wifi enabled just before going to sleep: the calibration is performed on wake (when at ambient temperature), but the heat built up by the active wifi then causes some drift after entering sleep as the unit cools back to ambient again.
Another option is to wake regularly between the major RTC syncs, recalibrate the RC osc against the crystal osc, and immediately sleep again, effectively compensating for changing temperature in real time.
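A minimal sketch of that recalibration-wake pattern; the interval constants, the wake counter and do_hourly_work() are hypothetical names of mine. It leans on the fact that ESP-IDF re-measures the RTC slow clock against the main crystal during every boot (the measurement length is set by CONFIG_RTC_CLK_CAL_CYCLES, or the per-target equivalent on older IDF versions), so even a wake that does nothing refreshes the calibration used for the next interval.
Code:
#include "esp_attr.h"
#include "esp_sleep.h"

#define SHORT_INTERVAL_US  (5ULL * 60 * 1000000)  // recalibration wake every 5 minutes
#define WAKES_PER_HOUR     12                     // every 12th wake does the real work

RTC_DATA_ATTR static int s_wake_count;

static void do_hourly_work(void)
{
    // application work goes here
}

void app_main(void)
{
    // By the time we get here, boot has already recalibrated the 136kHz RC
    // against the crystal at (roughly) the current temperature.
    if (++s_wake_count >= WAKES_PER_HOUR) {
        s_wake_count = 0;
        do_hourly_work();
    }
    // All other wakes do nothing: the boot-time recalibration is the whole point.
    // Each short wake's boot overhead adds up, so track it if exact on-the-hour
    // alignment matters.
    esp_deep_sleep(SHORT_INTERVAL_US);
}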
Re: Managing RTC clock drift in deep sleep?
Right, if you control the AP, you can simplify option 1. Set the ESP to promiscuous mode, wait until a beacon arrives (102.4ms max) and read wifi_pkt_rx_ctrl_t's timestamp, which should correspond to the field in the beacon frame (uptime in microseconds). While this gives only relative time, you can use it to compensate for the RC oscillator drift. The advantage of this approach (vs custom headers) is that you don't need custom code on the AP; you only need the AP's timestamps to be more accurate than the ESP's internal RTC.
Note that the timestamp field should be 8 bytes long, but afaik is only 4 bytes on ESP-IDF. Disabling modem-sleep (and light sleep) will improve accuracy.
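A minimal sketch of that passive approach, assuming the AP's channel and BSSID are known in advance; the constants, callback name and placeholder MAC are mine. It enables promiscuous mode filtered to management frames, parks on the AP's channel, and records rx_ctrl.timestamp from the first matching beacon.
Code:
#include <string.h>
#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_wifi.h"
#include "esp_event.h"
#include "esp_netif.h"
#include "nvs_flash.h"

static const uint8_t AP_BSSID[6] = {0xAA, 0xBB, 0xCC, 0x11, 0x22, 0x33}; // placeholder: your AP's MAC
#define AP_CHANNEL 6

static volatile uint32_t s_rx_timestamp_us;  // rx_ctrl.timestamp of the matching beacon
static volatile bool     s_got_beacon;

static void sniffer_cb(void *buf, wifi_promiscuous_pkt_type_t type)
{
    if (type != WIFI_PKT_MGMT) return;
    const wifi_promiscuous_pkt_t *pkt = (const wifi_promiscuous_pkt_t *)buf;
    // 802.11 mgmt header: frame ctrl(2) + duration(2) + addr1(6) + addr2(6) + addr3(6) + ...
    if (pkt->payload[0] != 0x80) return;                      // beacons only
    if (memcmp(&pkt->payload[16], AP_BSSID, 6) != 0) return;  // addr3 = BSSID
    s_rx_timestamp_us = pkt->rx_ctrl.timestamp;               // local microseconds, see note above
    s_got_beacon = true;
}

static void capture_one_beacon(void)
{
    nvs_flash_init();
    esp_netif_init();
    esp_event_loop_create_default();
    wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
    esp_wifi_init(&cfg);
    esp_wifi_set_mode(WIFI_MODE_NULL);   // sniffing only, no connection
    esp_wifi_start();

    const wifi_promiscuous_filter_t filter = { .filter_mask = WIFI_PROMIS_FILTER_MASK_MGMT };
    esp_wifi_set_promiscuous_filter(&filter);
    esp_wifi_set_promiscuous_rx_cb(sniffer_cb);
    esp_wifi_set_promiscuous(true);
    esp_wifi_set_channel(AP_CHANNEL, WIFI_SECOND_CHAN_NONE);

    while (!s_got_beacon) {              // worst case ~102.4ms; add a timeout in real code
        vTaskDelay(pdMS_TO_TICKS(10));
    }

    esp_wifi_set_promiscuous(false);
    esp_wifi_stop();
}
If you want the AP-side counter itself rather than the local receive time, the beacon's 8-byte TSF field sits right after the 24-byte 802.11 header in the payload.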
Re: Managing RTC clock drift in deep sleep?
I looked into this recently, too, on ESP32.
First I tried to use the slow clock frequency adjustment register (RTC_CNTL_SCK_DCAP) to periodically tune the frequency in deep sleep wakeup stubs. After spending too long on that, I noticed that increasing the frequency is detrimental to sleep power consumption, so that was scrapped.
I had more luck using the stub to accumulate actual time by periodically waking up and calculating the actual elapsed time based on the current actual frequency. This led to the revelation that the frequency changes dramatically between RTC sleep and wake states - so much so that in the brief window of time between deep sleep and the wake stub trying to determine the frequency, it may have slowed by more than 5000ppm.
This is worth looking into separately, in my opinion. There is probably scope to simply and significantly improve the ESP-IDF's tracking of time in deep sleep by adjusting for the higher deep sleep slow clock frequency. (Consider how often reports of slow clock inaccuracy describe devices waking up too soon vs the alternative.)
The typical 2nd stage bootloader and app startup is quite sluggish with logging enabled, so by the time RTC calibration rolls around, the slow clock has slowed down even more, and so the obtained value will be even further from the actual deep sleep frequency.
In searching for some examples just now, I found a discussion that reached a similar conclusion to mine, citing a 0.7% speed-up in sleep: https://github.com/espressif/esp-idf/issues/6687
Unfortunately this rate of change differs between devices. I had one with as low as 1000ppm, and one above 5000ppm. A few WROOM32Es off the same tape were clustered within ~200ppm of 4200ppm. I couldn't find any correlation between this value and any on-board information (eg. VREF) so unfortunately any calibration would need to be external. I fiddled with all sorts of registers, but nothing had any positive effect without an unacceptable increase in power consumption. If you don't mind >100uA there are many options, but I wasn't interested in anything that added >1uA.
Anyway, once the stub algorithm was in place and the device's specific sleep/wake difference ppm was dialled in, I found it is possible to get within about 100ppm error at the cost of a few hundred nA increase in average deep sleep current. I would have to go back and test again to refresh my memory to be sure. (I was sick to death of it by this point.)
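For anyone who wants the gain without going all the way to wake stubs, here's a minimal sketch of the core compensation, assuming the per-device sleep/wake difference has already been measured externally (the 4200ppm value is just the cluster I mention above, and the helper name is mine). Because the slow clock runs faster in deep sleep than the awake calibration assumes, the chip wakes early; stretching the requested interval by the measured ppm pushes the wakeup back where it belongs.
Code:
#include <stdint.h>
#include "esp_sleep.h"

// Per-device constant: how much faster the slow clock runs during deep sleep
// than when the boot-time calibration measured it (determine it externally,
// e.g. with the slow-clock GPIO output shown below).
#define SLEEP_SPEEDUP_PPM  4200ULL

// Ask for a proportionally longer interval so the faster-in-sleep clock
// still yields roughly the wanted real-time duration.
static void deep_sleep_compensated(uint64_t wanted_us)
{
    esp_deep_sleep(wanted_us + (wanted_us / 1000000ULL) * SLEEP_SPEEDUP_PPM);
}
For an hour of sleep, 4200ppm works out to roughly 15 seconds, which is the same order as the drift the OP is seeing.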
Only very very late into all this did I find that it's possible to output the slow clock on GPIO25/26 (code below). This would have saved a lot of time. If you're curious, enable that and then alternate between sleep/wake every 1min: if your oscilloscope can keep count of the frequency on that pin, you'll see a bimodal distribution as the slow clock transitions between the faster sleep frequency and the slower wake frequency.
Code:
// Headers (classic ESP32) for the register macros and RTC IO / SENS definitions used below.
#include <assert.h>
#include "soc/soc.h"
#include "soc/rtc_io_reg.h"
#include "soc/sens_reg.h"

// Route the RTC slow clock to GPIO25 or GPIO26 so it can be measured with a scope or counter.
static void output_rtc_slow_clock(int gpio)
{
    assert(gpio == 25 || gpio == 26);
    if (gpio == 25)
    {
        REG_SET_BIT(RTC_IO_PAD_DAC1_REG, RTC_IO_PDAC1_MUX_SEL_M);                  // RTC mux owns the pad
        REG_CLR_BIT(RTC_IO_PAD_DAC1_REG, RTC_IO_PDAC1_RDE_M | RTC_IO_PDAC1_RUE_M); // no pull-up/pull-down
        REG_SET_FIELD(RTC_IO_PAD_DAC1_REG, RTC_IO_PDAC1_FUN_SEL, 1);               // pad function 1
    }
    else if (gpio == 26)
    {
        REG_SET_BIT(RTC_IO_PAD_DAC2_REG, RTC_IO_PDAC2_MUX_SEL_M);
        REG_CLR_BIT(RTC_IO_PAD_DAC2_REG, RTC_IO_PDAC2_RDE_M | RTC_IO_PDAC2_RUE_M);
        REG_SET_FIELD(RTC_IO_PAD_DAC2_REG, RTC_IO_PDAC2_FUN_SEL, 1);
    }
    REG_SET_FIELD(SENS_SAR_DAC_CTRL1_REG, SENS_DEBUG_BIT_SEL, 0);
    REG_SET_FIELD(RTC_IO_RTC_DEBUG_SEL_REG, RTC_IO_DEBUG_SEL0, RTC_IO_DEBUG_SEL0_150K_OSC); // debug out = slow RC osc
}
tldr- If you're up for a challenge, there might be some gains to be made here. Otherwise, life's too short: use an external RTC and move on.
Re: Managing RTC clock drift in deep sleep?
Wow, some valuable stuff in there, thanks!
> use an external RTC
Do you mean an external crystal? (Or either, I suppose, unless the ESP32's crystal driver circuit isn't particularly low-power...?)
Re: Managing RTC clock drift in deep sleep?
Thanks for the valuable info, boarchuz. I agree that it’s generally not worth it to improve the internal RTC except by manually calculating the average frequency between two time synchronizations. The oscillator’s characteristics are just too unfavorable to compensate well – the temperature coefficient is upwards of 1000ppm/°C, and there's a frequency shift between wake and sleep modes, which is possibly related to its voltage coefficient in addition to its temperature coefficient.
Let’s assume your AP is on a fixed channel, that the AP sends accurate timestamps in its beacon, and that you receive it passively. The interval between two beacons is 102.4ms, which means you are using WiFi RX for around 51ms on average. WiFi RX eats around 80mA for the ESP32-C3. But you also need time to wake up, to set up WiFi, to process packets, etc. So let’s assume you require 80mA for 100ms, for every time-sync. If you want to get the timestamp once every hour, that gives an average current consumption of
80mA * 100ms / 3600s = 2.2 µA.
So you need around 2 µA to get a timestamp every hour under more or less optimal conditions (fixed channel with accurate timestamps under your control.) Your clock will still drift between syncs, however, such as when temperature changes. While you can account for the wake/sleep frequency drift, compensating for temperature takes a lot of additional current: if your wakeup + sampling (either frequency against the main crystal or temperature against a full numerical model) + compensation routine takes 10ms every minute (most of it would be wakeup + measuring a given number of cycles), that’s
20mA * 10ms / 60s = 3.3µA
of additional current.
While this number is in the ballpark of what you need for a crystal + ESP’s drive current, accurate low-power 32.768 kHz oscillators or RTC chips require below 1.5 µA, some of them significantly less. And their accuracy beats everything you can do in software.
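The two figures above are just duty-cycle averages; a throwaway helper (names mine) makes the arithmetic explicit:
Code:
#include <stdio.h>

// Average current of a load drawing peak_ma for on_ms out of every period_s seconds.
static double avg_current_ua(double peak_ma, double on_ms, double period_s)
{
    return peak_ma * 1000.0 * (on_ms / 1000.0) / period_s;   // mA -> uA, ms -> s
}

int main(void)
{
    printf("hourly beacon sync: %.1f uA\n", avg_current_ua(80.0, 100.0, 3600.0)); // ~2.2 uA
    printf("per-minute recal:   %.1f uA\n", avg_current_ua(20.0, 10.0, 60.0));    // ~3.3 uA
    return 0;
}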
Tl;dr If your chip connects to WiFi occasionally anyway, then retrieve a timestamp to replace IDF’s clock calibration (i.e. use the timestamp to set the current time as well as to update the frequency). Depending on the chip, this can improve your clock by a sizeable margin, since you also account for the wake/sleep frequency shift. Usually I get <1s drift per hour this way. If that’s not enough (you need more accuracy, you connect to WiFi too rarely, or the temperature is too unstable), a low-power external oscillator or RTC is the way to go.