Manipulating mono/stereo audio from within a pipeline
Posted: Wed Aug 12, 2020 10:12 am
Hi (please be gentle),
Board: ESP32-LyraT
Dev Env: CLI on macOS (all works fine)

What I am trying to do:
Take a guitar plugged into the aux in (I believe a guitar is a mono signal and the LyraT board will automagically make it stereo by default?), manipulate the audio samples (ideally each left/right channel independently), then output to the headphone jack.

What I've done so far:
Using the 'pipelines' feature of the ADF library I take the audio from aux in (as I said, it's a guitar, so mono input, but mapped to both left and right by default, I think), process it, and then output it to the headphone jack in stereo (each channel being a duplicate of the mono input?). I get sound when I bypass my Audio Element, so it's working that far.

I am getting a bit confused at this point about how the actual processing inside an Audio Element (pipeline item) works.

I took the equalizer code, stripped it right back, and I am literally proxying buf and r_size so I can manipulate the audio (this is where I get more confused). I call my custom method like this, inside the _process call:

Code:
apply_custom_manipulation((char *)my_element->buf, r_size);

I am clearly missing something about the payload my method receives, I'm just not sure what. I have looked at the I2S documentation and see it's n bits per channel, alternating between channels, but it's not working as I expect.
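In case it helps, this is roughly the shape of my stripped-back element as standalone C. The stub_input/stub_output functions are my stand-ins for ESP-ADF's audio_element_input()/audio_element_output() so the sketch compiles on its own, and the manipulation is just a placeholder:

```c
#include <string.h>

/* Stand-ins for ESP-ADF's audio_element_input()/audio_element_output();
 * in the real element these pull from / push to the neighbouring
 * pipeline elements. Here they just copy through static buffers. */
static char source[8] = {1, 2, 3, 4, 5, 6, 7, 8};
static char sink[8];

static int stub_input(char *buf, int len) {
    int n = len < (int)sizeof(source) ? len : (int)sizeof(source);
    memcpy(buf, source, (size_t)n);
    return n;                       /* bytes actually read: this is r_size */
}

static int stub_output(const char *buf, int len) {
    memcpy(sink, buf, (size_t)len);
    return len;
}

/* Placeholder manipulation: add 1 to every byte, just to see data move. */
static void apply_custom_manipulation(char *buffer, int size) {
    for (int i = 0; i < size; i++) {
        buffer[i] = (char)(buffer[i] + 1);
    }
}

/* The proxy pattern: read a chunk, mutate it in place, pass it on.
 * Only the r_size bytes actually read get touched. */
static int my_process(char *buf, int len) {
    int r_size = stub_input(buf, len);
    if (r_size <= 0) {
        return r_size;              /* error or no data */
    }
    apply_custom_manipulation(buf, r_size);
    return stub_output(buf, r_size);
}
```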
So let's assume:
Audio: 44100 Hz, 16-bit, stereo (but from a mono source)
I would like to know how I can iterate over the samples of each channel independently and do something with each. My assumption is that, given it's 16 bits per sample, the stream is interleaved something like R/L/R/L/R/L (I think that's the default, via I2S_CHANNEL_FMT_RIGHT_LEFT).
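To make that assumption concrete, here is a standalone sketch (plain C, no ADF; the function name and the channel ordering are my own guesses) of what I mean by walking the interleaved stream and treating each channel independently:

```c
#include <stddef.h>
#include <stdint.h>

/* Walk an interleaved 16-bit stereo buffer one frame (two samples) at a
 * time and scale each channel independently. With
 * I2S_CHANNEL_FMT_RIGHT_LEFT I believe the first sample of each frame
 * is the right channel, but that's worth double-checking. */
static void scale_channels(char *buffer, int size_bytes,
                           float gain_first, float gain_second) {
    int16_t *samples = (int16_t *)buffer;
    size_t num_samples = (size_t)size_bytes / sizeof(int16_t);
    for (size_t i = 0; i + 1 < num_samples; i += 2) {
        samples[i]     = (int16_t)(samples[i]     * gain_first);
        samples[i + 1] = (int16_t)(samples[i + 1] * gain_second);
    }
}
```

With gain_second set to 0.0f, one channel should go silent while the other passes through untouched.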
I currently have something like:
Code:
static void apply_custom_manipulation(char *buffer, int size) {
    int16_t *samples = (int16_t *)buffer;             // explicit cast: 16-bit interleaved R/L frames
    size_t num_samples = size / sizeof(int16_t);      // size is in BYTES, not 16-bit samples
    for (size_t i = 0; i + 1 < num_samples; i += 2) { // step one frame (right + left) at a time
        samples[i] = samples[i] * 1;                  // first sample of the frame
        samples[i + 1] = samples[i + 1] * 0;          // second sample of the frame: muted
    }
}
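One thing I keep tripping over is that the size argument is a byte count, not a sample count. My own arithmetic, assuming 16-bit stereo:

```c
#include <stdint.h>

/* 16-bit stereo: 2 bytes per sample, 2 samples (one per channel) per frame. */
enum { BYTES_PER_SAMPLE = sizeof(int16_t), CHANNELS = 2 };

/* Convert a buffer length in bytes (like r_size) into counts. */
static int samples_in_buffer(int size_bytes) {
    return size_bytes / BYTES_PER_SAMPLE;
}

static int frames_in_buffer(int size_bytes) {
    return size_bytes / (BYTES_PER_SAMPLE * CHANNELS);
}
```

So a 4096-byte chunk holds 2048 int16_t samples but only 1024 stereo frames, which is why looping with i < size over an int16_t pointer runs past the end of the buffer.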
Many thanks and apologies if I missed something silly.