Parallel output to components in the audio_pipeline
Posted: Mon Aug 24, 2020 9:57 am
Is there a method or implementation built into the audio_pipeline to take one stream element and simultaneously link its output to several other elements?
As an example, to hear realtime DSP I use i2s_reader (from the CODEC) ---> a DSP element in the pipeline (e.g. EQ) ---> i2s_writer (back to the codec) - simple enough. But now say I want the EQ to feed i2s_writer for local low-latency monitoring, and also feed an HTTP stream in the same pipeline so that someone else can hear what I'm doing over a server. So, is it possible for the EQ to feed two writers, or for i2s_writer to feed both the hardware and the HTTP stream?
i2s_reader ------> DSP -----> i2s_writer
                     \______> http_stream

OR

i2s_reader ------> DSP -----> i2s_writer ----> http_stream
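For reference, the straight-through chain I already have working looks roughly like this (a minimal sketch assuming ESP-ADF's i2s_stream and equalizer elements; the tags and default configs are just illustrative, not my exact project code):

[code]
#include "audio_pipeline.h"
#include "audio_element.h"
#include "i2s_stream.h"
#include "equalizer.h"

void build_monitor_pipeline(void)
{
    // Plain straight-through chain: CODEC in -> EQ -> CODEC out
    audio_pipeline_cfg_t pipeline_cfg = DEFAULT_AUDIO_PIPELINE_CONFIG();
    audio_pipeline_handle_t pipeline = audio_pipeline_init(&pipeline_cfg);

    // I2S reader: pulls audio from the codec
    i2s_stream_cfg_t i2s_read_cfg = I2S_STREAM_CFG_DEFAULT();
    i2s_read_cfg.type = AUDIO_STREAM_READER;
    audio_element_handle_t i2s_reader = i2s_stream_init(&i2s_read_cfg);

    // DSP stage: equalizer as the example element
    equalizer_cfg_t eq_cfg = DEFAULT_EQUALIZER_CONFIG();
    audio_element_handle_t eq = equalizer_init(&eq_cfg);

    // I2S writer: pushes processed audio back to the codec
    i2s_stream_cfg_t i2s_write_cfg = I2S_STREAM_CFG_DEFAULT();
    i2s_write_cfg.type = AUDIO_STREAM_WRITER;
    audio_element_handle_t i2s_writer = i2s_stream_init(&i2s_write_cfg);

    // Register and link in series: i2s_read -> eq -> i2s_write
    audio_pipeline_register(pipeline, i2s_reader, "i2s_read");
    audio_pipeline_register(pipeline, eq, "eq");
    audio_pipeline_register(pipeline, i2s_writer, "i2s_write");

    const char *link_tag[3] = {"i2s_read", "eq", "i2s_write"};
    audio_pipeline_link(pipeline, &link_tag[0], 3);

    audio_pipeline_run(pipeline);
}
[/code]

What I can't see is how to branch after the "eq" element in either of the two topologies above.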