Latency
WORK IN PROGRESS: this is a big topic; for now I'm just going to give an overview, with a few examples that came up on the forum recently.
Overview
Latency is an inevitable part of any digital system. Many think of it as 'bad', partly due to its undesirable knock-on effects. But since it is inevitable, I find it better to just understand it, determine how we will handle it, and know when it will bite us.
Note: as latency (and jitter) is common to ALL digital systems, we will start with a general overview that is NOT specific to the SSP. As we will see, the issues and the strategies we use to handle them are common to all hardware/software, rather than SSP specific. I will highlight examples using the SSP, but remember every DAW and digital eurorack module has the same issues!
What is Latency and Jitter?
Latency is any delay between an input and an output. So if a module takes an input, puts it thru an FX, then outputs the signal... the time taken from in to out = delay = latency.
Latency can be (and is) introduced at many stages, but simplistically we can think of:
- IO latency: any time we do Analog (e.g. physical jacks) -> Digital, Digital -> Analog, or send/receive over USB or MIDI, we have latency.
- Processing: computers take time to process a signal, e.g. to apply the FX we have to process it.
- Algorithm: linked to processing, but something like a filter introduces a delay by design.
Each of the above can be measured in milliseconds, and some are much larger than others, e.g. the IO latency is often the most significant. In reality they are also combinational, i.e. a eurorack module will have ALL of the above summed.
e.g. signal in -> process -> signal out
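To make 'combinational' concrete, here is a minimal C++ sketch of the summing; every number in it is an illustrative assumption, not a measurement of the SSP or any other module:

```cpp
#include <cstdio>

int main() {
    // hypothetical per-stage latencies for one in -> process -> out pass (ms)
    const float adcMs       = 1.5f; // analog -> digital conversion + input buffering
    const float processMs   = 2.7f; // one processing block (e.g. ~128 samples @ 48k)
    const float algorithmMs = 0.3f; // delay inherent in the algorithm itself
    const float dacMs       = 1.5f; // output buffering + digital -> analog conversion

    // the stages are combinational: the signal passes thru all of them, so they sum
    std::printf("total in -> out latency: %.1f ms\n",
                adcMs + processMs + algorithmMs + dacMs);
    return 0;
}
```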
One of the nice aspects of latency is that it is consistent, and so we can have strategies to overcome it (delay compensation, more on this later).
However, some of these 'delays' can vary over time, e.g. IO latency or processing MAY vary according to how much load the system is under. This variation is called jitter, and it is highly undesirable, since we cannot compensate for something that is not fixed. For this reason jitter is something we as developers try to handle using various techniques.
Why do digital systems have latency?
I should start by saying that even analog systems have latency... e.g. a filter may introduce a small delay.
But why is it so inherent in digital systems? Two main reasons...
- time stops for no man in audio processing; audio is a continuous stream... it doesn't matter if your printer pauses printing for a while, but 'pausing audio' will create silence.
- jitter: due to the above, it's better to have a consistent signal with delay than one that varies continuously.
So basically, we introduce delays by using buffers in audio processing to 'smooth out' the processing load and potential IO jitter (even at the expense of complexity, and arguably worse CPU performance).
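A minimal sketch of what that buffering looks like in code; the callback shape here is generic, not the SSP SDK's actual API:

```cpp
#include <vector>

// samples per block: bigger blocks add latency, but give the CPU a whole
// block's worth of time to finish, which absorbs uneven processing load
constexpr int kBlockSize = 128;

// stand-in for the real DSP (the 'FX'); kept trivial on purpose
static void processBlock(float* block, int n) {
    for (int i = 0; i < n; ++i) block[i] *= 0.5f;
}

// the shape of a typical audio callback: the driver only calls us once a
// whole block of input has been collected, so the output is always at
// least one block 'behind' the input -- that IS the buffering latency
void audioCallback(const float* in, float* out) {
    std::vector<float> block(in, in + kBlockSize);
    processBlock(block.data(), kBlockSize);
    for (int i = 0; i < kBlockSize; ++i) out[i] = block[i];
}

int main() {
    std::vector<float> in(kBlockSize, 1.0f), out(kBlockSize, 0.0f);
    audioCallback(in.data(), out.data());
    return 0;
}
```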
How big these buffers are (and so the delay/latency) is determined by the power of your CPU, and also how much processing (complexity) you are performing. E.g. a powerful desktop may use 128 samples (as does the SSP @ 48k), but a sluggish laptop may use 512 or even 1024; in contrast, a very simple eurorack module might use only 16 (!) samples, because it does little and is highly focused on a single task.
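To put numbers on this, one buffer's worth of latency is just samples divided by sample rate; a quick sketch using the figures above:

```cpp
#include <cstdio>

// one buffer's worth of latency in milliseconds: samples / sampleRate * 1000
static float bufferMs(int samples, float sampleRate) {
    return 1000.0f * samples / sampleRate;
}

int main() {
    std::printf("  16 samples @ 48k -> %5.2f ms (simple eurorack module)\n", bufferMs(16, 48000.0f));
    std::printf(" 128 samples @ 48k -> %5.2f ms (SSP, powerful desktop)\n",  bufferMs(128, 48000.0f));
    std::printf(" 512 samples @ 48k -> %5.2f ms (sluggish laptop)\n",        bufferMs(512, 48000.0f));
    std::printf("1024 samples @ 48k -> %5.2f ms (sluggish laptop)\n",        bufferMs(1024, 48000.0f));
    return 0;
}
```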
Issues
We really only have one issue with latency, and that is the delay, and 'mixing' signals that have different delays due to different 'pathways'. However, this single issue can show up in different forms, which some may think are different problems, but they are really the same underlying issue.
Delay / Response
(talk MIDI -> audio, e.g. different synths/modules)
Mixing
(wet vs dry)
Phase
(small delays, causing phasing issues)
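Even sub-millisecond delays matter when a signal is mixed with an undelayed copy of itself. A small self-contained sketch (illustrative numbers only, no real module involved) shows the worst case, full cancellation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const float pi = 3.14159265f;
    const float sampleRate = 48000.0f;
    const float freq = 1000.0f;     // 1 kHz test tone
    const int   delaySamples = 24;  // 0.5 ms: exactly half a cycle of 1 kHz

    // mix the tone with a copy of itself delayed by half a cycle
    float peak = 0.0f;
    for (int n = delaySamples; n < 4800; ++n) {
        float dry     = std::sin(2.0f * pi * freq * n / sampleRate);
        float delayed = std::sin(2.0f * pi * freq * (n - delaySamples) / sampleRate);
        peak = std::max(peak, std::fabs(dry + delayed));
    }
    // half a cycle out of phase: dry + delayed cancel almost completely
    std::printf("peak of dry + delayed mix: %f (near 0 = cancellation)\n", peak);
    return 0;
}
```

At other frequencies the same delay boosts or cuts instead of cancelling, which is why this is heard as comb filtering / phasing rather than a simple level change.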
Strategies to handle
As above, we really have only one issue to handle... and that is how to deal with a delayed signal, or rather multiple delayed signals. Again, the strategies are pretty much the same, though many may feel they are different.
The first thing to realise is we cannot 'go back in time': if we have an undelayed signal and we want to combine it with a delayed signal, the ONLY solution is to delay the undelayed signal.
Generalising: if we have multiple delayed signals, to combine them we will eventually have to delay ALL signals to match the worst (longest) delay.
Serialisation
One strategy to handle delayed signals is to make sure EVERY signal passes thru exactly the same path, so everything is delayed by the same amount.
(wet/dry example in module)
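As a stop-gap for that example, here is a minimal sketch of the idea in code; the names and the latency figure are mine (hypothetical), not from the SSP SDK:

```cpp
#include <cstddef>
#include <vector>

// a fixed delay line of n samples (n must be > 0)
struct DelayLine {
    explicit DelayLine(size_t n) : buf(n, 0.0f) {}
    float process(float x) {
        float y = buf[pos];
        buf[pos] = x;
        pos = (pos + 1) % buf.size();
        return y;
    }
    std::vector<float> buf;
    size_t pos = 0;
};

// assume (hypothetically) the FX tells us it delays the signal by this many samples
constexpr size_t kFxLatencySamples = 64;

// mix wet and dry INSIDE the module: the dry signal is pushed thru a matching
// delay, so both signals have effectively travelled 'the same path' in time
float wetDryMix(float wetFromFx, float dryIn, DelayLine& dryDelay, float mix) {
    float dry = dryDelay.process(dryIn);
    return dry * (1.0f - mix) + wetFromFx * mix;
}

int main() {
    DelayLine dryDelay(kFxLatencySamples);  // created once per channel
    float out = wetDryMix(0.2f, 1.0f, dryDelay, 0.5f);  // called per sample
    (void)out;
    return 0;
}
```

The design point: the dry signal never 'skips ahead' of the wet one, because it is forced thru a matching delay inside the same module.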
Delay Compensation
(midi example?)
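Until that example is written up, here is a minimal sketch of the core calculation every delay compensation scheme performs: pad each path up to the worst latency (all numbers made up):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // hypothetical per-path latencies, in samples @ 48k:
    // a dry path, an internal FX, and an external FX loop (out and back)
    std::vector<int> pathLatency = { 0, 128, 310 };

    // every path must be padded up to the worst (longest) latency
    const int worst = *std::max_element(pathLatency.begin(), pathLatency.end());

    for (size_t i = 0; i < pathLatency.size(); ++i) {
        int pad = worst - pathLatency[i];
        std::printf("path %zu: latency %3d samples -> add %3d samples of delay\n",
                    i, pathLatency[i], pad);
    }
    return 0;
}
```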
Temporary example!
Conversation from a forum explanation I have, as a reminder of the points I want to bring up here:
example
let's create a real-world example…
SSP Plts → Make Noise QPAS → DAW
yes, this is possible, in lots of different ways with the SSP. let's do it simply initially, and talk about other options/limitations afterwards…
so a simple stereo fx example…
PLTS → OUT 3/4
INP 5/6 → OUT 1/2
then take the SSP hardware outputs 3/4 into the QPAS inputs (or whatever eurorack fx you are using), then the QPAS output into inputs 5/6.
connect your SSP to your computer via USB, to use it as an audio interface, and use the SSP audio interface outputs 1/2, which will give you what you want.
what's also interesting is your audio interface 3/4 will also contain the 'dry' PLTS output, which can be very useful. i.e. you COULD use OUT 1/2 as fully wet and OUT 3/4 as dry, and so record/mix both in the DAW. (there is however an 'issue' which we discuss later :wink: )
ok, now truth is the above was a bit overly complicated… we could also use: PLTS → OUT 3/4, INP 1/2 → OUT 1/2, and now connect QPAS to inputs 1/2…
the reason I used 5/6 in the first example is beginners often incorrectly assume there is some kind of magic connection between inputs and outputs… there is NOT, INP 1/2 in no way relates to OUT 1/2.
real world
ok, so let's now talk about 'real life' complications…
in any digital system we have latency ( * ) caused by buffering and processing delays; we see this not only in hardware but also in computer systems, e.g. audio interfaces/DAWs. the SSP is no different… any time we take an input or an output we have latency (in the order of milliseconds).
so the above patch will incur latency in the following places:
- sending the signal to its output, to the eurorack fx module
- receiving the signal from the eurorack back into the SSP
- sending the signal to the DAW
there is also 'processing delay' within the SSP; similarly, if you are using a digital eurorack fx module, it will have its own IO and processing delays.
this is not generally an issue; often we can ignore it in a simple chain, or we can use delay compensation to work around it - but it's something we need to be aware of.
a good example of why… was my talk above of using OUT 1/2 as wet and OUT 3/4 as dry. the issue here is the dry signal will be 'out of phase' with the wet signal; it'll be a few milliseconds earlier than the wet signal. this can of course be 'handled' in the DAW by delaying the dry (3/4) signal (aka delay compensation) to bring it in line with the wet signal (1/2).
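how do you know HOW MUCH to delay the dry track by? a common trick is to record a click/impulse on both tracks and compare where it lands. a rough sketch of the idea (fake data, this is not a feature of any particular DAW):

```cpp
#include <cstdio>
#include <vector>

// index of the first sample whose magnitude crosses a threshold
static int firstOnset(const std::vector<float>& rec, float threshold = 0.5f) {
    for (size_t i = 0; i < rec.size(); ++i)
        if (rec[i] > threshold || rec[i] < -threshold) return static_cast<int>(i);
    return -1;
}

int main() {
    // fake recordings of the same click on both tracks:
    // the dry (3/4) return lands at sample 100, the wet (1/2) return at 412
    std::vector<float> dry(1000, 0.0f), wet(1000, 0.0f);
    dry[100] = 1.0f;
    wet[412] = 1.0f;

    int offset = firstOnset(wet) - firstOnset(dry);
    std::printf("delay the dry track by %d samples (%.2f ms @ 48k)\n",
                offset, 1000.0f * offset / 48000.0f);
    return 0;
}
```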
however, if you do the wet/dry mix within the eurorack module (QPAS in this example), you won't have this 'issue', as everything is already delayed by the same amount.
it may feel like this is better as it's simpler, and indeed it often is… however, in practice, if you're using a DAW, it's often better to do such mixing in the DAW.
so as frustrating as delay compensation is, it's a very useful skill to have and get comfortable with… as it's pretty much a must-have skill when combining hardware and software.
( * ) latency comes in at different stages, but I'm not going into this… I'll just talk about total latency.
something else we have not discussed here is the relationship between the SSP audio interface IO and how it relates to the INP and OUT modules, and the flexibility there. all I'll say for now is the 'default' mapping, which maps physical to virtual IO ports, has a bit of flexibility. and whilst I used 1/2 above for simplicity… I'd often use the higher numbers (above 16) so I don't mix physical IO with virtual IO… unless that's what I actually want :wink: