# Creating Transforms
## Using aliasing/Transform Chain
Suppose there is one kind of reverb which should be applied to several tracks. It is then convenient to declare:
```python
R = Reverb(...)
...
track1 *= R
track2 *= R
```
Then we can easily experiment with different types of Reverb just by changing the first line.
This is limited but is still quite useful, as we can combine several Transforms into one:
```python
postFX = LowPassFilter(...)*Compress(...)*Reverb(...)
```
We can then reuse this transform chain as if it were a single Transform.
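For instance, here is a minimal runnable sketch of that reuse, using `Shift` and `Reverse` (both of which appear later on this page) in place of the placeholder effects above, and `test_wav`, the sample file bundled with gensound:

```python
from gensound import WAV, test_wav
from gensound.transforms import Shift, Reverse

postFX = Reverse()*Shift(500)  # one chain combining two Transforms

track1 = WAV(test_wav)
track2 = WAV(test_wav)

track1 *= postFX  # the whole chain is applied as if it were a single Transform
track2 *= postFX
track1.play()
```

Changing the definition of `postFX` in one place now affects every track that uses it.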
## Passing a Signal to a function in order to simulate a Transform
We can use a function to perform more complex manipulations of a signal, returning another signal as a result. Here is a basic example:
```python
# This works, but there is a better way below
def BasicDelay(sig, delay):  # receives a Signal
    return sig + 0.5*sig*Shift(delay)  # returns a Signal

BasicDelay(WAV(kushaura), 1e3).play()
```
Since this goes against the entire Signal-Transform paradigm of Gensound, there is a function decorator made especially for this purpose, which lets such a function be used like any other Transform:
```python
from gensound import transform  # lowercase, not Transform!

@transform
def BasicDelay(sig, delay):  # function accepts and returns a Signal, just as before
    return sig + 0.5*sig*Shift(delay)

s = WAV(kushaura)*BasicDelay(1e3)
# BasicDelay is called here without its first argument, and treated like any other Transform
s.play()
```
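The decorated function is not limited to a single extra parameter; any arguments after the signal are the ones you pass when using it as a Transform. A small sketch under that assumption (`EchoExample` is a hypothetical name, not part of gensound; `test_wav` is the sample file bundled with the library):

```python
from gensound import transform, WAV, test_wav
from gensound.transforms import Shift

@transform
def EchoExample(sig, delay, gain):  # hypothetical transform, not part of gensound
    return sig + gain*sig*Shift(delay)

s = WAV(test_wav)*EchoExample(500, 0.3)  # sig is implicit; only delay and gain are given
s.play()
```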
This works even when attaching other Transforms to the Signal, either before or after:
```python
s = WAV(kushaura)*Reverse()*BasicDelay(1e3)*Reverse()  # this works
```
However, it does not yet work within an independent Transform Chain, as described earlier on this page:
```python
transforms = BasicDelay(1e3)*Reverse()  # not yet implemented
```

TODO: make function transforms chainable.
## Subclassing Transform
This approach is required whenever actual manipulation of the audio samples is needed, and it's not hard at all, though it requires some familiarity with the `Audio` class, which is invisible to casual users. Briefly explained, `Audio` is a wrapper for an instance of `numpy.ndarray` (the `audio` property), having shape `(num_channels, num_samples)`, and it provides additional functions.
Two functions need to be overridden: `__init__` and `realise`. The former typically saves the arguments to instance variables, and the latter performs the audio manipulation directly on the given argument (no return value).
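As a minimal sketch of this structure (`AttenuateExample` is illustrative only and not part of gensound):

```python
from gensound.transforms import Transform  # assuming Transform lives in gensound.transforms

class AttenuateExample(Transform):  # illustrative sketch, not part of gensound
    """ Scales every sample by a constant linear factor. """
    def __init__(self, factor=0.5):
        self.factor = factor  # __init__ just stores the arguments

    def realise(self, audio):
        # audio.audio is the underlying numpy.ndarray of shape (num_channels, num_samples);
        # realise modifies it in place and returns nothing
        audio.audio *= self.factor
```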
For a fuller example, take a simplified version of the `Downsample` transform:
```python
class Downsample_example(Transform):
    """ Make every nth sample override the next (n-1) samples.
    For example, if n = 4, samples 1, 2, 3 will all be set to sample 0,
    then samples 5, 6, 7 will be set to sample 4, etc.
    This process is also called 'decimation'.
    """
    def __init__(self, n=2):
        self.n = n  # save to instance

    def realise(self, audio):
        l = audio.length - audio.length % self.n
        # ignore the last samples if the number of samples is not divisible by n
        for i in range(1, self.n):
            # override the samples at locations (n*k + i); slicing up to l keeps
            # both sides of the assignment the same length
            audio.audio[:, i:l:self.n] = audio.audio[:, 0:l:self.n]
```
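A possible usage sketch, assuming `test_wav` (the sample file bundled with gensound):

```python
from gensound import WAV, test_wav

s = WAV(test_wav)*Downsample_example(n=8)  # coarse decimation, clearly audible
s.play()
```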