APU - LeFreq/Singularity GitHub Wiki

Audio Processing Unit.

Preliminary input spec:

  • list of individual frequencies; all are expected to share the same duration. Send a new list when the set of frequencies changes.
  • Output to speaker: yes or no (if no, will store waveform to memory address?)
  • an (x, y, z) point in space for stereo, spatial positioning of the sound.
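The input spec above could be sketched as a single message type. This is only an illustration; the class name and field names are assumptions, not part of the spec:

```python
from dataclasses import dataclass

@dataclass
class APUInput:
    """Hypothetical shape of one APU input message (names are illustrative)."""
    frequencies: list[float]   # Hz; every entry shares the same duration
    duration: float            # seconds, applies to the whole list
    to_speaker: bool = True    # False => render the waveform to a memory address instead
    position: tuple[float, float, float] = (0.0, 0.0, 0.0)  # (x, y, z) spatial placement

# Example: a two-frequency chord played for half a second, slightly right of center.
msg = APUInput(frequencies=[440.0, 660.0], duration=0.5, position=(1.0, 0.0, 0.0))
```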

Preliminary output spec:

  • volume (hardware should be built onto the box, as audio plays a critical role in the OS)
  • projected stereo position (x, y, z): this position phenomenologically merges with the color of the nodes there.

Audio positioning in 3D is done through phasing along the x axis, subtle volume changes for front-to-back positions, and reflection for up and down, with a surface assumed at the bottom. However, in coordinate space there is no ground, so a use should be found for this asymmetry -- probably where there is a meta-stack of directives from the system as a whole that should not affect local activities.
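The three positioning cues above (x via phasing, y via volume, z via a floor reflection) could be computed per source roughly as follows. All constants here (speed of sound, ear spacing, the attenuation factor) are assumptions for illustration, not values from the spec:

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed
EAR_SPACING = 0.2       # m, assumed effective head width

def stereo_params(x: float, y: float, z: float) -> dict:
    """Toy spatialization sketch following the scheme in the text:
    - x controls an inter-channel delay (phasing) for left/right placement
    - y controls a subtle overall gain change for front-to-back depth
    - z controls the extra path length of a 'floor reflection', surface at z = 0
    """
    delay = (x / SPEED_OF_SOUND) * EAR_SPACING          # seconds of left/right phase offset
    gain = 1.0 / (1.0 + 0.1 * max(y, 0.0))              # quieter as the source recedes
    reflection_delay = 2.0 * max(z, 0.0) / SPEED_OF_SOUND  # bounce off the assumed floor
    return {"delay": delay, "gain": gain, "reflection_delay": reflection_delay}
```

A source at the origin produces no delay, full gain, and no reflection offset; moving it along any axis perturbs exactly one cue.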

This is a subprocessor to take (potentially lists) of frequency, duration data. If a list is given, wave forms are calculated and summed in the APU for generation to the speaker.
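The calculate-and-sum step might look like the sketch below: one sine per listed frequency, summed and normalized. The sample rate and equal-amplitude mixing are assumptions; the actual APU algorithm is not specified:

```python
import math

SAMPLE_RATE = 44100  # Hz, assumed

def render(frequencies: list[float], duration: float) -> list[float]:
    """Sum equal-amplitude sine waves for the given frequency list.
    Output samples are normalized to [-1, 1] by dividing by the voice count."""
    n = int(duration * SAMPLE_RATE)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = sum(math.sin(2 * math.pi * f * t) for f in frequencies)
        samples.append(s / len(frequencies))
    return samples

# Example: 10 ms of a 440 Hz tone plus its octave.
wave = render([440.0, 880.0], 0.01)
```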

Alternatively, it can take a stream of time-domain intensity data to implement any recording.

It should also have the ability to do 3D stereo-spatial mapping to complement the CYBERSPACE visuals. A person's particular location in this space is set (more or less) once, and they live there until they 1) change their node name, or 2) become a high-reputation object creator and start gravitating toward the center of their octant.

Color to wave mapping?:

  • Red: how many frequencies, pure red is pure sine wave?
  • Yellow: How intense other waves are in your fundamental (pure yellow=solid under the fundamental curve=noise?)
  • Blue: how close to the edge the fundamental is. Pure blue is a sawtooth at the forward edge; small blue is a sawtooth at the trailing edge...?
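One possible reading of the tentative mapping above, expressed as a function from three wave descriptors to a (red, yellow, blue) triple. The descriptor names, the 1/n reading of "pure red is pure sine", and the linear scaling are all assumptions; the list itself leaves these open:

```python
def wave_to_color(n_partials: int, noise_ratio: float, edge_skew: float) -> tuple:
    """Sketch of the color-to-wave mapping (scale factors assumed):
    - red approaches 1.0 as the tone approaches a pure sine (one partial)
    - yellow tracks how much non-fundamental energy sits under the fundamental
    - blue tracks how far the fundamental's peak sits toward the forward edge
    Each channel is clamped to [0, 1].
    """
    clamp = lambda v: max(0.0, min(1.0, v))
    red = clamp(1.0 / n_partials)   # pure sine -> pure red
    yellow = clamp(noise_ratio)     # solid noise under the curve -> pure yellow
    blue = clamp(edge_skew)         # 1.0 = sawtooth at the forward edge
    return (red, yellow, blue)
```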