AudioFeedback - LeFreq/Singularity GitHub Wiki

Singularity can utilize a (mostly) unused dimension of the computing hardware: audio. You want to have a conversation with your computer. But you don't want to pretend you're talking to a human, because it isn't one -- and you're not so desperate as to make it so. So, you use the computer to give you precise and concise audio feedback as it responds to what you type in and, boom, the net has a voice.

There are 5 main tiers for audio notifications from the OS (these should be encoded in hardware, so no context-switching required). These tones change with the object involved, but start with a basic sine wave?:

  • debug (28160Hz, 1/64s, like tapping glass)
  • info (7040Hz, 1/16s)
  • warn (object failed to handle exception, 1760Hz, 1/4s)
  • error (app failed to handle exception, 440Hz, 1s)
  • fatal (system failed to handle exception, unending low-freq sound (110Hz, 4s), reboot in x seconds, set at boot to ACU)
These notifications (freq, duration) should be settable? An interpreter error (like a user syntax error at the command line) counts as an application-level failure. Notifications from other nodes/objects modulate this basic idea, and either change the fundamental or the timbre of the tone. A MSG from another user could be interpreted as an "unhandled exception" from an app, while a msg from another object on another node would be seen as a WARN. Frequency (and volume) could modulate from the fundamental based on how many hops the message came from. Frequency should indicate TYPE (of object: what is its color?), while volume should represent DISTANCE.
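
As a rough sketch of the tier list above (in Python, purely illustrative -- the idea is that this lives in hardware at the APU, and the names here are made up for the sketch):

```python
import math

# Tier table from the list above: level -> (frequency in Hz, duration in seconds).
NOTIFY_TIERS = {
    "debug": (28160.0, 1 / 64),   # like tapping glass
    "info":  (7040.0,  1 / 16),
    "warn":  (1760.0,  1 / 4),    # object failed to handle an exception
    "error": (440.0,   1.0),      # app failed to handle an exception
    "fatal": (110.0,   4.0),      # system-level failure; repeats until reboot
}

def notification_tone(level, sample_rate=48000):
    """Render the basic sine wave for a notification tier as float samples."""
    freq, duration = NOTIFY_TIERS[level]
    n = int(sample_rate * duration)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

samples = notification_tone("error")   # one second of 440Hz
```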

Consider an exception stack. A large set of objects are all involved in an exception, and each adds its own tone, diminishing with the size of the object relative to the app vs. the error source. This creates a complex tone at the output. The wave summation is all done at the APU, so no CPU time is needed.
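
One way to read that summation, sketched in software (the wiki puts the actual mixing on the APU); the weighting here -- bigger object, louder partial -- is just one interpretation:

```python
import math

def exception_chord(stack, duration=0.25, sample_rate=48000):
    """stack is a list of (tone_frequency, relative_size) pairs, one per object
    in the exception stack. Each object contributes a sine weighted by its
    size; the mix is normalized so the chord stays in range."""
    total = sum(size for _, size in stack)
    n = int(sample_rate * duration)
    mix = [0.0] * n
    for freq, size in stack:
        weight = size / total
        for i in range(n):
            mix[i] += weight * math.sin(2 * math.pi * freq * i / sample_rate)
    peak = max(abs(s) for s in mix) or 1.0
    return [s / peak for s in mix]

# hypothetical stack: app object, library object, failing leaf object
chord = exception_chord([(440.0, 64), (880.0, 16), (1760.0, 4)])
```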

Most OSs limit audio to basic OS messages and playback of prerecorded tones, but we can do more. We can design a language in audio that the user can learn to interpret. This can replace text-based (or graphical) OS messages and allows the user to relate to the computer without needing to be in front of it.

Much like a mechanical engineer can tell what's going on when a machine is running by the sound of it, this audio language can tell the sophisticated user how their application is doing things, instead of a generic "black box" that is mysterious to nearly everyone.

There are 5 dimensions to organize audio data(?):

  • frequency -- phase-pitch
  • timbre --
  • duration -- echo
  • volume intensity -- depth
  • ambience/reverb
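
A sketch of one audio event carrying those five dimensions (field names and types are this sketch's invention, not spec):

```python
from dataclasses import dataclass

@dataclass
class AudioEvent:
    frequency: float   # Hz -- phase/pitch; encodes the object's TYPE
    timbre: str        # waveform / harmonic signature, e.g. "sine", "square"
    duration: float    # seconds -- length/echo of the event
    volume: float      # 0..1 intensity -- encodes DISTANCE/depth
    reverb: float      # 0..1 ambience -- how far away it feels
```
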
But this is probably the old paradigm. The real deal is to use phase separation (giving L-R coordinate data), reverb timing (giving depth coordinate data), and non-reverb, open-ended directional data (to get a cue on height) to create a 3D audio landscape, much like a blind person uses hi-freq audio clicks to place things in a visual coordinate space. The audio processor can then create a hearable landscape so that you don't have to sit in front of the computer to know everything that is going on on the network. Harmonics, whereby waveforms line up, can be mapped into a coordinate system to do this.
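
A very rough sketch of that placement: an interaural time/phase offset gives the left-right cue and a distance falloff stands in for depth (a real height cue would need spectral/HRTF tricks beyond this; numbers like the 0.6 ms maximum delay are assumptions):

```python
import math

def place_in_landscape(freq, azimuth, distance, duration=0.5, sample_rate=48000):
    """Position a tone in the audio landscape: azimuth in radians (0 = straight
    ahead) sets the interaural delay, distance scales the gain."""
    itd = 0.0006 * math.sin(azimuth)        # interaural time difference, seconds
    gain = 1.0 / (1.0 + distance)           # farther away -> quieter
    n = int(sample_rate * duration)
    left, right = [], []
    for i in range(n):
        t = i / sample_rate
        left.append(gain * math.sin(2 * math.pi * freq * t))
        right.append(gain * math.sin(2 * math.pi * freq * (t - itd)))
    return left, right
```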

New messages from people who've never reached out to you before get assigned a unique freq, but have a deep reverb/ambience effect, like coming from far, far away (think Blade Runner ambience). (The stereo effect is to start with some base, low frequency (lower than audible?) and separate it by phase, adding in layers on top of it -- the ear combines the effect into wide stereo...) As communication occurs more often, they move closer in audiospace until they are the normal tones of a message arrival. As different types of interaction occur, the tones get more rounded and mellow, with a unique signature based on source and message type.
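
One possible mapping from familiarity to position and tone character (the curve and numbers are arbitrary illustrations, not anything pinned down above):

```python
def contact_ambience(message_count):
    """First contact is distant and reverb-heavy; frequent contacts sound
    close, dry, and mellow."""
    familiarity = min(message_count / 50.0, 1.0)   # saturates after ~50 messages
    return {
        "reverb_mix": 0.9 * (1.0 - familiarity),   # far, far away -> nearly dry
        "distance":   10.0 * (1.0 - familiarity),  # virtual distance in the landscape
        "mellowness": familiarity,                 # tones round off with interaction
    }
```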

Also, a simple audio tone plays when an object crossing occurs. Each object could have its own frequency, and some DSP algorithm could combine sounds when objects are incorporated into one another, to bring some sanity and order to the audio landscape. Lesser objects could recede into low-intensity, short events inside larger objects.
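
A toy version of that combination -- the lesser object's tone becomes a short, quiet event riding on the larger object's tone (the ratios and lengths here are made up):

```python
import math

def incorporate(parent_freq, child_freq, duration=0.5, sample_rate=48000):
    """Mix a brief, low-intensity child tone into the parent object's tone."""
    n = int(sample_rate * duration)
    child_n = n // 8                        # lesser object: short...
    out = []
    for i in range(n):
        t = i / sample_rate
        s = math.sin(2 * math.pi * parent_freq * t)
        if i < child_n:
            s += 0.2 * math.sin(2 * math.pi * child_freq * t)   # ...and low-intensity
        out.append(s / 1.2)                 # keep the mix within [-1, 1]
    return out
```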

After basic power-on tests, the first piece of audio data is the system beep, indicating basic hardware is initialized and running. The duration of this should be related to the number of bytes loaded.
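
One way to relate beep length to bytes loaded -- a logarithmic scale so a huge load doesn't beep for minutes (the exact mapping is an assumption):

```python
import math

def boot_beep_duration(bytes_loaded):
    """Seconds of system beep for a given number of bytes loaded,
    clamped to a 0.05-2.0 s range."""
    return min(2.0, max(0.05, math.log2(max(bytes_loaded, 1)) / 16.0))

boot_beep_duration(64 * 1024)     # ~1.0 s for a 64 KB load
```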

The next stage is subprocessor initialization. A synthesized tone can be wave-shaped to provide exact data feedback on processor states. Different processors may be set at different audio frequencies indicating basic clock rate; these can be added together into a single wave, giving the base tone for the system. A meaningful language of harmonics can be made out of this -- 5 2/3 tones, like organ pipes and the Macintosh system chime -- giving useful audio information about your system.
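
A sketch of that base tone: fold each subprocessor's clock rate down by octaves into an audible pitch and sum the results into one wave (the octave folding is this sketch's interpretation of "indicating basic clock rate"):

```python
import math

def system_chord(clock_rates_hz, duration=1.0, sample_rate=48000):
    """Sum one sine per subprocessor, pitched from its clock rate."""
    def to_audible(clock):
        f = float(clock)
        while f > 1760.0:                   # fold down by octaves into ~110-1760 Hz
            f /= 2.0
        return max(f, 110.0)
    freqs = [to_audible(c) for c in clock_rates_hz]
    n = int(sample_rate * duration)
    return [sum(math.sin(2 * math.pi * f * i / sample_rate) for f in freqs) / len(freqs)
            for i in range(n)]

# e.g. CPU at 3.2 GHz, GPU at 1.8 GHz, I/O controller at 800 MHz
base_tone = system_chord([3.2e9, 1.8e9, 8.0e8])
```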

Each line at bootup emits a tone specific to the processor that did the task. Each command line that is run also gives a tone, indicating the result. Quick tasks are high-frequency, big tasks are lower freq, etc...
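
A minimal mapping in that spirit -- quick tasks high, long tasks low (the constants are arbitrary):

```python
def task_tone(task_seconds):
    """Frequency for a completed task's feedback tone."""
    freq = 1760.0 / (1.0 + task_seconds)    # instant task -> 1760 Hz, 1 s -> 880 Hz
    return max(freq, 110.0)                 # never drop below the 'fatal' register
```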


  • Most of the audio can be patterned after Blade Runner sounds -- except for the two-beep error feedback; only one beep is needed.