CyberSpace - LeFreq/Singularity GitHub Wiki

So, of course, the ultimate OS needs an EQUALLY ultimate visualization system and UI, so we'll be going full 3D (good enough for the universe, good enough for us), making cyberspace a reality. Click the link -- it's a partial visualization of random, untagged data in the Cloud. With real data in the context of your peers, you'd see perhaps millions of nodes, many of them receding from view, with colors indicating the category of data each represents and size reflecting relevance to your current search context. In theory, we can use the power of the visual cortex to process over 1Mb of data per second. Since the retina can detect individual photons, we can effectively create a neural interface into your brain. To prevent Black Ice (Neuromancer), we don't use direct cortical connections, but one layer of indirection, via the visual system, to protect the user. Imagine. Lolz.

Further, we have some awesome input ideas involving multilayered spheres, perhaps with screens on them. It's more than just a sci-fi term, because these are actual data and code objects you'll be navigating. Reality organizes A LOT of data in 3D, so I'm quite certain we can do it here with this project, adding color to make a 3+1 dimensional visualization system for all knowledge and data.

It can be proven that 3D (+1, like our universe) is the best and only practical way to do LargeDataVisualization, because most of your categorization has already been done. It takes roughly log2(n) categorizations, made by humans when they enter the data, to make it work (until full-blown AI can do it naturally); once people become familiar with your ontology, it takes half of that (log4(n)). It exploits the power of the brain's visual cortex to do most of the work. This essentially realizes the ideals of Hermann Hesse's GlassBeadGame.
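As a rough worked example of that arithmetic (the collection size and branching factors below are illustrative, not part of the design):

```python
import math

# If a collection holds n items, locating one by successive binary
# categorizations takes about log2(n) human decisions; once the ontology is
# familiar, each decision can split four ways, halving the count (log4(n)).
n = 1_000_000
print(math.log2(n))       # ~19.9 categorizations for a newcomer
print(math.log(n, 4))     # ~10.0 once the ontology is familiar
```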

The desktop as a metaphor works when you see the computer as a utility, like something you get WORK out of. But it is MUCH more capable than that. It is a universal simulator, capable of transcending the laws of physics inside of digital logic. Sticking with the observable universe, that means it is capable of the diversity and utility that the universe itself has created.

The key to making it work is eliminating spurious non-data, like visual flourishes that mean nothing (most of your windowing icons). Think Ed Tufte on Visualizing Data. Each shape and color will be tied to something meaningful, maximizing correspondence to the actual world and making it easier for people to learn. Nothing spurious or arbitrary.

Spheres, rather than objects with corners, for example: a sphere can be specified with a single parameter. Compare the geometric equation of a sphere to that of a cube -- the cube requires more data. Activity across the network can be shown as momentary palette re-assignment, flashing like a neuron, showing where the activity lies. Significance by radius; age by how saturated the color is. Depth (z-axis) can be how relevant the data is to a user's current search terms (brightness?). Proximity by relation (edge associations create "gravity"). Shape can indicate the number of users interacting with a node ("popularity").
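A minimal sketch of how those encodings could hang together as a single record; the field names and ranges are assumptions for illustration, not a fixed schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeVisual:
    """One node's visual encoding: data attributes mapped to display attributes."""
    radius: float            # significance
    hue: float               # category of the data (palette index)
    saturation: float        # age: low = new, high = old
    depth: float             # z-axis: relevance to the current search terms
    sides: int               # "popularity": users interacting (sphere when few)
    neighbors: List[int] = field(default_factory=list)  # edge associations ("gravity")
    flashing: bool = False   # momentary palette re-assignment on network activity

def on_network_activity(node: NodeVisual) -> None:
    """Flash the node like a neuron to show where the activity lies."""
    node.flashing = True
```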

On the sphere would be a map of sorts, much like a crop circle of its internal structure. This is better than rendering complex shapes, which won't be very aesthetically pleasing. One could consider the refraction index of the node to show the level of object integration: simple objects would be transparent, while complex, fractal-like objects would have greater refraction...? This graph of objects would be very high-level and abstract to prevent people from taking your ideas on the network, but they could be allowed to look deeper into your objects to see greater detail (the subgraphs of nodes, for example, of the fractal graph). Objects would have transparency, but also a color, like a gemstone or colored glass, depending on how sophisticated the object was. Users would be opaque, and data would be opaque but not spherical.

Entering an object node (navigating into it) inverts the object and runs it as an app, going full-screen as needed.
Entering a user node lets you see a directory of objects...
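A hypothetical dispatch for those two navigation rules; the node class and the launcher hook below are placeholders, not real APIs:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                               # "object" or "user"
    name: str = ""
    children: List["Node"] = field(default_factory=list)

def launch_as_app(node: Node, fullscreen: bool = True) -> None:
    """Placeholder: invert the object node and run it as an app."""
    print(f"running {node.name} {'full-screen' if fullscreen else 'windowed'}")

def enter(node: Node) -> Optional[List[Node]]:
    if node.kind == "object":
        launch_as_app(node)                 # invert and run, full-screen as needed
        return None
    if node.kind == "user":
        return node.children                # a directory of that user's objects
    return None
```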

New users can be seeded with a coordinate for where they will start in the coordinate space, based on their interests.
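One possible seeding scheme (purely an assumption: each interest tag is hashed into the unit cube and the results are averaged, so the same interests always land you in the same neighborhood):

```python
import hashlib
from typing import List, Tuple

def seed_coordinate(interests: List[str]) -> Tuple[float, float, float]:
    """Derive a deterministic starting coordinate from a user's interest tags."""
    if not interests:
        return (0.5, 0.5, 0.5)                       # drop newcomers at the center
    points = []
    for tag in interests:
        digest = hashlib.sha256(tag.lower().encode()).digest()
        # three two-byte slices become x, y, z in [0, 1)
        x, y, z = (int.from_bytes(digest[i:i + 2], "big") / 65536 for i in (0, 2, 4))
        points.append((x, y, z))
    return tuple(sum(axis) / len(points) for axis in zip(*points))

print(seed_coordinate(["music", "compilers", "astronomy"]))
```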


Two major arenas: the Cloud, holding all ordered objects, and the Soup, a more acoustic space focusing on messages.

Messages without a destination are sent to the Soup, and whoever has an attention level set at a low-enough threshold will receive these "random" messages from the network. This threshold can be set automatically depending on the mode of the user, and long periods without any messages might lower the threshold automatically, so that no one can be completely "deaf" to the Soup.
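A sketch of that delivery rule. The urgency scale, the per-mode defaults, and the rate at which silence lowers the threshold are all assumptions made for illustration:

```python
import time

class SoupListener:
    """A user tuned in to the Soup with an adjustable attention threshold."""
    def __init__(self, name: str, mode: str = "normal"):
        self.name = name
        # threshold set automatically from the user's mode (values are illustrative)
        self.threshold = {"focused": 0.9, "normal": 0.6, "idle": 0.3}[mode]
        self.last_heard = time.monotonic()

    def maybe_receive(self, urgency: float, text: str) -> bool:
        # long silence gradually lowers the effective threshold,
        # so no listener can be completely "deaf" to the Soup
        silence = time.monotonic() - self.last_heard
        effective = max(0.0, self.threshold - 0.001 * silence)
        if urgency >= effective:
            self.last_heard = time.monotonic()
            print(f"{self.name} hears: {text}")
            return True
        return False

def broadcast(listeners, urgency: float, text: str) -> None:
    """A message with no destination goes to every listener whose threshold allows it."""
    for listener in listeners:
        listener.maybe_receive(urgency, text)
```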


Nodes in CYBERSPACE can connect to any other node in the message network, including nodes inside another node. Once such a link is established, the inner node gravitates to the edge of the outer node (which must give permission) in the visualization, making contact.
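A rough sketch of that linking rule under the same assumptions (the class and field names are illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VizNode:
    name: str
    parent: Optional["VizNode"] = None
    allows_contact: bool = True            # outer node's permission to touch its edge
    links: List["VizNode"] = field(default_factory=list)
    gravitate_to_edge: bool = False        # visualization hint

def link(inner: VizNode, other: VizNode) -> None:
    """Connect two nodes; links may cross node boundaries."""
    inner.links.append(other)
    outer = inner.parent
    if outer is not None and outer.allows_contact:
        # with the outer node's permission, the inner node drifts to its surface
        inner.gravitate_to_edge = True
```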