Sufficient coordinate spaces
What is a sufficient coordinate space for a task?
I had initially wanted to do this project with local sensing, very limited local communication, and no absolute positioning.
Having a UI precludes having no positioning at all. The positions of the robots are known well enough to display them on the screen, and the top-down view of the area assumes that you can see the area from the top down, more or less.
Could have a fog-of-war style of visualization, where the immediate area around the robots is all that is initially visible. That still assumes that the robots' locations in the world are known, at least to the user interface, and at that point, there's no reason for it not to provide them back to the robots.
If the user interface is also displaying the positions of the robots in realtime, then it is also keeping track of their positions, so again, the robots have essentially global positioning (really just positioning in the frame of the UI work area, but since they're not leaving that area, it is effectively their "globe").
Local-only sensing and messaging is still cool
So what would a UI look like that didn't have any of the global localization stuff?
At first, nothing, but then each robot could build its local map, and the local maps could be merged as the robots developed a shared coordinate system. Eventually, the system would converge to having the "global" localization and top-down view, but it would have been built without the system having to start with a global map.
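A minimal sketch of the merge step, assuming each robot keeps a small set of landmark points in its own frame and the relative pose between two robots has already been estimated somehow (`Pose2D` and `merge_maps` are hypothetical names for illustration, not anything in TinyRobo):

```python
import math

# Hypothetical 2D pose of one robot's frame expressed in another's.
class Pose2D:
    def __init__(self, x, y, theta):
        self.x, self.y, self.theta = x, y, theta

def merge_maps(map_a, map_b, b_in_a):
    """Merge robot B's landmark points into robot A's frame.

    map_a, map_b: lists of (x, y) points in each robot's local frame.
    b_in_a: Pose2D of robot B's frame expressed in robot A's frame,
            assumed to come from some relative-pose estimate.
    """
    c, s = math.cos(b_in_a.theta), math.sin(b_in_a.theta)
    # Rotate each of B's points into A's orientation, then translate.
    transformed = [(b_in_a.x + c * x - s * y,
                    b_in_a.y + s * x + c * y) for (x, y) in map_b]
    return map_a + transformed

# Example: B sits 1m ahead of A, rotated 90 degrees.
merged = merge_maps([(0.0, 0.0)], [(1.0, 0.0)], Pose2D(1.0, 0.0, math.pi / 2))
print(merged)  # [(0.0, 0.0), (1.0, 1.0)]
```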
This makes it sound like the most interesting thing to do is have communicating agents build a global map, which is pretty well covered in the literature anyway.
The UI could also just not show the robots, and the user's gestures would be to a swarm of robots that is known to be "out there", but doesn't have any known locations. Then the gestures would be targeted at the environment, and would do things like describing the desired changes, rather than the agents used to do those changes.
An invisible-robots UI fits in well with the idea that the program is generated and sent to the robots once, and that they are then on their way to do the thing, without having to report back or be followed. There would be only very intermittent communication between the swarm and the UI, with no feedback or position updating.
Every robot has an absolute sense of its location
Makes a lot of things easier, since instructions can be formulated in terms of positions to go to, and algorithms like A* can be used to get there.
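A minimal sketch of what that looks like, assuming the shared absolute frame is discretized into an occupancy grid (plain grid A* with a Manhattan heuristic; nothing here is specific to TinyRobo):

```python
import heapq
import itertools

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = occupied).

    start, goal: (row, col) cells in the shared absolute frame.
    Returns the path as a list of cells, or None if unreachable.
    """
    def h(cell):  # Manhattan distance, admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # tiebreaker so the heap never compares cells
    frontier = [(h(start), next(tie), 0, start, None)]
    parents = {}  # also serves as the closed set
    best_g = {start: 0}
    while frontier:
        _, _, g, cell, parent = heapq.heappop(frontier)
        if cell in parents:
            continue
        parents[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:  # walk parent pointers back to start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0
                    and g + 1 < best_g.get(nxt, float('inf'))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cell))
    return None

# Example: route around a wall in a 3x3 grid.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (0, 2)))
```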
Doesn't work well indoors or in a disrupted environment. GPS denial is easy.
Quorum sensing and gradients
If the robots have directional sensing for messages, then gradients across the body of the robot are easy. The sensor with the highest value marks the "upstream" side.
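A minimal sketch, assuming a ring of evenly spaced directional message sensors at known body-frame bearings (the layout and function name are made up for illustration):

```python
import math

def upstream_bearing(readings, n_sensors=8):
    """Pick the 'upstream' side of a gradient from directional sensors.

    readings: one signal strength per sensor, indexed around the body.
    Sensors are assumed evenly spaced, with sensor 0 pointing forward.
    Returns the body-frame bearing (radians) of the strongest reading.
    """
    strongest = max(range(n_sensors), key=lambda i: readings[i])
    return strongest * 2 * math.pi / n_sensors

# Example: sensor 2 of 8 (90 degrees around the body) is strongest.
print(upstream_bearing([0.1, 0.4, 0.9, 0.3, 0.0, 0.0, 0.0, 0.1]))  # ~1.571
```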
If the robots have distance sensing for messages, then gradient propagation of a message can append the distance from the message sender to the message receiver, so they can figure out the minimum distance WITHIN the spanning tree. They can't, however, get the actual minimum distance (the path through the tree might be crooked).
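A minimal sketch of that distance-accumulating relay, assuming each robot can measure the range to whichever neighbor it heard the message from (the message fields are made up for illustration). The accumulated value is the path length through the spanning tree, so it can only overestimate the straight-line distance to the source:

```python
def relay(message, range_to_sender):
    """Re-broadcast a gradient message, accumulating path distance.

    message: dict carrying the source id and the distance so far.
    range_to_sender: measured range to the robot we heard it from.
    """
    return {
        "source": message["source"],
        "distance": message["distance"] + range_to_sender,
        "hops": message["hops"] + 1,
    }

def update_estimate(current_best, incoming):
    """Keep the minimum accumulated distance heard for a given source."""
    if current_best is None or incoming["distance"] < current_best["distance"]:
        return incoming
    return current_best
```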
If they know distance and angle, then they can use various trig-based (or vector-math) methods to figure out the real distance: each robot passes along "I see the target at this heading and distance", and the receiving robot combines that with the known heading and distance to the robot that sent the message.
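A minimal sketch of the vector version, assuming all bearings are expressed in a shared heading reference (e.g., every robot has a compass), so the two range/bearing observations can simply be added as vectors:

```python
import math

def target_from_relay(range_to_sender, bearing_to_sender,
                      sender_range_to_target, sender_bearing_to_target):
    """Chain two range/bearing observations into one.

    Assumes all bearings share a common reference (e.g., compass
    north). Returns the receiving robot's range and bearing to the
    target.
    """
    # Vector from receiver to sender...
    x = range_to_sender * math.cos(bearing_to_sender)
    y = range_to_sender * math.sin(bearing_to_sender)
    # ...plus vector from sender to target.
    x += sender_range_to_target * math.cos(sender_bearing_to_target)
    y += sender_range_to_target * math.sin(sender_bearing_to_target)
    return math.hypot(x, y), math.atan2(y, x)

# Example: sender is 2m due east; it sees the target 1m due north.
print(target_from_relay(2.0, 0.0, 1.0, math.pi / 2))  # (~2.236, ~0.464)
```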