Command Routing - randyhook/knynet GitHub Wiki
The following diagram gives an overview of the components involved when an Agent hears a command and stores the knowledge for possible later action.
Let's look at each component.
Audio, consisting of the message, "Chairbot. Take me outside.", is picked up by the robot's Audio Sensor. The robot's SensoryProcessor passes this SensoryData on to the Agent.
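The hand-off from sensor to Agent can be sketched roughly as follows. This is a minimal, hypothetical illustration: the wiki does not specify the actual interfaces, so the fields and method names (`modality`, `payload`, `receive`, `process`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SensoryData:
    modality: str   # e.g. "audio"
    payload: str    # the raw captured content

class Agent:
    def __init__(self):
        self.inbox = []

    def receive(self, data: SensoryData):
        # The Agent queues incoming SensoryData for further handling.
        self.inbox.append(data)

class SensoryProcessor:
    def __init__(self, agent: Agent):
        self.agent = agent

    def process(self, data: SensoryData):
        # Pass the SensoryData on to the Agent unchanged.
        self.agent.receive(data)

agent = Agent()
processor = SensoryProcessor(agent)
processor.process(SensoryData(modality="audio", payload="Chairbot. Take me outside."))
```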
The Agent is aware of its AudioAgency, so it passes the data along. The AudioAgency determines that the data contains natural language, so it employs the NaturalLanguageAgency to parse and classify the message, yielding standard SensoryEncoded data the Agent can work with. In this case, the data is SpokenLanguage and it has been classified as a Command.
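The parse-and-classify step might look something like the sketch below. The keyword-based classifier is a deliberately toy assumption (a real NaturalLanguageAgency would use proper NLP); only the component names come from the description above.

```python
from dataclasses import dataclass

@dataclass
class SpokenLanguage:
    text: str
    classification: str  # e.g. "Command" or "Statement"

class NaturalLanguageAgency:
    # Toy heuristic: an imperative verb marks the utterance as a Command.
    COMMAND_VERBS = {"take", "bring", "go", "stop"}

    def parse(self, text: str) -> SpokenLanguage:
        words = {w.strip(".,!?").lower() for w in text.split()}
        kind = "Command" if words & self.COMMAND_VERBS else "Statement"
        return SpokenLanguage(text=text, classification=kind)

class AudioAgency:
    def __init__(self):
        self.nl = NaturalLanguageAgency()

    def handle(self, raw_audio_text: str) -> SpokenLanguage:
        # The AudioAgency detects natural language and delegates to
        # the NaturalLanguageAgency for parsing and classification.
        return self.nl.parse(raw_audio_text)

encoded = AudioAgency().handle("Chairbot. Take me outside.")
```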
At this point, the Agent stores the SpokenLanguage in its Memory, even if the Command was not addressed to it. The reason is that the Agent may still make use of what it heard, whether now or in the future. For example, if the Agent heard Kenny's mom say "Kenny, I don't want you going outside today. You still have a cold.", it could take this statement into consideration if Kenny asks it to take him outside later.
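The store-everything behavior above could be sketched like this. The Memory API shown (a `store`/`recall` pair) is an assumption for illustration, not the wiki's actual interface.

```python
class Memory:
    """Stores every SpokenLanguage utterance heard, addressed to the
    Agent or not, so it can inform decisions now or in the future."""

    def __init__(self):
        self._utterances = []

    def store(self, speaker: str, text: str):
        self._utterances.append((speaker, text))

    def recall(self, keyword: str):
        # Retrieve everything previously heard that mentions the keyword.
        return [(s, t) for (s, t) in self._utterances if keyword in t.lower()]

memory = Memory()
# Overheard and not addressed to the Agent -- stored anyway.
memory.store("Kenny's mom", "Kenny, I don't want you going outside today.")
# Later, when Kenny asks to be taken outside, the Agent can
# take this earlier statement into consideration.
relevant = memory.recall("outside")
```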
The Agent also passes the SpokenLanguage on to the DecisionEngine. The DecisionEngine determines whether the command was addressed to the Agent and, if so, consults the Laws and its KnowledgeBase to formulate a Goal, which is to bring Kenny outside. Note that many more variables are involved in Goal creation, such as considering the Chain of Command, but for this example we are simplifying. The Goal is now part of the Agent's Belief-Desire-Intention Model, and the Agent is ready to take action.
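A heavily simplified, hypothetical sketch of the DecisionEngine step: check that the Command is addressed to this Agent, consult the Laws, and formulate a Goal that joins the Agent's intentions. The representation of Laws as boolean predicates and Goals as strings is an assumption; as noted above, real Goal creation involves far more variables.

```python
class DecisionEngine:
    def __init__(self, agent_name: str, laws):
        self.agent_name = agent_name
        self.laws = laws        # predicates every Goal must satisfy
        self.intentions = []    # the "intention" part of the BDI model

    def handle_command(self, addressee: str, action: str):
        if addressee != self.agent_name:
            return None         # command was not addressed to this Agent
        if not all(law(action) for law in self.laws):
            return None         # a Law forbids the action
        goal = f"Goal: {action}"
        self.intentions.append(goal)
        return goal

# Example Law (assumed, for illustration): never go out during a storm.
laws = [lambda action: "during storm" not in action]
engine = DecisionEngine("Chairbot", laws)
goal = engine.handle_command("Chairbot", "bring Kenny outside")
```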
Simulator Note
In the simulation, we would send SpokenLanguage directly to SimAgent.
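That shortcut might look like the following sketch, where the simulation bypasses the Audio Sensor and SensoryProcessor entirely. The `receive_spoken_language` method is an assumed name; only SimAgent and SpokenLanguage appear in the text above.

```python
class SimAgent:
    def __init__(self):
        self.heard = []

    def receive_spoken_language(self, text: str, classification: str):
        # In simulation, SpokenLanguage arrives directly,
        # with no sensory pipeline in between.
        self.heard.append((text, classification))

sim = SimAgent()
sim.receive_spoken_language("Chairbot. Take me outside.", "Command")
```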