LIO - quasylab/sibilla GitHub Wiki

LIO is a formalism introduced in "A Generic Mean Field Convergence Result for Systems of Interacting Objects", where a system consists of a population of N identical interacting objects. At any step of computation, each object can be in one of its finitely many states. Objects evolve following a clock-synchronous computation.

Each member of the population must either execute one of the transitions that are enabled in its current state (by executing an action), or remain in that state. This choice is made according to a probability distribution that depends on the whole system state.

The stochastic process associated with a LIO specification is a Discrete Time Markov Chain (DTMC) whose states are vectors of counting variables, one for each agent state, recording the number of agents currently in that state. LIO models have been used to describe a number of case studies in different application domains.
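
For instance, for an agent with two local states s1 and s2 in a population of size N, a DTMC state is the pair (n1, n2), with n1 + n2 = N, where ni counts the agents currently in state si.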

To describe the structure of a LIO Model, let us consider a system composed of N agents that can be either blue or red coloured. The goal is to guarantee that, without any centralized control, agents evolve to a balanced configuration where the number of blue agents is similar to the number of red agents.

We can assume that each agent in the system can be either in the state B, indicating that the agent is blue coloured, or in the state R, when the agent is red. A blue agent B can execute action changeInRed to become a red agent R, while a red agent R can execute action changeInBlue to become a blue agent B.

Each LIO Model consists of a sequence of elements declaring:

  • constants and model parameters;
  • actions;
  • agent states;
  • measures;
  • predicates;
  • system configurations.

Constants and Parameters

Constants and parameters are used to associate values with names. However, differently from constants, parameters can be changed at runtime to consider the model under different conditions. The syntax is the same as the one used for Population Model.

In our red-blue scenario we can consider three parameters:

param NUMBER_OF_RED = 10
param NUMBER_OF_BLUE = 10

param meetingProbability = 0.25

These parameters represent the number of red agents in the system, the number of blue agents in the system, and the probability that, in a step, one agent meets another agent.

Actions

An action represents an activity that an agent executes in a step. Each action has a (unique) name and a probability:

action changeInBlue = %R*meetingProbability
action changeInRed  = %B*meetingProbability

In our model action changeInBlue is executed with probability %R*meetingProbability, representing the fact that an agent is met (with probability meetingProbability) and that the met agent is red. The latter is computed as %R, which represents the fraction of agents in the state R. The declaration of action changeInRed is similar.
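
For instance, with the parameters declared above, in a configuration with 10 red and 10 blue agents we have %R = 10/20 = 0.5, so a red agent executes changeInBlue with probability 0.5*0.25 = 0.125.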

Agent States

The behaviour of an agent is represented as an automaton. The declaration of each state has the following syntax:

state <statename> =
    <action>.<statename>
    (+
    <action>.<statename>)*
endstate

In our case, the declaration is the following:

state B =
    changeInRed.R
endstate

state R =
    changeInBlue.B
endstate

Configurations

A configuration represents the initial state of a system. It associates a name with a population of agents. The syntax is the following:

system <configurationname> =
    <agentname>(#<arity>)?(|<agentname>(#<arity>)?)*
endsystem

A population consists of a sequence of agent expressions separated by the symbol |. Each agent expression consists of an agent name followed by an (optional) arity.

In our scenario the following declaration is considered:

system InitialConfiguration =
    B#NUMBER_OF_BLUE | R#NUMBER_OF_RED
endsystem

In this case we have a population consisting of NUMBER_OF_BLUE blue agents and NUMBER_OF_RED red agents.
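
Putting the fragments above together, a complete specification of the red-blue model can be sketched as follows (measures and predicates, which are not used in this example, are omitted; the elements appear in the order listed at the beginning of this page):

param NUMBER_OF_RED = 10
param NUMBER_OF_BLUE = 10

param meetingProbability = 0.25

action changeInBlue = %R*meetingProbability
action changeInRed  = %B*meetingProbability

state B =
    changeInRed.R
endstate

state R =
    changeInBlue.B
endstate

system InitialConfiguration =
    B#NUMBER_OF_BLUE | R#NUMBER_OF_RED
endsystem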
