Configure DSG
DSG disaggregates the operations on a scene (e.g. physics, script execution, client communication) and distributes them across a set of simulators to scale virtual world operations. For instance, a common challenge for existing virtual worlds is supporting a large number of concurrent participants in the same virtual space, usually because of communication bottlenecks between the server that hosts the virtual space and all the clients connecting to it. With DSG, the communication operations can be distributed across multiple servers. In other cases, the physics simulation may become the bottleneck (say, too many physical objects to simulate); with DSG, the physics simulation can likewise be distributed across multiple servers. To configure a DSG system for a large workload, you therefore configure a set of simulators that each run a portion of the workload and work together to support the activities and interactions inside the same virtual space. We will illustrate the configuration process through some examples. For the purpose of describing and configuring a DSG system, we first define the concept of actors:
An actor in a DSG system is a simulation engine that executes a set of operations to provide some functionality in a virtual world. For example, the physics actor (or physics engine) is the set of operations that simulates physics; the script actor (or script engine) is the set of operations that executes the scripts defined for objects. In addition, the set of operations that handles communication with clients is the client management actor, and the set of operations that maintains a persistent copy of the scene over time and across system restarts is the persistence actor: it keeps a copy of the scene that is always accessible and periodically persists the scene to a database or other storage.
A sample configuration of a DSG system, which we have run and tested extensively, is one in which a set of client managers (to balance the load of client connections) and the script, physics, and persistence actors are configured to run in separate simulators. The simulators can then run on a number of servers. The DSG synchronization service glues the results from these simulators together, synchronizes the scene state, and presents a consistent view to clients. Note that in a DSG system, depending on resource provisioning, different actors can be configured to run in separate simulators, or some or all of them can run in the same simulator. In the discussion below, we focus on demonstrating how to set up the sample configuration described above, where all actors run in separate simulators.

The figure illustrates the different simulators that need to be configured in the sample configuration mentioned above. Each simulator is configured to run one actor (as noted before, it is possible to configure multiple actors to run inside the same simulator if needed). All simulators are connected via an overlay network (different topologies are supported, for instance a star topology or a tree topology that connects all the simulators) and the DSG synchronization service, which synchronizes the replicated scenes on the simulators. Clients are allowed to connect to the client managers only, and client connections are load balanced across the client managers. Usually, the client managers would be located in different geographical locations to provide better network connectivity to the clients in those locations. With the support of the DSG synchronization service, clients logging into different client managers are able to experience the same scene and interact with each other.

Our DSG code is implemented on top of OpenSim, so each DSG simulator is an OpenSim simulator plus the DSG services. An unmodified OpenSim simulator runs all the actors (persistence, physics, script, and client management) in the same simulator. In a DSG system, however, we need to configure which actor(s) run in each simulator. For instance, for the simulator that runs the physics engine, we enable the physics engine while disabling scripts, persistence, and client management (by not allowing clients to connect to that simulator).

In general, configuring a DSG system involves the following. First, provision the workloads and map the DSG simulators to servers. For example, you may map each simulator to a separate server; or, if you have one powerful server, you may map the persistence, script, and physics simulators to that server and map each client manager to a server with abundant outgoing bandwidth. (In the current version of DSG, load balancing is done statically; dynamic load balancing is future work, see our microcell-based load balancing discussion.) Second, for each DSG simulator, go through the following configuration steps:
1. Enable the DSG modules.
2. Configure the actor(s) to run on the simulator: enable the actor(s) that should run on the simulator and disable the others (see the per-actor configuration sections below). Depending on the number of concurrent client connections to your system and the capabilities of your server machines, you may need to configure a number of client managers to distribute the communication workload.
3. Define the DSG statistics logging parameters (if stats such as DSG bandwidth, server CPU, memory, etc., are to be logged).
4. Set the grid service URLs to point to the grid servers that have been set up and are running. DSG can run with either ROBUST or Simian grid services. This step is similar to the grid service URL setup in OpenSim.
5. Define the virtual space inside which activities will take place. OpenSim defines regions with a fixed size of 256x256. A larger space has to be composed of multiple regions, and region crossing has its own problems and limitations. In the examples below, we assume that the virtual space is the size of one OpenSim region. We have ongoing work on addressing larger virtual spaces (see the quark work). In the current DSG implementation, you need to define a unique region name, region UUID, and region location for the region defined on each DSG simulator; see the OpenSim configuration documentation for how to configure a region. Note: assuming the virtual space is the size of one OpenSim region, we define one OpenSim region for each DSG simulator so that, following OpenSim's mechanisms, each simulator can store a copy of the scene of the virtual space (the content inside the space). Because such a region is merely a container holding a copy of the same scene, the regions defined on different DSG simulators need locations (Region Location) far enough away from each other that once a user logs into a client manager (which has a region defined that holds a copy of the scene), the user will not see the regions defined for the other simulators on the map and therefore will not get confused.
6. Define the topology that connects the simulators together. The topology we have used and tested the most is the star topology, where the simulator that hosts the persistence actor is the central hub (hereafter referred to as the hub simulator) and all other simulators connect to it. This includes configuring the hub simulator to listen on a given port for incoming connections, and configuring the other simulators to connect to the hub simulator when they are started.

For configuration steps 1 to 4 above, OpenSim.ini is the configuration file we need to edit. For steps 5 and 6, we also need to work with Regions/Region.ini. For the configuration items in OpenSim.ini and Regions/Region.ini that are not touched by the DSG configuration, please refer to OpenSim's configuration instructions; alternatively, you can leave those items at their default settings and edit only the items mentioned in the instructions here.
The sections below walk through the sample configuration actor by actor, starting with the persistence actor.

Persistence actor. In the persistence simulator's OpenSim.ini, first enable the DSG modules and assign the simulator a unique actor ID:

```ini
[RegionSyncModule]
Enabled = True
ActorID = "persist"   ; this ID identifies each DSG simulator
```
1. Enable persistence functions. This example uses MySQL on localhost (the machine where the persistence simulator will run):

```ini
[Startup]
storage_connection_string = "Data Source=localhost;Database=opensim;User ID=DSG;Password=dsg;"
storage_plugin = "OpenSim.Data.MySQL.dll"
```
2. Disable scripts:

```ini
[XEngine]
Enabled = False
```
3. Disable physics:

```ini
[Startup]
physics = "RemotePhysicsEngine"
```
4. Disable client management: no configuration changes are needed; simply do not publish the region name of the persistence region.
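Putting the four steps together, the DSG-relevant portion of the persistence simulator's OpenSim.ini looks roughly like the following sketch; the MySQL credentials are the example values from above.

```ini
; Sketch: DSG-relevant settings for the persistence simulator.
[RegionSyncModule]
Enabled = True
ActorID = "persist"

[Startup]
storage_plugin = "OpenSim.Data.MySQL.dll"
storage_connection_string = "Data Source=localhost;Database=opensim;User ID=DSG;Password=dsg;"
physics = "RemotePhysicsEngine"   ; no local physics simulation

[XEngine]
Enabled = False                   ; no local script execution
```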
Script engine actor. For the simulator that runs the script engine:

- Enable scripts:

```ini
[XEngine]
Enabled = True
```
- Disable physics and client management (same as for the persistence actor above).
- Disable persistence of the scene:

```ini
[Startup]
storage_connection_string = ""
storage_plugin = "OpenSim.Data.Null.dll"
```
Physics engine actor. For the simulator that runs the physics engine:

- Enable physics: set physics to point to the physics engine you'll use for the system (e.g. OpenDynamicsEngine or BulletSim):

```ini
[Startup]
physics = "BulletSim"
```
- Disable the script engine and client management (same as for the persistence actor above).
- Disable persistence of the scene (same as for the script engine above).
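The combined OpenSim.ini sketch for the physics simulator, with "physics" as an example ActorID:

```ini
; Sketch: DSG-relevant settings for the physics-engine simulator.
[RegionSyncModule]
Enabled = True
ActorID = "physics"   ; example value; any unique ID works

[Startup]
physics = "BulletSim" ; or OpenDynamicsEngine
storage_plugin = "OpenSim.Data.Null.dll" ; no local persistence
storage_connection_string = ""

[XEngine]
Enabled = False       ; no local script execution
```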
Client manager actor. For each simulator that runs a client manager:

- Enable client access to the simulator. The simplest approach is to publish the URLs of all client managers and ask each user to pick one to connect to (for example, the location of each client manager can be disclosed, and a user can choose one located close to him/her). A better solution is to publish a single URL and redirect user connections to one of the client managers according to some load balancing policy; this can be done with the help of the grid service (see Simian).
- Disable the script, physics, and persistence operations (see above).
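A combined OpenSim.ini sketch for one client manager simulator, with "cm1" as an example ActorID. Enabling client access needs no extra DSG setting here; clients simply connect to this simulator's published address.

```ini
; Sketch: DSG-relevant settings for a client manager simulator.
[RegionSyncModule]
Enabled = True
ActorID = "cm1"   ; example value; one unique ID per client manager

[Startup]
physics = "RemotePhysicsEngine"          ; no local physics simulation
storage_plugin = "OpenSim.Data.Null.dll" ; no local persistence
storage_connection_string = ""

[XEngine]
Enabled = False                          ; no local script execution
```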
DSG statistics logging is configured in the [RegionSyncModule] section of OpenSim.ini:

```ini
[RegionSyncModule]
DetailLogEnabled = true            ; turns on logging of every message's details. Default is 'false'
DetailLogDirectory = "../logs/"    ; where to store the logs. The directory MUST exist. Default is "."
DetailLogPrefix = "log-%ACTORID%-" ; prefix added to each detail log file. Default is "log-%ACTORID%-"
DetailLogMaxFileTimeMin = 5        ; realtime minutes recorded into each log file before rolling over to a new file. Default is 5 minutes

StatisticLoggingEnable = true      ; enable DSG statistics gathering. Default is 'false'
LogIntervalSeconds = 10            ; seconds between each sampling and writing out of statistics

LogSyncConnectorEnable = true      ; enable SyncConnector statistics gathering and logging. Default is 'false'
LogSyncConnectorDirectory = "../logs/" ; where to store the logs. The directory MUST exist. Default is "."
LogSyncConnectorFilenamePrefix = "conn-%CONTAINER%-%THISACTOR%-%OTHERSIDEACTOR%-"
LogSyncConnectorIncludeTitleLine = true ; whether to start each log file with a line naming the fields
LogSyncConnectorFileTimeMinutes = 10    ; realtime minutes recorded into each log file before rolling over to a new file. Default is 10 minutes
LogSyncConnectorFlushWrites = false     ; whether to flush after each write. Default is 'false'; useful for tracking crashes

LogRegionEnable = true
LogRegionDirectory = "../logs/"
LogRegionFilenamePrefix = "%CATEGORY%-%CONTAINER%-%REGIONNAME%-"
LogRegionIncludeTitleLine = true
LogRegionFileTimeMinutes = 10
LogRegionFlushWrites = false

LogServerEnable = true
LogServerDirectory = "../logs/"
LogServerFilenamePrefix = "%CATEGORY%-%REGIONNAME%-"
LogServerIncludeTitleLine = true
LogServerFileTimeMinutes = 10
LogServerFlushWrites = false

LogLLUDPBWAggEnabled = true
LogLLUDPBWAggDirectory = "../logs/"
LogLLUDPBWAggFilenamePrefix = "%REGIONNAME%-LLUDPAggBytes"
LogLLUDPBWAggIncludeTitleLine = true
LogLLUDPBWAggFileTimeMinutes = 10
LogLLUDPBWAggFlushWrites = false
LogLLUDPBWAggInterval = 10         ; interval in seconds
```
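If you only want the periodic DSG statistics and none of the per-message or per-connector logs, a minimal sketch (using only the keys shown above, with the other Log* options left at their 'false' defaults) is:

```ini
; Minimal statistics logging: sample and write DSG stats every 10 seconds.
[RegionSyncModule]
StatisticLoggingEnable = true
LogIntervalSeconds = 10
```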
Next, set the grid service URLs. The following setup is for a Simian grid service:

```ini
AssetServerURI = http://intsim.jf.intel.com/Grid/?id=
AuthenticationServerURI = http://intsim.jf.intel.com/Grid/
AvatarServerURI = http://intsim.jf.intel.com/Grid/
Cap_GetTexture = http://intsim.jf.intel.com/GridPublic/GetTexture/ ; redirect texture downloads from the simulator to an HTTP server that serves them
FriendsServerURI = http://intsim.jf.intel.com/Grid/
GridServerURI = http://intsim.jf.intel.com/Grid/
GridUserServerURI = http://intsim.jf.intel.com/Grid/
GroupsServerURI = http://intsim.jf.intel.com/Grid/
InventoryServerURI = http://intsim.jf.intel.com/Grid/
PresenceServerURI = http://intsim.jf.intel.com/Grid/
SimianServiceURL = http://intsim.jf.intel.com/Grid/
UserAccountServerURI = http://intsim.jf.intel.com/Grid/
```
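For your own deployment, the same keys point at your own grid; a sketch with a placeholder host (your.grid.example stands in for the host that runs your Simian services):

```ini
; Placeholder values -- substitute the host of your own Simian grid.
AssetServerURI = http://your.grid.example/Grid/?id=
AuthenticationServerURI = http://your.grid.example/Grid/
GridServerURI = http://your.grid.example/Grid/
SimianServiceURL = http://your.grid.example/Grid/
; ...and likewise for the remaining *ServerURI entries listed above.
```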
Each DSG simulator defines its container region in Regions/Region.ini. We list only the configuration necessary to define a unique region for each simulator; for the other items in Regions/Region.ini, please see the OpenSim configuration documentation for details.
```ini
; One container region per simulator; each of the five sections below
; goes into the Regions/Region.ini of its own simulator.
[psa00]
RegionUUID = ace73893-adc9-4431-847c-120001200000
Location = 5000,5000

[pea00]
RegionUUID = ace73893-adc9-4431-847c-121001200000
Location = 5100,5000

[sea100]
RegionUUID = ace73893-adc9-4431-847c-122001200000
Location = 5200,5000

[cm100]
RegionUUID = ace73893-adc9-4431-847c-124001200000
Location = 5400,5000

[cm200]
RegionUUID = ace73893-adc9-4431-847c-125001200000
Location = 5500,5000
```
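When defining your own regions, each simulator needs a fresh region name, a fresh UUID, and a location well away from the others. A template sketch (angle-bracket values are placeholders; a tool such as uuidgen can generate a fresh UUID):

```ini
; Template for the container region on one DSG simulator.
[<unique region name>]
RegionUUID = <fresh UUID, e.g. from uuidgen>
Location = <grid X>,<grid Y>   ; keep different simulators' locations far apart (e.g. 100 units, as above)
```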
Finally, we show the configuration for setting up a star topology, where the simulator hosting the persistence actor is the central hub and all other simulators connect to it to establish their DSG connections.
On the hub simulator, set in OpenSim.ini:

```ini
[RegionSyncModule]
IsHub = True
```

and in Regions/Region.ini, the address and port on which the hub listens for incoming DSG connections:

```ini
SyncServerAddress = 127.0.0.1
SyncServerPort = 15000
```
On each of the other simulators, set in OpenSim.ini:

```ini
[RegionSyncModule]
IsHub = False
```

and in Regions/Region.ini, the address and port of the hub simulator to connect to:

```ini
SyncServerAddress = <IP address of the machine the persistence actor runs on>
SyncServerPort = 15000
```
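As a concrete sketch, the script simulator's Regions/Region.ini would then combine its region definition with the sync settings. We assume here (it is not shown explicitly in the snippets above) that the Sync* settings sit in the region's own section, and 10.0.0.1 is a placeholder for the hub's address:

```ini
; Sketch: Regions/Region.ini for the script-engine simulator.
; Assumption: the Sync* settings live in the region's own section,
; next to the region definition. 10.0.0.1 is a placeholder for the
; address of the hub (persistence) simulator.
[sea100]
RegionUUID = ace73893-adc9-4431-847c-122001200000
Location = 5200,5000
SyncServerAddress = 10.0.0.1
SyncServerPort = 15000
```

Since the other simulators connect to the hub when they start, start the hub simulator first, so that it is already listening on SyncServerPort before the others come up.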