Data Types - TUMFARSynchrony/SynthARium GitHub Wiki

Format

Required variable: <name>: <type>

Optional variable: <name>?: <type>

Object: { … }

Array: [ … ]


Message

{
	type: string,
	data: any
}

Variables

  • type: API endpoint. For possible types, see the Message Types table below; for endpoint details, see Backend API.
  • data: contents of the message, e.g. for type "SAVE_SESSION" the data is a Session (see the example below).
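
A minimal sketch of the envelope, assuming messages are serialized as JSON; the payload shown is illustrative and abbreviated:

{
	type: "SAVE_SESSION",
	data: {
		// a Session object, see Session below (abbreviated here)
		id: "",
		title: "Synchrony pilot study"
	}
}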

Message Types

For API details, see Backend API.

| Type | Data Type | Target |
| --- | --- | --- |
| SUCCESS | Success | Client |
| ERROR | Error | Client |
| PING | { ...optional arbitrary data } | Either |
| SESSION_DESCRIPTION | RTCSessionDescription | Client (used internally by Connection) |
| CONNECTION_PROPOSAL | ConnectionProposal | Client (used internally by Connection) |
| CONNECTION_OFFER | ConnectionOffer | Server (used internally by Connection) |
| CONNECTION_ANSWER | ConnectionAnswer | Client (used internally by Connection) |
| ADD_ICE_CANDIDATE | AddIceCandidate | Server (used internally by Connection) |
| SAVE_SESSION | Session | Server |
| SAVED_SESSION | Session | Client |
| SESSION_CHANGE | Session | Client |
| DELETE_SESSION | SessionIdRequest | Server |
| DELETED_SESSION | SessionIdRequest | Client |
| GET_SESSION_LIST | {} | Server |
| SESSION_LIST | [ Session ] | Client |
| CHAT | ChatMessage | Server |
| CREATE_EXPERIMENT | SessionIdRequest | Server |
| JOIN_EXPERIMENT | SessionIdRequest | Server |
| LEAVE_EXPERIMENT | {} | Server |
| START_EXPERIMENT | {} | Server |
| STOP_EXPERIMENT | {} | Server |
| EXPERIMENT_CREATED | { session_id: string, creation_time: number } | Client |
| EXPERIMENT_STARTED | { start_time: number } | Client |
| EXPERIMENT_ENDED | { end_time: number, start_time: number } | Client |
| ADD_NOTE | Note | Server |
| KICK_PARTICIPANT | KickRequest | Server |
| BAN_PARTICIPANT | KickRequest | Server |
| KICK_NOTIFICATION | KickNotification | Client |
| BAN_NOTIFICATION | KickNotification | Client |
| MUTE | MuteRequest | Server |
| SET_FILTERS | SetFiltersRequest | Server |
| SET_GROUP_FILTERS | SetGroupFiltersRequest | Server |
| GET_FILTERS_DATA | GetFiltersDataRequest | Server |
| GET_FILTERS_DATA_SEND_TO_PARTICIPANT | GetFiltersDataSendToParticipantRequest | Server |
| FILTERS_DATA | { [participant_id]: FiltersData } | Client |
| GET_FILTERS_CONFIG | {} | Server |
| FILTERS_CONFIG | FilterConfig | Client |

Error

{
	code: number,
	type: string,
	description: string
}

Variables

  • code: HTTP response status code. See the MDN Web Docs.
  • type: error type, see below.
  • description: error description

Error Types

  • NOT_IMPLEMENTED
  • INTERNAL_SERVER_ERROR
  • INVALID_REQUEST
  • INVALID_DATATYPE
  • UNKNOWN_ID
  • DUPLICATE_ID
  • UNKNOWN_SESSION
  • UNKNOWN_EXPERIMENT
  • UNKNOWN_PARTICIPANT
  • UNKNOWN_USER
  • UNKNOWN_SUBCONNECTION_ID
  • BANNED_PARTICIPANT
  • PARTICIPANT_ALREADY_CONNECTED
  • EXPERIMENT_ALREADY_STARTED
  • INVALID_PARAMETER
  • NOT_CONNECTED_TO_EXPERIMENT
  • EXPERIMENT_RUNNING
  • ALREADY_JOINED_EXPERIMENT
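
An illustrative ERROR message using one of the error types above; the status code and description text are made up for illustration:

{
	type: "ERROR",
	data: {
		code: 404,
		type: "UNKNOWN_SESSION",
		description: "No session with the given ID exists."
	}
}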

Pong

{
	handled_time: number,
	ping_data: any
}

Variables

  • handled_time: time at which the PING message that triggered this PONG was handled.
  • ping_data: the data from the triggering PING message (see the example below).
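
An illustrative exchange; timestamps and payload are made up, and the reply's message type is assumed to be "PONG" (it is not listed in the Message Types table above):

{
	type: "PING",
	data: { sent_at: 1700000000000 }
}

// Reply carrying a Pong payload; "PONG" is an assumed type name.
{
	type: "PONG",
	data: {
		handled_time: 1700000000042,
		ping_data: { sent_at: 1700000000000 }
	}
}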

Success

{
	type: string,
	description: string
}

Variables

  • type: success type; can be used to match the success message to a previous API request, e.g. "SAVE_SESSION", and to identify internally what happened.
  • description: description of the successful request that can be used to inform the user, e.g. "Successfully saved session".

Success Types

| Type | Trigger |
| --- | --- |
| SAVE_SESSION | Sent after a successful SAVE_SESSION request |
| DELETE_SESSION | Sent after a successful DELETE_SESSION request |
| JOIN_EXPERIMENT | Sent after a successful JOIN_EXPERIMENT request |
| START_EXPERIMENT | Sent after a successful START_EXPERIMENT request |
| STOP_EXPERIMENT | Sent after a successful STOP_EXPERIMENT request |
| ADD_NOTE | Sent after a successful ADD_NOTE request |
| CHAT | Sent after a successful CHAT request |
| KICK_PARTICIPANT | Sent after a successful KICK_PARTICIPANT request |
| BAN_PARTICIPANT | Sent after a successful BAN_PARTICIPANT request |
| MUTE | Sent after a successful MUTE request |
| ADD_ICE_CANDIDATE | Sent after a successful ADD_ICE_CANDIDATE request |
| SET_FILTERS | Sent after a successful SET_FILTERS request |
| SET_GROUP_FILTERS | Sent after a successful SET_GROUP_FILTERS request |
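
An illustrative SUCCESS message using one of the success types above:

{
	type: "SUCCESS",
	data: {
		type: "SAVE_SESSION",
		description: "Successfully saved session."
	}
}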

Session

{
	id: string,
	title: string,
	description: string,
	date: number,
	time_limit: number,
	record: boolean,
	participants: [ Participant ],
	creation_time: number,
	start_time: number,
	end_time: number,
	notes: [ Note ],
	log: WIP,
}

Variables

  • id: unique session id generated by the backend after saving the session. When creating a new session, this field is initially left blank.
    • Default: "", read only for client
  • title: experiment title set by the experimenter
  • description: experiment description set by the experimenter
  • date: planned starting date / time of the experiment in milliseconds since January 1, 1970, 00:00:00 (UTC)
  • time_limit: experiment time limit in milliseconds
  • record: whether the experiment is to be recorded
  • participants: list of invited participants (see Participant)
  • creation_time: time the experiment was created in milliseconds since January 1, 1970, 00:00:00 (UTC)
    • Default: 0, read only for client
  • start_time: time the experiment started in milliseconds since January 1, 1970, 00:00:00 (UTC).
    • Default: 0, read only for client
  • end_time: time the experiment ended in milliseconds since January 1, 1970, 00:00:00 (UTC).
    • Default: 0, read only for client
  • notes: list of Notes the experimenter added during the experiment.
    • Default: [], read only for client
  • log: log created by the backend.
    • Default: [], read only for client

| creation_time | start_time | end_time | State |
| --- | --- | --- | --- |
| 0 | 0 | 0 | Planned state, no experiment created/started |
| >0 | 0 | 0 | Experiment created and in waiting state |
| 0 | >0 | 0 | INVALID STATE |
| 0 | 0 | >0 | INVALID STATE |
| >0 | >0 | 0 | Experiment created and started, in running state |
| >0 | 0 | >0 | INVALID STATE |
| 0 | >0 | >0 | Experiment ended and closed |
| >0 | >0 | >0 | Experiment ended but still available |
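
A sketch of a new, not yet saved Session as a client might send it with SAVE_SESSION. Title, description, dates and the time limit are illustrative, read-only fields are left at their defaults, and time_limit is assumed to be a duration in milliseconds:

{
	id: "",
	title: "Synchrony pilot study",
	description: "First pilot run with two participants.",
	date: 1700000000000,
	time_limit: 3600000,
	record: true,
	participants: [ /* Participant objects, see Participant below */ ],
	creation_time: 0,
	start_time: 0,
	end_time: 0,
	notes: [],
	log: []
}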

TODO

  • Define log

Participant

{
	id: string,
	participant_name: string,
	muted_video: boolean,
	muted_audio: boolean,
	position: {
		x: number,
		y: number,
		z: number
	},
	size: {
		width: number,
		height: number
	},
	chat: [ ChatMessage ],
	banned: boolean,
	view: [ CanvasElement ],
	video_filters: [ Filter ],
	audio_filters: [ Filter ],
	video_group_filters: [ GroupFilter ],
	audio_group_filters: [ GroupFilter ],
}

Variables

  • id: unique id for this participant in a Session, generated by the backend after saving the Participant / Session. When creating a new Participant in a Session, this field is initially left blank.
    • Default: "", read only for client
  • participant_name: name of the participant
  • muted_video: whether the participant's video is forcefully muted by the experimenter.
  • muted_audio: whether the participant's audio is forcefully muted by the experimenter.
  • position: x, y, z coordinates of the participant's video feed on the canvas
  • size: width and height of the participant's video feed on the canvas
  • chat: chat log between experimenter and participant. See ChatMessage
  • banned: whether the participant is banned from the experiment
  • view: the asymmetric view that the participant sees in a session. See CanvasElement
  • video_filters: Active video Filters for this participant
  • audio_filters: Active audio Filters for this participant
  • video_group_filters: Active video GroupFilters for this participant
  • audio_group_filters: Active audio GroupFilters for this participant

ParticipantSummary

{
	participant_name: string,
	position: {
		x: number,
		y: number,
		z: number,
	},
	size: {
		width: number,
		height: number
	},
	chat: [ ChatMessage ],
}

Variables

  • participant_name: name of the participant
  • position: x, y, z coordinates of the participant's video feed on the canvas
  • size: width and height of the participant's video feed on the canvas
  • chat: chat log between experimenter and participant. See ChatMessage

Note

{
	time: number,
	speakers: [ string ],
	content: string
}

Variables

  • time: time the note was saved in milliseconds since January 1, 1970, 00:00:00 (UTC)
  • speakers: list of Participant IDs for participants speaking when the note was saved
  • content: text content the experimenter wrote

ChatMessage

Message format for sending messages between users.

{
	message: string,
	time: number,
	author: string,
	target: string
}

Variables

  • message: Message contents
  • time: Time the message was sent in milliseconds since January 1, 1970, 00:00:00 (UTC)
  • author: Author of the message. Participant ID or "experimenter"
  • target: Intended receiver
    • For participant: always "experimenter"
    • For experimenter: specific participant ID or "participants" for broadcast
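
Two illustrative ChatMessages (IDs, text and timestamps are made up): one sent by a participant to the experimenter, and one broadcast by the experimenter to all participants:

// Participant -> experimenter
{
	message: "I can't hear the other participant.",
	time: 1700000000000,
	author: "8f6a5b2c",
	target: "experimenter"
}

// Experimenter -> all participants
{
	message: "We will start in two minutes.",
	time: 1700000001000,
	author: "experimenter",
	target: "participants"
}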

CanvasElement

Object that contains the information of one video stream on a canvas

{
	id: string,
	participant_name: string,
	size: {
		width: number,
		height: number
	},
	position: {
		x: number,
		y: number,
		z: number,
	},
}

Variables

  • id: id of the participant
  • participant_name: name of the participant
  • size: width and height of the participant's video feed on the canvas
  • position: x, y, z coordinates of the participant's video feed on the canvas

FilterDict

{
	id: string,
	name: string,
	channel: string,
	groupFilter: boolean,
	config: FilterConfig
}

FilterConfig = {
  [key: string]: FilterConfigArray | FilterConfigNumber;
}

FilterConfigNumber = {
	min: number;
	max: number;
	step: number;
	value: number;
	defaultValue: number;
}

FilterConfigArray = {
	value: string;
	defaultValue: string[];
	requiresOtherFilter: boolean;
}

Variables

  • id: filter id, can be used to access this filter from another filter
  • name: filter name (unique identifier)
  • channel: filter channel; describes whether the filter applies to "video", "audio" or "both"
  • groupFilter: True for a GroupFilter, False for a Filter
  • config: Filter configuration. Contains all variables of the filter that need to be configured upon experiment start. Necessary for dynamic filters
    • each config entry has a unique key whose value is either
      • a FilterConfigNumber or
      • a FilterConfigArray

Filter

FilterDict with groupFilter = False.

{
	id: string,
	name: string,
	channel: string,
	groupFilter: boolean = False,
	config: FilterConfig
}

Filter Types

  • MUTE_AUDIO
  • MUTE_VIDEO
  • DELAY
  • ROTATION
  • EDGE_OUTLINE
  • FILTER_API_TEST
  • SPEAKING_TIME
  • GLASESS_DETECTION

RotationFilter

Extends Filter. This filter takes two configuration variables: direction and angle. direction is either "clockwise" or "anti-clockwise"; angle is a number between 1 and 180. The filter config then looks like this:

{
	"name": name,
	"id": id,
	"channel": "video",
	"groupFilter": False,
	"config": {
		"direction": {
			"defaultValue": ["clockwise", "anti-clockwise"],
			"value": "clockwise",
			"requiresOtherFilter": "false",
		},
		"angle": {
			"min": 1,
			"max": 180,
			"step": 1,
			"value": 45,
			"defaultValue": 45,
		},
	},
}

GroupFilter

FilterDict with groupFilter = True.

{
	id: string,
	name: string,
	channel: string,
	groupFilter: boolean = True,
	config: FilterConfig
}

GroupFilter Types

  • SYNC_SCORE

SetFiltersRequest

{
	participant_id: string,
	audio_filters: [ Filter ],
	video_filters: [ Filter ]
}

Variables

  • participant_id: Participant ID for the requested endpoint, or "all". If participant_id is "all", the filters are applied to all participants (see the example below).
  • audio_filters: list of audio filters. See Filter
  • video_filters: list of video filters. See Filter
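
An illustrative SET_FILTERS message applying a rotation video filter to a single participant. The participant and filter IDs are made up, and the filter name string is assumed to match the ROTATION filter type listed above; the config follows the RotationFilter example:

{
	type: "SET_FILTERS",
	data: {
		participant_id: "8f6a5b2c",
		audio_filters: [],
		video_filters: [
			{
				id: "rotation-1",
				name: "ROTATION",
				channel: "video",
				groupFilter: false,
				config: {
					direction: {
						defaultValue: ["clockwise", "anti-clockwise"],
						value: "clockwise",
						requiresOtherFilter: false
					},
					angle: { min: 1, max: 180, step: 1, value: 90, defaultValue: 45 }
				}
			}
		]
	}
}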

SetGroupFiltersRequest

{
	audio_group_filters: [ GroupFilter ],
	video_group_filters: [ GroupFilter ]
}

Variables

  • audio_group_filters: list of audio group filters. See GroupFilter
  • video_group_filters: list of video group filters. See GroupFilter

KickRequest

{
	participant_id: string
	reason: string
}

Variables

  • participant_id: ID of the participant that should be kicked
  • reason: Reason for kicking the participant

KickNotification

{
	reason: string
}

Variables

  • reason: Reason for being kicked

SessionIdRequest

{
	session_id: string
}

Variables

  • session_id: Session ID for the requested endpoint
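
For example, a DELETE_SESSION message carrying a SessionIdRequest might look like this (the ID is made up):

{
	type: "DELETE_SESSION",
	data: {
		session_id: "b2c4d6e8"
	}
}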

MuteRequest

{
	participant_id: string
	mute_video: boolean
	mute_audio: boolean
}

Variables

  • participant_id: Participant ID for the requested endpoint
  • mute_video: Whether the participant's video should be muted
  • mute_audio: Whether the participant's audio should be muted
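
An illustrative MUTE message muting only a participant's audio (the participant ID is made up):

{
	type: "MUTE",
	data: {
		participant_id: "8f6a5b2c",
		mute_video: false,
		mute_audio: true
	}
}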

GetFiltersDataRequest

{
	filter_name: string
	filter_channel: string
	filter_id: string
}

Variables

  • filter_name: Name of the requested filter.
  • filter_id: Filter ID of the requested filter. Can be either the id or 'all' for all filters with this name.
  • filter_channel: Filter channel of the requested filter. Can be either 'audio', 'video' or 'both'.

GetFiltersDataSendToParticipantRequest

{
	participant_id: string
	filter_name: string
	filter_channel: string
	filter_id: string
}

Variables

  • participant_id: Participant ID for the requested endpoint. Can be either the ID of a single participant or 'all' for all participants.
  • filter_name: Name of the requested filter.
  • filter_id: Filter ID of the requested filter. Can be either the id or 'all' for all filters with this name.
  • filter_channel: Filter channel of the requested filter. Can be either 'audio', 'video' or 'both'.

FiltersData

{
	video: [ FilterData ],
	audio: [ FilterData ]
}

Variables

  • video: list of video filter data.
  • audio: list of audio filter data.

FilterData

{
	id: string,
	data: any
}

Variables

  • id: filter id
  • data: filter data. Contains the data to be sent.

FilterData Example

{
	id: '9b977403-639b-44d0-995a-f061a52c6170',
	data: { "Glasses Detected": True }
}

FilterConfig

{
	TEST: Filter[],
	SESSION: Filter[]
}

Variables

  • TEST: includes all filter configs for Testing
  • SESSION: includes all filter configs for the Session

ConnectionProposal

{
	id: string,
	participant_summary: ParticipantSummary | null,
}

Variables

  • id: identifier of this offer. The ID should not be related to any other ID (e.g. a participant ID) and must be used in the ConnectionAnswer to identify the answer.
  • participant_summary: optional summary for the participant the new offer is for. See ParticipantSummary

ConnectionOffer

{
	id: string,
	offer: RTCSessionDescription,
	participant_summary: ParticipantSummary | null,
}

Variables

  • id: identifier of this offer; used to match offer and answer (see ConnectionProposal)
  • offer: WebRTC session description for the offer. See RTCSessionDescription
  • participant_summary: optional summary for the participant the offer is for. See ParticipantSummary

ConnectionAnswer

{
	id: string,
	answer: RTCSessionDescription,
}

Variables

  • id: identifier of the offer this answer belongs to (see ConnectionProposal)
  • answer: WebRTC session description for the answer. See RTCSessionDescription

RTCSessionDescription

{
	sdp: string,
	type: "offer" | "pranswer" | "answer" | "rollback",
}

Variables

  • sdp: the Session Description Protocol (SDP) string describing the session
  • type: Type of the session description

AddIceCandidate

{
	id: string,
	candidate: RTCIceCandidate
}

Variables

  • id: ID of the sub-connection which is associated with the candidate
  • candidate: candidate sent by client, see RTCIceCandidate
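
An illustrative ADD_ICE_CANDIDATE message sent to the server; the sub-connection ID and candidate values are made up (see RTCIceCandidate below for the candidate fields):

{
	type: "ADD_ICE_CANDIDATE",
	data: {
		id: "subconn-42",
		candidate: {
			candidate: "candidate:1 1 udp 2122260223 192.0.2.10 54321 typ host",
			sdpMid: "0",
			sdpLineIndex: 0,
			usernameFragment: "aBcD"
		}
	}
}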

RTCIceCandidate

{
	candidate: string,
	sdpMid: string,
	sdpLineIndex: number,
	usernameFragment: string
}

Variables

  • candidate: A string describing the properties of the candidate
  • sdpMid: A string containing the identification tag of the media stream with which the candidate is associated
  • sdpLineIndex: A number property containing the zero-based index of the m-line with which the candidate is associated, within the SDP of the media description
  • usernameFragment: A string containing the username fragment

See MDN docs for more details.
