Usage - TUMFARSynchrony/SynthARium GitHub Wiki
(This wiki page is under construction and to be updated by Jan 2024)
This page gives a detailed overview of the three main workflows the platform supports: the Experimenter, Participant, and Developer workflows. We outline what you can do with the experimental hub as an experimenter, how your participants will experience the platform, and, if you want to modify the code, some guidance on how to do so. We also document some lessons learned and best practices from using the platform in our own experiments (e.g. hosting in the cloud).
Experimenter Workflow
Want to conduct an online study using our experimental-hub? Assuming the hub is already running and configured for your networking needs, here are the steps to follow:
- Create a new experiment. An experiment is the equivalent of one trial or user study, so it is suggested to duplicate the experiment for each new set of participants. Fill out the required fields: duration, record, and participants' info. Rearranging the participants' video locations on the screen is also done in this step.
- Join the experiment page. Wait for all participants to join and start the experiment.
- End the experiment. If recorded, the video and audio data can be accessed in the folder `backend/sessions/{session_id}`. Each file is saved as `participantID_date_timestamp.mp3/mp4`. The `participantID` can be looked up in the `backend/sessions` folder in the respective `{session_id}.json`. The filename format for the date is `yyyymmdd` and for the timestamp `hhmmss`.
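If you need to work with the recordings programmatically, the naming convention above can be parsed with a few lines of Python. This is a minimal sketch, not part of the hub itself; the function name is ours, and it assumes the participant ID contains no underscores:

```python
from datetime import datetime
from pathlib import Path

def parse_recording_name(path):
    """Split a recording filename of the form
    participantID_date_timestamp.mp3/mp4 into its parts.
    Assumes the participant ID itself contains no underscores."""
    stem = Path(path).stem  # e.g. "abc123_20240115_143000"
    participant_id, date_str, time_str = stem.split("_")
    # date is yyyymmdd, timestamp is hhmmss
    recorded_at = datetime.strptime(date_str + time_str, "%Y%m%d%H%M%S")
    return participant_id, recorded_at

pid, ts = parse_recording_name("abc123_20240115_143000.mp4")
# pid == "abc123", ts == datetime(2024, 1, 15, 14, 30)
```

This also gives you a quick way to sort a session folder's recordings chronologically, e.g. `sorted(files, key=lambda f: parse_recording_name(f)[1])`.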
Setting up and Hosting
Creating an Experiment
Running an Experiment
Post-processing
Post-processing is designed to address potential data loss from the real-time analysis filter, with the primary objective being facial extraction from recorded participants' videos. This functionality is accessible through the menu on the experimental hub homepage. Users then go through three pages during post-processing:
- Homepage
Users will be presented with options to select the experiment they wish to extract faces from. These options display both the session name and its ID, and appear only if videos are available within that session folder. If no errors occur, users will be redirected to the progress page.
- Progress
The extraction process utilizes the FeatureExtraction module from OpenFace. This page informs users that FeatureExtraction processes are currently running, so no further post-processing can be started in the meantime. After successful extraction, the hub will redirect to the result page.
- Result
This page is essentially the post-processing homepage again, accompanied by a success message displaying the result directory. This result directory corresponds to the processed folder under the respective session ID.
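For reference, the extraction the hub performs behind these pages boils down to running OpenFace's `FeatureExtraction` binary once per recorded video. The sketch below only composes the command line; the session path and output folder name are illustrative, and the `subprocess.run` call is left commented out so the sketch is safe to run without OpenFace installed:

```python
import subprocess
from pathlib import Path

# Illustrative paths -- adjust to your installation and session layout.
OPENFACE_BIN = "FeatureExtraction"  # OpenFace's FeatureExtraction executable
session_dir = Path("backend/sessions/example_session")
out_dir = session_dir / "processed"

def build_extraction_cmd(video, out_dir):
    """Compose an OpenFace FeatureExtraction invocation for one video:
    -f selects the input video, -out_dir the results directory."""
    return [OPENFACE_BIN, "-f", str(video), "-out_dir", str(out_dir)]

for video in session_dir.glob("*.mp4"):
    cmd = build_extraction_cmd(video, out_dir)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment with OpenFace installed
```

Running the commands sequentially mirrors what the progress page reports: while a `FeatureExtraction` process is active, no other post-processing job is started.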
Developer Workflow
Part of the vision of the experimental-hub is to act as an experimental sandbox for new WebAR filters and a testbed for evaluating various filter pipelines. If you want to use the hub in this way, you can develop your own filter to be used in an experiment or add your own frontend feature. Here are some potential steps to follow, divided into filter and frontend changes:
Extending Filters
First, some basic background on filters: in the experimental-hub, each participant has an audio and/or a video stream, and you can attach filters to each of these channels. In our terminology, a filter can be an analysis filter, a manipulation filter, or both. [TODO: add image]
- An analysis filter is a filter that processes the data but makes no changes to the output stream. Other filters can depend on the result of an analysis filter to perform their manipulation.
- A manipulation filter is a filter that, independent of any analysis of the stream's data, performs a visual or audio change to the stream (e.g. edge detection or audio delay filters).
- An analysis and manipulation filter is a filter that performs both an analysis and a manipulation, in any order, interpreting and manipulating the video or audio stream.
Filters can be chained (either in the frontend interface or by creating a separate analysis/manipulation filter) and the order in which they are chained matters.
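To make the distinction concrete, here is a minimal sketch of the analysis/manipulation split and of why chaining order matters. The class and method names are illustrative only, not the hub's actual filter API, and the "frame" is simplified to a list of numbers:

```python
# Illustrative sketch -- not the hub's actual filter API.

class AnalysisFilter:
    """Inspects a frame and stores a result; the stream passes through unchanged."""
    def __init__(self):
        self.result = None

    def process(self, frame):
        self.result = sum(frame) / len(frame)  # e.g. mean brightness
        return frame                           # output stream is untouched

class ManipulationFilter:
    """Changes the frame, optionally using an upstream analysis result."""
    def __init__(self, analysis=None):
        self.analysis = analysis

    def process(self, frame):
        offset = self.analysis.result if self.analysis else 0
        return [px - offset for px in frame]   # e.g. subtract mean brightness

# Chaining: order matters -- the analysis must run before the
# manipulation that depends on its result.
analysis = AnalysisFilter()
manipulation = ManipulationFilter(analysis=analysis)

frame = [10, 20, 30]
for f in (analysis, manipulation):
    frame = f.process(frame)
# frame is now [-10.0, 0.0, 10.0]
```

Reversing the chain would run the manipulation before `analysis.result` is set, which is exactly the ordering dependency described above.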
- Once you have looked through how to create example filters and through the provided "default" filters, decide on the purpose of your filter.
Frontend changes
(In Progress) We are currently working on making instructions for changes to the frontend and defining best practices for it.
Extending Session Creation
Extending Realtime Filter Feeback
Extending Post Processing
Participant Workflow
Have you been invited to take part in an experiment and would like to know more about the experimental hub, or are you troubleshooting with the researcher you are coordinating with? Here are some steps to guide you:
- Visit the link provided by the experimenter. If you joined early and the experimenter has trouble seeing that you have joined the call, it might be because the call had not started yet. Simply refresh the webpage and join again; you will be prompted for your consent once more.
- After giving consent, you will be in the lobby. The experiment will not have officially started yet; you will be prompted when it does, and your screen might change to show other participants or whatever the experiment you are taking part in requires. Take some time to familiarize yourself with the experimenter's instructions, make sure your setup is appropriate, and make any changes the experimenter contacts you about. If you disconnect for some reason, you can rejoin the call by revisiting the same link.
- Once the experiment has started, follow the tasks provided as best you can. If you drop from the call for whatever reason, you can rejoin by revisiting the same link.
- Afterwards, when the experiment is over, there may be a survey to fill out or a second link to visit for the next part of the experiment.