# SSITranslator Documentation
Greta is a real-time three-dimensional embodied conversational agent with a 3D model of a woman compliant with the MPEG-4 animation standard. She is able to communicate using a rich palette of verbal and nonverbal behaviours: Greta can talk and simultaneously show facial expressions, gestures, gaze, and head movements.
Two standard XML languages, FML and BML, allow the user to define her communicative intentions and behaviours, based on a standard architecture.
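For illustration, below is a minimal sketch of an FML-APML input written out from a Python script. The tag names and attributes shown are assumptions based on the FML-APML format, not an official template; compare them with the example FML files shipped with the platform before using them.

```python
# Minimal sketch (assumed structure): write a tiny FML-APML file that makes
# Greta greet the user. Check the tags and attributes against the example FML
# files distributed with the platform.
fml_apml = """<?xml version="1.0" encoding="UTF-8"?>
<fml-apml>
  <bml>
    <!-- The speech block carries the text plus time markers (tm) that the
         intentions below refer to. -->
    <speech id="s1" start="0.0" language="english">
      <tm id="tm1"/>Hello, nice to meet you.<tm id="tm2"/>
    </speech>
  </bml>
  <fml>
    <!-- A communicative intention: greet while the sentence is spoken. -->
    <performative id="p1" type="greet" start="s1:tm1" end="s1:tm2"/>
  </fml>
</fml-apml>
"""

# Write the document to a file that can then be loaded with the platform's
# FML file reader (file name chosen here for illustration only).
with open("hello.xml", "w", encoding="utf-8") as f:
    f.write(fml_apml)
```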
Greta can be used with different external TTS software. Currently she can speak several languages: English, Italian, French, German, Swedish and Polish.
Greta is used in various European projects: ANIMATAS, ARIA-VALUSPA, Council-of-Coaches, ILHAIRE, REVERIE, SSPNet, TARDIS, VERVE, as well as in national French projects: A1:1, Acorformed, Anipev, CECIL, IMMEMO, GV-LEX, Impressions, MOCA, Social-touch, TAPAS (French/English, Japanese). It is integrated into several open-source architectures for human-agent interaction: ARIA-VALUSPA/AVP, SEMAINE, the Agents United Platform, and, more recently, IAVA.
The wiki is structured to give you enough knowledge to use the Greta platform. As you can see in the sidebar on the right, the wiki is divided into two areas:
- Getting started with Greta: here you will find tutorials on how to install the platform and get started with it. They guide you through the installation steps and give examples of how to use the platform to generate gestures, facial expressions, hand shapes and instances for interaction;
- Functionalities: this section gives more details about the modules of the platform, what their functionalities are and how to use them.
We have another repository, greta-lgpl, which is the LGPL version of the platform (the version before September 2024). Some of the latest functionalities might not be included there.
The table below lists the ports used for socket connections between Greta modules (server side, left) and their external components (client side, right); a minimal client sketch follows the abbreviation list below.

Port | Connection (left: server, right: client) | Comment |
---|---|---|
1234 | LM-studio - Mistral.py | when running the LLM locally |
3150 | meaning_miner_adapter_server.py - meaning_miner_adapter_client.py | MeaningMiner |
4000 | Mistral.java - Mistral.py | Mistral LLM |
4040 | DeepGramFrame.java - DeepGram.py | DeepGram ASR |
4444 | ASAPFrame.java - [actual neural network].py | ASAP |
5000 | OpenFaceOfflineZeroMQ.exe - any | TurnManagement, feedback |
5960 | turnManagement.java - turnManager.py | TurnManagement, feedback |
5961 | turnManagement.java - turnManager.py | TurnManagement, turnManagement |
8087 | SpeechRecognizerFrame.java - nothing | GoogleASR receiver |
8088 | SpeechRecognizerFrame.java - nothing | GoogleASR receiver |
9000 | mic_server.py - any | Microphone |
50150 | OpenFaceOfflineZeroMQ.java - RealTimePipeFinal.py | MODIFF-MI |
50151 | RealTimePipeFinal.py - ASAPFrame.java | MODIFF-MI |
50167 | "MICounserlorIncrementalDA - Drinking.py" - RealTimePipeFinal.py | MODIFF-MI |
50168 | "MICounserlorIncrementalDA - Drinking.py" - RealTimePipeFinal.py | MODIFF-MI |
50200 | "MICounserlorIncrementalDA - Drinking.py" - any | MI |
61616 | Broker.java - any | ActiveMQ |
Abbreviations used in the Comment column:

- MODIFF-MI: adaptive facial expression
- MI: motivational interviewing
- MM: meaning miner
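Several of the connections above accept an arbitrary client (marked "any"). As a minimal sketch, assuming the OpenFace component on port 5000 publishes its data on a ZeroMQ PUB socket (the name suggests ZeroMQ, but the socket pattern and message format are assumptions to verify on the corresponding module page), a Python subscriber could look like this:

```python
# Minimal sketch of an external client for one of the ports above.
# Assumptions: OpenFaceOfflineZeroMQ publishes plain-text frames on a ZeroMQ
# PUB socket reachable on port 5000; check the module's wiki page for the
# actual socket pattern and message format. Requires pyzmq (pip install pyzmq).
import zmq

context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5000")        # port 5000 from the table
subscriber.setsockopt_string(zmq.SUBSCRIBE, "")   # empty filter: receive everything

try:
    while True:
        frame = subscriber.recv_string()          # blocks until a message arrives
        print(frame)
except KeyboardInterrupt:
    subscriber.close()
    context.term()
```

The same pattern (connect to the listed port, then follow that module's message format) applies to the other "any" entries; the Java/Python pairs in the table already implement both ends of their connection.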