How it works - LiquidGalaxyLAB/Presentation-Tool GitHub Wiki

General flow

basic structure

The user interface is a web application that communicates with the API over HTTP requests, sending and retrieving information through the back-end endpoints. It can be opened in a plain browser on any device connected to the same network (cell phones, tablets, desktops and laptops).

The back-end server runs inside the master machine of the Liquid Galaxy: a Node.js Express HTTP server connected to a MongoDB database. The API endpoints receive the requests from the web app and trigger the parser functions, which are the heart of the API: they control every action the API performs, from CRUD operations on the presentations to managing the execution scripts that act directly on the Liquid Galaxy's shell.

Take executing a presentation as an example: the user selects the presentation they want on the user interface, and its id is sent via an HTTP request. The execute endpoint receives it, retrieves the presentation from the database and triggers the execute function on the parser, which is responsible for managing exactly when and where to run each bash media execution script.
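In heavily simplified form, that flow could be sketched as a handler like the one below. Names such as `findById` and `parser.execute` are illustrative assumptions, not the project's exact identifiers.

```javascript
// Simplified sketch of the execute endpoint flow described above.
// db and parser are injected here to keep the sketch self-contained;
// the real project wires these dependencies differently.
async function executeHandler(req, res, db, parser) {
  // Look the presentation up by the id sent from the web app
  const presentation = await db.findById(req.params.id);
  if (!presentation) {
    return res.status(404).send({ error: 'presentation not found' });
  }
  // The parser decides when and where each media script runs
  await parser.execute(presentation);
  return res.status(200).send({ status: 'executing' });
}
```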

The scripts receive all the parameters given by the API, including which LG screen should perform the action and in what position; when the action targets a slave, the script automatically connects to that slave via SSH.
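As an illustration of that dispatch logic, a hypothetical helper might assemble the command to run. The real project uses bash scripts; the viewer flags and the `lg<N>` slave hostnames here are assumptions for the sketch.

```javascript
// Hypothetical sketch: build the shell command a media script would run.
// FEH opens images, MPV opens videos; an action for a slave is wrapped
// in an SSH call to that node (hostnames like lg2 are an assumption).
function buildDisplayCommand(media, screennumber, masterScreen = 1) {
  const file = `${media.storagepath}/${media.filename}`;
  const viewer = media.type === 'video'
    ? `mpv --no-border ${file}`
    : `feh --borderless ${file}`;
  if (screennumber !== masterScreen) {
    // The auto-connection to the selected slave happens over SSH
    return `ssh lg${screennumber} "${viewer}"`;
  }
  return viewer;
}
```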

To make sure all parts stay connected and know exactly what is happening, the handling of errors and responses was carefully thought out and is a very important part of the project.

Requisites

  • MongoDB Community - The database of the project, storing the presentation objects in documents. MongoDB was chosen for its NoSQL, document-oriented approach. As the presentation object follows the JSON notation, the format Mongo uses for its documents makes it easy to fetch a document and use it directly in the Node.js server; no extra parsers are needed, since BSON (the binary JSON-like format used by Mongo) and JSON are very similar and JavaScript handles both well.

  • Node.js, npm and nodemon - The API was created using Node.js together with a few important libraries, such as Express, which helps create the API endpoints and serve the HTTP server. npm comes along with Node.js and is used to install all the needed libraries. The nodemon tool restarts the server automatically whenever the code changes, making it easier to spot errors, correct them and continue execution without having to restart the whole server by hand.

  • FEH, MPV and FFPLAY - FEH is a very light and powerful image viewer; all images shown on the Liquid Galaxy screens are displayed with it. MPV is the software used to open all the videos on LG screens. Like FEH, it is a very light and powerful tool that allows customizing how media is shown, such as removing borders and controlling the geometry and aspect ratio of the media. For playing audio, FFPLAY was chosen to open audio media on the master machine.

  • ImageMagick - Considered one of the fastest and most efficient image editing tools, ImageMagick is used in this project in several ways, from cutting the images that will span shared screens to identifying image formats, resolutions and so on.

  • Vue.js - The whole front-end was developed using the Vue.js framework. It was chosen for its powerful and easy way of integrating HTML, CSS and JavaScript code, as well as for its very useful state management system, Vuex. Managing the state of the data is difficult when creating a presentation, for example, as many things need to be considered while building the presentation JSON and uploading all media to the correct spots; Vuex was a huge help in dealing with those difficulties. Another important reason for choosing Vue.js is that it ships its own development server, so a single terminal command gets it up and ready to use.

  • Vuetify - Working with pure HTML and CSS can be very challenging when you want to build beautiful UI components in a short amount of time. Vuetify is a plugin that provides a catalog of Material Design components for Vue projects, with a nice grid system and many small components such as cards, buttons, etc.

How it works: API

Endpoints

CREATE [POST]

/presentation/create

UPLOAD [POST]

/storage/upload

DELETE [DELETE]

/presentation/delete/:id

GETALL [GET]

/presentation/getall

UPDATE [PATCH]

/presentation/update

EXECUTE [GET]

/presentation/execute/:id

STOP [GET]

/presentation/stop

CLEAN_STORAGE [GET]

/storage/clean

IMPORT [POST]

/share/import

EXPORT [GET]

/share/export/:id

Parser structure

The presentation json

The heart of the application is the way a presentation is configured. All decisions inside the back-end are made based on the presentation object, so it is very important to follow how a presentation has to be created and updated for everything to work without bugs.

The presentation template uses the JSON format, so it can be used directly with Node.js and MongoDB and also works perfectly fine with the JavaScript used on the front-end.

Below we have the template of the presentation JSON with all fields explained.

{
	"_id":"",               //this id will be auto-generated by the db REQUIRED*
	"id":"",                //this id has to be generated by the client REQUIRED*
	"title":"",             //title of the presentation REQUIRED*
	"description":"",       //description of the presentation
	"category":"",          //the category this presentation best fits in
	"maxscreens":"",        //the number of screens the current Liquid Galaxy has REQUIRED*
	"audiopath":"",         //the path + filename of the audio if wanted the functionality to play the same one for the whole presentation
	"slides":[              //array of slides that will be showed. It can have as many objects as needed
		{ 
		"id":"",              //id of the slide has to generated by the client REQUIRED*
		"duration": "",       //for how long this slide will run REQUIRED*
		"audiopath":"",       //the path where to store the audio
		"flyto":"",           //destination to fly to on Google Earth
		"screens":[           //array of screens, in this section we decide what will be displayed on each screen. It can have as many objects as needed
			{
			"screennumber":"",       //screen number of the LG screen these media will be displayed REQUIRED*
			"media":[                //array containing all media that will be displayed on this screen at this specific slide
				{
				"id":"",              //id of the media REQUIRED*
				"filename":"",        //name of the file that will be uploaded REQUIRED*
				"type":"",            //type of media (image or video) REQUIRED*
				"storagepath":"",     //path to store the media REQUIRED*
				"position":"",        //position will be the name of where to be placed (center, top, bottom or middle) REQUIRED*
				"sharing":"",         //if true means it is a sharing media. Omit this field if sharing media is not the case
				"partner":"",         //if sharing is true, assign this field with the number of the screen on the right of the current screen number. Omit this field if sharing media is not the case
				},			
			],
			}		
		]
		}
	]
  
}
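For reference, a minimal filled-in presentation might look like the following (all values are illustrative, not taken from a real deployment):

```json
{
	"id": "pres-001",
	"title": "Demo presentation",
	"description": "A short example",
	"category": "demo",
	"maxscreens": "3",
	"slides": [
		{
			"id": "slide-001",
			"duration": "10",
			"flyto": "Lleida",
			"screens": [
				{
					"screennumber": "2",
					"media": [
						{
							"id": "media-001",
							"filename": "photo.png",
							"type": "image",
							"storagepath": "/storage/pres-001",
							"position": "center"
						}
					]
				}
			]
		}
	]
}
```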

The next part concerns updating a presentation. It is almost the same as creating one; the only difference is that the presentation info goes inside the data field and the _id of the document goes in the id field.

{
	"id":"",       //the _id of the document that needs to be updated REQUIRED*
	"data":"",     //all the fields you want to update following the presentation json format REQUIRED*
}

The media storage method

All media comes from either the UPLOAD endpoint or the IMPORT endpoint (in the latter case only .zip files are accepted). Every media file that arrives via HTTP request is downloaded into the Presentation-Tool/backend/storage/all directory.

After that, the storage module moves the received media to the correct LG screens. It creates a folder named after the presentation id field that is passed alongside the request. If a media file is supposed to appear on two screens at the same time (the sharing-screen case), the storage module uses ImageMagick to cut the image in two, sending the left part to the selected screen and the right part to its partner on the right.

Inside the master, the media for screen 1 is located in Presentation-Tool/backend/storage/[id-of-the-presentation], and on the slaves it is in /storage/[id-of-the-presentation].
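The image split described above can be illustrated with a small helper that assembles the ImageMagick commands. This is a sketch only; the project's actual script names and options may differ.

```javascript
// Sketch: build ImageMagick `convert` commands that crop a shared image
// into a left half and a right half (50% of the width, full height).
// The left half goes to the selected screen, the right half to its partner.
function buildSplitCommands(inputPath, leftOut, rightOut) {
  return [
    `convert ${inputPath} -gravity West -crop 50%x100%+0+0 +repage ${leftOut}`,
    `convert ${inputPath} -gravity East -crop 50%x100%+0+0 +repage ${rightOut}`,
  ];
}
```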

Uploaded media also has to follow an important format. Below is the template:

{
	"media":[                   //array of media, you can send more than one file at a time REQUIRED*
		{"file":""},        //the actual file that is going to be uploaded REQUIRED*
	],
	"storagepath":"",           //the path where to send this presentation. It has to be the same one defined on the json REQUIRED*
	"screens":[                 //array of screens. Contains information about each uploaded file, in the same order as the upload
		{
		"screen":"",        //the screen this media will go to REQUIRED*
		"partner":"",       //if the sharing-screen functionality is wanted, add the partner screen number. If it is not the case, omit it
		"type":""           //type of media (image or video) REQUIRED*
		},
	]
}
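On the client side, a body like the one above could be assembled with FormData, for example. This is a sketch assuming a multipart upload and a FormData implementation (browser, or Node 18+); the field names follow the template, everything else is illustrative.

```javascript
// Sketch: build a multipart body matching the upload template above.
// Each entry in `files` carries the blob plus its screen metadata;
// the screens array is kept in the same order as the appended files.
function buildUploadForm(files, storagepath) {
  const form = new FormData();
  const screens = [];
  for (const f of files) {
    form.append('media', f.blob, f.name); // the actual file
    // partner is omitted from the JSON automatically when undefined
    screens.push({ screen: f.screen, partner: f.partner, type: f.type });
  }
  form.append('storagepath', storagepath);
  form.append('screens', JSON.stringify(screens));
  return form;
}
```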

Database schema

MongoDB is divided into databases; databases have collections, and collections have documents. A database can have many collections and a collection can have many documents. This project is organized as follows:

  • presentationsDB - the main database
  • presentations - the collection that will store all the documents (this is inside the presentationsDB)
  • [auto-generated-id] - a document where a presentation is stored

To facilitate CRUD operations with the database, a Mongoose schema was set up. The document schema is presented below:

const mongoose = require('mongoose')

const PresentationSchema = new mongoose.Schema({
  id: {
    type: String,
    required: true
  },
  title: {
    type: String,
    required: true
  },
  description: {type: String},
  category: {type: String},
  audiopath: {type: String},
  maxscreens: {type: Number},
  openlogos: {type: Boolean},
  slides: [{
    _id: false,
    id: {type: String, required: true},
    duration: {type: Number, required: true},
    audiopath: {type: String},
    flyto: {type: String},
    screens: [{
      _id: false,
      screennumber: {type: Number, required: true},
      media: [{
        _id: false,
        id: {type: String, required: true},
        filename: {type: String, required: true},
        type: {type: String, required: true},
        storagepath: {type: String, required: true},
        position: {type: String, required: true},
        sharing: {type: String},
        partner: {type: Number}
      }]
    }]
  }]
})

How it works: Liquid Galaxy

Connection

When talking about communicating with a Liquid Galaxy, the first thing to remember is that it has its own internal network. It is very important to know this network exists and that it is responsible for maintaining the connection between the master machine and all the slaves, sending and receiving packets every time something changes on Google Earth. This internal network is created by the install.sh located inside the GitHub repository, alongside other SSH configurations, so all the paths are open for communication between the nodes.

As Liquid Galaxy follows the master-slave concept, the master node is responsible for delegating work to the slaves and being “the leader”. Because of that, when you want to access Liquid Galaxy from a computer that is not part of the cluster, the best practice is to communicate directly with the master machine and then ask the master to send data to the slaves.

Regarding the data types used within LG, Google Earth works with a markup language called KML. The Keyhole Markup Language is a markup language based on XML that is used to display geographical annotation, like creating placemarks, drawing shapes and loading 3D models. To keep the data synchronized between master and slaves, there is a configuration called viewsync. Inside the drivers.ini file, viewsync is enabled and configured on the master and the slaves to declare which machine is the master, which ones are the slaves and the view offset each one uses; it also links the controls and sends packets to the slaves whenever the center coordinates move on the master, so the slaves follow the same center coordinate and stay in sync.
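For reference, a minimal KML placemark that Google Earth can display looks like this (the name and coordinates are just an example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Lleida</name>
    <Point>
      <!-- longitude,latitude,altitude -->
      <coordinates>0.6200,41.6176,0</coordinates>
    </Point>
  </Placemark>
</kml>
```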

Other types of data can also be used on LG; since it runs on an Ubuntu operating system, basically any type of data can be displayed there. An example is the game Pong (located inside the GitHub repository), which consists of a client-server application where the data sent between the master and slaves is transferred using a technology called socket.io. What it sends is basically text, used to update parts of the graphical interface and keep everything in sync.

There are two main ways to communicate externally with a LG: via SSH or using an API. SSH is a network protocol that allows users to access and manage servers remotely over a network. On LG we use it by having a computer on the same network as the cluster and establishing a connection using SSH commands and the IP address of the master node; with that, we can access a terminal directly inside the master node. It is also possible to send files and folders using this protocol, making it a very easy and powerful way of communicating.

Using an API to communicate with LG is a little different from the SSH method. Basically, an API waits for requests on the master machine's IP at a specified port. When an HTTP request arrives, the API handles it by sending a response and, if possible, executing what was requested on the master machine and delegating the jobs to the slaves. As it is an API, we can add logic to the executed commands, which gives more freedom than plain SSH when implementing more complex behavior.

With all that explained, this project uses the API method to communicate with the Liquid Galaxy from an external device, since different execution logics are built based on the user's choices. The web application sends HTTP requests to the server located inside the master, containing the media and the instructions, such as which node the media will be displayed on. The server then uses the SSH protocol so the master can talk to the slaves, send them what they need and tell them what to do.

As the project's API exists specifically to execute different scripts on LG, the best idea is to put it inside the master machine. If the API were located on a different computer (like a side server), doing all the tasks via SSH would become an unnecessary challenge, since media and scripts would have to be sent over every time a user wants an action performed, making the process slow and inefficient.

Fly to

The fly to functionality, or “query” as it is also known, consists of writing a file inside the /tmp folder. This file is consumed by Google Earth, which uses the data inside it to perform an action.

To use this functionality you need to go to the /tmp directory on the master and create a file called query.txt. It has to be exactly this name or it won't be consumed.

The API uses Node's built-in fs module to write the file in the correct place using the search tag, so the final file ends up looking like this, for example:

search=Brasil

This means Google Earth will search for the location of Brasil and display it with a placemark.