Node-RED
Introduction
Since Paige was designed using IBM Watson tools, especially Node-RED, the device must be compatible with all of these services. In addition, the Dementia Protector should be wearable and able to connect to the internet wirelessly to use all the IBM Watson functionalities. For these reasons, a Raspberry Pi was used for prototyping.
The first model tested that met these requirements was the Raspberry Pi Zero W, chosen specifically for its small size. However, after a few tests, it was clear that this device doesn’t have the computational power needed for smooth prototyping and probably wouldn’t even be able to run our final Node-RED flows. The Raspberry Pi 3 B+ was therefore our next and final choice. The Raspberry Pi is used with an Adafruit PiTFT screen to display the UI.
Setting up
What is Node-RED
"Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways." -https://nodered.org
Node-RED is the brain of the device. It runs on the Raspberry Pi to let the IBM Watson services work with Paige’s hardware, provides the displayed UI, and more.
Setting up Node-RED
Node-RED comes preinstalled on Raspbian and can be used directly on any Raspberry Pi running the OS. Once the tool is installed and updated, the following link will guide you through launching Node-RED. (If you encounter problems while setting up Node-RED, troubleshooting guides are available at the same link.)
https://nodered.org/docs/hardware/raspberrypi
Out of the box, Node-RED comes with a library of preinstalled nodes that can be found on the left side of the editor. However, Paige requires a few additional nodes to operate. Instructions on how to add nodes can be found at the aforementioned link.
The additional required nodes are the following:
- node-red-dashboard
- node-red-node-watson
- node-red-node-google
- node-red-node-dropbox
Once these have been added to the palette, the flows designed to operate the device can be copied into the editor. To do this follow these steps:
- Copy the contents of the Final_BOT.json file in this repository
- Go to Import -> Clipboard in the Node-RED editor
- Paste the JSON into the window
- Deploy the flows
Additional Setup
For information to be shared among Node-RED, the app, and the cloud, an MQTT broker must be set up on the Raspberry Pi (here is how to do it: https://www.youtube.com/watch?v=AsDHEDbyLfg).

The IBM Watson nodes, the Dropbox node, and the Google nodes need credentials to function properly. The Watson credentials can be obtained through IBM Cloud (on Bluemix) by creating a Text-to-Speech and a Watson Assistant service. Similarly, the Google API key for the “Google Directions” functionality can be obtained at https://console.developers.google.com. Lastly, the Dropbox node's credentials can be obtained by creating a Dropbox API account; the steps are explained in the editor when you click on the node.

The Watson Assistant node also needs to be connected to a Workspace that contains all the dialogs and intents. A “Paige” Workspace was created for this purpose. It has the following ID:
Paige workspace ID: 2c9e65e8-c4c6-44b2-bc19-902b906eed2b
Once all the setup is done, Paige can be used by opening 127.0.0.1:1880/speaking in the background and 127.0.0.1:1880/ui for the user interface.
Final_BOT functionalities
1. Voice Input
An HTML voice-recording page constantly runs in the background at 127.0.0.1:1880/speaking to record any audio coming into the microphone connected to the Raspberry Pi. This webpage works together with a JavaScript file that handles the audio processing. The speaking page shuts off the microphone for 8 seconds after it sends an audio message to the rest of the flows, to ensure that Paige doesn’t hear herself talk, and refreshes automatically every 30 seconds to ensure that the microphone doesn’t stop recording because of a timeout.
2. User Interface (UI) and audio output
The device’s UI constantly runs at 127.0.0.1:1880/ui. This UI contains the phone connection and microphone status, an obvious HELP button, the current heart rate measured by the device, the date and time, and a text box displaying the Assistant output. The audio output also comes from this page and can therefore be played by any device on the network that browses to the Raspberry Pi’s IP address. Another tab of the UI page contains all the sensor readings and graphs with past values and is shared online for anyone interested in the user’s wellbeing. (Note: the UI is designed to fit on the screen while Chromium is in full screen.)
3. Physical Button control
The four physical buttons on the Adafruit PiTFT screen connected to the Raspberry Pi are configured to operate the device instead of voice inputs; a minimal sketch of the mapping follows this list. From top to bottom, they have the following uses:
- Assistant activation, equivalent of saying “Hello Paige”
- Affirm, equivalent of saying “yes”
- Reject, equivalent of saying “no”
- Conversation flow reset; this has no speech equivalent. It is used to reset/deactivate the Assistant (which can be reactivated by saying “Hello Paige” or pressing button 1)
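The sketch below illustrates this mapping in Python using gpiozero. It is only an illustration: the GPIO pin numbers (17, 22, 23, 27 are common for PiTFT tactile buttons) should be checked against your screen's documentation, and the actual flows wire the buttons up inside Node-RED rather than in a standalone script.

```python
# Minimal sketch of the button-to-intent mapping, assuming the four PiTFT
# tactile buttons sit on GPIO 17, 22, 23 and 27 (verify for your screen).
# The real device handles these inside the Node-RED flows instead.
from gpiozero import Button
from signal import pause

# Map each button (top to bottom) to the message the flows would receive.
actions = {
    17: "Hello Paige",   # activate the Assistant
    22: "yes",           # affirm
    23: "no",            # reject
    27: "__reset__",     # reset/deactivate the conversation flow
}

def make_handler(text):
    def handler():
        print("Button pressed, sending:", text)  # stand-in for an MQTT publish
    return handler

buttons = []
for pin, text in actions.items():
    button = Button(pin)
    button.when_pressed = make_handler(text)
    buttons.append(button)  # keep references so they aren't garbage-collected

pause()  # wait for button events
```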
4. Watson Assistant communication
When audio is received by the device or a button is pressed, the message is sent to the Assistant node for processing. The output depends on the input and on where the Assistant is in the conversation, and is sent to the UI page.
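In the flows, this round-trip is handled entirely by the Watson Assistant node. Purely for illustration, here is a minimal Python sketch of the same exchange, assuming the Bluemix-era watson-developer-cloud SDK; the credentials and version date are placeholders.

```python
# Minimal sketch of one Assistant turn, assuming the (Bluemix-era)
# watson-developer-cloud Python SDK. The flows use the Watson Assistant
# node instead; credentials and version date here are placeholders.
from watson_developer_cloud import AssistantV1

assistant = AssistantV1(
    version="2018-02-16",
    username="YOUR_SERVICE_USERNAME",
    password="YOUR_SERVICE_PASSWORD",
)

WORKSPACE_ID = "2c9e65e8-c4c6-44b2-bc19-902b906eed2b"  # the Paige Workspace

context = {}  # carries the conversation state between turns
for text in ["Hello Paige", "yes"]:
    # Newer SDK versions wrap the reply in a DetailedResponse;
    # there you would call .get_result() instead of using the dict directly.
    response = assistant.message(
        workspace_id=WORKSPACE_ID,
        input={"text": text},
        context=context,
    )
    context = response["context"]  # hand back so the dialog keeps its place
    print(response["output"]["text"])
```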
5. Sensor readings
Every 20 seconds, the Raspberry Pi runs a Python script (allsensors.py) that reads the data from the sensors. The measured values of heart rate, temperature, and humidity are sent to the UI for display. If these values fall outside of set thresholds, they also trigger a response from the Assistant.
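The actual script lives in the repository as allsensors.py; the sketch below only illustrates the expected shape, assuming a DHT11 on GPIO 4 read through the Adafruit_DHT library and a hypothetical read_pulse() helper for the pulse sensor. It prints one JSON line so a Node-RED exec node can parse the output.

```python
# Sketch of the shape of allsensors.py, not the repo's actual script.
# Assumes a DHT11 on GPIO 4 (via the Adafruit_DHT library) and a pulse
# sensor behind a hypothetical read_pulse() helper; adjust to your wiring.
import json
import Adafruit_DHT

DHT_SENSOR = Adafruit_DHT.DHT11
DHT_PIN = 4  # assumption: data line on GPIO 4

def read_pulse():
    # Placeholder: a real pulse sensor is analog and needs an ADC
    # (e.g. an MCP3008); return a dummy value here.
    return 72

humidity, temperature = Adafruit_DHT.read_retry(DHT_SENSOR, DHT_PIN)

# Print one JSON line so a Node-RED exec node can parse the output.
print(json.dumps({
    "heart_rate": read_pulse(),
    "temperature": temperature,
    "humidity": humidity,
}))
```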
6. Phone connection status
Every 10 seconds, the Raspberry Pi runs the iwgetid command in the terminal through Node-RED to check whether it is connected to the phone. The connection status is then passed on to the UI to appear as “Phone connected” or “Phone disconnected”.
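The flows run this command through an exec node; for reference, an equivalent check in Python might look like the sketch below, where "PhoneHotspot" is a placeholder for the phone's actual SSID.

```python
# Equivalent check in Python (the flows use a Node-RED exec node instead).
# "PhoneHotspot" is a placeholder for the phone's actual SSID.
import subprocess

def phone_connected(expected_ssid="PhoneHotspot"):
    # iwgetid -r prints just the SSID of the network the Pi is joined to
    result = subprocess.run(["iwgetid", "-r"], capture_output=True, text=True)
    return result.stdout.strip() == expected_ssid

print("Phone connected" if phone_connected() else "Phone disconnected")
```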
7. Data logging
The values measured by the sensors are also saved to a file called heartbeats.txt on the Raspberry Pi’s Desktop. This file is then shared online through a secure Dropbox account that is available to any user who knows the password.
8. Directions
After the user requests help (through the HELP button, voice input, or the app), the Raspberry Pi receives the coordinates of the user’s current location and of the selected “safe place”. These values are processed by the Google Directions node, which outputs directions towards the “safe place”.
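The flows use the Directions node from node-red-node-google for this; the equivalent REST call is sketched below with placeholder coordinates and API key.

```python
# Sketch of the equivalent Directions REST call (the flows use the
# node-red-node-google Directions node). Coordinates and key are placeholders.
import requests

params = {
    "origin": "51.4988,-0.1749",       # user's current location (lat,lng)
    "destination": "51.5007,-0.1246",  # the selected "safe place"
    "mode": "walking",
    "key": "YOUR_GOOGLE_API_KEY",
}
resp = requests.get(
    "https://maps.googleapis.com/maps/api/directions/json", params=params
).json()

# Each step of the first route carries HTML-formatted walking instructions.
for step in resp["routes"][0]["legs"][0]["steps"]:
    print(step["html_instructions"])
```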
9. MQTT Communication
All the communication between the phone and the Raspberry Pi is done through MQTT. An MQTT broker runs on the Raspberry Pi, and since the phone is on the same network, all the data published by the phone can be read by the Raspberry Pi, and vice versa.
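As a rough illustration of this link, here is a minimal paho-mqtt sketch; the topic names are placeholders rather than the exact topics used in the flows.

```python
# Minimal sketch of the phone-to-Pi link using paho-mqtt; the topic names
# here are placeholders, not necessarily the ones used in the flows.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("127.0.0.1", 1883)           # broker runs on the Pi itself
client.subscribe("paige/phone/#")           # everything the phone publishes
client.publish("paige/pi/status", "ready")  # and the Pi can publish back
client.loop_forever()
```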
10. To-do list
The to-do list that appears on the phone can also be displayed on Paige's screen by swiping on the screen once the UI is opened.
11. Periodic check
Periodically, the device will ask whether the user is doing alright and expects a response before allowing them to do anything else.
Current Limitations
The prototype doesn’t perform all of its functions flawlessly; the notable limitations are as follows:
* Inaccurate sensor readings
The temperature and humidity sensor measures ambient parameters instead of actual body temperature and humidity, which means the values aren’t accurate and vary a lot with ambient weather conditions. The pulse sensor is more accurate but can give false readings at times; these cause unwanted responses from the Assistant and therefore errors in operation.
* Dialog limitations
The user can easily get confused if they don’t follow the flow of the conversation closely or if multiple actions occur at the same time. Resetting the conversation flow is an easy way to get the Assistant back on track, but this can be frustrating for the user. The variety of conversation understood by the Assistant is also very limited.
* Microphone problems
The API used for speech recognition has a built-in timeout that turns off the mic if no voice is detected for a certain period of time. When running on the Raspberry Pi, this timeout is not constant, so it is hard to predict when the mic will turn off. Currently, this is worked around by having the speaking page refresh every 30 seconds (which turns the mic back on). However, this means that if the timeout is shorter than 30 seconds, the mic will still turn off, and if the refresh occurs during the 8 seconds when the mic is supposed to be off, the Assistant might hear itself speak. Also, the microphone status shown on the UI only reflects the 8-second turn-off after speech is transmitted to the Assistant; it doesn’t react to the mic turning off because of errors.
* Periodic Checking
The check functionality is currently disabled since it clashes with the rest of the possible dialogues. That is, if the check occurs while the user is talking with the Assistant, the conversation runs into an error and has to be reset. To enable the check functionality, simply change the repeat setting of the check action to the desired interval.
* Data Logging
Currently, the heartbeats.txt file is overwritten every time a new sensor value is measured. This can easily be changed by setting the /home/pi/Desktop/heartbeats node’s “action” setting to “append to file”, but the text file then gains a new line every 20 seconds (4320 lines per day), making it very hard to read.
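One way to keep the append option manageable is to downsample before writing. The sketch below is illustrative only: the file path comes from above, but the timestamped format and the keep-every-Nth-reading scheme are assumptions, not part of the current flows.

```python
# Sketch of a friendlier logging scheme: append a timestamped line, but
# only keep every 15th reading (one line per ~5 minutes at a 20 s cycle).
# The format and counter mechanism here are illustrative assumptions.
import time

LOG = "/home/pi/Desktop/heartbeats.txt"
KEEP_EVERY = 15
counter = 0

def log_reading(heart_rate, temperature, humidity):
    global counter
    counter += 1
    if counter % KEEP_EVERY:
        return  # skip this reading to limit file growth
    line = "{}\t{}\t{}\t{}\n".format(
        time.strftime("%Y-%m-%d %H:%M:%S"), heart_rate, temperature, humidity
    )
    with open(LOG, "a") as f:  # "a" = append instead of overwrite
        f.write(line)
```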
Raspberry Pi settings
To upgrade the prototype, a few extra settings can be configured on the Raspberry Pi to make it look more like a finished product.
Turn on Node-RED on boot
Instructions on how to do this are found at the following link: https://nodered.org/docs/hardware/raspberrypi. Doing this means that after turning the Raspberry Pi on, you only have to open the two browser tabs to make the prototype work.
Open browser tabs on boot
By opening the browser tabs in full screen on boot, we can start using the device as soon as the Raspberry Pi is turned on, without any setup. To do this, follow these steps:
- Enter sudo nano /home/pi/.config/lxsession/LXDE-pi/autostart in a terminal window
- Add the following lines to the opened file:
@xset s off
@xset -dpms
@xset s noblank
@chromium-browser --kiosk http://127.0.0.1:1880/speaking
@chromium-browser --kiosk http://127.0.0.1:1880/ui
Note
The webpages might ask for a reload when they open, or say that the address can’t be reached. To avoid this, add @sleep 30 (30 = seconds; increase if not enough) before the @chromium-browser lines to leave enough time for Node-RED to start running and the Raspberry Pi to connect to the internet.