Evaluation - ofithcheallaigh/masters_project GitHub Wiki

Verification and Validation

At the start of the project, a number of goals were defined, such as data collection, analysis using machine learning techniques, development of neural networks and deployment onto a constrained device.

As the project progressed, regular meetings were held with my supervisor to update him on the progress being made and to ensure the goals were being met in a timely manner.

Each meeting reviewed the progress made since the previous one by presenting the results of any analysis carried out. Any issues or concerns raised by the work were discussed, and a plan was put in place to investigate them so the project kept moving forward.

Validation was completed on each phase of the research before progressing to the next, as a validated phase provided the justification for moving on. For example, when the analysis for the machine learning phase of the project indicated that object detection and localisation were possible, but that machine learning techniques alone were not sufficient, the research progressed to the investigation of neural networks.

Critical Appraisal

The main goal of the project was to develop an embedded machine learning system to assist people with visual impairments to navigate in an unknown indoor environment by detecting and localising obstacles.

For privacy reasons, using an image dataset was not a realistic option. When looking for potential datasets, none suitable could be found, which meant that data had to be collected specifically for this research, using a number of large obstacles. With the datasets in place, an initial analysis of the data was carried out to understand how well it could be classified.

Initial results looked promising, with the classification algorithms achieving high accuracy scores on the full datasets for both a binary analysis (is there an object present?) and a grid analysis (can the object be localised to a grid location?). To back up these results, a further investigation using classification algorithms was carried out. This time, three of the datasets were used for training and one was held back for testing; this was repeated so that each dataset served as the unseen test data exactly once. The analysis showed a drop in accuracy scores on the new, unseen data used during testing, pointing to the potential need for deep learning techniques.
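The hold-one-dataset-out rotation described above can be sketched as follows. The dataset names here are hypothetical placeholders, not the actual files collected for the project:

```python
# Hypothetical labels standing in for the four collected datasets.
datasets = ["set_a", "set_b", "set_c", "set_d"]

def leave_one_out_splits(datasets):
    """Yield (train, test) pairs so each dataset is the unseen test set once."""
    for held_out in datasets:
        train = [d for d in datasets if d != held_out]
        yield train, held_out

for train, test in leave_one_out_splits(datasets):
    print(f"train on {train}, test on {test}")
```

With four datasets this produces four train/test splits, and every dataset is held back exactly once, matching the pattern used in the analysis.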

The neural network development followed an iterative path in which the hyperparameters were tuned with a view to achieving the best possible accuracy and loss scores. A number of parameters were investigated, such as epochs, batch size, and learning rate, as well as the impact that changing the number of layers and neurons had on the results. The initial investigation showed that the neural network approach was promising; however, more work would need to be done, such as further optimisation and passing unseen data to the model.
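A minimal sketch of the kind of hyperparameter sweep described above, enumerating every combination of the tuned parameters; the values shown are illustrative only, not the ones tried in the project, and the training call is left as a hypothetical step:

```python
from itertools import product

# Illustrative search space; the actual values explored may differ.
search_space = {
    "epochs": [50, 100],
    "batch_size": [16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def grid(space):
    """Yield one dict per combination of hyperparameter values."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
# Each config would then be passed to a (hypothetical) train-and-score
# routine, keeping the configuration with the best accuracy/loss pair.
```

Layer and neuron counts can be swept the same way by adding them as further keys in the search space.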

With the initial positive results from the neural network, a TensorFlow Lite model was generated and deployed onto the constrained device to allow inference to be carried out. Over a number of trials, it was seen that while the system would carry out inference on the data passed to it, the results returned were always the same. Unfortunately, time ran out before this issue could be thoroughly investigated to find the root cause. Since the accuracy results for the model were quite good, the issue may lie in the end device or the code running on it; understanding it would require a deeper investigation of the Arduino code, and trialling other constrained devices may also be helpful.
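One cheap diagnostic for the constant-output symptom described above is to run inference on deliberately varied inputs and check whether the outputs actually differ. The sketch below uses a stand-in predict function rather than the real TensorFlow Lite interpreter, which on-device would be driven through its set-input/invoke/get-output cycle:

```python
# Stand-in for invoking the deployed model; here it (correctly)
# varies with its input, unlike the stuck behaviour seen on the device.
def predict(sample):
    return sum(sample) / len(sample)

def outputs_vary(model, samples, tol=1e-9):
    """Return True if inference produces more than one distinct output."""
    outputs = [model(s) for s in samples]
    first = outputs[0]
    return any(abs(o - first) > tol for o in outputs)

varied_inputs = [[0.0, 0.0], [1.0, 1.0], [5.0, -3.0]]
print(outputs_vary(predict, varied_inputs))  # a stuck model would print False
```

Running this check both on the desktop model and on the device would help separate a model problem from a device or Arduino-code problem.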

Overall, this project offers a potential path to developing an object detection and navigation system for people with visual impairments, as the generated models show promise and the ever-evolving ecosystem of hardware options allows for continued development.