SDD (Software Design Document)

Table of Contents

1. INTRODUCTION

  • 1.1 Purpose

  • 1.2 Scope

  • 1.3 Glossary

  • 1.4 Overview of Document

  • 1.5 Motivation

2. SOFTWARE ARCHITECTURE

  • 2.1 Class Diagram

  • 2.2 Decomposition Diagram

  • 2.3 Data Flow Diagram

    • 2.3.1 Level 0 of DFD

    • 2.3.2 Level 1 of DFD

  • 2.4 Activity Diagram

  • 2.5 Sequence Diagram

3. USE CASE REALIZATIONS

  • 3.1 Image Processing Software

    • 3.1.1 Aruco Marker

  • 3.2 Artificial Intelligence Software

    • 3.2.1 Dataset

    • 3.2.2 CNN

  • 3.3 Autopilot Software

    • 3.3.1 Limitation

    • 3.3.2 Task Commands

List of Figures

  1. Figure 1: Class Diagram
  2. Figure 2: Decomposition Diagram
  3. Figure 3: Level 0 of DFD
  4. Figure 4: Level 1 of DFD
  5. Figure 5: Activity Diagram
  6. Figure 6: Sequence Diagram
  7. Figure 7: Software Structure of Autonomous Drone
  8. Figure 8: Mission Planner Moving Drone on Simulation
  9. Figure 9: Mission Planner Executing a Mission

1. INTRODUCTION

1.1 Purpose

By making use of image processing and artificial intelligence software, the unmanned aerial vehicle is intended to detect certain objects, transmit that information to the autopilot, and complete its movement fully or semi-autonomously by avoiding those objects. In this way, unmanned aerial vehicles, which are rapidly becoming widespread today, can be fully automated for their missions, eliminating the human factor and minimizing risks.

1.2 Scope

Unmanned aerial vehicles that can perform fully automatic missions are a very important need, especially in military areas. Meeting this need will reduce human losses in national military defence to almost zero. Drones have become popular in recent years and have started to be used in almost every field that can benefit humanity. People from all over the world, including students, academics, and the private and public sectors, continue to launch new projects and products related to drones. Despite being one of the fastest-growing areas of technology, drone technology has its shortcomings. One of the most important of these is safe flight with object recognition. Most drone accidents are caused by crashing into objects while moving: electricity poles, trees, hilly and mountainous terrain, buildings, and even people, cars, etc. In addition, the delayed adoption of drone technology near the ground (control patrols in factories, inspections between buildings, close-up automatic drone commercials, etc.) is due to the inability of drones to fly safely in these areas without crashing. Various technologies are being considered as solutions to these issues, and our project, which implements object recognition algorithms on a drone and provides automatic path control based on detected objects, is one of the most important steps that can be taken in this regard.

1.3 Glossary


1.4 Overview of Document

This document explains the process of integrating artificial intelligence and image processing algorithms into an unmanned aerial vehicle so that it gains the ability to perform missions. The aim is to turn deep learning methods into software: matrix calculations on images are taken as inputs, in accordance with deep learning methods, and the outputs are computed in software. Our ultimate goal is for the vehicle to gain the ability to perform autonomous missions by transmitting the outputs of the image processing and artificial intelligence algorithms, which work in harmony with each other, to the autopilot software.

1.5 Motivation

Artificial intelligence, which has rapidly grown in popularity in our country and around the world and has been adopted into people's lives very quickly; unmanned aerial vehicles, which revolutionized the history of warfare; and our interest in image processing technology, an indispensable element of both fields, all played a key role in our participation in such a multi-disciplinary engineering project. At the same time, as senior students of Computer Engineering at Çankaya University who have adopted the ideal of carrying our country further than any other in technological development, we are motivated to carry the name of our school and our country to the top.

Artificial intelligence technologies, which underlie the change and transformation in the world of computers and electronics, have started to be used in almost every sector. The most important revolution since the industrial revolution, in which human power turned into machine power, is the development of artificial intelligence: machines are developing not only in physical power but also in mental power. Two factors are essential for this development: data, and qualified people with deep expertise. As a country, the aim is to draw these low-cost resources from our own people and integrate them into the development of artificial intelligence and our own products. Image processing, when integrated with artificial intelligence, reaches high performance and will be indispensable in the near future. The information contained in an image, and the information extracted from photographs or videos by pixel-level mathematical operations, is very exciting. Finally, autopilot technology, fully task-oriented and fully automatic, is the most basic element of unmanned vehicles. The combination of autopilot, artificial intelligence, and image processing will pave the way for a product that minimizes human loss, and we will be proud of its emergence.

2. SOFTWARE ARCHITECTURE

In order to change the motion directions of the unmanned aerial vehicle, photographs of the selected objects will be read with the image processing software. Information about the pixel details of the photos will be fed as input to the artificial intelligence algorithm. Learning outcomes will be obtained from the resulting model, and objects will be detected according to these outcomes. The distance information of a detected object will be transmitted from the image processing algorithm to the autopilot software via the communication protocol. According to the information it receives, the autopilot will control the movements of the unmanned aerial vehicle.

  1. Images of the selected objects will be processed on a separate device, where a deep learning algorithm will be trained on them.
  2. Pixel information from the video stream will be captured continuously in real time.
  3. The pixel information will be analysed in real time by the trained artificial intelligence algorithm.
  4. When the artificial intelligence algorithm detects an object with 90% confidence or higher, the detection will be passed to the image processing algorithm.
  5. The image processing algorithm will compute the distance and direction of the detected object and transmit them to the autopilot.
  6. The autopilot will maintain or change the vehicle's direction according to the information it receives.

The system will power up ready for automatic missions. Against possible risks, the user will remain connected to the autopilot via a remote control and will be able to take over and steer the unmanned aerial vehicle. A minimal code sketch of the perception-to-autopilot loop described above follows.
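The Python sketch below ties the six steps above together. It is illustrative only: `detect_objects`, `estimate_distance_and_direction`, and `send_to_autopilot` are hypothetical placeholders standing in for the trained CNN, the distance calculation, and the autopilot link, none of which are specified at this level of the design.

```python
# Minimal sketch of the perception-to-autopilot loop (steps 2-6 above).
# detect_objects(), estimate_distance_and_direction() and send_to_autopilot()
# are hypothetical placeholders, not part of the actual design.
import cv2

CONFIDENCE_THRESHOLD = 0.90  # step 4: only detections at 90% or higher are forwarded

def detect_objects(frame):
    """Placeholder for the trained CNN; returns (label, confidence, bbox) tuples."""
    return []

def estimate_distance_and_direction(bbox, frame_width):
    """Placeholder for the image-processing step that turns a bounding box
    into a distance (via the calibrated pixel/cm ratio) and a direction."""
    x, y, w, h = bbox
    direction = "left" if x + w / 2 < frame_width / 2 else "right"
    return 0.0, direction

def send_to_autopilot(distance_cm, direction):
    """Placeholder for the communication protocol to the autopilot (step 5)."""
    pass

capture = cv2.VideoCapture(0)                      # step 2: real-time video frames
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    for label, confidence, bbox in detect_objects(frame):      # step 3
        if confidence >= CONFIDENCE_THRESHOLD:                  # step 4
            dist, direction = estimate_distance_and_direction(bbox, frame.shape[1])
            send_to_autopilot(dist, direction)                  # step 5
capture.release()
```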

2.1 Class Diagram

     Figure 1: Class Diagram

2.2 Decomposition Diagram

     Figure 2: Decomposition Diagram

2.3 Data Flow Diagram

2.3.1 Level 0 of DFD

Mission: A mission is the process by which the drone detects certain objects and performs evasive maneuvers (with changes in speed and direction), all fully autonomously.

       Figure 3: Level 0 of DFD

2.3.2 Level 1 of DFD

       Figure 4: Level 1 of DFD

2.4 Activity Diagram

       Figure 5: Activity Diagram

2.5 Sequence Diagram

       Figure 6: Sequence Diagram

3. USE CASE REALIZATIONS

       Figure 7: Software Structure of Autonomous Drone

3.1 Image Processing Software

It will be used to determine the location, distance, and distinguishing features of objects. The pixel-level features of the objects will be transmitted as input to the artificial intelligence algorithm. Filters applied to the image will bring out its distinguishing features.
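The document does not fix the exact filters, so the sketch below assumes a common OpenCV pre-processing chain (grayscale, blur, edge detection) purely as an illustration; the file names are hypothetical.

```python
# Illustrative pre-processing chain; the actual filters are not fixed by
# this design. File names are hypothetical.
import cv2

frame = cv2.imread("object.jpg")                  # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # drop colour channels
blurred = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress sensor noise
edges = cv2.Canny(blurred, 100, 200)              # keep distinguishing contours
cv2.imwrite("object_edges.jpg", edges)
```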

3.1.1 Aruco Marker

A marker of standard, known size that will be used for the initial camera calibration and for the pixel-to-centimetre conversion needed to determine the distance of an object.
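As a sketch of that calibration, the snippet below detects an ArUco marker and derives a pixels-per-centimetre ratio from its known edge length. The DICT_4X4_50 dictionary, the 10 cm marker size, and the pre-4.7 OpenCV contrib API (cv2.aruco.detectMarkers) are all assumptions, not choices fixed by this document.

```python
# Sketch: derive a pixel-to-centimetre ratio from an ArUco marker of known size.
# Assumes opencv-contrib-python < 4.7 (cv2.aruco.detectMarkers API) and a
# DICT_4X4_50 marker printed with a 10 cm edge; both are illustrative choices.
import cv2
import numpy as np

MARKER_SIZE_CM = 10.0                              # assumed printed edge length

frame = cv2.imread("calibration_frame.jpg")        # hypothetical camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    c = corners[0][0]                              # 4 corner points of the first marker
    side_px = np.linalg.norm(c[0] - c[1])          # one edge length in pixels
    pixels_per_cm = side_px / MARKER_SIZE_CM
    print(f"calibration: {pixels_per_cm:.2f} px/cm")
```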

3.2 Artificial Intelligence Software

It is the software that will be used to build the learning algorithm and the learned model. This software uses the CNN structure, one of the deep neural network models: it passes the inputs through a series of linear-algebra operations and produces a learning output from the result.

3.2.1 Dataset

Datasets are at the heart of the artificial intelligence algorithm's ability to learn. The more diverse the data a dataset contains, the higher the accuracy of the AI algorithm's learning outcomes. Having as many images as possible of the objects we selected while creating the dataset will increase our learning percentage. If the learning rate is low, the dataset can be expanded.
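One common way to expand a dataset, offered here only as a sketch, is augmentation: generating rotated, shifted, and mirrored variants of the existing images. Keras and the dataset/train directory layout are assumptions; the document does not name a framework.

```python
# Sketch of dataset expansion via augmentation. Keras and the directory
# layout are assumptions, not fixed by this design.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,        # small random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    zoom_range=0.1,           # zoom in/out
    horizontal_flip=True,     # mirrored copies
)

# flow_from_directory expects one sub-folder per object class.
train_data = augmenter.flow_from_directory(
    "dataset/train", target_size=(128, 128), batch_size=32)
```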

3.2.2 CNN (Convolutional Neural Networks)

Convolutional Neural Networks (CNNs) are a class of artificial neural networks most commonly used for image analysis in deep learning. Because a CNN typically uses the computationally cheap ReLU activation function, large amounts of data can flow through its neurons and learning can take place without excessive computational burden.
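A minimal CNN of this kind, with ReLU activations, might look like the Keras sketch below. The input size, layer widths, and five-class output are illustrative assumptions only.

```python
# Minimal CNN sketch with ReLU activations. Input size, layer widths and
# the 5-class output are illustrative assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),                      # spatial down-sampling
    layers.Conv2D(64, 3, activation="relu"),    # deeper convolution + ReLU
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),      # e.g. 5 object classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```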

3.3 Autopilot Software

It is software that enables unmanned vehicles to perform autonomous or semi-autonomous tasks.

3.3.1 Limitation

Since the safety of the device is more important than anything else, developers set limits through the autopilot to secure it. These limits can cover speed, tilt angle, ascent and descent rates, etc.
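With an ArduPilot-based autopilot such as the Pixhawk, limits of this kind are typically expressed as firmware parameters. The sketch below assumes ArduPilot firmware reached over the DroneKit library; the connection string and the chosen values are placeholders.

```python
# Sketch: setting safety limits as ArduPilot parameters via DroneKit.
# The connection string and values are placeholders.
from dronekit import connect

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)
vehicle.parameters["WPNAV_SPEED"] = 500   # horizontal speed limit, cm/s
vehicle.parameters["ANGLE_MAX"] = 3000    # maximum tilt angle, centidegrees
vehicle.close()
```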

3.3.2 Task Commands

Information from the image processing and artificial intelligence software is transferred to the autopilot, where it is interpreted. Acceleration, deceleration, maneuvering, stopping, and all other movements of our unmanned aerial vehicle, decided according to the position and identity of the detected object, are passed to the hardware by the autopilot software.
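As a sketch of such a task command, the snippet below sends a single velocity set-point over MAVLink, assuming DroneKit and a simulated vehicle; the message layout follows the standard SET_POSITION_TARGET_LOCAL_NED pattern, and the connection string and velocities are illustrative.

```python
# Sketch: turning "object ahead, veer right" into one MAVLink velocity
# set-point. DroneKit, the connection string and the velocities are
# illustrative assumptions.
from dronekit import connect
from pymavlink import mavutil

vehicle = connect("udp:127.0.0.1:14550", wait_ready=True)

def send_velocity(vx, vy, vz):
    """Send one SET_POSITION_TARGET_LOCAL_NED velocity command (m/s, NED frame)."""
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0, 0, 0,                              # time_boot_ms, target system, component
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        0b0000111111000111,                   # type_mask: use only the velocity fields
        0, 0, 0,                              # position x, y, z (ignored)
        vx, vy, vz,                           # velocity in m/s
        0, 0, 0,                              # acceleration (ignored)
        0, 0)                                 # yaw, yaw_rate (ignored)
    vehicle.send_mavlink(msg)

send_velocity(1.0, 1.0, 0.0)  # keep moving north while sliding east (NED frame)
vehicle.close()
```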

     Figure 8: Mission Planner Moving Drone on Simulation (Pixhawk)

     Figure 9: Mission Planner Executing a Mission
