SRS (Software Requirements Specification)

CONTENTS:

1. INTRODUCTION

  • 1.1 Goal

  • 1.2 Project Scope

  • 1.3 Advantages

  • 1.4 Glossary

  • 1.5 Document Overview

2. GENERAL DESCRIPTION

  • 2.1 Product Perspective (Product Features)

  • 2.2 Drone Preparation

  • 2.3. Artificial Neural Network Model & Image Processing Methodology

  • 2.4. Artificial Neural Network & Image Processing Algorithm Result - Drone Steering with Autopilot

  • 2.5 Restrictions

  • 2.6 Risks

3. SYSTEM REQUIREMENTS

  • 3.1. External Interface Requirements

      3.1.1. User Interface
    
      3.1.2. Hardware Interface
    
      3.1.3. Software Interface
    
      3.1.4. Communication Interfaces
    
  • 3.2 Functional Requirements

     3.2.1 Profile Management - Use Case
    
     3.2.2 Settings Menu - Use Case
    
     3.2.3 Detected Object Pass - Use Case
    
     3.2.4 Usage Menu - Use Case
    
  • 3.3 Performance Requirements

  • 3.4 Software System Features

     3.4.1. Portability
    
     3.4.2. Performance
    
     3.4.3. Availability
    
     3.4.4. Adaptability
    
     3.4.5. Scalability
    
  • 3.5. Security Requirements

4. REFERENCES

Figures List:

Figure 1: Profile Management - Use Case

Figure 2: Settings Menu - Use Case

Figure 3: Detected Object Pass - Use Case

Figure 4: Usage Menu - Use Case

1. INTRODUCTION

1.1 Goal

The goal of this project is to provide automatic flight-path control according to objects by implementing object recognition algorithms on the drone. Objects seen by the unmanned aerial vehicle's camera are to be interpreted computationally with image processing and artificial intelligence applications; this information, together with the information from the drone's autopilot, is processed by an onboard processor to create new movement commands for the drone, which are then transferred to the autopilot. This allows the drone to move automatically and avoid hitting the objects on its path.

1.2 Project Scope

Drones have become popular in recent years and have started to be used in almost every field that can benefit humanity. Students, academics, and the private and public sectors all over the world continue to launch new drone-related projects and products. Despite being one of the fastest-growing areas of technology, drone technology has its own shortcomings. One of the most important of these is safe flight with object recognition. Drones cause the most accidents by crashing into objects while moving: they can crash into electricity poles, trees, hilly and mountainous terrain, buildings, and even people, cars, and so on. In addition, the delayed adoption of drones near the ground (control patrols in factories, inspections between buildings, close-up automatic drone commercials, etc.) is due to drones being unable to fly safely in these areas without crashing. Various technologies are being considered as solutions to these issues, and our planned project of implementing object recognition algorithms on the drone and providing automatic path control according to objects is one of the most important steps that can be taken in this regard.

1.3 Advantages

With this project, drones will start to fly more safely and will get smarter. Accidents and crashes caused by drones hitting objects will decrease, and so will the resulting human and animal injuries and deaths. In this way, drone use will increase in areas where drones are already expensive and where there are therefore reservations about using them in places where they could be destroyed. In addition, as drones get smarter in the field of object vision, the way will be paved for new technologies. For example, it can open the way for hundreds of future drone technology topics such as automatic tracking of moving objects, dropping cargo on specific objects in a specific place, and drones with depleted batteries using object vision to precision-land on an automatic charging station and continuing to the next patrol after recharging automatically. As a result, accidents will decrease, new drone technologies will be enabled, and, like any technology that has become safer, drone development and use will increase.

1.4 Glossary

CNN: Convolutional Neural Networks

YOLO: You Only Look Once

ReLU: Rectified Linear Unit

GCU: Ground Control Unit (UAV ground control station)

UAV: Unmanned Aerial Vehicle

1.5 Document Overview

This document generally describes the safety of drones with respect to objects on their path, the advantages to be gained if this safety is provided, and how the technology will develop as a result. It gives information about the autopilot software, flight hardware, communication hardware, and minicomputer hardware, describes the image processing and artificial intelligence algorithms to be used for object vision, and outlines the advantages of these algorithms over other algorithms in general.

2. GENERAL DESCRIPTION

2.1 Product Perspective (Product Features)

The project, Implementation of Object Recognition Algorithms on a Drone and Providing Automatic Path Control According to Objects, consists of hardware and software parts. The hardware basically consists of the drone, the drone autopilot (an open-source autopilot), the processor (a Jetson board), and the camera. The software combines the autopilot software (open-source autopilot software), the communication software, and the image processing and artificial intelligence algorithm software that will run on the Jetson board into the drone control software, which is the decision-making mechanism.

2.2 Drone Preparation

We will use a small multicopter as the drone. The main reason for this is that drones are dangerous and expensive for testing. A multicopter with six 10-inch propellers will do. We will build the drone entirely ourselves. We will choose the motors, ESCs, and propellers, which form the thrust block, so that they are compatible with each other. As the autopilot we will choose an open-source autopilot; the one we are considering is the Pixhawk. We will have a telemetry link for two-way communication. We will build the battery ourselves from lithium-ion cells. There will be a regulator that provides 12 V and 5 V from the battery. We will design and manufacture the body with 3D programs according to the components we have chosen. On the front will be the camera zone. There will be a place for the minicomputer used for image processing and artificial intelligence. The Jetson Xavier is being considered as the minicomputer.

2.3 Artificial Neural Network Model and Image Processing Methodology

A dataset of objects that the drone may see will be created for image processing and artificial intelligence. The dataset will be built from photographs of these objects taken from different angles. The drone needs to recognize these objects with great precision. For this, a large part (about 80%) of the photographic dataset will be used for training, and the rest will be used to test that the system functions correctly. If necessary, we will enlarge this dataset until we reach the desired accuracy.
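As a rough sketch of this split, the snippet below assumes the labelled photographs are held in a plain Python list of (image_path, label) pairs; this representation and the helper name are illustrative assumptions, not a decided design.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle the labelled photos and split them into training and test sets (~80/20)."""
    shuffled = samples[:]                    # copy so the original list stays intact
    random.Random(seed).shuffle(shuffled)    # reproducible shuffle
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical usage: 'all_photos' would be the full list of (image_path, label) pairs.
# train_set, test_set = split_dataset(all_photos)
```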

Many systems are used for training and recognition algorithms. When we examined our system, we concluded that it needs a structure that provides highly accurate results even during fast movement, especially since the drone moves fast.

As a result of our research, we chose Convolutional Neural Networks (CNN) as the artificial neural network for training on the datasets and You Only Look Once (YOLO) as the detection algorithm, a combination whose speed, accuracy, and learning capacity can be expanded when desired. We decided to train our datasets with CNN and set up the detection with YOLO. We found that the YOLO algorithm is several times faster than other algorithms we researched, such as RetinaNet. Speed is very important for us, as the drone can accelerate to 60 km/h. The algorithm must not compromise on accuracy while processing this fast; YOLO can recognize objects with a very low error rate while still processing quickly. As we mentioned before, we will test our system with the training datasets and expand them with new data until we reach the desired performance. The fact that YOLO works well with extensible datasets and keeps its prediction accuracy as the dataset grows gave us another reason to choose it.
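As a rough illustration only, the sketch below shows how a YOLO model trained on our own dataset could be run on a single frame with OpenCV's DNN module; the file names, input size, and thresholds are placeholder assumptions, not final project values.

```python
import cv2

# Placeholder file names; the real config/weights would come from training YOLO on our dataset.
net = cv2.dnn.readNetFromDarknet("yolo_drone.cfg", "yolo_drone.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("test_frame.jpg")  # in flight this would be a live camera frame
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
for cid, conf, box in zip(class_ids, confidences, boxes):
    print(cid, conf, box)             # detected class id, confidence, and bounding box
```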

CNN is widely used in deep learning to analyse images. CNN uses the Rectified Linear Unit (ReLU) as its activation function and is used for multi-layer deep learning models. The images in our training data entering the convolutional layer have dimensions W (width) x H (height) x 3 (RGB: Red, Green, Blue). By performing MaxPooling, we can reduce the size of large images without losing their important properties. Thus, we will be able to make fast and accurate predictions. Our neural network model will have three stages (input, hidden, and output layers). The model will have one output for each of the objects we have determined, plus one for the case where none of these objects is present.
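A minimal TensorFlow/Keras sketch of such a classifier is given below: convolution with ReLU, MaxPooling to shrink the feature maps, a hidden dense layer, and an output layer with one unit per known object plus one for the "no known object" case. The input resolution, layer widths, and class count are illustrative assumptions, not final design values.

```python
import tensorflow as tf

NUM_OBJECTS = 5    # assumed number of object classes in our dataset (placeholder)
W, H = 128, 128    # assumed input resolution; images are W x H x 3 (RGB)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(H, W, 3)),                        # input layer: H x W x RGB
    tf.keras.layers.Conv2D(32, 3, activation="relu"),              # convolution + ReLU
    tf.keras.layers.MaxPooling2D(),                                # MaxPooling shrinks feature maps
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),                  # hidden layer
    tf.keras.layers.Dense(NUM_OBJECTS + 1, activation="softmax"),  # known objects + "no object" case
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```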

2.4. Artificial Neural Network & Image Processing Algorithm Result - Drone Steering with Autopilot

The artificial neural network and image processing algorithm are planned to run on the Jetson board. The steering information obtained from the outputs of this algorithm, which processes the images taken from the drone camera, will be transmitted from the Jetson board to the autopilot. Communication is planned over UART, using the UART ports on the Jetson and the Pixhawk at a 115,200 baud rate. The UART link will use 8-bit data, with the parity bit, 1 stop bit, and timeout values configured accordingly. Linux Ubuntu was chosen as the operating system because it is more reliable and has fewer losses.
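A minimal pyserial sketch of this Jetson-to-Pixhawk link is shown below. The 115,200 baud rate, 8 data bits, 1 stop bit, and timeout follow the plan above; the device path, the parity choice, and the message format are assumptions for illustration only.

```python
import serial  # pyserial

# "/dev/ttyTHS0" is a typical Jetson UART device path; the actual port is an assumption here.
link = serial.Serial(
    port="/dev/ttyTHS0",
    baudrate=115200,
    bytesize=serial.EIGHTBITS,     # 8-bit data
    parity=serial.PARITY_NONE,     # parity choice to be fixed during integration (assumption)
    stopbits=serial.STOPBITS_ONE,  # 1 stop bit
    timeout=0.1,                   # read timeout in seconds
)

link.write(b"HDG:090;SPD:5\n")     # hypothetical steering message for the autopilot
reply = link.readline()            # whatever the autopilot sends back, if anything
link.close()
```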

Images arriving from the camera at 20 fps will be processed by our artificial neural network and image processing system. The whole system is on the drone, and the system is independent of the ground control station. In this way, the delays of two-way telemetry communication are avoided, which prevents both the drop in system performance and accidental crashes caused by slowdowns, and the mission can be completed even when devices such as signal jammers are present in the environment. Based on the observed system performance, the performance will be improved with extra datasets if necessary.

2.5 Restrictions

The biggest limitation of our project, implementing object recognition algorithms on the drone and providing automatic path control according to objects, is processing power. Cameras have been miniaturized as technology developed and do not need much energy, but the processors that will take the images from these cameras, process them, and run the artificial neural network analysis are not very small, and their energy needs are higher. If these operations were carried out at the ground control station, we would lose as much time as the signals take to travel to and from the drone, and since this loss of time means the drone processes and receives commands that refer to its previous state, it would likely experience trouble, accidents, and incidents. In addition, when there are signal jammers in the environment, the connection between the ground control station and the drone may be cut off, leaving the drone without commands and causing it to crash. For all these reasons, we decided to keep the processor on the drone, despite the weight and processing power constraints.

2.6 Risks

A drone is an expensive system. In addition, it can accelerate very quickly, travel fast, and crash due to a technical problem or user error. As drones get bigger and use more efficient carbon-fiber propellers, the damage they can cause to the environment and to people increases. Because of these risks, we decided to make the drone as small as possible for our project tests and to use plastic propellers instead of carbon-fiber propellers.

3. SYSTEM REQUIREMENTS

3.1. External Interface Requirements

3.1.1. User Interface

The user will operate the ground control software on the Windows operating system.

3.1.2. Hardware Interface

Required equipment is below:

● A gimbal camera with high resolution, UART communication, and high FPS.

● High performance minicomputer with UART communication, camera input, CUDA and CuDNN features

● Cables for communication

● HDMI cable for image transmission

● Drone body

● Propellers (4 pcs.)

● Motors (4 pcs.)

● Electronic Speed Controller

● Controls

● Autopilot

● Telemetry

● Battery

● GPS

3.1.3. Software Interface

We will develop an automatically flying drone as a prototype. We will use the open-source Pixhawk Cube Orange autopilot for this drone, together with the Mission Planner software interface for this autopilot. We will use the Python programming language for the autopilot, artificial intelligence, and image processing software development, and the integrated development environment we will use for Python will be PyCharm. We will use the TensorFlow and OpenCV libraries for Python. We will use the Linux Ubuntu operating system on the Jetson Xavier.

3.1.4. Communication Interfaces

An extra communication interface is not required.

3.2 Functional Requirements

3.2.1 Profile Management - Use Case:

Use Case:

• Start computer software

• Enter username / password

• Change settings

• Exit

Diagram:

            Figure 1: Profile Management - Use Case

Brief Explanation:

In our system, users cannot enter the system directly; users must be authorized. The use of drones is not legally open to everyone; only people who have officially obtained the UAV0-UAV1 authorization certificate can use a drone. Usernames and passwords will be given to those who have obtained the authorization certificate and know how to use the system, so they will be able to change the settings of the system, use the system, and log out whenever they would like.

Step-by-Step Explanation:

  1. Anyone can start the computer software.
  2. Only authorized and qualified officials can log in to the system with a username and password.
  3. The authorized user can change the system settings.
  4. The user can log out of the system at any time when finished.

3.2.2 Settings Menu - Use Case:

Use Case:

• Setting up a roadmap with waypoints

• Setting waypoints heights

• Adjusting drone speed

• Exit

Diagram:

     Figure 2: Settings Menu - Use Case

Brief Explanation:

Through the settings, the user will be able to set the roadmap on Google Maps and determine where and through which points the drone will pass. The drone's speed can also be adjusted. Height and speed adjustments can be made separately for the desired path segments, and the drone will not act on these new settings until they are sent to it via telemetry. The user will be able to log out of the computer software at any time. A minimal sketch of such a roadmap structure is given after the steps below.

Step-by-step explanation:

  1. The user sets up the roadmap by creating waypoints on the map.
  2. The user sets the height of the waypoints; each waypoint can have a separate height.
  3. The user sets the drone speed, either as a single standard speed or separately for each path segment.
  4. The user can log out of the computer software at any time.
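The sketch below shows one possible way to represent these roadmap settings in the computer software before they are sent via telemetry; the class and field names are illustrative assumptions, not the final design.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    latitude: float
    longitude: float
    altitude_m: float        # each waypoint can have its own height
    speed_mps: float = 5.0   # per-segment speed; defaults to the standard drone speed

@dataclass
class Roadmap:
    waypoints: List[Waypoint]

# Hypothetical roadmap with two waypoints at different heights and speeds.
route = Roadmap(waypoints=[
    Waypoint(39.925, 32.837, altitude_m=20.0, speed_mps=4.0),
    Waypoint(39.926, 32.839, altitude_m=35.0, speed_mps=6.0),
])
```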

3.2.3 Detected Object Pass - Use Case

Use Case:

• Setting how many meters before objects change path

• Setting the transition of objects to the right, left, through or over

• Setting how many meters to pass objects

• Setting how many meters after passing objects to enter the defined path again

• Exit

Diagram:

     Figure 3: Detected Object Pass - Use Case

Brief Explanation:

Objects will be recognized by the system we train with artificial neural network and image processing algorithms. The settings on this menu determine how close the drone should pass to these objects, and allow testing with drones of different sizes and speeds. A minimal sketch of such a settings record is given after the steps below.

Step-by-step Explanation:

  1. How many meters before an object the drone should change its path is set, according to the drone's speed and size.

  2. Whether the drone should pass to the right of, to the left of, through, or over an object is set, according to the drone's speed and size.

  3. How many meters away from an object the drone should pass is set, according to the drone's speed and size.

  4. How many meters after passing an object the drone should rejoin the defined path is set, according to the drone's speed and size.

  5. The user exits the system after making all settings.
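The sketch below shows one possible way to hold these per-object transit settings in the software; the class and field names are illustrative assumptions rather than the final design.

```python
from dataclasses import dataclass

@dataclass
class ObjectPassSettings:
    """Per-object transit settings; the distances depend on the drone's size and speed."""
    react_distance_m: float   # how many meters before the object the path change starts
    pass_side: str            # "left", "right", "through", or "over"
    pass_clearance_m: float   # how far from the object to pass
    rejoin_distance_m: float  # how far after the object to rejoin the defined path

# Hypothetical settings for a pole-like object at moderate speed.
pole_settings = ObjectPassSettings(react_distance_m=15.0, pass_side="right",
                                   pass_clearance_m=3.0, rejoin_distance_m=10.0)
```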

3.2.4 Usage Menu - Use Case

Use Case:

• Send settings to drone

• Launch drone

• Send the new settings to the drone if necessary.

• Pause

• Land

• Take off

• Continue

• Finish the mission

Diagram:

       Figure 4: Usage Menu - Use Case

Brief Explanation:

The settings made for the drone are sent to it via telemetry and the mission is started, so the drone automatically takes off and begins to follow the roadmap. The drone travels completely automatically, so the user cannot give extra directions. When the drone encounters an object in the dataset while on its path, it applies the transit protocol defined for that object (passing to the right of, to the left of, under, or through it). When it encounters an object that is not in the dataset, it applies the transit procedure defined for objects that are not in the dataset. New settings can be sent to the drone via telemetry at any time; as soon as they are sent, they become active and the drone moves according to them. The user can pause the mission at any time so that the drone waits in the air, can land the drone at any time, and can take off and resume the mission at any time. The user can finish the mission at any time, or, if this command is not given, the mission will be completed automatically when the drone reaches the end of the roadmap. When the mission is over, the drone will land automatically. If the user exits the computer software during the mission, the drone will finish the mission and land. After landing, the GCU and drone are turned off. A minimal sketch of the command set is given after the steps below.

Step-by-step Explanation:

  1. The settings are sent to the drone via telemetry.
  2. The command to start the drone mission is given from the computer.
  3. The drone automatically takes off and starts to follow the roadmap.
  4. When the drone encounters an object in the dataset, it carries out the previously defined transit protocol for that object.
  5. When the drone encounters an object that is not in the dataset, it carries out the transit protocol previously defined for objects not in the dataset.
  6. The user can send new settings to the drone via telemetry at any time; the new settings become active immediately and the drone continues to fly with them.
  7. The user can pause, land, or take off at any time, and can then resume the mission.
  8. The mission ends and the drone lands automatically.
  9. The GCU and drone are turned off.
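The sketch below lists this usage-menu command set and the intended drone behaviour for each command, as a rough illustration only; the names are assumptions and the real GCU-to-drone protocol may differ.

```python
from enum import Enum, auto

class MissionCommand(Enum):
    SEND_SETTINGS = auto()   # push (new) settings to the drone via telemetry
    LAUNCH = auto()          # take off and follow the roadmap automatically
    PAUSE = auto()           # hover and wait in the air
    LAND = auto()            # land at the current position
    TAKE_OFF = auto()        # take off again after a manual landing
    CONTINUE = auto()        # resume the roadmap from the paused point
    FINISH = auto()          # end the mission and land automatically

def is_allowed_mid_flight(cmd: MissionCommand) -> bool:
    """Every command except the initial LAUNCH can be issued while the mission is running."""
    return cmd is not MissionCommand.LAUNCH
```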

3.3 Performance Requirements

The requirements of the minicomputer that we will use for image processing and the artificial neural network are listed below:

  1. 8 GB 128-bit LPDDR4x RAM at 59.7 GB/s, or equivalent
  2. 384-core NVIDIA Volta™ GPU with 48 Tensor Cores, or equivalent
  3. 6-core NVIDIA Carmel Arm® v8.2 64-bit CPU, or equivalent
  4. 7-way VLIW image processor, or equivalent

3.4 Software System Features

3.4.1. Portability

The project will be able to work with open-source autopilots and autopilots with an appropriate SDK. Therefore, it can be transferred to most of the different drones on the market. It can be used not only for aerial vehicles but also for land and underwater vehicles.

3.4.2. Performance

• Each picture (frame) taken from the camera will be passed through the image processing software in 0.05 seconds.

• The processed frames will be passed through the deep learning software within 0.04 seconds.

• The resulting steering output will be transferred to the autopilot in 0.05 seconds.

• Image processing, deep learning, communication, and autopilot software will run for each frame in a total of 0.14 seconds (a simple per-frame timing check is sketched below).
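A simple sketch of how this per-frame budget could be checked during development is given below; the three processing functions are placeholders for the real image processing, deep learning, and autopilot transfer steps, which are not defined here.

```python
import time

FRAME_BUDGET_S = 0.14  # 0.05 (image processing) + 0.04 (deep learning) + 0.05 (autopilot transfer)

def check_frame_budget(frame, process_image, run_network, send_to_autopilot):
    """Time one frame through the pipeline and warn if it exceeds the 0.14 s budget."""
    start = time.perf_counter()
    features = process_image(frame)      # placeholder image-processing step
    decision = run_network(features)     # placeholder deep-learning step
    send_to_autopilot(decision)          # placeholder UART transfer step
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET_S:
        print(f"frame over budget: {elapsed:.3f} s > {FRAME_BUDGET_S} s")
    return elapsed
```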

3.4.3. Availability

Our device will have two modes, semi-autonomous and fully autonomous. Thanks to these modes, the drone can be flown either completely automatically or with control assistance.

3.4.4. Adaptability

The image processing and artificial intelligence software and hardware used on the device are compatible with different autopilot hardware.

3.4.5. Scalability

The system has no scalability requirements.

3.5. Security Requirements

Since drones pose a danger to humans and animals on impact and system tests will be carried out, there should be no living beings in the test environment.

4. REFERENCES

[1] https://arxiv.org/abs/1804.02767

[2] https://medium.com/deep-learning-turkiye/yolo-algoritmas%C4%B1n%C4%B1-anlamak-290f2152808f

[3] https://www.youtube.com/watch?v=vRqSO6RsptU

[4] https://towardsdatascience.com/residual-blocks-building-blocks-of-resnet-fd90ca15d6ec

[5] https://en.wikipedia.org/wiki/Residual_neural_network

[6] https://stackoverflow.com/questions/58594235/yolo-training-yolo-with-own-dataset

[7] https://ieeexplore.ieee.org/document/9435338