Time complexity
Time complexity is a concept in computer science that quantifies the amount of time a piece of code or an algorithm takes to run as a function of the size of its input [1]. In most cases, the term refers not to the wall-clock time that passes while the program is running, but to the number of times a statement is executed [2].
To describe the time complexity of a given program or function, we use Big O notation.
It is possible to measure the time complexity of a program by timing how long it takes to run, but this can be very misleading. Not all machines have equal processing power, so raw timings vary too much to be a reliable measure of time complexity.
To ensure that time complexity can be measured reliably, free of external factors such as hardware speed, an alternative method is required. While it is possible to count every operation the program carries out, this is difficult and in most cases unnecessary. Instead, it is more useful to identify the most important operation within the program, referred to as the basic operation [3]. The basic operation should be the operation contributing the most to the total running time; once it has been found, the time complexity can be estimated by calculating the total number of times the basic operation is executed.
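As a rough illustration (not taken from the cited sources), the following Python sketch instruments a linear search so that its basic operation, the comparison, can be counted:

```python
def count_comparisons(items, target):
    """Linear search instrumented to count its basic operation.

    The comparison `item == target` dominates the running time,
    so counting how many times it executes estimates the time complexity.
    """
    comparisons = 0
    for item in items:
        comparisons += 1          # one execution of the basic operation
        if item == target:
            return comparisons
    return comparisons

# For an input of n elements the comparison runs at most n times,
# so this search is O(n) regardless of how fast the machine is.
print(count_comparisons(list(range(1000)), 999))  # 1000
```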
Knowing each of the following time complexities will help you assess whether code will scale effectively. It is also incredibly useful when comparing different solutions to the same problem, as you can better determine which one will perform better [4]. To recap, time complexity describes how well an algorithm performs regardless of the machine it is run on.
Constant time describes an algorithm which takes the same time to compute regardless of the input size. For example, if a function takes the same time to process one million elements as it does to process ten, then we can say that it has a constant growth rate, or O(1).
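A minimal, purely illustrative Python example of a constant-time operation:

```python
def first_element(items):
    # Indexing a list takes the same time whether it holds
    # ten elements or one million, so this is O(1).
    return items[0]

print(first_element([7, 2, 9]))  # 7
```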
Linear time algorithms are incredibly common and are essentially the opposite of constant time: as the input grows, the processing time grows in direct proportion. A function with linear time complexity has a growth rate of O(n).
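A small illustrative sketch of a linear-time function:

```python
def total(values):
    # The addition inside the loop runs once per element,
    # so doubling the input doubles the work: O(n).
    result = 0
    for v in values:
        result += v
    return result

print(total([1, 2, 3, 4]))  # 10
```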
Quadratic time complexity has a growth rate of O(n^2). If the input size is 2 then the algorithm will perform roughly 4 operations; similarly, if the input size is 8 then it will perform roughly 64 operations, and so on.
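As an illustration, a naive duplicate check that compares every element against every other element is quadratic:

```python
def has_duplicate(values):
    # Every element is compared against every other element,
    # giving roughly n * n comparisons: O(n^2).
    for i in range(len(values)):
        for j in range(len(values)):
            if i != j and values[i] == values[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True
```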
An algorithm is said to run in polynomial time if the number of steps required for any input of size n is O(n^k) for some constant k. These algorithms are considered reasonably fast, but the processing time still increases rapidly as the input grows, so they should be avoided unless specifically required [5].
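For instance, a brute-force sketch with three nested loops runs in O(n^3) time, one instance of polynomial time (the function and its purpose here are illustrative only):

```python
def count_zero_sum_triples(values):
    # Three nested loops over the same input give n * n * n
    # iterations, i.e. O(n^3), an example of polynomial time O(n^k).
    count = 0
    n = len(values)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if values[i] + values[j] + values[k] == 0:
                    count += 1
    return count

print(count_zero_sum_triples([-1, 0, 1]))
```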
An algorithm with logarithmic time complexity usually divides the problem in half with each operation. An example of this is searching a phone book: open it at the middle, and if the name you are looking for comes later alphabetically, continue searching to the right, otherwise look to the left. This is known as O(log n).
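Binary search over a sorted list is the code equivalent of the phone book example; a minimal sketch:

```python
def binary_search(sorted_items, target):
    # Each comparison discards half of the remaining range,
    # so at most about log2(n) comparisons are needed: O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))  # 3
```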
Linearithmic time is slightly slower than a linear algorithm but still much better than a quadratic one, and grows at a rate of O(n log n).
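Merge sort is a common example of a linearithmic algorithm; a minimal sketch:

```python
def merge_sort(values):
    # The input is halved about log2(n) times and each level of the
    # recursion merges n elements in total, giving O(n log n) overall.
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```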
An exponential algorithm's running time doubles each time the input size grows by one. This is represented as O(2^n).
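The naive recursive Fibonacci function is a standard illustration: its call tree roughly doubles in size as n increases, and its running time is bounded by O(2^n):

```python
def fibonacci(n):
    # Each call spawns two more calls, so the number of calls
    # grows roughly exponentially with n: O(2^n).
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # 55
```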
A factorial-time algorithm's running time grows with the factorial of the input size. The factorial of a number is the product of all positive integers up to and including it, for example:
5! = 5 x 4 x 3 x 2 x 1 = 120
This grows incredibly quickly:
20! = 2,432,902,008,176,640,000 [4]
This is represented as O(n!).
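Enumerating every ordering of a collection, as brute-force approaches to problems such as the travelling salesman problem do, is a simple illustration of factorial growth:

```python
from itertools import permutations

def all_orderings(items):
    # A collection of n items has n! orderings, so merely
    # enumerating them takes O(n!) time.
    return list(permutations(items))

print(len(all_orderings([1, 2, 3, 4, 5])))  # 120 = 5!
```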
[1] https://www.techopedia.com/definition/22573/time-complexity
[2] https://www.geeksforgeeks.org/understanding-time-complexity-simple-examples/
[3] Levitin, A., 2012. Introduction to the Design & Analysis of Algorithms. Boston: Pearson.