3. Writing efficient algorithms
The performance of an algorithm can change as the input size changes.
What happens when your algorithm has to work with a collection of 100,000 elements, or 100 million, or 10 billion? In computer science, that is where “asymptotic notations” like Big “O” notation come into play.
When working with big data, for example, Big “O” notation is especially useful for analyzing algorithms. It helps programmers estimate how an algorithm scales, that is, how many steps it must execute to produce its output for an input of a given size. Writing code without knowing the ins and outs of Big “O” notation can, in certain situations, be foolish or even disastrous.
3.1 The Big “O” notation
Big “O” notation is the language we use for talking about how long an algorithm takes to run (time complexity) or how much memory is used by an algorithm (space complexity).
It expresses an algorithm's best-, worst-, and average-case running times and mathematically describes how its complexity grows with the input size.
Big “O” notation is thus a metric for determining an algorithm's efficiency.
The “O” is short for “Order of”. So, if we are discussing an algorithm with O(n), we say that its order of growth, or rate of growth, is n: the algorithm has linear complexity.
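As a concrete illustration, here is a minimal sketch of an O(n) routine, a hypothetical `LinearSearch` function written for this page rather than taken from the repository's code. The loop performs at most one comparison per element, so the amount of work grows in direct proportion to the size of the array.

```vba
' Hypothetical example: a linear search over an array.
' The loop body runs at most once per element, so the number of
' operations grows in direct proportion to n -> O(n).
Public Function LinearSearch(arr As Variant, target As Variant) As Long
    Dim i As Long
    LinearSearch = -1                   ' default: not found
    For i = LBound(arr) To UBound(arr)
        If arr(i) = target Then         ' one comparison per element
            LinearSearch = i            ' return the index of the match
            Exit Function
        End If
    Next i
End Function
```

Calling it on an array of 1,000 elements performs at most 1,000 comparisons; on an array of 1,000,000 elements, at most 1,000,000. Doubling the input roughly doubles the work, which is exactly what O(n) expresses.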
An important thing to note is that running time in Big “O” notation does not directly equate to time as we normally measure it (seconds, milliseconds, microseconds, and so on). Instead, we count the number of operations the algorithm takes to complete.
Many factors, such as the processor, the language, or the run-time environment, influence actual running times. In our case, we can therefore think of “time” as the number of operations or steps needed to solve a problem of size n.
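To make “operations instead of seconds” concrete, the sketch below is a hypothetical `SumArray` routine that counts its own steps. For an input of n elements the counter always ends at n, regardless of how fast the machine running it happens to be, which is the sense in which Big “O” measures “time”.

```vba
' Hypothetical sketch: counting "steps" instead of measuring seconds.
' SumArray touches every element exactly once, so for an input of
' size n it performs n additions: its cost is n operations, O(n),
' independent of processor speed or run-time environment.
Public Function SumArray(arr As Variant, ByRef steps As Long) As Double
    Dim i As Long
    steps = 0
    For i = LBound(arr) To UBound(arr)
        SumArray = SumArray + arr(i)    ' one operation per element
        steps = steps + 1               ' count that operation
    Next i
End Function
```

On a fast machine the loop finishes sooner than on a slow one, but `steps` comes back equal to n in both cases; that invariant count is what the notation describes.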