Asymptotic analysis of algorithms

Asymptotic analysis is the comparison of the running times of algorithms that solve an instance of a given problem on large amounts of input data. The algorithm with the lower asymptotic complexity is the more efficient one.

In asymptotic analysis, the complexity of an algorithm is a function that describes how quickly the algorithm's running time grows as the amount of data increases.

The main growth bounds used in asymptotic analysis are:

  • Ο (big O) – an upper asymptotic bound on the growth of the time function;
  • Ω (Omega) – a lower asymptotic bound on the growth of the time function;
  • Θ (Theta) – simultaneous lower and upper asymptotic bounds on the growth of the time function.

Let n be the size of the input data. Then the growth of the algorithm's time function f(n) can be bounded asymptotically by a function g(n):

Designation        Description
f(n) ∈ Ο(g(n))     f is bounded from above by the function g, up to a constant factor
f(n) ∈ Ω(g(n))     f is bounded from below by the function g, up to a constant factor
f(n) ∈ Θ(g(n))     f is bounded both from below and from above by the function g
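
For reference, these designations have precise definitions. A standard formulation (not spelled out in the original text) of the upper bound is:

f(n) ∈ Ο(g(n))  ⇔  there exist constants c > 0 and n₀ such that f(n) ≤ c*g(n) for all n ≥ n₀

Ω reverses the inequality (f(n) ≥ c*g(n) for all n ≥ n₀), and Θ requires both bounds to hold at once.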

For example, the time needed to clean a room depends linearly on its area (Θ(S)): if the area grows n times, the cleaning time also grows n times. Looking up a name in a phone book takes linear time Ο(n) with a linear search algorithm, or time logarithmic in the number of records, Ο(log₂(n)), with binary search.
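
To make the phone-book example concrete, here is a minimal Python sketch (the function names and the sample data are illustrative, not taken from the original): linear search examines entries one by one, so it needs up to n comparisons, while binary search halves the sorted range at every step and needs only about log₂(n) comparisons.

def linear_search(names, target):
    # Scan every entry in turn: up to n comparisons, i.e. O(n).
    for i, name in enumerate(names):
        if name == target:
            return i
    return -1

def binary_search(sorted_names, target):
    # Halve the search range at each step: about log2(n) comparisons, i.e. O(log2(n)).
    lo, hi = 0, len(sorted_names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_names[mid] == target:
            return mid
        if sorted_names[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

phone_book = ["Adams", "Baker", "Clark", "Davis", "Evans"]  # already sorted
print(linear_search(phone_book, "Davis"))  # 3, found on the 4th comparison
print(binary_search(phone_book, "Davis"))  # 3, found on the 2nd probe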

The Ο-function is of greatest interest to us; in the following chapters, the complexity of algorithms will be given only as the upper asymptotic bound.

The phrase "the complexity of the algorithm is Ο(f(n))" means that with an increase in the amount of input data n, the time of operation of the algorithm will increase no faster than some constant multiplied by f(n).

Important rules of asymptotic analysis:

  1. O(k*f) = O(f) – a constant factor k is discarded, because its contribution becomes negligible as the amount of data grows (see the sketch after this list); for example:

O(9.1n) = O(n)

  2. O(f*g) = O(f)*O(g) – the complexity of a product of two functions equals the product of their complexities, for example:

O(5n*n) = O(5n)*O(n) = O(n)*O(n) = O(n*n) = O(n²)

  3. O(f/g) = O(f)/O(g) – the complexity of a quotient of two functions equals the quotient of their complexities, for example:

O(5n/n) = O(5n)/O(n) = O(n)/O(n) = O(n/n) = O(1)

  4. O(f+g) equals the dominant of O(f) and O(g) – the complexity of a sum of functions is determined by whichever term dominates, for example:

O(n⁵ + n¹⁰) = O(n¹⁰)
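
The first two rules can be seen in action with a small Python sketch (my own illustration, not from the text): two nested n-iteration loops doing 5 units of work per iteration perform exactly 5*n*n operations, and the ratio of that count to n² stays at the constant 5, which is precisely what Ο-notation discards.

def count_operations(n):
    # Two nested loops of n iterations each, 5 elementary operations per
    # inner iteration: 5*n*n operations in total, so O(5*n*n) = O(n²)
    # by rules 1 and 2.
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 5
    return ops

for n in (10, 100, 1000):
    print(n, count_operations(n), count_operations(n) / n**2)
# the last column is always 5.0: the constant factor discarded by rule 1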

Counting the number of operations is tedious and, importantly, not at all necessary. Thanks to the rules above, to determine the complexity of an algorithm there is no need to count every operation, as we did before; it is enough to know the complexity of each construction in the algorithm (an operator or a group of operators). Thus, an algorithm that contains no loops or recursion has constant complexity O(1). A loop that performs n iterations has complexity O(n). Two nested loops that depend on the same variable n have quadratic complexity O(n²).
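
The correspondence between code structure and complexity class might look like this in Python (illustrative sketches of my own, not from the original):

def first_element(data):
    # No loops and no recursion: constant complexity O(1).
    return data[0]

def total(data):
    # One loop over n elements: linear complexity O(n).
    s = 0
    for x in data:
        s += x
    return s

def count_pairs(data):
    # Two nested loops over the same n elements: quadratic complexity O(n²).
    pairs = 0
    for x in data:
        for y in data:
            pairs += 1
    return pairs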

Here are the most common complexity classes:

  • O(1) – constant complexity;
  • O(n) – linear complexity;
  • O(nᵏ) – polynomial complexity;
  • O(log(n)) – logarithmic complexity;
  • O(n*log(n)) – quasilinear complexity;
  • O(2ⁿ) – exponential complexity;
  • O(n!) – factorial complexity.
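
To get a feel for how differently these classes grow, the following Python sketch (my own illustration) evaluates a representative function from each class for a few values of n:

import math

classes = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n**2)":    lambda n: n**2,              # polynomial representative
    "O(2**n)":    lambda n: 2**n,
    "O(n!)":      lambda n: math.factorial(n),
}

for n in (2, 8, 16):
    print(f"n = {n}")
    for name, growth in classes.items():
        print(f"  {name:<11} {growth(n):,.0f}")
# at n = 16 the factorial class is already ~2.09e13, dwarfing all the others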