
Understanding Big O Notation

Learn how to analyze algorithm efficiency using Big O notation, with practical examples and common complexity classes.

January 1, 2026 · 10 min read · By Mathematicon

Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. In computer science, it's used to classify algorithms according to how their run time or space requirements grow as the input size grows.

What is Big O?

Big O notation describes an upper bound on an algorithm's complexity: it tells us how the algorithm's running time or space grows in the worst case.

$$O(f(n))$$

Where $f(n)$ is a function describing the growth rate relative to input size $n$.
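To make "worst case" concrete, consider searching an unsorted list (a hypothetical `linear_search` helper, added here for illustration): the best case finds the target at index 0 in one comparison, but the worst case checks all $n$ elements, so the search is $O(n)$.

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):  # up to n comparisons in the worst case
        if value == target:
            return i  # best case: target at index 0, one comparison
    return -1  # worst case: target absent, all n elements checked
```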

Common Complexity Classes

O(1) - Constant Time

The algorithm takes the same amount of time regardless of input size.

def get_first_element(arr):
    return arr[0]  # Always one operation

O(log n) - Logarithmic Time

The algorithm's time grows logarithmically. Binary search is a classic example.

def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

O(n) - Linear Time

The algorithm's time grows linearly with input size.

def find_max(arr):
    max_val = arr[0]
    for num in arr:  # n iterations
        if num > max_val:
            max_val = num
    return max_val

O(n log n) - Linearithmic Time

Common in efficient sorting algorithms like merge sort and quicksort.
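To sketch where the $n \log n$ comes from with merge sort: the list is halved about $\log n$ times, and each level of recursion merges a total of $n$ elements. A minimal illustrative implementation (not from the article):

```python
def merge_sort(arr):
    """Sort a list in O(n log n) time by recursive halving and merging."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # halving gives ~log n levels...
    right = merge_sort(arr[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):  # ...each level merges n elements
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whatever remains of either half
    merged.extend(right[j:])
    return merged
```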

O(n²) - Quadratic Time

Often seen in algorithms with nested loops.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):  # n passes
        for j in range(n - 1):  # up to n - 1 comparisons per pass
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

Comparing Growth Rates

$$O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!)$$
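To build intuition for this ordering, a small helper (hypothetical, added for illustration) can tabulate the first few growth functions at a given input size; the gaps widen quickly as $n$ grows.

```python
import math

def growth_table(n):
    """Return (log2 n, n, n log2 n, n^2) for a given input size n."""
    return (math.log2(n), n, n * math.log2(n), n ** 2)

# For n = 8 the values are already spread out: 3, 8, 24, 64.
```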

Practical Tips

  1. Drop constants: $O(2n) = O(n)$
  2. Drop lower order terms: $O(n^2 + n) = O(n^2)$
  3. Consider the worst case by default
  4. Space complexity matters too!
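Tips 1 and 2 show up naturally in code. The hypothetical function below makes two separate passes over the input, roughly $2n$ operations, yet its complexity is still $O(n)$ once the constant is dropped:

```python
def min_and_max(arr):
    """Find the minimum and maximum in two passes: O(2n) work, O(n) overall."""
    lo = arr[0]
    for num in arr:  # first pass: n iterations
        if num < lo:
            lo = num
    hi = arr[0]
    for num in arr:  # second pass: another n iterations
        if num > hi:
            hi = num
    return lo, hi  # 2n operations simplify to O(n)
```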

Conclusion

Understanding Big O notation is crucial for writing efficient code and choosing the right algorithms for your problems. It helps you make informed decisions about trade-offs between time and space complexity.
