Demystifying Big-O Notation: The Ultimate Guide to Understanding Algorithm Complexity
Big-O notation is a mathematical notation used to describe the performance or complexity of an algorithm, specifically how long it takes to run as the input size grows. Understanding Big-O notation is essential for software engineers: it allows them to analyze and compare the efficiency of different algorithms and make informed decisions about which one to use in a given situation. In this guide, we will cover the basics of Big-O notation and how to use it to analyze the performance of algorithms.
What is Big-O?
Big-O notation is a way of expressing the time (or space) complexity of an algorithm. It provides a rough estimate of how long an algorithm takes to run (or how much memory it uses), based on the size of the input. For example, an algorithm with a time complexity of O(n) means that the running time increases linearly with the size of the input.
What is time complexity?
Time complexity is a measure of how long an algorithm takes to run, based on the size of the input. It is expressed using Big-O notation, which provides a rough estimate of the running time. An algorithm with a lower time complexity will generally be faster than one with a higher time complexity, at least for sufficiently large inputs.
What is space complexity?
Space complexity is a measure of how much memory an algorithm requires, based on the size of the input. Like time complexity, it is expressed using Big-O notation. An algorithm with a lower space complexity will generally require less memory than an algorithm with a higher space complexity.
Examples of time complexity
Here are some examples of how different time complexities are expressed using Big-O notation (short Python sketches of these classes follow the list):
- O(1): Constant time. The running time is independent of the size of the input.
- O(n): Linear time. The running time increases linearly with the size of the input.
- O(n²): Quadratic time. The running time is proportional to the square of the size of the input.
- O(log n): Logarithmic time. The running time increases logarithmically with the size of the input.
- O(2^n): Exponential time. The running time increases exponentially with the size of the input.
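To make these classes concrete, here is a minimal sketch with one toy function per class (the function names are illustrative additions, not from any standard library); exponential time is covered by the Fibonacci example later in this guide:

def constant_time(arr):
    # O(1): one step, regardless of input size
    return arr[0]

def linear_time(arr):
    # O(n): visits every element once
    total = 0
    for x in arr:
        total += x
    return total

def quadratic_time(arr):
    # O(n²): examines every pair of elements
    pairs = 0
    for x in arr:
        for y in arr:
            pairs += 1
    return pairs

def logarithmic_time(n):
    # O(log n): halves the problem each step
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps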
Big-O notation describes an upper bound on an algorithm’s running time. It provides a way to compare the running time of different algorithms, regardless of the specific hardware or implementation being used.
It’s important to note that Big-O notation only provides an upper bound on the growth rate and hides constant factors. This means that an algorithm with a time complexity of O(n) could potentially run faster than an algorithm with a time complexity of O(log n) in some cases, particularly on small inputs, depending on the specific implementation and hardware being used.
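As a rough illustration (a sketch, not a rigorous benchmark; the exact numbers depend on your machine), Python's timeit module can show the simpler O(n) linear scan beating O(log n) binary search on a very small list, using the linear_search and binary_search functions defined later in this guide:

import timeit

small = list(range(8))  # a tiny sorted list

linear = timeit.timeit(lambda: linear_search(small, 7), number=100_000)
binary = timeit.timeit(lambda: binary_search(small, 7), number=100_000)
print(f"linear: {linear:.3f}s  binary: {binary:.3f}s")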
Additionally, Big-O notation only considers the dominant term in the running time equation. For example, an algorithm with a running time of O(n² + n) would be simplified to O(n²).
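For instance, a hypothetical function with a nested loop followed by a single loop performs n² + n units of work, but its complexity is still written as O(n²):

def dominant_term_demo(arr):
    n = len(arr)
    steps = 0
    for i in range(n):      # n² steps from the nested loop...
        for j in range(n):
            steps += 1
    for i in range(n):      # ...plus n more steps from the single loop
        steps += 1
    return steps            # n² + n total, which simplifies to O(n²)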
For a detailed discussion on algorithms and coding patterns, take a look at Grokking the Coding Interview.
Example algorithms and their time complexities
Here are a few examples of algorithms written in Python, along with their time complexities expressed using Big-O notation:
1. Linear search - O(n)
def linear_search(arr, x):
    # Scan the list from left to right until x is found
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1
This linear search algorithm has a time complexity of O(n), since the running time is directly proportional to the size of the input. If the input size is n, the linear search will take up to n steps to complete (the worst case, when the target is last or absent).
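A quick usage sketch (the list and target values are arbitrary):

arr = [4, 8, 15, 16, 23, 42]
print(linear_search(arr, 23))  # 4: found at index 4 after checking 5 elements
print(linear_search(arr, 99))  # -1: not found, all 6 elements were checked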
2. Binary search - O(log n)
def binary_search(arr, x):
    # Repeatedly halve the search range of a sorted list
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] < x:
            low = mid + 1
        elif arr[mid] > x:
            high = mid - 1
        else:
            return mid
    return -1
This binary search algorithm has a time complexity of O(log n), since each iteration halves the remaining search range. Note that it requires the input array to be sorted; if the input size is n, it will take approximately log(n) steps to complete the binary search.
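A usage sketch; note that the input list must already be sorted:

arr = [1, 3, 5, 7, 9, 11, 13, 15]
print(binary_search(arr, 7))  # 3: found; at most 3 halvings are needed for 8 elements
print(binary_search(arr, 4))  # -1: not present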
3. Bubble sort - O(n²)
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                # Swap adjacent elements that are out of order
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
This bubble sort algorithm has a time complexity of O(n²), since the running time is quadratic in the size of the input. If the input size is n, it will take approximately n² steps to complete the bubble sort.
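A quick in-place usage sketch:

arr = [5, 1, 4, 2, 8]
bubble_sort(arr)  # sorts the list in place; the nested loops do O(n²) work
print(arr)        # [1, 2, 4, 5, 8]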
4. Quick Sort - O(n log n)
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    # Partition around a pivot, then sort each side recursively
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
This implementation of quick sort has an average time complexity of O(n log n). In the worst case, when the pivot is always the smallest or largest element in the array, the time complexity is O(n²).
The quick sort algorithm works by selecting a pivot element and partitioning the array into three subarrays: one containing elements less than the pivot, one containing elements equal to the pivot, and one containing elements greater than the pivot. It then recursively sorts the left and right subarrays.
The time complexity of quick sort is determined by the depth of the recursion and the work done at each level. On average the pivot splits the array roughly in half, so the recursion is about log(n) levels deep; partitioning does O(n) work across each level, so the total running time is O(n log n).
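Unlike the bubble sort above, this quick sort returns a new sorted list rather than sorting in place:

print(quick_sort([3, 6, 1, 8, 2, 9]))  # [1, 2, 3, 6, 8, 9]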
5. Fibonacci sequence - O(2^n)
Here is an implementation of an algorithm for calculating the nth element of the Fibonacci sequence in Python:
def fibonacci(n):
    # Naive recursion: recomputes the same subproblems many times
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
This implementation of the Fibonacci sequence has a time complexity of O(2^n), which is exponential in the size of the input. This is because each call to fibonacci(n) results in two additional recursive calls, to fibonacci(n-1) and fibonacci(n-2).
The time complexity of this algorithm is determined by the number of recursive calls that are made. Each call does a constant amount of work but spawns two more calls, so the number of calls roughly doubles as n grows, reaching on the order of 2^n. With O(1) work per call, the total running time is O(2^n).
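To see the blow-up directly, here is a small instrumentation sketch (the call_count variable and the fibonacci_counted name are illustrative additions, not part of the original algorithm):

call_count = 0

def fibonacci_counted(n):
    global call_count
    call_count += 1  # count every recursive call
    if n <= 1:
        return n
    return fibonacci_counted(n - 1) + fibonacci_counted(n - 2)

fibonacci_counted(20)
print(call_count)  # 21891 calls for n = 20; the count roughly doubles as n grows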
Example algorithms and their space complexities
Here are a few examples of algorithms written in Python, along with their space complexities expressed using Big-O notation:
1. Linear search - O(1)
def linear_search(arr, x):
    for i in range(len(arr)):
        if arr[i] == x:
            return i
    return -1
This linear search algorithm has a space complexity of O(1), since it only requires a constant amount of memory regardless of the size of the input. It creates a single variable (i) to store the loop index, but the memory required for this variable is constant.
2. Fibonacci sequence - O(n)
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
This algorithm for calculating the nth element of the Fibonacci sequence has a space complexity of O(n), since it uses recursion and the call stack grows with the size of the input. Each recursive call adds a stack frame, and the deepest chain of calls runs from fibonacci(n) down through fibonacci(n-1) to fibonacci(0), so the stack can be up to n frames deep and the memory required increases linearly with the size of the input.
3. Merge sort - O(n)
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2
        left = arr[:mid]
        right = arr[mid:]
        merge_sort(left)
        merge_sort(right)
        # Merge the two sorted halves back into arr
        i = j = k = 0
        while i < len(left) and j < len(right):
            if left[i] < right[j]:
                arr[k] = left[i]
                i += 1
            else:
                arr[k] = right[j]
                j += 1
            k += 1
        while i < len(left):
            arr[k] = left[i]
            i += 1
            k += 1
        while j < len(right):
            arr[k] = right[j]
            j += 1
            k += 1
This merge sort algorithm has a space complexity of O(n), since it creates two additional arrays (left and right) to store the left and right halves of the input during each recursive call. The memory required to store these arrays increases linearly with the size of the input.
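A usage sketch:

arr = [38, 27, 43, 3, 9, 82, 10]
merge_sort(arr)  # sorts in place, allocating temporary half-copies as it goes
print(arr)       # [3, 9, 10, 27, 38, 43, 82]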
4. Quick Sort - O(log n) or O(n)
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
Quick sort has a space complexity of O(log n) on average, because the extra memory for an in-place version is the call stack, whose depth is logarithmic in the size of the input when the pivot splits the array evenly. In the worst case, when the pivot is always the smallest or largest element in the array, the recursion is n levels deep and the space complexity is O(n). Note that the list-comprehension implementation shown above also allocates new sublists at every level, so it uses O(n) extra memory even in the average case; the O(log n) figure applies to the in-place variant.
5. Create Matrix - O(n²)
def create_matrix(n):
    matrix = []
    for i in range(n):
        matrix.append([0] * n)  # each row stores n zeros
    return matrix
This algorithm creates an n x n matrix filled with zeros by building a new list of n zeros for each row and appending it to the matrix. Its space complexity is O(n²), since the matrix contains n x n = n² entries; the memory required grows with the square of the input size.
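A usage sketch confirming the n² growth:

m = create_matrix(3)
print(m)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(sum(len(row) for row in m))  # 9 entries stored for n = 3, i.e. n²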
Conclusion
Understanding and being able to use Big-O notation effectively is crucial for success in coding interviews. With a little practice and effort, you can become an expert in Big-O and increase your chances of acing your coding interviews. Don't let the fear of Big-O hold you back - with the right knowledge and preparation, you can conquer it and succeed in your coding career. If you want to learn more about algorithms and Big-O, check out Grokking the Coding Interview and Grokking Dynamic Programming for Coding Interviews.
Check Design Gurus for some interesting courses on Coding and System Design interviews.