A way to describe how an algorithm's performance scales with input size.
Big O Notation is a mathematical notation that describes the limiting behavior of a function as its argument tends towards a particular value or infinity. In computer science, it's used to classify algorithms according to how their running time or space requirements grow as the input size grows. It helps us understand the efficiency of an algorithm for large inputs by describing an upper bound on growth, most commonly quoted for the worst-case scenario.
When we talk about algorithm efficiency, we primarily consider two aspects:
Time complexity: how the running time of an algorithm grows as the input size grows.
Space complexity: how much memory an algorithm needs as the input size grows.
Big O Notation provides a high-level understanding, abstracting away constant factors and lower-order terms, allowing us to compare algorithms based on their fundamental scaling behavior.
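To make the time/space distinction concrete, here is a small illustrative sketch (the function names are hypothetical, not from any library) of two ways to detect a duplicate value in an array: the first spends more time but uses almost no extra memory, while the second spends extra memory to run faster.

// Slower but memory-light: compare every pair of elements.
function hasDuplicateSlow(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}
// Faster but memory-hungry: remember every value seen so far.
function hasDuplicateFast(arr) {
  const seen = new Set();
  for (const value of arr) {
    if (seen.has(value)) return true;
    seen.add(value);
  }
  return false;
}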
Let's explore the most common Big O notations, their characteristics, and examples.
An algorithm runs in constant time, O(1), if its execution time does not depend on the size of the input. No matter how large the input data set is, the time taken to complete the operation remains the same.
Example: Accessing an element in an array by its index.
function getFirstElement(arr) {
  return arr[0]; // Always takes one step, regardless of array size
}
Logarithmic time complexity, O(log n), means the execution time grows very slowly as the input size increases. It typically occurs in algorithms that halve the problem at each step, such as binary search.
Example: Binary search in a sorted array.
function binarySearch(arr, target) {
  let low = 0, high = arr.length - 1;
  while (low <= high) {
    let mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;
    else if (arr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1;
}
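To see how slowly the work grows, here is a small usage sketch (the sample array is arbitrary): 16 elements need at most 5 probes, and even a million elements would need only about 20.

const sorted = [1, 3, 4, 7, 9, 11, 14, 15, 18, 21, 24, 27, 30, 33, 36, 40];
console.log(binarySearch(sorted, 21)); // 9  (found after 3 probes)
console.log(binarySearch(sorted, 5));  // -1 (ruled out after 4 probes)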
Linear time complexity, O(n), means the execution time grows proportionally with the input size. If the input doubles, the time taken roughly doubles. This is common for algorithms that need to process each element of the input once.
Example: Finding the maximum value in an unsorted array.
function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) { // Loop runs 'n' times
    if (arr[i] > max) max = arr[i];
  }
  return max;
}
Linearithmic time complexity, O(n log n), is common in efficient sorting algorithms. It's better than quadratic time but worse than linear time. It often arises from algorithms that divide the problem into halves (the log n factor) and then perform linear work at each level (the n factor).
Example: Merge Sort (always), Quick Sort (on average).
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right); // Merge takes O(n)
}
function merge(left, right) { // Combines two sorted halves in linear time
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    result.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return result.concat(left.slice(i), right.slice(j));
}
// The recursive calls create log n levels, each doing O(n) work.
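A quick usage sketch with arbitrary sample values:

console.log(mergeSort([38, 27, 43, 3, 9, 82, 10])); // [3, 9, 10, 27, 38, 43, 82]

Here the recursion is about three levels deep (log₂ 7 ≈ 2.8), and each level processes all seven elements while merging.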
Quadratic time complexity, O(n²), means the execution time grows proportionally to the square of the input size. If the input doubles, the time taken quadruples. This is often seen in algorithms with nested loops.
Example: Bubble Sort, Selection Sort, Insertion Sort.
function bubbleSort(arr) {
  for (let i = 0; i < arr.length; i++) { // Outer loop: n
    for (let j = 0; j < arr.length - 1 - i; j++) { // Inner loop: ~n
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
}
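A quick usage sketch; note that this version sorts the array in place rather than returning a new array:

const data = [5, 1, 4, 2, 8];
bubbleSort(data);
console.log(data); // [1, 2, 4, 5, 8]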
Exponential time complexity, O(2ⁿ), means the execution time roughly doubles with each additional element of input. These algorithms are highly inefficient and become impractical very quickly as the input grows.
Example: Recursive calculation of Fibonacci numbers without memoization.
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2); // Recalculates many times
}
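For contrast, here is a minimal sketch of the same calculation with memoization added (the cache parameter is an illustrative addition, not part of the original example). Caching each result collapses the exponential call tree to O(n) time, at the cost of O(n) space:

function fibonacciMemo(n, cache = new Map()) {
  if (n <= 1) return n;
  if (cache.has(n)) return cache.get(n); // Reuse a previously computed value
  const result = fibonacciMemo(n - 1, cache) + fibonacciMemo(n - 2, cache);
  cache.set(n, result);
  return result;
}

console.log(fibonacciMemo(50)); // 12586269025, computed almost instantly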
Factorial time complexity, O(n!), is the most severe growth rate covered here. The execution time grows extremely rapidly, making these algorithms practical only for very small input sizes. It typically arises when generating all possible permutations.
Example: Solving the Traveling Salesperson Problem using brute force.
function permutations(arr) {
  if (arr.length === 0) return [[]];
  const firstEl = arr[0];
  const rest = arr.slice(1);
  const result = [];
  for (const perm of permutations(rest)) {
    for (let i = 0; i <= perm.length; i++) { // Insert firstEl at every position
      result.push([...perm.slice(0, i), firstEl, ...perm.slice(i)]);
    }
  }
  return result;
}
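Building on the permutations function above, here is a minimal brute-force TSP sketch, assuming a small symmetric distance matrix (shortestTour and the sample matrix are illustrative, not from the original). Trying every ordering of the remaining n - 1 cities examines (n - 1)! routes, hence the factorial growth:

function shortestTour(dist) {
  const cities = [...Array(dist.length).keys()].slice(1); // Fix city 0 as the start
  let best = Infinity;
  for (const order of permutations(cities)) { // (n - 1)! candidate routes
    const route = [0, ...order, 0]; // Visit every city, then return to the start
    let total = 0;
    for (let i = 0; i < route.length - 1; i++) {
      total += dist[route[i]][route[i + 1]];
    }
    best = Math.min(best, total);
  }
  return best;
}

const dist = [
  [0, 2, 9, 10],
  [2, 0, 6, 4],
  [9, 6, 0, 3],
  [10, 4, 3, 0],
];
console.log(shortestTour(dist)); // 18 (route 0 -> 1 -> 3 -> 2 -> 0)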
When determining the Big O complexity of an algorithm, we follow a few important rules to simplify the expression:
Drop constant factors: Big O notation describes the rate of growth, not the exact running time, so constant multipliers don't change the classification for large inputs.
Example: An algorithm that takes 2n steps is still O(n).
function processTwoArrays(arr1, arr2) {
  // First loop: O(n)
  for (let i = 0; i < arr1.length; i++) { /* ... */ }
  // Second loop: O(n)
  for (let i = 0; i < arr2.length; i++) { /* ... */ }
  // Total: O(n + n) = O(2n), which simplifies to O(n)
}
Drop non-dominant terms: when an algorithm's complexity expression has multiple terms, we keep only the term that grows the fastest (the dominant term) as 'n' approaches infinity.
Example: An algorithm with O(n² + n) complexity simplifies to O(n²).
function complexOperation(arr) {
  // Nested loop: O(n²)
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) { /* ... */ }
  }
  // Single loop: O(n)
  for (let k = 0; k < arr.length; k++) { /* ... */ }
  // Total: O(n² + n), which simplifies to O(n²)
}