A given problem in computer science often has multiple algorithms that solve it. For example, how would you find the largest number in an array of integers? You could sort the array and take the last element in O(n log n) time, or simply iterate through the array while maintaining a variable that tracks the largest number seen so far, in O(n) time.
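The O(n) linear scan described above can be sketched in Python like this:

```python
def find_max(nums):
    """Return the largest number in a non-empty list.

    One pass over n elements -> O(n) time, O(1) extra space.
    """
    largest = nums[0]
    for n in nums[1:]:
        if n > largest:
            largest = n
    return largest

find_max([3, 7, 2, 9, 4])  # → 9
```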

We usually want to use the optimal (most efficient) algorithm because it makes our software run faster. To find it, it helps to understand the major time complexity classes and the common problems and algorithms that fall within each class.

**O(1)**

- Single operation, e.g. setting a value or performing a math operation
- Lookup/overwrite the nth element of an array (deleting from the middle is O(n), since later elements must shift)
- Lookup/insert/delete element from a hash table (average case, where there are few collisions)
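A quick sketch of these constant-time operations in Python, where lists are array-backed and dicts are hash tables:

```python
# O(1): index directly into an array-backed list.
nums = [10, 20, 30, 40]
nums[2]  # → 30

# O(1) average case: hash-table lookup, insert, and delete.
ages = {"alice": 30, "bob": 25}
ages["bob"]          # → 25
ages["carol"] = 41   # insert
del ages["bob"]      # delete
```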

**O(log n)**

- Repeatedly halving the data set or the remaining work
- Binary search
- Search/insert/delete element from a binary search tree (average case, where tree is balanced)
- Insert or delete-min/delete-max from a heap (finding an arbitrary element in a heap is O(n))
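Binary search is the classic O(log n) algorithm: each comparison halves the remaining search range. A minimal sketch:

```python
def binary_search(sorted_nums, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Each iteration halves the search range -> O(log n).
    """
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        elif sorted_nums[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1
```

Note that this only works on a sorted input; on unsorted data you are back to an O(n) scan.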

**O(n)**

- Iterating through an entire data set
- Search a linked list (insertion/deletion at a known node is O(1), but finding that position requires an O(n) traversal)
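Linked-list search illustrates the O(n) walk; a minimal singly linked list in Python:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def contains(head, target):
    """O(n) search: in the worst case we visit every node."""
    node = head
    while node is not None:
        if node.value == target:
            return True
        node = node.next
    return False

head = Node(1, Node(2, Node(3)))  # list: 1 -> 2 -> 3
contains(head, 3)  # → True
```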

**O(n log n)**

- Merge sort, heap sort, and quick sort (quick sort is O(n log n) on average but degrades to O(n^2) in the worst case)
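Merge sort shows where the n log n comes from: about log n levels of halving, with O(n) merging work per level. A minimal sketch:

```python
def merge_sort(nums):
    """O(n log n): log n levels of recursion, O(n) merge per level."""
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])
    right = merge_sort(nums[mid:])

    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```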

**O(n^2)**

- Looking at all pairs of items
- Selection sort, insertion sort, bubble sort
- 0/1 knapsack problem using dynamic programming (strictly O(nW), where W is the capacity; quadratic-like when W is comparable to n)
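Selection sort is a simple example of the "look at all pairs" pattern: two nested loops over the input give O(n^2) comparisons. A sketch:

```python
def selection_sort(nums):
    """O(n^2): for each position, scan the rest of the list for the minimum."""
    nums = list(nums)  # work on a copy
    for i in range(len(nums)):
        smallest = i
        for j in range(i + 1, len(nums)):  # inner scan -> nested loops
            if nums[j] < nums[smallest]:
                smallest = j
        nums[i], nums[smallest] = nums[smallest], nums[i]
    return nums
```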

**O(2^n)**

- Work that doubles with each additional input element
- Naive recursive Fibonacci
- Travelling salesman problem using dynamic programming (Held-Karp, strictly O(n^2 * 2^n))
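Naive recursive Fibonacci makes the doubling concrete: each call spawns two more calls, so the call tree grows exponentially. A sketch:

```python
def fib(n):
    """Naive recursion: each call branches into two more,
    so the number of calls grows as O(2^n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)  # → 55
```

Memoizing the results (or iterating bottom-up) collapses this to O(n), which is why this version is called "naive".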

**O(n!)**

- Generating all permutations of a set
- Brute-force travelling salesman problem (trying every possible tour)
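Generating all permutations shows where the factorial comes from: there are n choices for the first item, n - 1 for the second, and so on. A minimal recursive sketch:

```python
def permutations(items):
    """Generate all n! orderings of a list: n first choices,
    each recursing on the remaining n - 1 items."""
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, item in enumerate(items):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([item] + perm)
    return result

len(permutations([1, 2, 3]))  # → 6
```

A brute-force travelling salesman solver does exactly this: it enumerates every permutation of the cities and keeps the cheapest tour.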