# What’s the Big O Deal, Anyways? Part 2 of 3

## Jumping into the 7 growth notations of asymptotic runtime.

This article is part 2 of a shallow dive into Big O Notation. The first article is available here: https://ogroetz.medium.com/starting-down-the-big-o-path-part-1-of-191d8710bce6

In my previous article, we went over what an algorithm and Big O Notation are, along with 2 rules to keep in mind when beginning to factor Big O space and time complexities. This post will cover 4 of the 7 major growth notations used when determining the complexity of an algorithm, but please know that there are a few minor growth notations I won’t go over in my posts. As I always like to say in my blogs…if there is anything I have misspoken about, or important info I may have missed altogether, please reply to this post…I love learning.

*O(1)*

A runtime of *O(1)* is the most ideal runtime of an algorithm. Here, the runtime does not depend on the input size ’n’. Our algorithm will have the same runtime whether the input is 1 element or 1,000 elements. This is **constant** time. Some good examples of constant time: accessing an element of an array by index, or using push and pop methods on an array. The space required by an O(1) algorithm is also constant; it will not grow as the data grows. I understand this to be because the array itself is what holds the space in memory, not the individual elements we touch.
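A minimal sketch of these constant-time operations, using a Python list to stand in for the article’s arrays (Python’s `append` and `pop` play the role of push and pop):

```python
def get_first(items):
    """Index access takes the same time no matter how long the list is: O(1)."""
    return items[0]

small = [1]
large = list(range(1_000))

# Both calls do the same amount of work, regardless of input size.
print(get_first(small))   # 1
print(get_first(large))   # 0

# append() and pop() at the end of a list are also (amortized) O(1).
large.append(1_000)
last = large.pop()        # 1_000
```

Whether `items` holds one element or a million, `items[0]` is a single lookup, which is exactly what “not dependent upon the input size” means.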

*O(log n)*

**Logarithmic runtime** is where runtime grows in proportion to the logarithm of the input size. A **logarithm** is the power to which a number must be raised in order to get some other number¹. Examples: log₁₀ 100 = 2, because 10² = 100; and log₂ 8 = 3, because 2³ = 8. The simplest answer I have found of what a logarithm is: “How many of this number do we multiply to get that number?”² In other words, count how many copies of the base you multiply together to reach the final number: 10 × 10 × 10 × 10 = 10,000, so log₁₀ 10,000 = 4 (10⁴ = 10,000).
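The counting idea above can be checked in a few lines of Python, alongside the standard library’s own logarithm functions:

```python
import math

# "How many 10s do we multiply to get 10,000?" -- count them directly.
count, value = 0, 1
while value < 10_000:
    value *= 10
    count += 1
print(count)                 # 4, because 10 * 10 * 10 * 10 = 10,000

# The math module gives the same answers from the text's examples.
print(math.log10(10_000))    # 4.0
print(math.log2(8))          # 3.0
```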

In logarithmic time, the time it takes to run our algorithm still depends on the input size ’n’, but only weakly: time goes up linearly while n increases exponentially. Logarithmic growth is the opposite of exponential growth. When something grows exponentially, it is repeatedly multiplied, but when it grows logarithmically, the input is repeatedly divided. Many explanations of *O(log n)* use Binary Search as an example, and I’ll let you research that for yourself (hint: also look up Binary Search Flamenco Dancers on YouTube if you’re more of a visual learner).
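To make the “repeatedly divided” idea concrete, here is a minimal binary search sketch: each pass discards half of the remaining range, so a sorted list of n elements takes at most about log₂ n comparisons.

```python
def binary_search(sorted_items, target):
    """Halve the search range on every pass: O(log n) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                 # found it
        elif sorted_items[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # not present

data = list(range(0, 100, 2))          # 0, 2, 4, ..., 98 (already sorted)
print(binary_search(data, 42))         # 21
print(binary_search(data, 43))         # -1
```

Doubling the list from 50 to 100 elements adds only one extra pass, which is the signature of O(log n).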

*O(n)*

**Linear Time**. If the time to execute an algorithm is directly proportional to the input size ’n’, our algorithm has a linear time complexity. If passed a data set of 10 elements, it will take 10 times as long as it would if it had been passed 1 element. Because time (or space) grows in direct proportion to ’n’, on our graph you’ll see that O(n) is the line that follows the direct diagonal of the graph.

Oftentimes, if an algorithm uses a single ‘for’ loop, we can assume the algorithm has a linear runtime of O(n), as we know that the loop will iterate through all elements of the data set. A for loop might return early (if, for example, a matching string is found in an early iteration), but Big O, again, works off the worst-case scenario.
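A sketch of that single-loop case, a simple linear search (the names here are just illustrative):

```python
def find_name(names, target):
    """A single for loop over n elements: O(n) in the worst case."""
    for i, name in enumerate(names):
        if name == target:
            return i       # may return early on a match...
    return -1              # ...but the worst case scans all n elements

names = ["Ada", "Grace", "Alan", "Edsger"]
print(find_name(names, "Alan"))      # 2 (found early, before the end)
print(find_name(names, "Barbara"))   # -1, only after checking every element
```

The early return is why a lucky search can finish fast, yet the complexity is still quoted as O(n): Big O describes the unlucky run that has to check everything.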

*O(n log n)*

Many sorting algorithms, because they compare and order two values at a time, have a runtime of O(n log n). This **linearithmic runtime** includes algorithms like quicksort, mergesort and heapsort. Because so many algorithms are used to sort data, this complexity class is one that is often referred to when figuring out space used and runtime. In these cases, we have a logarithmic amount of work repeated ’n’ times (or, equivalently, linear work repeated log n times). If there are two loops present, the inner loop must shrink its own range independently of the outer loop’s counter, for instance by halving it on each pass, rather than simply walking the same n elements again.

O(n log n) does not represent exponential growth whatsoever. In fact, O(n log n) is not that much worse than O(n): an algorithm with a growth rate of O(n log n) has a complexity much closer to linear than to the next, worse case of a quadratic runtime.
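Mergesort, one of the algorithms named above, shows where the n log n comes from in a short sketch: the list is split in half about log n times, and each level of splitting does O(n) work to merge the halves back together.

```python
def merge_sort(items):
    """Split ~log n times; each level merges in O(n), giving O(n log n) total."""
    if len(items) <= 1:
        return items                       # a 0- or 1-element list is sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])         # sort each half recursively
    right = merge_sort(items[mid:])

    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                # one side may have leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))      # [1, 2, 5, 5, 6, 9]
```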

Phew! Let’s call it here for now. Next week I’ll be back to pick up with the remaining 3 growth complexities. I know, I know…I have you on the edge of your Big O’ seats!

¹ University of Minnesota | http://www.mclph.umn.edu/mathrefresh/logs.html (unknown author)
² Math Is Fun | https://www.mathsisfun.com/definitions/logarithm.html (unknown author)