# Starting Down the Big O’ Path (Part 1 of 3)

## Learning Big O Notation for improving app performance.

Well, the time has come…to learn Data Structures & Algorithms. I feel like I should sign off for a year and close the office door after hanging a note on the other side that reads "I'm going in, wish me luck!" I've decided to start with Big O Notation and bring you along with me. This is a BIG topic, so I'm only going to attempt to scratch the surface, and this post might have to become the first of many just to cover the introduction. I know nothing about this topic currently; I'm merely trying to teach myself using numerous sources and videos. If I get anything wrong, please feel free to let me know. I'm happy to learn!

If you're anything like me, you're embarrassed to ask: what is an algorithm? An algorithm is a set of instructions given to a computer to perform a specific task. Whatever your level of experience in programming, we have all had some interaction with algorithms, more than likely in the form of functions. Big O Notation is the analysis of the complexity, or performance, of an algorithm. Big O can be used to approximate how much time an algorithm takes to run and how much space it uses, independent of the machine it runs on. This becomes vitally important for programs that plan to scale, or when designing code for already immense applications. Oh, and it does not matter what language you code in…Big O will be there silently judging you…I mean lurking in the bottlenecks, letting you know how to improve your program.

From what I have seen so far, Big O specifically refers to the worst-case scenario of efficiency. There are usually a number of different algorithms that will solve the same problem a function tackles. Comparing the runtime or space complexity between them is how we determine which option will be best for our program.
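To make that concrete, here's a quick sketch of my own (not from the sources above): two algorithms that solve the exact same problem, summing the numbers from 1 to n, with very different complexities.

```python
def sum_loop(n):
    # Adds every number from 1 to n one at a time: n steps, so O(n).
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Gauss's formula gets the same answer in a single step: O(1).
    return n * (n + 1) // 2

print(sum_loop(1000))     # 500500
print(sum_formula(1000))  # 500500
```

Same answer, same problem, but one does n operations and the other does one. That difference is exactly what Big O lets us talk about.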

There are a number of common orders of growth for algorithms. Along with the chart in my first picture, the clearest layout of their order of 'domination' that my brain could comprehend was in Michael Sambol's video 'Big-O notation in 5 minutes — The basics'¹, ranked here from least to most dominant:

O(1) → O(log n) → O(n) → O(n log n) → O(n²) → O(2ⁿ) → O(n!)

The 'O' here is the order of magnitude: the upper bound of runtime, or the worst-case growth rate function. The 'n' is the input size (for example, with an array, that would be the number of elements in the array). The larger the order, the worse the algorithm will perform. In Big O, runtime is described by its growth relative to the input we give the algorithm. I found a brilliant line in educative's definition of Big O: "It doesn't tell you how fast or how slow an algorithm will go, but instead about how it changes with respect to its input size."²
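That "changes with respect to its input size" idea clicked for me with a little experiment of my own (an illustration, not from the cited sources): a worst-case linear search, where the target isn't in the list at all, so every element gets checked.

```python
def linear_search(items, target):
    # Worst case: the target is absent, so we check all n elements -> O(n).
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            return steps
    return steps

print(linear_search(list(range(10)), -1))  # 10
print(linear_search(list(range(20)), -1))  # 20
```

Double the input, double the work: that proportional relationship, not the raw speed, is what O(n) captures.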

Before we go into the order of growth notations, it is important to know that #1 — Big O ignores constants (aka coefficients). What does this mean? If you multiply a function by a constant, it will only influence the growth by that constant amount; the algorithm will still grow at the rate it would have grown without the constant: linearly, exponentially, logarithmically, etc. Constants do matter at small scale: of course a runtime of O(100n) runs slower than a runtime of O(n). But Big O really comes into play, and is most useful, at large scale. When scale is involved, the constants can be dropped, because what truly matters is not comparing two linear functions, but comparing a constant function *O(1)* vs. a linear function *O(n)* vs. a quadratic function *O(n²)*. At such large inputs, the difference between the growth notations is so drastic that adding a constant to any of them would not change which growth category the algorithm falls into. Big O is offering us a relative comparison, and again, one based on the worst-case scenario. Interestingly enough, I also found a source in my readings that gave another reason for ignoring constants: different computers have different constant factors and access memory at differing speeds. Because we are concerned with the Big O of the algorithm and not the hardware of the machine, it is best to ignore constant factors (but I'll let you read more about asymptotic analysis if you're into that kind of thing…kidding, this is all asymptotic analysis).
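Here's a tiny step-counting sketch of my own that shows both halves of that point: at small n the constant 100 matters a lot, but once n gets big, the quadratic term dwarfs it completely.

```python
# Rough "step counts" for three hypothetical algorithms at growing input sizes.
for n in (10, 1_000, 1_000_000):
    linear = n            # O(n)
    linear_100 = 100 * n  # O(100n), which Big O still calls O(n)
    quadratic = n * n     # O(n^2)
    print(f"n={n}: n={linear}, 100n={linear_100}, n^2={quadratic}")
```

At n = 10, the "slow" 100n algorithm (1,000 steps) actually does worse than n² (100 steps). At n = 1,000,000, n² is ten thousand times more work than 100n. The growth category wins in the end, which is why the constant gets dropped.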

#2 — When factoring Big O: if a function gives you two (or more) terms, ignore the lower-order (non-dominant) terms. An example from Cracking the Coding Interview: O(5·2ⁿ + 1000n¹⁰⁰) becomes O(2ⁿ). The constant 5 doesn't matter, and 1000n¹⁰⁰ is a lower-order term (exponential growth eventually dominates any polynomial), so both can be dropped.
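A simpler homemade example of the same rule: a function whose work is n² + n. The single loop's n steps are a lower-order term next to the nested loop's n² steps, so Big O simplifies the whole thing to O(n²).

```python
def count_pairs_and_items(items):
    n_pairs = 0
    # Nested loop: n * n steps -> the dominant n^2 term.
    for a in items:
        for b in items:
            n_pairs += 1
    n_items = 0
    # Single loop: n more steps -> the lower-order term Big O drops.
    for item in items:
        n_items += 1
    # Total work is n^2 + n, which Big O simplifies to O(n^2).
    return n_pairs, n_items

print(count_pairs_and_items(list(range(5))))  # (25, 5)
```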

There are seven order-of-growth notations above, but I understand that there are actually others. In part deux, my hopefully-not-bad sequel, I'm going to focus on breaking down these seven. Let's try to tackle 'em all, not like mathematicians, but like novice bootcamp grads who need (and want) to know more.

**Hope to see you in the next post**…don’t give up yet!

Cited Resources:

- YouTube | ‘Big-O notation in 5 minutes — The basics’ by Michael Sambol
- educative | 'Big O Notation: A primer for beginning devs'
- Medium | 'Medium Subscript Text and Medium Superscript Format' by Casey Botticello

Great Resource: (Thx @AlveeAkand!) https://www.bigocheatsheet.com/