Big-O notation is great because it gives us a precise way of being vague. We want to be somewhat vague because we don't want to waste time figuring out exactly how many clock cycles an algorithm takes to run (which also depends on the language, compiler, and machine).
But we want to be vague in a precise way. Ignoring constants is fine, and so is ignoring coefficients, but we want to capture the essence of how the work we need to do scales up to different input sizes.
This section will talk about the mathematical definition of Big-O, which is as follows:
If $f(n)$ and $g(n)$ are functions, then:
$f(n) = O(g(n))$
is true if there is some constant value $c$ we can multiply $g(n)$ by such that $c \cdot g(n)$ is greater than or equal to $f(n)$ for all sufficiently large values of $n$.
For example, take our function from before:
$f(n) = 2 \cdot n + 4$
Then we can say that:
$f(n) = O(n)$
Because there is some value $c$ (for example 3) where, for sufficiently large values of $n$ (here, $n \geq 4$), $3 \cdot n$ is greater than or equal to $2 \cdot n + 4$.
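We can spot-check this definition numerically. Below is a minimal Python sketch (the function name `witnesses_big_o` and the cutoff of 10,000 are just illustrative choices; checking a finite range demonstrates the idea but is not a proof, which would require the inequality to hold for *all* $n \geq n_0$):

```python
def witnesses_big_o(f, g, c, n0, limit=10_000):
    """Return True if c * g(n) >= f(n) for every n in [n0, limit).

    A finite spot-check of the Big-O definition: f(n) = O(g(n))
    means some constant c makes c * g(n) >= f(n) once n is large enough.
    """
    return all(c * g(n) >= f(n) for n in range(n0, limit))

# f(n) = 2n + 4 is O(n): the constant c = 3 works once n >= 4,
# since 3n >= 2n + 4 exactly when n >= 4.
f = lambda n: 2 * n + 4
g = lambda n: n
print(witnesses_big_o(f, g, c=3, n0=4))   # c = 3 satisfies the bound
print(witnesses_big_o(f, g, c=2, n0=4))   # c = 2 does not: 2n < 2n + 4
```

Note that $c = 2$ fails no matter how large we let $n$ grow, which is why we need the freedom to pick a bigger constant.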
Big-O analysis considers only the "order" of a function (whether it is constant, linear, quadratic, exponential, and so on) rather than the full definition of the function.
What is the Big-O of the following functions? Try to think of each answer before seeing it.
$f(n) = 12 \cdot n^2 + 20$
$f(n) = 3 \cdot n^2 + 4 \cdot n + 15$
$f(n) = 5 \cdot n^2 + 2^n + 1000$
So to simplify a function with Big-O notation, find the term that grows the fastest for large values of $n$, and drop its coefficient.
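We can see why the fastest-growing term is the only one that matters by computing each term's share of the total. A small sketch, using the third example above, $f(n) = 5 \cdot n^2 + 2^n + 1000$ (the function name `term_shares` and the sample values of $n$ are just illustrative):

```python
def term_shares(n):
    """Return each term's fraction of f(n) = 5n^2 + 2^n + 1000."""
    quadratic = 5 * n**2
    exponential = 2**n
    constant = 1000
    total = quadratic + exponential + constant
    return {"5n^2": quadratic / total,
            "2^n": exponential / total,
            "1000": constant / total}

# As n grows, the 2^n term takes over almost the entire value of f(n),
# which is why f(n) = O(2^n) and the other terms can be ignored.
for n in (5, 10, 20, 30):
    print(n, term_shares(n))
```

By $n = 30$, the exponential term accounts for essentially all of $f(n)$, so the quadratic and constant terms are irrelevant to how the function scales.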
Copyright © 2023 Ian Finlayson | Licensed under an Attribution-NonCommercial 4.0 International License.