
Algorithm Analysis

 

Measuring Algorithm Speed

There are usually many algorithms that solve the same problem. Oftentimes some of these algorithms are more efficient than others.

As we've discussed, the choice between different data structures also has a big effect on the efficiency of programs. For example, removing something from a linked list is a lot faster than removing something from an array. Likewise, adding to the end of a doubly-linked list is a lot faster than adding to the end of a singly-linked list.

But how much faster are these things? Today we'll look at ways of measuring how efficient different algorithms and data structures are.


 

Using Execution Time

One way of measuring how fast an algorithm is is to actually time it. We can write multiple programs, compile them, and run them to see which one runs faster.

For example, we can write one program that adds a number of integers to the end of a singly-linked list, and another program that adds them to the end of a doubly-linked list.

If we try running these programs, we'll see the effect of using a singly vs. doubly-linked list. If we only add a few elements, the programs run so fast that it makes little difference. When we store a lot of data, however, the difference becomes clear:

Number of Elements | Singly-Linked | Doubly-Linked
-------------------|---------------|--------------
100                | 0.07 s        | 0.08 s
1,000              | 0.09 s        | 0.07 s
10,000             | 0.22 s        | 0.09 s
50,000             | 3.40 s        | 0.11 s
100,000            | 13.31 s       | 0.12 s
200,000            | 56.06 s       | 0.14 s
500,000            | 18 m          | 0.27 s

As you can see, the difference is enormous: the singly-linked list takes longer and longer as the number of elements grows, while the doubly-linked list's time grows much more slowly. With the largest input size, the singly-linked list took about 4000 times longer.

Timing programs can give a very clear idea of how efficient a given program is. But there are some major drawbacks:

  1. The times depend on the particular machine, compiler, and language used, not just on the algorithm itself.
  2. We have to fully write, compile, and run the programs before we can compare them.
  3. The results only tell us about the specific input sizes we actually tested.

 

Order of Growth

An alternative way to quantify the efficiency of an algorithm is to express it in terms of how much more work we have to do for larger and larger inputs. The algorithm to add a node at the end of a singly-linked list is given below:

  1. Create the new node with the right data.
  2. Set the new node's next field to NULL.
  3. If the list is empty, set the head to the new node and return.
  4. Search through the list from the head for the last node:
    1. Check if this node is last.
    2. If not, move onto the next node.
  5. Set the last node's next field to the new node.
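
Here is a rough sketch of what this could look like in Java, assuming a simple Node class with data and next fields and a head reference kept by the list class (these names are just for illustration):

// assumes: a Node class with int data and Node next fields, and a head field in the list class
void addToEnd(int data) {
    // 1. create the new node with the right data
    Node node = new Node();
    node.data = data;

    // 2. set the new node's next field to null
    node.next = null;

    // 3. if the list is empty, the new node becomes the head
    if (head == null) {
        head = node;
        return;
    }

    // 4. search through the list from the head for the last node
    Node current = head;
    while (current.next != null) {
        current = current.next;
    }

    // 5. set the last node's next field to the new node
    current.next = node;
}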

If we have an empty list, this algorithm will not need to loop at all looking for the end, so the algorithm will have 4 steps. If we have 10 items, the algorithm will loop through all 10 items to find the end, and have 24 steps (20 for the loop and 4 for the others). If we have 100 items, the algorithm will loop through all 100 to find the end, and have 204 steps. With 1000 items, the algorithm will have 2004 steps.

We could write a function to give us the number of steps the algorithm takes as a function of the size of the linked list like this:

$steps(n) = 2 \cdot n + 4$

Now we can say how many steps are needed for any input size. However, this is more precise than we actually need to be. In algorithm analysis, we are just looking for a rough idea of how many steps an algorithm takes.

In this case, when we start having more and more nodes, it should be clear that the loop dominates the algorithm. The fact that there are 4 extra steps doesn't really matter.

Likewise, the fact that there are two steps in the loop doesn't really matter. When we turn this algorithm into a program, the loop could have more or fewer Java instructions (for example, you can sometimes do things in one line of code or two, depending on how you write it). Likewise, the compiler can turn a Java instruction into any number of machine instructions.

So in algorithm analysis, we will just say this algorithm takes $O(n)$ steps. That comes from removing the coefficient of 2, and the constant of 4. What this tells us is that the number of steps scales linearly with the size of the linked list.

A graph with input size on the x axis and the number of steps on the y axis; the graph depicts a diagonal line, illustrating $O(n)$ growth.

 

Big-O Notation

Big-O notation is how algorithm efficiency is expressed. It basically expresses how the runtime of an algorithm scales with the input. In Big-O notation, the algorithm above is expressed as $O(n)$.

The $O$ stands for "order of" and the $n$ is how big the input to the algorithm is. So with an input size of $n$ (the number of things in the linked list), this algorithm takes around $n$ steps to complete.

Big-O lets us capture the most important aspect of an algorithm without getting lost in details like how many seconds it takes to run, or figuring out exactly how many machine code instructions it needs to accomplish some task.

The algorithm for adding to the end of a doubly linked list is given below:

  1. Make the new node with the given data.
  2. Set the node's next to NULL.
  3. If the list is empty:
    1. Set the head to node.
    2. Set the tail to node.
    3. Set node's prev to NULL.
  4. Otherwise:
    1. Set node's prev to tail.
    2. Set tail's next to node.
    3. Set tail to node.
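
Again as a rough Java sketch, this time assuming the Node class also has a prev field and the list keeps both head and tail references (names for illustration only):

// assumes: a Node class with int data, Node next, and Node prev fields, plus head and tail fields
void addToEnd(int data) {
    // 1. make the new node with the given data
    Node node = new Node();
    node.data = data;

    // 2. set the node's next to null
    node.next = null;

    if (head == null) {
        // 3. empty list: the new node is both the head and the tail
        head = node;
        tail = node;
        node.prev = null;
    } else {
        // 4. otherwise, link it in after the current tail
        node.prev = tail;
        tail.next = node;
        tail = node;
    }
}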

There is no loop in this algorithm. It will always take the same number of steps no matter how big or small the list is. Again, the exact number doesn't matter. Whether it's 6 instructions, 5 instructions, or 25 instructions, the number won't change based on how big the list is. This means that the algorithm is constant time. This is expressed as $O(1)$ in Big-O notation.

A graph with input size on the x axis and the number of steps on the y axis; the graph depicts a horizontal line, illustrating $O(1)$.

The smaller the complexity, the better. So $O(1)$ is better than $O(n)$. Note, however, that for small values of $n$, an $O(n)$ algorithm could actually run faster. For example, adding to an empty singly-linked list could be faster than adding to an empty doubly-linked list. However, what happens with larger input sizes is generally all we care about.


 

Definition of Big-O

Big-O notation is great because it gives us a precise way of being vague. We want to be somewhat vague because we don't want to waste time figuring out exactly how many clock cycles an algorithm takes to run (which also depends on the language, compiler, and machine).

But we want to be vague in a precise way. Ignoring constants is fine, and so is ignoring coefficients, but we want to capture the essence of how the work we need to do scales up to different input sizes.

This section will talk about the mathematical definition of Big-O, which is as follows:

If $f(n)$ and $g(n)$ are functions, then:

$f(n) = O(g(n))$

Is true if there is some constant value $c$ that we can multiply $g(n)$ by, such that $c \cdot g(n)$ is greater than or equal to $f(n)$ for all sufficiently large values of $n$.

For example, take our function from before:

$f(n) = 2 \cdot n + 4$

Then we can say that:

$f(n) = O(n)$

Because there is some value $c$ (for example 3) where, for sufficiently large values of $n$, $3 \cdot n$ is greater than $2 \cdot n + 4$.
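
We can check this directly with $c = 3$:

$3 \cdot n \geq 2 \cdot n + 4 \iff n \geq 4$

So once $n$ is at least 4, $c \cdot g(n)$ is at least as big as $f(n)$, which is exactly what the definition requires.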

Big-O analysis basically only considers the "order" of a function such as whether it is constant, linear, quadratic or exponential, rather than the full definition of the function.


 

Examples

What is the Big-O of the following functions? Try to think of each answer before seeing it.

  1. $f(n) = 12 \cdot n^2 + 20$

  2. $f(n) = 3 \cdot n^2 + 4 \cdot n + 15$

  3. $f(n) = 5 \cdot n^2 + 2^n + 1000$
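
Working them out: in the first, the $12 \cdot n^2$ term dominates, giving $O(n^2)$. In the second, $3 \cdot n^2$ grows faster than $4 \cdot n$, so it is also $O(n^2)$. In the third, $2^n$ eventually outgrows $5 \cdot n^2$, so it is $O(2^n)$.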

So basically, to simplify a function with Big-O notation, you find the term which is biggest for large values of $n$ and drop its coefficient.


 

Finding the Order of an Algorithm

There are a couple of rules for finding the order of an algorithm:

  1. If you do two parts in sequence (one after the other), then you add them together.
  2. If you do something inside of a loop, then you multiply it by how many times the loop runs.

For example, the following method finds the biggest item in an array:


int max(int[] array) {
    int max_so_far = array[0];

    for(int i = 1; i < array.length; i++) {
        if(max_so_far < array[i]) {
            max_so_far = array[i];
        }
    }

    return max_so_far;
}

So for this one, we have three parts in sequence, so we will add them together. Then we need to multiply the steps done in the loop by how many times the loop will run (which is the length of the array - 1).

If we count the first and last instructions as 1 step each, and the body of the loop as 3 (the if statement, the incrementing of i, and the check if we continue), then we have:

$f(n) = 3 \cdot n + 2$

Which is simplified to:

$O(n)$

Most times we won't bother to count the steps at all when doing this. We just find the loops and use them to determine the order. If there are no loops (and no calls to methods that themselves loop), then it must be $O(1)$.


 

More Examples

The following algorithm prints a multiplication table of a given size:


void printMultTable(int size) {
    for(int i = 0; i <= size; i++) {
        for(int j = 0; j <= size; j++) {
            System.out.print(i * j + " ");
        }
        System.out.print("\n");
    }
}

This has nested loops. Following the rule that we have to multiply what's done in the loop by how many times it executes, this algorithm is $O(n^2)$. This is because the print statement takes a constant amount of time, and we multiply this by $n$ for one loop, and then another $n$ for the other loop. In total that's some constant multiplied by $n^2$ which simplifies to just $O(n^2)$.
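
Counting roughly: if the work inside the inner loop takes some constant number of steps $c$, and each loop runs $n + 1$ times, then

$f(n) \approx c \cdot (n + 1)^2 = c \cdot n^2 + 2 \cdot c \cdot n + c$

which is $O(n^2)$ once the lower-order terms and the coefficient are dropped.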


How about this algorithm to resize an array?

  1. Make a new, resized array.
  2. Copy the elements of the old array into the new one.
  3. Reset the old array to point at the new one.

This is trickier since this is written as pseudo-code instead of Java. However, thinking about it, you should agree that the only loop required is in step 2, which needs to go through the whole array once, making this $O(n)$.
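
As a rough Java sketch (the field name array and the method's parameter here are just assumptions for illustration):

// assumes: the old data is stored in an int[] field named array
void resize(int newSize) {
    // 1. make a new, resized array
    int[] newArray = new int[newSize];

    // 2. copy the elements of the old array into the new one
    //    (this loop is what makes the algorithm O(n))
    for (int i = 0; i < array.length && i < newSize; i++) {
        newArray[i] = array[i];
    }

    // 3. reset the old array variable to point at the new one
    array = newArray;
}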


Lastly, how about this algorithm to push onto an array stack?


// push on a new value
public void push(Type item) {
    top++;
    array[top] = item;
}

Because there are no loops, this must be $O(1)$.


 

Comparison of Complexities

So far we have seen algorithms with $O(1)$, $O(n)$, and $O(n^2)$ complexity. By the way, "complexity" refers to how the number of steps taken scales with the input size. Larger Big-O values mean greater complexity.

There are other possible Big-O complexities, some of which we will see over the course of the semester. Of course, the lower the complexity, the less time our algorithms will need for large inputs.

The following figure shows a few common complexities compared:

Comparison of common complexities: a graph showing curves for several common complexities, with the higher-order ones rising more steeply.
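
From best to worst, some of the most common complexities are $O(1)$, $O(\log n)$, $O(n)$, $O(n \log n)$, $O(n^2)$, and $O(2^n)$.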

We will see many more examples of algorithm analysis through the semester!

Copyright © 2019 Ian Finlayson | Licensed under a Creative Commons Attribution 4.0 International License.