
# Analysis of insertion sort

Like selection sort, insertion sort loops over the indices of the array. It just calls insert on the elements at indices $1,2,3,\dots ,n-1$. Just as each call to indexOfMinimum took an amount of time that depended on the size of the sorted subarray, so does each call to insert. Actually, the word "does" in the previous sentence should be "can," and we'll see why.
Let's take a situation where we call insert and the value being inserted into a subarray is less than every element in the subarray. For example, if we're inserting 0 into the subarray [2, 3, 5, 7, 11], then every element in the subarray has to slide over one position to the right. So, in general, if we're inserting into a subarray with $k$ elements, all $k$ might have to slide over by one position. Rather than counting exactly how many lines of code we need to test an element against a key and slide the element, let's agree that it's a constant number of lines; let's call that constant $c$. Therefore, it could take up to $c\cdot k$ lines to insert into a subarray of $k$ elements.
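To make that cost concrete, here is a sketch of an insert routine (modeled on the usual version of this helper; the `slides` counter and the demo array are illustrative additions, not part of the lesson's graded code) that counts how many elements slide:

```javascript
// Sketch: insert `value` into the sorted subarray array[0..rightIndex],
// returning how many elements had to slide one position to the right.
// The slide count is the k that makes the cost roughly c * k.
var insert = function(array, rightIndex, value) {
    var slides = 0;
    var j;
    for (j = rightIndex; j >= 0 && array[j] > value; j--) {
        array[j + 1] = array[j];  // slide one element to the right
        slides++;
    }
    array[j + 1] = value;
    return slides;
};

// Inserting 0 into the sorted subarray [2, 3, 5, 7, 11]; the last slot
// is scratch space that the shifted elements grow into.
var a = [2, 3, 5, 7, 11, 0];
var slides = insert(a, 4, 0);
// slides is 5: all k = 5 elements slid over; a is now [0, 2, 3, 5, 7, 11]
```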
Suppose that upon every call to insert, the value being inserted is less than every element in the subarray to its left. When we call insert the first time, $k=1$. The second time, $k=2$. The third time, $k=3$. And so on, up through the last time, when $k=n-1$. Therefore, the total time spent inserting into sorted subarrays is
$c\cdot 1+c\cdot 2+c\cdot 3+\cdots +c\cdot \left(n-1\right)=c\cdot \left(1+2+3+\cdots +\left(n-1\right)\right)$ .
That sum is an arithmetic series, except that it goes up to $n-1$ rather than $n$. Using our formula for arithmetic series, we get that the total time spent inserting into sorted subarrays is
$c\cdot \left(n-1+1\right)\left(\left(n-1\right)/2\right)=c{n}^{2}/2-cn/2$ .
Using big-Θ notation, we discard the low-order term $cn/2$ and the constant factors $c$ and 1/2, getting the result that the running time of insertion sort, in this case, is $\mathrm{\Theta }\left({n}^{2}\right)$.
Can insertion sort take less than $\mathrm{\Theta }\left({n}^{2}\right)$ time? The answer is yes. Suppose we have the array [2, 3, 5, 7, 11], where the sorted subarray is the first four elements, and we're inserting the value 11. Upon the first test, we find that 11 is greater than 7, and so no elements in the subarray need to slide over to the right. Then this call of insert takes just constant time. Suppose that every call of insert takes constant time. Because there are $n-1$ calls to insert, if each call takes time that is some constant $c$, then the total time for insertion sort is $c\cdot \left(n-1\right)$, which is $\mathrm{\Theta }\left(n\right)$, not $\mathrm{\Theta }\left({n}^{2}\right)$.
Can either of these situations occur? Can each call to insert cause every element in the subarray to slide one position to the right? Can each call to insert cause no elements to slide? The answer is yes to both questions. A call to insert causes every element to slide over if the key being inserted is less than every element to its left. So, if every element is less than every element to its left, the running time of insertion sort is $\mathrm{\Theta }\left({n}^{2}\right)$. What would it mean for every element to be less than the element to its left? The array would have to start out in reverse sorted order, such as [11, 7, 5, 3, 2]. So a reverse-sorted array is the worst case for insertion sort.
How about the opposite case? A call to insert causes no elements to slide over if the key being inserted is greater than or equal to every element to its left. So, if every element is greater than or equal to every element to its left, the running time of insertion sort is $\mathrm{\Theta }\left(n\right)$. This situation occurs if the array starts out already sorted, and so an already-sorted array is the best case for insertion sort.
What else can we say about the running time of insertion sort? Suppose that the array starts out in a random order. Then, on average, we'd expect that each element is less than half the elements to its left. In this case, on average, a call to insert on a subarray of $k$ elements would slide $k/2$ of them. The running time would be half of the worst-case running time. But in asymptotic notation, where constant coefficients don't matter, the running time in the average case would still be $\mathrm{\Theta }\left({n}^{2}\right)$, just like the worst case.
What if you knew that the array was "almost sorted": every element starts out at most some constant number of positions, say 17, from where it's supposed to be when sorted? Then each call to insert slides at most 17 elements, and the time for one call of insert on a subarray of $k$ elements would be at most $17\cdot c$. Over all $n-1$ calls to insert, the running time would be $17\cdot c\cdot \left(n-1\right)$, which is $\mathrm{\Theta }\left(n\right)$, just like the best case. So insertion sort is fast when given an almost-sorted array.
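You can check all three scenarios empirically. Here is a sketch of a standard in-place insertion sort instrumented to count slides (`countingInsertionSort` is an illustrative name, not from the lesson; the slide count is the dominant cost discussed above):

```javascript
// Sketch: insertion sort that returns the total number of element slides.
function countingInsertionSort(array) {
    var slides = 0;
    for (var i = 1; i < array.length; i++) {
        var value = array[i];
        var j = i - 1;
        while (j >= 0 && array[j] > value) {
            array[j + 1] = array[j];  // one slide
            slides++;
            j--;
        }
        array[j + 1] = value;
    }
    return slides;
}

// Worst case: reverse sorted, n = 5, so 1+2+3+4 = n(n-1)/2 = 10 slides.
var worst = countingInsertionSort([11, 7, 5, 3, 2]);
// Best case: already sorted, so no slides at all.
var best = countingInsertionSort([2, 3, 5, 7, 11]);
// Almost sorted: each element at most 1 position off, so at most n-1 slides.
var almost = countingInsertionSort([3, 2, 5, 4, 7, 6]);
// worst is 10, best is 0, almost is 3
```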
To sum up the running times for insertion sort:
• Worst case: $\mathrm{\Theta }\left({n}^{2}\right)$.
• Best case: $\mathrm{\Theta }\left(n\right)$.
• Average case for a random array: $\mathrm{\Theta }\left({n}^{2}\right)$.
• "Almost sorted" case: $\mathrm{\Theta }\left(n\right)$.
If you had to make a blanket statement that applies to all cases of insertion sort, you would have to say that it runs in $O\left({n}^{2}\right)$ time. You cannot say that it runs in $\mathrm{\Theta }\left({n}^{2}\right)$ time in all cases, since the best case runs in $\mathrm{\Theta }\left(n\right)$ time. And you cannot say that it runs in $\mathrm{\Theta }\left(n\right)$ time in all cases, since the worst-case running time is $\mathrm{\Theta }\left({n}^{2}\right)$.

This content is a collaboration of Dartmouth Computer Science professors Thomas Cormen and Devin Balkcom, plus the Khan Academy computing curriculum team. The content is licensed CC-BY-NC-SA.

## Want to join the conversation?

• I am not able to understand this situation: "say 17, from where it's supposed to be when sorted?" How can the running time be 17⋅c⋅(n−1) in this situation? Please make it clear.
• Basically, it is saying:
- Suppose the insert function performs at most 17 slides each time it is called (because the array is almost sorted).
- Each slide costs a constant c, and we perform at most 17 of them per insert, so the cost of one insert is at most 17 * c.
- The insertion sort function calls the insert function on each of the n-1 elements (we get the 1st element for free), so:
cost of an insert * number of inserts = (17 * c) * (n-1)

Hope this makes sense
• Why won't my code check out?

var insert = function(array, rightIndex, value) {
    for (var j = rightIndex; j > 0 && array[j - 1] > value; j--) {
        array[j] = array[j - 1];
    }
    array[j] = value;
};

var insertionSort = function(array) {
    for (var i = 0; i < array.length; i++) {
        insert(array, i, array[i]);
    }
};

var array = [22, 11, 99, 88, 9, 7, 42];
insertionSort(array);
println("Array after sorting: " + array);
Program.assertEqual(array, [7, 9, 11, 22, 42, 88, 99]);
• You shouldn't modify functions that they have already completed for you, i.e. insert() , if you want to pass the challenges. If you change the other functions that have been provided for you, the grader won't be able to tell if your code works or not (It is depending on the other functions to behave in a certain way).
• can the best case be written as big omega of n and worst case be written as big o of n^2 in insertion sort?
• Thank you for this awesome lecture. Would it be possible to include a section for "loop invariant"?
• Loop invariants are really simple (but finding the right invariant can be hard):
They are statements that hold before the loop begins and after each iteration of the loop (and therefore after the loop ends).

For insertion sort we have the loop invariant:
"After the kth iteration, elements a[0] to a[k] are sorted"

Before we start the first loop (we have performed 0 iterations), k=0, thus this is true:
"After iteration 0, elements a[0] to a[0] are sorted" i.e. the first element is trivially sorted

In the first loop we insert a[1] in order on a[0].
After we complete the loop the first time (k=1) this is true:
"After iteration 1, elements a[0] to a[1] are sorted"

In the second loop we insert a[2] in order on a[0] to a[1]
After we complete the loop the second time (k=2) this is true:
"After iteration 2, elements a[0] to a[2] are sorted"

...
In the kth loop we insert a[k] in order on a[0] to a[k-1]
After we complete the loop the kth time this is true:
"After iteration k, elements a[0] to a[k] are sorted"

...
On the last loop, ( the (n-1)th loop for an array with n elements),
we insert a[n-1] in order on a[0] to a[n-2]
After we complete the loop the (n-1)th time this is true:
"After iteration n-1, elements a[0] to a[n-1] are sorted"

So the loop invariant proves that insertion sort actually sorts the array, because it tells us that after the last iteration elements a[0] to a[n-1] are sorted
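As an illustrative sketch (assuming a standard in-place insertion sort, not any particular graded version; `insertionSortChecked` and `prefixSorted` are hypothetical names), you can assert the invariant after every iteration:

```javascript
// Helper: is array[0..k] sorted? This is exactly the loop invariant.
function prefixSorted(array, k) {
    for (var i = 1; i <= k; i++) {
        if (array[i - 1] > array[i]) { return false; }
    }
    return true;
}

// Insertion sort that checks the invariant after the i-th iteration:
// "after iteration i, elements array[0] to array[i] are sorted".
function insertionSortChecked(array) {
    for (var i = 1; i < array.length; i++) {
        var value = array[i];
        var j = i - 1;
        while (j >= 0 && array[j] > value) {
            array[j + 1] = array[j];
            j--;
        }
        array[j + 1] = value;
        if (!prefixSorted(array, i)) {
            throw new Error("invariant broken after iteration " + i);
        }
    }
    return array;
}

var checked = insertionSortChecked([22, 11, 99, 88, 9, 7, 42]);
// checked is [7, 9, 11, 22, 42, 88, 99] and no invariant check fired
```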
• Can we make a blanket statement that insertion sort runs in Ω(n) time?
• Yes, you could. In the best case (the array is already sorted), insertion sort is Ω(n). You can't possibly run faster than the lower bound of the best case, so you could say that insertion sort is Ω(n) in ALL cases.
• What about Omega? Can I say that insertion sort is Ω(n), or is it constant, Ω(n^0) = Ω(1)?

Because in some cases, if it is sorted, we only need to add it at the end, and this is the least possible

Big-Ω (Big-Omega) notation
• There is no single answer. It depends on the scenario i.e. best case, worst case, average case, etc.

Recall that:
By definition, if f(n) is Big Theta(g(n)) then f(n) is Big O(g(n)) and f(n) is Big Omega(g(n)).

Since the running time of insertion sort, in the best case, is Big Theta(n), it is also Big Omega(n).

By similar logic, it is:
- Big Omega(n) in the "almost sorted case".
- Big Omega(n^2) in the worst case
- Big Omega(n^2) in the average case
• I don't understand how it's O(n^2) instead of just O(n); I think I got confused when we turned the arithmetic sum into this equation:
c⋅(n−1+1)((n−1)/2) = cn^2/2 − cn/2

I inferred from the text that it takes for granted some mathematical theorem that I have no idea of. Is that what's happening? Thanks!
• Could we also say the runtime is O(n) best case? Since best is when the array is already sorted, so it will take no longer than linear time.
• Yes.

By definition, if f(n) is Big Theta(g(n)) then f(n) is Big O(g(n)) and f(n) is Big Omega(g(n)).

Since the running time of insertion sort, in the best case, is Big Theta(n), it is also O(n).
• Not sure why the following code does not work. I keep getting the "A function is taking too long..." message. Any help?
var insert = function(array, rightIndex, value) {
    for (var j = rightIndex; j >= 0 && array[j] > value; j--) {
        array[j + 1] = array[j];
    }
    array[j + 1] = value;
};

var insertionSort = function(array) {
    for (var i = 1; i < array.length; i++) {
        insert(array, i, array[i]);
    }
};

var array = [22, 11, 99, 88, 9, 7, 42];
insertionSort(array);
• The insertionSort function has a mistake in the insert statement (Check the values of arguments that you are passing into it).

When your insertionSort function reaches the end of your array its call to insert is adding a copy of the last element of the array onto the end of itself. This causes the array to increase in size, which makes the condition in the for loop true. It repeats this forever until you get the error message.

Good Luck
• c * (n-1+1)((n-1)/2) = cn^2/2 - cn/2

Can someone explain to me how the formula is derived?
For example: why (n-1+1) and why divide by 2?
• (n-1+1)((n-1)/2) is the sum of the series of numbers from 1 to n-1. It uses the standard arithmetic series formula.

Here's the derivation:
If S is the sum of the numbers 1 to k:
S = 1 + 2 + 3 + ... + k-2 + k-1 + k
And then in reverse
S = k + k-1 + k-2 + ... + 3 + 2 + 1
If we add them we get:
2S = (1+k)+(2+k-1)+(3+k-2)+...+(k-2+3)+(k-1+2)+(k+1)
2S = (1+k) + (1+k) + (1+k) ... k times
2S = (1+k) * k
So:
S= (1+k)/2 * k
Or
S= (first term + last term)/2 * number of terms
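As a quick sanity check of S = (1+k)/2 * k, here is a sketch that compares the formula against a brute-force sum (`seriesSum` is an illustrative name):

```javascript
// Brute-force sum 1 + 2 + ... + k.
function seriesSum(k) {
    var s = 0;
    for (var i = 1; i <= k; i++) { s += i; }
    return s;
}

// Verify the closed form S = (1 + k) * k / 2 for k = 0..100.
// With k = n - 1, this is the n(n-1)/2 total slide count from the article.
var formulaHolds = true;
for (var k = 0; k <= 100; k++) {
    if (seriesSum(k) !== (1 + k) * k / 2) { formulaHolds = false; }
}
// formulaHolds stays true; e.g. seriesSum(4) is 10, matching (1+4)*4/2
```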