Algorithm               Best-case      Worst-case                 Expected time   Average time   In place?
Bucket sort                            Θ(n²)                      Θ(n)
Build-Max-Heap                         Θ(n)
Comparison-based sort                  Ω(n log n)
Counting sort                          Θ(n + k)
Heap-Extract-Max                       O(log n)
Heap-Increase-Key                      O(log n)
Heap-Maximum                           Θ(1)
Heapsort                               O(n log n)
Insertion sort          Θ(n)           Θ(n²)
Max-Heap-Insert                        O(log n)
Merge sort                             O(n log n)
Partition                              Θ(n), with n = r − p + 1
Quicksort               Θ(n log n)     Θ(n²)                      O(n log n)      O(n log n)
Quicksort                                                         O(n log n)
Radix sort                             Θ(d(n + k))
CHAPTER 1 – THE ROLE OF ALGORITHMS IN COMPUTING
1.1 – ALGORITHMS
An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and
produces some value, or set of values, as output. It is a sequence of computational steps that transform the
input into the output (much like a recipe for baking a cake). An algorithm is said to be correct if, for every input
instance, it halts with the correct output. The only requirement is that the specification must provide a precise
description of the computational procedure to be followed.
An instance of a problem consists of the input needed to compute a solution to the problem.
There are two characteristics that are common to many interesting algorithms:
1. There are many candidate solutions.
2. There are practical applications.
A data structure is a way to store and organize data in order to facilitate access and modifications.
It is unknown whether or not efficient algorithms exist for NP-complete problems. If an efficient algorithm
exists for any one of them, then efficient algorithms exist for all of them. A small change to the problem
statement can cause a big change to the efficiency of the best known algorithm.
1.2 – ALGORITHMS AS TECHNOLOGY
Computing time is a bounded resource and so is space in memory. Total system performance depends on
choosing efficient algorithms as much as on choosing fast hardware. Algorithms are at the core of most
technologies used in contemporary computers. It is at larger problem sizes that the differences in efficiencies
between algorithms become particularly prominent.
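The effect of problem size can be made concrete with a rough sketch. The cost models below (2n² instructions for a quadratic algorithm, 50·n·log₂ n for an n log n one) are illustrative assumptions, not figures from the text; the point is only that the lower-order-of-growth algorithm wins once n is large, even with a much bigger constant factor:

```python
import math

def quadratic_cost(n):
    # Assumed cost model for a quadratic algorithm: ~2 * n^2 steps.
    return 2 * n ** 2

def nlogn_cost(n):
    # Assumed cost model for an n log n algorithm with a large constant: ~50 * n * log2(n) steps.
    return 50 * n * math.log2(n)

for n in (100, 10 ** 6):
    print(n, quadratic_cost(n), nlogn_cost(n))
```

For small n the quadratic algorithm is cheaper; by n = 10⁶ the n log n algorithm is faster by several orders of magnitude.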
CHAPTER 2 – GETTING STARTED
We will examine insertion sort and merge sort and analyse their running time. The analysis introduces a
notation that focuses on how that time increases with the number of items to be sorted.
2.1 – INSERTION SORT
It solves the sorting problem. The numbers that we wish to sort are also known
as the keys. Insertion sort is an efficient algorithm for sorting a small number of
elements. It works the way many people sort a hand of playing cards. Note: at
all times, the cards held in the left hand are sorted, and these cards were
originally the top cards of the pile on the table. The input numbers are sorted in
place: the numbers are rearranged within the array A.
A[j] indicates the “current card”. Subarray A[1…j-1] constitutes the currently sorted hand: these are the
elements originally in positions 1 through j-1, but now in sorted order. Subarray A[j+1…n] corresponds to the
pile of cards still on the table.
Loop invariant (note: it looks like induction with a base case and an inductive step):
• Initialization: it is true prior to the first iteration of the loop.
• Maintenance: if it is true before an iteration of the loop, it remains true before the next iteration.
• Termination: when the loop terminates, the invariant gives us a useful property that helps show that
the algorithm is correct.
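The procedure and its loop invariant can be sketched in Python (a 0-indexed adaptation of the text’s 1-indexed pseudocode; the assert checks the invariant at the start of each iteration):

```python
def insertion_sort(A):
    """Sort list A in place and return it."""
    for j in range(1, len(A)):
        # Loop invariant: the subarray A[0..j-1] is in sorted order.
        assert A[:j] == sorted(A[:j])
        key = A[j]              # the "current card"
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]     # shift larger elements one position right
            i -= 1
        A[i + 1] = key          # insert the card into its place in the hand
    return A
```

Initialization holds because A[0..0] is trivially sorted; maintenance holds because each iteration inserts A[j] into its correct position; at termination j = n, so the invariant says the whole array is sorted.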
Note: compound data are typically organized into objects, which are composed of attributes or fields. A
particular field is accessed using the field name followed by the name of its object in square brackets:
length[A]. Also: short circuiting → only evaluate the expression y if x evaluates to False in the expression “x or
y”.
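Short circuiting can be demonstrated in Python (the helper y and the calls list are hypothetical, purely for illustration):

```python
calls = []

def y():
    # Record that this operand was actually evaluated.
    calls.append("evaluated")
    return True

a = True or y()    # left operand is truthy: y() is never called
b = False or y()   # left operand is falsy: y() runs
print(calls)       # only the second expression evaluated y
```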
2.2 – ANALYSING ALGORITHMS
Analysing an algorithm has come to mean predicting the resources that the algorithm requires. Most often we
want to measure computational time. The RAM model contains instructions commonly found in real
computers: arithmetic (add, subtract, etc.), data movement (load, store, copy) and control (return, etc.). Each
such instruction takes a constant amount of time.
The time taken by Insertion Sort depends on the input. It can take different amounts of time to sort two input
sequences of the same size depending on how nearly sorted they already are. → It is traditional to describe the
running time of a program as a function of the size of its input. Input size is often the number of items in the
input. The running time is the number of primitive operations or “steps” executed. One line may take a
different amount of time than another line, but we shall assume that each execution of the i-th line takes time
cᵢ, which is a constant. We can write this in a simpler notation (using Θ, O, etc.). See figure in 2.1 for the running
time of insertion sort. Even for inputs of a given size, an algorithm’s running time may depend on which input
of that size is given, e.g. best case for Insertion Sort is when the array is already sorted → then, the running
time is a linear function of n. If the input is reverse ordered, the running time is a quadratic function of n.
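One way to see the linear-vs-quadratic contrast is to count the element shifts insertion sort performs (a sketch; the shift count stands in for the dominant term of the running time, and the constants are whatever the loop happens to do):

```python
def insertion_sort_shifts(A):
    """Run insertion sort on a copy of A and return the number of shifts."""
    A = list(A)
    shifts = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            shifts += 1
            i -= 1
        A[i + 1] = key
    return shifts

n = 100
print(insertion_sort_shifts(range(n)))          # already sorted: 0 shifts
print(insertion_sort_shifts(range(n, 0, -1)))   # reverse sorted: n(n-1)/2 shifts
```

On sorted input the inner while loop never fires (the algorithm still does one comparison per element, so the total time is linear, not constant); on reverse-sorted input every pair is shifted, giving the quadratic n(n−1)/2.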
It is the rate of growth of the running time that really interests us. We therefore consider only the leading term
of a formula. We also ignore the leading term’s constant coefficient. We usually consider one algorithm to be
more efficient than another if its worst-case running time has a lower order of growth (may not be the case for
very small input sizes).
2.3 – DESIGNING ALGORITHMS
Insertion sort uses an incremental approach. An alternative is the divide-and-conquer approach, which is
typically followed by useful algorithms that are recursive in structure. They break the problem into several
subproblems that are similar to the original problem but smaller in size.
• Divide the problem into a number of subproblems.
• Conquer the subproblems by solving them recursively.
• Combine the solutions to the subproblems into the solution for the original problem.
Merge sort operates as follows:
• Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.
• Conquer: sort the two subsequences recursively using merge sort.
• Combine: merge the two sorted subsequences to produce the sorted answer.
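These three steps can be sketched as follows (an out-of-place Python version that splits with slices and merges into a new list, rather than the index-based Merge(A, p, q, r) procedure of the text):

```python
def merge(left, right):
    """Merge two sorted lists, like the two face-up card piles in the text."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # take the smaller of the two top cards
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])               # one pile is empty: take the rest of the other
    out.extend(right[j:])
    return out

def merge_sort(A):
    if len(A) <= 1:                    # base case: 0 or 1 elements is already sorted
        return list(A)
    mid = len(A) // 2                  # divide into two halves of ~n/2 elements
    return merge(merge_sort(A[:mid]),  # conquer each half recursively,
                 merge_sort(A[mid:]))  # then combine the sorted halves
```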
The key operation is the merging of two sorted sequences in the “combine” step: 𝑀𝑒𝑟𝑔𝑒(𝐴, 𝑝, 𝑞, 𝑟) such that
𝑝 ≤ 𝑞 < 𝑟. The procedure assumes that the subarrays A[p…q] and A[q+1…r] are in sorted order. It merges
them to form a single sorted subarray that replaces the current subarray A[p…r]. It takes time Θ(𝑛) where n =
r − p + 1. Compare it with two piles of cards facing up on a table. You compare the two top cards and
remove the smaller one. We repeat this step until one input pile is empty, at which time we just take the