Algorithms (2ILC0) Summary Q1 2021
Contents
Part I: Techniques for optimization
    Back tracking
    Greedy algorithms
    Dynamic programming
Part II: Graph algorithms
    Shortest paths
    Max flow
    Applications of flow
Part III: Selected topics
    NP-hardness
Extra
    Short overview of graph algorithms
    Researchers overview
Part I: Techniques for optimization
In an optimization problem, each instance has (possibly) multiple valid solutions. The goal is to find an optimal solution: in a minimization problem we associate a cost with every solution and look for a minimum-cost solution; in a maximization problem we associate a profit with every solution and look for a maximum-profit solution. Examples of optimization problems are the Traveling Salesman Problem, the Knapsack Problem, and Linear Programming.
Back tracking
Just try all solutions. This can be applied to almost all problems, but it gives very slow algorithms. Try all options for the first choice, and for each option recursively make the remaining choices. It can be sped up using pruning.
Example 1: Traveling Salesman Problem (TSP)
Given are n cities and the (non-negative) distances between them; the goal is to find the shortest tour that visits all cities and returns to the starting city.
Input: matrix Dist[1..n, 1..n], where Dist[i, j] = distance from city i to city j
Output: a permutation of {1, ..., n} such that visiting the cities in that order gives a minimum-length tour
Backtracking for TSP: The first city is city 1 (w.l.o.g.). Try all remaining cities as the next city, and for each such option recursively try all ways to finish the tour. In other words, each recursive call keeps track of a) the choices we already made (= the part of the tour fixed in earlier calls) and b) the choices we still need to make (= the remaining cities, whose visiting order still has to be decided). When all choices have been made: compute the length of the tour and compare it to the length of the shortest tour found so far. Thus the parameters of the algorithm are a) and b), renamed to R and S respectively, giving TSP_BruteForce1(R, S).
Note: this algorithm computes the length of an optimal tour, not the tour itself.
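A minimal Python sketch of this brute-force scheme (the list representation of R and S, the 0-indexed distance matrix, and the helper tour_length are illustration choices, not the course's pseudocode):

# Brute-force TSP as described above: R = cities whose visiting order is already
# fixed (starting with city 1, here index 0), S = cities still to be placed.
def tsp_brute_force(Dist, R, S):
    if not S:
        # All choices made: compute the length of the complete tour.
        return tour_length(Dist, R)
    best = float("inf")
    for i, city in enumerate(S):
        # Fix 'city' as the next city and recursively finish the tour.
        best = min(best, tsp_brute_force(Dist, R + [city], S[:i] + S[i + 1:]))
    return best

def tour_length(Dist, tour):
    # Length of the closed tour: visit 'tour' in order and return to tour[0].
    n = len(tour)
    return sum(Dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

# Example with 4 cities; city 0 plays the role of "city 1" in the summary.
Dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(tsp_brute_force(Dist, [0], [1, 2, 3]))  # prints 23, the optimal tour length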
The calculations of the algorithm can be shown using a branching tree.
Using linked lists for R and S, where n_R is the size of R and n_S the size of S, the running time satisfies T(n_R, n_S) = n_S · (O(1) + T(n_R + 1, n_S − 1)) with T(n_R, 0) = O(n_R). Solving the recurrence gives T(n_R, n_S) = O((n_R + n_S) · n_S!); since initially n_R = 1 and n_S = n − 1, the total running time is O(n · (n − 1)!) = O(n!).
There are two conceptually different ways to speed up the algorithm:
1. Use better ‘bookkeeping’ data structures to search faster through the recursion tree,
e.g. maintain a new variable lengthSoFar (= the length of the initial part of the tour given by A[1..k]). New running time: O((n − 1)!), so still very slow.
2. Use ‘pruning’ to search through a smaller recursion tree,
e.g. in addition to 1), do not recurse if the initial part cannot lead to an optimal tour (see the sketch below).
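A sketch of both speed-ups combined, in Python (lengthSoFar is the summary's variable; passing bestSoFar as a parameter instead of keeping it in a global variable is an assumption made here):

# Backtracking for TSP with bookkeeping (length_so_far) and pruning: stop exploring
# a branch as soon as the fixed initial part is already at least as long as the best
# complete tour found so far (safe because all distances are non-negative).
def tsp_backtrack(Dist, R, S, length_so_far, best_so_far):
    if length_so_far >= best_so_far:
        return best_so_far          # pruning: this branch cannot improve the optimum
    if not S:
        # All cities placed: close the tour by returning to the starting city.
        return min(best_so_far, length_so_far + Dist[R[-1]][R[0]])
    for i, city in enumerate(S):
        best_so_far = tsp_backtrack(Dist, R + [city], S[:i] + S[i + 1:],
                                    length_so_far + Dist[R[-1]][city], best_so_far)
    return best_so_far

# Usage with the Dist matrix from the previous sketch:
# tsp_backtrack(Dist, [0], [1, 2, 3], 0, float("inf"))  # also returns 23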
Example 2: 1-dimensional clustering
Given are X = a set of n numbers (points in 1D) and a parameter k; the goal is to find a partitioning of X into k nonempty clusters of minimum cost. The cost of a single cluster is the sum of the distances of its points to the cluster average; the cost of the total clustering is the sum of the costs of its clusters.
> Same backtracking structure as for TSP (also regarding the parameters); faster with pruning.
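To illustrate the cost function just defined, a small Python sketch (representing a clustering as a list of lists is an assumption made for illustration):

# Cost of a single cluster: sum of distances of its points to the cluster average.
def cluster_cost(cluster):
    avg = sum(cluster) / len(cluster)
    return sum(abs(x - avg) for x in cluster)

# Cost of a total clustering: sum of the costs of its clusters.
def clustering_cost(clusters):
    return sum(cluster_cost(c) for c in clusters)

# Example: X = {1, 2, 8, 9, 20} partitioned into k = 3 clusters.
print(clustering_cost([[1, 2], [8, 9], [20]]))  # 1.0 + 1.0 + 0.0 = 2.0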
Greedy algorithms
Construct the solution iteratively, always making the choice that seems best. This can be applied to only a few problems, but it gives fast algorithms: only try the option that seems best for the first choice (the greedy choice), then recursively make the remaining choices.
Steps for greedy algorithms:
1. Try to discover the structure of optimal solutions: what properties do optimal solutions have?
a) What are the choices that need to be made?
b) Do we have optimal substructure?
Optimal solution = first choice + optimal solution for the subproblem(s)
c) Do we have the greedy-choice property for the first choice?
2. Prove that optimal solutions indeed have these properties: prove optimal substructure and the greedy-choice property (see Example 1 for the general structure of a greedy-choice proof).
3. Use these properties to design an algorithm and prove its correctness: proof by induction (possible because of optimal substructure).
Example 1: Activity-Selection
Input: a set A = {a_1, ..., a_n} of n activities; each a_i has a start time start(a_i) and an end time end(a_i).
Valid solution: any subset of non-overlapping activities.
Optimal solution: a valid solution with the maximum number of activities.
Greedy approach:
1a) We need to choose the first activity in the optimal solution, then the second, etc.
1b/2) There is optimal substructure: optimal solution = optimal first activity + optimal selection from the activities that do not overlap the first activity. This is proven directly from the definitions.
1c/2) We have the greedy-choice property: we can select the first activity “greedily” and still get an optimal solution, namely by selecting the activity that ends first. This lemma is proven according to the general structure for greedy-choice proofs:
- take an arbitrary optimal solution OPT
- if OPT contains the greedy choice, then we are done
- else modify OPT so that it contains the greedy choice, without decreasing the quality of the solution, or derive a contradiction
This general structure comes with standard text that can be reused in the proof of any greedy-choice property; only the modification step itself is problem-specific.
Lemma: Let 𝑎𝑖 be an activity in 𝐴 that ends first. Then there is an optimal solution to the
Activity-Selection Problem for 𝐴 that includes 𝑎𝑖 .
Proof: Let OPT be an optimal solution for A. If OPT includes a_i, then the lemma obviously holds, so assume OPT does not include a_i. We will show how to modify OPT into a solution OPT* such that (i) OPT* is a valid solution, (ii) OPT* includes a_i, and (iii) size(OPT*) ≥ size(OPT), i.e. quality(OPT*) ≥ quality(OPT). Thus OPT* is an optimal solution including a_i, and so the lemma holds. To modify OPT we proceed as follows.
Let a_k be the activity in OPT ending first, and let OPT* = (OPT \ {a_k}) ∪ {a_i}. Then OPT* includes a_i and size(OPT*) = size(OPT). We have end(a_i) ≤ end(a_k) by definition of a_i. Moreover, since a_k ends first in OPT, every activity in OPT \ {a_k} starts at or after end(a_k) ≥ end(a_i), so a_i cannot overlap any activity in OPT \ {a_k}. Hence, OPT* is a valid solution.
3) Thus we get a greedy algorithm that is correct by induction on |A|, using the optimal substructure and greedy-choice property. Its running time is O(n²) if implemented naively; after sorting on finishing time, a more clever implementation does the greedy pass in O(n) time.
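A minimal Python sketch of this greedy algorithm, sorted on finishing time (the (start, end) tuple representation, and the convention that an activity may start exactly when the previous one ends, are assumptions made here):

# Greedy Activity-Selection: sort by end time, then repeatedly take the first
# activity that does not overlap the last one selected.
def select_activities(activities):
    selected = []
    last_end = float("-inf")
    for start, end in sorted(activities, key=lambda a: a[1]):  # O(n log n) sort
        if start >= last_end:  # greedy choice: earliest-ending compatible activity
            selected.append((start, end))
            last_end = end
    return selected            # the greedy pass itself takes O(n) time

# Example:
print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# -> [(1, 4), (5, 7), (8, 11)]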