Experimental studies
Limitations of experimental analysis:
1. Experimental running times of two algorithms are difficult to compare directly unless the
experiments are performed on the same hardware and in the same software environment;
2. Experiments can be done only on a limited set of test inputs;
3. The algorithm must be fully implemented;
Theoretical analysis: to capture the order of growth of an algorithm’s running time, we will
associate, with each algorithm, a function 𝑓(𝑛) that characterizes the number of primitive
operations that are performed as a function of the input size 𝑛. We will characterize running times
in terms of the worst case, as a function of the input size, 𝑛, of the algorithm.
The constant function f(n) = c, the logarithm function f(n) = log n (base 2), the linear function
f(n) = n, the n-log-n function f(n) = n log n (base 2), the quadratic function f(n) = n², the cubic
function f(n) = n³, the exponential function f(n) = bⁿ.
Two useful summations:
∑_{i=1}^{n} i = 1 + 2 + 3 + ⋯ + (n − 1) + n = n(n + 1)/2
∑_{i=0}^{n} aⁱ = 1 + a + a² + ⋯ + aⁿ⁻¹ + aⁿ = (aⁿ⁺¹ − 1)/(a − 1), for a ≠ 1
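A minimal sketch checking both closed forms against direct loops (the class and method names are illustrative, not from the notes):

```java
// Checks the two closed-form sums from the notes against direct loops.
public class SumFormulas {
    // 1 + 2 + ... + n, computed by a loop.
    static long linearSum(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    // 1 + a + a^2 + ... + a^n, computed by a loop (a != 1).
    static long geometricSum(long a, int n) {
        long s = 0, term = 1;
        for (int i = 0; i <= n; i++) { s += term; term *= a; }
        return s;
    }

    public static void main(String[] args) {
        int n = 100;
        // Closed form: n(n + 1)/2
        System.out.println(linearSum(n) == (long) n * (n + 1) / 2);
        // Closed form: (a^(n+1) - 1)/(a - 1); for a = 3, n = 5 both give 364.
        System.out.println(geometricSum(3, 5) == (729 - 1) / (3 - 1));
    }
}
```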
Asymptotic analysis
Big-O notation → upper bounds.
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there is a real constant c > 0 and an
integer constant n₀ ≥ 1 such that f(n) ≤ c · g(n) for all n ≥ n₀.
Big-Ω notation → lower bounds.
Given functions f(n) and g(n), we say that f(n) is Ω(g(n)) if there are positive constants c and n₀
such that f(n) ≥ c · g(n) for all n ≥ n₀.
Big-Θ notation → tight bounds.
Given functions f(n) and g(n), we say that f(n) is Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)).
Big-Oh proof:
1. State what to prove
→ “We have to show there exist c, n₀ > 0 such that: f(n) ≤ c · g(n) for all n ≥ n₀”
2. Simplify the inequality.
3. Choose the constants accordingly.
→ “c = ⋯ and n₀ = ⋯ satisfy the inequality.”
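As a worked example of this recipe (the function 7n − 2 is chosen arbitrarily for illustration), a big-Oh proof could look like:

```latex
% Claim: f(n) = 7n - 2 is O(n).
% 1. We have to show there exist c, n_0 > 0 such that
%    7n - 2 \le c \cdot n for all n \ge n_0.
% 2. Simplify: 7n - 2 \le 7n holds for all n \ge 1.
% 3. Choose c = 7 and n_0 = 1; these satisfy the inequality.
\begin{align*}
7n - 2 &\le 7n         && \text{for all } n \ge 1 \\
       &= c \cdot n    && \text{with } c = 7,\; n_0 = 1
\end{align*}
```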
Establish the time complexity of an algorithm:
1. State the size of the input 𝑛.
2. Count the number of operations:
• T_algorithm(n) = “number of primitive operations performed for an input of size n”
• We use constants for groups of operations (since the big-Oh notation discards
constants). You should indicate what each constant means.
3. Determine the complexity in Big-Oh notation.
4. Prove the complexity in big-Oh notation.
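A minimal sketch of steps 1–3 on a hypothetical example (the method and constant names are illustrative): for an array of size n, the loop body below runs n times, so T(n) = c₁ + c₂ · n, which is O(n).

```java
// Hypothetical example for counting primitive operations.
public class CountOps {
    // T(n) = c1 + c2 * n, where:
    //   c1 = constant work outside the loop (initialization, return)
    //   c2 = constant work per loop iteration (comparison, addition, increment)
    static int sum(int[] a) {
        int s = 0;            // part of c1
        for (int x : a) {     // c2 work, performed n times
            s += x;
        }
        return s;             // part of c1
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}
```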
Recursion
• Base case(s)
o Situation in which no recursive calls are made;
o There should be at least one base case (multiple base cases are also possible);
o Every possible chain of recursive calls must eventually reach a base case;
• Recursive case
o Contains a call to the recursive method;
o Each recursive call should make progress towards a base case;
Linear recursion: recursive case contains at most one recursive call.
Binary recursion: recursive case makes two recursive calls.
Multiple recursion: recursive case makes more than two recursive calls.
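A minimal sketch of the first two kinds, using the standard factorial and Fibonacci examples (not taken from the notes):

```java
public class RecursionKinds {
    // Linear recursion: the recursive case makes at most one recursive call.
    static long factorial(int n) {
        if (n == 0) return 1;             // base case
        return n * factorial(n - 1);      // one recursive call, progress toward n = 0
    }

    // Binary recursion: the recursive case makes two recursive calls.
    static long fib(int n) {
        if (n <= 1) return n;             // base cases: fib(0) = 0, fib(1) = 1
        return fib(n - 1) + fib(n - 2);   // two recursive calls
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
        System.out.println(fib(10));      // prints 55
    }
}
```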
The binary search algorithm
• Input: a sorted array and a number you want to find, let’s call this 𝑥;
• Approach: compare 𝑥 with the middle element. If you have not found it, proceed to the left
or right depending on the value of 𝑥 (bigger or smaller than the middle?).
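The approach above can be sketched as a linearly recursive method (a sketch, assuming the array is sorted in ascending order; it returns −1 when x is absent):

```java
public class BinarySearch {
    // Returns an index of x in sorted array a, or -1 if x is absent.
    static int search(int[] a, int x, int lo, int hi) {
        if (lo > hi) return -1;              // base case: empty range, not found
        int mid = lo + (hi - lo) / 2;        // middle element (overflow-safe)
        if (a[mid] == x) return mid;         // found it
        if (x < a[mid])
            return search(a, x, lo, mid - 1); // x is smaller: go left
        return search(a, x, mid + 1, hi);     // x is bigger: go right
    }

    static int search(int[] a, int x) {
        return search(a, x, 0, a.length - 1);
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(search(a, 7));  // prints 3
        System.out.println(search(a, 4));  // prints -1
    }
}
```

Each call halves the range, so the recurrence is T(n) = c + T(n/2), giving O(log n).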
Time complexity of recursive algorithms
1. State the size of the input 𝑛.
2. State the recurrence equation by counting operations
T(0) = c₀
T(n) = c₁ + T(n − 1) if n > 0 → recurrence relation
3. Establish the closed form of the recurrence equation: 𝑇(𝑛) = ”a formula that does not
involve 𝑇”.
a. Method 1: take an educated guess and prove the correctness of your guess.
b. Method 2: determine the closed form by repeatedly unfolding 𝑇.
4. Determine the complexity in big-Oh notation.
5. Prove the complexity in big-Oh notation.
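Unfolding the example recurrence gives T(n) = c₁ + T(n − 1) = 2c₁ + T(n − 2) = ⋯ = n · c₁ + T(0), so the closed form is T(n) = c₀ + c₁ · n. A small sketch that checks this closed form against the recurrence (the constants c₀ = 2, c₁ = 3 are arbitrary choices for illustration):

```java
public class Unfolding {
    static final long C0 = 2, C1 = 3; // example constants, chosen arbitrarily

    // Direct evaluation of the recurrence T(0) = c0, T(n) = c1 + T(n - 1).
    static long t(int n) {
        return n == 0 ? C0 : C1 + t(n - 1);
    }

    // Closed form obtained by unfolding: T(n) = c0 + c1 * n.
    static long closed(int n) {
        return C0 + C1 * (long) n;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 20; n++)
            if (t(n) != closed(n)) throw new AssertionError("mismatch at n = " + n);
        System.out.println("closed form matches the recurrence");
    }
}
```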
By the definition it is correct that a linear algorithm is O(2ⁿ), but that is a useless answer; we are
always interested in the tightest complexity in big-Oh.
Space complexity
Time complexity: the number of operations that is executed.
Space complexity: the amount of memory that is needed.
S_algorithm(n) = “maximum amount of memory needed at any point in the algorithm for an input of
size n” → once we have such a function, we can use the techniques we have seen so far to compare
algorithms.
Memory in Java:
• The call stack: method arguments, local variables.
• The heap: arrays, objects.
• Primitive types are copied when calling a method, references to data on the heap are
copied when calling a method.
In a binary recursive function we have, for example, T(0) = c₀, T(n) = c₁ + 2 · T(n/2) and
S(0) = c₀, S(n) = c₁ + S(n/2). We do not have 2 · S(n/2) here, because after the first recursive call
has returned, its stack frame has been popped, so the space has been freed up for the second
recursive call.
→ Remember: space can be reused, time cannot!
Space complexity will never be higher than the time complexity, because all space needs to be
allocated, which takes time.
A version of a program that uses a loop has a better space complexity than an otherwise identical
version that uses recursion, because the recursive version pushes a stack frame for every recursive call.
Can we make sure that recursion consumes less space?
• In many languages: Yes, tail recursion.
o Tail call: a method call performed as the final action of a method.
o If all recursive calls of a method are tail calls, the method is tail recursive.
o Because a tail call is the method’s final action, after it the current method’s stack frame
(containing the method parameters and local variables) is no longer needed.
o The language implementation can therefore reuse the current method’s stack frame when
performing a tail call.
• In Java: No.
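A minimal sketch contrasting the two versions (method names are illustrative): the accumulator-style sum below is tail recursive, but since Java does not eliminate tail calls, it still uses O(n) stack space; the loop is what a tail-call-optimizing compiler would effectively produce.

```java
public class TailSum {
    // Tail-recursive sum of 1..n: the recursive call is the final action,
    // with the running total carried in an accumulator parameter.
    // Java does NOT reuse the stack frame, so this is still O(n) space.
    static long sumRec(int n, long acc) {
        if (n == 0) return acc;         // base case: accumulator holds the result
        return sumRec(n - 1, acc + n);  // tail call
    }

    // Equivalent loop: O(1) space.
    static long sumLoop(int n) {
        long acc = 0;
        for (; n > 0; n--) acc += n;
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumRec(100, 0)); // prints 5050
        System.out.println(sumLoop(100));   // prints 5050
    }
}
```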
Arrays
Arrays testing for equality
§ Comparing references: array1 == array2, array1.equals(array2)
§ Comparing elements: java.util.Arrays.equals(array1, array2)
o Uses == for primitives and array1[k].equals(array2[k]) for objects.
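A small sketch showing the difference between the two kinds of comparison:

```java
import java.util.Arrays;

public class ArrayEquality {
    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {1, 2, 3};
        System.out.println(a == b);              // false: two distinct array objects
        System.out.println(a.equals(b));         // false: Object.equals also compares references
        System.out.println(Arrays.equals(a, b)); // true: compares the elements
    }
}
```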
Add element e to array1 at index i:
§ Shift the n − i non-null elements array1[i] through array1[n − 1] forward by one position.
§ Set the element at index i of array1 to e.
Remove the element at index i of array1:
§ Shift the n − i − 1 non-null elements array1[i + 1] through array1[n − 1] backward by one position.
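The two shifting operations can be sketched as follows (a sketch, assuming n is the number of elements currently stored and the array has a free slot when adding; names are illustrative):

```java
public class ShiftOps {
    // Insert e at index i: shift a[i..n-1] one position to the right, then store e.
    // Assumes n < a.length so slot a[n] is free. O(n - i) moves.
    static void add(int[] a, int n, int i, int e) {
        for (int k = n; k > i; k--) a[k] = a[k - 1]; // shift forward
        a[i] = e;
    }

    // Remove the element at index i: shift a[i+1..n-1] one position to the left.
    // O(n - i - 1) moves.
    static void remove(int[] a, int n, int i) {
        for (int k = i; k < n - 1; k++) a[k] = a[k + 1]; // shift backward
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 0};  // n = 3, one free slot at the end
        add(a, 3, 1, 15);
        System.out.println(java.util.Arrays.toString(a)); // [10, 15, 20, 30]
        remove(a, 4, 0);
        System.out.println(java.util.Arrays.toString(a)); // [15, 20, 30, 30]
    }
}
```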
Insertion sort on arrays
§ For 𝑘 from 1 to 𝑛 − 1 do
§ Insert 𝑎[𝑘] at its proper location within 𝑎[0], 𝑎[1], … , 𝑎[𝑘]
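The two steps above can be sketched as (a minimal in-place version; the shifting loop moves larger elements right until the proper location for a[k] is found):

```java
public class InsertionSort {
    static void sort(int[] a) {
        for (int k = 1; k < a.length; k++) { // a[0..k-1] is already sorted
            int cur = a[k];                  // element to insert
            int j = k;
            while (j > 0 && a[j - 1] > cur) {
                a[j] = a[j - 1];             // shift larger element one slot right
                j--;
            }
            a[j] = cur;                      // insert at its proper location
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```

Worst case (reverse-sorted input) the inner loop runs k times for each k, giving 1 + 2 + ⋯ + (n − 1) = n(n − 1)/2 shifts, i.e. O(n²).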
The clone() function makes a shallow copy of an array. Therefore, when an array stores objects,
the elements are copied by reference.
Deep copy: array2 should contain identical copies of the objects in array1 (copied by value).
§ The class of the objects has to implement the interface Cloneable and override the clone() function.
§ Create array2 and copy the elements one by one into array2 by invoking clone() on each.
Array expansion is done by copying the elements into a larger array (O(n) time and space).
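A minimal sketch of array expansion (class and method names are illustrative; java.util.Arrays.copyOf does the same copy internally):

```java
import java.util.Arrays;

public class Expansion {
    // Expand a full array to double its capacity: O(n) time and space.
    static int[] grow(int[] a) {
        int[] bigger = new int[2 * a.length]; // allocate a larger array
        for (int i = 0; i < a.length; i++)    // copy the elements over
            bigger[i] = a[i];
        return bigger;                        // the old array becomes garbage
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = grow(a);
        System.out.println(b.length);            // prints 6
        System.out.println(Arrays.toString(b));  // [1, 2, 3, 0, 0, 0]
    }
}
```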
Singly linked lists
A sequence of nodes starting from a head pointer. Tail is last node of the list; its next reference is
null. Traversal: moving through the list following nodes’ next references.
Each node stores an element and a reference to the next node in the list.
Insertion at the head: create new node, insert new node, update head.
Insertion at the tail: create new node, find tail by traversing the list, update tail’s next, update tail.
Insertion at the tail (efficient): keep reference to tail (in addition to head).
Remove head: update head to current head’s next.
Remove tail: find last but one node by traversing, update tail, set tail’s next to null.
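The operations above can be sketched as follows (a minimal singly linked list keeping both head and tail references, so that addFirst, addLast, and removeFirst are all O(1); names are illustrative):

```java
public class SinglyLinked<E> {
    private static class Node<E> {
        E element;
        Node<E> next;          // reference to the next node, null for the tail
        Node(E e, Node<E> n) { element = e; next = n; }
    }

    private Node<E> head, tail;
    private int size;

    // Insertion at the head: create new node, insert it, update head. O(1).
    void addFirst(E e) {
        head = new Node<>(e, head);
        if (size == 0) tail = head;
        size++;
    }

    // Insertion at the tail (efficient, using the tail reference). O(1).
    void addLast(E e) {
        Node<E> node = new Node<>(e, null);
        if (size == 0) head = node;
        else tail.next = node;   // update old tail's next
        tail = node;             // update tail
        size++;
    }

    // Remove head: update head to the current head's next. O(1).
    E removeFirst() {
        E e = head.element;
        head = head.next;
        if (--size == 0) tail = null;
        return e;
    }

    int size() { return size; }

    public static void main(String[] args) {
        SinglyLinked<Integer> list = new SinglyLinked<>();
        list.addLast(2);
        list.addLast(3);
        list.addFirst(1);                        // list is now 1, 2, 3
        System.out.println(list.removeFirst());  // prints 1
        System.out.println(list.size());         // prints 2
    }
}
```

Removing the tail remains O(n) here, since finding the last-but-one node requires a traversal.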
Circularly linked lists
Singly linked list, with tail’s next pointing to head, rather than null (no explicit head).
Rotate: updates the tail by following its next reference (implicit head).
Insert at head: create new node with next reference pointing to tail’s next, update tail’s next to
point to the new node.
Insert at tail: insert at head, rotate.
Remove head: get an explicit pointer to the head. If it is identical to the tail, there is only one node, so
set tail to null. Otherwise, set tail’s next to head’s next.
Remove tail: get an explicit pointer to the node before the tail. If it is identical to the tail, there is only
one node, so set tail to null. Otherwise, set that node’s next to tail’s next (the head), and update tail
to be that node.
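The circular operations can be sketched as follows (a minimal version keeping only a tail reference, with tail.next as the implicit head; names are illustrative):

```java
public class Circular<E> {
    private static class Node<E> {
        E element;
        Node<E> next;
        Node(E e, Node<E> n) { element = e; next = n; }
    }

    private Node<E> tail;  // tail.next is the implicit head
    private int size;

    // Rotate: the old head becomes the new tail. O(1).
    void rotate() {
        if (tail != null) tail = tail.next;
    }

    // Insert at head: new node's next points to tail's next (the old head). O(1).
    void addFirst(E e) {
        if (size == 0) {
            tail = new Node<>(e, null);
            tail.next = tail;               // single node points to itself
        } else {
            tail.next = new Node<>(e, tail.next);
        }
        size++;
    }

    // Insert at tail: insert at head, then rotate. O(1).
    void addLast(E e) {
        addFirst(e);
        tail = tail.next;
    }

    // Remove head. O(1).
    E removeFirst() {
        Node<E> head = tail.next;
        if (head == tail) tail = null;      // only one node
        else tail.next = head.next;
        size--;
        return head.element;
    }

    public static void main(String[] args) {
        Circular<String> c = new Circular<>();
        c.addLast("a");
        c.addLast("b");
        c.addLast("c");
        System.out.println(c.removeFirst()); // prints a
        c.rotate();                          // b moves to the back
        System.out.println(c.removeFirst()); // prints c
    }
}
```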