AI P&T lecture notes
Complexity Theory

Many computational problems in artificial intelligence and computer science are intractable: they are
difficult to solve given reasonable bounds on the available computation time and memory space.
This intractability is due to the nature of these problems, not because we haven't found
smart algorithms to solve them yet. The field of theoretical computer science that aims to capture
this notion of intractability is computational complexity theory.

There are three sorts of computational problems:

- Search problems
- Optimization problems
- Decision problems (the sort complexity theory is typically interested in)

When comparing or analysing algorithms, we are typically only interested in the order of growth of
the running time as a function of the input size, and we discard constants, lower-order terms,
et cetera. We use big O notation (which expresses an asymptotic upper bound) for this. Examples (made concrete in the sketch after this list):

- O(1) – constant time – the running time is independent of the size of the input.
- O(n^c) – polynomial time – for c = 1, 2, 3 also known as linear, quadratic, and cubic time.
- O(c^n) – exponential time – c is a constant strictly larger than 1.
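
To make these growth rates concrete, here is a minimal Python sketch (not part of the original notes; the function names are illustrative only) that counts loop iterations as "elementary instructions":

```python
# Illustrative only: each function's running time matches the stated class.

def constant_time(xs):
    # O(1): looks at a fixed number of elements, regardless of len(xs)
    return xs[0] if xs else None

def linear_time(xs):
    # O(n): a single pass over the input
    total = 0
    for x in xs:
        total += x
    return total

def quadratic_time(xs):
    # O(n^2): a full pass over the input for every element
    pairs = 0
    for _ in xs:
        for _ in xs:
            pairs += 1
    return pairs

def exponential_time(n):
    # O(2^n): one step for every subset of an n-element set
    count = 0
    for _ in range(2 ** n):
        count += 1
    return count
```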

Next to the O(·) upper-bound notation, there is a similarly defined lower-bound notation:

A function f is Ω(g(n)) if there are constants c > 0 and n0 ≥ 1 such that c·g(n) ≤ f(n) for all n ≥ n0.

When a function f is both upper-bounded and lower-bounded by g modulo some constants (i.e., f
grows 'as fast' as g), we use the following notation:

A function f is Θ(g(n)) if there are constants c1, c2 > 0 and n0 ≥ 1 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for
all n ≥ n0.
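
A worked example (not from the notes, using the definitions above): take f(n) = 3n^2 + 5n and g(n) = n^2. Then 3·g(n) ≤ f(n) ≤ 4·g(n) for all n ≥ 5, since 5n ≤ n^2 whenever n ≥ 5; hence f is Θ(n^2) with the constants c1 = 3, c2 = 4 and n0 = 5.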

We define the worst-case, best-case, and average-case complexity of an algorithm as follows (illustrated with a concrete example after this list):

- The algorithm uses at most f_w(n) elementary instructions for every possible input of size n ↔ the algorithm has worst-case complexity f_w(n)
- The algorithm uses at least f_b(n) elementary instructions for every possible input of size n ↔ the algorithm has best-case complexity f_b(n)
- The algorithm uses on average f_a(n) elementary instructions over all possible inputs of size n ↔ the algorithm has average-case complexity f_a(n)
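
As a hedged illustration (not from the notes), linear search makes the three measures concrete; the function below is a hypothetical example:

```python
def linear_search(xs, target):
    """Return the index of target in xs, or -1 if it is absent."""
    for i, x in enumerate(xs):   # each iteration is O(1) elementary work
        if x == target:
            return i
    return -1

# Best case:    target == xs[0]            -> f_b(n) is constant (O(1))
# Worst case:   target not in xs           -> f_w(n) = n comparisons (O(n))
# Average case: target equally likely at   -> f_a(n) is about n/2 comparisons,
#               any position                  still O(n)
```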
P is the class of decision problems D that are solvable in polynomial time. More formally, P is the
class of decision problems D : I → {yes, no} for which there exists an algorithm with worst-case time
complexity O(|i|^c) (for a constant c) that can decide whether D(i) = yes or D(i) = no for every instance
i ∈ I.
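
As an illustration that is not part of the notes, graph reachability ("is there a path from s to t?") is a decision problem in P: the breadth-first decider below (hypothetical names) runs in time O(|V| + |E|), which is polynomial in the instance size.

```python
from collections import deque

def reachable(adj, s, t):
    """Decide whether t can be reached from s in the directed graph adj,
    where adj maps each node to a list of its successors."""
    seen = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True            # yes-instance
        for v in adj.get(u, []):
            if v not in seen:      # every node and edge is handled at most once
                seen.add(v)
                queue.append(v)
    return False                   # no-instance

# Example instance: a path a -> b -> c exists, but d is unreachable from a.
print(reachable({"a": ["b"], "b": ["c"], "c": [], "d": []}, "a", "c"))  # True
print(reachable({"a": ["b"], "b": ["c"], "c": [], "d": []}, "a", "d"))  # False
```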

NP is the class of decision problems D for which there exists a polynomial-time verifiable proof for yes-instances D(i_yes). More formally, NP is the class of decision problems D : I → {yes, no} for which there exists an algorithm with worst-case time complexity O(|i_yes|^c) (for a constant c) that can verify that D(i_yes) = yes for every yes-instance i_yes ∈ I_yes.

co-NP is the class of decision problems D for which there exists a polynomial-time verifiable proof for no-instances D(i_no). More formally, co-NP is the class of decision problems D : I → {yes, no} for which there exists an algorithm with worst-case time complexity O(|i_no|^c) (for a constant c) that can verify that D(i_no) = no for every no-instance i_no ∈ I_no.

A decision problem is in P if there is an algorithm that can decide it in polynomial time; it is in NP if
there is an algorithm that can verify yes-instances in polynomial time, given a so-called certificate (an
actual solution, or other information that suffices as proof of being a yes-instance). We only need to
show that such an algorithm exists, and this is typically done with a constructive proof: we show that
an algorithm exists by actually constructing one.
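
A minimal sketch (not from the notes) of such a verification algorithm for CNF-Sat: the certificate is a truth assignment, and checking it against the formula takes time polynomial in the size of the formula. The clause encoding below is an assumption made for illustration only.

```python
def verify_cnf_sat(clauses, assignment):
    """Polynomial-time verifier for CNF-Sat.

    clauses: a list of clauses, each a list of literals such as "x1" or "not x1".
    assignment: dict mapping variable names to True/False (the certificate).
    Returns True iff the assignment satisfies every clause, i.e. iff the
    certificate proves that the instance is a yes-instance.
    """
    for clause in clauses:
        satisfied = False
        for literal in clause:
            if literal.startswith("not "):
                value = not assignment[literal[4:]]
            else:
                value = assignment[literal]
            if value:
                satisfied = True
                break
        if not satisfied:
            return False    # a falsified clause means the certificate is rejected
    return True

# Example: (x1 OR not x2) AND (x2 OR x3), certified by x1=True, x2=False, x3=True.
print(verify_cnf_sat([["x1", "not x2"], ["x2", "x3"]],
                     {"x1": True, "x2": False, "x3": True}))  # True
```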

If one could show that a decision problem can be verified in polynomial time but cannot be decided in
polynomial time, one would effectively have proven that P ≠ NP.

For two decision problems D1 and D2 we say that D1 reduces to D2 if there exists an algorithm A that
transforms any instance i1 of D1 into an instance i2 = A(i1) of D2 such that i2 is a yes-instance for D2 if
and only if i1 is a yes-instance for D1. We furthermore say the reduction is a polynomial-time
reduction if the algorithm A runs in polynomial time. The notation D1 ≤_m^p D2 means that D1 is
reducible in polynomial time to D2. The 'm' in this notation can be ignored for now.

If D1 ≤_m^p D2 holds, then the following statements hold (the first is illustrated in the sketch after this list):

- If D2 can be decided in polynomial time, then so can D1
- If D1 cannot be decided in polynomial time, then D2 cannot either
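
The first statement can be seen directly: composing the transformation A with a polynomial-time decider for D2 yields a polynomial-time decider for D1. The sketch below uses hypothetical function names to make this explicit.

```python
def decide_D1(i1, A, decide_D2):
    """If A is a polynomial-time reduction from D1 to D2 and decide_D2 runs in
    polynomial time, this composition decides D1 in polynomial time, because
    A preserves yes/no answers: D1(i1) = D2(A(i1))."""
    i2 = A(i1)            # transform the D1-instance into a D2-instance
    return decide_D2(i2)  # the answer carries over unchanged
```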

In general, to show that a problem D1 polynomial-time reduces to a problem D2 by mathematical
proof, all of the following proof steps have to be explicated (a concrete example is sketched after this list):

1. Describe an algorithm A that transforms instances for D1 into instances for D2, possibly using an example.
2. Consider an arbitrary instance i1 for D1. Prove that i2 = A(i1) is a yes-instance for D2 if and only if i1 is a yes-instance for D1. The 'if and only if' has two sub-steps:
   a. Assume i1 is a yes-instance for D1. Show that then i2 is also a yes-instance for D2 (i.e., give a convincing argument).
   b. Assume i2 is a yes-instance for D2. Show that then i1 is also a yes-instance for D1 (i.e., give a convincing argument). Or, as an alternative to this sub-step: assume i1 is a no-instance for D1 and show that then i2 is also a no-instance for D2.
3. Show that A runs in polynomial time.
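
As a concrete illustration (not taken from the notes), consider the classic reduction Independent-Set ≤_m^p Vertex-Cover: a set S is independent in a graph G = (V, E) exactly when V \ S is a vertex cover, so the transformation only has to adjust the target size.

```python
def reduce_independent_set_to_vertex_cover(instance):
    """Step 1: transform an Independent-Set instance (V, E, k) into the
    Vertex-Cover instance (V, E, |V| - k).

    Step 2 (correctness): G has an independent set of size >= k iff G has a
    vertex cover of size <= |V| - k, because S is independent exactly when
    V \\ S covers every edge; so both instances have the same yes/no answer.
    Step 3 (running time): the graph is passed on unchanged and a single
    subtraction is performed, which is clearly polynomial in the input size.
    """
    vertices, edges, k = instance
    return (vertices, edges, len(vertices) - k)

# Example: a triangle has an independent set of size 1
# iff it has a vertex cover of size 2.
print(reduce_independent_set_to_vertex_cover(
    ({1, 2, 3}, {(1, 2), (2, 3), (1, 3)}, 1)))
```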

If such a polynomial-time reduction also exists in the other direction (from D2 to D1), the reduction is symmetrical.

NP-hard problems derive their name from the fact that they are at least as hard to compute as any
problem in the class NP:

- A problem P is NP-hard if for all decision problems D ∈ NP there exists a polynomial-time
reduction to P.
- A decision problem D that is both NP-hard and in NP is called NP-complete.

An NP-complete problem D is a problem in NP such that each and every (other) problem D ′ in NP can
be reduced, in polynomial time, to D.

In 1971 Stephen Cook showed that every problem in NP can be reduced in polynomial time to CNF-Sat.
In Section 4 we claimed that, even if we cannot directly prove that problems like CNF-Sat cannot
be decided in polynomial time, we can prove that if there exists a polynomial-time algorithm for one
of them, there exists one for all of them. That is because all these problems can be reduced, directly
or indirectly, from CNF-Sat using a polynomial-time reduction.

To show that your problem P is NP-hard, you must provide a polynomial-time reduction from a
known NP-hard problem Q to P.

Poole & Mackworth

3.5.2 – Depth-first search

Backtracking: the algorithm selects a first alternative at each node, and it backtracks to the next
alternative when it has pursued all of the paths from the first selection.
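
A minimal recursive sketch of depth-first search with backtracking (not taken from Poole & Mackworth's code; the names and graph encoding are assumptions):

```python
def depth_first_search(neighbors, start, is_goal, path=None):
    """Search the graph depth-first for a node satisfying is_goal.

    neighbors maps a node to the ordered list of its neighbours: the first
    alternative is always tried first, and only when that entire subtree
    fails does the search backtrack and try the next alternative.
    Returns a path (list of nodes) from start to a goal node, or None.
    """
    if path is None:
        path = [start]
    if is_goal(start):
        return path
    for nxt in neighbors.get(start, []):
        if nxt not in path:                       # avoid cycling on the current path
            result = depth_first_search(neighbors, nxt, is_goal, path + [nxt])
            if result is not None:
                return result                     # the first alternative that succeeds
    return None                                   # all alternatives failed: backtrack

# Example: find a path from 'a' to 'g' in a small graph.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["g"], "d": []}
print(depth_first_search(graph, "a", lambda n: n == "g"))   # ['a', 'c', 'g']
```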

Algorithms (including search algorithms) can be compared on

- the time taken,
- the space used, and
- the quality or accuracy of the results.

The time taken, space used, and accuracy of an algorithm are a function of the inputs to the
algorithm. Computer scientists talk about the asymptotic complexity of algorithms, which specifies
how the time or space grows with the input size of the algorithm. An algorithm has time (or space)
complexity O(f(n)) – read “big-oh of f(n)” – for input size n, where f(n) is some function of n, if there
exist constants n0 and k such that the time, or space, of the algorithm is less than k*f(n) for all n > n0.

An algorithm has time or space complexity Ω(f(n)) for input size n if there exist constants n0 and k
such that the time or space of the algorithm is greater than k*f(n) for all n > n0. An algorithm has time
or space complexity Θ(f(n)) if it has complexity O(f(n)) and Ω(f(n)). Typically, you cannot give
a Θ(f(n)) complexity for an algorithm, because most algorithms take different times for different
inputs. Thus, when comparing algorithms, one has to specify the class of problems that will be
considered.

Depth-first search is appropriate when

- space is restricted
- many solutions exist, perhaps with long paths, particularly for the case where nearly all paths
lead to a solution or
- the order in which the neighbors of a node are added to the stack can be tuned so that
solutions are found on the first try.

It is a poor method when

- it is possible to get caught in infinite paths, which occurs when the graph is infinite or when
there are cycles in the graph
- solutions exist at shallow depth, because in this case the search may explore many long paths
before finding the short solutions, or
- there are multiple paths to a node, for example, on an n×n grid where all arcs go right or
down, there are exponentially many paths from the top-left node, but only n^2 nodes.

3.8.1 – Branch and bound
