Summary Introduction to Artificial Intelligence 1, Endterm

Summary for the second part of the course Introduction to Artificial Intelligence 1 at Tilburg University. It covers the material needed for the Endterm, which accounts for 70% of the final grade. The author passed the exam with a 7.0 using only these notes.


  • May 27, 2023
  • 59 pages
  • 2022/2023
  • Summary
Lecture 4: Local Search - Deterministic Problem Solving
- How can we explore a search space if we have limited resources?
- How can we quickly find a solution that may not be optimal, but is good enough?

Search methods discussed so far:
➔ find a path through the search space,
➔ keep track of previous states.

For many problems:
➔ we only want to know the final configuration,
➔ we don’t need the path that leads to it, nor necessarily the best possible solution.



Local search algorithms
- look at neighboring states to decide what to do next;
- do not keep track of the states that have been reached;
- don’t care about the fact that there may be a better solution somewhere else.

Local search algorithms operate using a single current node (rather than multiple paths) and
generally move only to neighbors of that node.
1) they use very little memory—usually a constant amount;
2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which
systematic algorithms are unsuitable.

Local search algorithms are useful for solving pure optimization problems, in which the aim is to find
the best state according to an objective function.

The state-space landscape has both “location” (defined by the state) and “elevation” (defined by the
value of the heuristic cost function or objective function).
- If elevation corresponds to cost, then the aim is to find the lowest valley—a global minimum;
- if elevation corresponds to an objective function, then the aim is to find the highest peak—a
global maximum.
★ (You can convert from one to the other just by inserting a minus sign.) Local search
algorithms explore this landscape.
➔ A complete local search algorithm always finds a goal if one exists;
➔ an optimal algorithm always finds a global minimum/maximum.





Hill-Climbing (steepest-ascent version)

The hill-climbing search algorithm is simply a loop that continually moves in the direction of
increasing value—that is, uphill.
- It is sometimes called greedy local search because it grabs a good neighbor state without
thinking ahead about where to go next.

Imagine trying to climb a mountain:
- in the fog,
- with amnesia,
- and it is not really a mountain (the difference: you move in only one dimension).

Local maxima: a local maximum is a peak that is higher than each of its neighboring states but lower
than the global maximum
Ridges: result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.
Plateaux: a plateau is a flat area of the state-space landscape.
- It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which
progress is possible.
- A hill-climbing search might get lost on the plateau.

★ A local maximum can also be a local minimum, depending on whether you are maximizing or minimizing ;)
★ If there are two different states to move to, we go to the higher one (steepest ascent).

Hill-Climbing

➢ Keeps track of one current state (no backtracking)
➢ Does not look ahead beyond the immediate neighbors of the current state (greedy)
➢ On each iteration moves to the neighboring state with highest value (steepest ascent)
➢ Terminates when a peak is reached (no neighbor has a higher value).
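The loop described above can be sketched in Python. The landscape `f` and neighbor function `step` below are hypothetical toy examples (a 1-D landscape with a local maximum at x = 2 and a global maximum at x = 8), not part of the lecture:

```python
def hill_climb(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best
    neighbor; stop as soon as no neighbor improves on the current state."""
    while True:
        best = max(neighbors(state), key=value)
        if value(best) <= value(state):  # peak reached (greedy, no backtracking)
            return state
        state = best

# Hypothetical 1-D toy landscape: local maximum at x=2, global maximum at x=8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 16 - (x - 8) ** 2
step = lambda x: [x - 1, x + 1]

print(hill_climb(0, step, f))  # -> 2: stuck on the local maximum
```

Started at x = 0, the loop climbs to x = 2 and stops there even though x = 8 is higher, which is exactly the local-maximum problem.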

Problems
- Local maxima (from one local maximum you cannot move to another
local maximum in one dimension)
- Plateaus (flat local maximum or shoulder)
- Ridges (see figure)
➔ A sequence of local maxima that are not directly connected.
➔ Each local maximum only has worse connecting states.
➔ Common in low-dimensional state spaces.





Improvements
- Allow a limited number of sideways moves (useful when the plateau is really a shoulder, i.e. a
plateau that leads on to a local maximum): instead of stopping after a single flat step, allow
several.
➔ Higher success rate, but a higher number of moves.
- Stochastic hill climbing: choose at random among the uphill moves, with probability related
to steepness.
➔ The steepest uphill move has the highest probability of being chosen.
- First-choice hill climbing: generate successors at random until one is found that is better than
the current state, instead of inspecting all of them.
➔ Good strategy when a state has many successors, so testing all of them is costly.
- Random-restart hill climbing: run a number of hill-climbing searches from randomly selected
initial states. “If at first you don’t succeed, try, try again.”
➔ If each hill-climbing search has probability of success p, then a solution will be found on
average after 1/p restarts.
➔ It will eventually find a solution, because eventually the goal state itself will be selected as
the initial state.
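The 1/p figure is just the mean of a geometric distribution; a quick simulation (with an assumed per-restart success probability p = 0.25) illustrates it:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def restarts_until_success(p):
    """Count hill-climbing restarts until one succeeds, where each
    restart independently succeeds with probability p."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

p = 0.25  # assumed per-restart success probability
trials = [restarts_until_success(p) for _ in range(100_000)]
print(sum(trials) / len(trials))  # close to 1/p = 4
```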

Hill-Climbing
● If elevation = objective function → find the global maximum or highest peak → hill
climbing.
● If elevation = cost → find the global minimum or lowest valley → gradient descent.

Simulated Annealing
- Problem with hill climbing: efficient, but it gets stuck in a local maximum (incomplete).
- Problem with a random walk: it will eventually find the global maximum, but is extremely
inefficient (complete).
- Combining the two → simulated annealing (both more complete and efficient).

Simulated Annealing yields both efficiency and completeness
- To explain simulated annealing, we switch our point of view from hill climbing to gradient
descent (i.e., minimizing cost) and imagine the task of getting a ping-pong ball into the
deepest crevice in a bumpy surface.
➔ if we just let the ball roll, it will come to rest at a local minimum.
➔ If we shake the surface, we can bounce the ball out of the local minimum.
➔ The trick is to shake just hard enough to bounce the ball out of local minima but not
hard enough to dislodge it from the global minimum.
➔ The simulated-annealing solution is to start by shaking hard (i.e., at a high
temperature) and then gradually reduce the intensity of the shaking (i.e., lower the
temperature).
➢ Move to a randomly chosen neighbor state
➔ If utility is higher, always move to that state.
➔ If utility is lower, move to that state with probability p < 1.
➢ Probability of a move to a worse state



- Becomes less likely the worse the move makes the situation
- Becomes less likely as the temperature decreases
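A minimal sketch of this acceptance rule (maximizing utility rather than minimizing cost), reusing the hypothetical toy landscape with a local maximum at x = 2 and a global maximum at x = 8; the geometric cooling schedule and all parameter values are illustrative assumptions:

```python
import math
import random

def simulated_annealing(state, neighbors, value, t0=10.0, cooling=0.995, t_min=1e-3):
    """Simulated annealing: always accept uphill moves; accept a downhill
    move with probability exp(delta / T), which shrinks both as the move
    gets worse and as the temperature T falls."""
    t = t0
    while t > t_min:
        nxt = random.choice(neighbors(state))
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling  # geometric cooling schedule (one common choice)
    return state

# Hypothetical toy landscape: local maximum at x=2, global maximum at x=8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 16 - (x - 8) ** 2
step = lambda x: [x - 1, x + 1]

random.seed(1)
print(simulated_annealing(0, step, f))  # settles on a peak (2 or 8)
```

Early on, the high temperature makes the search close to a random walk; as T falls, it behaves more and more like hill climbing and freezes on a peak.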

Local Beam Search
The local beam search algorithm keeps track of k states rather than just one

Local Beam Search
- Selects the k best successors at every step
➔ if k = 1 → hill climbing
➔ if k ≥ 2 → a parallel hill-climbing process

Stochastic local beam search
- Selects k successors at random every step
- Probability of selection is a function of utility (aka fitness)
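A sketch of the deterministic variant under the same toy-landscape assumptions; `local_beam_search`, `f`, and `step` are illustrative names, not lecture code:

```python
import heapq

def local_beam_search(starts, neighbors, value, steps=50):
    """Local beam search with k = len(starts): pool the successors of
    all current states and keep only the k best of the pool."""
    k = len(starts)
    states = list(starts)
    for _ in range(steps):
        pool = {s for st in states for s in neighbors(st)} | set(states)
        states = heapq.nlargest(k, pool, key=value)  # keep the k best
    return states[0]  # highest-valued state found

# Hypothetical toy landscape: local maximum at x=2, global maximum at x=8.
f = lambda x: -(x - 2) ** 2 if x < 5 else 16 - (x - 8) ** 2
step = lambda x: [x - 1, x + 1]

print(local_beam_search([0, 6], step, f))  # -> 8
```

Because the k best are chosen from the pooled successors, a start near the global peak can take over the whole beam, which is what distinguishes this from k fully independent hill climbs.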


Genetic Algorithms
A genetic algorithm (or GA) is a variant of stochastic beam search in which successor states are
generated by combining two parent states rather than by modifying a single state

- Starts with k randomly selected states (population)
- Each state (or individual) is encoded as a string (commonly, a string of 0s and 1s)


Colors (each state encoded as an RGB hex string):
➔ R: FF, G: 00, B: 00 → pure red
➔ R, G, and B all equal → gray




- Each state is rated by the objective function (aka fitness function)
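A minimal GA sketch along these lines; the bit-string length, the toy fitness function (count of 1 bits), and all parameter values are illustrative assumptions:

```python
import random

def genetic_algorithm(pop_size=20, length=16, generations=60, p_mut=0.05):
    """Minimal genetic algorithm on bit strings: fitness-proportional
    parent selection, single-point crossover, per-bit mutation.
    Toy fitness: number of 1 bits in the string."""
    fitness = lambda ind: sum(ind)
    # Start with k randomly generated states (the population).
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(ind) + 1 for ind in pop]  # +1 so all-zero strings can still be picked
        nxt = []
        for _ in range(pop_size):
            a, b = random.choices(pop, weights=weights, k=2)  # two parents
            cut = random.randrange(1, length)                 # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutate bits
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

random.seed(0)
best = genetic_algorithm()
print(sum(best), best)  # fitness of the best individual found
```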



