Module 7. Deterministic Problem Solving 3
Local search
Search methods discussed so far: they find a path through the search space and, while doing that,
keep track of previously visited states. In chess, for example, it is really important to know how to
move the pieces in sequence to reach a particular state, and you also need to keep track of previous
states. Likewise, to find the shortest route, you need to know where you have been. So if you want
absolute certainty, or you want the previous states tracked, the methods we have used until now
handle those things. However, for many problems we only want to know the final configuration. We
only want to get to a particular good solution; we do not need the best possible solution, we are
satisfied with a good one. For example, when you go to work, you just want to get there in time; the
route does not have to be the shortest or the fastest, it does not have to be optimal.

Local search algorithms are called local search because we do not explore the entire search space,
but only the states in the direct neighborhood of the current state (searching in a particular
locality). Based on what is happening at those neighboring states, we decide what to do next. These
algorithms do not keep track of the states that have been reached, and they do not care that there
may be a better solution somewhere else. We can apply them whenever we are happy with a
suboptimal solution, and in return this kind of search is really fast.

In AI, the metaphor of a hill is used for local search: you are located at a particular state, from which
you can move to a number of other states. Each state has a utility, and this utility is defined by an
objective function. The aim is to get to the state with the highest possible utility (or lowest cost).
So if you are climbing a mountain and want to get to the top, the state where you are right now is
the current state, and the places you can go are the successor states. The higher you climb the
mountain, the higher your utility, at least if up is where you want to go.




In the standard picture, the states are laid out along a single axis, and each state has a single value,
its vertical "elevation". The figure is therefore a graph of a one-dimensional state space; this is the
landscape metaphor commonly used for local search in AI.

Hill-climbing algorithm
The simplest algorithm for exploring such problems is like trying to climb a mountain in the fog
while suffering from amnesia: you see nothing of what is around you beyond your immediate
surroundings, and you never remember whether where you came from was actually higher than
where you are now. The only thing you can do in that case is a method called hill climbing. What it
does is:





- keeps track of only one current state (no backtracking) and forgets all the states you have
been in before (very little memory)
- does not look ahead beyond the immediate neighbors of the current state (greedy: it
immediately takes the best value)
- on each iteration (next step) moves to the neighboring state with the highest value
(steepest ascent: choose the highest value)
- terminates when a peak is reached (no neighbor has a higher value).

When everything around you is equal to or lower than where you are
now, you stop; the algorithm never looks beyond the immediate neighbors
of the current state. Hill climbing finds the optimum if the mountain
is a single simple peak. Otherwise, hill climbing is not guaranteed
to find anything at all, except a local maximum.
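As a concrete illustration, here is a minimal Python sketch of steepest-ascent hill climbing as just described. The toy utility function and neighbor generator at the bottom are made up for the example, not taken from the lecture:

```python
def hill_climb(initial_state, neighbors, utility):
    """Steepest-ascent hill climbing: repeatedly move to the best
    immediate neighbor; stop when no neighbor improves on the
    current state (a peak, possibly only a local maximum)."""
    current = initial_state
    while True:
        best = max(neighbors(current), key=utility)  # greedy, no look-ahead
        if utility(best) <= utility(current):
            return current  # no neighbor is higher: terminate at the peak
        current = best      # no backtracking: the old state is forgotten

# Toy one-dimensional landscape: utility peaks at x = 7.
utility = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbors, utility))  # prints 7
```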

There are some problems with hill climbing:

- Hill climbing suffers from the fact that it gets stuck at a
local maximum.
- It gets stuck wherever there is a plateau (a flat local maximum or a shoulder): none of
the neighboring values is higher, so you stop there.
- Ridges (see figure):

o a sequence of local maxima that are not directly connected

o each local maximum only has worse connecting states

o common in low-dimensional state spaces

Note: the blue dots in the figure are different states, and you can see the elevation going up toward
the end. Ridges are not a problem in a one-dimensional state space, but they appear when there are
higher dimensions. There is a path that goes up, but you can only move from state to state by
following the lines, and all the lines leaving a particular state go down. So you are never going to
find the top because, for instance, you can only reach it by passing through a single state.

Improvements
- Allow a limited number of sideways moves (useful when what looks like a plateau is
really a shoulder)
o higher success rate, but also a higher number of moves (the disadvantage)
- Stochastic hill climbing: random selection between the uphill moves, with
probability related to steepness (the steepest move gets the highest probability)
- First-choice hill climbing: random testing of successors until one is found that is
better than the current state
o a good strategy when testing all successors is costly
- Random-restart hill climbing: do a number of hill-climbing searches from randomly
selected initial states (see the sketch after this list)
o if each hill-climbing search has probability of success p, then a solution will be
found on average after 1/p restarts
o will eventually find the correct solution, because at some point the goal state
itself will be generated as the initial state
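The random-restart idea is easy to sketch on top of the hill_climb function from the earlier example. The restart count and the random_state generator are assumptions for illustration:

```python
import random

def random_restart_hill_climb(random_state, neighbors, utility, restarts=50):
    """Run hill climbing from several random initial states and
    keep the best peak found across all restarts."""
    best = None
    for _ in range(restarts):
        peak = hill_climb(random_state(), neighbors, utility)
        if best is None or utility(peak) > utility(best):
            best = peak
    return best

# Reusing the toy utility/neighbors from the earlier sketch:
print(random_restart_hill_climb(lambda: random.randint(-100, 100),
                                neighbors, utility))  # prints 7
```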





If elevation = objective function -> find the global maximum (highest peak) -> hill climbing
If elevation = cost -> find the global minimum (lowest valley) -> gradient descent

Hill climbing is not really about a hill: what you are seeing in the chart is the outcome of the
objective function. If your goal is to achieve something that is as high as possible, you are trying to
find the global maximum of that objective function. If the elevation is instead a cost function,
meaning that the number illustrates how bad something is (the cost of doing something, where the
best cost you can have is zero, for instance the amount of money you lose), then you want to go as
low as possible: you want to find the global minimum. That version is called gradient descent.
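A minimal numerical sketch of gradient descent, under the assumption that the cost function is differentiable; the learning rate, step count, and example cost function are illustrative choices:

```python
def gradient_descent(x0, grad, lr=0.1, steps=100):
    """Minimize a cost function by repeatedly stepping against its
    gradient (the descent mirror-image of hill climbing)."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill on the cost surface
    return x

# cost(x) = (x - 3)^2 has its global minimum at x = 3; gradient = 2(x - 3).
print(gradient_descent(x0=0.0, grad=lambda x: 2 * (x - 3)))  # ~3.0
```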

Simulated annealing
- Hill climbing is efficient, but it gets stuck in a local maximum.
- A random walk (just selecting successor states at random until you find the maximum)
is complete but extremely inefficient.
- The combination of both is simulated annealing (more complete and more efficient; the
name comes from annealing metal). How simulated annealing works:

From wherever we are, we move to a randomly chosen neighbor state. If its utility is higher, we
always move to that state. But if its utility is lower, we move to that state only with probability
p < 1, so the process is not entirely random. Moves to a worse state become less likely the worse
the move makes the situation, and they also become less likely as the temperature decreases
(hence simulated annealing). This is the first algorithm here that is not entirely deterministic: there
is a kind of random sampling going on. The higher the temperature, the more willing the agent is to
move to a lower state; as the temperature drops, the probability of ending up at the global
maximum increases.
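A minimal sketch of this acceptance rule; the exponential acceptance probability exp(delta / T) and the geometric cooling schedule are standard choices, and the parameter values are assumptions:

```python
import math
import random

def simulated_annealing(initial, neighbors, utility,
                        t0=1.0, cooling=0.995, steps=10000):
    """Pick a random neighbor each step. Always accept it if it is
    better; accept a worse one with probability exp(delta / T),
    which shrinks as the move gets worse and as T cools down."""
    current, t = initial, t0
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        delta = utility(candidate) - utility(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling  # geometric cooling schedule (an assumption)
    return current
```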

Local beam search
does not consider one state at a time to start with; instead, it keeps k states, explores all of their
successor states, and from those states selects the k best successors at every step:

- if k = 1 -> hill climbing (keep only the best successor state)
- if k >= 2 -> parallel hill-climbing processes (not only choosing the best successor, but also the second best, and so on)

A variant on this is stochastic local beam search (both variants are sketched after this list), which

- selects the k successors at random at every step,
- with the probability of selecting a particular state being a function of its utility (aka fitness).
That is where we move from utility to fitness: fitness is a kind of utility, but we usually talk
about fitness when we talk about organisms and biology. Fitness = probability of survival.
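Minimal sketches of both variants, assuming a neighbors function that always returns at least one successor and, for the stochastic version, strictly positive utilities (so they can serve directly as sampling weights):

```python
import heapq
import random

def local_beam_search(start_states, neighbors, utility, k, steps=100):
    """Pool the successors of all k current states and keep the k
    best of the pool; with k = 1 this reduces to hill climbing."""
    beam = list(start_states)
    for _ in range(steps):
        pool = [s for state in beam for s in neighbors(state)]
        beam = heapq.nlargest(k, pool, key=utility)
    return max(beam, key=utility)

def stochastic_local_beam_search(start_states, neighbors, utility, k, steps=100):
    """Stochastic variant: sample k successors with probability
    proportional to their utility (fitness); assumes utility > 0."""
    beam = list(start_states)
    for _ in range(steps):
        pool = [s for state in beam for s in neighbors(state)]
        beam = random.choices(pool, weights=[utility(s) for s in pool], k=k)
    return max(beam, key=utility)
```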

Genetic algorithms
A genetic algorithm is a kind of local beam search with a slight additional touch. As in beam search,
we start with k randomly selected states (the population), and each state or individual is encoded as
a string. So you have a universe of states, and each state can be encoded as a particular string,
because a string is simply a sequence of characters. Each character in the string corresponds to a
particular gene, and you can cross individuals with each other, as in the color example from the
lecture.





First, calculate the fitness of each color. Then, in the second step, combine the colors; the
probability that a color will be chosen increases with its fitness. In this way the entire population is
replaced each generation.

So pairs are selected at random, with the probability of selection increasing with fitness (stochastic
sampling). The crossover point for each pair is chosen at random: it is the point in the string where
you cut off one string and combine it with the remainder of the other, and it therefore determines
how much genetic material the offspring gets from each parent. Offspring are created by combining
the parent strings at the crossover point. On top of that, there is a small probability of random
mutation, which randomly changes individual characters into other ones.
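Putting these steps together, here is a minimal genetic algorithm over fixed-length strings. The alphabet, mutation rate, generation count, and the toy color target in the demo are assumptions for the example; fitness is assumed to return a nonnegative number:

```python
import random

def genetic_algorithm(population, fitness, alphabet="RGB",
                      mutation_rate=0.01, generations=100):
    """Evolve a population of fixed-length strings by
    fitness-proportional selection, single-point crossover,
    and a small per-character chance of random mutation."""
    for _ in range(generations):
        # +1 keeps every weight positive even if all fitnesses are 0.
        weights = [fitness(ind) + 1 for ind in population]
        new_population = []
        for _ in range(len(population)):
            # Select two parents, probability increasing with fitness.
            mom, dad = random.choices(population, weights=weights, k=2)
            # Single-point crossover at a random cut point.
            cut = random.randrange(1, len(mom))
            child = mom[:cut] + dad[cut:]
            # Mutation: occasionally swap a character for a random one.
            child = "".join(random.choice(alphabet)
                            if random.random() < mutation_rate else c
                            for c in child)
            new_population.append(child)
        population = new_population  # the whole population is replaced
    return max(population, key=fitness)

# Toy demo: evolve color strings toward the target "RRGGBB".
target = "RRGGBB"
fitness = lambda s: sum(a == b for a, b in zip(s, target))
pop = ["".join(random.choice("RGB") for _ in range(6)) for _ in range(20)]
print(genetic_algorithm(pop, fitness))
```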




(Figure: offspring strings after crossover; characters shown in orange are mutations.)

To summarize genetic algorithms: they combine the uphill tendency (toward higher fitness) of the
hill-climbing algorithm with the random exploration we see in stochastic sampling methods like
annealing and stochastic beam search. There is also exchange of information between the search
threads: the search is not purely local, because you take solutions from the previous generation and
combine them with each other. The biggest advantage of the method comes from this crossover
operation. (There is no advantage if the code is initially permuted: if you took all the color codes of
the initial population and permuted them so that the positions have nothing to do with each other,
crossover would stop working.) So if you want to use genetic algorithms, careful engineering of the
genetic code is required.

Summary
- For many search problems, we do not need the best solution, or the best solution is not
achievable at all.
- Local search methods are a useful tool because they operate on complete state
configurations without keeping track of all the visited states.
- Simulated annealing adds a stochastic element to hill climbing and can give optimal
solutions in some circumstances.
- Stochastic local beam search provides a first approach to the generation and selection of
states; this is how you move from hill climbing to genetic algorithms. It is also the first
algorithm where information is shared between the threads that are searching.
- Genetic algorithms maintain a large population of states and use operations such as
mutation and crossover to explore the search space.




