Summary of Introduction to AI part 1

Model 7. Deterministic Problem Solving 3
Local search
Search methods discussed so far find a path through the search space and, while doing so, keep track of previously visited states. In chess, for example, it really matters how you move the pieces, in sequence, to reach a particular state; and to find the shortest route somewhere, you need to know where you have been. So if you want absolute certainty, or if you need the previous states tracked, the methods we have used until now handle those things. For many problems, however, we only want to know the final configuration: we want to reach a good solution, and we do not need the best possible one. For example, when you go to work, you just want to get there in time; the route does not have to be the shortest or the fastest, it does not have to be optimal.

Local search algorithms are called local because they do not explore the entire search space, only states in the direct neighborhood of the current state (searching in a particular locality). Based on what happens at those neighboring states, they decide what to do next. They do not keep track of the states that have been reached, and they do not care that there may be a better solution somewhere else. We can apply these methods whenever we are happy with a suboptimal solution, and this kind of search is really fast.

In AI the metaphor of a hill is used for local search: you are located at a particular state and can move to a number of other states. Each state has a utility, defined by an objective function, and the aim is to reach the state with the highest possible utility (or, equivalently, the lowest cost). So if you are climbing a mountain and want to get to the top, the state where you are right now is the current state, and the places you can go are the successor states. The higher you get on the mountain, the higher the utility, at least if up is where you want to go.




In the figure, the states lie along a single dimension, and the vertical axis gives each state's "elevation"; it is a graph of a one-dimensional state space. This is the picture AI uses for local search.

Hill-climbing algorithm
(the simplest algorithm to explore such problems) is like trying to climb a mountain in the fog, with amnesia. You see nothing of what is around you, and you never remember whether the places you went were higher than where you are now. The only method you can apply in that case is hill climbing, which does the following (a runnable sketch follows the list):




- keeps track of only one current state (no backtracking) and forgets all the states you have been to before (very little memory)
- does not look ahead beyond the immediate neighbors of the current state (greedy: it immediately takes the best value)
- on each iteration (next step) moves to the neighboring state with the highest value (steepest ascent: choose the highest value)
- terminates when a peak is reached (no neighbor has a higher value).
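A minimal sketch of steepest-ascent hill climbing, assuming each state comes with a neighbors function listing its successor states and a value function playing the role of the objective (both names, and the toy objective in the usage line, are made up for illustration):

```python
def hill_climb(initial_state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbor,
    stopping when no neighbor is better (a peak has been reached)."""
    current = initial_state
    while True:
        # Look only at the immediate neighbors (greedy, no look-ahead).
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current   # peak: every neighbor is equal or lower
        current = best       # keep just one state, no backtracking

# Toy usage: maximize -(x - 3)^2 over the integers, neighbors are x - 1 and x + 1.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))   # -> 3
```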

When everything around you is equal to or lower than where you are now, you stop; the algorithm does not look beyond the immediate neighbors of the current state. Hill climbing finds the optimum if the mountain is one simple peak. Otherwise, hill climbing is not guaranteed to find anything at all, except a local maximum.

There are some problems with hill climbing:

- It will get stuck at a local maximum.
- It will get stuck wherever there is a plateau (a flat local maximum or a shoulder): none of the neighboring values is higher, so you stop there.
- Ridges (see figure)

o a sequence of local maxima that are not directly connected

o each local maximum only has worse connecting states

o common in low-dimensional state spaces

Note: the blue dots in the figure are different states, and the elevation goes up towards the end. Ridges are not a problem in a one-dimensional state space, but they appear in higher dimensions. There is a path that goes up, but you can only move from state to state by following the lines, and all of the lines leaving a particular state go down. So you are never going to find the top, because to get there you might have to pass through one specific state.

Improvements
- Allow a limited number of sideways moves (useful if the plateau is really a shoulder)
  o higher success rate, but also a higher number of moves (disadvantage)
- Stochastic hill climbing: random selection among the uphill moves, with the probability related to steepness (the steepest move gets the highest probability)
- First-choice hill climbing: test successors at random until one is found that is better than the current state
  o a good strategy when testing all successors is costly
- Random-restart hill climbing: run a number of hill-climbing searches from randomly selected initial states (see the sketch after this list)
  o if each hill-climbing search has probability of success p, a solution will be found on average after 1/p restarts
  o it will eventually find the correct solution, because at some point the randomly chosen initial state will itself be a goal state
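Reusing the hill_climb sketch above, random-restart hill climbing might look like the following; random_state and is_goal are assumed to be helpers supplied by the problem:

```python
def random_restart_hill_climb(random_state, neighbors, value, is_goal, max_restarts=10_000):
    """Run hill-climbing searches from random initial states until a goal is found.
    If each search succeeds with probability p, ~1/p restarts are needed on average."""
    for _ in range(max_restarts):
        peak = hill_climb(random_state(), neighbors, value)   # sketch from above
        if is_goal(peak):
            return peak
    return None   # practical cutoff; in principle it eventually succeeds
```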





If elevation = objective function -> find the global maximum (highest peak) -> hill climbing
If elevation = cost -> find the global minimum (lowest valley) -> gradient descent

Hill climbing is not really about a hill: what you are seeing in the chart is the outcome of the objective function. If your goal is to make something as high as possible, you are trying to find the global maximum of that objective function. If the elevation is a cost function, the number indicates how bad something is, the cost of doing something (for instance, the amount of money you lose), and the best cost you can have is zero; then you want to go as low as possible and find the global minimum. That version is called gradient descent.
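For contrast with hill climbing on a utility, here is a minimal gradient-descent sketch on a cost function; the learning rate and the toy cost are made-up illustrations:

```python
def gradient_descent(x, grad, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient of the cost, moving downhill
    towards a (local) minimum."""
    for _ in range(steps):
        x -= learning_rate * grad(x)   # downhill instead of uphill
    return x

# cost(x) = (x - 3)^2 has gradient 2 * (x - 3) and its global minimum at x = 3.
print(round(gradient_descent(0.0, lambda x: 2 * (x - 3)), 4))   # -> 3.0
```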

Simulated annealing
- Hill climbing is efficient, but it gets stuck in a local maximum.
- A random walk (just selecting successor states at random) will eventually reach the maximum, but it is highly inefficient.
- Simulated annealing combines the two methods, making the search both more complete and more efficient (the name comes from annealing metal).

How does simulated annealing work? From wherever we are, we move to a randomly chosen neighbor state. If its utility is higher, we always move to that state; if its utility is lower, we move there with probability p < 1, so the process is not entirely random. Moves to a worse state become less likely the worse that move makes the situation, and they also become less likely as the temperature decreases (hence "annealing"). This is the first algorithm here that is not entirely deterministic: there is random sampling going on. At a high temperature the agent will readily move to a lower state; as the temperature is lowered slowly, the probability of ending up at the global maximum increases.
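A sketch of simulated annealing along the lines of the standard formulation, assuming a maximization problem; the cooling schedule and the toy problem in the usage lines are arbitrary choices:

```python
import math
import random

def simulated_annealing(initial_state, neighbors, value, schedule):
    """Move to a randomly chosen neighbor; always accept a better state,
    accept a worse one with probability exp(delta / T) at temperature T."""
    current = initial_state
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return current                       # cooled down: stop
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)      # > 0 means the neighbor is better
        # Worse moves get less likely the worse they are and the lower T is.
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        t += 1

# Toy usage: geometric cooling, maximize -(x - 3)^2; usually ends at or near 3.
schedule = lambda t: 0 if t > 2000 else 10 * 0.99 ** t
print(simulated_annealing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2, schedule))
```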

Local beam search
does not consider one state at a time: it explores all of the successor states and, from those states, selects the k best successors at every step:

- if k = 1 -> hill climbing (keep only the best successor state)
- if k ≥ 2 -> k parallel hill-climbing processes (not only the best successor is kept, but also the second best, and so on)

A variant on this is stochastic local beam search, which

- selects k successors at random at every step,
- with the probability of selecting a particular state being a function of its utility (aka fitness); a sketch follows this list. This is where we move from utility to fitness: fitness is a kind of utility, but we usually speak of fitness when talking about organisms/biology. Fitness = probability of survival.
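A sketch of stochastic local beam search, under the assumption that the value function behaves like a fitness, i.e. it is positive and higher is better (random.choices needs positive weights):

```python
import random

def stochastic_beam_search(initial_states, neighbors, value, k, steps=100):
    """Keep k states; at every step sample k successors with probability
    proportional to their value ('fitness'), instead of taking the k best."""
    beam = list(initial_states)
    for _ in range(steps):
        # Pool the successors of all k current states together (shared information).
        pool = [s for state in beam for s in neighbors(state)]
        beam = random.choices(pool, weights=[value(s) for s in pool], k=k)
    return max(beam, key=value)
```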

Genetic algorithms
A kind of local beam search with a slight additional touch. As in beam search, we start with k randomly selected states (the population), and each state or individual is encoded as a string. So you have a universe of states, and each state can be encoded as a particular string, because a string is simply a sequence of characters. Each character in the string corresponds to a particular gene, and you can cross individuals with each other; the lecture's example used colors.




First, calculate the fitness of each color. Then, in the second step, combine the colors; the probability that a color will be chosen is proportional to its fitness. In the end, the entire population is replaced again.

So, pairs are selected at random, with the probability of selection increasing with fitness (stochastic sampling). The crossover point for each pair is chosen at random: it is the point where you cut the string off and combine the first part with the remainder of its mate's string, and it determines how much genetic material the offspring gets from each parent. Offspring are created by combining the two strings at the crossover point. On top of that there is a small probability of random mutation, which randomly changes individual characters into other values (a sketch follows the figure).




(In the figure, the orange characters mark mutations.)
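A sketch tying the steps together: fitness-proportional selection, a random crossover point, and a small per-character mutation probability. The gene alphabet, the rates, and the equal-length string encoding are all assumptions for illustration:

```python
import random

GENES = "0123456789abcdef"   # hypothetical gene alphabet for the string encoding

def genetic_algorithm(population, fitness, mutation_rate=0.01, generations=100):
    """Each generation: pick parents with probability proportional to fitness,
    cross each pair over at a random point, mutate, and replace the population."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]   # assumes positive fitness
        new_population = []
        for _ in range(len(population)):
            mom, dad = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, len(mom))          # random crossover point
            child = mom[:cut] + dad[cut:]                # combine the two strings
            child = "".join(random.choice(GENES) if random.random() < mutation_rate
                            else gene for gene in child) # small chance of mutation
            new_population.append(child)
        population = new_population                      # entire population replaced
    return max(population, key=fitness)
```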

To summarize genetic algorithms: they combine the uphill tendency (following fitness) of the hill-climbing algorithm with the random exploration we see in stochastic sampling methods such as annealing and stochastic beam search. There is also exchange of information between the search threads: the search is not purely local, because you take solutions from the previous generation and combine them. The biggest advantage of the method comes from the crossover operation. There is no advantage if the code is initially permuted: if you took all the color codes of the initial population and permuted them so that the positions have nothing to do with each other, crossover stops working. So using genetic algorithms requires careful engineering of the genetic code.

Summary
- For many search problems, we do not need the best solution, or the best solution is not achievable at all.
- Local search methods are a useful tool because they operate on complete-state formulations without keeping track of all the states that have been reached.
- Simulated annealing adds a stochastic element to hill climbing and can give optimal solutions in some circumstances.
- Stochastic local beam search provides a first approach to the generation and selection of states; this is how you move from hill climbing to genetic algorithms. It is also the first algorithm where information is shared between the searching threads.
- Genetic algorithms maintain a large population of states and use operations such as mutation and crossover to explore the search space.



