COS3751 - Techniques of Artificial Intelligence (COS3751)
Note: In these summaries, a block with a ‘red E’ means it is an
exam question and is very important. A yellow block with an ‘A’ means
it is an assignment question, which is also important.
Agents and uninformed searches
A problem can be defined by 5 components (a code sketch follows this list):
Initial state: e.g. In(Arad)
A description of the possible actions available to the agent:
given a particular state s, ACTIONS(s) returns the actions
applicable in s. E.g. in state In(Arad) the applicable actions
are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
A Transition model: a description of what each action does.
Specified by the function RESULT(s, a), which returns the state
that results from doing action a in state s. E.g.
RESULT(In(Arad), Go(Zerind)) = In(Zerind). Think: transition to
another state.
To define this, first define the applicable actions, then give:
Result(State, action1) -> resulting state
Result(State, action2) -> resulting state
Normally the exam will only ask one case; it will give you a state to work from.
E.g. here the state we are working from is S = {Andile, Bob, Torch},
although here there is more than one way to define the actions and
states.
Goal test: determines whether a given state is a goal state.
E.g. the goal state is {In(Bucharest)}.
Path cost function: assigns a numeric cost to each path. The
step cost of taking action a in state s to reach state s' is
denoted by c(s, a, s').
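As a minimal sketch (not from the notes: the map fragment, step costs, and function names below are my own illustrative assumptions in Python), the 5 components for the route-finding example could look like this:

# Sketch of the 5-component problem definition for the Romania
# route-finding example. The map fragment and step costs are
# illustrative assumptions, not the complete map.
ROADS = {  # state -> {action: (resulting state, step cost)}
    "Arad":   {"Go(Sibiu)": ("Sibiu", 140),
               "Go(Timisoara)": ("Timisoara", 118),
               "Go(Zerind)": ("Zerind", 75)},
    "Zerind": {"Go(Arad)": ("Arad", 75),
               "Go(Oradea)": ("Oradea", 71)},
}

INITIAL_STATE = "Arad"           # 1. initial state: In(Arad)

def actions(s):                  # 2. ACTIONS(s): actions applicable in s
    return list(ROADS.get(s, {}))

def result(s, a):                # 3. transition model RESULT(s, a)
    return ROADS[s][a][0]

def goal_test(s):                # 4. goal test: is s a goal state?
    return s == "Bucharest"

def step_cost(s, a, s1):         # 5. step cost c(s, a, s')
    return ROADS[s][a][1]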
E Reflex agent:
A simple agent that chooses an action based on the current percept (and maybe memory). So based
on the input and current state (there is a lookup table) the agent will choose an action.
Does not consider future consequences.
Considers how the world IS.
They are rational, because they will choose so as to achieve the best outcome when given a
certain percept.
GIVE THIS IN EXAM: A rational agent is an agent that acts in order to achieve the best
outcome, or where there is uncertainty, the best expected outcome. Conceptually speaking,
it does the “right thing”.
Definition of rationality: A system is rational if it does the right thing, given what it knows.
A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
Simple reflex agent:
Selects actions on the basis of the current percept (the information it receives from the
environment), ignoring the rest of the percept history.
Basically the same agent as before.
Uses condition-action rules, e.g. if car-in-front-is-braking then initiate-braking.
Model-based reflex agent:
Uses knowledge about how the world works, i.e. uses a model.
Def: It bases actions on the current percept and the percept history, and it uses a model of
how the world works.
The model shows what actions will do to change the world (see the sketch below).
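As a minimal sketch (the rule table, percepts, and class below are illustrative assumptions, not a standard API), the contrast between the two agent types could be coded like this:

# Simple reflex: condition-action rules over the CURRENT percept only.
RULES = {"car-in-front-is-braking": "initiate-braking"}

def simple_reflex_agent(percept):
    return RULES.get(percept, "no-op")

# Model-based reflex: also keeps internal state, updated via a model of
# how the world works, so it can act on things it cannot currently see.
class ModelBasedReflexAgent:
    def __init__(self):
        self.braking_ahead = False           # internal model of the world

    def act(self, percept):
        if percept == "car-in-front-is-braking":
            self.braking_ahead = True        # update state from percept
        elif percept == "road-clear":
            self.braking_ahead = False
        # Action depends on the remembered state, not just the current percept.
        return "initiate-braking" if self.braking_ahead else "no-op"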
E Environments:
Discrete vs continuous environments:
Discrete = an environment with precise, finite states. E.g. a light
switch (2 states: on/off).
Continuous = an environment with an infinite number of possible
states (real life). E.g. temperature.
Fully vs partially observable environments:
Fully observable: if an agent’s sensors give it access to the complete
state of the environment at each point in time (at least every aspect the
agent needs in order to act).
Partially observable: if the agent’s sensors do not give access to the
complete state.
Single agent vs multiagent:
Single-agent environment: when a single agent is alone in an environment.
E.g. an agent playing a crossword puzzle.
Multiagent environment: when there is more than one agent in the
environment. E.g. an agent playing chess.
Deterministic vs stochastic:
Deterministic: if the next state of the environment is completely
determined by the current state and the action executed by the agent.
E.g. for an agent playing a crossword puzzle the next state is determined
(deterministic) by the current state and the action.
Stochastic: if the next state is not determined only by the current state and
the action. E.g. a taxi-driver agent, since it can’t predict the traffic.
Episodic vs sequential:
Episodic: an agent whose action is not based on previous actions
taken. E.g. an agent that has to spot defects on assembly lines.
Sequential: an agent whose actions are based on its previous actions,
i.e. current decisions affect future decisions. E.g. an agent playing
chess, since short-term decisions affect future decisions.
Static vs dynamic:
Static: if the environment cannot change while the agent is
deliberating (making decisions). E.g. an agent playing a crossword
puzzle.
Dynamic: if the environment can change while the agent is deliberating.
E.g. a taxi-driving agent, because traffic is constantly changing
while the agent is making decisions.
Known vs unknown:
Known: all outcomes of all actions are known to the agent.
Unknown: not all outcomes of all actions are known to the agent.
Competitive vs cooperative:
Competitive: when there are two agents whose goals conflict with each
other. E.g. two agents playing chess: B is trying to maximize its
performance, which minimizes A’s performance.
Cooperative: when the agents’ goals are similar and a particular agent’s
action maximizes all agents’ performance. E.g. taxi-driver agents, because
avoiding collisions maximizes the performance of all agents.
Note: a single-agent environment can never be competitive or
cooperative; it must be multiagent.
Optimal vs. complete planning:
A complete planning agent finds a solution (reaches the goal state); an optimal
agent finds a solution in the most optimal (e.g. least-cost) way.
Planning vs. replanning:
A planning agent comes up with an entire plan to get to the solution; a
replanning agent modifies this plan to make it more optimal (basically a utility agent).
A replanning agent can also initially come up with many plans and then choose
the best one.
A Difference between model-based and simple reflex agents:
Simple reflex agent: the agent’s actions are based only on the current percept, not
the percept history. E.g. an agent that has to spot defects on assembly lines.
Model-based agent: here the agent’s actions are based not only on the current
percept, but also on the percept history. E.g. a taxi-driving agent.
E Representation:
Example:
A state representation describes a general state of a specific problem in a
formal way. An example is
State: S = (P, x)
x: represents how many items are left on the heap. x ∈ {0, 1, 2, 3, 4, 5}
P: represents which player’s turn it is in this state to remove item(s), either A
or B. P ∈ {A, B}
A specific state can be Sj = (A, 4).
i) They ask what elements form part of the state representation,
so to make a state representation you must look at the initial state,
the actions, and the transition model.
Remember: the elements that form a state representation are the initial state (or
states of the problem), the actions, and the transition model.
Example:
To define a game’s state representation:
You represent a general state in mathematical notation,
something like State: S = (P, x).
Then you say what each variable is (be specific) and list what values the
variables can take on.
E.g.:
x: represents how many items are left on the heap. x ∈ {0, 1, 2, 3, 4, 5}
P: represents which player’s turn it is in this state to remove item(s),
either A or B. P ∈ {A, B}
Here is the full answer:
a) i) State: S = (P, n), where P represents whose turn it is, either A or B, and n is the
number of items on the heap.
P ∈ {A, B}: P represents either player A or B.
n ∈ {0, 1, 2, 3, 4, 5}: n represents the number of items currently on the heap.
To define an action using this representation:
Represent the action using mathematical notation (you can also show it
applied to a state):
Example: Action: a = remove(i). Note: don’t put the state in the action; also,
we use i, since n is already taken.
You must say what the variable means, explain in English what the action does,
and state what values the variable can take on. You can
also give an example.
You can also show what the action returns in formal notation.
An example of an action would be remove(2), which will remove 2 items.
Here is the answer to ii):
Applicable actions:
Ai = remove(i): this action removes i items from the heap. (Explain what the
action does and what the variable is IN ONE SENTENCE.)
i ∈ {1, 2, 3} (say what values the variable can hold)
E.g. A3 = remove(3): this will remove 3 items from the heap.
The action will return a state,
e.g. Result(S, Ai) -> S', which says the result of doing action Ai in state S will yield
S'.
How to show a state was achieved by doing a certain action:
iii)
Start state: S = (A, 5)
We need to define A2 as well; try to define everything and be specific:
Action: A2 = remove(2)
Result(S, A2) -> S' = (B, 3). This says doing action A2 with starting state S will
result in state S'.
Note: they did not say that we should define this formally, so the memo used
English, but I will always define it formally.
Make sure your examples are consistent with your definition (a code sketch follows).
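As a minimal sketch (the function names are my own; the state, actions, and values come straight from the answer above), the whole representation can be made executable:

# State S = (P, n): P in {"A", "B"} is whose turn it is,
# n in {0, ..., 5} is the number of items left on the heap.

def applicable_actions(state):
    # Ai = remove(i) removes i items from the heap, i in {1, 2, 3},
    # and is applicable only while at least i items remain.
    _, n = state
    return [i for i in (1, 2, 3) if i <= n]

def result(state, i):
    # Result(S, Ai) -> S': remove i items and pass the turn over.
    P, n = state
    return ("B" if P == "A" else "A", n - i)

# Worked example from iii): Result((A, 5), A2) -> (B, 3)
S = ("A", 5)
assert result(S, 2) == ("B", 3)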
Search problem (same as the problem definition previously)
Consists of:
- A state space: the set of all the states in this world.
- A successor function: a description of what each action
does. Specified by the function RESULT(s, a), which returns the state
that results from doing action a in state s. E.g.
RESULT(In(Arad), Go(Zerind)) = In(Zerind).
- A start state and a goal test.
A solution is a sequence of actions (a plan) which transforms the start state into the
goal state.
Search problems are models!
The world state includes every detail of the environment (e.g. in the previous example
it includes cities, roads, birds, deer, etc.), but the search state includes only the
details needed for planning (abstraction) (e.g. cities, road costs).
State space graph: a mathematical representation of a search problem; a
graphical representation of all the states and which actions lead to which
states (a small sketch follows this list).
- Nodes are (abstracted) world configurations (states).
- Arcs represent successors (action results).
- The goal test is a set of goal nodes.
- Each state occurs only once, regardless of the paths that could lead
to it.
- But this consumes too much memory; it’s only for understanding.
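As a minimal sketch (reusing the heap game from above; only a fragment is shown, and the dictionary layout is my own), a state space graph is just each state listed once with arcs to its successors:

# Fragment of the state space graph for the heap game.
# Each state appears exactly once; arcs are labelled by actions.
GRAPH = {
    ("A", 5): {"remove(1)": ("B", 4), "remove(2)": ("B", 3), "remove(3)": ("B", 2)},
    ("B", 3): {"remove(1)": ("A", 2), "remove(2)": ("A", 1), "remove(3)": ("A", 0)},
    # ... remaining states omitted: the full graph is usually too big to build.
}
GOAL_NODES = {("A", 0), ("B", 0)}  # the goal test is a set of goal nodes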
Search Trees
We use these instead of a state space graph.
A “what-if” tree of plans and their outcomes.
The start state is the root node.
Children correspond to successors (the state after applying an action to
another state).
Nodes show states, but correspond to PLANS that achieve those states.
This means the same state can appear multiple times in the tree.
For most problems, we can never build the whole tree.
[Figure: a node of the search tree; this state corresponds to the PLAN of being in the start state and going north.]
Tree search algorithm:
How does the algorithm work? Pass in a problem and a strategy (what type of search we do /
which frontier node do I expand next). It returns either a solution or failure.
Then continuously do:
If there are no children for expansion (meaning I have not reached a goal and can’t
expand), return failure.
Otherwise, choose a frontier node according to the strategy; if it contains a goal
state, return the solution, else expand it and add its children to the frontier.
(A code sketch follows.)
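As a minimal sketch (an AIMA-style TREE-SEARCH skeleton; the problem interface is assumed to match the functions defined earlier, and the strategy is reduced here to the order in which frontier nodes are expanded):

from collections import deque

def tree_search(problem, depth_first=False):
    # Each frontier entry is a plan: the sequence of states from the root.
    frontier = deque([[problem.initial_state]])
    while True:
        if not frontier:                       # no children for expansion
            return "failure"
        # Strategy: which frontier node to expand next
        # (pop from the left = breadth-first, from the right = depth-first).
        plan = frontier.pop() if depth_first else frontier.popleft()
        state = plan[-1]
        if problem.goal_test(state):           # goal reached: return solution
            return plan
        for a in problem.actions(state):       # expand: add child plans
            frontier.append(plan + [problem.result(state, a)])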