COS3751
EXAM
PACK
2023
FOR ANY QUERIES & EXAM ASSISTANCE
CONTACT: biwottcornelius@gmail.com
1 Assignment 01 Model Solutions
ASSIGNMENT 01
Solution
Total Marks: 100
UNIQUE ASSIGNMENT NUMBER: 362609
Study material: Chapters 1 through 4. You may skip sections 4.2 and 4.5.
Question 1: 10 Marks
(1.1) List and explain the components that make part of a typical agent design. (6)
1. Sensors: used to perceive the environment X 2
2. Actuators: allows the agent to act upon the environment X 2
3. Environment: the context within which the agent operates. X 2
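As a rough illustration only (not part of the prescribed answer), the short Python sketch below shows how these three components interact: the environment supplies percepts through the sensors, the agent program chooses an action, and the actuators change the environment. All class and method names here are assumptions made for this example.

    class SimpleReflexAgent:
        """Maps each percept directly to an action (illustrative rules only)."""
        rules = {"Dirty": "Suck", "Clean": "Right"}

        def program(self, percept):
            # The percept comes from the sensors; the program picks an action.
            return self.rules[percept]

    class VacuumEnvironment:
        """A one-square vacuum world used purely as a toy environment."""
        def __init__(self):
            self.status = "Dirty"

        def percept(self):
            return self.status          # what the sensors can observe

        def execute(self, action):
            if action == "Suck":        # the actuators act on the environment
                self.status = "Clean"

    env = VacuumEnvironment()
    agent = SimpleReflexAgent()
    env.execute(agent.program(env.percept()))
    print(env.status)                   # Clean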
(1.2) Explain the difference between a Fully observable, and Partially observable state. (4)
A fully observable state means the agent is able to observe the entire state, at a distinct
point in time X 2 , while a partially observable state means the agent can only observe
a part of the state at a distinct point in time. X 2
Question 2: 12 Marks
Search spaces:
(2.1) Consider the following game: two players (A and B) take turns removing between 1
and 3 items from a heap containing 5 items. The goal is to be the last person to
take something from the heap (i.e., after that move, the heap is empty, and no more
moves can be made).
(a) Define a state representation for the game. (3)
The most compact state would be {P, R} X with P representing the player that
made the last move, and R representing the number of items remaining X . Normally
P ∈ {A, B}; as a special case P may be ∅, since our start state should be {∅, 5} X
(b) How big is the search space for this game? (2)
A brute-force search will have to search at most d levels deep with a branching
factor of b X . The depth is no more than 5, since we have two players taking turns
and in the worst case each one takes 1 item from the heap X . However, in most
cases the tree will not be that deep, since an ideal strategy allows player one to
win with an initial move of 1; in that case the search is only 3 levels deep. So, at
worst: O(b^d).
, (c) Using your state representation, define the start state for the game. (1)
{∅, 5} X
(d) Define the goal state for the game based on your state representation. (2)
{A, 0} for player A X , and {B, 0} for player B X .
(e) Define all the operators and functions for this problem. (2)
For moving: move(State, p, n) ↦ {p, m − n}, where m is the number of items remaining in State X . Goal test: goal({p, 0}) ↦ {true, false} X
(f) Write down the sequence of moves that guarantees that player A (the first player (2)
to move) wins every time.
A first plays move(A, 1); thereafter A answers each of B's moves so that the pair of moves removes 4 items in total:
move(B, 1) → move(A, 3)
move(B, 2) → move(A, 2)
move(B, 3) → move(A, 1)
goal(A, 0) X 2
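To complement the answers above, here is a brief Python sketch (an illustration only, not part of the official model answer) that encodes the state representation {P, R}, the move and goal operators from (e), and confirms by brute-force game-tree search that the opening move move(A, 1) forces a win for player A.

    def move(state, p, n):
        """Apply move(State, p, n) -> {p, m - n}."""
        _, m = state
        assert 1 <= n <= 3 and n <= m
        return (p, m - n)

    def goal(state):
        """goal({p, 0}) -> true/false."""
        return state[1] == 0

    def wins(state, player):
        """True if `player` (the one to move next) can force a win from `state`."""
        _, m = state
        if goal(state):
            return False                 # the previous player took the last item
        other = 'B' if player == 'A' else 'A'
        return any(not wins(move(state, player, n), other)
                   for n in range(1, min(3, m) + 1))

    start = (None, 5)                      # the start state {∅, 5}
    print(wins(start, 'A'))                # True: A can force a win
    print(wins(move(start, 'A', 1), 'B'))  # False: after move(A, 1), B cannot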
Question 3: 70 Marks
(3.1) Explain the difference between a toy and real world problem. Why do we make the (3)
distinction?
With toy problems we want to illustrate/test a particular problem-solving method X .
With a real-world problem we actually need to answer a question X . Re-solving a toy
problem does not mean much unless the answer is calculated faster or the margin of
error is smaller. We use toy problems to test our hypotheses before attempting to let
them loose in the field on real-world problems X .
(3.2) Explain the concept of a transition model. (1)
Given a state and an action, it provides the resulting state.X
(3.3) What is the purpose of the frontier ? (1)
Maintains a list of all nodes that are available for expansionX (tells us which nodes we
can explore next)
(3.4) Explain the difference between path and step cost. (2)
Path cost provides the cost of the solution thus farX , while step cost provides the cost
when applying action a to state s in order to reach state s′ X
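As a small worked illustration (with made-up states and step costs), the path cost g is just the running sum of the step costs c(s, a, s′) along the path:

    # Step costs c(s, a, s') for an assumed toy problem.
    step_costs = {('S', 'A'): 2, ('A', 'G'): 5}

    path = ['S', 'A', 'G']
    g = 0                                   # path cost accumulated so far
    for s, s_next in zip(path, path[1:]):
        g += step_costs[(s, s_next)]        # add one step cost per action taken
    print(g)                                # path cost of S -> A -> G is 7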
(3.5) List and explain three typical data structures that can be used to implement the frontier. (3)
1. FIFO: always returns the oldest (first-in) element when requesting an element.X
2. LIFO: always returns the youngest element when requesting an element.X
3. Priority Queue: always returns the element with the highest priority based on
some ordering scheme.X
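For concreteness, here is a minimal sketch of the three frontier implementations using Python's standard library (illustrative only; the course does not prescribe a particular implementation):

    from collections import deque
    import heapq

    # 1. FIFO queue (used by BFS): always pop the oldest node.
    fifo = deque(['A', 'B', 'C'])
    print(fifo.popleft())            # A

    # 2. LIFO stack (used by DFS): always pop the youngest node.
    lifo = ['A', 'B', 'C']
    print(lifo.pop())                # C

    # 3. Priority queue (used by UCS/A*): always pop the best-priority node.
    pq = []
    heapq.heappush(pq, (5, 'A'))     # entries are (priority, node)
    heapq.heappush(pq, (2, 'B'))
    print(heapq.heappop(pq)[1])      # B, since it has the lowest cost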
(3.6) Uninformed Search:
(a) Clearly explain why uninformed searches do not necessarily produce the best (2)
search results.
Uninformed searches do not know whether one non-goal state is more promising than
another non-goal state X . This means that they may choose a horrible solution to
a problem X .
(b) What is the space complexity of the Depth First Search (DFS) algorithm with a (3)
branching factor of β at depth δ? Is this a problem? If so, explain why. If not,
explain why not.
O(δβ) X . Yes, it is a problem X . It means that for large search spaces the space
requirement quickly becomes intractable X .
(c) Does Uniform Cost Search (UCS) alleviate the space complexity problems with (3)
Breadth First Search (BFS)? Explain your answer.
No X . It may spend time exploring the area around the goal to try and find a cheaper
route, resulting in a larger space requirement X . O(β^(1+⌊C*/ε⌋)) may be much larger
than O(β^δ) X
(d) Clearly explain why would one choose Iterative Deepening Search (IDS) as a (2)
search strategy.
One may choose IDS if one is uncertain of the depth of the search tree X 2 .
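The sketch below (an assumed toy tree, not taken from the study guide) shows the idea: IDS simply repeats a depth-limited DFS with limits 0, 1, 2, ... until the goal is found, so no prior knowledge of the solution depth is required.

    from itertools import count

    def depth_limited(tree, node, goal, limit):
        # Depth-limited DFS that returns a path to the goal, or None.
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in tree.get(node, []):
            path = depth_limited(tree, child, goal, limit - 1)
            if path is not None:
                return [node] + path
        return None

    def ids(tree, start, goal):
        # Note: loops forever if the goal is unreachable (fine for a sketch).
        for limit in count():
            path = depth_limited(tree, start, goal, limit)
            if path is not None:
                return path

    tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F']}
    print(ids(tree, 'A', 'E'))       # ['A', 'B', 'E']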
(3.7) Consider the search tree in figure 1 and answer the questions that follow.
(a) Show the order in which the nodes will be expanded given that the BFS search (6)
is used. Assume the goal node is L, and that nodes are expanded from left to
right.
Nodes are expanded level by level: A, B, C, D, E, F, G, H, I, J, K, L X 6
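For illustration only (figure 1 is not reproduced here, so the tree below is an assumed three-level tree chosen to match the listed order), a FIFO frontier produces exactly this level-by-level expansion:

    from collections import deque

    # Assumed tree: A's children are B, C, D, and so on down to the goal L.
    tree = {'A': ['B', 'C', 'D'],
            'B': ['E', 'F'], 'C': ['G', 'H', 'I'], 'D': ['J', 'K', 'L']}

    def bfs_order(tree, start, goal):
        frontier = deque([start])        # FIFO frontier => level-by-level order
        order = []
        while frontier:
            node = frontier.popleft()
            order.append(node)
            if node == goal:
                break
            frontier.extend(tree.get(node, []))
        return order

    print(bfs_order(tree, 'A', 'L'))     # ['A', 'B', 'C', ..., 'K', 'L']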
(b) Name two problems with this search approach. (2)
Not optimalX , space and time inefficientX .