COS3751 2023 Assignment 1
Question 1:
1.1 Explain the difference between a fully observable and a partially observable
environment.
In artificial intelligence and machine learning, the terms fully observable environment and partially
observable environment refer to the degree of visibility that an agent has into the state of its
environment.
A fully observable environment is one in which the agent can directly observe and perceive the
complete state of the environment at each time step. In other words, the agent has complete
information about the state of the environment, and there are no hidden variables that affect the
behavior of the environment.
On the other hand, a partially observable environment is one in which the agent cannot directly
observe the complete state of the environment at each time step. In such an environment, the agent
has only partial or noisy information about the current state of the environment. This can happen
because of sensor limitations, the presence of hidden variables, or other factors.
In a partially observable environment, the agent must maintain a belief state, which is a probability
distribution over all possible states of the environment, based on its observations and previous
actions. The agent's policy must take into account this belief state in order to make optimal
decisions. The task of determining the optimal policy in a partially observable environment is
known as a partially observable Markov decision process (POMDP).
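The belief-state update described above can be sketched with a small Bayesian filtering example. The two-state door world, the sensor probabilities, and the function names below are all illustrative assumptions, not part of any standard library:

```python
# Hypothetical two-state world: the agent cannot see whether a door is
# "open" or "closed"; it only receives a noisy sensor reading.
# P(observation | true state) -- probabilities are illustrative assumptions.
SENSOR_MODEL = {
    ("sees_open", "open"): 0.8,
    ("sees_open", "closed"): 0.3,
    ("sees_closed", "open"): 0.2,
    ("sees_closed", "closed"): 0.7,
}

def update_belief(belief, observation):
    """Bayesian belief update: P(s | o) is proportional to P(o | s) * P(s)."""
    unnormalised = {
        state: SENSOR_MODEL[(observation, state)] * prob
        for state, prob in belief.items()
    }
    total = sum(unnormalised.values())
    return {state: p / total for state, p in unnormalised.items()}

# Start with a uniform belief, then fold in one noisy observation.
belief = {"open": 0.5, "closed": 0.5}
belief = update_belief(belief, "sees_open")
```

After seeing "sees_open", the belief shifts toward "open" (roughly 0.73 versus 0.27) without ever becoming certain, which is exactly the situation a POMDP policy must act under.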
1.2 Explain the difference between a Deterministic and Stochastic
environment.
In artificial intelligence and machine learning, the terms deterministic and stochastic refer to the
nature of the environment in which an agent operates.
A deterministic environment is one in which the outcome of each action is completely determined
and predictable. In other words, given a particular state and action, there is only one possible next
state. The environment does not introduce any randomness or uncertainty in the outcome of actions.
On the other hand, a stochastic environment is one in which the outcome of actions is not
completely determined, and there is some degree of randomness or uncertainty in the outcome. In
such an environment, the same action taken in the same state may lead to different outcomes at
different times due to the presence of random factors.
Stochastic environments can be further classified by the source of the uncertainty the
agent faces. For example, in a partially observable environment the uncertainty stems
from the agent's incomplete knowledge of the state, while in a fully observable
stochastic environment the randomness comes from the transitions themselves, such as
sensor noise or random external events.
In a deterministic environment, an agent can determine the consequences of its actions with
certainty, and an optimal policy can be computed in advance. In a stochastic environment, the agent
must consider the probability distribution over possible outcomes and choose actions that maximize
expected reward. The task of determining the optimal policy in a stochastic environment
is modelled as a Markov decision process (MDP).
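The contrast between the two kinds of environment can be illustrated with a minimal sketch. The one-dimensional world, the function names, and the 20% "slip" probability below are assumptions chosen for illustration only:

```python
import random

# Deterministic: the same (state, action) pair always yields exactly one
# next state, so the outcome is fully predictable.
def deterministic_move(position, step):
    return position + step

# Stochastic: the same (state, action) pair yields a distribution over next
# states. Here the action "slips" and has no effect 20% of the time
# (an assumed probability, purely for illustration).
def stochastic_move(position, step, rng=random):
    if rng.random() < 0.2:
        return position        # action failed; state unchanged
    return position + step

# The deterministic move is repeatable; the stochastic one is not.
assert deterministic_move(3, 1) == 4
```

Running `stochastic_move(3, 1)` repeatedly returns 4 most of the time but occasionally 3, so an agent planning in this environment must reason about expected outcomes rather than a single guaranteed next state.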
1.3 Classify the following environments as deterministic or stochastic:
1) Chess: deterministic environment - the outcome of each move is completely determined and
predictable.
2) Soccer: stochastic environment - the outcome of the game can be influenced by various
random factors such as the bounce of the ball, the weather, player injuries, and referee
decisions.
3) Self-driving car: partially observable stochastic environment - the car's sensors may not
provide complete and accurate information about the environment, and the presence of other
unpredictable vehicles and pedestrians introduces uncertainty and randomness into the
outcome of the car's actions.
Question 2
2.1 Discuss well-defined problems in terms of their five components.
Well-defined problems are those problems in which the initial state, goal state, set of actions,
performance measure, and solution can be explicitly defined.
The five components of a well-defined problem are:
1. Initial State: The initial state is the starting point of the problem. It defines the current situation of
the problem and the available resources. In other words, it is the state in which the problem solver
begins to work on the problem.
2. Goal State: The goal state is the desired outcome of the problem. It defines what the problem
solver is trying to achieve. The goal state can be a single state or a set of states that meet certain
criteria.
3. Set of Actions: The set of actions are the available moves that can be made by the problem solver.
The actions must be explicitly defined and must be relevant to the problem. The actions can be
deterministic or stochastic.
4. Performance Measure: The performance measure defines the criteria for evaluating the success of
the problem solver. It measures the quality of the solution provided by the problem solver. The
performance measure can be a single objective or a set of objectives.