1. Introduction to Multi-Agent Systems
Agent characteristics
An agent is something that acts. It is a (distinct portion of a) computer program that
represents a social actor: an individual, an organization, a firm, an avatar, a robot, a
machine, an app, etc. Typically, we expect a computer agent to operate autonomously: it makes
its decisions by itself and is not controlled by someone else. A computer agent perceives its
environment and tends to persist over a longer period of time. In other words, it observes what
is happening in the environment, reacts to it, and keeps doing so for some time. Agents often
adapt to change, and a computer agent may create and pursue goals.
We can consider multi-agent systems (MAS), also referred to as agent-based models (ABM). A MAS
is a system of multiple agents that are situated in some environment. They can sense and act in
that environment, they communicate with each other, and they solve problems collectively. The
most important point is that the collective behavior of a MAS is often more than just the sum of
the behavior of the individual agents; it is not simply adding things up.
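As a minimal sketch of this idea (all names and numbers below are invented for illustration): several agents repeatedly sense a shared environment and act in it, and the overall outcome depends on the combination of all their actions rather than on any single agent.

class Agent:
    def __init__(self, name):
        self.name = name

    def sense(self, environment):
        # each agent reads (part of) the shared environment state
        return environment["temperature"]

    def act(self, observation):
        # a trivial individual rule: push the temperature towards 20 degrees
        return -1.0 if observation > 20 else 1.0


def run(steps=5):
    environment = {"temperature": 25.0}
    agents = [Agent("a1"), Agent("a2"), Agent("a3")]
    for _ in range(steps):
        # the collective effect per step combines all individual actions
        actions = [agent.act(agent.sense(environment)) for agent in agents]
        environment["temperature"] += 0.5 * sum(actions)
    return environment["temperature"]


print(run())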
Characteristics of Multi-Agent Systems
- Agent design
• Physical VS programmatic
• Heterogeneous (different hardware or design) VS homogeneous (designed in an
identical way and have a priori the same capabilities)
- Environment
• Static (chess board)
• Dynamic (football field); MAS environments tend to be dynamic.
- Perception (what can an agent perceive)
• Distributed: the agents may observe data that differ spatially (appear at different
locations), temporally (arrive at different times), or semantically (require different
interpretations).
• Fully/partially observable: under partial observability, optimal planning may be
intractable, so we need to take into account that agents do not know everything.
• Sensor fusion: how the agents can optimally combine their perceptions in order to
increase their collective knowledge about the current state.
- Control (how the agent acts)
• Agents need to decide on their own what to do, so control is decentralized: there is
no single program that controls all agents.
• Advantage: more robustness and fault tolerance; if one agent fails somewhere, the
group may still be able to pursue the goal.
• Disadvantage: it is more difficult to decide how tasks and decisions should be divided
among the agents. People often rely on game theory for this. Decentralized control requires coordination.
- Knowledge (agents have knowledge)
• Agents have knowledge about the world, and the world includes other agents as
well. But the level of common knowledge may differ. MAS agents should consider
what other agents know.
- Communication (agents communicate with each other)
• Two-way system: sender and receiver.
• This is necessary for coordination and negotiation.
• Typically, in a MAS we should think of communication protocols for heterogeneous agents
that interact with each other.
Applications
- E-commerce, trading, auctions
- Robotics
- Computer games
- Social and cognitive science
- Internet
- Human-machine learning
Challenges
- How can we understand and solve problems with multi-agent systems?
- How can agents maintain a shared understanding of their environment?
- How can we design agents that coordinate and resolve conflicts?
- What kind of learning mechanisms are there for agents?
- How can agents of different types interact effectively?
Agent types (these are not mutually exclusive; a short sketch of the first two follows the list):
• Simple reflex agents (react to stimuli).
• Model-based reflex agents.
• Goal-based agents.
• Utility-based agents.
• Learning agents.
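As a rough illustration of the first two agent types (the names and rules below are made up for the example): a simple reflex agent maps only the current percept to an action, while a model-based reflex agent also maintains an internal state built from earlier percepts.

def simple_reflex_agent(percept):
    # condition-action rule on the current percept only, no memory
    return "brake" if percept == "obstacle" else "drive"


class ModelBasedReflexAgent:
    def __init__(self):
        # internal state: a simple model of the world built from past percepts
        self.obstacle_seen_recently = False

    def act(self, percept):
        # first update the internal model, then apply a rule to it
        if percept == "obstacle":
            self.obstacle_seen_recently = True
        elif percept == "clear":
            self.obstacle_seen_recently = False
        return "drive_slowly" if self.obstacle_seen_recently else "drive"


agent = ModelBasedReflexAgent()
print(simple_reflex_agent("obstacle"))            # brake
print(agent.act("obstacle"), agent.act("clear"))  # drive_slowly drive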
2. Rational Agents and the problem of decision
making (Goal-based agents)
Rational agents
Intelligent agent/goal-based agent/rational agent: an agent is rational if it always selects an action
that optimizes a known performance measure, given what the agent knows so far.
An agent consists of sensors to perceive what is in the environment, an agent program
(or agent function), and actuators to carry out its actions.
An agent is situated in a certain environment, and it perceives what is happening in that
environment, as far as this is observable to the agent. That may be everything in the
environment, but it may also be only a part of it.
The perceived information forms the input to the agent program; the agent program contains
rules or functions that decide how to act, so the input is translated into actuator control.
A MAS has multiple agents inside the environment, which can interact with each other.
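To make this perceive-decide-act cycle concrete, here is a minimal sketch (class and function names are illustrative, not from the course material): the sensors read a percept from the environment, the agent program maps it to an action, and the actuators apply that action back to the environment.

class Environment:
    def __init__(self):
        self.state = 0

    def percept(self):
        # what the sensors can observe (possibly only part of the true state)
        return self.state

    def apply(self, action):
        # the effect of the actuators on the environment
        self.state += action


def agent_program(percept):
    # rules/functions mapping the perceived input to actuator control
    return 1 if percept < 10 else 0


env = Environment()
for _ in range(20):
    p = env.percept()          # sensors
    action = agent_program(p)  # agent program
    env.apply(action)          # actuators
print(env.state)  # 10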
A rational agent is an agent whose program is designed to optimize the appropriate
performance measure. A performance measure is a function that evaluates a sequence of
environment states (not agent states). We want the agent to optimize its behavior in terms of
its goals in the environment. Consider an agent playing football: the performance measure
could reward having the ball inside the opponent's goal. To evaluate this, we could look at
where the ball is, its distance to the goal, and so on. A rational agent continuously evaluates
the sequence of environment states and tries to optimize it.
However, the agent does not know everything and does not know what the other agents are doing,
so it is never sure about the effects of its actions. Therefore, rather than optimizing the
performance measure itself, it should optimize the expected performance measure. To improve
this, an agent may also need to gather more information: rather than immediately deciding on an
action, a rational agent may need to perform information-gathering ("looking") actions. While
looking, an agent can store information in its own memory and use it to optimize its expected performance.
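As a rough illustration of optimizing the expected performance measure (all action names, probabilities, and values below are made up), the following sketch weighs each possible outcome of an action by its probability and picks the action with the highest expected value:

def expected_performance(outcomes):
    # outcomes: list of (probability, performance measure value) pairs
    return sum(p * value for p, value in outcomes)


def choose_action(action_models):
    # pick the action whose possible outcomes give the highest expected value
    return max(action_models, key=lambda a: expected_performance(action_models[a]))


action_models = {
    "shoot":   [(0.2, 1.0), (0.8, 0.0)],  # small chance of a big payoff
    "pass":    [(0.6, 0.4), (0.4, 0.1)],  # safer, smaller payoff
    "dribble": [(0.5, 0.3), (0.5, 0.0)],
}
print(choose_action(action_models))  # "pass" (expected value 0.28)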
Computational agent: an agent designed to solve a particular task.
Why is rationality important?
• To efficiently achieve the goal.
• It makes the designer's life easier.
• It lets us predict and reason about the behavior of the agent.
Rational agent and the problem of decision making
How can we design or model rational agents?
Main assumptions:
o discrete time steps: t = 0, 1, 2, ...
o at each time step:
the agent takes an action a_t, and
the agent receives an observation O_t.
The agent can choose to maintain the history of actions and observations from previous
time steps in order to reason about the next action to take:
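Assuming the notation above (actions a_t, observations O_t), one way to write the history up to time t is:
h_t = (a_0, O_0, a_1, O_1, ..., a_t, O_t)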
The set of all possible actions is called the action set or action space, denoted by A.
Policy
Policy: a policy maps a history to an action. It tells the agent which action to take given the
history:
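With the history h_t written as above, the policy can be expressed, for example, as a mapping
pi : H -> A, with a_{t+1} = pi(h_t),
where H is the set of all possible histories and A is the action space.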