INTELLIGENT SYSTEMS
Famke Nouwens
Lecture 1+2 – Intelligent agents
There is no commonly accepted definition of intelligence, but different forms are distinguished, e.g. social, emotional, sensorimotor, etc.
Indirect goal of AI: to make the everyday notion of intelligence computationally precise.
The Turing test was designed to determine the intelligence of AI: “a computer program is intelligent if it answers questions so that its responses are indistinguishable from a human’s.”
Motivations behind and requirements on AI:
- Visionary: build artefacts that produce intelligent behaviour in the manner of humans
- Pragmatic: build artefacts that show behaviour comparable to human intelligent behaviour
Two types of AI are also distinguished:
- Weak AI: acts as if intelligent
- Strong AI: actually thinks
Types of AI:
· Knowledge-based AI: since 1956. Its guiding model is an individual human, and its guiding assumption is that intelligence is knowledge representation and processing. Hypothesis: the symbol system hypothesis – the ability to produce and manipulate symbols is a necessary and sufficient condition for intelligence. This is a top-down design of intelligence: start with high-level concepts at the knowledge level and break them down into smaller, programmable units. The major problem is symbol grounding: how does it all relate to the real world? How do words get their meaning?
· Behaviour-based AI: since 1985. Its guiding models are individual humans and animals, and its guiding assumptions are that intelligence is built upon elementary behavioural activities and that sensorimotor coupling is essential. Hypothesis: physical grounding – rooting symbols in the real world (in which the artefact acts) is a necessary condition for intelligence: no rooting → no meaning → no intelligent behaviour. This is a bottom-up design of intelligence: create basic elements and allow the system to evolve to best suit its environment.
· Connectionism (neural networks): has come around three times. Its guiding model is the human brain, and its guiding assumption is that information is processed by very many simple, interconnected units interacting at a low signal-processing level. Key characteristics: parallel, distributed and sub-symbolic information processing.
· Distributed AI: since 1980. Its guiding model is a group of humans (human society), and its guiding assumptions are that acting together is characteristic of intelligent beings (“no intelligence without interaction”) and that the interacting units operate at the knowledge level (rather than the signal level, as in connectionism). Its key issues are communication, coordination, cooperation, organization, etc. Why deal with distributed intelligence?
· Some problems can only be solved on the basis of high-level interaction among intelligent entities.
· Parallelism, scalability, robustness
· The close relationship between intelligence and interaction
· An intuitively clear approach to complex applications
Agents are “systems at level 1”. There is no commonly accepted definition of an agent, since agents are applied differently by different people and the definitions are often based on intuitive understanding. Emerging standard view: an agent is a computational entity that is situated (part of the environment) and that is capable of flexible, autonomous activity – action and interaction – in order to meet its design objectives.
Some characteristics of agency that are sometimes claimed to be essential are rationality, mobility, adaptivity and introspection. However, a minimally intelligent agent is:
- Pro-active: takes the initiative to satisfy its (delegated) goals
- Reactive: perceives and responds to the environment in a timely manner

- Socially able: capable of interacting with other agents and humans, which includes cooperation and negotiation.
A difficulty with agents and intelligent systems is that goal-directed systems and reactive systems are each simple on their own, but combining both in a good balance can be very complex: the agent/system must react to new situations while still focusing on getting things done.
To allow for qualitatively different system perspectives and different levels of abstraction, the agent and object concepts are complementary rather than mutually exclusive. Both encapsulate:
- Identity: who
- State: what
- Passive behaviour: how, if invoked
But agents additionally encapsulate active behaviour: when, why, with whom, whether at all. There is a gradual transition from agent to object, as the table and sketch below illustrate.
                 Monolithic   Modular    OO         AO
Unit behaviour   Nonmodular   Modular    Modular    Modular
Unit state       External     External   Internal   Internal
Unit invocation  External     External   External   Internal

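A minimal sketch of the agent/object distinction (all class and method names invented for illustration): the object only offers passive behaviour that must be invoked from outside, while the agent additionally decides when and whether to act at all.

class ThermostatObject:
    """Object: identity, state and passive behaviour (how, if invoked)."""
    def __init__(self):
        self.target = 20.0                      # state: WHAT

    def set_heating(self, on: bool):            # passive: HOW, if invoked
        print("heating", "on" if on else "off")

class ThermostatAgent(ThermostatObject):
    """Agent: additionally encapsulates active behaviour --
    it decides WHEN, WHY and WHETHER to act at all."""
    def step(self, temperature: float):
        # The agent itself judges whether acting serves its design
        # objective (keep the temperature near the target).
        if temperature < self.target - 1.0:
            self.set_heating(True)              # pro-active: pursues its goal
        elif temperature > self.target + 1.0:
            self.set_heating(False)             # reactive: responds to change
        # otherwise: the agent chooses not to act at all

agent = ThermostatAgent()
for temp in (18.0, 20.0, 22.5):
    agent.step(temp)                            # on, (no action), off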
Expert system: interacts with the user to collect facts and supports a decision process; a tiny sketch follows.
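As an illustration of the idea, a minimal forward-chaining rule engine of the kind such systems are built on; the rules and facts are invented here, and a real expert system would collect the facts interactively from the user.

# (conditions, conclusion): a rule fires when all its conditions are known facts
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly fire rules whose conditions hold until nothing changes."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain({"fever", "cough", "short_of_breath"})))
# -> ['cough', 'fever', 'flu_suspected', 'see_doctor', 'short_of_breath']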
Agent environments:
- Accessible vs inaccessible: what level of information does the agent get about its environment? Is it complete? Accurate? Up to date?
- Deterministic vs non-deterministic: is there a guaranteed effect for actions? Is there uncertainty about the next state? → sufficiently complex determinism is just as bad as non-determinism.
- Episodic vs non-episodic: independent stages of behaviour, where the shorter the episode, the easier. Only the current (or recent) percept is relevant, and one episode does not relate to another.
- Static vs dynamic: is the influence of the agent the only cause of change?
- Discrete vs continuous: is there a fixed number of actions and perceptions?
Different agent architectures:
- Logic-based: fits the symbolic or knowledge-based AI view, where intelligent behaviour results from a symbolic representation of the environment combined with logical deduction or theorem proving. Components:
· Theory of agency 𝜌: describes, in an executable way, how intelligent agents behave
· Belief database Δ: the information the agent has about the environment
· Δ ⊢𝜌 𝜑 means that 𝜑 can be derived from Δ using the rules of 𝜌 (“if you can prove this, then do this”). Action selection thus distinguishes actions that are implied, actions that are merely allowed, and the case where no action can be justified; see the sketch after this bullet.
· Problems:
▪ Symbol grounding: coupling perception with symbolic facts
▪ Reasoning takes (too much) time
▪ Very hard to build a sufficiently complete model of a complex environment
· But the logic-based approach is not dead: symbol grounding is starting to work (deep learning for vision), hardware is getting ridiculously fast, and logical policies can be learned.
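A minimal sketch of the logic-based action-selection rule just described; derives() is only a stand-in for a real theorem prover applying the rules of 𝜌 to Δ, and the action names are invented.

ACTIONS = ["move_forward", "turn_left", "wait"]

def derives(beliefs: set, formula: str) -> bool:
    """Placeholder for Delta |-_rho formula: here just set membership."""
    return formula in beliefs

def select_action(beliefs: set):
    # 1. Prefer an action that is explicitly implied: Delta |- Do(a)
    for a in ACTIONS:
        if derives(beliefs, f"Do({a})"):
            return a
    # 2. Otherwise take any action that is not forbidden: Delta |/- not Do(a)
    for a in ACTIONS:
        if not derives(beliefs, f"not Do({a})"):
            return a
    # 3. Otherwise no action can be justified
    return None

beliefs = {"not Do(move_forward)", "Do(turn_left)"}
print(select_action(beliefs))   # -> turn_left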
- Planning: all about using a model of the environment to determine the best action to take. The theory of planning is well developed (success story: MCTS).
- Reactive: behavioural, situated agents. Intelligence emerges from the interaction between simple behaviours and the environment and cannot be disembodied (i.e. it cannot result from thinking/reasoning alone).
· Subsumption architecture: decision making is established through a set of behaviours, where each behaviour accomplishes some task (e.g. as an FSM). There are no complex symbolic representations and no symbolic reasoning: situation → action. A subsumption hierarchy resolves cases where multiple behaviours choose conflicting actions (e.g. in autonomous cars); a minimal sketch follows the list below.
▪ Advantages: simplicity, computational tractability, robustness against failure, and it can be quite elegant.
▪ Problems:
• Only local information is used, no model of environment
• Short term view is inherent
• Emergence can be hard to predict
• Dynamics of interactions can become too complex to design
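A minimal subsumption sketch (the behaviours are invented): behaviours are ordered in a hierarchy, and the highest-priority behaviour whose condition fires suppresses everything below it.

# Behaviours as (condition on percept, action), highest priority first.
behaviours = [
    (lambda p: p["obstacle_ahead"], "brake"),
    (lambda p: p["battery_low"],    "return_to_dock"),
    (lambda p: True,                "wander"),        # default behaviour
]

def subsumption_step(percept: dict) -> str:
    """Situation -> action: no symbolic model, no symbolic reasoning."""
    for condition, action in behaviours:
        if condition(percept):        # higher layers subsume lower ones
            return action

print(subsumption_step({"obstacle_ahead": False, "battery_low": True}))
# -> return_to_dock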
· Reinforcement learning in Markov Decision Processes:
▪ Learns a reactive policy
▪ The environment is assumed to be Markovian
▪ Globally optimal behaviour through local decisions is made possible by a value function (value = sum of discounted future rewards); see the sketch below.
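A minimal value-iteration sketch on a toy deterministic MDP (states, actions and rewards invented) showing how local Bellman updates make each state's value equal the discounted sum of future rewards:

GAMMA = 0.9                          # discount factor
STATES = ["s0", "s1", "goal"]
# transitions[state][action] = (next_state, reward); "goal" is terminal
transitions = {
    "s0":   {"right": ("s1", 0.0)},
    "s1":   {"right": ("goal", 1.0), "left": ("s0", 0.0)},
    "goal": {},
}

V = {s: 0.0 for s in STATES}
for _ in range(50):                  # iterate until (approximately) converged
    for s in STATES:
        if transitions[s]:
            # Bellman update: best local decision given current estimates
            V[s] = max(r + GAMMA * V[s2] for (s2, r) in transitions[s].values())

print(V)   # V[s1] -> 1.0, V[s0] -> 0.9: greedy local choices become globally optimal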
- Belief/desire/intention (BDI): uses practical reasoning.
1. Deliberation: what are the goals we want to achieve?
2. Means-end reasoning: how are we going to achieve these goals?
· Intentions:
▪ Drive means-end reasoning, lead to actions
▪ Constrain future deliberation, restrict reasoning
▪ Persist until achieved, believed to be unachievable, or their purpose is gone
▪ Influence beliefs for future reasoning
· This results in a trade-off between accomplishing intentions through direct action and stopping to reconsider the current intentions.
▪ Bold agent: never stops to reconsider
▪ Cautious agent: constantly stops to reconsider
· If the world/environment changes a lot, it is better to be cautious.
· The components are (a schematic loop follows this list):
▪ Current beliefs (information about the environment)
▪ BR function: updates beliefs according to perceptions
▪ GO function: generates the available options/desires
▪ Current options/desires (must be consistent)
▪ Filter function: the deliberation / intention-revision process
▪ Current intentions (the agent’s focus)
▪ Action selection: translates intentions into action
· Usually this implies representing the intentions as a stack or hierarchy, to make action selection and prioritization possible.
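A schematic version of the BDI control flow with toy components (beliefs, desires and actions invented); it only illustrates the BR → options → filter → action-selection cycle, not a real BDI implementation.

def br(beliefs: dict, percept: dict) -> dict:
    """Belief revision: fold the new percept into the beliefs."""
    updated = dict(beliefs)
    updated.update(percept)
    return updated

def generate_options(beliefs: dict) -> list:
    """GO function: the desires available in the current situation."""
    desires = []
    if beliefs.get("thirsty"):
        desires.append("drink")
    if beliefs.get("tired"):
        desires.append("sleep")
    return desires

def filter_intentions(beliefs: dict, desires: list, intentions: list) -> list:
    """Deliberation: keep unachieved intentions, else adopt a new desire."""
    kept = [i for i in intentions if not beliefs.get(f"{i}_done")]
    return kept or desires[:1]        # intentions persist until achieved

def select_action(intentions: list) -> str:
    """Means-end reasoning: translate the current focus into an action."""
    return f"do_{intentions[0]}" if intentions else "idle"

beliefs, intentions = {}, []
for percept in ({"thirsty": True},
                {"thirsty": False, "drink_done": True, "tired": True}):
    beliefs = br(beliefs, percept)
    desires = generate_options(beliefs)
    intentions = filter_intentions(beliefs, desires, intentions)
    print(select_action(intentions))  # -> do_drink, then do_sleep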
- Layered: for when there is a need for both pro-active and reactive behaviour: the planning for goals depends on the current conditions, and the agent must respond to changes in the environment.
· Two types:
▪ Vertically layered: one or two passes; the input is perceptual input, the output is action.
• Example (2-pass): InteRRaP, which has bottom-up activation and top-down execution. Its layers deal with social interaction (about others), everyday behaviour (about itself) and reactive behaviour (about the environment).
▪ Horizontally layered: the number of desired behaviours equals the number of layers, which may need a mediator function if actions contradict. Central control can become a bottleneck if it is complex.
• Example: TouringMachines, which keeps a symbolic representation of the state of all entities and constructs plans to achieve the agent’s objectives.