Summary Human-Robot Interaction (0HM280)
Lecture 1 Probabilistic robotics 1
Uncertainties in robot navigation:
- Noisy sensors (random reading and calibration mistakes)
- Outdated maps (changes in furniture/position of people)
- Unknown location
- Inaccurate odometry and dead reckoning (programmed to go from A to B, but does
something different)
Solution: model uncertainties explicitly and develop ‘filters’ that update these uncertainties
Types of navigation:
- Navigation in open space
- Coastal navigation
Probabilistic multiple hypothesis tracking (doors example):
- No door detected: uniform distribution over all locations where the robot can be
- One door spotted: higher probability at the locations where there are doors → this is the
belief state/posterior belief
- Robot moves further: the probability moves with it and becomes more widely distributed, as
there is extra movement uncertainty
- When it then spots the second door, it is almost certain that it is at door number 2 (see
the sketch below)
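A minimal sketch of this doors example as a discrete Bayes (histogram) filter in Python. The corridor map, sensor model, motion step, and noise values are illustrative assumptions, not numbers from the lecture:

```python
# Door example as a histogram filter: measurement updates sharpen the
# belief, motion updates shift and spread it.
door_at = [False, True, False, False, True,
           False, False, False, True, False, False, False]  # doors at 1, 4, 8
n = len(door_at)

def measurement_update(belief, saw_door, p_hit=0.8, p_miss=0.2):
    """Reweight each cell by how well it explains the observation."""
    new = [b * (p_hit if saw_door == has_door else p_miss)
           for b, has_door in zip(belief, door_at)]
    total = sum(new)
    return [b / total for b in new]  # normalize: posterior belief

def motion_update(belief, shift=3, p_exact=0.8, p_under=0.1, p_over=0.1):
    """Shift the belief to the right, spreading it out (movement noise).
    The corridor is treated as cyclic purely to keep the sketch short."""
    return [p_exact * belief[(i - shift) % n]
            + p_under * belief[(i - shift + 1) % n]
            + p_over * belief[(i - shift - 1) % n]
            for i in range(n)]

belief = [1.0 / n] * n                              # no door seen: uniform
belief = measurement_update(belief, saw_door=True)  # first door: three peaks
belief = motion_update(belief)                      # moving spreads the peaks
belief = measurement_update(belief, saw_door=True)  # second door: one sharp peak
print(max(range(n), key=lambda i: belief[i]))       # 4, i.e. door number 2
```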
Fundamental notion of probability: we can assign a real number to a sample of a class of events
- Frequentist approach: based on counting and frequency of occurrence, problem when
there’s nothing to count
- Bayesian approach: probability as belief; a probability is a graded belief about an event,
p is a degree of belief
Axioms:
- Pr(A) is the probability that A is true; it is a number between 0 and 1. A can be true or
false, with Pr(True) = 1 and Pr(False) = 0
- Pr(A ∨ B) is the probability that A or B is true; Pr(A ∧ B) is the probability that A and B
are true
- Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B); the last term is a correction for the overlap
between the two circles (this sum rule is checked in the sketch below)
- Pr(¬A) = 1 − Pr(A) is the probability of not A
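A quick numerical check of the sum rule on a fair die; the two events are made-up examples, not from the lecture:

```python
# Check Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B) on a fair die.
omega = {1, 2, 3, 4, 5, 6}   # sample space: all outcomes of one throw
A = {2, 4, 6}                # event "the roll is even"
B = {4, 5, 6}                # event "the roll is greater than 3"

def prob(event):
    return len(event) / len(omega)   # equally likely outcomes

assert abs(prob(A | B) - (prob(A) + prob(B) - prob(A & B))) < 1e-12
print(prob(A | B))  # 4/6: the overlap {4, 6} is counted once, not twice
```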
Random variables: variables with random values, outcome of event that is uncertain is expressed in
random variables, probability is a function of random variables:
- Discrete: e.g. Pr(a single die throw rolls a 6); the outcomes are [1, 2, 3, 4, 5, 6]
   o P(X = xi) or P(xi) is a probability mass function
   o Binomial probability distribution: tossing a coin n times; a fair coin has equal
   probability of heads and tails, so p = 0.5
   o Cumulative probability distribution: the total probability up to a value, it never
   decreases
   o Poisson distribution: skewed, with most mass at low values and a tail to the right
- Continuous: e.g. Pr(temp. will be below 25 degrees tomorrow); the outcomes are continuous
   o P(X = x) or p(x) is a probability density function; it is about a range of values:
   Pr(a ≤ X ≤ b) = ∫ p(x) dx over the interval [a, b]
   o Two important continuous probability distributions (see the sketch after this list):
      Uniform probability density: equal probability for all values, X ~ U(a, b), with
      p(x) = 1/(b − a) for a ≤ x ≤ b; the density is constant within the range a to b, so
      the total area (probability) still sums to 1
      Normal or Gaussian probability density: mean µ (mu) and width or standard deviation
      σ (sigma); X ~ N(µ, σ²) is a random variable that follows a normal distribution
      Standard normal distribution: µ = 0, σ = 1
   o Exponential probability density function: used for time intervals
   o The total area under a probability density function is always 1
   o Continuous distributions always give the probability of a range; the probability of a
   single value is always 0
   o Cumulative probability density function: the probability density function is the
   derivative of the cumulative probability function
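A short sketch of these distributions using only the Python standard library; the parameter values (n = 10 tosses, p = 0.5, the N(0, 1) example) are illustrative assumptions:

```python
import math

def binomial_pmf(k, n, p):
    """P(X = k): k heads in n coin tosses with success probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Fair coin, 10 tosses: the pmf sums to 1 and the cumulative distribution
# only ever increases.
pmf = [binomial_pmf(k, 10, 0.5) for k in range(11)]
cdf = [sum(pmf[:k + 1]) for k in range(11)]
print(round(sum(pmf), 6), all(a <= b for a, b in zip(cdf, cdf[1:])))  # 1.0 True

# A single value of a continuous variable has probability 0; only a range
# has probability. Approximate Pr(-1 < X < 1) for X ~ N(0, 1) as an area:
dx = 0.001
area = sum(normal_pdf(i * dx, 0, 1) * dx for i in range(-1000, 1000))
print(round(area, 3))  # ≈ 0.683
```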
Functions of random variables:
- Basic idea: if Y = g(X), then F(y) = Pr(Y ≤ y) = Pr(g(X) ≤ y); you start with the
cumulative probability of Y and rewrite it as a cumulative probability of X
- The probability density is then obtained by taking the derivative of that cumulative
probability: the chain rule
- Example: if Y = X², then Y < 1 becomes −1 < X < 1
- So the basic plan: first find the cumulative distribution function F(y) = Pr(Y ≤ y),
rewrite the event in terms of X (whose distribution is given) so you can fill it in, and
then find the probability density function of Y by differentiating: p(y) = dF(y)/dy (see
the sketch below)
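A minimal check of this plan for Y = X² with X ~ U(−1, 1); the choice of g and of the distribution of X are assumptions for illustration:

```python
# Analytically: F(y) = Pr(X**2 <= y) = Pr(-sqrt(y) <= X <= sqrt(y)) = sqrt(y)
# for 0 <= y <= 1 when X ~ U(-1, 1), so p(y) = dF/dy = 1 / (2 * sqrt(y)).
import random

random.seed(0)
samples = [random.uniform(-1, 1) ** 2 for _ in range(200_000)]  # draws of Y

y = 0.25
empirical_cdf = sum(s <= y for s in samples) / len(samples)
print(round(empirical_cdf, 3), y ** 0.5)  # both ≈ 0.5, matching F(y) = sqrt(y)
```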
Joint probability distribution: the probability density function of more than one variable:
- Discrete: Pr(X = x and Y = y) → P(x, y); continuous: Pr(a < X < b and c < Y < d) → p(x, y)
- If X and Y are independent, then P(x, y) = P(x) · P(y) (checked in the sketch below)
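A tiny made-up joint table to illustrate the independence check; none of the numbers come from the lecture:

```python
# A joint distribution over two binary variables; the numbers are made up
# so that the factorization happens to hold.
P = {('x0', 'y0'): 0.3, ('x0', 'y1'): 0.3,
     ('x1', 'y0'): 0.2, ('x1', 'y1'): 0.2}

Px = {x: sum(p for (xi, _), p in P.items() if xi == x) for x in ('x0', 'x1')}
Py = {y: sum(p for (_, yi), p in P.items() if yi == y) for y in ('y0', 'y1')}

# X and Y are independent iff every joint entry equals the product of its
# marginals.
print(all(abs(P[(x, y)] - Px[x] * Py[y]) < 1e-12 for (x, y) in P))  # True
```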
Conditional probability: we want to know the probability of one variable X given the value of Y
- Pr(X | Y = y)
- The conditioning value y is fixed; as a function of x the conditional integrates to 1, but
as a function of y it does not
- Discrete: P(x|y) = P(x, y) / P(y)
- Continuous: p(x|y) = p(x, y) / p(y)
- Law of total probability: discrete P(x) = Σy P(x|y) P(y), continuous p(x) = ∫ p(x|y) p(y) dy
- So P(x, y) = P(x|y) P(y)
- Bayes formula follows from this definition and the law of total probability:
P(x|y) = P(y|x) P(x) / P(y) (see the sketch below)
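The same definitions as runnable formulas, on a made-up joint table; none of the numbers come from the lecture:

```python
# The discrete formulas on an assumed joint table P(x, y); the weather
# naming and numbers are illustrative only.
P = {('rain', 'wet'): 0.20, ('rain', 'dry'): 0.05,
     ('sun', 'wet'): 0.10, ('sun', 'dry'): 0.65}

def marginal_y(y):
    """Law of total probability: P(y) = sum over x of P(x, y)."""
    return sum(p for (_, yi), p in P.items() if yi == y)

def conditional(x, y):
    """P(x|y) = P(x, y) / P(y); y is fixed, and summing over x gives 1."""
    return P[(x, y)] / marginal_y(y)

print(round(conditional('rain', 'wet') + conditional('sun', 'wet'), 6))  # 1.0

# Bayes: P(x|y) = P(y|x) P(x) / P(y) gives the same number as the definition.
p_x = P[('rain', 'wet')] + P[('rain', 'dry')]   # P(x = rain)
p_y_given_x = P[('rain', 'wet')] / p_x          # P(y = wet | x = rain)
print(p_y_given_x * p_x / marginal_y('wet'), conditional('rain', 'wet'))
```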
Bayesian statistics:
- Likelihood: sensory information, a function of the hypothesis, p(observation | hypothesis);
e.g. what is the probability of observing 1.5 m given that a person really is 1.6 m tall
- Prior: independent of the observation; prior knowledge about the hypothesis
- Posterior: reflects the belief in the hypothesis and takes the prior knowledge into
account, P(x|y) (made numeric in the sketch below)
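The height example made numeric; the Gaussian prior and the noise values (σ = 0.05 m for the prior, σ = 0.04 m for the sensor) are assumptions, not lecture numbers:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

heights = [h / 100 for h in range(140, 181)]               # hypotheses 1.40 .. 1.80 m
prior = [normal_pdf(h, 1.60, 0.05) for h in heights]       # belief before measuring
likelihood = [normal_pdf(1.50, h, 0.04) for h in heights]  # p(observe 1.5 | height h)

unnormalized = [pr * li for pr, li in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]              # belief after measuring

best = heights[posterior.index(max(posterior))]
print(best)  # ≈ 1.54: the prior pulls the noisy 1.50 m reading toward 1.60 m
```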
Conditioning: you can add a conditioning variable, e.g. z, to all terms:
P(x|y, z) = P(y|x, z) P(x|z) / P(y|z)
Conditional independence: P(x, y | z) = P(x|z) P(y|z)
Is the same as P(x|z) = P(x|y, z) and P(y|z) = P(y|x, z) (checked in the sketch below)
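A small made-up three-variable table, constructed so that x and y are conditionally independent given z, to check the factorization numerically:

```python
from itertools import product

# Build P(x, y, z) = P(z) * P(x|z) * P(y|z) from assumed tables, so the
# conditional independence holds by construction.
Pz = {0: 0.4, 1: 0.6}
Px_z = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # keys (x, z)
Py_z = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.9, (1, 1): 0.1}  # keys (y, z)

P = {(x, y, z): Pz[z] * Px_z[(x, z)] * Py_z[(y, z)]
     for x, y, z in product((0, 1), repeat=3)}

# Check P(x, y | z) = P(x|z) * P(y|z) for every combination; by the product
# rule this is equivalent to P(x | y, z) = P(x | z).
for x, y, z in product((0, 1), repeat=3):
    assert abs(P[(x, y, z)] / Pz[z] - Px_z[(x, z)] * Py_z[(y, z)]) < 1e-12
print("conditional independence holds")
```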
Lecture 2 Design of interaction scenarios
Interaction scenario: describes a basic story, a combination of simple robot actions and
interactions, that will lead to the accomplishment of a goal that a user of the robot needs
We need interaction scenarios because:
- Restriction of robot AI: it describes the key interactions, not all possible interactions
- Lack of domain knowledge: an interaction scenario is needed through which the robot can
execute tasks as trainer/teacher/therapist
Issues in HRI scenario creation: the behavior of the robot can differ from what is expected,
and the environment can change. Solution: try to include solutions for things that can go wrong
Creating an interaction scenario:
- Goal of the scenario and the affordances of the robot: what is the scenario, who is the
user, and how can a robot with a certain embodiment and intelligence help achieve the goal
o Theory of mind (Sally-Anne test): children say that Sally, who left the room, will
look in the spot where the ball was moved to; they don't realize that Sally did not see
the move and will think the ball is still in the first place