SOLUTION MANUAL FOR
PATTERN RECOGNITION AND MACHINE LEARNING
EDITED BY
ZHENGQI GAO
State Key Lab. of ASIC and System
School of Microelectronics
Fudan University
NOV. 2017
0.1 Introduction
Problem 1.1 Solution
We set the derivative of the error function $E$ with respect to the vector $\mathbf{w}$ equal to zero (i.e. $\frac{\partial E}{\partial \mathbf{w}} = 0$); the resulting $\mathbf{w} = \{w_i\}$ minimizes the error function $E$. To solve this problem, we calculate the derivative of $E$ with respect to every $w_i$ and set each of them to zero. Based on (1.1) and (1.2) we can obtain:
$$\frac{\partial E}{\partial w_i} = \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}\, x_n^{\,i} = 0$$

$$\Rightarrow \quad \sum_{n=1}^{N} y(x_n, \mathbf{w})\, x_n^{\,i} = \sum_{n=1}^{N} x_n^{\,i}\, t_n$$

$$\Rightarrow \quad \sum_{n=1}^{N} \Big( \sum_{j=0}^{M} w_j x_n^{\,j} \Big) x_n^{\,i} = \sum_{n=1}^{N} x_n^{\,i}\, t_n$$

$$\Rightarrow \quad \sum_{n=1}^{N} \sum_{j=0}^{M} w_j\, x_n^{\,(j+i)} = \sum_{n=1}^{N} x_n^{\,i}\, t_n$$

$$\Rightarrow \quad \sum_{j=0}^{M} \Big( \sum_{n=1}^{N} x_n^{\,(j+i)} \Big) w_j = \sum_{n=1}^{N} x_n^{\,i}\, t_n$$
If we denote $A_{ij} = \sum_{n=1}^{N} x_n^{\,i+j}$ and $T_i = \sum_{n=1}^{N} x_n^{\,i}\, t_n$, the equation above can be written exactly as (1.122). Therefore the problem is solved.
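The linear system above can also be solved numerically. Below is a minimal sketch, assuming NumPy and a small hypothetical dataset (the data values and the choice of $M$ are illustrative, not from the text), that builds $A_{ij}$ and $T_i$ as defined above and solves $A\mathbf{w} = \mathbf{T}$:

import numpy as np

# Hypothetical training data (x_n, t_n); any 1-D dataset will do.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
t = np.sin(2 * np.pi * x)

M = 3  # polynomial order (illustrative choice)
idx = np.arange(M + 1)
# A_ij = sum_n x_n^(i+j) and T_i = sum_n x_n^i * t_n, as defined above
A = np.array([[np.sum(x ** (i + j)) for j in idx] for i in idx])
T = np.array([np.sum((x ** i) * t) for i in idx])

w = np.linalg.solve(A, T)  # coefficients w_0 .. w_M minimizing E(w)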
Problem 1.2 Solution
This problem is similar to Problem 1.1; the only difference is the last term on the right side of (1.4), the penalty term. So we proceed as in Problem 1.1:
$$\frac{\partial E}{\partial w_i} = \sum_{n=1}^{N} \{ y(x_n, \mathbf{w}) - t_n \}\, x_n^{\,i} + \lambda w_i = 0$$

$$\Rightarrow \quad \sum_{j=0}^{M} \sum_{n=1}^{N} x_n^{\,(j+i)}\, w_j + \lambda w_i = \sum_{n=1}^{N} x_n^{\,i}\, t_n$$

$$\Rightarrow \quad \sum_{j=0}^{M} \Big\{ \sum_{n=1}^{N} x_n^{\,(j+i)} + \delta_{ji} \lambda \Big\} w_j = \sum_{n=1}^{N} x_n^{\,i}\, t_n$$
where

$$\delta_{ji} = \begin{cases} 0 & j \neq i \\ 1 & j = i \end{cases}$$
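In matrix form, the $\delta_{ji}\lambda$ term simply adds $\lambda$ along the diagonal. A minimal sketch of the regularized solve, assuming NumPy, the same hypothetical data as in the Problem 1.1 sketch, and an arbitrary value of $\lambda$:

import numpy as np

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
t = np.sin(2 * np.pi * x)

M, lam = 3, 1e-3  # polynomial order and assumed regularization strength
idx = np.arange(M + 1)
A = np.array([[np.sum(x ** (i + j)) for j in idx] for i in idx])
T = np.array([np.sum((x ** i) * t) for i in idx])

# Adding lambda on the diagonal realizes the delta_ji term.
w_reg = np.linalg.solve(A + lam * np.eye(M + 1), T)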
Problem 1.3 Solution
This problem can be solved by Bayes' theorem. The probability of selecting an apple, $P(a)$, is:

$$P(a) = P(a|r)P(r) + P(a|b)P(b) + P(a|g)P(g) = \frac{3}{10} \times 0.2 + \frac{1}{2} \times 0.2 + \frac{3}{10} \times 0.6 = 0.34$$
Based on Bayes' theorem, the probability that a selected orange came from the green box, $P(g|o)$, is:

$$P(g|o) = \frac{P(o|g)P(g)}{P(o)}$$
We calculate the probability of selecting an orange, $P(o)$, first:

$$P(o) = P(o|r)P(r) + P(o|b)P(b) + P(o|g)P(g) = \frac{4}{10} \times 0.2 + \frac{1}{2} \times 0.2 + \frac{3}{10} \times 0.6 = 0.36$$
Therefore we can get:

$$P(g|o) = \frac{P(o|g)P(g)}{P(o)} = \frac{\frac{3}{10} \times 0.6}{0.36} = 0.5$$
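The arithmetic is easy to verify programmatically. A minimal sketch in plain Python, using the conditional probabilities read off above:

# Priors over boxes: red, blue, green.
P_box = {'r': 0.2, 'b': 0.2, 'g': 0.6}
# Conditional probabilities of drawing an apple / orange from each box.
P_apple = {'r': 3/10, 'b': 1/2, 'g': 3/10}
P_orange = {'r': 4/10, 'b': 1/2, 'g': 3/10}

P_a = sum(P_apple[k] * P_box[k] for k in P_box)   # 0.34
P_o = sum(P_orange[k] * P_box[k] for k in P_box)  # 0.36
P_g_given_o = P_orange['g'] * P_box['g'] / P_o    # 0.5
print(P_a, P_o, P_g_given_o)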
Problem 1.4 Solution
This problem requires calculus, especially the chain rule. We calculate the derivative of $p_y(y)$ with respect to $y$, according to (1.27):

$$\frac{dp_y(y)}{dy} = \frac{d\big(p_x(g(y))\,|g'(y)|\big)}{dy} = \frac{dp_x(g(y))}{dy}\,|g'(y)| + p_x(g(y))\,\frac{d|g'(y)|}{dy} \quad (*)$$
The first term in the above equation can be further simplified:

$$\frac{dp_x(g(y))}{dy}\,|g'(y)| = \frac{dp_x(g(y))}{dg(y)}\,\frac{dg(y)}{dy}\,|g'(y)| \quad (**)$$
If $\hat{x}$ is the maximum of the density over $x$, we can obtain:

$$\left.\frac{dp_x(x)}{dx}\right|_{\hat{x}} = 0$$
Therefore, when $y = \hat{y}$ such that $\hat{x} = g(\hat{y})$, the first term on the right side of $(**)$ will be 0, so the first term in $(*)$ equals 0. However, because of the second term in $(*)$, the derivative may not equal 0. When the transformation is linear (e.g. $x = ay + b$), the second term in $(*)$ vanishes. A simple example:
$$p_x(x) = 2x, \quad x \in [0, 1] \quad \Rightarrow \quad \hat{x} = 1$$

And given that:

$$x = \sin(y)$$

Therefore, $p_y(y) = 2\sin(y)\,|\cos(y)|$, $y \in [0, \frac{\pi}{2}]$, which can be simplified:

$$p_y(y) = \sin(2y), \quad y \in [0, \frac{\pi}{2}] \quad \Rightarrow \quad \hat{y} = \frac{\pi}{4}$$

However, it is quite obvious that:

$$\hat{x} \neq \sin(\hat{y})$$
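A quick numerical sketch of this example, assuming NumPy, confirms that the mode of $p_y$ does not map to the mode of $p_x$ under the nonlinear change of variables $x = \sin(y)$:

import numpy as np

y = np.linspace(0.0, np.pi / 2, 10001)
p_y = np.sin(2 * y)           # transformed density 2*sin(y)*|cos(y)|

y_hat = y[np.argmax(p_y)]     # ~ pi/4
x_hat = 1.0                   # mode of p_x(x) = 2x on [0, 1]
print(y_hat, np.sin(y_hat))   # sin(y_hat) ~ 0.707, not x_hat = 1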
Problem 1.5 Solution
This problem takes advantage of the linearity of expectation:

$$\begin{aligned}
\mathrm{var}[f] &= \mathbb{E}[(f(x) - \mathbb{E}[f(x)])^2] \\
&= \mathbb{E}[f(x)^2 - 2 f(x)\mathbb{E}[f(x)] + \mathbb{E}[f(x)]^2] \\
&= \mathbb{E}[f(x)^2] - 2\mathbb{E}[f(x)]^2 + \mathbb{E}[f(x)]^2
\end{aligned}$$

$$\Rightarrow \quad \mathrm{var}[f] = \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2$$
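The identity can be sanity-checked with a Monte Carlo estimate. A sketch, assuming NumPy and an arbitrary choice of $f$:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
f = x ** 2 + 1                       # arbitrary f(x) for illustration

lhs = np.mean((f - f.mean()) ** 2)   # E[(f - E[f])^2]
rhs = np.mean(f ** 2) - f.mean() ** 2
print(lhs, rhs)                      # agree up to floating-point error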
Problem 1.6 Solution
Based on (1.41), we only need to prove that when $x$ and $y$ are independent, $\mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\mathbb{E}[y]$. Because $x$ and $y$ are independent, we have:

$$p(x, y) = p_x(x)\, p_y(y)$$
Therefore:

$$\iint x y\, p(x, y)\, dx\, dy = \iint x y\, p_x(x)\, p_y(y)\, dx\, dy = \Big( \int x\, p_x(x)\, dx \Big) \Big( \int y\, p_y(y)\, dy \Big)$$

$$\Rightarrow \quad \mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\mathbb{E}[y]$$
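A Monte Carlo sketch of the result, assuming NumPy and two arbitrary independent distributions: for independent samples, $\mathbb{E}[xy] \approx \mathbb{E}[x]\mathbb{E}[y]$:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, size=1_000_000)   # independent draws
y = rng.uniform(0.0, 2.0, size=1_000_000)

print(np.mean(x * y), np.mean(x) * np.mean(y))  # nearly equal
# cov[x, y] = E[xy] - E[x]E[y] is therefore ~ 0, matching (1.41)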
Problem 1.7 Solution
This problem takes advantage of integration by substitution, changing to polar coordinates $(r, \theta)$:

$$I^2 = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \exp\Big( -\frac{1}{2\sigma^2} x^2 - \frac{1}{2\sigma^2} y^2 \Big)\, dx\, dy = \int_{0}^{2\pi} \int_{0}^{+\infty} \exp\Big( -\frac{r^2}{2\sigma^2} \Big)\, r\, dr\, d\theta$$
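The substitution step can be checked numerically. A sketch, assuming SciPy and an arbitrary $\sigma$, comparing the Cartesian double integral with the known closed form $2\pi\sigma^2$ of the Gaussian integral that the derivation goes on to establish:

import numpy as np
from scipy import integrate

sigma = 1.3  # arbitrary choice for the check

f = lambda y, x: np.exp(-(x**2 + y**2) / (2 * sigma**2))
I2, _ = integrate.dblquad(f, -np.inf, np.inf, -np.inf, np.inf)
print(I2, 2 * np.pi * sigma**2)  # both ~ 10.62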