SOLUTION MANUAL FOR
PATTERN RECOGNITION AND MACHINE LEARNING

EDITED BY

ZHENGQI GAO

State Key Lab. of ASIC and System
School of Microelectronics
Fudan University

Nov. 2017



0.1 Introduction

Problem 1.1 Solution

We set the derivative of the error function $E$ with respect to the vector $\mathbf{w}$ equal to zero, i.e. $\partial E / \partial \mathbf{w} = 0$; the solution $\mathbf{w} = \{w_i\}$ minimizes the error function $E$. Equivalently, we calculate the derivative of $E$ with respect to every $w_i$ and set each of them to zero. Based on (1.1) and (1.2) we obtain:

$$\frac{\partial E}{\partial w_i} = \sum_{n=1}^{N} \{\,y(x_n, \mathbf{w}) - t_n\,\}\, x_n^i = 0$$

$$\Rightarrow\quad \sum_{n=1}^{N} y(x_n, \mathbf{w})\, x_n^i = \sum_{n=1}^{N} x_n^i\, t_n$$

$$\Rightarrow\quad \sum_{n=1}^{N} \Bigl(\sum_{j=0}^{M} w_j\, x_n^j\Bigr)\, x_n^i = \sum_{n=1}^{N} x_n^i\, t_n$$

$$\Rightarrow\quad \sum_{n=1}^{N} \sum_{j=0}^{M} w_j\, x_n^{(j+i)} = \sum_{n=1}^{N} x_n^i\, t_n$$

$$\Rightarrow\quad \sum_{j=0}^{M} \Bigl(\sum_{n=1}^{N} x_n^{(i+j)}\Bigr)\, w_j = \sum_{n=1}^{N} x_n^i\, t_n$$

If we denote $A_{ij} = \sum_{n=1}^{N} x_n^{(i+j)}$ and $T_i = \sum_{n=1}^{N} x_n^i\, t_n$, the equation above can be written exactly as (1.122). Therefore the problem is solved.
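As a quick numerical sanity check (not part of the original derivation), the sketch below builds $A_{ij}$ and $T_i$ from synthetic data and verifies that solving the linear system recovers the generating polynomial coefficients. The data, the order $M$, and all variable names are illustrative assumptions.

```python
import numpy as np

# Sanity check of the linear system A w = T (eq. 1.122): with noise-free
# targets generated by a known polynomial, solving the system should
# recover the generating coefficients. Data and M are illustrative choices.
rng = np.random.default_rng(0)
N, M = 20, 3
x = rng.uniform(-1.0, 1.0, size=N)
w_true = np.array([0.5, -1.0, 2.0, 0.3])            # hypothetical coefficients
t = sum(w_true[j] * x**j for j in range(M + 1))     # noise-free targets

# A_ij = sum_n x_n^(i+j),  T_i = sum_n x_n^i t_n
A = np.array([[np.sum(x**(i + j)) for j in range(M + 1)] for i in range(M + 1)])
T = np.array([np.sum(x**i * t) for i in range(M + 1)])

w = np.linalg.solve(A, T)
print(np.allclose(w, w_true))                       # expected: True
```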

Problem 1.2 Solution

This problem is similar to Prob. 1.1; the only difference is the penalty term, i.e. the last term on the right-hand side of (1.4). So we proceed as in Prob. 1.1:

$$\frac{\partial \widetilde{E}}{\partial w_i} = \sum_{n=1}^{N} \{\,y(x_n, \mathbf{w}) - t_n\,\}\, x_n^i + \lambda w_i = 0$$

$$\Rightarrow\quad \sum_{j=0}^{M} \Bigl(\sum_{n=1}^{N} x_n^{(j+i)}\Bigr)\, w_j + \lambda w_i = \sum_{n=1}^{N} x_n^i\, t_n$$

$$\Rightarrow\quad \sum_{j=0}^{M} \Bigl\{\sum_{n=1}^{N} x_n^{(j+i)} + \delta_{ji}\lambda\Bigr\}\, w_j = \sum_{n=1}^{N} x_n^i\, t_n$$

where

$$\delta_{ji} = \begin{cases} 0 & j \neq i \\ 1 & j = i \end{cases}$$
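In matrix form the regularized system reads $(A + \lambda I)\mathbf{w} = T$, since $\delta_{ji}$ contributes $\lambda$ only on the diagonal. A minimal sketch, assuming the same synthetic setup as in Prob. 1.1 and an arbitrary $\lambda$:

```python
import numpy as np

# The regularized system in matrix form is (A + lam * I) w = T. Sketch under
# an illustrative synthetic setup; lam and the data are arbitrary choices.
rng = np.random.default_rng(1)
N, M, lam = 20, 3, 0.1
x = rng.uniform(-1.0, 1.0, size=N)
t = np.sin(np.pi * x) + 0.1 * rng.standard_normal(N)

A = np.array([[np.sum(x**(i + j)) for j in range(M + 1)] for i in range(M + 1)])
T = np.array([np.sum(x**i * t) for i in range(M + 1)])

w_reg = np.linalg.solve(A + lam * np.eye(M + 1), T)  # delta_ji adds lam on the diagonal
print(w_reg)
```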
Problem 1.3 Solution

This problem can be solved by Bayes' theorem. The probability of selecting an apple, $P(a)$, is:

$$P(a) = P(a|r)P(r) + P(a|b)P(b) + P(a|g)P(g) = \frac{3}{10}\times 0.2 + \frac{1}{2}\times 0.2 + \frac{3}{10}\times 0.6 = 0.34$$

Based on Bayes' theorem, the probability that a selected orange came from the green box is:

$$P(g|o) = \frac{P(o|g)P(g)}{P(o)}$$

We first calculate the probability of selecting an orange, $P(o)$:

$$P(o) = P(o|r)P(r) + P(o|b)P(b) + P(o|g)P(g) = \frac{4}{10}\times 0.2 + \frac{1}{2}\times 0.2 + \frac{3}{10}\times 0.6 = 0.36$$

Therefore we get:

$$P(g|o) = \frac{P(o|g)P(g)}{P(o)} = \frac{\frac{3}{10}\times 0.6}{0.36} = 0.5$$
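The arithmetic above can be reproduced directly; the conditional probabilities assumed below are those implied by the fractions in the solution (e.g. 3 apples out of 10 fruits in the red box).

```python
# Direct computation for Prob. 1.3.
P_box    = {'r': 0.2, 'b': 0.2, 'g': 0.6}
P_apple  = {'r': 3/10, 'b': 1/2, 'g': 3/10}   # P(a | box)
P_orange = {'r': 4/10, 'b': 1/2, 'g': 3/10}   # P(o | box)

P_a = sum(P_apple[k] * P_box[k] for k in P_box)    # 0.34
P_o = sum(P_orange[k] * P_box[k] for k in P_box)   # 0.36
print(P_a, P_o, P_orange['g'] * P_box['g'] / P_o)  # 0.34 0.36 0.5
```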
Problem 1.4 Solution

This problem requires calculus, in particular the chain rule. We calculate the derivative of $p_y(y)$ with respect to $y$, according to (1.27):

$$\frac{d p_y(y)}{dy} = \frac{d\,\bigl(p_x(g(y))\,|g'(y)|\bigr)}{dy} = \frac{d p_x(g(y))}{dy}\,|g'(y)| + p_x(g(y))\,\frac{d\,|g'(y)|}{dy} \qquad (*)$$

The first term in the equation above can be further expanded:

$$\frac{d p_x(g(y))}{dy}\,|g'(y)| = \frac{d p_x(g(y))}{d g(y)}\,\frac{d g(y)}{dy}\,|g'(y)| \qquad (**)$$

If $\hat{x}$ is the maximum of the density over $x$, we have:

$$\left.\frac{d p_x(x)}{dx}\right|_{\hat{x}} = 0$$

Therefore, when $y = \hat{y}$ such that $\hat{x} = g(\hat{y})$, the first factor on the right side of $(**)$ is zero, so the first term in $(*)$ equals zero. However, because of the second term in $(*)$, the derivative of $p_y(y)$ may not equal zero at $\hat{y}$. When the transformation is linear, e.g. $x = ay + b$, the second term in $(*)$ vanishes as well, since $|g'(y)|$ is then a constant. A simple example:

$$p_x(x) = 2x,\quad x \in [0, 1] \quad\Rightarrow\quad \hat{x} = 1$$

Given the transformation:

$$x = \sin(y)$$

we have $p_y(y) = 2\sin(y)\,|\cos(y)|$ for $y \in [0, \frac{\pi}{2}]$, which can be simplified to:

$$p_y(y) = \sin(2y),\quad y \in [0, \tfrac{\pi}{2}] \quad\Rightarrow\quad \hat{y} = \frac{\pi}{4}$$

However, it is quite obvious that:

$$\hat{x} \neq \sin(\hat{y})$$
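A short numerical illustration of this example, evaluating $p_y(y)$ on a grid (the grid resolution is an arbitrary choice):

```python
import numpy as np

# Numerical illustration: the mode of p_y(y) = sin(2y) on [0, pi/2] is pi/4,
# while the mode of p_x(x) = 2x on [0, 1] is x = 1, and sin(pi/4) != 1.
y = np.linspace(0.0, np.pi / 2, 10001)
p_y = 2 * np.sin(y) * np.abs(np.cos(y))   # p_x(g(y)) |g'(y)| with x = g(y) = sin(y)

y_hat = y[np.argmax(p_y)]
print(y_hat, np.pi / 4)                   # both ~0.7854
print(np.sin(y_hat))                      # ~0.707, not the mode x_hat = 1
```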

Problem 1.5 Solution

This problem takes advantage of the linearity of expectation:

$$\begin{aligned}
\operatorname{var}[f] &= \mathbb{E}\bigl[(f(x) - \mathbb{E}[f(x)])^2\bigr] \\
&= \mathbb{E}\bigl[f(x)^2 - 2f(x)\,\mathbb{E}[f(x)] + \mathbb{E}[f(x)]^2\bigr] \\
&= \mathbb{E}[f(x)^2] - 2\,\mathbb{E}[f(x)]^2 + \mathbb{E}[f(x)]^2 \\
&= \mathbb{E}[f(x)^2] - \mathbb{E}[f(x)]^2
\end{aligned}$$
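A quick Monte Carlo check of this identity; the function $f$ and the sample size below are arbitrary choices:

```python
import numpy as np

# Monte Carlo check of var[f] = E[f^2] - E[f]^2; the identity also holds
# exactly for empirical averages, so the two sides agree to rounding error.
rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
f = np.tanh(x)                            # any function f(x)

lhs = np.mean((f - f.mean())**2)          # E[(f - E[f])^2]
rhs = np.mean(f**2) - f.mean()**2         # E[f^2] - E[f]^2
print(np.isclose(lhs, rhs))               # True
```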

Problem 1.6 Solution

Based on (1.41), we only need to prove that when $x$ and $y$ are independent, $\mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\,\mathbb{E}[y]$. Because $x$ and $y$ are independent, we have:

$$p(x, y) = p_x(x)\, p_y(y)$$

Therefore:

$$\iint xy\, p(x, y)\, dx\, dy = \iint xy\, p_x(x)\, p_y(y)\, dx\, dy = \Bigl(\int x\, p_x(x)\, dx\Bigr)\Bigl(\int y\, p_y(y)\, dy\Bigr)$$

$$\Rightarrow\quad \mathbb{E}_{x,y}[xy] = \mathbb{E}[x]\,\mathbb{E}[y]$$
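A Monte Carlo sketch of this result, with two arbitrarily chosen independent distributions:

```python
import numpy as np

# Monte Carlo check that independent x, y satisfy E[xy] ~= E[x]E[y], i.e.
# cov[x, y] ~= 0 by (1.41). The two distributions are arbitrary choices.
rng = np.random.default_rng(3)
x = rng.exponential(2.0, size=1_000_000)    # E[x] = 2
y = rng.normal(1.0, 3.0, size=1_000_000)    # E[y] = 1

print(np.mean(x * y), np.mean(x) * np.mean(y))   # both ~2.0, up to MC error
```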

Problem 1.7 Solution

This problem takes advantage of integration by substitution.

$$I^2 = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} \exp\Bigl(-\frac{1}{2\sigma^2}x^2 - \frac{1}{2\sigma^2}y^2\Bigr)\, dx\, dy = \int_0^{2\pi}\!\int_0^{+\infty} \exp\Bigl(-\frac{r^2}{2\sigma^2}\Bigr)\, r\, dr\, d\theta$$

where we have changed to polar coordinates $x = r\cos\theta$, $y = r\sin\theta$, whose Jacobian contributes the factor $r$.
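Evaluating the inner integral with the substitution $u = r^2/(2\sigma^2)$ gives $I^2 = 2\pi\sigma^2$, i.e. $I = \sqrt{2\pi\sigma^2}$, the standard Gaussian normalization. A quick numerical check (the value of $\sigma$ and the use of scipy are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import quad

# Numerical check that I = sqrt(2*pi*sigma^2); sigma = 1.5 is arbitrary.
sigma = 1.5
I, _ = quad(lambda x: np.exp(-x**2 / (2 * sigma**2)), -np.inf, np.inf)
print(I, np.sqrt(2 * np.pi * sigma**2))   # both ~3.76
```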
