CSCI 5512: Artificial Intelligence II, Homework 2


1. (40 points) [Programming Assignment] Consider the Hidden Markov Model in Figure 1.
Figure 1: Hidden Markov Model
Assume each of the hidden variables Xi, i = 0, 1, 2, 3, . . ., and the evidence variables Ei, i = 1, 2, 3, . . ., is Boolean and can take the two values T and F. Let P(X0 = T) = P(X0 = F) = 0.5. Let the transition matrix P(Xt+1 | Xt) and the sensor matrix P(Et | Xt) be given by
T = | 0.7  0.3 |        E = | 0.9  0.1 |
    | 0.4  0.6 |            | 0.3  0.7 |
where, in the T matrix,
T11 = P(Xt+1 = T|Xt = T) , T12 = P(Xt+1 = F|Xt = T) ,
T21 = P(Xt+1 = T|Xt = F) , T22 = P(Xt+1 = F|Xt = F) ,
and in the E matrix,
E11 = P(Et = T|Xt = T) , E12 = P(Et = F|Xt = T) ,
E21 = P(Et = T|Xt = F) , E22 = P(Et = F|Xt = F) .
Consider two sequences of evidence e1:10 over 10 time steps:
Evidence sequence 1: e1:10 = ⟨F, F, F, T, T, T, T, F, F, F⟩
Evidence sequence 2: e1:10 = ⟨F, T, F, T, F, T, F, T, F, T⟩
For each of the above two sequences of evidence:
(a) (20 points) Write a program to compute the smoothed estimates of Xt, t = 1, . . . , 10, given evidence e1:10.
(b) (20 points) Write a program to find the most likely sequence of states X1:10 given evidence
e1:10.
In addition to the smoothed estimates and the sequence of states, you have to submit code for SmoothHMM, implementing the computation of the smoothed estimates, and MaxSeq, implementing the computation of the most likely sequence. The code for both algorithms should take two input arguments:
(i) n, the length of the evidence (n = 10 for the two examples above)
(ii) The evidence sequence, containing a ‘1’ for T and a ‘0’ for F. Thus, for the first example, e1:10 = ⟨F, F, F, T, T, T, T, F, F, F⟩, the input should be 0 0 0 1 1 1 1 0 0 0.
The output for SmoothHMM should be a list of length n with the smoothed estimates of P(Xt = T), t = 1, . . . , n. The output should be clearly displayed on screen after running the program.
The output for MaxSeq should be a binary list of length n containing the most likely sequence
of states with 1 corresponding to T and 0 corresponding to F. The output should be clearly
displayed on screen after running the program.
Sample input for Python 3.6 for (a) and (b) when n = 10 and e1:10 = ⟨F, F, F, T, T, T, T, F, F, F⟩:
$python SmoothHMM.py 10 0 0 0 1 1 1 1 0 0 0
$python MaxSeq.py 10 0 0 0 1 1 1 1 0 0 0
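As a starting point, a minimal sketch of both algorithms is given below, using the forward-backward algorithm for smoothing and the Viterbi algorithm for the most likely sequence, with states ordered (T, F) as in the matrices above. The function and variable names are illustrative only; the required SmoothHMM.py / MaxSeq.py files would still need the sys.argv command-line wrappers described above.

```python
import numpy as np

# Model from the problem; state order is (T, F) for rows and columns.
T = np.array([[0.7, 0.3],   # P(X_{t+1} | X_t = T)
              [0.4, 0.6]])  # P(X_{t+1} | X_t = F)
E = np.array([[0.9, 0.1],   # P(E_t | X_t = T)
              [0.3, 0.7]])  # P(E_t | X_t = F)
PRIOR = np.array([0.5, 0.5])  # P(X_0)

def col(e):
    """Column of E for an observation: 1 (T) -> 0, 0 (F) -> 1."""
    return 0 if e == 1 else 1

def smooth_hmm(evidence):
    """Forward-backward smoothing: P(X_t = T | e_{1:n}) for t = 1..n."""
    n = len(evidence)
    # forward pass: filtered messages f_t(x) proportional to P(x_t, e_{1:t})
    f = PRIOR.copy()
    fwd = []
    for t in range(n):
        f = E[:, col(evidence[t])] * (T.T @ f)
        f /= f.sum()                      # normalize for numerical stability
        fwd.append(f)
    # backward pass: b_t(x) proportional to P(e_{t+1:n} | x_t)
    b = np.ones(2)
    smoothed = [0.0] * n
    for t in range(n - 1, -1, -1):
        s = fwd[t] * b
        smoothed[t] = s[0] / s.sum()      # index 0 is state T
        b = T @ (E[:, col(evidence[t])] * b)
        b /= b.sum()
    return smoothed

def max_seq(evidence):
    """Viterbi: most likely state sequence X_{1:n}; 1 for T, 0 for F."""
    n = len(evidence)
    # initialize with X_0 marginalized out: P(X_1) = T' @ PRIOR
    m = np.log(E[:, col(evidence[0])]) + np.log(T.T @ PRIOR)
    back = []
    for t in range(1, n):
        cand = m[:, None] + np.log(T)     # cand[i, j]: best path ending i -> j
        back.append(cand.argmax(axis=0))  # best predecessor of each state j
        m = np.log(E[:, col(evidence[t])]) + cand.max(axis=0)
    path = [int(m.argmax())]              # reconstruct path from backpointers
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    path.reverse()
    return [1 if s == 0 else 0 for s in path]
```

For example, smooth_hmm([0, 0, 0, 1, 1, 1, 1, 0, 0, 0]) returns the ten smoothed probabilities P(Xt = T), and max_seq with the same argument returns the corresponding 0/1 state sequence.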
2. (45 points) [Programming Assignment] Consider the umbrella network shown in Figure 2.
Let U1, U2, . . . denote the sequence of evidence variables (umbrella), where ∀i, Ui = T (true)
or F (false). Let Ri be the random variable corresponding to the hidden state (rain) at step
i. Assume the prior probability of rain P(R0 = T) = P(R0 = F) = 1
2
. Consider the following
three evidence sequences:
(i) u1:10 = (T, T, T, T, T, F, F, F, F, F)
(ii) u1:10 = (F, F, F, F, F, F, F, T, T, T )
(iii) u1:10 = (F, T, F, T, F, T, F, T, F, T )
Figure 2: The Umbrella Network. The transition model is P(Rt = T | Rt−1 = T) = 0.7 and P(Rt = T | Rt−1 = F) = 0.3; the sensor model is P(Ut = T | Rt = T) = 0.9 and P(Ut = T | Rt = F) = 0.2.
For each of the three choices of u1:10, your code should output the filtering probability P(R10 | u1:10). We want to approximate this probability using two separate sampling methods, as follows:
(a) (20 points) Estimate P(R10|u1:10) using likelihood weighting with 100 and 1000 samples.
You have to submit code for lwUmbrella which takes 3 arguments: an integer numSamples,
denoting the number of samples (set to 100 and 1000), an integer numSteps, denoting the
number of steps (set to 10), and evidence, denoting the evidence u1:numSteps (T = 1, F =
0) of length numSteps. The output should be the estimate P(RnumSteps|u1:numSteps) and
the variance of the estimate.
Sample input for Python 3.6 for when numSamples = 100, numSteps = 10, and u1:10 =
(T, T, T, T, T, F, F, F, F, F)
$python lwUmbrella.py 100 10 1 1 1 1 1 0 0 0 0 0
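One way to implement part (a) is sketched below, using the CPTs from Figure 2: each sample draws the whole rain sequence from the model and weights it by the likelihood of the observed umbrella sequence. The function name lw_umbrella and the seed argument are illustrative, not the required lwUmbrella.py command-line interface.

```python
import random

P_TRANS = {True: 0.7, False: 0.3}   # P(R_t = T | R_{t-1})
P_SENSOR = {True: 0.9, False: 0.2}  # P(U_t = T | R_t)

def lw_umbrella(num_samples, evidence, seed=None):
    """Likelihood-weighting estimate of P(R_n = T | u_{1:n}).
    evidence is a list of 1 (umbrella) / 0 (no umbrella)."""
    rng = random.Random(seed)
    total_w = weighted_true = 0.0
    for _ in range(num_samples):
        r = rng.random() < 0.5              # sample R_0 from the prior
        w = 1.0
        for u in evidence:
            r = rng.random() < P_TRANS[r]   # sample R_t | R_{t-1}
            p_u = P_SENSOR[r]               # weight by P(u_t | r_t)
            w *= p_u if u == 1 else 1.0 - p_u
        total_w += w
        if r:                               # r is now R_n
            weighted_true += w
    return weighted_true / total_w
```

The required variance of the estimate can be obtained, for example, by repeating the run several times and reporting the sample variance of the resulting estimates.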
(b) (20 points) Estimate P(R10|u1:10) using particle filtering with 100 and 1000 particles. You have to submit code for pfUmbrella, which takes 3 arguments: an integer
numSamples, denoting the number of particles (set to 100 and 1000), an integer numSteps, denoting the number of steps (set to 10), and evidence, denoting the evidence
u1:numSteps (T = 1, F = 0) of length numSteps. The output should be the estimate
P(RnumSteps|u1:numSteps) and the variance of the estimate.
Sample input for Python 3.6 for when numSamples = 100, numSteps = 10, and u1:10 =
(T, T, T, T, T, F, F, F, F, F)
$python pfUmbrella.py 100 10 1 1 1 1 1 0 0 0 0 0
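A minimal sketch of part (b) follows, again under the CPTs from Figure 2 (the function name pf_umbrella and the seed argument are illustrative, not the required pfUmbrella.py command-line interface). Each step propagates the particles through the transition model, weights them by the sensor model, and resamples.

```python
import random

P_TRANS = {True: 0.7, False: 0.3}   # P(R_t = T | R_{t-1})
P_SENSOR = {True: 0.9, False: 0.2}  # P(U_t = T | R_t)

def pf_umbrella(num_particles, evidence, seed=None):
    """Particle-filter estimate of P(R_n = T | u_{1:n}).
    evidence is a list of 1 (umbrella) / 0 (no umbrella)."""
    rng = random.Random(seed)
    # initialize particles from the prior P(R_0)
    particles = [rng.random() < 0.5 for _ in range(num_particles)]
    for u in evidence:
        # propagate each particle through the transition model
        particles = [rng.random() < P_TRANS[r] for r in particles]
        # weight each particle by the sensor model
        weights = [P_SENSOR[r] if u == 1 else 1.0 - P_SENSOR[r]
                   for r in particles]
        # resample with replacement in proportion to the weights
        particles = rng.choices(particles, weights=weights, k=num_particles)
    return sum(particles) / num_particles  # fraction of particles with R_n = T
```

As in part (a), the variance of the estimate can be reported as the sample variance over several independent runs.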
(c) (5 points) Comment on the relative performance of the two methods on the three evidence sequences and the two sample sizes. In particular, how close are the estimates to the true probabilities, and what are the variances of the estimates?
3. (15 points) This question considers the value of perfect information (VPI), VPIE(Ej), which evaluates the value of additional information Ej given existing information E.
(a) (7 points) Show that VPI is non-negative, i.e., VPIE(Ej) ≥ 0 for all j and E.
(b) (8 points) Show that VPI is order independent, i.e.,
VPIE(Ej, Ek) = VPIE(Ej) + VPIE,Ej(Ek) = VPIE(Ek) + VPIE,Ek(Ej).
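For both parts, the standard definition of VPI (as in Russell and Norvig) is the natural starting point. Writing α for the best action given only E, and α_{e_j} for the best action after additionally observing Ej = e_j:

```latex
\mathrm{VPI}_E(E_j)
  = \left( \sum_{e_j} P(E_j = e_j \mid E)\,
      \mathrm{EU}\!\left(\alpha_{e_j} \mid E,\, E_j = e_j\right) \right)
    - \mathrm{EU}(\alpha \mid E)
```

Part (a) then amounts to showing that acting optimally after observing Ej can never have lower expected utility than committing to α in advance.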
Instructions
Please follow these instructions carefully. Code submitted without adhering to these instructions
will not receive any credit.
For each programming assignment, you have to submit the code as required by the problem and
the algorithm must be implemented using a main file as named in the problem (e.g., SmoothHMM.py).
Only Python 3.6 will be accepted; any other language will receive zero credit. The program must run on the CSE labs machines; code that fails to run there will not receive credit.