## Description

## Problem 1: Missing Entries (50 pts)

Suppose that you are given a symmetric matrix $A \in \mathbb{R}^{n \times n}$ that is missing some entries, e.g., $A_{ij} = {?}$ for some indices $i, j \in \{1, \ldots, n\}$. To determine which entries are missing, we will use an index matrix $Q \in \{0, 1\}^{n \times n}$ such that $Q_{ij} = 1$ if $A_{ij} = {?}$ and $Q_{ij} = 0$ otherwise.

1. Explain how to formulate the problem of finding the closest symmetric positive semidefinite matrix to $A$ under the Frobenius norm (over the non-missing entries) as a convex optimization problem. (One possible formulation is sketched after this list.)

2. What is the dual of your optimization problem?

3. Write a Python function that takes as input the matrices $A$ and $Q$, an initial guess for the completion $B$, and a number of iterations, and returns the result of applying projected gradient descent, starting at $B$, for the specified number of iterations. (A sketch of one possible implementation also follows the list.)
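
For part 1, one natural formulation (a sketch, and not the only valid one) takes the completed matrix $X$ as the variable and penalizes deviation from $A$ only on the observed entries:

$$
\begin{aligned}
&\underset{X \in \mathbb{S}^n}{\text{minimize}} && \tfrac{1}{2} \sum_{(i,j)\,:\,Q_{ij} = 0} \left( X_{ij} - A_{ij} \right)^2 \\
&\text{subject to} && X \succeq 0,
\end{aligned}
$$

where $\mathbb{S}^n$ denotes the set of symmetric $n \times n$ matrices. The objective is a convex quadratic in $X$ and the positive semidefinite cone is a convex set, so the problem is convex; entries with $Q_{ij} = 1$ simply do not appear in the objective.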
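
And here is a minimal sketch of the function part 3 asks for, based on that formulation. The function name `complete_psd`, the fixed step size `eta`, and the eigendecomposition-based projection are illustrative choices, not requirements of the problem:

```python
import numpy as np

def complete_psd(A, Q, B, num_iters, eta=0.1):
    """Projected gradient descent for the nearest-PSD completion problem.

    A: symmetric matrix (values at missing positions are ignored).
    Q: 0/1 index matrix with Q[i, j] = 1 where A[i, j] is missing.
    B: initial guess for the completion.
    """
    X = B.copy()
    observed = (Q == 0)  # the objective only involves these entries
    for _ in range(num_iters):
        # Gradient of (1/2) * sum over observed entries of (X_ij - A_ij)^2.
        grad = np.where(observed, X - A, 0.0)
        X = X - eta * grad
        # Project onto the PSD cone by zeroing out negative eigenvalues.
        w, V = np.linalg.eigh((X + X.T) / 2)  # symmetrize for numerical safety
        X = (V * np.maximum(w, 0.0)) @ V.T
    return X
```

The gradient of this objective is 1-Lipschitz, so any constant step size $\eta \in (0, 1]$ is safe.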

## Problem 2: Matrix Factorizations (50 pts)

1. Consider the following convex function, known as the generalized KL divergence, for two nonnegative matrices $A, B \in \mathbb{R}^{m \times n}$:

   $$
   \mathrm{KL}(A \,\|\, B) = \sum_{i=1}^{m} \sum_{j=1}^{n} \left( A_{ij} \log \frac{A_{ij}}{B_{ij}} - A_{ij} + B_{ij} \right).
   $$

   Suppose now that $A \in \mathbb{R}^{m \times n}$ is a nonnegative matrix that we would like to approximate as a product of two nonnegative matrices $C \in \mathbb{R}^{m \times k}$ and $U \in \mathbb{R}^{k \times n}$. Explain how to formulate the problem of finding the closest pair of nonnegative matrices to $A$ under the generalized KL divergence as a biconvex optimization problem. (One such formulation is sketched after this list.)

2. In Python, write a function that takes as input the matrix $A$ and an integer $k > 0$ and returns matrices $C$ and $U$ obtained by running block coordinate descent to convergence from a random starting point. (A sketch of one possible implementation follows the list.)

3. Is your block coordinate descent procedure guaranteed to converge to a critical point?
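
For part 1, one formulation (a sketch) is to minimize the generalized KL divergence between $A$ and the product $CU$ over the nonnegative orthant:

$$
\underset{C \in \mathbb{R}^{m \times k}_{\ge 0},\; U \in \mathbb{R}^{k \times n}_{\ge 0}}{\text{minimize}} \quad \mathrm{KL}(A \,\|\, CU).
$$

Each entry $(CU)_{ij}$ is linear in $C$ for fixed $U$ (and vice versa), and $t \mapsto -A_{ij} \log t + t$ is convex, so the objective is convex in each block separately but not jointly convex in $(C, U)$; hence the problem is biconvex.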
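
And here is a minimal sketch of the function part 2 asks for. It alternates the classical multiplicative updates of Lee and Seung for the KL objective, one block at a time, which can be viewed as an (inexact) block coordinate descent; the function name `nmf_kl`, the stopping tolerance `tol`, and the initialization scheme are illustrative assumptions:

```python
import numpy as np

def nmf_kl(A, k, tol=1e-6, max_iters=10_000, seed=None):
    """Alternating updates for min_{C, U >= 0} KL(A || C @ U).

    Returns factors C (m x k) and U (k x n). Stops when the decrease in
    the objective falls below tol or after max_iters sweeps.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    C = rng.random((m, k)) + 1e-3  # random nonnegative starting point
    U = rng.random((k, n)) + 1e-3
    eps = 1e-12  # guards against division by zero

    def kl(P, B):
        # Generalized KL divergence with the convention 0 * log 0 = 0.
        ratio = np.maximum(P, eps) / np.maximum(B, eps)
        logterm = np.where(P > 0, P * np.log(ratio), 0.0)
        return np.sum(logterm - P + B)

    prev = kl(A, C @ U)
    for _ in range(max_iters):
        # Update U with C fixed (Lee-Seung multiplicative rule for KL).
        R = A / np.maximum(C @ U, eps)  # ratio matrix A ./ (CU)
        U *= (C.T @ R) / np.maximum(C.sum(axis=0)[:, None], eps)
        # Update C with U fixed.
        R = A / np.maximum(C @ U, eps)
        C *= (R @ U.T) / np.maximum(U.sum(axis=1)[None, :], eps)
        cur = kl(A, C @ U)
        if prev - cur < tol:
            break
        prev = cur
    return C, U
```

Each multiplicative update is a majorization-minimization step, so the objective is nonincreasing along the iterates; the stopping test above relies on that monotonicity.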
