# EECS 349 Homework 7 solution


## Description

1) Perceptrons (2 points) Read Chapter 4 of “Machine Learning” (on the course calendar) and answer the following.
A. (1/2 point): Is a one-layer perceptron capable of representing non-linear decision surfaces? Why or why not?
B. (1/2 point): Is it possible to use a multi-layer perceptron with linear activation functions to represent a non-linear decision surface? Why or why not?
C. (1/2 point): In a multi-layer perceptron, is there any advantage to using sigmoid activation functions compared to linear activation functions? If so, what is it? If not, why not?
D. (1/2 point): If the sigmoid function in a multi-layer perceptron were replaced with the following function, how would this affect the backpropagation-of-error training method?

$$\text{output} = \operatorname{sign}\!\left(\frac{1}{1+e^{-net}} - 0.5\right), \qquad \operatorname{sign}(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ -1 & \text{otherwise} \end{cases}$$

2) Restricted Boltzmann Machines (1 point) From the course calendar, follow the link to the tutorial on Deep Belief Networks and read the paper called "Scaling Learning Algorithms towards AI" (and there is always the Wikipedia). These resources will help you answer the following questions.
A. (1/2 point): Explain what a Restricted Boltzmann Machine (RBM) is. Don't just give a sentence.
B. (1/2 point): What relationship do RBMs have to Deep Belief Networks?

3) Scaling Learning Algorithms towards AI (1 point) Read "Scaling Learning Algorithms towards AI" and answer the following questions.
A. (1/2 point): What are Kernel Machines? Describe two limitations of Kernel Machines.
B. (1/2 point): What are Deep Architectures? How do the authors expect Deep Architectures will get around the limits of Kernel Machines you described in part A?
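As an aside on question 1D above, the proposed replacement for the sigmoid can be sketched in a few lines of Python (a minimal illustration, not part of the assignment) to see what kind of function it actually is:

```python
import math

def sigmoid(net):
    # The standard logistic sigmoid: 1 / (1 + e^{-net}).
    return 1.0 / (1.0 + math.exp(-net))

def sign(x):
    # sign(x) = 1 if x >= 0, else -1, as defined in question 1D.
    return 1 if x >= 0 else -1

def modified_output(net):
    # The unit from question 1D: threshold the sigmoid at 0.5.
    # Since sigmoid(net) >= 0.5 exactly when net >= 0, the result
    # is a step function of net.
    return sign(sigmoid(net) - 0.5)
```

For example, `modified_output(3.0)` gives 1 and `modified_output(-3.0)` gives -1; thinking about the derivative of this step function is a good starting point for the question.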
4) Activation functions (1 point) Modern neural nets use many different kinds of activation functions in their nodes. You've already learned about linear, perceptron, and sigmoid activation functions. Two other popular ones are softmax and ReLU. Research these on the web.
A. (1/2 point): Write the formula for the ReLU activation function. Show a qualitative plot of the shape of the ReLU function. Don't forget to cite your sources.
B. (1/2 point): Write the formula for the softmax activation function. Give one reason for using a softmax function instead of normalizing a distribution in the standard way (i.e., dividing by the sum of all values). Don't forget to cite your sources.

5) Learning tensorflow/tflearn (2 points) For this problem and problem 6, we will be using Google's open-source deep learning framework, TensorFlow (see www.tensorflow.org). More specifically, we will be using a wrapper library called tflearn (www.tflearn.org), a simpler interface for implementing neural networks.
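As a reference point for problem 4, both functions are short enough to sketch directly in Python (a minimal illustration using only the standard library; the formulas themselves are what the question asks you to research and cite):

```python
import math

def relu(x):
    # ReLU: max(0, x) -- zero for negative inputs, identity for positive ones.
    return max(0.0, x)

def softmax(xs):
    # Softmax: exponentiate each value, then normalize by the sum.
    # Unlike plain division by the sum, the outputs are always positive
    # (even for negative inputs) and always form a valid distribution.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, `relu(-3.0)` is `0.0`, and `softmax([1.0, 2.0, 3.0])` returns three positive values that sum to 1, with the largest input mapped to the largest output.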
TensorFlow and tflearn will not work on the virtual machine we have provided you (this is the only instance in this class where you should not test on the VM), so you will have to download and install them on your own computer. Installation instructions for both packages are at the links above; in short, use Python's command-line package manager, pip, to install these libraries. We will be available to help with any installation issues during recitation and office hours this week.
A good way to check if these packages are installed correctly on your machine is to run the following from the command line:
```
$ python
Python 2.7.12 |Anaconda 4.1.1 (x86_64)| (default, Jul  2 2016, 17:43:17)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> import tflearn
>>>
```
Figure 1. TensorFlow and tflearn installed successfully. The lines beginning with the three chevrons (>>>) are the lines we use to test the installation of these packages. They don't throw errors, so we know that these packages are installed correctly.
In the above example, the first line calls python in the interactive shell from the command line. The next three lines are about the specific version of Python being used, but what's important here is importing tensorflow and tflearn. If no errors get thrown, then tensorflow and tflearn are installed correctly. See below for an example of what a failure to find tensorflow looks like.
```
$ python
Python 2.7.10 (default, Aug 22 2015, 20:33:39)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.1)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
  File "
```
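If you would rather script this check than type the imports by hand, a small helper can report which packages are importable. This is a hypothetical convenience (not part of the assignment) that uses only the standard library:

```python
def check_imports(packages):
    """Return a dict mapping each package name to True if it can be
    imported, and False otherwise."""
    results = {}
    for pkg in packages:
        try:
            __import__(pkg)      # attempt the import, discarding the module
            results[pkg] = True
        except ImportError:      # covers missing packages on both Python 2 and 3
            results[pkg] = False
    return results

# Example: prints something like {'tensorflow': True, 'tflearn': True}
# on a machine where both installs succeeded.
print(check_imports(["tensorflow", "tflearn"]))
```

A `False` entry here corresponds exactly to the `ImportError` traceback shown above.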