AMATH 482 Homework 1 to 5 solutions


AMATH 482 Homework 1: An ultrasound problem

Your dog Fluffy swallowed a marble. The vet suspects that it has now worked its way into the intestines.
Using ultrasound, data is obtained concerning the spatial variations in a small area of the intestines where the
marble is suspected to be.

Unfortunately, Fluffy keeps moving, and the internal fluid movement through the
intestines generates highly noisy data.

Do you want your dog to live? In order to save your dog's life you must locate and compute the trajectory of
the marble. Download the file Testdata.mat that was included with the homework. It contains 20 rows of
data, one for each of 20 measurements taken over time.

1. Through averaging of the spectrum, determine the frequency signature (center frequency) generated by
the marble.

2. Filter the data around the center frequency determined above in order to denoise the data and determine
the path of the marble. (Use plot3 to plot the path once you have it; a sketch of the averaging and
filtering steps follows the starter code below.)

3. Where should an intense acoustic wave be focused in order to break up the marble at the 20th data measurement?

Good luck, and I hope your dog doesn’t die.
The following code will help you get started in analyzing the data. It also tells you the spatial and spectral
resolution of your ultrasound equipment. (NOTE: the reason for the close all command before isosurface is
that isosurface doesn’t seem to clear the previous image before plotting a new one)

clear; close all; clc;
load Testdata
L=15; % spatial domain
n=64; % Fourier modes
x2=linspace(-L,L,n+1); x=x2(1:n); y=x; z=x;
k=(2*pi/(2*L))*[0:(n/2-1) -n/2:-1]; ks=fftshift(k);
[X,Y,Z]=meshgrid(x,y,z);
[Kx,Ky,Kz]=meshgrid(ks,ks,ks);
for j=1:20
    Un(:,:,:)=reshape(Undata(j,:),n,n,n);
    close all, isosurface(X,Y,Z,abs(Un),0.4)
    axis([-20 20 -20 20 -20 20]), grid on, drawnow
    pause(1)
end
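
A minimal sketch of the averaging and filtering steps is below. It assumes the starter variables above
(n, Undata, X, Y, Z, Kx, Ky, Kz); the Gaussian bandwidth tau is a tunable assumption, not a prescribed value.

Uavg = zeros(n,n,n);
for j = 1:20
    Un = reshape(Undata(j,:),n,n,n);
    Uavg = Uavg + fftn(Un);             % average the spectra; the noise averages out
end
Uavg = fftshift(Uavg)/20;               % shift so it aligns with Kx, Ky, Kz
[~,idx] = max(abs(Uavg(:)));
kc = [Kx(idx) Ky(idx) Kz(idx)];         % center frequency of the marble

tau = 0.2;                              % filter bandwidth (assumption)
gfilter = exp(-tau*((Kx-kc(1)).^2 + (Ky-kc(2)).^2 + (Kz-kc(3)).^2));
pos = zeros(20,3);
for j = 1:20
    Un = reshape(Undata(j,:),n,n,n);
    Unf = ifftn(ifftshift(gfilter.*fftshift(fftn(Un))));
    [~,idx] = max(abs(Unf(:)));         % brightest point of the filtered field
    pos(j,:) = [X(idx) Y(idx) Z(idx)];
end
plot3(pos(:,1),pos(:,2),pos(:,3),'-o'), grid on
pos(20,:)                               % focus the acoustic wave here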

AMATH 482 Homework 2: Gábor transforms

Part I
In this homework, you will analyze a portion of Handel’s Messiah with time-frequency analysis.

To get started, you can use the following commands (note that Handel is so highly regarded that
MATLAB ships with a portion of his music built in! I'm sure they'll add some Beyoncé and
T-Swift at some point, but you'll have to suffer Handel for now):
load handel
v = y';
plot((1:length(v))/Fs,v);
xlabel('Time [sec]');
ylabel('Amplitude');
title('Signal of Interest, v(n)');

This code plots the portion of music you will analyze. To play this back in MATLAB, you can use
the following commands:
p8 = audioplayer(v,Fs);
playblocking(p8);

This homework is rather open-ended in the sense that I want you to explore the time-frequency signature of this 9-second piece of classic work.

Things you should think about doing:
1. Through use of the Gábor filtering we used in class, produce spectrograms of the piece of work. (A minimal sketch appears after this list.)

2. Explore the window width of the Gábor transform and how it affects the spectrogram.

3. Explore the spectrogram and the idea of oversampling (i.e. using very small translations of
the Gábor window) versus potential undersampling (i.e. using very coarse/large translations
of the Gábor window).

4. Use different Gábor windows. Perhaps you can start with the Gaussian window, and look to
see how the results are affected with the Mexican hat wavelet and a step-function (Shannon)
window.
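
As a starting point for item 1, here is a minimal Gábor-transform sketch using a Gaussian window; the
window width a and the 0.1-second translation step are tunable assumptions, and one sample is dropped
so the signal length is even. It assumes v and Fs from the load handel code above.

v = v(1:end-1);                    % drop one sample so length(v) is even
n = length(v); t = (1:n)/Fs; T = n/Fs;
k = (1/T)*[0:n/2-1 -n/2:-1];       % frequency axis in Hz
ks = fftshift(k);
a = 100;                           % Gaussian window width (assumption)
tslide = 0:0.1:T;                  % window translations (assumption)
spec = zeros(length(tslide),n);
for j = 1:length(tslide)
    g = exp(-a*(t - tslide(j)).^2);          % Gaussian Gabor window
    spec(j,:) = fftshift(abs(fft(g.*v)));
end
pcolor(tslide,ks,spec.'), shading interp, colormap(hot)
xlabel('Time [sec]'), ylabel('Frequency [Hz]')

Re-running with different values of a (items 2 and 4) and different spacings of tslide (item 3) is one way
to organize the exploration.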

Don’t be cheap on time here, i.e. don’t be lame. This is an opportunity for you to have a creative
and open ended experience with MATLAB and data analysis. Please do a nice job writing things up
and labeling your resulting plots. I believe this homework can be a really engaging and educational
experience if you devote some time to it.

Part II
Download the two files music1.wav and music2.wav that were included with the homework.

These files play the song Mary had a little lamb on both the recorder and piano. These are .wav files
I generated using my iPhone. To import and convert them, use the following commands for both
pieces (NOTE: basically both pieces are converted to a vector representing the music, thus you can
easily edit the music by modifying the vector).

[Figure 1: Music scale along with the frequency of each note in Hz]
[y,Fs] = audioread('music1.wav');
tr_piano=length(y)/Fs; % record time in seconds
plot((1:length(y))/Fs,y);
xlabel('Time [sec]'); ylabel('Amplitude');
title('Mary had a little lamb (piano)');
p8 = audioplayer(y,Fs); playblocking(p8);
figure(2)
[y,Fs] = audioread('music2.wav');
tr_rec=length(y)/Fs; % record time in seconds
plot((1:length(y))/Fs,y);
xlabel('Time [sec]'); ylabel('Amplitude');
title('Mary had a little lamb (recorder)');
p8 = audioplayer(y,Fs); playblocking(p8);

1. Through use of the Gábor filtering we used in class, reproduce the music score for this simple
piece. See Fig. 1, which has the music scale in Hertz. (Note: to get a good clean score, you
may want to filter out overtones… see below. A minimal sketch follows item 2.)

2. What is the difference between a recorder and a piano? Can you see the difference in the
time-frequency analysis? Note that many people talk about the difference between instruments being
related to the timbre of an instrument. The timbre is related to the overtones generated by
the instrument for a given center frequency. Thus if you are playing a note at frequency ω0, an
instrument will generate overtones at 2ω0, 3ω0, · · · and so forth.
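
For item 1, a minimal sketch of score recovery is below: slide a Gaussian window along the signal and
record the dominant frequency in each window (which, when the fundamental is the strongest peak,
suppresses the overtones). It assumes y and Fs from the audioread code above; the window width a and
the 0.25-second step are tunable assumptions.

v = y(:,1).'; n = length(v); T = n/Fs; t = (1:n)/Fs;
freqs = (0:n-1)/T;                 % FFT bin frequencies in Hz
a = 50; tslide = 0:0.25:T;         % width and step (assumptions)
notes = zeros(size(tslide));
for j = 1:length(tslide)
    g = exp(-a*(t - tslide(j)).^2);     % Gaussian Gabor window
    Sgt = abs(fft(g.*v));
    [~,idx] = max(Sgt(1:floor(n/2)));   % search positive frequencies only
    notes(j) = freqs(idx);              % dominant frequency ~ note played
end
plot(tslide,notes,'o'), xlabel('Time [sec]'), ylabel('Frequency [Hz]')

Matching the plotted frequencies against Fig. 1 gives the score; a band-pass filter around each
fundamental can clean up any remaining overtone artifacts.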

AMATH 482 Homework 3: PCA

On Canvas are movie files (turned into MATLAB files) created from three different cameras (the videos are
from 2011).

The experiments are an attempt to illustrate various aspects of the PCA and its practical usefulness
and the effects of noise on the PCA algorithms.

• (test 1) Ideal case: Consider a small displacement of the mass in the z direction and the ensuing
oscillations. In this case, the entire motion is in the z direction with simple harmonic motion being
observed (camN_1.mat where N=1,2,3).

• (test 2) Noisy case: Repeat the ideal case experiment, but this time, introduce camera shake into the
video recording. This should make it more difficult to extract the simple harmonic motion. But if the
shake isn't too bad, the dynamics will still be extracted with the PCA algorithms. (camN_2.mat where
N=1,2,3)

• (test 3) Horizontal displacement: In this case, the mass is released off-center so as to produce
motion in the x − y plane as well as the z direction. Thus there is both a pendulum motion and simple
harmonic oscillations. See what the PCA tells us about the system. (camN_3.mat where N=1,2,3)

• (test 4) Horizontal displacement and rotation: In this case, the mass is released off-center and
rotates so as to produce motion in the x − y plane, rotation, and motion in the z direction. See what
the PCA tells us about the system. (camN_4.mat where N=1,2,3)

In order to use PCA, you will have to extract the mass positions from the video frames. The following code
examples may be helpful to get you started.

To load the first video from test 1 and play the video using MATLAB’s video player, you can use the following
commands.
load('cam1_1.mat')
implay(vidFrames1_1)

You may also find it useful to view the video by creating a loop and displaying each frame. That can be done
with the following code.
numFrames = size(vidFrames1_1,4);
for j = 1:numFrames
    X = vidFrames1_1(:,:,:,j);
    imshow(X); drawnow
end
Explore the PCA method on this problem and see what you find.
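
One possible pipeline is sketched below for camera 1: extract a rough position per frame by tracking the
brightest pixel (a crude stand-in for whatever tracker you build), then apply PCA through the SVD of the
mean-subtracted position matrix. The tracker and variable names are illustrative assumptions.

load('cam1_1.mat')
numFrames = size(vidFrames1_1,4);
x1 = zeros(1,numFrames); y1 = zeros(1,numFrames);
for j = 1:numFrames
    gray = double(rgb2gray(vidFrames1_1(:,:,:,j)));   % Image Processing Toolbox
    [~,idx] = max(gray(:));                 % brightest pixel as a crude tracker
    [y1(j),x1(j)] = ind2sub(size(gray),idx);
end
% With all three cameras tracked and trimmed to a common frame count,
% the data matrix would be [x1; y1; x2; y2; x3; y3].
Xdata = [x1; y1];
Xdata = Xdata - mean(Xdata,2);              % subtract each row's mean
[U,S,V] = svd(Xdata/sqrt(numFrames-1),'econ');
energy = diag(S).^2/sum(diag(S).^2);        % fraction of energy per mode
proj = U.'*Xdata;                           % principal component projections
plot(energy,'o'), xlabel('Mode'), ylabel('Energy fraction')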

AMATH 482 Homework 4: Music Classification


Music genres are instantly recognizable to us, whether it be jazz, classical, blues, rap, rock, etc. One
can always ask how the brain classifies such information and how it makes a decision based upon
hearing a new piece of music. The objective of this homework is to attempt to write a code that can
classify a given piece of music by sampling a 5 second clip.

As an example, consider Fig. 1. Four classic pieces of music are demonstrated spanning genres
of rap, jazz, classic rock and classical. Specifically, a 3-second sample is given of Dr. Dre’s Nuthin’
but a ’G’ thang (The Chronic), John Coltrane’s A Love Supreme (A Love Supreme), Led Zeppelin’s
Over The Hills and Far Away (Houses of the Holy), and Mozart’s Kyrie (Requiem). Each has a
different signature, thus begging the question whether a computer could distinguish between genres
based upon such a characterization of the music.

[Figure 1: four stacked waveform panels, amplitude versus time (seconds) for t = 2 to 5 s; see the caption below.]

Figure 1: Instantly recognizable, these four pieces of music are (in order of top to bottom): Dr.
Dre’s Nuthin’ but a ’G’ thang (The Chronic), John Coltrane’s A Love Supreme (A Love Supreme),
Led Zeppelin’s Over The Hills and Far Away (Houses of the Holy), and Mozart’s Kyrie (Requiem).
Illustrated is a 3-second clip from time 2 seconds to 5 seconds of each of these songs.

• (test 1) Band Classification: Consider three different bands of your choosing and of different
genres. For instance, one could pick Michael Jackson, Soundgarden, and Beethoven. By taking
5-second clips from a variety of songs by each artist, i.e. building training sets, see if you can
build a statistical testing algorithm capable of accurately identifying “new” 5-second clips of
music, i.e. a test set, from the three chosen bands. Report your accuracy (success rate) on the
test set.

• (test 2) The Case for Seattle: Repeat the above experiment, but with three bands from
within the same genre. This makes the testing and separation much more challenging. For
instance, one could focus on the late 90s Seattle grunge bands: Soundgarden, Alice in Chains,
and Pearl Jam. What is your accuracy in correctly classifying a 5-second sound clip? Compare
this with the first experiment with bands of different genres.

• (test 3) Genre Classification: One could also use the above algorithms to broadly classify
songs as jazz, rock, classical etc. Choose three genres and build a classifier. In this case, the
training sets should be various bands within each genre. For instance, classic rock bands could
be classified using sound clips from Led Zeppelin, AC/DC, Pink Floyd, etc. while classical
could be classified using Mozart, Beethoven, Bach, etc.

WARNING and NOTES:
• You will probably want to take the SVD of the spectrograms of the songs rather than of the songs themselves. Interestingly, this will give you the dominant spectrogram modes associated with a given band. (A sketch of this pipeline follows these notes.)

• You may want to re-sample your data (i.e. take every other point or every third or fourth point)
in order to keep the data sizes more manageable. Regardless, you will need lots of processing
time.

• Feel free to collaborate with other students on getting the music for the training/test sets. I’d
encourage you to form groups and split up the work of forming five second clips from different
artists and genres. You can use the Piazza discussion board for forming groups and/or sharing
data.

• As usual, the rest of the assignment should be done individually. You should compute your
own spectrograms and SVDs and build your own algorithm.

• Do NOT post any copyrighted music in publicly available places online (e.g. GitHub).

• Do NOT illegally download or copy music.

• The following are some ways you might find downloadable music that is free:
– The YouTube audio library has a lot of music that is free.
– There is a lot of music that can be downloaded from SoundCloud. You may want to try
searching for “Creative Commons.”

– There is also a lot of free music on the Free Music Archive. It can even be sorted by genre.
When you click a download link on this website, it brings you to a download page. You can
load the song directly into MATLAB (without downloading the file onto your computer)
by copying the url of the download page and using the MATLAB command
[y,Fs] = webread('url')

– The above sources mostly contain music that has a Creative Commons copyright license.
You can find a lot more music by searching for “Creative Commons License music” in
Google or YouTube.
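
The following sketch shows the spectrogram-SVD pipeline suggested in the first note above. Everything
named here is an assumption for illustration: clips is a hypothetical cell array of equal-length 5-second
audio vectors, labels a categorical vector of band names, trainIdx/testIdx index the training and test
sets, nmodes is the number of modes kept, and kNN is just one possible classifier.

A = [];
for j = 1:numel(clips)
    s = abs(spectrogram(clips{j}));   % Signal Processing Toolbox
    A = [A s(:)];                     % each spectrogram becomes one column
end
[U,S,V] = svd(A - mean(A,2),'econ');  % U holds the dominant spectrogram modes
feat = (S*V.').';                     % projection coefficients, one row per clip
nmodes = 20;                          % number of modes kept (assumption)
Xfeat = feat(:,1:nmodes);
mdl = fitcknn(Xfeat(trainIdx,:),labels(trainIdx));   % kNN as one choice
pred = predict(mdl,Xfeat(testIdx,:));
accuracy = mean(pred == labels(testIdx))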

AMATH 482 Homework 5: Neural Networks for Classifying Fashion MNIST

In class, we have built (or will build) a classifier for the popular MNIST dataset of handwritten
digits. In this assignment, you will work with an analogous data set called Fashion-MNIST. Instead
of handwritten digits, there are images of 10 different classes of fashion items. One image of each
type is shown below.

You can load the data using the following commands:
import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()

The Fashion-MNIST data has exactly the same structure as the MNIST data. There are 60,000
training images in the array X_train_full and 10,000 test images in the array X_test, each of which
is 28 × 28. The labels are contained in the vectors y_train_full and y_test. The values in these
vectors are numbers from 0 to 9, but they correspond to the 10 classes in the following way:
Label  Description
0      T-shirt/top
1      Trouser
2      Pullover
3      Dress
4      Coat
5      Sandal
6      Shirt
7      Sneaker
8      Bag
9      Ankle boot

Before you begin, preprocess the data in the following way.
1. Remove 5,000 images from your training data to use as validation data. So you should end up
with 55,000 training examples in an array X_train and 5,000 validation examples in an array
X_valid.

2. The numbers in the arrays X_train, X_valid, and X_test are integers from 0 to 255. Convert
them to floating point numbers between 0 and 1 by dividing each by 255.0.
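
One way to carry out these two steps (the variable names follow the assignment):

X_valid = X_train_full[:5000] / 255.0   # first 5,000 images for validation
X_train = X_train_full[5000:] / 255.0   # remaining 55,000 for training
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.0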

Part I
Train a fully-connected neural network to classify the images in the Fashion-MNIST data set.
You should try several different neural network architectures with different hyperparameters to try
to achieve the best accuracy you can on the validation data.

Some things you might try adjusting are:
(a) The depth of the network (i.e. the number of layers)
(b) The width of the layers
(c) The learning rate
(d) The regularization parameters

(e) The activation functions
(f) The optimizer
(g) Anything else you can think of or find online
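
As a concrete starting point for this experimentation, here is one possible fully-connected baseline; the
layer widths, learning rate, and epoch count are illustrative assumptions, not a recommended final
configuration.

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(300, activation="relu"),
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # one unit per class
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=20,
                    validation_data=(X_valid, y_valid))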

Can you beat 90% accuracy on the validation data? What about 95%? Once you have experimented
and found a network architecture and hyperparameter values that you think perform well, use that
architecture and those hyperparameter values to train one final model. Check the accuracy of your
final model on the test data and report your results.

You should also include the confusion matrix
for the test data. Describe in your report what you tried while experimenting, but include plots and
results only for the final model.
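
A minimal sketch of the test-set evaluation and confusion matrix, assuming the trained model above:

test_loss, test_acc = model.evaluate(X_test, y_test)
y_pred = model.predict(X_test).argmax(axis=1)   # predicted class labels
cm = tf.math.confusion_matrix(y_test, y_pred)   # 10 x 10 confusion matrix
print(cm)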

Part II
Repeat the same procedure as Part I, but now use a convolutional neural network. In addition
to the things listed in the previous part, you can now try adjusting:
(a) The number of filters for your convolutional layers
(b) The kernel sizes
(c) The strides
(d) The padding options
(e) The pool sizes for your pooling layers
Report your results and compare them to your fully-connected network from Part I.
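
A small convolutional baseline in the same spirit; the filter counts, kernel sizes, strides, padding, and
pool sizes below are illustrative assumptions. Note that Conv2D expects a channel axis, so the images are
reshaped to 28 x 28 x 1 first.

X_train_c = X_train.reshape(-1, 28, 28, 1)   # add the channel axis
X_valid_c = X_valid.reshape(-1, 28, 28, 1)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, kernel_size=3, strides=1, padding="same",
                           activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Conv2D(64, kernel_size=3, padding="same",
                           activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
model.fit(X_train_c, y_train, epochs=10,
          validation_data=(X_valid_c, y_valid))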