AMATH 482 Homework 2


Winter 2025

Problem Description: Captured Motion Recognition

You are working on the next version of a humanoid robot, OptimuS-VD, which has built-in sensors that record
the movements of 38 of its joints at a rate of 60 Hz. The movements of the joints are recorded as Euler angles
and can be transformed to xyz coordinates. You have recorded 5 samples of each of the 3 movements that
OptimuS-VD knows how to perform: walking, jumping, and running. Each sample is recorded for 1.4 secs (100
timesteps) and saved as a 114×100 matrix, where the first dimension records the x1, ..., x38, y1, ..., y38, z1, ..., z38
locations of the joints and the second dimension the timesteps. Your goal is to project the recordings into a
space of lower dimension than the number of coordinates, visualize the movements, and then, based on this
projection, design an algorithm that can recognize which movement OptimuS-VD is performing in
real time. A test sample of each movement is available to you to test your approach.
You can download the data using the Google Drive links on Canvas: either hw2datanpy.zip
for Python users, or hw2datamat.zip for MATLAB users.
These files contain two folders, train and test. Within each, every .npy or .mat file is a 114×100
matrix in the format explained above.
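
For Python users, a minimal loading sketch follows; the file layout and name pattern (a movement name such as walking appearing in each file name) are assumptions here, so adjust the glob pattern and label_map to whatever the unzipped train and test folders actually contain.

    import glob
    import os
    import numpy as np

    # Assumed layout: train/ holds .npy files whose names contain the
    # movement ("walking", "jumping", "running"); each file is a 114x100
    # array (38 x-, 38 y-, 38 z-coordinates over 100 timesteps).
    def load_samples(folder):
        label_map = {"walking": 0, "jumping": 1, "running": 2}
        samples, labels = [], []
        for path in sorted(glob.glob(os.path.join(folder, "*.npy"))):
            movement = next(m for m in label_map if m in os.path.basename(path))
            samples.append(np.load(path))          # shape (114, 100)
            labels.append(label_map[movement])
        return samples, np.array(labels)

    train_samples, train_labels = load_samples("train")
    # Stack side by side so rows are the 114 spatial coordinates and
    # columns run over time across all samples: shape (114, 100 * n).
    X_train = np.hstack(train_samples)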
Some comments and hints
Here are some useful comments and facts to guide you along the way.
1. sklearn has functionality for PCA and many other useful functions (e.g., computing accuracy) for
completing this homework. Using these can make your life easier.
2. Don’t forget to center X_train before computing the PCA modes if you plan to use SVD. If you are using
sklearn’s PCA function then you don’t need to worry about this, as it centers the data by default.
Make sure to check sklearn’s convention for the rows and columns of X_train and transpose if
necessary (see the sketch after this list).
3. Recall that the projection of a given sample z onto the first k PC modes is z_k = U_k^T z, where U_k^T is
the transpose of the U matrix from the SVD with the first k column vectors kept intact and the column
vectors from k + 1 onward set to zero (also illustrated in the sketch after this list).
4. The provided notebook will visualize a given sample in time as a skeleton in xyz coordinates. See also
the GIF file that shows the movements in time.
5. The data is a sub-sample of the CMU MoCap database http://mocap.cs.cmu.edu. There are additional
motion-capture datasets and benchmarks, such as Human3.6M (http://vision.imar.ro/human3.6m/description.php) and NTU RGB+D (https://github.com/shahroudy/NTURGB-D), etc.
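
The sketch below illustrates hints 2 and 3 on synthetic stand-in data (the random X_train replaces the real recordings); all variable names here are illustrative.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((114, 1500))  # stand-in: 15 samples x 100 timesteps

    # Hint 2: center across time before taking the SVD.
    Xc = X_train - X_train.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

    # Hint 3: project one sample z (114 coordinates at one timestep)
    # onto the first k PC modes: z_k = U_k^T z.
    k = 3
    z = Xc[:, 0]
    z_k = U[:, :k].T @ z                        # shape (k,)

    # sklearn equivalent: PCA expects samples as rows, so transpose.
    # fit_transform centers internally and returns the same coefficients
    # as U_k^T Xc (up to the sign of each mode).
    pca = PCA(n_components=k)
    coeffs = pca.fit_transform(X_train.T)       # shape (1500, k)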
Tasks
Below is a list of tasks to complete in this assignment and discuss in your report.
1. Compile all train samples into a matrix X_train and apply PCA such that the PCA modes are spatial
modes and the coefficients are time-dependent. Investigate how many PCA spatial modes
you need to keep to approximate X_train up to 70%, 80%, 90%, and 95% in Frobenius norm (i.e., energy).
Plot the cumulative energy to justify your results (the sketch after this list shows one way to compute it).
2. Truncate the set of PCA modes to 2 and 3 modes and plot the projected X_train in the truncated PCA
space as low-dimensional 2D (PC1, PC2 coordinates) and 3D (PC1, PC2, PC3 coordinates) trajectories.
Use colors for the different movements and discuss the visualization and your findings.
3. To classify each sample by movement type, establish the following ground truth. Create a
vector of ground-truth labels with an integer per class, e.g., 0 (walking), 1 (jumping), 2 (running), and
assign the appropriate label to each sample in X_train. Then, for each movement, compute its centroid
(mean) in k-modes PCA space.
4. Having the ground truth, perform the following training. Create another vector of trained labels. To
assign these labels, for each sample in X_train compute the distance between the projected point in
k-modes PCA space and each of the centroids. The minimal distance determines to which class
the sample belongs: assign the label of that centroid's class in the trained-labels
vector. Compute the trained labels for various values of the k-PCA truncation and report the
accuracy of the trained classifier (the percentage of samples for which the ground truth and the trained
labels match). You can use the accuracy_score function in sklearn for this purpose. Discuss your results
in terms of the optimal k for classifier accuracy. A sketch of this nearest-centroid pipeline follows this list.
5. To test how the classification performs on recognition of new samples, load the given test
samples and assign the ground-truth label to each. Predict the test labels by projecting onto k-PCA space
and computing the distance to the centroids. Report the accuracy of the classifier
on the test samples. Discuss and compare it with the training accuracy. Try various values of k.
6. Bonus (+2 points): Implement an alternative classifier based on the k-PCA space and compare with your
results above.
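
Below is a minimal sketch of the nearest-centroid pipeline from tasks 1, 3, and 4, run on synthetic stand-in data so it executes on its own. Representing each sample by the time-average of its k-mode coefficients is an assumption made here for illustration (flattening the k×100 coefficient matrix is another reasonable choice), and the loading sketch shown earlier would replace the random data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    labels = np.repeat([0, 1, 2], 5)            # 0 walking, 1 jumping, 2 running
    samples = [rng.standard_normal((114, 100)) + 3 * c for c in labels]
    X_train = np.hstack(samples)                # shape (114, 1500)

    # Task 1: cumulative energy of the PCA spectrum.
    pca = PCA().fit(X_train.T)                  # sklearn centers internally
    energy = np.cumsum(pca.explained_variance_ratio_)
    for thresh in (0.70, 0.80, 0.90, 0.95):
        print(f"{thresh:.0%}: {np.searchsorted(energy, thresh) + 1} modes")

    # Project a 114x100 sample into k-mode PCA space; the time-average of
    # its coefficients gives one point per sample (an illustrative choice).
    k = 3
    def project(sample):
        return pca.transform(sample.T)[:, :k].mean(axis=0)

    points = np.array([project(s) for s in samples])   # shape (15, k)

    # Task 3: centroid (mean) of each movement class in k-mode PCA space.
    centroids = np.array([points[labels == c].mean(axis=0) for c in (0, 1, 2)])

    # Task 4: assign each sample the label of its nearest centroid.
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    trained_labels = dists.argmin(axis=1)
    print("train accuracy:", accuracy_score(labels, trained_labels))

For the bonus, sklearn's NearestCentroid or KNeighborsClassifier fit on the same k-mode coefficients is a natural alternative to compare against.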