CPSC 501 Assignments 1 to 4


CPSC 501 Assignment 1: Refactoring

Instructions:

Find or create a running object-oriented Java program. You must have the proper approval to use this codebase if it isn't yours. The program you use should be poorly structured and able to be improved via refactoring. A recommendation is to pick code you have written in the past, avoiding GUI-based code.

If you decide to write new code for this exercise, some project suggestions are:
- Accounting or inventory systems for stores or other businesses
- Employee management systems
- Command-line games, e.g. tic tac toe

The program must consist of 5-10 classes that are coupled together to make the program function. These classes must also incorporate some sort of inheritance structure. For
example, there could be an interface that is implemented by two or more subclasses, or a
class that is extended by two or more child classes.

Upload this starting project as the initial codebase for a new GitLab project at gitlab-cpsc.ucalgary.ca. You will receive instructions on how to use this system during the first week of tutorials. Next, find aspects of the project that would benefit from refactoring and perform at least five unique refactorings (i.e. no two refactorings have the same name, so you can't just perform Rename Method five times). At least two of the refactorings must result in substantial changes to the internal design of the system.

Each refactoring should be tracked using separate Git commits, and at least one of
the two larger refactorings should be completed using branch and merge Git commands.
These commits may be done on the local repository, but must then be pushed to the
remote repository. Make sure you document each refactoring with a meaningful message in
the version control system.

You must also perform JUnit testing as you do the refactoring. You will likely want
to add or modify tests as you refactor your code into smaller methods. The testing code
must also be kept under version control. This unit testing doesn't have to cover the entire project – a recommendation is to choose a couple of operations from the initial project, design simple unit tests for them, and then refactor the code relating to these tested operations.

This way, the unit tests provide feedback about whether your refactorings are preserving
functionality, without you having to write unit tests for the entire project.
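For instance, an Extract Method refactoring paired with a behaviour-pinning test might look like the following sketch. The class, the method names, and the 50% discount rule are all invented for illustration; in practice the assertion would live in a JUnit @Test method rather than in main:

```java
// Hypothetical Extract Method example: the discount logic inside total()
// is pulled out into applyDiscount(), which a unit test can then exercise
// directly. All names and the discount rule are invented.
public class InvoiceDemo {
    public static double total(double[] lineItems, boolean loyalCustomer) {
        double sum = 0.0;
        for (double item : lineItems) sum += item;
        // Before the refactoring, this discount branch lived inline here.
        return applyDiscount(sum, loyalCustomer);
    }

    // Extracted method: small, named, and independently testable.
    static double applyDiscount(double subtotal, boolean loyalCustomer) {
        return loyalCustomer ? subtotal * 0.5 : subtotal;
    }

    public static void main(String[] args) {
        // The test pins down behaviour before refactoring and must still pass after.
        System.out.println(total(new double[]{10.0, 20.0}, true)); // prints 15.0
    }
}
```

Running the same assertion before and after each refactoring is exactly the feedback loop described above.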

Finally, you need to complete a written report that describes how you did your refactoring. For each of the five refactorings, your report should answer the following questions:
1. What code in which files was altered? Don’t include entire files, just the code snippets
that were relevant to the refactoring. Line numbers alone aren’t sufficient.

2. What needed to be improved, and why? List any “bad code smells” detected.
3. Which refactoring was applied? What steps did you follow? Use the terminology and
mechanics discussed in class or in the Fowler text.

4. What code in which files was the result of the refactoring? See point 1.
5. How was the code tested? Which JUnit test methods were applicable?
6. Why is the code better structured after the refactoring? This could be addressed
simultaneously with point 2.

Use SHA numbers to cross-reference your commits as you describe each refactoring.
Also make sure you indicate the refactoring that was required to use branch and merge.

In addition, at the beginning of the written report, you need to include directions for
the TA to access your GitLab project. This is how they will be able to access your
code, unit tests, and commit history, so double-check this works correctly before submitting.

The expected length of the report is about 2–3 pages with standard font and margins,
but there are no strict requirements. It is essential, however, that your report is clear, easy
to read, thorough, and written in complete sentences.

Submission instructions:
Upload your written report as a PDF file to the Assignment 1 dropbox on D2L by 23:59
on October 9th. The TAs will use the instructions in your report to access your GitLab
project, through which they will grade your submission.

Rubric (100 pts total):
- Version control: Used Git/GitLab properly; multiple small commits with informative messages (25 pts)
- Branch/merge: One branch and merge operation used for a larger refactoring (5 pts)
- Unit testing: Tests are applicable to refactorings and test robustly for multiple points of failure (15 pts)
- Refactorings: Evidence in version control and report of five clear and systematic refactorings. Two of these refactorings result in larger structural changes (25 pts)
- Report: Description of each of the five refactorings answering provided questions. Report is thorough and written in full sentences. (25 pts)
- Communication: Clear, working instructions on how to access GitLab project. (5 pts)
Note that for this assignment, your work will be graded by your own lab TA. If you
have questions about the requirements or what is permitted, it is recommended you consult
with them directly during tutorial or through Piazza.

 

CPSC 501 Assignment 2: FFT Optimization

Instructions:

The goal of this assignment is to optimize, in stages, the performance of a convolution reverb program. Convolution reverb is an audio digital signal processing
technique where a “dry” recording of an instrument (i.e. a recording without reverberation) is convolved with the impulse response of an acoustical space such as
a concert hall.

The result of the convolution is a sound file where the instrument
sounds as if it were playing in the actual concert hall. This is a commonly used,
but computationally intensive, technique for adding natural-sounding reverberation
to recorded sounds.

You are to create a command-line program that takes in a dry recording and an
impulse response, and produces the convolved signal. It should be invoked from the
command line as follows:

convolve inputfile IRfile outputfile
where convolve is the name of your program, inputfile is the dry recording in .wav
format, IRfile is the impulse response in .wav format, and outputfile contains the
convolved signal in .wav format. All .wav files should be monophonic, 16-bit, 44.1
kHz sound files (at least initially, before attempting the bonus part of the assignment).

Sample audio and impulse response files are available on D2L.
As in the first assignment, you will be using GitLab to maintain version control
and to share your final project with the TAs. Your assignment should be kept in a
GitLab repository titled “CPSC 501 A2”.

Baseline program: Create an initial version of your program where the convolution is implemented directly in the time domain. You will find this version of
your program quite slow. Measure the run-time performance of your program using
a dry recording that is at least thirty seconds long and an impulse response that is at
least 2 seconds long.

You will reuse these same inputs for timing measurements after
each optimization that you do in the later stages of the assignment. There are many
suitable dry recordings and impulse responses available on the Internet as well as on
the course D2L site. You can find various utility programs online to convert sound
files of different types (e.g. .aiff or .snd) to the .wav format.
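For reference, direct time-domain convolution computes y[n] = sum over k of x[k] * h[n - k], producing N + M - 1 output samples from an N-sample input and an M-sample impulse response. A minimal sketch of the baseline loop (shown in Java here, though any language is acceptable):

```java
// Direct time-domain convolution: the O(N*M) double loop is exactly why
// the baseline version is slow on a 30-second input and 2-second IR.
public class DirectConvolve {
    public static double[] convolve(double[] x, double[] h) {
        double[] y = new double[x.length + h.length - 1];
        for (int n = 0; n < x.length; n++)
            for (int m = 0; m < h.length; m++)
                y[n + m] += x[n] * h[m];
        return y;
    }

    public static void main(String[] args) {
        double[] y = convolve(new double[]{1, 2, 3}, new double[]{1, 1});
        for (double v : y) System.out.print(v + " "); // prints 1.0 3.0 5.0 3.0
    }
}
```

At 44.1 kHz, the loop above runs roughly N*M multiply-adds (on the order of 10^11 for the suggested input sizes), which is what your baseline timing measurement will reflect.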

Although the program may be implemented in any programming language, it
would be best to use a language supported by the GCC compiler, since the gprof
profiler is what we will be using in class to work through examples in C++ (also note
the required compiler optimization). The GCC compiler is available on the CPSC
servers, and can easily be installed on a personal machine.

Algorithmic optimization: Create a second version of your program where you re-implement the convolution using a frequency-domain convolution algorithm. A handout will be provided summarizing the approach discussed in lecture. Measure the run-time performance of this second version of your program using the same inputs that you used for the baseline program. Be sure to use version control, profiling, and regression testing as part of a disciplined process of optimization.
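As a rough illustration of the frequency-domain approach – zero-pad both signals to a power-of-two length of at least N + M - 1, transform, multiply pointwise, and inverse-transform – here is a sketch built on a textbook radix-2 Cooley-Tukey FFT. The handout's exact algorithm may differ; this is not tuned code:

```java
// Frequency-domain convolution sketch: replaces the O(N*M) time-domain loop
// with O(L log L) work, where L is the padded transform length.
public class FftConvolve {
    // Recursive radix-2 FFT on separate real/imaginary arrays.
    // Length must be a power of two; invert=true gives the inverse transform
    // (the 1/L scaling is applied once, in convolve below).
    static void fft(double[] re, double[] im, boolean invert) {
        int n = re.length;
        if (n == 1) return;
        double[] evR = new double[n / 2], evI = new double[n / 2];
        double[] odR = new double[n / 2], odI = new double[n / 2];
        for (int i = 0; i < n / 2; i++) {
            evR[i] = re[2 * i];     evI[i] = im[2 * i];
            odR[i] = re[2 * i + 1]; odI[i] = im[2 * i + 1];
        }
        fft(evR, evI, invert);
        fft(odR, odI, invert);
        double ang = 2 * Math.PI / n * (invert ? 1 : -1);
        for (int k = 0; k < n / 2; k++) {
            double wr = Math.cos(ang * k), wi = Math.sin(ang * k);
            double tr = wr * odR[k] - wi * odI[k];
            double ti = wr * odI[k] + wi * odR[k];
            re[k] = evR[k] + tr;         im[k] = evI[k] + ti;
            re[k + n / 2] = evR[k] - tr; im[k + n / 2] = evI[k] - ti;
        }
    }

    public static double[] convolve(double[] x, double[] h) {
        int outLen = x.length + h.length - 1;
        int n = 1;
        while (n < outLen) n <<= 1; // next power of two >= N + M - 1
        double[] xr = new double[n], xi = new double[n];
        double[] hr = new double[n], hi = new double[n];
        System.arraycopy(x, 0, xr, 0, x.length);
        System.arraycopy(h, 0, hr, 0, h.length);
        fft(xr, xi, false);
        fft(hr, hi, false);
        // Pointwise complex multiplication X[k] * H[k].
        for (int k = 0; k < n; k++) {
            double r = xr[k] * hr[k] - xi[k] * hi[k];
            double i = xr[k] * hi[k] + xi[k] * hr[k];
            xr[k] = r;
            xi[k] = i;
        }
        fft(xr, xi, true);
        double[] y = new double[outLen];
        for (int k = 0; k < outLen; k++) y[k] = xr[k] / n; // inverse-FFT scaling
        return y;
    }
}
```

Comparing this routine's output against the direct time-domain result on small test vectors (within a floating-point tolerance) is a useful regression check before running it on full-length audio.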

Compiler-level optimization and code-tuning: Use compiler-level optimization and manual code tuning to further optimize the performance of your program.

Do your improvements step by step, measuring and testing at each stage. Be sure
to test, profile, and commit your code each time you make a change. Use at least 4
different manual code-tuning techniques and at least one compiler optimization.

Report: Create a formal written report that describes how you optimized your
code at each step. You must show each version of your program, and describe what
changes you made at every stage of the process. Include relevant code excerpts to
illustrate the changes you made.

You must also quantify the improvements with your
timing measurements, and describe the regression tests you performed. Use tables
and/or graphs to help illustrate how you improved performance at each stage of your
work. In addition, at the beginning of the written report, you need to include directions for the TA to access your GitLab project.

This is how they will be able
to access your code and commit history, so double-check this works correctly before
submitting. The expected length of the report is about 2–3 pages (not including code
excerpts) with standard font and margins, but there are no strict requirements. It is
essential, however, that your report is clear, easy to read, thorough, and written in
complete sentences.

Notes on regression testing: Rather than writing unit tests for this assignment
and comparing the output to some prepared standard, you will be using regression
testing to ensure your program remains correct after each optimization step. To
do this, you will need to keep the output from your initial baseline program for a
particular pair of sound/IR input files.

You should be able to check whether your baseline output is correct by simply listening to the output file. Then, after each optimization, compare this baseline output to the output of your modified program on the same input files. If the two outputs are identical, you have confirmed that your optimizations have left the program's function unchanged. My recommendation would be to write a bash script that performs the comparison automatically, and to include this script in your version control and report.
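If you prefer to keep the comparison in Java rather than bash, a minimal byte-for-byte check might look like this (file names are placeholders):

```java
// Regression check sketch: compares the saved baseline output file
// byte-for-byte against the output of the optimized build.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class RegressionCheck {
    // Returns true only if both files exist and have identical contents.
    public static boolean identical(String fileA, String fileB) {
        try {
            byte[] a = Files.readAllBytes(Paths.get(fileA));
            byte[] b = Files.readAllBytes(Paths.get(fileB));
            return Arrays.equals(a, b);
        } catch (java.io.IOException e) {
            return false; // treat an unreadable file as a mismatch
        }
    }

    // Helper so the check can be scripted from other code.
    static void write(String path, byte[] data) {
        try {
            Files.write(Paths.get(path), data);
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        if (args.length != 2) {
            System.err.println("usage: java RegressionCheck baseline.wav optimized.wav");
            return;
        }
        System.out.println(identical(args[0], args[1])
                ? "PASS: outputs identical" : "FAIL: outputs differ");
    }
}
```

Note that aggressive floating-point optimizations can change low-order bits of the samples; if an exact comparison starts failing for that reason, a sample-by-sample comparison within a small tolerance is a reasonable fallback to describe in your report.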

Bonus (up to 10%): Elaborate your program so that it can handle stereo (i.e.
2-channel) impulse response files, and produce the appropriate stereo (2-channel) output file. In other words, your program will convolve a monophonic dry input sound
with a stereo impulse response, and output a stereo sound file.

Your program should
be able to recognize automatically if the impulse response file has one or two channels.
You can either implement the bonus as part of your baseline program, or after all
optimizations. If you choose to implement the bonus feature, you need to indicate
this clearly in your report. Sample stereo IR files are available on D2L.

Submission instructions:
Important: If you are in T05 (Chris’ Monday morning tutorial), submit your assignment to Navid, who will be doing your grading. If you are in a different tutorial
section, submit to your own TA.

Upload your written report as a PDF file to the Assignment 2 dropbox on D2L
by 23:59 on November 6th. Make sure you add the correct TA to your GitLab
project with reporter access using their email given on D2L. The TA will use the
instructions in your report to access your GitLab project, through which they will
grade your submission.

Rubric (100 pts total):
- Version control: Used Git/GitLab properly; multiple small commits with informative messages (5 pts)
- Profiling: Tests are run after each improvement and the results are documented (5 pts)
- Regression testing: Tests are run to make sure correctness is preserved between changes (5 pts)
- Baseline program: Unoptimized program correctly performs convolution reverb in the time domain (20 pts)
- Optimizations: Evidence in version control and report of five clear and systematic optimizations. The first of these must be an algorithmic optimization, implementing the FFT. The remainder must contain a compiler optimization and at least four distinct code-tuning optimizations (20 + 5 + 20 = 45 pts)
- Report: Description of each of the optimizations described above, with appropriate code excerpts shown. Report is thorough and written in full sentences. (15 pts)
- Logistics: Clear, working instructions on how to access GitLab project. Program can be run from the command line using the specified instruction (5 pts)
- Bonus: Solution can detect and handle stereo impulse response files (10 pts)

CPSC 501 Assignment 3: Object Introspection

Instructions:

The goal of this assignment is to create a reflective object inspector that does a
complete introspection of an object at runtime. The inspector will be implemented
in a Java class called Inspector, and will be invoked using the method:
public void inspect(Object obj, boolean recursive).

This method will perform introspection on the argument obj, printing what it finds
to standard output. You should find and display the following information about the
object:
1. The name of the declaring class
2. The name of the immediate superclass*
   - Always explore the superclass immediately and recursively, even if recursive is false
3. The name of each interface the class implements*
   - Always explore all interfaces immediately and recursively, even if recursive is false
4. The constructors the class declares. For each constructor, give the following information:
   - Name
   - Modifiers
   - Parameter types
   - Exceptions thrown
5. The methods the class declares. For each method, give the following information:
   - Name
   - Modifiers
   - Parameter types
   - Return type
   - Exceptions thrown
6. The fields the class declares. For each field, give the following information:
   - Name
   - Type
   - Modifiers
   - Current value
     - If the field is a primitive data type, simply print the value
     - If the field is an object reference and recursive is set to false, then simply print the reference value directly. This will consist of the name of the object's class plus the object's identity hash code (e.g. java.lang.Object@7d4991ad)
     - If the field is an object reference and recursive is set to true, then immediately recurse on the object

(*) You must always traverse the inheritance hierarchy to find the same information about all superclasses and interfaces declared.

You should progress all the
way up the hierarchy to Object. You will notice that you may end up visiting certain
classes (such as Object) multiple times – this is expected. Each time you encounter a
superclass or interface, perform the complete recursion. This will result in potentially
printing all of the information for a class multiple times.
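As a sketch of the reflective calls involved (output formatting, field values, and interface recursion omitted – the real Inspector must do considerably more):

```java
// Minimal introspection sketch: prints one level of information for each
// class while climbing the superclass chain up to Object.
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class InspectSketch {
    public static void inspect(Object obj) {
        for (Class<?> c = obj.getClass(); c != null; c = c.getSuperclass()) {
            System.out.println("CLASS: " + c.getName());
            for (Class<?> i : c.getInterfaces())
                System.out.println("  interface: " + i.getName());
            for (Constructor<?> ctor : c.getDeclaredConstructors())
                System.out.println("  constructor: " + ctor.getName()
                        + " params=" + java.util.Arrays.toString(ctor.getParameterTypes()));
            for (Method m : c.getDeclaredMethods())
                System.out.println("  method: " + m.getName()
                        + " returns " + m.getReturnType().getName()
                        + " [" + Modifier.toString(m.getModifiers()) + "]");
            for (Field f : c.getDeclaredFields())
                System.out.println("  field: " + f.getName()
                        + " : " + f.getType().getName()
                        + " [" + Modifier.toString(f.getModifiers()) + "]");
        }
    }

    public static void main(String[] args) {
        inspect("hello"); // walks String, then Object
    }
}
```

Note how Modifier.toString converts the packed integer from getModifiers() into readable text, as required below.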

Arrays: Be sure you can also handle any array object you might encounter. This
could be either the starting object or from a field. In addition to the regular name,
type and modifiers, you must print out its component type, length, and the values of
all entries. You can assume that array fields will be limited to one dimension.
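The array-specific information comes from java.lang.reflect.Array, which works uniformly for primitive and reference component types. A minimal sketch:

```java
// Array introspection sketch: component type, length, and element values
// are all obtained reflectively, without casting to a concrete array type.
import java.lang.reflect.Array;

public class ArrayInspectSketch {
    public static String describe(Object arr) {
        Class<?> c = arr.getClass();
        StringBuilder sb = new StringBuilder();
        sb.append("component type: ").append(c.getComponentType().getName());
        int len = Array.getLength(arr);
        sb.append(", length: ").append(len).append(", values: [");
        for (int i = 0; i < len; i++) {
            if (i > 0) sb.append(", ");
            sb.append(Array.get(arr, i)); // boxes primitives automatically
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(new int[]{4, 5, 6}));
        // prints: component type: int, length: 3, values: [4, 5, 6]
    }
}
```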

Infinite recursion: It is possible to define objects that end up in infinite recursion if the recursive method argument is enabled. However, you do not need to
design your code to detect or escape circular class references. The driver program for
evaluating your assignment will not test objects with this circular reference behaviour.

Formatting: Please indent each class recursed into by one tab of depth, and indicate clearly whenever you enter a new class. It is also helpful to print a header naming the class whose fields, methods, and constructors are currently being listed; the appropriate header would then be displayed each time you enter or leave a recursion level. You can judge what looks best in terms of output formatting, but make sure it is easy for the TAs to read and grade.

Refactoring: At some point in the development of your assignment code, you
need to perform two distinct refactorings. Any refactorings from the course list or
from Fowler’s textbook are accepted. Write up these refactorings in a similar format
to assignment 1 and include them in your report.

Other requirements:

- You should have descriptive output. For example, you should not use toString() for Field/Method or Class to get information. You will need to use the reflective API methods discussed in class to pull out the required information and print more descriptive explanatory lines.
- When printing modifiers, you must convert the returned integer information into descriptive text information such as public/private/final/static/transient/etc.
- A Driver program is provided on D2L that creates objects to inspect and then invokes your inspect method on each. This driver will output eight different script*.txt files, each corresponding to a different object.
- During the marking process, your TA will compile and run your code to verify that everything works.
- Remember to use version control and refactoring as discussed, as part of your coding process.

As in the previous assignments, you will be using GitLab to maintain version
control and to share your final project with the TAs. Your assignment should be
kept in a GitLab repository titled CPSC 501 A3. As you develop your code, make
sure to use proper version control practices, making regular commits with descriptive
messages.

Report: Create a written PDF report that describes your two refactorings in a
similar format to assignment 1. Remember to include the name of the refactoring,
reasons for making it, and before-and-after code excerpts. The justification can be
brief – just a sentence or two is sufficient.

The report should also include directions for the TA to access your GitLab
project. This is how they will be able to access your code and commit history, so
double-check this works correctly before submitting. Make sure to indicate in the
report whether you decided to implement the bonus part of the assignment. You may
also include any information (known bugs, etc.) that you think will be useful to the
TAs when grading.

Bonus – dynamic loading (up to 10%): Create and submit your own, more
advanced, driver program DriverBonus.java. This driver program should take three
command line arguments: (1) the name of a class containing the inspect method, (2)
the name of a class to inspect, and (3) a boolean for recursive. This driver program
should use reflection to load the class indicated as the first command line argument.

This loaded class should then be used to run the inspect(Object, boolean) method
against a new instance of the class indicated as the second command line argument.
Recursion behaviour is indicated by the third command line argument. This bonus
should function even if the class containing the inspect(Object, boolean) IS NOT
in the project code you have written.

The TA should be able to take some other
differently named class containing the method and introduce it into the classpath of
your code. Through the three command line arguments the TA should be able to use
your dynamic loading driver to run inspect(Object,boolean) on any object that can
be instantiated by a constructor with no arguments. Design your bonus to handle invalid argument input and to gracefully handle any errors if the reflection fails to find the classes indicated as command-line arguments. A bonus that doesn't handle exceptions and bad input will not get full marks.
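The dynamic-loading pattern might be sketched as follows, using a JDK class as a self-contained stand-in for the command-line arguments (the real DriverBonus would pass the inspector class name, the target class name, and the recursive flag instead):

```java
// Dynamic loading sketch: load a class by name, instantiate it via its
// no-argument constructor, and invoke a named method reflectively, with
// the error handling the bonus description calls for.
import java.lang.reflect.Method;

public class DynamicLoadSketch {
    public static Object loadAndInvoke(String className, String methodName) {
        try {
            Class<?> loaded = Class.forName(className);
            Object instance = loaded.getDeclaredConstructor().newInstance();
            Method m = loaded.getMethod(methodName);
            return m.invoke(instance);
        } catch (ClassNotFoundException e) {
            System.err.println("No such class: " + e.getMessage());
            return null;
        } catch (ReflectiveOperationException e) {
            System.err.println("Reflection failed: " + e);
            return null;
        }
    }

    public static void main(String[] args) {
        // Stand-ins for the command-line arguments.
        System.out.println("size() returned "
                + loadAndInvoke("java.util.ArrayList", "size")); // prints 0
        loadAndInvoke("no.such.Class", "toString"); // exercises the error path
    }
}
```

Because Class.forName searches the classpath at runtime, the same code works for an inspector class the TA drops in later, which is the point of the bonus.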

Submission instructions:
Upload your written report as a PDF file to the Assignment 3 dropbox on D2L
by 23:59 on November 20th. Make sure you add your own TA to your GitLab
project with reporter access using their email given on D2L. The TA will use the
instructions in your report to access your GitLab project, through which they will
grade your submission.

Rubric (100 pts total):
- Version control: Used Git/GitLab properly, making multiple small commits with informative messages (5 pts)
- Refactoring: Performed two refactorings to improve the code structure, which are clearly written up in the report. (5+5 pts)
- Introspection: Program correctly displays the following information.
  - Class: full name (2 pts)
  - Superclass: full name (2 pts)
  - Interfaces: full names (2 pts)
  - Fields: name, modifiers, type (6 pts)
  - Field data: value or reference or null (6 pts)
  - Constructors: modifiers, parameter types, exceptions thrown (10 pts)
  - Methods: name, modifiers, parameter types, return type, exceptions thrown (10 pts)
  - Super recursion: inspects interfaces and parent classes (10 pts)
  - Arrays: handles 1-dimensional arrays with required info (10 pts)
  - Formatting: tabbing, spacing, clarity of description (6 pts)
  - Recursion: handles recursive = true by performing introspection on all sub-objects (16 pts)
- Logistics: Clear, working instructions on how to access GitLab project, submitted as a PDF report. Program can be run from the command line using the specified instruction (5 pts)
- Bonus: Solution uses reflection to dynamically load classes as described (10 pts)

CPSC 501 Assignment 4: Neural nets

Instructions:

The goal of this assignment is to give you some practice working with neural nets.
The assignment consists of three parts, corresponding to three different data sets.

You’ll be experimenting with variations of a neural net to solve classification problems, and writing up your results in a report.

For all parts of this assignment, you may use the neural network code provided on D2L as network.py. This code is taken from Michael Nielsen's Neural Networks and Deep Learning (2015) textbook, which is linked on D2L. The web version is available for free, and is a recommended resource if you want more details on how it works.

All data sets and starter code files are included in the Assignment4.zip file provided on D2L.
Package management: It is recommended that you use a Python virtual environment to manage any external packages you use for the assignment. Instructions for how to set this up can be found at https://docs.python.org/3/tutorial/venv.html.

The following packages will be needed for Assignment 4:
- numpy – scientific computing library for Python (see https://numpy.org/install/)
- matplotlib – allows easy plotting of data and depiction of images
- idx2numpy – useful for reading the MNIST image files

Part 1: MNIST Dataset: The home page of the MNIST database is http://yann.lecun.com/exdb/mnist/. The data set is divided into a set of 60,000 training examples and a set of 10,000 test examples, each consisting of separate files for the images and their labels. Details on the file format can be found at the bottom of the MNIST web page, and will also be covered in tutorial. Note that Nielsen's textbook trains the net on only 50,000 samples, but you should use all 60,000.

In the starter code, you are provided with hyperparameters for the neural net
that will result in roughly 85% accuracy on the test data. Train the neural net with
these hyperparameters, and record the results in your report. Next, experiment by
adjusting the hyperparameters to achieve an accuracy of 95% or higher on the test
data. Record the data for at least two of these neural nets in your report. The neural
nets you record should be in order of increasing accuracy, and the final neural net
should have an accuracy of at least 95%. Save your final trained net with the filename “part1.pkl”, and include this in your repository.

You may also make changes
to improve the speed at which your neural net can be trained, as long as it does not
decrease accuracy.
Write a brief paragraph explaining and justifying the changes you made, including relevant code excerpts.

You may adjust any part of the code, as long as you can
justify why you did so. In your report, it should be clear what changes you made, as
well as how/why they improve the net’s performance.

In your report for this part, include three images that your final neural net failed
to classify correctly. Include the image, the correct label, and the label output by
your neural net. You can obtain images by index from the testing data using the
imageViewer.py code provided on D2L. The TAs will cover how to do this in tutorial.
To do this, it may be useful to save your neural net to a .pkl file after training, and
then load it in a separate program to find the testing samples where it fails, and to
obtain the images for those particular samples.

Part 2: notMNIST Dataset: The machine learning community is a bit sick
of seeing MNIST digits pop up everywhere, so they created a similar dataset and
named it notMNIST. Created by Yaroslav Bulatov, a research engineer previously at
Google and now at OpenAI, notMNIST is designed to look like the classic MNIST
dataset, but less ‘clean’ and extremely ‘cute’. The images are still 28×28 and there
are also 10 labels, representing letters ‘A’ to ‘J’. The homepage for the dataset is
http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html.

Your task for this dataset is the same as for the MNIST. In this case, however,
your starter code hyperparameters will yield an initial accuracy of roughly 60% on
the test data. Follow the same instructions as above to reach an accuracy of over
90%. Save your final trained net with the filename “part2.pkl”, and include this in
your repository.

Write a brief paragraph explaining and justifying the changes you made, including
relevant code excerpts. You may adjust any of the hyperparameters mentioned. In
your report, it should be clear what changes you made, as well as how/why they
improve the net’s performance.

In addition to this information, include a short paragraph comparing your results from this dataset to your results from the original MNIST dataset. Note that
even though the datasets are similar, it is much harder to get a high accuracy for
notMNIST. Why do you think this is the case? Include a brief explanation.

Part 3: Coronary Heart Disease Dataset: The file heart.csv contains data taken from https://web.stanford.edu/~hastie/ElemStatLearn//. Each entry lists a series of observed health factors, followed by a boolean indicating the presence of coronary heart disease. Your goal is to use the first 9 variables to predict the last. Useful information on the dataset can be found in heartmeta.txt.

You will need to load the data into feature vectors yourself, and divide it into
training and testing sets. When converting to feature vectors, you will need to convert family history to a boolean, and rescale the age to have maximum value 1. All
other variables should be converted to z-scores using the mean and standard deviation
calculated from the dataset. These values are provided in heartmeta.txt so you can
check your results, but you should calculate them yourself in your code.
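The z-score arithmetic can be sketched as follows. It is shown in Java purely for illustration (your assignment code will be Python); the sample numbers are invented, and population standard deviation is assumed, so check against heartmeta.txt to confirm which convention matches:

```java
// Feature-scaling sketch: z = (x - mean) / stdDev, with the mean and
// standard deviation computed from the dataset itself.
public class ZScoreDemo {
    public static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    public static double stdDev(double[] xs) {
        double m = mean(xs), s = 0;
        for (double x : xs) s += (x - m) * (x - m);
        return Math.sqrt(s / xs.length); // population standard deviation
    }

    public static double[] zScores(double[] xs) {
        double m = mean(xs), sd = stdDev(xs);
        double[] z = new double[xs.length];
        for (int i = 0; i < xs.length; i++) z[i] = (xs[i] - m) / sd;
        return z;
    }

    public static void main(String[] args) {
        // Invented sample column: mean 5, population std dev 2.
        double[] z = zScores(new double[]{2, 4, 4, 4, 5, 5, 7, 9});
        System.out.printf("first z-score: %.1f%n", z[0]); // prints -1.5
    }
}
```

The family-history boolean and the age rescaling (divide by the maximum age) follow the same per-column pattern, just with different arithmetic.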

Your task for this dataset is similar to MNIST and notMNIST, but will be more
of a challenge. Your starter code should give an accuracy of about 67–70%. Follow
the same instructions as for MNIST and notMNIST to find a net that achieves an
accuracy of over 72%. Save your final trained net with the filename “part3.pkl”, and
include this in your repository. One problem you will notice when trying to improve
the accuracy for this model is overfitting due to the small number of data samples
(although fortunately, this makes the training fast to run).

As in the previous two parts, write a brief paragraph explaining and justifying
the changes you made, including relevant code excerpts. You may adjust any of the
hyperparameters mentioned. In your report, it should be clear what changes you
made, as well as how/why they improve the net’s performance.

Bonus (15%) For each part, you will receive a bonus of 5% if your final neural
net achieves the performances below. To do this, you will need to make adjustments
that are beyond what we have discussed in lecture. Recommended reading would be
http://neuralnetworksanddeeplearning.com/chap3.html. You are free to make
any modifications to network.py, but you still need to justify all changes you make in
your report.
- Part 1: 98% or higher
- Part 2: 93% or higher
- Part 3: 75% or higher

Version control: As in the previous assignments, you will be using either GitLab or GitHub to maintain version control and to share your final project with the
TAs. Your assignment should be kept in a repository titled CPSC 501 A4. As you
develop your code, make sure to use proper version control practices, making regular
commits with descriptive messages. This includes keeping a local copy of your
assignment (including the git repository) on your own computer, just in case 🙂

Report: Create a written PDF report that discusses the variations you tried and
their performances. For each neural net you train, you need to keep track of
1. the hyperparameters used to train the net (number of layers, number of neurons
in each layer, epochs, step size, batch size, activation function)
2. its performance on training data after each epoch
3. the time required to train the net

Make sure all of these factors are included in your report. Note that you will need to
add the timing code yourself to track how long the net.SGD() function takes.
The report should also include directions for the TA to access your project.

This is how they will be able to access your code and commit history, so double-check
this works correctly before submitting. Make sure to indicate in the report whether
you decided to implement the bonus part of the assignment. You may also include any
information (known bugs, etc.) that you think will be useful to the TAs when grading.
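Since the starter code is Python, the actual timing would wrap the net.SGD(...) call with something like time.perf_counter(); the pattern itself is language-agnostic and is sketched here in Java with an invented stand-in for the training call:

```java
// Timing sketch: record a monotonic clock before and after the call being
// measured and report the difference. train() is a stand-in workload.
public class TimingDemo {
    static void train() { // placeholder for the net.SGD(...) call
        double x = 0;
        for (int i = 0; i < 1_000_000; i++) x += Math.sin(i);
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        train();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("training took " + elapsedMs + " ms");
    }
}
```

Report the measured time alongside each net's hyperparameters so the speed/accuracy trade-offs in your report are easy to compare.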

Submission instructions:
Important: If you are in T05 (Chris’ Monday morning tutorial), submit your assignment to Bardia, who will be doing your grading. If you are in a different tutorial
section, submit to your own TA.

Upload your written report as a PDF file to the Assignment 4 dropbox on D2L
by 23:59 on December 9th. Make sure you add the correct TA to your GitLab
or GitHub repository with the correct access level using their email given on
D2L. The TA will use the instructions in your report to access your project, through
which they will grade your submission.

Rubric (100 pts total):
- Version control: Used Git properly, making multiple small commits with informative messages (5 pts)
- Part 1:
  - Input is correctly handled, including construction of feature vectors and division into training/testing data. (10 pts)
  - At least three neural nets are described, including the one given in the starter code. Description in the report is clear, with appropriate code excerpts. Choice of hyperparameters is well justified and leads to improvements in performance. (10 pts)
  - Final neural net has accuracy at least 95% (5 pts)
  - Three misclassified MNIST images are included, along with both the incorrect outputs and the correct labels from the dataset (5 pts)
  - (Bonus) Final neural net has accuracy at least 98% (5 pts)
- Part 2:
  - Input is correctly handled, including construction of feature vectors and division into training/testing data. (10 pts)
  - At least three neural nets are described, including the one given in the starter code. Description in the report is clear, with appropriate code excerpts. Choice of hyperparameters is well justified and leads to improvements in performance. (10 pts)
  - Final neural net has accuracy at least 90% (5 pts)
  - Comparison is made to MNIST dataset, with reasonable conclusions drawn (5 pts)
  - (Bonus) Final neural net has accuracy at least 93% (5 pts)
- Part 3:
  - Input is correctly handled, including construction of feature vectors and division into training/testing data. (10 pts)
  - At least three neural nets are described, including the one given in the starter code. Description in the report is clear, with appropriate code excerpts. Choice of hyperparameters is well justified and leads to improvements in performance. (15 pts)
  - Final neural net has accuracy at least 72% (5 pts)
  - (Bonus) Final neural net has accuracy at least 75% (5 pts)
- Logistics: Clear, working instructions on how to access GitLab/GitHub project, submitted as part of the PDF report. (5 pts)