CSCI4145/5409: Docker Assignment
This assignment will measure your understanding of containerization, and specifically
containerization done through Docker. The task you are asked to do is not complicated programmatically; however, it should assess whether you have met the learning outcomes of understanding Docker and its usage. This assignment assures us that you have attended the Docker tutorials and learned its usage, or found some other way to learn Docker.
Learning Outcomes
• You have a successful working install of Docker
• How to build a container
• How to open ports and communicate with other containers through a docker network
• Using JSON as a text-based data interchange format
• Making small webservices with existing official Docker images
• Creating Dockerfiles and learning Docker commands used in container app development
• Using docker compose to build multi-container microservice based architectures
• Developing the courage to dive into complicated cloud computing tools
Requirements
You will build two simple webapp containers that communicate with each other through a docker network to provide more complex functionality: a very small microservice architecture. When you are finished, your system will look and function like this:
[System diagram: JSON input goes to Container 1, which cleans and validates the input, sends it to Container 2, and returns Container 2's response as the JSON output. Container 2 loads data from a disk volume (a directory on my machine, mounted as a volume, containing a text file), looks up an entry, and returns the result.]
JSON Input
Your first container will receive JSON with the following format:
{
"word": "PoTaTo"
}
The intent of the message is for your microservice architecture to look up the definition for the
word passed in.
JSON Output
If the word provided via the input JSON is found in the dictionary, the definition is returned:
{
"word": "PoTaTo",
"definition": "a starchy plant tuber which is one of the most important food crops, cooked and eaten as a vegetable."
}
If the word is provided, but not found in the dictionary, this message is returned:
{
"word": "pota",
"error": "Word not found in dictionary."
}
If the word is not provided, an error message is returned:
{
"word": null,
"error": "Invalid JSON input."
}
Container 1
Your first container’s role is to serve as an orchestrator and gatekeeper, making sure that the
input into the system is clean and valid. It must:
1. Listen on exposed port 5000 for JSON input sent via an HTTP POST to “/definition”, e.g.
“https://localhost:5000/definition”
2. Validate the input JSON to ensure a word was provided; if the "word" parameter is null, return the invalid JSON input result.
3. Clean the input JSON to ensure the word passed to container 2 does not have any extra
spaces, and is in a consistent format:
a. Trim whitespace from the start and end of the word
b. Convert the word to all lowercase
4. Send the "word" parameter to container 2 (you don't have to use JSON to do this, do it however you like, but I recommend JSON) and return the response from container 2. A rough sketch of this container follows below.
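To make the expected behaviour concrete, here is a minimal, non-authoritative sketch of container 1 in Python with Flask (Flask, the requests library, the service name "container2", its endpoint "/lookup" and its port 5001 are all assumptions for illustration; you may use any language or framework you like):

# app1.py - minimal sketch of container 1 (assumes Flask and requests are installed)
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

# Assumed address of container 2 on the docker network; the service name, endpoint and port are yours to choose
CONTAINER2_URL = "http://container2:5001/lookup"

@app.route("/definition", methods=["POST"])
def definition():
    body = request.get_json(silent=True) or {}
    word = body.get("word")
    # Validation: if no usable "word" was provided, return the invalid JSON input response
    if word is None or not str(word).strip():
        return jsonify({"word": None, "error": "Invalid JSON input."})
    # Cleaning: trim whitespace and lowercase before forwarding to container 2
    cleaned = str(word).strip().lower()
    response = requests.post(CONTAINER2_URL, json={"word": cleaned})
    # Return container 2's response unchanged
    return jsonify(response.json())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)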
Container 2
The second container’s role is to listen on another port and endpoint that you define within
your docker network for requests to look up definitions. It must:
1. Mount the host machine directory ‘.’ to a docker volume
2. Load the contents of dictionary.txt in the docker volume
3. Listen on an endpoint/port you define to respond to definition requests:
a. Lookup the input word in the dictionary
b. Return the definition in the appropriate JSON format or, if the word is not found, the word-not-found response (see errorresponses.json for exact response formats). A rough sketch of this container follows below.
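Again only as a rough sketch, container 2 might look like the following, assuming dictionary.txt is mounted at /data/dictionary.txt and that each line looks like "word: definition" (check the real file's format and adjust the parsing; the /lookup endpoint and port 5001 match the assumptions in the previous sketch):

# app2.py - minimal sketch of container 2 (file path and line format are assumptions)
from flask import Flask, request, jsonify

app = Flask(__name__)
DICTIONARY_PATH = "/data/dictionary.txt"  # wherever your volume mount puts the file

def load_dictionary():
    # Build a word -> definition map; assumes "word: definition" lines
    entries = {}
    with open(DICTIONARY_PATH, encoding="utf-8") as f:
        for line in f:
            if ":" in line:
                word, definition = line.split(":", 1)
                entries[word.strip().lower()] = definition.strip()
    return entries

@app.route("/lookup", methods=["POST"])
def lookup():
    word = (request.get_json(silent=True) or {}).get("word", "")
    dictionary = load_dictionary()  # re-read each request so the mounted file is always current
    if word in dictionary:
        return jsonify({"word": word, "definition": dictionary[word]})
    return jsonify({"word": word, "error": "Word not found in dictionary."})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)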
Additional Requirements
1. You must push both your containers to a Dockerhub account you create (this is free)
2. You must prepare a docker-compose.yml file that defines a docker network and runs the two containers from your Dockerhub deploy. Remember that container 1 must be listening on local port 5000, and you must mount the local volume '.' (the current directory) to get access to the dictionary.txt file I provide (a sample sketch follows this list).
a. NOTE: The version keyword in your docker-compose.yml file should be either '3' or absent (indicating to use the latest version). You must ensure you use the latest spec for building your docker-compose.yml file.
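For reference, a compose file for this setup could be structured roughly like the sketch below. The image names, service names, mount target and container 2's internal port are placeholders to replace with your own; only the host port 5000 and the './' volume source are fixed by the requirements above:

# docker-compose.yml - rough sketch only; substitute your own Dockerhub images and ports
services:
  container1:
    image: yourdockerhubuser/container1:latest   # placeholder image name
    ports:
      - "5000:5000"          # container 1 must be reachable on local port 5000
    networks:
      - appnet
    depends_on:
      - container2
  container2:
    image: yourdockerhubuser/container2:latest   # placeholder image name
    volumes:
      - ./:/data             # mount the current directory so dictionary.txt is visible inside the container
    networks:
      - appnet
networks:
  appnet:

Note that the version keyword is simply omitted here, which matches the requirement above to use the latest compose spec.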
Marking Rubric
In this class I'm not very concerned about the quality of the code you write; if you write bad-quality code, it is you who will suffer in maintaining and supporting it (especially on your project). I care that you can meet the learning objectives defined at the top of this document, and I can verify this by simply running your containers and verifying responses.
Your submission will be marked by a Python script that I will write. The script will do the following:
1. Copy your docker-compose.yml file into a temp directory
2. Copy my version of dictionary.txt into the temp directory
3. Run “docker-compose up” in the temp directory
4. HTTP POST an invalid JSON input to https://localhost:5000/definition and verify that you
return the invalid JSON input response in your JSON response
5. HTTP POST a valid JSON input (with crazy combinations of capital, lowercase and
whitespace at the start and end of words) to https://localhost:5000/definition and verify
that you return the correct definition in your JSON response
6. HTTP POST a valid JSON input with a word that does not exist in the dictionary and
verify that you return the word not found JSON response
7. Run "docker-compose down -v --rmi all" to shut things down and remove your images.
Your mark is entirely based on the success of steps 4, 5 and 6:
• Pass all 3 = 100%
• Pass 2 = 80%
• Pass 1 = 60%
• Any other result = 0%
Because your mark is entirely results-based, it makes sense for you to spend time testing to ensure your docker-compose.yml is properly configured to work on my machine! I recommend
that you:
• Use the 'docker image ls' and 'docker image rm' commands to delete your local images used during development / testing.
• Copy your docker-compose.yml to a temporary folder
• Place a testing dictionary.txt file in the folder
• Run 'docker-compose up' to verify that your images download properly from Dockerhub
• Then use a tool like Postman to POST some testing JSON input to your container and verify that you receive the correct responses (a small test script is sketched below)
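If you prefer scripting your checks instead of (or in addition to) Postman, a small test script along these lines, using the Python requests library, will exercise the three cases the marking script hits (the test words are arbitrary examples; the URL scheme should match whatever your container actually serves):

# smoke_test.py - quick local checks against the running compose stack
import requests

URL = "http://localhost:5000/definition"

# Valid word with messy capitalisation and surrounding whitespace
print(requests.post(URL, json={"word": "  PoTaTo  "}).json())

# Word that should not be in the dictionary
print(requests.post(URL, json={"word": "pota"}).json())

# Invalid input with no "word" key at all
print(requests.post(URL, json={"nonsense": True}).json())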
How To Submit
Submit your docker-compose.yml (and nothing else) to the Brightspace submission folder. If it's not named docker-compose.yml it won't work and you will get 0. I have 175 students this semester; I'm not manually renaming all your files! 🙂
CSCI4145/5409: Compute & Storage
This assignment will measure your understanding of the main compute and storage mechanisms of our cloud provider AWS. This assignment assures us that you have attended the tutorials and learned about AWS EC2 and S3, or that you have found some other way to learn these services.
Learning Outcomes
• You understand how to launch AWS Elastic Compute (EC2) instances
• You understand how to connect to an EC2 instance and provision it to support your web applications
• You understand the AWS Simple Storage Service (S3) and how to create a bucket
• Experience working with AWS libraries that allow you to perform AWS operations such as creating a file on S3
• More experience building REST APIs
Requirements
You will build a web application with any language or framework you like, deployed on an EC2 instance. I will do the same thing! Your application will "introduce" itself to mine by sending a POST request with some JSON to a URL that begins a chain of events:
1. I will POST to your app with some JSON that gives you data to store in a file on S3
2. Your app will retrieve the data from the POST, and use an AWS library to programmatically store the data in a file you create on S3 (a rough sketch of this step appears at the end of this section)
3. Your app will return a 200 status code and JSON that includes the URL for the file you created on S3
4. My app will download your file and verify that it contains the data I sent you
When you are finished the system will look and function like this:
[System diagram: your app on EC2 POSTs to my app's /begin; my app (Rob's app on EC2) POSTs to your /storedata endpoint and you return a URL; your app writes the data to a file on AWS S3; I retrieve your file from the returned URL and verify its contents.]
JSON Your App Sends To My App's /begin
Your app will send me the following JSON in your POST to /begin:
{
"banner": "",
"ip": ""
}
JSON My App Sends To Your App's /storedata
When you POST to my app's /begin endpoint with valid JSON, my app will immediately interact with yours by sending a POST with the following JSON to your app's /storedata endpoint:
{
"data": ""
}
Your App's Response To /storedata POSTs
When my app posts the JSON above to your /storedata endpoint, after you create the file on S3 you must return a 200 status code and the following JSON:
{
"s3uri": ""
}
Marking Rubric
In this class I'm not very concerned about the quality of the code you write; if you write bad-quality code, it is you who will suffer in maintaining and supporting it (especially on your project). I care that you can meet the learning objectives defined at the top of this document, and I can verify this by simply verifying the correct behaviour of your app's interaction with mine.
Your submission will be marked by the app that I will write. My app will:
• Listen for requests to /begin, and initiate the check process
• The check process:
1. Records the IP you send to /begin in DynamoDB
2. Sends a POST to your IP's /storedata endpoint
3. Retrieves the file from the URL you returned
4. Verifies that the file contains the correct string
Your mark is entirely based on the success of steps 1 – 4:
• Your app posts to /begin from an AWS IP address – 50%
• Your app stores a file on S3 and returns the URL when /storedata is called – 25%
• The file from S3 retrieved via the returned URL contains the string sent to your app – 25%
I will build my app such that only the performance of your most recent call to /begin counts towards grading. Because your mark is entirely results-based, it makes sense for you to spend time testing to ensure it works!
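As a hedged illustration of steps 2 and 3 of the requirements above, the /storedata handler might look roughly like this in Python with Flask and boto3 (the bucket name, object key, listening port, and the style of S3 URL returned are all assumptions; how you build and expose the URL is up to you):

# storedata.py - sketch of the /storedata endpoint (Flask and boto3 assumed)
import boto3
from flask import Flask, request, jsonify

app = Flask(__name__)
BUCKET = "your-assignment-bucket"   # placeholder: your own S3 bucket
KEY = "data.txt"                    # placeholder object key

@app.route("/storedata", methods=["POST"])
def storedata():
    data = (request.get_json(silent=True) or {}).get("data", "")
    # Write the received string to an object on S3
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=data.encode("utf-8"))
    # Return the object's URL (assumes the object is readable at this style of URL)
    url = f"https://{BUCKET}.s3.amazonaws.com/{KEY}"
    return jsonify({"s3uri": url}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)  # port is an assumption; serve wherever the grading app can reach you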
I again recommend that you use a tool like Postman to test your app's behaviour.
How To Submit
Submit a zip file (one file only) of your application's code to the Brightspace submission folder. This will count as a submission placeholder for my script to assign you a grade, and allow us to run MOSS on everyone's code to check academic integrity. I will publish the IP of my running app a few days before the assignment deadline, and I will leave it running. You must build, provision and execute your app such that it makes its call to /begin before the assignment deadline.
CSCI4145/5409: Serverless
This assignment will measure your understanding of some of the serverless mechanisms of our
cloud provider AWS. This assignment assures us that you have attended the tutorials and
learned about AWS Lambda and Step Functions, or that you have found some other way to learn these
services. In addition, you will have to do some self-learning to study how to use AWS Simple
Queue Service (SQS).
Learning Outcomes
• Learn the benefits of serverless computing and apply that learning to implement a finite state machine in AWS Step Functions using serverless compute mechanisms (Lambda) and message buffering mechanisms (SQS).
Requirements
You will build the mock entry point to a producer/consumer model IT support application using
serverless compute mechanisms. The purpose of your step function is to do preliminary sorting
of new tickets into the support system for another process (that we won’t create) to “consume”
at a later time. The entry point to your system will be a step function; we can imagine, in a real-world scenario, a webpage that would launch the step function with input from a form that a
user fills out when submitting a request for help from their IT department. The form would ask
them various questions and then submit the data they enter to the step function via JSON.
Your state machine will use a lambda function to determine the “tier” of support required by
the user:
• Tier 1: These are issues with accounts and passwords. For example, the user forgot their password and needs to have it reset. Or the user needs to set up a new account.
• Tier 2: These are issues with hardware that need an IT person to physically attend to the user's issue with their computer or printer.
• Tier 3: These are issues from high priority users that need to be addressed urgently.
Your step function's job is to sort incoming requests into one of 4 SQS queues: T1, T2, T3 and Unknown. The Unknown queue is the "catch-all" queue to deposit requests that don't fit the criteria for tiers 1 to 3.
Here is a rough state diagram for your system:
Incoming JSON:
The JSON sent to your step function will have the following format:
{
"email": "rhawkey@dal.ca",
"message": "I do not remember my password! Please help!"
}
Lambda Task State:
The purpose of your lambda is to do the “thinking” of the state machine. It will do two things:
1. Parse the incoming JSON sent to it from the step function to determine what tier of
support the user needs.
2. Return the tier to your step function so that your step function can use choice states to
put the message in the correct queue.
Determine tiers as follows:
• Tier 1: Any message with the words "account" or "password" in the string.
• Tier 2: Any message with the words "computer", "laptop" or "printer" in the string.
• Tier 3: Any message with email address "rhawkey@dal.ca"
• Unknown: Anything that doesn't fit the other categories.
• Note: The tiers are a hierarchy, e.g. if a message contains both "laptop" and is sent from "rhawkey@dal.ca" then it is a tier 3 message.
Once your decision has been made you should return some kind of JSON (that you define) to your step function for it to use in choice state decision making.
You can write your lambda in whatever language you like best; a rough Python sketch follows below.
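A minimal sketch of that classification lambda in Python might look like the following; the "tier" field in the returned JSON is a name you would define yourself, and the exact shape of the incoming event depends on how your step function passes input to the task state:

# classify.py - sketch of the tier-classification lambda (field names are your own choice)
def lambda_handler(event, context):
    email = (event.get("email") or "").lower()
    message = (event.get("message") or "").lower()

    # The tiers are a hierarchy, so check tier 3 first, then 2, then 1
    if email == "rhawkey@dal.ca":
        tier = "T3"
    elif any(word in message for word in ("computer", "laptop", "printer")):
        tier = "T2"
    elif "account" in message or "password" in message:
        tier = "T1"
    else:
        tier = "Unknown"

    # Pass the original fields through so later states can queue the message unchanged
    return {"tier": tier, "email": event.get("email"), "message": event.get("message")}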
Step Function Choice States
Your choice states will act on the input path to their state (which is the output path of your
Lambda task state). They will decide which queue to place the message on. How you place the
messages onto the SQS queues is up to you. You will need to do some self-learning to figure out
how to create SQS queues, and how to insert messages onto them from step functions. You
may be able to do it directly from inside the step function, or you may have to write a lambda
function to do this task for you. This is left for you to figure out as part of the assignment.
The result of every execution of the step function is the same: the message that is passed into the step function is placed on the appropriate SQS queue (one possible approach to the queueing step is sketched below).
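If you take the lambda route for the queueing step (direct SQS integration from the step function is the other option mentioned above), the SQS part could be as small as the sketch below; the queue URLs are placeholders, and the input fields match whatever JSON you defined for your classification lambda's output:

# enqueue.py - sketch of a lambda that places the original message on the chosen SQS queue
import json
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URLs; substitute the real URLs of your T1, T2, T3 and Unknown queues
QUEUE_URLS = {
    "T1": "https://sqs.us-east-1.amazonaws.com/123456789012/T1",
    "T2": "https://sqs.us-east-1.amazonaws.com/123456789012/T2",
    "T3": "https://sqs.us-east-1.amazonaws.com/123456789012/T3",
    "Unknown": "https://sqs.us-east-1.amazonaws.com/123456789012/Unknown",
}

def lambda_handler(event, context):
    # Send the original email/message pair to the queue matching the tier chosen earlier
    sqs.send_message(
        QueueUrl=QUEUE_URLS[event["tier"]],
        MessageBody=json.dumps({"email": event["email"], "message": event["message"]}),
    )
    return event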
How To Submit
First, execute your step function with the following inputs:
{
"email": "rhawkey@dal.ca",
"message": "I do not remember my password! Please help!"
}
{
"email": "person@dal.ca",
"message": "I do not remember my password! Please help!"
}
{
"email": "person@dal.ca",
"message": "My laptop computer broke and I cannot use my printer! Please help!"
}
{
"email": "person@dal.ca",
"message": "The registrar system is down! Please help!"
}
Then submit the following to Brightspace:
• The JSON definition of your step function in a text file.
• The code for your lambda function(s) (not zipped, just the .py or .js files)
• A screenshot for each of your 4 SQS queues showing each one having the correct message on it.
Marking Rubric
Your submission will be marked by the TAs reviewing your code and screenshots:
• Your lambda correctly classifies incoming requests – 40%
• Your step function correctly sorts requests into correct SQS queues – 40%
• Your SQS queues display the correct incoming message for each execution – 20%
CSCI4145/CSCI5409 Cloud Computing – Group Project
Overview
The technology of cloud computing changes more quickly than most other technology in software development; in fact, this is part of why companies have flocked to putting their software in the cloud. They can ride this wave of technological advances to keep their software
on the cutting edge. The services and technologies you learn during this course will be
deprecated, changed, or expanded upon within only a few years. For this reason, the most
critical skill for you to develop in this course is your ability to dive into a cloud provider’s service
offerings, to learn what they have available, and how the services you are interested in work.
To work in the field of cloud computing you must be confident that you can dive in and work
with any of the services a cloud provider offers.
The group project is the primary driver for experiential learning in this course. We can only truly
say you understand cloud computing once you’ve built and deployed a working software
system in the cloud. To accomplish this goal will require tenacity, self-directed learning,
experimentation and critical analysis. You are 4th year students or graduate students; this is
what you’ve been training for. You got this!
Group Size
You will work in groups of 1 – 3 students. I encourage most of you to work together, but
sometimes students just can’t stand the thought of working with other students and you’ve
already learned the lessons of being good group members in other courses.
There will be a spreadsheet named Groups.xslx in the Files tab of Microsoft Teams. Students
must form their own groups and enter their group information in the spreadsheet. Unless
students request to be in a 1-person group, we will automatically assign students who have not
formed their own groups to a random group of 3 students on the group formation deadline.
Graduate students in CSCI5409 may only group with other enrolled students in CSCI5409.
There are a limited number of 1-person groups we can support due to the need for TA advice, supervision and marking. We will play this by ear rather than setting a fixed number. Obviously
if everyone wanted to do the projects on their own I would have to draconically form random
groups.
The instructor reserves the right to at any time:
• Move students from one group to another
• Merge groups
• Dissolve groups
I will alter groups if I see problems that affect a student’s ability to learn, or if I see problems in
terms of the distribution of effort within the group.
Requirements
There are many career paths in cloud computing, from traditional app developers to data scientists, machine learning, IoT, and even people just interested in supporting cloud app development (DevOps). For this reason, rather than describing a project for you to implement, I
am leaving the nature of your project largely up to your group to decide. There will be a menu
with different categories of technologies, services or methodologies you must select from,
however what you build with these technologies is entirely up to you. Use this opportunity to
gain experience with something you might want to work with further when you enter industry
or to build some cool thing you can further develop and even bring to market.
No matter what services you choose from the menu, you will also need to develop the software
that runs on the services you select. You are not allowed to use existing code (not even your
own from other courses or personal projects), all code you write for this course must be
original and entirely created by your group members for this course. The software does not
have to be a traditional web page. It could be an API, a website, a mobile app that talks to an
API, etc. The choices are endless.
Each category below will require you to select a number of items from the list of choices to
include in the design and implementation of your project. Pick the things you are most
interested in learning about or doing. The instructor has no preferences here, all choices are
equally valid.
Categories Menu
Compute – Pick two (2):
• AWS EC2 – Host a web app in a virtual machine
• AWS Elastic Beanstalk – Automatically run, load balance and scale web apps via virtual machines
• Docker & AWS Elastic Beanstalk – Run a web app in a container with the Docker platform
• AWS Elastic Container Service – Best way to run Docker containers (Note: not supported by AWS Academy, you'd need to use your starter account or professional account)
• AWS Lambda – Functions that run without servers! Amazing! This is the future!
• AWS Step Functions – Build a serverless state machine!
Storage – Pick one (1):
• AWS S3 – Simple file storage
• AWS DynamoDB – NoSQL database
• AWS Aurora (be careful, this will eat your credits up fast!) – Managed database
Network – Pick one (1):
• AWS Virtual Private Cloud (VPC) – A private network protecting internal services speaking to each other over a virtual private network
• AWS API Gateway – Secure and route API requests to lambdas or container APIs (Note: not available via your Academy account, you would need to use your starter or professional account to use this service)
• Amazon CloudFront – Content delivery network for high-speed content delivery
• Amazon EventBridge – Build event-driven architectures by decoupling event sources and HTTP endpoints (lambdas, microservices, etc.)
General – Pick two (2):
• Use Heroku and AWS to build a multi-cloud system
• Amazon Cognito – Add user sign up, sign in and access control to your app with support for social identity providers (Facebook, Google, etc.)
• Amazon Comprehend – Natural language processing with machine learning for deriving and understanding valuable insights from text within documents
• AWS SNS – Send text messages, emails or mobile push notifications
• AWS SQS – Build a producer/consumer type system where one service produces output to a queue that another service consumes
• AWS Secrets Manager – Secure secrets in your application (DB usernames / passwords, API keys, etc.)
• AWS Key Management Service – Store your cryptographic keys outside of your app
• AWS Kinesis – Capture data streams and do something with them
• AWS Robomaker – ROBOTS!!!! That's all that has to be said here.
• AWS Glue – Sanitize or alter data for machine learning processing
• AWS Elastic Load Balancing
• Amazon Lex – Build chatbots with conversational AI
• Amazon Polly – Text to speech, you send text and get back an audio stream!
• Amazon Rekognition – Automated image and video analysis
• Amazon Textract – Automatically extract text, handwriting and data from scanned documents
• Amazon Translate – Fast, high quality text translation
• Difficult – Brave the wilds of the FCS OpenStack infrastructure to build a hybrid system that combines public cloud and private cloud
• (4145 Students Only) Difficult – Infrastructure as code: build a CI/CD system to automatically build, deploy and provision your infrastructure via AWS CloudFormation
• Difficult – AWS Machine learning – Anything you do here will be both impressive and difficult. Caution: You cannot use your work from other machine learning courses.
We are restricted to the services supported in the AWS Academy sandbox. Not all AWS services
are supported, and sometimes they don’t work as well in the sandbox as they do in the real
AWS infrastructure (for example IAM is extremely limited in AWS Academy unfortunately).
Please refer to the “AWS Academy Learner Lab – Foundational Services” document on
Brightspace to see a full list of available services and any special considerations in their use in
the lab environment. If you see something available in AWS Academy that you want to use that
isn’t listed here, then just ask! We’ll tell you whether you can use it and which category it
applies to.
CSCI5409 Students Extra Requirement
Because you are graduate students, the expectations placed on you are higher than those for undergraduate students. This
semester we will be requiring all graduate students to provision their infrastructure with AWS
CloudFormation. We will not study CloudFormation in the tutorials until the very last tutorial in
the semester, therefore it is critical that each member of your group learn and apply this tool
very early in your project. Start right from the beginning with CloudFormation as your method
for provisioning your infrastructure in the cloud, avoid using the console as much as possible.
Critical Analysis & Response Tasks
After we cover each of the major sections of cloud computing topics in our lectures, your group
will be posed a set of questions related to the material. Your group will need to meet and work
together to create a written response to these questions. These questions will force you to
critically analyze the system you are developing and respond with how you will address the
topic in your projects. Consider these tasks as group assignments that help you progress
towards the implementation of your project.
1 – Project Proposal, Deployment & Delivery Model
Your group proposes what you will build, and how you will build it. You will write a document
that describes the application you intend to build and how it will work. You will also have to
explain how you will deploy your app. Will it be entirely on AWS or use a multi-cloud or hybrid
cloud approach?
2 – Mechanisms
Your group explains which services your project will use, and why you are using those services
over other potential choices or approaches.
3 – Architecture
Your group will explain the cloud architecture you are using, or, if your system doesn’t fit an
architecture taught in the course you will explain why that is, and whether your choices are
wise or potentially flawed.
4 – Security & Business Considerations
Your group will analyze and describe your project’s approach to security, particularly its
approach to securing data. You will also analyze the cost metrics for operating your system. You
will calculate the up-front, on-going and additional costs to build this system in the real world.
You will also explain alternative approaches that might have saved you money, or maybe
provide justification for a more expensive solution.
Project Deliverables & Grade Distribution
The group project represents a combined 50% of your grade.
Critical Analysis & Response Tasks – 20%
• Project proposal, deployment & delivery model – 5%
• Mechanisms – 5%
• Architecture – 5%
• Security & business considerations – 5%
Final Written Report – 10%
This document will combine your responses from the critical analysis tasks, as well as expand
on the final result of your project. You will describe the architecture of your final system and
describe each group member’s role in the implementation of the project. This report identifies
and proves how you met your menu item requirements.
Video Presentation – 20%
Your group will record a video that demonstrates all of the functionality of your application.
This is where you prove that what you built actually works. We aren't going to grade the quality of the code you write for the individual services you implement; what we care about is the final working product. Your video will demonstrate the back-end implementation, the AWS
configuration (and any other cloud services, e.g. OpenStack), and every part of the functionality
it provides. This is your chance to show it off and show us the cool thing your group made. 5409
students must demonstrate the full provisioning of their application via CloudFormation.
Group Communication
To ensure all group members have a reliable and accessible mechanism to communicate with
other group members, and to help us in the resolution of group conflicts, all students are
required to use Microsoft Teams for group communication. Dalhousie’s language of instruction
is English, and therefore all communications must be in English. Each group will have its own
private channel within the course Team channel to discuss their project, hold online meetings
and to communicate with the instructor and your TA.
Group Work & Conflicts
You are working on a group project and are assessed as a group. Each of you shares an equal
responsibility for contribution to the project. We do not want an imbalanced level of effort on
the projects. Unlike other projects, you cannot focus on only one aspect of the project (e.g.,
documentation, testing, reports, etc.). Each of you must participate in all aspects of the
project, especially its implementation. If it is determined at the end of the course that you did
not contribute equally to your group, you may be subject to a grade decrement (see below).
We will determine your group contribution based on the following metrics:
• Attendance in group meetings
• Reports you write indicating how you've distributed work
• Oral and written reports from your fellow group members
• Observation by the instructor or TAs
We will assume you are a good member of your group and that your group members have no
complaints against you until we hear otherwise from your fellow group members or from our
own personal observations.
We will use the following levels of grade decrements; this penalty applies to the full weight of
your group project on your final grade:
• Underperformed vs. rest of the group: -15%
• Significantly underperformed vs. rest of the group: -30%
• Performed less than half the work of rest of the group: -50%
• Contributed almost nothing: -90%
• No group work/contact: -100%
Suggested Project Timeline
This timeline is my best guess at the best path to success for your group working on your cloud-based application. The "what you should work on" I describe below is very general and may
need to change depending on what your group is building. It does not account for the specific
programming requirements unique to your project. You will need to adjust this plan
depending on the complexity you’re striving for.
Date – Course topic – What you should work on / project deliverables:

January 18 – 20 (Intro, key terminology, goals & benefits, risks & challenges): Your groups are formed this week. Introduce yourselves and start talking to each other about what kinds of projects you might be interested in taking on. Use a Doodle poll to figure out a regular meeting time. At first, once per week should be OK as long as you are productive, plan out your tasks and work well once apart. Later, or if you are not seeing progress, you should meet at least a couple of times per week. Pay attention to the introductory lectures to learn what cloud computing can do; use this to come up with ideas for your project.

January 21 – 27 (Cloud-enabling Technology): Start thinking about software you use on a daily basis: what kind of data does it store, what kinds of components might that software be made of, and which of these would be in the cloud? Your group should be throwing ideas around like crazy until you find one that sticks. When you have an idea, if you don't know where to start in terms of meeting the project requirements, now is the time to work with your TA and get advice!

January 28 – February 4 (Deployment & delivery models, project discussion): Pay attention to these lectures; you will need to choose a deployment and delivery model for the project you build, and you must be able to justify your choices. In the labs you've learned a lot about Docker and containers. Now is the time to decide which compute technologies you will use for your project: containers or virtual machines on EC2. Your group should have a solid idea of what kind of cloud-based app you want to build, and now you are searching for the tools and right combination of AWS / other services to make it happen.

February 5 – 13 (DevOps & managing releases, guest lecture): Dive into services on AWS Academy. You may not have learned all of the details of the services, or how to use them, but you should understand what they all do and what role they play in cloud computing. If you are stuck, seek my advice! Your group must work intensely together this week to finalize your project proposal. Your project proposal and deployment & delivery model critical analysis and response report is due February 13th at 23:59 Atlantic time.

February 20 – 27 (Winter study break): Use this week to learn any technology your project and group members need you to use that you may not have experience in; use AWS Academy to your full advantage. Make sure by the end of this week you're ready to be an equal contributor to your project. Your group needs to prepare your critical analysis and response on the mechanisms your group plans to use to implement your projects. Your mechanisms report is due February 27th at 23:59 Atlantic time.

February 28 – March 4 (Fundamental cloud architectures): It's time to get to work! Get coding and provisioning your project infrastructure. Equally divide the work of provisioning IT resources and writing the code you need to run your app; do not assign all of any one task to one person. Use a "whole team" approach where every group member understands the architecture and implementation of the whole project so that you can assist each other in your work.

March 7 – 11 (Advanced cloud architectures): At this point you've learned various architectural approaches to designing software for the cloud. Because of timing you've already begun work on your projects, and maybe your app doesn't match one of these architectures, but that's OK. In your critical analysis and response task you will be given the opportunity to compare your architectural choices to the ones you've learned in class; there's no wrong answer as long as your app works, but at least you'll have a chance to analyze how your architecture differs and the pros and cons of your choices. Your architecture critical analysis and response report is due March 20th at 23:59 Atlantic time.

March 14 – 18 (Security): Continue working hard! You've got this! Aim to reach 33% functional completeness by this date.

March 21 – 25 (Business considerations): You'll need to prepare your critical analysis and response report on security and business considerations this week. This will be a very hectic and busy time of the semester, with many deliverables in other courses as well, I would imagine. Plan your time wisely! Tell your friends/family/guildmates/significant others that you can't talk to them until April. 🙂 Your security and business considerations analysis and response report is due March 29th at 23:59 Atlantic time. This won't take you very long to complete, so also aim to reach 60% functional completeness by this date.

March 26 – April 6, ~2 weeks (Case studies and discussion): This is crazy go time! Go, go, go, get that project 100% complete! Your final report and video presentation is due on April 6th at 23:59 Atlantic time.