CSCE 441 Computer Graphics Programming Assignment 1 to 6 solutions


CSCE 441 – Computer Graphics Programming Assignment 1

1 Goal
The goal of this assignment is to become familiar with OpenGL, event-driven programming, and mouse and keyboard interaction using GLFW.

2 Starter Code
The starter code can be downloaded from here.

3 Task 1
Visit the tutorial from here to set up OpenGL on your machine. You will need to install some libraries and get your system set up to compile and run OpenGL programs. It is important to note that you will be using the process outlined there in all future programming assignments; it includes instructions for which directories to submit with your assignments and how to get each assignment started. Make sure you can run the test code provided at that link and are clear on the process for starting new programs in the future.

4 Task 2
Download the starter code and go through the process to set it up on your system (you will need to follow a process similar to the test code in the tutorial). Once you have set up the code, run the program. You should see a blank window. If you click in the window and drag the mouse, you should see the location of the mouse printed. If you press a key, you should see a message printed. Look through the code to see how it works. Here are a few points regarding the code:

• The main function first initializes the window and events. Then it displays the “framebuffer” and checks the events to see if there is any update from the inputs. This process is repeated until the window is closed.

• Overall, the program keeps track of a “framebuffer” that is an array of pixel colors (the array is defined at the top and is called frameBuffer). You will set framebuffer values in the program.

• Calling ClearFrameBuffer will set the framebuffer to black.

• Calling SetFrameBufferPixel will set a particular (x, y) pixel to a color (R, G, B). The RGB color values should range from 0 to 1. The pixel values should range from 0 to the width/height minus 1.
  – Note that this command uses the upper left corner as (0, 0).
  – The framebuffer structure is simple (just a 3D array of floats), so you can access the framebuffer directly if you wish – e.g., to read values. However, realize that the data in the framebuffer is not stored in the order you might expect – if you choose to access it directly, be sure you understand how it is organized!

• Calling Display will show the framebuffer on the screen.
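As a concrete illustration of how per-pixel writes turn into a brush stroke, here is a minimal, self-contained sketch. It uses a local pixel buffer and a hypothetical PaintBrush helper; in the starter code the inner assignment would instead be a call to SetFrameBufferPixel.

```cpp
#include <algorithm>
#include <vector>

// Minimal sketch: paint an s-sized square brush centered at (cx, cy) into a
// width x height RGB buffer, clamping to the window bounds. The starter
// code's SetFrameBufferPixel plays the role of the inner assignment here.
struct Color { float r, g, b; };

void PaintBrush(std::vector<Color>& buf, int width, int height,
                int cx, int cy, int s, Color c) {
    int x0 = std::max(cx - s, 0), x1 = std::min(cx + s, width - 1);
    int y0 = std::max(cy - s, 0), y1 = std::min(cy + s, height - 1);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            buf[y * width + x] = c;   // (0, 0) is the upper left corner
}
```

Clamping the loop bounds (rather than testing each pixel) keeps strokes near the window edge from writing out of bounds.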

• The MouseCallback is used to handle mouse clicks. The CursorPositionCallback, on the other hand, is used to get the position of the mouse. Moreover, CharacterCallback is called when a key is pressed on the keyboard. You should use these functions to handle the input from the mouse and keyboard.

Figure 1: A drawing with the initial background color is shown on the left. On the right, the result of changing the background color to red by pressing Shift + ’1’ is shown.

You are encouraged to play around with the code (modify it to do different things for different key presses and/or for different mouse clicks) to ensure you understand the basic process of how this code works, before working on the actual assignment.

5 Task 3
You should write a program that allows you to paint on the window. To do so, you have to implement the following:

• Make the window have a width-to-height ratio of 3:2 and a width of at least 900 pixels. Also change the title to “Assignment 1 – ”.

• When the user clicks the left button of the mouse and drags it across the screen, the program should draw a rectangle from (−s, −s) to (s, s) centered at the current mouse position. The variable s controls the size of the brush, and the user should be able to increase or decrease it by a factor of 2 by pressing ’+’ and ’-’, respectively. The size s should be restricted to be between 1 and 128.

• The user should be able to change the brush color by pressing keys 0 – 7. The colors should correspond to the binary version of the numbers, e.g., 5 → 101 → magenta and 3 → 011 → cyan.

• The user should be able to change the background color by pressing keys 0 – 7 while holding the Shift key. Again, the colors should correspond to the binary version of the numbers. Note that, by pressing Shift + (one of the keys 0 – 7), the background color should change automatically, but the drawings should stay unchanged (see Fig. 1).

• Clicking the right button of the mouse should reset the screen to the background color by clearing all the drawings.
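The size and color rules above can be sketched as two small helpers (the function names are illustrative, not part of the starter code): ’+’/’-’ double or halve the brush size clamped to [1, 128], and a digit key maps to RGB through its bits, with R as bit 2, G as bit 1, and B as bit 0.

```cpp
#include <algorithm>

// Sketch of the keyboard logic: '5' -> 101 -> magenta, '3' -> 011 -> cyan.
int UpdateBrushSize(int s, char key) {
    if (key == '+') s *= 2;
    if (key == '-') s /= 2;
    return std::clamp(s, 1, 128);   // keep the size in the required range
}

void DigitToColor(char key, float& r, float& g, float& b) {
    int bits = key - '0';           // keys '0'..'7' carry three color bits
    r = (bits >> 2) & 1;
    g = (bits >> 1) & 1;
    b = bits & 1;
}
```

The same DigitToColor mapping can serve both the brush color and the Shift-modified background color.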

• [Extra] Allow the user to change the brush shape between rectangle and circle by pressing ’b’.

• [Extra] Implement a spray paint brush. Instead of painting all the pixels within the brush shape, you have to randomly select a percentage of the pixels and paint them. You can use the rand() function to do so.

6 Deliverables
Please follow the instructions below or you may lose some points:

• You should include a README file that lists the parts that you were not able to implement, any extra parts that you have implemented, or anything else that is notable.

• Your submission should contain the “src” folder as well as the “CMakeLists.txt” file. You should not include the “build” folder.

• Zip up the whole package and call it “Firstname Lastname.zip”. Note that the zip file should extract into a folder named “Firstname Lastname”. So you should first put your package in a folder called “Firstname Lastname” and then zip it up.

7 Rubric
Total credit: [50 points]
[05 points] – Setting the correct window size and title
[10 points] – Implementing the rectangular brush and correct placement of the brush at the mouse position
[05 points] – Changing the brush size
[05 points] – Changing the brush color
[20 points] – Changing the background color
[05 points] – Clearing the screen and setting it to the background color
Extra credit: [10 points]
[05 points] – Implementing a circle brush
[05 points] – Implementing a spray paint brush

8 Acknowledgement
The tutorial is provided by John Keyser. Some of the text is based on John Keyser’s assignment.

CSCE 441 – Computer Graphics Programming Assignment 2

1 Goal
The goal of this assignment is to become familiar with model, view, and projection transformations in
OpenGL.

2 Starter Code
The starter code can be downloaded from here.

3 Task 1
Download the code and run it. You should be able to see three cubes as shown in Fig. 1. Make sure you
write your name in the appropriate place in the code, so it shows up at the top of the window.

Here is a
brief explanation of the starter code:
• There are two folders in the package. “shaders” contains the vertex and fragment shader programs.
You do not need to touch these files for this assignment. We will discuss them later in the course.
The other folder “src” contains the source files. Again, you do not need to touch the “Program.cpp”
and “Program.h” as they are responsible for reading and compiling the shader programs. For this
assignment, you’ll be mainly modifying “main.cpp” and “MatrixStack.cpp”.

• The MatrixStack class is basically the matrix stack used for hierarchical transformations as discussed in class. The stack is always initialized with the identity matrix. Any transformation
is then right-multiplied with the matrix at the top of the stack. pushMatrix creates a copy of
the top and pushes it onto the stack. popMatrix removes the top matrix from the stack. There are
several transformations in this class that are currently implemented using the glm library. For more
information about the library, access the API documentation. You will be implementing most of these
transformations by commenting out the glm functions in the next task (Sec. 4). Moreover, you will be
using this MatrixStack class to write the transformations required for implementing a functional
robot as shown in Fig. 2.

• The main function in “main.cpp” is similar to the one in the previous assignment. The Init function initializes the window, events, and the shader programs. It also calls the function CreateCube
which creates an array of vertices and their colors representing a cube. The vertex position and colors
are then passed to the GPU in this function to be accessible later in the main display loop.
• There are a few callback functions for the mouse, cursor position, and keyboard, which you need to
fill in to handle the input based on the instructions given in later tasks.

• The Display function is the one responsible for drawing the cubes on the screen. This function
sets the transformations using the global matrix stack variable modelViewProjectionMatrix.
Specifically, it first sets the top matrix to identity and creates a copy of the top. Then it sets up the
perspective and view transforms by calling the Perspective and LookAt functions. The next
chunk of code is responsible for drawing the first cube. We first create a copy of the top matrix
by calling pushMatrix. Then we perform a series of model transformations including translation,
rotation, and scale to position the cube properly. Finally, we draw the cube by calling the DrawCube
function and pop the matrix. This will only remove the model transformation for cube 1 while
keeping the projection and view transforms in the stack. We continue the same process for cube 2
and 3. Note that for every push there needs to be a pop.

Figure 1: Running the skeleton code should produce three cubes as shown here.
• ConstructRobot function and RobotElements class are discussed later in Sec. 5.

4 Task 2
In this part, you will be implementing several 3D transformations that are currently implemented using the
glm library in the MatrixStack class. Since each transformation is a 4 × 4 matrix, you can create a
one dimensional array of size 16 (float A[16]) and fill it in based on the transformation appropriately.
Note that glm is column major, so the indexing is as follows:


 0  4  8 12
 1  5  9 13
 2  6 10 14
 3  7 11 15



Once you fill in the array, you can feed it to a glm matrix using the glm::make_mat4 function.
Note that you need to set all the values in your one-dimensional array (even if the value is zero), since it is
not initialized to zero when you create it (it could contain garbage values).
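As one worked example of filling such an array, here is a sketch of the translation matrix in the column-major layout above: entry (row, col) lives at index col*4 + row, so the translation vector goes into slots 12, 13, and 14. In the starter code you would then pass the array to glm::make_mat4; here it stays a plain float array so the sketch is self-contained.

```cpp
// Fill A with a column-major 4x4 translation by (tx, ty, tz).
void MakeTranslation(float A[16], float tx, float ty, float tz) {
    for (int i = 0; i < 16; ++i) A[i] = 0.0f;   // never leave garbage values
    A[0] = A[5] = A[10] = A[15] = 1.0f;         // identity diagonal
    A[12] = tx; A[13] = ty; A[14] = tz;         // last column
}
```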
Specifically, you need to implement the following functions:
• translate(const glm::vec3 &t)
• scale(const glm::vec3 &s)
• rotateX(float angle)
• rotateY(float angle)
• rotateZ(float angle)
• multMatrix(glm::mat4 &matrix)
• LookAt(glm::vec3 eye, glm::vec3 center, glm::vec3 up)
• Perspective(float fovy, float aspect, float near, float far)
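As a second worked example, here is a sketch of Perspective in the same column-major float[16] layout. It matches what glm::perspective produces under the usual OpenGL clip-space conventions (fovy in radians); treat it as a reference for the indexing, not as the only valid formulation.

```cpp
#include <cmath>

// Column-major perspective projection matrix (standard OpenGL [-1, 1] depth).
void MakePerspective(float A[16], float fovy, float aspect,
                     float zNear, float zFar) {
    for (int i = 0; i < 16; ++i) A[i] = 0.0f;
    float f = 1.0f / std::tan(fovy / 2.0f);
    A[0]  = f / aspect;
    A[5]  = f;
    A[10] = (zFar + zNear) / (zNear - zFar);
    A[11] = -1.0f;                              // puts -z into w for the divide
    A[14] = 2.0f * zFar * zNear / (zNear - zFar);
}
```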
Figure 2: You need to create a robot with 10 components as shown on the left (the resting position). Different components should
be rotatable in a hierarchical manner as shown on the right (i.e., rotating the upper arm, rotates the lower arm as well).

Note that the function multMatrix is not a transformation; it right-multiplies the input argument
(matrix) with the top matrix in the stack. So for this, you need to implement matrix multiplication.
If you implement all these functions correctly, you should get the same three cubes, in the same position,
scale, and orientation.
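The matrix product needed by multMatrix can be sketched as follows. With column-major indexing (col*4 + row), computing C = A * B right-multiplies B onto A, which is how a new transformation is appended to the top of the stack; the helper name is illustrative.

```cpp
// Column-major 4x4 matrix multiply: C = A * B (C must not alias A or B).
void MultMatrix4(const float A[16], const float B[16], float C[16]) {
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += A[k * 4 + row] * B[col * 4 + k];
            C[col * 4 + row] = sum;
        }
}
```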

5 Task 3
You will write a program to create a robot with 10 components (see Fig. 2) in a hierarchical way as follows:
• Torso
– Upper left arm
∗ Lower left arm
– Upper right arm
∗ Lower right arm
– Upper left leg
∗ Lower left leg
– Upper right leg
∗ Lower right leg

When a parent component is transformed, all of its descendants should be transformed appropriately.
You should be able to transform each component using the keyboard as follows:
• “.” (period): traverse the hierarchy forward
• “,” (comma): traverse the hierarchy backward
• “x”, “X”: increment/decrement x angle
• “y”, “Y”: increment/decrement y angle
• “z”, “Z”: increment/decrement z angle

By pressing the period and comma keys, you should be able to select different components in the
hierarchy. You must draw the selected component slightly bigger by scaling it, so that it is distinguishable
from unselected components. The x/X, y/Y, and z/Z keys should change the rotation angle of the selected
component.
[Extra]: Animate a running/walking/etc. model by bending some or all of the joints with time, using
glfwGetTime(). Animated joints do not need to be controlled with the keyboard.

5.1 Details
The class RobotElements represents a component of the robot. This class should contain the necessary
member variables so that you can make a tree data structure out of these components. The tree should be
constructed in the ConstructRobot function, which should be called in the Init function. The root of the
tree should represent the torso, which means that transforming the torso transforms everything else.

In addition to the member variables required for the tree hierarchy, the class should also have the
following:
• A glm::vec3 representing the translation of the component’s joint with respect to the parent component’s joint.
• A glm::vec3 representing the current joint angles about the X, Y, and Z axes of the component’s
joint. (You may want to start with Z-rotations only.)
• A glm::vec3 representing the translation of the component with respect to its joint.
• A glm::vec3 representing the X, Y, and Z scaling factors for the component.
• A member method for drawing itself and its children.
• Any additional variable(s) and method(s) you see fit.

The drawing code should be recursive. In other words, in the Display() function in “main.cpp”,
there should be a single draw call on the root component, and all the other components should be drawn
recursively from the root. The drawing function should take modelViewProjectionMatrix and
update it by the transformation of each component (based on the position, angle of the joints, etc.) in a
recursive manner. Make sure to pass the matrix stack by reference or as a (smart) pointer.
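One possible shape for this recursion is sketched below. The member names are illustrative, not prescribed by the assignment, and a single float stands in for the full matrix stack so the sketch stays short: the point is that one Draw call on the root visits every descendant, with each child inheriting its parent's accumulated transform.

```cpp
#include <memory>
#include <utility>
#include <vector>

// Minimal sketch of the hierarchy: each node stores its joint offset and its
// children, and Draw recurses so one call on the torso draws the whole robot.
struct RobotElements {
    float jointTranslation = 0.0f;   // joint w.r.t. parent's joint (1D here)
    std::vector<std::unique_ptr<RobotElements>> children;

    int Draw(float parentJoint, std::vector<float>& drawnAt) const {
        float myJoint = parentJoint + jointTranslation;  // "push" + transform
        drawnAt.push_back(myJoint);                      // stand-in for DrawCube
        int count = 1;
        for (const auto& c : children)
            count += c->Draw(myJoint, drawnAt);          // children inherit it
        return count;                                    // implicit "pop"
    }
};
```

In the real code the float becomes the modelViewProjectionMatrix stack, pushed before a node draws itself and popped after its children return.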

For this assignment, the 3D rotation of the joint should be represented simply as a concatenation of
three separate rotation matrices about the x-, y-, and z-axes: Rx * Ry * Rz. The position of the joint should
not be at the center of the box. For example, the elbow joint should be positioned between the upper and
lower arms.

You must draw the selected component slightly bigger than the other ones. This requires you to scale
the selected component. The traversal of the tree with the period and comma keys should be in depth-first
or breadth-first order. Do not hardcode this traversal order – your code should be set up so that it works
with any tree.
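One way to keep the traversal generic is to flatten the tree depth-first once, then let the period and comma keys simply move an index through the flattened list (wrapping at both ends). The Node type here is a stand-in for RobotElements, and the helper names are illustrative.

```cpp
#include <vector>

// Flatten any tree in depth-first order so selection works for any shape.
struct Node {
    int id;
    std::vector<Node*> children;
};

void FlattenDFS(Node* n, std::vector<Node*>& order) {
    order.push_back(n);
    for (Node* c : n->children) FlattenDFS(c, order);
}

// Advance the selection index forward ('.') or backward (','), wrapping.
int NextSelection(int current, int count, bool forward) {
    return ((current + (forward ? 1 : -1)) % count + count) % count;
}
```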
6 Task 4
Here, you will write functions to allow the user to change the viewpoint using the mouse. For this, you need
to change the eye, center, and up, which are the inputs to the LookAt() function, based on the user
input. The mouse inputs are explained below:

• Holding the left mouse button and moving the mouse should rotate the camera around the center
point. Moving the mouse horizontally should rotate the view left/right. Similarly, moving the mouse
vertically should rotate the view up/down.

• Holding the right mouse button and moving the mouse should translate the camera. This means both
the eye and the center should be moved by the same amount. Again, moving the mouse horizontally
and vertically corresponds to moving the camera left/right and up/down, respectively.
• Scrolling the mouse should change the distance of the camera to the center point. This means that
the view direction stays the same, but the camera gets closer to or further away from the center. For
this, you need to look up the appropriate callback function for scrolling from the GLFW mouse API
documentation.
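The scroll-to-zoom update can be sketched as moving the eye along the view direction toward or away from the center, which leaves the view direction unchanged. A tiny Vec3 stands in for glm::vec3, and the halve/double step size is an arbitrary choice (a real implementation would scale by a small factor per scroll tick).

```cpp
// Move the eye toward (positive scroll) or away from the center.
struct Vec3 { float x, y, z; };

Vec3 Zoom(Vec3 eye, Vec3 center, float scrollOffset) {
    float s = scrollOffset > 0.0f ? 0.5f : 2.0f;  // coarse step, kept exact
    return Vec3{center.x + (eye.x - center.x) * s,
                center.y + (eye.y - center.y) * s,
                center.z + (eye.z - center.z) * s};
}
```

Because center is unchanged and eye moves along eye − center, LookAt keeps pointing the same way, only from a different distance.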

7 Deliverables
Please follow the instructions below or you may lose some points:
• You should include a README file that includes the parts that you were not able to implement, any
extra part that you have implemented, or anything else that is notable.
• Your submission should contain the folders “src” and “shaders” as well as the “CMakeLists.txt” file. You
should not include the “build” folder.

• Zip up the whole package and call it “Firstname Lastname.zip”. Note that the zip file should extract
into a folder named “Firstname Lastname”. So you should first put your package in a folder called
“Firstname Lastname” and then zip it up.
8 Rubric
Total credit: [150 points]
[30 points] – Implementing the MatrixStack class functions
[10 points] – Implementing translate, scale, rotateX, rotateY, and rotateZ
[05 points] – Implementing multMatrix
[05 points] – Implementing Perspective
[10 points] – Implementing LookAt
[75 points] – Implementing the robot with 10 components
[50 points] – Implementing the hierarchical robot with recursive design
[10 points] – Ability to control the robot with keyboard
[15 points] – Ability to select components with the keyboard and show them with a different size
[45 points] – Changing the view
[25 points] – Rotating the camera around the center
[10 points] – Translating the camera
[10 points] – Changing the distance of the camera to the center
Extra credit: [5 points]
[05 points] – Animating the robot
9 Acknowledgement
The robot part is based on the assignment by Shinjiro Sueda.

CSCE 441 – Computer Graphics Programming Assignment 3

1 Goal
The goal of this assignment is to become familiar with the rasterization process.

2 Starter Code
The starter code can be downloaded from here.

3 Task 1
Download the code and run it. You should be able to see a white bunny as shown in Fig. 1. Make sure you
write your name in the appropriate place in the code, so it shows up at the top of the window. Here is a
brief explanation of the starter code:

• There are two folders in the package. “obj” contains a few obj files that contain the geometry
information for different objects. The other folder “src” contains the source files. For this assignment,
you’ll be mainly modifying “main.cpp” and the triangle class. “tiny_obj_loader.h” is a simple
header file for loading the obj files, and you will use it as is.

• The main function in “main.cpp” is similar to the one in all the previous assignments. The Init
function, in this case, loads the model in addition to initializing the window and events. The LoadModel
function reads the vertices of triangles from the desired obj file and writes them into the vertices
vector. The CreateTriangleVector function then creates an instance of the triangle class for
every three vertices in the vertices vector and pushes them into the triangleVector vector.

• The display code then constructs the modelview and projection matrices and draws the triangles one
by one by calling the appropriate drawing function in the triangle class. There are two modes: one
draws using OpenGL (RenderOpenGL) and the other draws on the CPU (RenderCPU). The
function for drawing using OpenGL is already provided, but you have to implement the rasterization
process in the RenderCPU function. You can toggle between rendering with OpenGL and on the CPU using
the space key. In the skeleton code, OpenGL draws a white bunny, since the colors of all the vertices
are set to white in the triangle class constructor. The CPU mode, however, does not draw anything
since the RenderCPU function is empty.

• You can move closer to and further away from the object using ’w’ and ’s’, respectively. The code
simply adjusts the distance of the camera to the coordinate center where the object is located.

Currently, the path to the obj file is hardcoded. You should set up your code so it takes the path to the
obj file as an input argument. This way you can test your program on different models without changing the
source code.

4 Task 2
In this part, you will be implementing different coloring modes. You should be able to switch between
different modes by pressing ’0’, ’1’, and ’2’.
• Mode 0: Assign a random color to each triangle, as shown in Fig. 2 (left). Note that your version is
not going to be exactly like the one shown in this figure as you’ll be assigning the colors randomly.
Figure 1: Running the skeleton code should produce a white bunny as shown here.
Figure 2: The coloring modes 0, 1, and 2 are shown on the left, middle, and right, respectively.
• Mode 1: Assign a random color to each vertex, as shown in Fig. 2 (middle). Again your version is
going to be different from the one shown in this figure.
• Mode 2: Use the z value of the pixel as the color, as shown in Fig. 2 (right). You can choose any color
you want for this. For this, you have to map the z values to the range 0 to 1 (min z mapped to 0 and
max z to 1).
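The mode-2 mapping is a simple linear remap; a sketch, where the resulting intensity would then scale whatever base color you picked:

```cpp
// Linearly remap z from [zMin, zMax] to [0, 1]: min z -> 0, max z -> 1.
float DepthToIntensity(float z, float zMin, float zMax) {
    return (z - zMin) / (zMax - zMin);
}
```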

5 Task 3
Here you implement the graphics pipeline on the CPU. A call to RenderCPU should draw the current triangle onto framebuffer.color. You can use framebuffer.depth to implement the z-buffer
algorithm. You have to implement the following:
• Transform the triangle to the screen – The vertices of the triangle are in object coordinates. To
project them onto the screen, you first need to apply the model, view, and projection transformations to bring
them to normalized device coordinates (NDC). Finally, you apply the viewport transformation to go from
NDC to screen space. Here, we don’t have a model transformation (it is basically the identity), and the view
and projection matrices are provided using glm::lookAt and glm::perspective, but you
have to create the viewport matrix based on the screen resolution. Note that you have to perform the
perspective division (division by the w coordinate) to get the final transformed coordinates.

Figure 3: You have to make sure your rendering does not have artifacts like the ones on the left when the object is far away. The
correct rendering is shown on the right.
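The last two steps of that transform can be sketched as follows: the perspective division takes clip space to NDC (coordinates in [−1, 1]), and the viewport transform maps NDC to pixel coordinates for a width × height screen. The function name and scalar signature are illustrative.

```cpp
// Clip-space (xc, yc, wc) -> screen-space (xs, ys) for a width x height screen.
void NdcToScreen(float xc, float yc, float wc, int width, int height,
                 float& xs, float& ys) {
    float xn = xc / wc, yn = yc / wc;   // perspective division
    xs = (xn + 1.0f) * 0.5f * width;    // [-1, 1] -> [0, width]
    ys = (yn + 1.0f) * 0.5f * height;   // [-1, 1] -> [0, height]
}
```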

• Rasterize the triangle – The previous step places the triangle on the screen. Now, you have to loop over
the pixels on the screen and determine if a pixel is inside or outside the triangle. To make sure the
code runs at a decent speed, you have to implement a bounding-box rasterizer. This means you need to
first compute the min and max of the x and y coordinates of the triangle vertices to find the box that
contains the triangle. Then you only loop over the pixels in the box and perform the inside test.

Note that you do not need to handle the edge cases that we discussed in class. Since we
do not have any transparent objects, you can double-rasterize the pixels shared between two triangles
(Incorrect solution #1 – slide 123). Make sure there are no gaps between your triangles.
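The bounding-box step above can be sketched as follows; flooring the minimum and ceiling the maximum before clamping to the framebuffer avoids the gaps mentioned above while keeping the loop over candidate pixels only.

```cpp
#include <algorithm>
#include <cmath>

// Clamp the triangle's screen-space extent to the framebuffer so the inside
// test only runs over pixels that could be covered.
void TriangleBBox(const float x[3], const float y[3], int width, int height,
                  int& xMin, int& xMax, int& yMin, int& yMax) {
    xMin = std::max(0, (int)std::floor(std::min({x[0], x[1], x[2]})));
    xMax = std::min(width - 1, (int)std::ceil(std::max({x[0], x[1], x[2]})));
    yMin = std::max(0, (int)std::floor(std::min({y[0], y[1], y[2]})));
    yMax = std::min(height - 1, (int)std::ceil(std::max({y[0], y[1], y[2]})));
}
```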
• Interpolate the color of the pixel using Barycentric coordinates – If a pixel is inside the triangle,
you need to compute its color by interpolating the color of the three vertices. For this, you need to
implement Barycentric coordinates (α, β, γ from the slides). Once you obtain these, you can compute
the color as the weighted sum of the colors of the three vertices.
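One standard way to compute α, β, γ is through signed sub-triangle areas: each coordinate is the area of the sub-triangle opposite a vertex over the full area. A pixel is inside when all three lie in [0, 1], and colors or depths interpolate as α·v0 + β·v1 + γ·v2. A sketch:

```cpp
// Twice the signed area of triangle (a, b, p); sign encodes orientation.
static float Edge(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Barycentric coordinates of (px, py) w.r.t. the triangle (x[i], y[i]).
void Barycentric(const float x[3], const float y[3], float px, float py,
                 float& alpha, float& beta, float& gamma) {
    float area = Edge(x[0], y[0], x[1], y[1], x[2], y[2]);
    alpha = Edge(x[1], y[1], x[2], y[2], px, py) / area;
    beta  = Edge(x[2], y[2], x[0], y[0], px, py) / area;
    gamma = 1.0f - alpha - beta;
}
```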

• Implement the z-buffer algorithm – In this stage, you make sure that only the triangles that are
closest to the camera are drawn. This can be done using the z-buffer method as discussed in the
class. The basic idea is to use a depth buffer (you can use frameBuffer.depth) to keep track of
the closest depth drawn so far. You basically initialize this buffer with infinity. Then before drawing
each pixel onto the color buffer, you first check if its depth is less than the depth in the depth buffer.
If it is, then you color the pixel in the color buffer and update the depth buffer. If it is not, then you
do nothing. Note that, to obtain the depth at every pixel, you have to interpolate it from the depths of
the three triangle vertices using Barycentric coordinates.

• [Extra] Implement the clipping algorithm.
You need to test your implementation by comparing against GPU rendering (by pressing space) on
different objects and various conditions. Make sure you move the camera closer to the object and further
away to test the accuracy of your code (see Fig. 3). If you implement the clipping, test your algorithm by
getting really close to the object. Your rendering with clipped triangles should match that of OpenGL.
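The z-buffer bookkeeping described above reduces to a small amount of code; a sketch, with the helper names illustrative:

```cpp
#include <limits>
#include <vector>

// The depth buffer starts at infinity, and a pixel is written only when its
// interpolated depth beats the stored one.
std::vector<float> MakeDepthBuffer(int n) {
    return std::vector<float>(n, std::numeric_limits<float>::infinity());
}

bool DepthTestAndWrite(std::vector<float>& depth, int idx, float z) {
    if (z < depth[idx]) { depth[idx] = z; return true; }  // draw + update
    return false;                                         // occluded: skip
}
```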

6 Deliverables
Please follow the instructions below or you may lose some points:
• You should include a README file that includes the parts that you were not able to implement, any
extra part that you have implemented, or anything else that is notable.
• Your submission should contain the “src” folder as well as the “CMakeLists.txt” file. You should not include the “build” or “obj” folders.

• Zip up the whole package and call it “Firstname Lastname.zip”. Note that the zip file should extract
into a folder named “Firstname Lastname”. So you should first put your package in a folder called
“Firstname Lastname” and then zip it up.
7 Rubric
Total credit: [100 points]
[05 points] – Taking the obj file as an argument
[20 points] – Implementing color modes
[05 points] – Mode 0
[05 points] – Mode 1
[10 points] – Mode 2
[75 points] – Implementing the full rasterization pipeline on CPU
[10 points] – Transforming the triangles
[25 points] – Implement a bounding-box rasterizer with no gaps between the triangles. To get full points,
your code should not take several seconds to render a frame.
[20 points] – Barycentric interpolation
[20 points] – Implement Z-buffer
Extra credit: [20 points]
[20 points] – Implement clipping
8 Acknowledgement
The color modes are based on Shinjiro Sueda’s assignment.

CSCE 441 – Computer Graphics Programming Assignment 4

1 Goal
The goal of this assignment is to write shading code in GLSL.

2 Starter Code
The starter code can be downloaded from here.

3 Task 1
Download the code and run it. You should be able to see a red bunny as shown in Fig. 1. Make sure you
write your name in the appropriate place in the code, so it shows up at the top of the window. Here is a
brief explanation of the starter code:

• There are three folders in the package. “obj” contains the “bunny.obj” file that has the geometry
information (vertex position, normal, etc.) for a bunny object. “shaders” contains the vertex and
fragment shader programs. You’ll be mainly modifying these files to implement different shading
methods. Finally, “src” contains the source files. Program is a class for loading, compiling, and
linking the shader programs as well as sending data to them. Moreover, “tiny_obj_loader.h” is a
simple header file for loading the obj files. You will use the Program class and “tiny_obj_loader.h”
as is, but you have to modify the main file.

In summary, you’ll be modifying the following functions:
– “main.cpp”: You need to modify this file to pass appropriate data to the shader programs
and set up the materials and lighting.
– “shader.vert”: This is the vertex shader and you’ll be filling it out to implement a specific shader.
– “shader.frag”: This is the fragment shader and you’ll be filling it out to implement a specific
shader.
• Now let’s take a look at “main.cpp”

– There are several global variables defined at the top of this file. Specifically, program is
responsible for processing the shader programs. posBuff, norBuff, and texBuff store
the vertex position, normal, and texture coordinates, respectively. materials and lights
are structures for storing the material parameters and lighting information.

– The structure of main function in “main.cpp” is similar to the one in all the previous assignments.
– The Init function
∗ The first few functions are called to initialize the window and events.
∗ LoadModel function reads the obj file and saves the position, normal, and texture coordinates of each vertex of the geometry in the posBuff, norBuff, and texBuff vectors.
We do not use the texBuff data in this assignment.
∗ The last four lines are responsible for setting up the shader programs.
∗ program.SetShadersFileName sets the address of the vertex and fragment shader
files so we can load and compile them next.

Figure 1: Running the skeleton code should produce a red bunny as shown here.
∗ program.Init loads, compiles, and links the shader files, so they are ready to be executed.
∗ program.SendAttributeData sends the vertex attributes to the GPU. Attributes are
defined at each vertex and will be directly passed as the input to the vertex shader program.

program.SendAttributeData(posBuff, "vPositionModel") sends the
position of each vertex which is stored in posBuff to the vertex program under the name
of vPositionModel. If you look at the vertex program (“shader.vert”), you see a variable called vPositionModel which contains the position of each vertex. Here, we send
the position and normal of the vertices to the vertex program.

– The Display function
∗ The first few lines of this function set the projection, view, and model matrices. These are
the matrices used to project the vertices from the object space (the space that the positions
in posBuff are given in) to normalized device coordinates.

∗ program.Bind activates the shader programs so they can be used for drawing. Note that
you can have an array of program variables, each set up with different vertex and fragment shader
files, e.g., program[0] set up with “shader0.vert” and “shader0.frag” and program[1]
set up with “shader1.vert” and “shader1.frag”. This setup can be done in the Init function. Then in the Display function, you can just bind the particular shader program that
you would like to use for drawing, e.g., program[1].Bind(). In fact, in this assignment you have to write three different shaders and be able to cycle through them using
a key.

∗ program.SendUniformData sends uniform data to the shader programs (both vertex
and fragment). These are the variables that are the same for all the vertices. Here, we
send the model, view, and projection matrices to the vertex program, so we can use them
to transform the vertices to normalize device coordinate in the vertex program. Variables
model, view, and projection in “shader.vert” are basically these 4×4 matrices.

∗ glDrawArrays basically performs the graphics pipeline including the shader programs
to display the triangles on the screen.
∗ Finally, program.Unbind deactivates the shader.
• Now let’s look at the “shader.vert” and “shader.frag”
– “shader.vert”
∗ At the top, we define the two attributes vPositionModel and vNormalModel containing the position and normal of each vertex, respectively. These two variables are of type
vec3, meaning that they have three elements. The data for these two variables are provided
through calling program.SendAttributeData in the Init function of “main.cpp”.

∗ Next we define three uniform 4×4 matrices (mat4) to serve as the model, view, and projection matrices. Uniform variables are constant for all the vertices (do not change from
one vertex to another). These matrices are passed to the vertex shader through calling
program.SendUniformData in the Display function of “main.cpp”.
∗ The next few lines of code define a structure for holding the information about the light
sources.

∗ The line uniform lightStruct lights[NUM_LIGHTS] creates an array of the
previously defined light structure, called lights. Note that since the light sources are the
same for all the vertices, they should be defined as uniform variables. Moreover, please
note that currently no value is passed to these variables. You need to define these light
sources in the “main.cpp” by indicating the position and color of each light source and
then pass them to the vertex program by calling program.SendUniformData with
appropriate arguments in the Display function.

∗ Next, we have four uniform variables (three vec3 and one float) which contain information about the materials. These are the parameters required to calculate the color of each
pixel or vertex based on the Phong shading model. Again the value for these variables need
to be passed by calling program.SendUniformData with appropriate arguments in
the Display function.

∗ In the next line, we define a varying variable of type vec3, called color. Similar
to attributes, varying variables are defined per vertex. These are the variables that are
defined in the vertex program and are passed to the fragment program. This is in contrast
to attributes, which are passed from the CPU to the vertex program.

∗ The next few lines are the main function of the vertex program. We first multiply the model, view, and projection matrices to transform the vertices stored in vPositionModel to normalized device coordinates. The output of this process is stored in gl_Position, which is a pre-defined output of the vertex shader. Note that we append a 1 to the end of each vertex to take it to homogeneous coordinates. We then define the color of each vertex to be red (vec3(1.0f, 0.0f, 0.0f)).
– “shader.frag”

∗ We first define the varying variable color. This is the variable that is passed from the vertex program to the fragment program.
∗ In the main function, we set gl_FragColor, which is a pre-defined output of the fragment shader, to be equal to color. Note that gl_FragColor has four elements corresponding to red, green, blue, and alpha. Alpha defines the transparency of the color. Alpha equal to 1 means the object is opaque, which is why we append a 1 to the end of the color variable before assigning it to gl_FragColor.

4 Task 2
In this part, you will be implementing the Phong shading model using the Gouraud approach. The Phong shading model uses the following equation to calculate the color of each point:
I = ka + Σ_{i=1}^{k} Ci [ kd max(0, Li · N) + ks max(0, Ri · E)^s ] .   (1)

Note that this equation is slightly different from the one in the slides. In the slides, the ambient term is defined as ka·A, but here we assume the intensity of the ambient illumination A is equal to (1, 1, 1). Use the following material and light sources to implement the shading.
• Material 1
– ka = (0.2, 0.2, 0.2)
– kd = (0.8, 0.7, 0.7)
– ks = (1.0, 1.0, 1.0)
– s = 10.0
• Light
– light 1
∗ position = (0.0, 0.0, 3.0) in world coordinate
∗ color = C1 = (0.5, 0.5, 0.5)
– light 2
∗ position = (0.0, 3.0, 0.0) in world coordinate
∗ color = C2 = (0.2, 0.2, 0.2)

You need to first set the material and light information in the structure arrays provided at the top of
“main.cpp”. Note that materials is an array of size 3, but for this task you only set the first element,
i.e., materials[0]. You need to set the other two materials and be able to cycle through them in the
next task.

Once you set the material and light sources properly, you need to pass them to the vertex shader by calling program.SendUniformData with appropriate arguments in the Display function. Note that the position and color of the light sources are defined within a structure in the shader program. You won’t be able to pass a structure from the CPU to the GPU, so you should just pass the individual properties of each light source.

For example, you can pass the position of the first light source as:
program.SendUniformData(lights[0].position, "lights[0].position")
Here, the first argument is the name of the variable on the CPU, and the second argument is the name of the variable on the GPU, inside quotation marks.

The position and color of the light sources along with the material properties (ka, kd, ks, s) can then be used to implement Eq. 1 in the vertex program. I in this equation is the color of the vertex, so you need to set color (the varying variable in the vertex program) to I. This color is then interpolated by the graphics hardware to obtain the color of each pixel (fragment).

The varying variable color in the fragment shader is the interpolated color, and thus setting gl_FragColor to this color variable in the fragment shader results in outputting the shaded pixels. Since the shading is done in the vertex program (per vertex) and the color of each pixel is obtained by interpolating the color at the three vertices, this shader is an implementation of the Gouraud approach.

Note that the interpolation is done on the graphics hardware in the process between the vertex and fragment shaders, and you do not need to implement it.
Note that, in order to implement Eq. 1, in addition to the material and light properties, you need to compute Li, N, Ri, and E. As discussed in class, you can compute them either in world space or camera space.

Figure 2: The outcome of task 2.

Let’s say you choose to compute them in world space. In this case, the position of the
vertices and normals (vPositionModel and vNormalModel) are in object space, so you have to first transform them properly to world space using the model transformation (you have to be careful about the normal transformation, as discussed in class). Since the positions of the light sources and the eye are given in world space, they do not require any transformation.
If you implement this task correctly, you should see the bunny shown in Fig. 2.

5 Task 3
Here, you’ll create two more materials and add keyboard hooks to cycle through the materials with the m/M keys (m moves forward, while M moves backward). The two additional materials are as follows:
• Material 2
– ka = (0.0, 0.2, 0.2)
– kd = (0.5, 0.7, 0.2)
– ks = (0.1, 1.0, 0.1)
– s = 100.0
• Material 3
– ka = (0.2, 0.2, 0.2)
– kd = (0.1, 0.3, 0.9)
– ks = (0.1, 0.1, 0.1)
– s = 1.0
The rendered bunny using these two materials and the Gouraud approach is shown in Fig. 3.

6 Task 4
Here, you’ll be implementing two more shaders.
• Phong approach for shading – As discussed in class, the Phong approach performs per-pixel shading, as opposed to the per-vertex shading of the Gouraud approach. This is very similar to what you implemented for the Gouraud approach. Here, instead of calculating Eq. 1 in the vertex shader, you have to implement it in the fragment shader and directly set the final color I to gl_FragColor. Of course, in this case, you need to pass the material and lighting information as well as other variables to the fragment shader to be able to do the computations. The outcome of this shader for material 1 is shown in Fig. 4 (left).

Figure 3: The rendered bunny using material 2 (left) and 3 (right).
Figure 4: The rendered bunny using the Phong approach (material 1) (left) and the silhouette shader (right).

• Silhouette shader – To implement a silhouette shader, you need to color every fragment black, except for the ones where the angle between the normal and the eye vector is close to 90°. For this, you need to compute the dot product of the normal and the eye vector and threshold the result. Make sure the threshold is chosen properly so you get a result similar to Fig. 4 (right). Note that this shader doesn’t use the light or material information.

Pressing ‘1’, ‘2’, and ‘3’ should switch to the Gouraud, Phong, and silhouette shaders, respectively. For this, you need to implement each shading approach in a different set of shader files, e.g., Gouraud in “shader1.vert” and “shader1.frag”, Phong in “shader2.vert” and “shader2.frag”, and silhouette in “shader3.vert” and “shader3.frag”. You should also create an array of programs. Then, in the Init function, you assign a particular shader to each program element (e.g., program[0] for Gouraud – “shader1.vert” and “shader1.frag”), load and compile all of these shaders, and pass the appropriate attributes to them. In the Display code, you should then bind the correct shader program based on the keyboard input.

7 Task 5
Provide the ability to move the light sources. The user should be able to cycle through the two light sources with l/L (l moves forward and L moves backward) and move the selected light source along the x, y, and z axes using the keys x/X, y/Y, and z/Z, respectively; lowercase letters move in the positive direction and uppercase letters in the negative direction.

8 Deliverables
Please follow the instructions below or you may lose points:
• You should include a README file that includes the parts that you were not able to implement, any
extra part that you have implemented, or anything else that is notable.
• Your submission should contain the “src” and “shaders” folders as well as the “CMakeLists.txt” file. You should not include the “build” or “obj” folder.
• Zip up the whole package and call it “Firstname Lastname.zip”. Note that the zip file should extract
into a folder named “Firstname Lastname”. So you should first put your package in a folder called
“Firstname Lastname” and then zip it up.

9 Rubric
Total credit: [100 points]
[30 points] – Implementing the Gouraud approach
[30 points] – Implementing the Phong approach
[15 points] – Implementing the silhouette shader
[10 points] – Ability to cycle through multiple materials with the keyboard
[15 points] – Ability to move the light sources with the keyboard
Extra credit: [10 points]
[10 points] – Implement a spotlight

CSCE 441 – Computer Graphics Programming Assignment 5

1 Goal
The goal of this assignment is to write a ray tracer.
2 Starter Code
The starter code can be downloaded from here.

3 Task 1
Download the code and run it. You should be able to see a black screen. Make sure you write your name
in the appropriate place in the code, so it shows up at the top of the window. Here is a brief explanation of
the starter code:
• There is only one folder in the package, which includes all the source files.
• The main function in “main.cpp” first calls the Init function.
• The Init function performs the following:
– First, it initializes the window.
– Then it creates an instance of the Scene class. This class is a container that holds all the scene
information including the shapes and light sources.

– Next it creates an instance of the Camera class. Currently, only the resolution of the image is passed to the camera. However, you need to modify this and pass all the necessary information.
– Then we call a function of the Camera class to take a picture of the scene using ray tracing. This is where the main loop of the ray tracer should be implemented. This function takes the scene as input and takes a picture of it. The picture is stored in a private variable called renderedImage.

– We then have the camera return the rendered image, which is copied into frameBuffer through memcpy.
Note that you do not need to follow this starter code structure. If you want to write your ray tracer differently, feel free to do so. The starter code is just provided to help you.

4 Task 2
In this part, you will be implementing a ray tracer. Your ray tracer should support the following:
• Shadows: You should compute shadow rays to all the light sources.
• Reflections: Your code should recursively call the reflection ray, i.e., you shouldn’t hard code the
reflection calls. Set the level of recursion to at least 4. DO NOT HARD CODE FOUR REFLECTION
CALLS.
• Lighting: You should compute basic lighting (ambient, diffuse, specular) as explained in the shading lectures.

You are expected to demonstrate your ray tracer on the following scene. Here are the camera properties:
• Eye = (0.0, 0.0, 7.0)
• Look at = (0.0, 0.0, 0.0)
• Up = (0.0, 1.0, 0.0)
• FovY = 45
• Width Res = 1200
• Height Res = 800

Your scene should include two planes, four spheres, and two light sources as follows:
• Shapes
– Sphere 1
∗ Position = (-1.0, -0.7, 3.0)
∗ Radius = 0.3
∗ ka = (0.1, 0.1, 0.1)
∗ kd = (0.2, 1.0, 0.2)
∗ ks = (1.0, 1.0, 1.0)
∗ km = (0.0, 0.0, 0.0)
∗ s = 100.0
– Sphere 2
∗ Position = (1.0, -0.5, 3.0)
∗ Radius = 0.5
∗ ka = (0.1, 0.1, 0.1)
∗ kd = (0.0, 0.0, 1.0)
∗ ks = (1.0, 1.0, 1.0)
∗ km = (0.0, 0.0, 0.0)
∗ s = 10.0
– Sphere 3 (reflective)
∗ Position = (-1.0, 0.0, -0.0)
∗ Radius = 1.0
∗ ka = (0.0, 0.0, 0.0)
∗ kd = (0.0, 0.0, 0.0)
∗ ks = (0.0, 0.0, 0.0)
∗ km = (1.0, 1.0, 1.0)
∗ s = 0.0
– Sphere 4 (reflective)
∗ Position = (1.0, 0.0, -1.0)
∗ Radius = 1.0
∗ ka = (0.0, 0.0, 0.0)
∗ kd = (0.0, 0.0, 0.0)
∗ ks = (0.0, 0.0, 0.0)
∗ km = (0.8, 0.8, 0.8)
∗ s = 0.0
– Plane 1
∗ Center = (0.0, -1.0, 0.0)
∗ Normal = (0.0, 1.0, 0.0)
∗ ka = (0.1, 0.1, 0.1)
∗ kd = (1.0, 1.0, 1.0)
∗ ks = (0.0, 0.0, 0.0)
∗ km = (0.0, 0.0, 0.0)
∗ s = 0.0
– Plane 2
∗ Center = (0.0, 0.0, -3.0)
∗ Normal = (0.0, 0.0, 1.0)
∗ ka = (0.1, 0.1, 0.1)
∗ kd = (1.0, 1.0, 1.0)
∗ ks = (0.0, 0.0, 0.0)
∗ km = (0.0, 0.0, 0.0)
∗ s = 0.0
• Light
– light 1
∗ Position = (0.0, 3.0, -2.0)
∗ Color = (0.2, 0.2, 0.2)
– light 2
∗ Position = (-2.0, 1.0, 4.0)
∗ Color = (0.5, 0.5, 0.5)
Your final rendering should look exactly like the image in Fig. 1.

4.1 Steps
Here are the steps to take to properly implement the ray tracer:
• Fill in the Camera class to have the necessary member variables. Currently, the instance of the
Camera class in the Init function of “main.cpp” is created with only width and height information.
You should set it up using the camera parameters provided above.
• Set up the scene with the information provided above by filling in the Shape and Light vectors in
the Scene class. This can be done in the constructor of the Scene class.
To do this, you need to first add appropriate member variables to the Shape, Sphere, Plane, and Light classes. For example, the Light class should at least have the position and color as member variables. Note that Shape is the base class for Sphere and Plane, so any member variables you put in Shape are accessible from both derived classes.

Once all the classes are properly set up, you can create instances of Sphere, Plane, and Light classes
based on the information provided above and then push them into the appropriate vector.
Figure 1: The result of rendering the scene.
Figure 2: From left to right the outcome of steps 1, 2, 3, and 4 in Sec. 4.2.
• Next, you should fill in the TakePicture function of the Camera class. In this function, you should loop over all the pixels. For each pixel, you first create a ray pointing from the camera to the pixel. You then call a function to get the color of the ray. Most of the computation is done inside this recursive function. The returned color of the ray is then used to set the color of the renderedImage array at that pixel.

4.2 Suggestions
Since the ray tracer has several features, it is best to build your system incrementally. For example, you can follow the steps below (see Fig. 2 for the outcome of each step):
1. Render a sphere without any shadows or recursive calls.
2. Add a plane to the scene.
3. Shoot shadow rays to be able to render shadows.
4. Implement the recursive reflection call and add a reflective sphere to the scene to test it.
5 Bonus
• Add the ability to render triangles.
• Add an area light and render soft shadows.

6 Deliverables
Please follow the instructions below or you may lose points:
• You should include a README file that includes the parts that you were not able to implement, any
extra part that you have implemented, or anything else that is notable.
• Your submission should contain the “src” folder as well as the “CMakeLists.txt” file. You should not include the “build” folder.

• Zip up the whole package and call it “Firstname Lastname.zip”. Note that the zip file should extract
into a folder named “Firstname Lastname”. So you should first put your package in a folder called
“Firstname Lastname” and then zip it up.

7 Rubric
Total credit: [150 points]
[20 points] – Camera set up and primary rays generated
[10 points] – Plane is intersected correctly
[15 points] – Sphere is intersected correctly
[20 points] – Depth test is done properly
[30 points] – Basic local lighting is computed
[10 points] – Normals of spheres and planes computed correctly
[10 points] – Light, view, and reflection vectors are computed correctly
[10 points] – Shading is computed using Phong model
[25 points] – Shadow rays are calculated and incorporated
[30 points] – Reflection is supported
[20 points] – Reflection is computed recursively
[10 points] – Color of the reflected ray is combined with local Phong shading
Extra credit: [15 points]
[5 points] – Triangle
[10 points] – Area light

CSCE 441 – Computer Graphics Programming Assignment 6

1 Goal
The goal of this assignment is to become familiar with Bezier curves. You will write a program that
generates points on a Bezier curve, given the control points.
2 Starter Code
The starter code can be downloaded from here.

3 Task 1
First, download the code and get it to run. The code has the following characteristics:
• You can specify a file to read as a command-line argument. There is also a default set.
– 3 sample files have been included. Each one will contain 4 Bezier control points.
• Reading the file is NOT included in the code. You will want to read the file yourself.
• You will want to fill in two routines that are unfinished: loadFile and generatePoints. The rest of the program should be done for you.

• Note that points are stored as glm::vec3’s.
• The program will display a region from [-10,10] in x, y, and z. You do not need to change anything
about the display routine.

4 Task 2
You are to finish the program. This involves filling in two functions (you may write additional functions if
you want):
• The loadFile routine. It takes in a file name and should return a vector consisting of the four
control points of the curve.
– The first line will be a single number, n, giving the number of control points. Here, the curves
you read will always have exactly 4 control points, so n will be 4.
– The next n lines will each contain 1 control point.
– Each line will consist of 3 values (x, y, z coordinates of the point)

– You should read in the 4 points and store them in a vector of points that is returned from the
function.
• The generatePoints routine. It takes in a vector that contains 4 control points. It should output
a vector of points that are along the curve.
– The input will be a vector of the four control points, in order.
– You should generate a list of at least 51 points along the curve, in order, from the start of the
curve (t = 0) to the end of the curve (t = 1).

∗ You can do this by directly evaluating the polynomial function. However, to get full credit you need to implement de Casteljau’s algorithm.
– The results should be stored in a vector of 3D points that are returned.

5 Deliverables
Please follow the instructions below or you may lose points:
• You should include a README file that includes the parts that you were not able to implement, any
extra part that you have implemented, or anything else that is notable.
• Your submission should contain the “src” folder as well as the “CMakeLists.txt” file. You should not include the “build” or “DataFile” folders.
• Zip up the whole package and call it “Firstname Lastname.zip”. Note that the zip file should extract
into a folder named “Firstname Lastname”. So you should first put your package in a folder called
“Firstname Lastname” and then zip it up.

6 Rubric
Total credit: [50 points]
[10 points] – File is read in correctly
[30 points] – Points are generated along curve correctly
[10 points] – Points are generated using de Casteljau’s algorithm
7 Acknowledgement
This assignment is from John Keyser, with slight modifications.