In this assignment, you will use texturing techniques to add more details to the scene. We consider the task
of rendering a terrain like the one above. The assignment is split into three exercises and one bonus exercise.
You will start from the flat land to which you will first add height using displacement mapping. Next, you
will texture the terrain using images of, for example, grass, sand, stones, etc. In the third exercise, your task is
to estimate normal directions for the terrain from the heightmap and add simple diffuse shading to the scene.
The last (bonus) exercise is about improving normal estimation to get a more accurate shading. The image
above presents a sample rendering after completing all the tasks.
Important: You have the option to opt out of these tasks and implement texturing in our raytracer instead.
This option might be especially appealing for people taking part in the rendering competition. Please find
more details at the end of the document.
The current template provides a solution to the previous assignment. It was also modified to accommodate
the new functionality you will add in this assignment. Below are the most significant changes compared to the previous assignment:
2. The geometry.js file now includes a function for creating the terrain geometry, which is a subdivided quad
(Figure 1). A variable inside the function can control the amount of subdivision. Look for the comments
in the file.
3. The main file now includes additional functionality related to terrain rendering and texture setup.
For example, there is a definition of an additional shader program for terrain rendering, as well as code for creating a VAO for the terrain, reading textures, uploading them to the GPU, and setting them up during
rendering. Some of the code still has to be uncommented. Look for the comments in the file. To
correctly initialize the textures, you will need to fill two arrays, textureFiles and textureVariables. They
should contain the paths to the images and the corresponding sampler names in the shader program.
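As a minimal sketch of how the two arrays might be filled (the actual file paths and sampler names depend on your shader and assets; all names below are illustrative, not the template's actual identifiers):

```javascript
// Illustrative only: each entry in textureFiles must line up with the
// uniform sampler name at the same index in textureVariables.
const textureFiles = [
  "textures/heightmap.png",  // displacement map (Exercise 1)
  "textures/grass.jpg",      // terrain materials (Exercise 2)
  "textures/sand.jpg",
  "textures/rocks.jpg",
];
const textureVariables = [
  "heightmap",
  "grassSampler",
  "sandSampler",
  "rockSampler",
];
```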
Figure 1: The definition of the terrain geometry; view from the top, down the y-axis. The z-values for all
vertices are equal to 0.
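A subdivided quad like the one in Figure 1 can be generated on the CPU roughly as follows. This is a hedged sketch, not the template's actual geometry.js code: the function name, the [-0.5, 0.5] extent, and the choice of laying the flat grid in the xz-plane (so that displacement later happens along y) are assumptions.

```javascript
// Sketch: build a flat, subdivided quad with two triangles per grid cell.
function makeTerrainGrid(subdivisions) {
  const positions = [];       // x, y, z triplets; the grid is flat (y = 0)
  const indices = [];         // triangle indices into the vertex list
  const n = subdivisions;     // cells per side -> (n + 1)^2 vertices
  for (let row = 0; row <= n; row++) {
    for (let col = 0; col <= n; col++) {
      positions.push(col / n - 0.5, 0.0, row / n - 0.5);
    }
  }
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      const i = row * (n + 1) + col;          // top-left vertex of the cell
      indices.push(i, i + 1, i + n + 1);      // first triangle
      indices.push(i + 1, i + n + 2, i + n + 1); // second triangle
    }
  }
  return { positions: new Float32Array(positions), indices: new Uint16Array(indices) };
}
```

Increasing `subdivisions` gives the vertex shader more vertices to displace, at the cost of more geometry to render.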
Feel free to adapt the template, rewrite it in your own way, or copy parts of it into your solution of the previous assignment.
Exercise 1 [5 points]
To complete this exercise, you will need to add displacement mapping to the terrain. You should start by
getting a displacement map corresponding to terrain altitude. You have three options for that:
1. On iCorsi you can find preselected fragments of the altitude map of Switzerland.
2. There is also a link to an altitude map of the whole of Switzerland, from which you can crop a region you
would like to work on.
3. You can visit the https://heightmap.skydark.pl/ website and download an altitude map
from there. Unfortunately, the website does not provide accurate altitude information for mountainous regions.
In general, we recommend looking for places with nice height variation as well as some water. Later in the
course, you will be able to implement reflections of the environment in the water.
Your next step should be adding the heightmap as a texture in the WebGL code. You should also define a
new shader program (both vertex and fragment shader, see the template) for rendering the terrain. In this
assignment, the shader program should: (1) displace the vertices along the y-axis according to the information
in the heightmap; (2) render the terrain using one color, e.g., grey. Optionally, you can color the surface of the
terrain according to the height. This will allow you to better verify whether the displacement mapping works correctly.
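The displacement performed in the vertex shader can be mirrored on the CPU as follows. This is a sketch under assumptions: `heightScale` is a hypothetical uniform, the lookup mimics a non-filtered nearest-texel fetch, and in GLSL you would instead call `texture(heightmap, uv).r` and add the scaled result to the vertex's y-coordinate.

```javascript
// Sketch: displace a vertex along the y-axis by the heightmap value at its uv.
function displaceVertex(position, uv, heightmap, width, height, heightScale) {
  // nearest-texel lookup into a grayscale heightmap with values in [0, 1]
  const x = Math.min(width - 1, Math.floor(uv[0] * width));
  const y = Math.min(height - 1, Math.floor(uv[1] * height));
  const h = heightmap[y * width + x];
  return [position[0], position[1] + h * heightScale, position[2]];
}
```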
Exercise 2 [5 points]
The next step is adding the textures to color the terrain. You should use at least three different textures. In the
example from the beginning of the document, we used the textures for grass, sand, and rocks. Additionally,
the water and the snow have single colors, i.e., they are not textured. For the textures, you will find many
resources online. In particular, a very useful one is 3DTextures. For more resources, you can also check the
We suggest applying the textures to the terrain according to the height. In other words, the final color of each
fragment should be taken from the texture corresponding to the fragment's height. To make the scene
nicer, you can blend the textures at the transitions, for example, using the GLSL smoothstep function.
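The blending logic can be sketched in JavaScript as below; in the fragment shader you would use the built-in `smoothstep()` instead. The threshold heights 0.4 and 0.6 and the two-material blend are illustrative choices, not values prescribed by the assignment.

```javascript
// smoothstep as defined by GLSL: clamp, then cubic Hermite interpolation.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(1, Math.max(0, (x - edge0) / (edge1 - edge0)));
  return t * t * (3 - 2 * t);
}

// Blend two material colors by height: pure grass below 0.4, pure rock
// above 0.6, and a smooth transition in between.
function blendByHeight(grassColor, rockColor, h) {
  const t = smoothstep(0.4, 0.6, h);
  return grassColor.map((c, i) => c * (1 - t) + rockColor[i] * t);
}
```

With three or more textures, you can chain several such transitions, one per height band.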
Alternatively to the above texturing strategy, you can prepare one or more additional textures that encode
masks for each material. You can get very nice results with such a technique, but it requires a bit more work.
For inspiration on this, you can check the following article.
Exercise 3 [5 points]
In the last step, you should add the shading to your terrain. Please assume that the terrain is a Lambertian
surface. For this exercise, you will need to expand the fragment shader for the terrain. In particular, for each
fragment (in the fragment shader), you need to estimate the corresponding normal vector using heightmap
information. To this end, please recall that the normal vector can be estimated as a cross-product between two
vectors that are tangential to the surface. While you can compute tangential vectors along any direction,
the most natural choice is vectors along the u- and v-axes. You can estimate these vectors by fetching
neighboring texels from the heightmap.
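The estimation can be sketched as follows, mirrored on the CPU. Assumptions: the terrain lies in the xz-plane with height along y, `texel` is the uv-distance to the neighboring texel, `heightScale` matches the displacement scale, and central differences are used; in GLSL the `sample` callback would be a `texture()` fetch at shifted uv coordinates.

```javascript
// Sketch: estimate the surface normal at (u, v) from heightmap differences.
function estimateNormal(sample, u, v, texel, heightScale) {
  // height differences along the u- and v-axes (central differences)
  const dhdu = (sample(u + texel, v) - sample(u - texel, v)) * heightScale;
  const dhdv = (sample(u, v + texel) - sample(u, v - texel)) * heightScale;
  // tangents: (2*texel, dhdu, 0) along u and (0, dhdv, 2*texel) along v;
  // their cross product, oriented along +y, gives the normal:
  const n = [-dhdu * 2 * texel, 4 * texel * texel, -dhdv * 2 * texel];
  const len = Math.hypot(n[0], n[1], n[2]);
  return n.map(c => c / len);    // normalize before shading
}
```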
Hints:
• Keep the Phong-shaded sphere from the previous assignment and place it such that it appears above the
terrain. This will allow you to easily identify the direction of the light and check whether the shading is correct.
• You may want to scale the terrain non-uniformly using a model matrix to flatten it a bit. Remember to
use the (M⁻¹)ᵀ matrix for normal transformation, where M is the model matrix.
• To get a nice water surface, you can set the estimated normal for water fragments to [0.0, 1.0, 0.0]ᵀ.
Bonus [3 points]
To get extra bonus points, you can improve the normal estimation in your renderer. Currently, the problem is
that your web browser most likely does not support 16-bit images, while the downloaded heightmaps should,
in fact, be 16-bit images. This means that your heightmap in the GLSL program is represented using only
8 bits. This introduces visible errors in the estimated normals (see the illustrations in Figure 2). But there is
an old trick in computer graphics to overcome this limitation. You already use a texture that has at least
three channels (red, green, blue), and each of them has 8-bit precision. You can, therefore, encode your 16-bit
heightmap in two channels, for example, by storing the upper 8 bits in the red channel and the lower 8 bits in
the green channel. This way, you can store all 16 bits even though the texture has only 8-bit
precision per channel. You can use any means to do the conversion. To generate the sample solution, we used a short
Matlab script. Feel free to use any programming language. To do the improved normal computation in your
fragment shader, you will need to decode the height from the two channels before computing the normal.
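The packing and unpacking can be sketched as follows. The channel assignment (high byte in red, low byte in green) is one possible convention; pick one and use it consistently in both your conversion script and your fragment shader.

```javascript
// Sketch: pack a 16-bit height value into two 8-bit channels, and decode it
// back to a normalized height, as the fragment shader would.
function encodeHeight16(h) {
  return { red: (h >> 8) & 0xff, green: h & 0xff };  // high byte, low byte
}

function decodeHeight16(red, green) {
  // in GLSL: (red * 256.0 + green) / 65535.0, with channels read as 0..255
  return (red * 256 + green) / 65535;                // normalized to [0, 1]
}
```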
Figure 2: Illustrations of the sample solutions for the individual exercises (panels, left to right: Exercise 1,
Exercise 2, Exercise 3, Bonus). Please note that in the first image, the color of each fragment is set according
to the input height. The red circles demonstrate the problem when using only 8-bit precision and the
improvement proposed in the bonus exercise.
Texturing in the raytracer [18 points]
If you like the raytracer we were developing in the first part of the course, now is the time to extend it with
nice texture mapping. Instead of solving the above exercises, you can implement texturing in your raytracer.
Note that the points from the rasterization and raytracing parts cannot be combined; you have to choose one option.
You should texture at least one object. It can be either a sphere or a cone. Solving this version of
the assignment will enable you to add interesting materials to the scene. This is the alternative list of tasks,
which in the end sums up to the same number of points as the regular assignment:
• Images as textures [8 points] You will need to start by making your raytracer application read images.
For this, you will need to use an external library. One option is a library for reading JPEG files.
Reading these files is, however, complicated, and so are the usually available libraries. Alternatively,
you can convert images representing textures to PNG, BMP, or PPM files using available tools for image
manipulation. Libraries for reading these formats are usually much simpler. You are free to use any
solution and library you like. Once you are able to read image files, modify the procedural texturing we
implemented in the raytracer to use image information and add color texture to one object in the scene.
For the source of textures see Exercise 2.
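As a sense of how simple the suggested formats are, here is a minimal sketch of a parser for the ASCII PPM (P3) variant. It parses from a string; reading the file from disk, PPM comment lines (starting with `#`), and error handling are left out, and a real loader should add them.

```javascript
// Sketch: parse an ASCII PPM (P3) image into normalized RGB values.
function parsePPM(text) {
  const tokens = text.split(/\s+/).filter(t => t.length > 0);
  if (tokens[0] !== "P3") throw new Error("not an ASCII PPM file");
  const width = parseInt(tokens[1], 10);
  const height = parseInt(tokens[2], 10);
  const maxval = parseInt(tokens[3], 10);   // usually 255
  const pixels = tokens
    .slice(4, 4 + width * height * 3)
    .map(t => parseInt(t, 10) / maxval);    // normalize to [0, 1]
  return { width, height, pixels };         // pixels: r, g, b per pixel
}
```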
• Normal mapping [7 points] While the previous task is about getting simple texturing working, now
you are asked to also read a normal map and implement normal mapping. For this, you will need to
compute tangent and bitangent vectors to properly transform normals read from the files to the object
space. As you will be texturing simple objects, computation of these vectors is also rather simple.
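For a sphere, the tangent frame can be derived analytically from the spherical parameterization, roughly as sketched below. The conventions (polar angle measured from +y, azimuthal uv mapping, right-handed frame) are assumptions and may differ from your raytracer.

```javascript
// Sketch: tangent frame for a unit sphere parameterized by spherical angles
// theta (polar, from +y) and phi (azimuth), obtained by differentiating the
// surface position with respect to the parameters.
function sphereTangentFrame(theta, phi) {
  // surface position on the unit sphere doubles as the normal
  const normal = [
    Math.sin(theta) * Math.cos(phi),
    Math.cos(theta),
    Math.sin(theta) * Math.sin(phi),
  ];
  // d(position)/d(phi), normalized: follows lines of constant latitude
  const tangent = [-Math.sin(phi), 0, Math.cos(phi)];
  // bitangent completes the frame: cross(normal, tangent)
  const bitangent = [
    normal[1] * tangent[2] - normal[2] * tangent[1],
    normal[2] * tangent[0] - normal[0] * tangent[2],
    normal[0] * tangent[1] - normal[1] * tangent[0],
  ];
  return { tangent, bitangent, normal };
}
```

A normal read from the map is then transformed as `n = nx * tangent + ny * bitangent + nz * normal` (with the map's channel convention as a further assumption to check).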
• Ambient occlusion and roughness textures [3 points] Make use of ambient occlusion and roughness
textures. An ambient occlusion texture represents the amount of indirect light and can be used as a multiplier for the ambient term in the Phong reflectance model. Roughness can be used to calculate the shininess
parameter in the model. Note that roughness is the opposite of shininess, and you can map the roughness
value to the shininess parameter in whatever way you want. One possibility is to use something like:
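The handout leaves the exact formula open; the sketch below is one illustrative mapping, with the constant 96 an arbitrary choice rather than part of the assignment.

```javascript
// Sketch: map roughness in [0, 1] to a Phong shininess exponent. Rough
// surfaces (roughness -> 1) get a small exponent (wide highlight); smooth
// surfaces (roughness -> 0) get a large one (tight highlight).
function roughnessToShininess(roughness) {
  return Math.max(1, (1 - roughness) * 96);
}
```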
Submitting the assignment
Submit an archive file (ZIP) with all the files required for running your application. If you also solve the
bonus assignment, please include in your submission the source code of the program that does the encoding
of the heightmap in two 8-bit channels.
Solutions must be returned by December 8, 2022 via iCorsi3.