The project proposal should be a 2-page document including the following:
– The project goals
– An abstract of the project you want to do
– Details of the steps that need to be taken to achieve the goals:
o A timeline indicating how long each step is expected to take
– A brief literature overview with references relevant to your project.
One important factor in writing proposals is doability: how achievable your goals are. I suggest evaluating your
capabilities realistically and setting your goals slightly higher. This helps you stay realistic while at the same time pushing
yourself to learn more.
Member 1 of the team (link to teams here) is responsible for uploading the proposal to Blackboard. If you want to
change member 1, please edit the link above.
As explained in class, there is a basic project definition that accounts for only up to 80% of the total project grade.
To earn the remaining 20%, you have to go further and implement an algorithm from scratch or design a complex system. Your
proposal does not necessarily need to include the basic project if it is complex and involved enough.
If you want to use a robot platform other than the Turtlebots, your proposal should reflect that. Focus on the capabilities of the
robot you want to use and explain why you need those capabilities to achieve your goals.
In addition to the proposal, the project deliverables are:
– Progress report presentation
– Final presentation
– Detailed technical document
– A final video highlighting your achievements
Standard Project: Autonomous Robot Navigation (up to 80% of your project grade)
In this project, you will program a mobile robot so that it can navigate in an unknown environment autonomously while
mapping its environment. You will focus on high-level mapping and navigation tasks for the robot to create a world model
of the environment and then travel through it.
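Building a world model of this kind usually means maintaining an occupancy grid. The following is a minimal sketch of a log-odds occupancy-grid update for a single range-sensor beam; the grid representation, the function name, and the sensor-model constants are illustrative assumptions, not part of the assignment.

```python
def update_occupancy_grid(grid, robot_cell, hit_cell):
    """Update a log-odds occupancy grid along one sensor beam.

    grid: list of lists of floats (log-odds, 0.0 = unknown).
    robot_cell, hit_cell: (row, col) integer cell indices.
    The increments L_FREE and L_OCC are assumed sensor-model values.
    """
    L_FREE, L_OCC = -0.4, 0.85  # assumed log-odds increments
    r0, c0 = robot_cell
    r1, c1 = hit_cell
    n = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(n):  # cells the beam passed through are observed free
        r = r0 + round(i * (r1 - r0) / n)
        c = c0 + round(i * (c1 - c0) / n)
        grid[r][c] += L_FREE
    grid[r1][c1] += L_OCC  # the beam endpoint reflects: occupied
    return grid
```

In practice you would run an update like this for every beam of every laser scan, and threshold the log-odds values to classify cells as free, occupied, or unknown.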
Upon successful completion of this project, you will be able to:
1. program a mobile robot for waypoint navigation.
2. program a mobile robot for mapping its environment.
3. program a mobile robot for obstacle avoidance.
4. program a mobile robot for path planning in a (dynamic) world map.
The project requirements are as follows:
1. Robot must operate fully autonomously within the experimental environment setup.
2. Robot must effectively avoid obstacles and walls within its environment.
3. Robot must generate a local map of its environment.
4. Robot must generate a global map of its workspace.
5. Robot must be capable of planning an admissible trajectory to move within its workspace.
6. Robot must be capable of estimating its position and orientation in world coordinates.
7. Robot must be capable of navigating to a given waypoint. A waypoint is considered reached when any part
of the robot covers the waypoint marker. The waypoints will remain stationary within the robot's workspace and will
be marked by a circle with a diameter equal to the Turtlebot's diameter. Robot must reach at least 4 out of 5
waypoints.
8. Robot must return to the starting point (base) by planning a path that will cover all the open areas in the map
generated. The robot must update the world map while returning to the base. Note that the location of the obstacles
on the return trip may change and the robot must adjust the map and plan accordingly.
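The waypoint requirement above can be sketched as a simple proportional go-to-goal controller. This is only an illustration: the gains, the tolerance (roughly a Turtlebot radius, to approximate the "any part of the robot covers the marker" criterion), and the function name are all assumed values, not specified by the assignment.

```python
import math

def go_to_waypoint_cmd(pose, waypoint, k_lin=0.3, k_ang=1.0, tol=0.18):
    """Proportional go-to-goal controller sketch.

    pose: (x, y, theta) of the robot in world coordinates.
    waypoint: (x, y) target in the same frame.
    tol: assumed reach tolerance in meters (~robot radius).
    Returns (linear_velocity, angular_velocity, reached).
    """
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    dist = math.hypot(dx, dy)
    if dist <= tol:
        return 0.0, 0.0, True  # waypoint considered reached
    heading_err = math.atan2(dy, dx) - theta
    # wrap the heading error to [-pi, pi] so the robot turns the short way
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    return k_lin * dist, k_ang * heading_err, False
```

A controller like this would typically run in a loop, with the pose supplied by your localization (requirement 6) and the returned velocities sent to the robot's velocity-command interface.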
Figure 1 – An example world for autonomous robot navigation
A non-exhaustive list of suggestions for proposals
– Implement an algorithm from scratch for robot localization, path planning, etc. Examples:
o Path planning algorithms like A*, D*, RRT, trajopt (some not taught in the class)
o Collaborative mapping (by multiple robots)
o Visual odometry
o 3D mesh reconstruction
– Implement a working complex system. Example:
o A robot that receives an image of a location in an environment from a cell phone, then searches for and navigates to that location.
– (partial bonus) Using AWS RoboMaker for your projects
– (partial bonus) Using docker for development
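As one example of the "from scratch" option, A* on a 4-connected occupancy grid fits in a few dozen lines. The sketch below assumes a list-of-lists grid with 0 = free and 1 = obstacle; names and the grid encoding are illustrative choices, not course requirements.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected occupancy grid.

    grid: list of lists, 0 = free cell, 1 = obstacle.
    start, goal: (row, col) tuples.
    Returns the cell path from start to goal, or None if unreachable.
    Manhattan distance is an admissible heuristic for 4-connected moves.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]  # heap of (f-score, cell)
    came_from = {}
    g = {start: 0}  # best known cost from start to each cell
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]  # reconstruct by walking parents back to start
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

For the dynamic-map requirement of the standard project, a simple approach is to rerun a planner like this whenever the occupancy grid changes; incremental planners such as D* avoid the full replan.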
Issues on Robots
To report robot problems, please file issues at the following link:
The issue should be titled “[NAME OF THE ROBOT] A BRIEF INFORMATIVE TITLE”. Teams are
responsible for reporting and troubleshooting the issues (as this is part of a robotics career), but our support team may also
help.
Access to the physical robots is limited. The robots can be accessed through checkouts, i.e., reserving time on a calendar for when
you need the robot (usually in periods no longer than ~4 hours; the process will be announced later in detail). Access to the
physical robot might also differ based on the platform you are using (Turtlebot, ROSbot, HSR). So instead of trying
every piece of your code on the physical robot from the beginning, the following more convenient workflow is suggested:
create a working simulation environment, develop your code in simulation, then try the final code on the physical robot.
This is a workflow used by roboticists all over the world. Fortunately, this simulation environment already exists for all our
robot platforms, although you might need to add your own objects to the environment. Simulations in ROS are usually done
in Gazebo, a physics modeling environment that models dynamics, collisions, sensors, etc. Robots are modeled
through the Unified Robot Description Format (URDF), an XML description. To make robot models more modular,
there is also a macro language named XACRO. For an example of
XACRO and URDF files, see the “turtlebot_description” package:
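To illustrate the idea, here is a hedged, minimal XACRO snippet: a property is defined once and reused inside a URDF link, so a dimension can be changed in one place. The robot name, link name, and dimensions are made up for this example; see the package above for real models.

```xml
<?xml version="1.0"?>
<!-- Hypothetical minimal example: a xacro property reused by a link. -->
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="demo_bot">
  <xacro:property name="wheel_radius" value="0.035"/>
  <link name="wheel_left">
    <visual>
      <geometry>
        <!-- ${wheel_radius} is expanded by xacro into the URDF -->
        <cylinder radius="${wheel_radius}" length="0.02"/>
      </geometry>
    </visual>
  </link>
</robot>
```

Running the `xacro` tool on a file like this produces plain URDF, which Gazebo and other ROS tools consume.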
The modularity that ROS provides through “topics” makes transferring between simulation and reality very easy: only
the interface needs to be kept the same between the real robot and the simulation. For our current platforms, this uniform
interface between simulation and the real robot is already implemented.
ROS also provides a tool named RViz, which is very useful for robot visualization. This tool should not be confused
with Gazebo, as it does not provide any physics modeling whatsoever (kinematics, dynamics, collisions, etc.). It is JUST a
visualization tool, which can be used with either the simulation or the physical robot.
• Some basic useful instructions for turtlebots:
• ROS navigation:
• Modeling robots:
• Visualization tools:
• April tags: https://github.com/RIVeR-Lab/apriltag_ii_ros
• Visual odometry:
• Trajectory planning: