Description
1 Overview
The goal of this homework is to become familiar with robot localization using particle
filters, also known as Monte Carlo Localization. In particular, you will be implementing a global localization filter for a lost indoor mobile robot (global meaning
that you do not know the initial pose of the robot). Your lost robot is operating
in Wean Hall with nothing but odometry and a laser rangefinder. Fortunately, you
have a map of Wean Hall and a deep understanding of particle filtering to help it
localize.
As you saw in class, particle filters are non-parametric variants of the recursive
Bayes filter with a resampling step. The Prediction Step of the Bayes filter involves
sampling particles from a proposal distribution, while the Correction Step computes
importance weights for each particle as the ratio of target and proposal distributions.
The Resampling Step resamples particles with probabilities proportional to their
importance weights.
When applying particle filters for robot localization, each particle represents a
robot pose hypothesis which for a 2D localization case includes the (x, y) position
and orientation θ of the robot. The Prediction and Correction Steps are derived
from robot motion and sensor models respectively. This can be summarized as an
iterative process involving three major steps:
1. Prediction Step: Updating particle poses by sampling particles from the motion model, that is, x_t^[m] ~ p(x_t | u_t, x_{t-1}^[m]). The proposal distribution here is the motion model, p(x_t | u_t, x_{t-1}).
Algorithm 1 Particle Filter for Robot Localization
 1: X̄_t = X_t = ∅
 2: for m = 1 to M do
 3:     sample x_t^[m] ~ p(x_t | u_t, x_{t-1}^[m])        ▷ Motion model
 4:     w_t^[m] = p(z_t | x_t^[m])                        ▷ Sensor model
 5:     X̄_t = X̄_t + ⟨x_t^[m], w_t^[m]⟩
 6: end for
 7: for m = 1 to M do
 8:     draw i with probability ∝ w_t^[i]                 ▷ Resampling
 9:     add x_t^[i] to X_t
10: end for
11: return X_t
2. Correction Step: Computing an importance weight w_t^[m] for each particle as the ratio of target and proposal distributions. This reduces to computing weights using the sensor model, that is, w_t^[m] = p(z_t | x_t^[m], M).
3. Resampling Step: Resampling particles for the next time step with probabilities proportional to their importance weights.
Here, m is the particle index, t is the current time step, and M is the occupancy map. x_t^[m] and w_t^[m] are the robot pose and importance weight of particle m at time t.
2 Monte Carlo Localization
Monte Carlo Localization (MCL), a popular localization algorithm, is essentially the application of the particle filter to mobile robot localization. You can refer to Section
4.3 of [1] for details on the MCL algorithm. Algorithm 1, taken from [1], describes
the particle filter algorithm applied for robot localization.
As you can see, the MCL algorithm requires knowledge of the robot motion and
sensor models, and also of the resampling process to be used. We briefly describe these three components and point you to resources with more details and pseudocode.
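To make the structure of Algorithm 1 concrete, here is a minimal Python sketch of one iteration of the outer loop. The motion_model and sensor_model callables are placeholders for the components you will implement yourself, and the multinomial np.random.choice draw is only the simplest possible resampling step:

```python
import numpy as np

def particle_filter_step(particles, u_t, z_t, occ_map, motion_model, sensor_model):
    """One iteration of Algorithm 1. particles has shape (M, 3) for (x, y, theta);
    motion_model(u, x) and sensor_model(z, x, occ_map) are stand-ins for the
    models you will implement."""
    M = len(particles)
    # Prediction: sample each particle's new pose from the motion model
    predicted = np.array([motion_model(u_t, x) for x in particles])
    # Correction: importance weight from the sensor model
    weights = np.array([sensor_model(z_t, x, occ_map) for x in predicted])
    weights = weights / weights.sum()          # normalize to a distribution
    # Resampling: draw M indices with probability proportional to weight
    idx = np.random.choice(M, size=M, p=weights)
    return predicted[idx], np.full(M, 1.0 / M)
```

In practice you would swap the multinomial draw for low variance resampling, and you may choose to skip resampling on some time steps; the sketch only mirrors the pseudocode line by line.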
Motion Model
The motion model p(x_t | u_t, x_{t-1}) is needed in the prediction step to update particle poses from the previous time step using odometry readings. Chapter 5 of [1] details two types of motion models, the Odometry Motion Model and the Velocity Motion Model. You can use either model for sampling particles according to x_t^[m] ~ p(x_t | u_t, x_{t-1}^[m]). The Odometry Motion Model might be more straightforward to implement, since it uses odometry measurements directly as a basis for computing posteriors over robot poses.
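For concreteness, a sampling routine for the Odometry Motion Model could look like the sketch below, following the rotation-translation-rotation decomposition of Table 5.6 of [1]. The function name, interface, and any specific alpha values are illustrative assumptions, not part of the provided skeleton:

```python
import numpy as np

def sample_odometry_motion(x_prev, odom_prev, odom_curr, alphas):
    """Sample x_t ~ p(x_t | u_t, x_{t-1}) per the Odometry Motion Model
    (Table 5.6 of [1]). Poses are (x, y, theta); alphas = (a1, a2, a3, a4)
    are the noise parameters you will have to tune."""
    a1, a2, a3, a4 = alphas
    # Decompose the odometry reading into rotation-translation-rotation
    dx, dy = odom_curr[0] - odom_prev[0], odom_curr[1] - odom_prev[1]
    rot1 = np.arctan2(dy, dx) - odom_prev[2]
    trans = np.hypot(dx, dy)
    rot2 = odom_curr[2] - odom_prev[2] - rot1
    # Perturb each relative motion component with zero-mean Gaussian noise
    rot1_h = rot1 - np.random.normal(0, np.sqrt(a1 * rot1**2 + a2 * trans**2))
    trans_h = trans - np.random.normal(0, np.sqrt(a3 * trans**2 + a4 * (rot1**2 + rot2**2)))
    rot2_h = rot2 - np.random.normal(0, np.sqrt(a1 * rot2**2 + a2 * trans**2))
    # Apply the perturbed motion to the particle's previous pose
    x, y, th = x_prev
    return np.array([x + trans_h * np.cos(th + rot1_h),
                     y + trans_h * np.sin(th + rot1_h),
                     th + rot1_h + rot2_h])
```

With all alphas set to zero the sample is deterministic, which is a handy sanity check before tuning the noise.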
Sensor Model
The sensor model p(z_t | x_t, m) is needed in the correction step to compute importance weights (proportional to the observation likelihood) for each particle. Since the robot is equipped with a laser rangefinder, we'll be using a beam measurement model of range finders. Section 6.3 of [1] details this beam measurement model p(z_t | x_t, m) as a mixture of four probability densities, each modeling a different type of measurement error. You'll have to play around with the parameters of these densities based on the sensor data logs that you have. You are also free to go beyond a mixture of these four densities and use a measurement model that you think describes the observed laser scans better.
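As a rough sketch, the four-component mixture could be coded as follows. Every parameter value here (mixture weights, sigma_hit, lambda_short, z_max) is a placeholder guess that you will need to tune against the data logs, and the units must match those given in data/instruct.txt:

```python
import numpy as np

def beam_likelihood(z, z_star, z_max=1000.0,
                    w_hit=1.0, w_short=0.1, w_max=0.05, w_rand=0.1,
                    sigma_hit=100.0, lambda_short=0.01):
    """Unnormalized mixture p(z | z*) of the four beam-model densities
    (Section 6.3 of [1]): a Gaussian around the expected range z*, an
    exponential for unexpected short objects, a spike at max range, and
    a uniform term for random noise. All defaults are illustrative."""
    p_hit = np.exp(-0.5 * ((z - z_star) / sigma_hit) ** 2) if 0 <= z <= z_max else 0.0
    p_short = lambda_short * np.exp(-lambda_short * z) if 0 <= z <= z_star else 0.0
    p_max = 1.0 if z >= z_max else 0.0
    p_rand = 1.0 / z_max if 0 <= z < z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_max * p_max + w_rand * p_rand
```

Note that the p_hit term is deliberately left unnormalized here, in line with the hint below about skipping normalization of the mixture during debugging.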
Additionally, as part of this beam measurement model, you'll be performing ray-casting on the occupancy map so as to compute true range readings z_t^{k*} from individual particle positions (shifted to the laser position).
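A naive ray-casting routine along these lines can serve as a starting point; the step size, occupancy threshold, and cell-unit convention below are illustrative assumptions that you will have to adapt to the map's resolution and units:

```python
import numpy as np

def ray_cast(occ_map, x, y, theta, step=1.0, z_max=1000.0, occ_thresh=0.5):
    """Coarse ray-cast on the occupancy grid: step along the beam until a
    cell with occupancy above occ_thresh is hit, and return the distance
    traveled as the expected range z*. occ_map is indexed [row, col] =
    [y, x]; coordinates, step, and z_max are in cell units here."""
    d = 0.0
    while d < z_max:
        cx = int(x + d * np.cos(theta))
        cy = int(y + d * np.sin(theta))
        # Treat leaving the map as a max-range (no-hit) reading
        if not (0 <= cy < occ_map.shape[0] and 0 <= cx < occ_map.shape[1]):
            return z_max
        if occ_map[cy, cx] > occ_thresh:
            return d
        d += step
    return z_max
```

Since this loop runs once per beam per particle, it quickly becomes the bottleneck; the Improving Efficiency section suggests coarser sampling or precomputing a look-up table over discretized poses.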
Hint: The book specifies that the sensor model needs to be a normalized probability distribution; however, we have found in practice that it is easier to debug when the mixture weights (and thus the distribution) are left unnormalized, since the particle weights are normalized later anyway.
Note: Keep in mind that the sensor coordinate frame and robot coordinate
frame are not perfectly aligned. See data/instruct.txt for details on this as well
as units used in the provided data.
Resampling
As part of the resampling process, particles for the next time step are drawn based
on their weights in the current time step. A straightforward resampling procedure
would be sampling from a multinomial distribution constructed using importance
weights of all particles. However, repetitive resampling using such a technique may
cause the variance of the particle set (as an estimator of the true belief) to increase.
One strategy for reducing the variance in particle filtering is using a resampling
process known as low variance sampling. Another strategy is to reduce the frequency
at which resampling takes place. Refer to the Resampling subsection under Section
4.3.4 of [1] for more details on variance reduction and using low variance resampling
for particle filters.
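The low variance sampler (Table 4.4 of [1]) draws a single random number and then selects M equally spaced points along the cumulative weight axis, so a particle with weight w is picked roughly w*M times. A sketch:

```python
import numpy as np

def low_variance_resample(particles, weights):
    """Low variance (systematic) resampling: one random offset r in [0, 1/M)
    places M equally spaced pointers on the cumulative weight axis."""
    M = len(particles)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize
    c = np.cumsum(w)                             # cumulative weights
    r = np.random.uniform(0.0, 1.0 / M)          # single random draw
    positions = r + np.arange(M) / M             # r, r + 1/M, ..., r + (M-1)/M
    # For each pointer, pick the first particle whose cumulative weight exceeds it
    idx = np.minimum(np.searchsorted(c, positions, side='right'), M - 1)
    return particles[idx]
```

Compared to independent multinomial draws, this keeps the particle set's variance lower and costs only O(M).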
3 Implementation
Resources
You may use any programming language for implementation. There is no real-time requirement, although it is advisable to use something fast. Feel free to utilize
the techniques that we have discussed in class as well as extensions discussed in [1]
or elsewhere. You will be performing global localization for a lost indoor mobile
robot in Wean Hall given a map, odometry readings and laser scans. The data
directory that you received with this handout (courtesy of Mike Montemerlo) has
the following files:
• data/instruct.txt – Format description for the map and the data logs.
• data/log/robotdataN.log – Five data logs (odometry and laser data).
• data/map/wean.dat – Map of Wean Hall to use for localization.
• data/map/bee-map.c – Example map reader in C from BeeSoft that you may
use. Note we also provide a Python map reader.
• assets/wean.gif – Image of map (for reference).
• assets/robotmovie1.gif – Animation of data log 1 (for reference).
Note: The resources do not include any ground-truth information. This is something you will come across frequently in real-world robotics. However, robotmovie1.gif can give you a sense of the starting map location and trajectory of the robot in data log 1. You may find this useful in tuning your parameters even for the other logs.
We have also provided you with helper code (in Python) that reads in the occupancy map, parses the robot sensor logs, and implements the outer loop of the particle filter algorithm illustrated in Algorithm 1. The motion model, sensor model, and resampling implementations are left as an exercise for you.
• main.py – Parses sensor logs (robotdata1.log) and implements the outer loop of the particle filter algorithm shown in Algorithm 1. Relies on the SensorModel, MotionModel and Resampling classes for returning appropriate values.
• map_reader.py – Reads in the Wean Hall map (wean.dat) and computes and displays the corresponding occupancy grid map.
• motion_model.py, sensor_model.py, resampling.py – Provide class interfaces for expected input/output arguments. Implementations of the corresponding algorithms are left as an exercise for you.
You are free to use the helper code directly or purely for reference purposes. To utilize the framework, please start with a Python 3 environment. We highly recommend creating a virtual environment to be used for the rest of this class, e.g. using conda. A short tutorial for creating a virtual environment can be found here. Within your virtual environment (or your global environment) you can run pip install -r requirements.txt (located in the code directory) to install the basic dependencies.
Improving Efficiency
Although there is no real-time requirement, the faster your code, the more particles you will feasibly be able to use, and the faster your parameter-tuning iterations will be. You'll most probably have to apply some implementation 'hacks' to improve performance, for instance:
• Initializing particles in completely unoccupied areas instead of uniformly everywhere on the map.
• Subsampling the laser scans to, say, every 5 degrees instead of considering all 180 range measurements.
• When computing importance weights based on the sensor model, be cognizant of numerical stability issues that may arise when multiplying together the likelihood probabilities of all range measurements within a scan. You might want to numerically scale the weights or replace the multiplication of likelihoods with a summation of log likelihoods.
• Since motion model and sensor model computations are independent for all
particles, parallelizing your code would make it much faster.
• You’ll observe that operations like ray-casting are among the most computationally expensive. Think of approaches to make this faster, for instance using coarser discrete sampling along the ray or possibly even precomputing a look-up table for the ray casts.
• Lastly, if you use Python, apply vectorization as much as possible; if you’re comfortable with C++, consider using the OpenMP backend (which is a one-liner) to accelerate.
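As a concrete instance of the numerical-stability point above, per-particle weights accumulated as sums of log beam likelihoods can be normalized safely with the standard max-shift (log-sum-exp) trick:

```python
import numpy as np

def normalize_log_weights(log_w):
    """Turn per-particle sums of log beam likelihoods into normalized
    particle weights without floating-point underflow."""
    log_w = np.asarray(log_w, dtype=float)
    shifted = log_w - log_w.max()   # largest log weight becomes 0
    w = np.exp(shifted)             # at least one entry is exactly 1, so no all-zero underflow
    return w / w.sum()
```

Without the shift, exponentiating log weights on the order of -1000 (easily reached with 180 beams per scan) would underflow every weight to zero.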
Debugging
For easier debugging, ensure that you visualize and test individual modules like the motion model, sensor model, or resampling separately. Some ideas for doing that:
• Test your motion model separately by using a single particle and plotting its trajectory on the occupancy map. The odometry will cause the particle position to drift over time globally, but locally the motion should still make sense when compared with the given animation of data log 1 (robotmovie1.gif).
• Cross-check your sensor model mixture probability distribution by plotting the p(z_t | z_t^*) graph for some set of values of z_t^*.
• Test your ray-casting algorithm outputs by drawing robot position, laser scan
ranges and the ray casting outputs on the occupancy map.
4 What to turn in
You should generate a visualization (video) of your robot localizing on robotdata1.log and another log of your choice. Don't worry: your implementation need not work every time, but it should succeed most of the time for a reasonable number of particles. Hyperlinks to the videos must be in the report; we prefer unlisted YouTube videos or Google Drive links. Please ensure proper viewing permissions are enabled before sharing the links. Please speed up the videos so that each log is under 2 minutes, and mention the speed multiplier in the video or report. The report must describe your approach, implementation, performance, robustness, repeatability, and results. Make sure you describe your motion and
sensor models, your resampling procedure, as well as the parameters you had to tune
(and their values). Include some future work/improvement ideas in your report as
well. Turn in your report and code on Gradescope by the due date. Do not upload the data/ folder or any other data. Only one group member needs to submit,
and should list all group members on the title page as well as via Gradescope (see
instructions here).
Score breakup
• (10 points) Motion Model: implementation correctness, description
• (20 points) Sensor Model: implementation correctness, description
• (10 points) Resampling Process: implementation correctness, description
• (10 points) Discussion of parameter tuning: for Motion and Sensor Models
• (30 points) Performance: accuracy and convergence speed on example logs
• (20 points) Write-up quality, video quality, readability, description of performance, and future work
• (Extra Credit: 10 points) Kidnapped robot problem
• (Extra Credit: 10 points) Adaptive number of particles
• (Extra Credit: 5 points) Vectorized Python or fast C++ implementation
5 Extra credit
Focus on getting your particle filter to work well before attacking the extra credit.
Points will be given for an implementation of the kidnapped robot problem and
adaptive number of particles. Please answer the corresponding questions below in
your write up.
i. Kidnapped robot problem: The kidnapped robot problem commonly refers
to a situation where an autonomous robot in operation is carried to an arbitrary
location. You can simulate such a situation by either fusing two of the log files or
removing a chunk of readings from one log. How would your localization algorithm deal with the uncertainty created in a kidnapped robot scenario?
ii. Adaptive number of particles: Can you think of a more efficient method based on reducing the number of particles over time steps? Describe the metric you use for choosing the number of particles at any time step.
You will also receive bonus credit provided your implementation is optimized, either with vectorization in Python or acceleration in C++.
6 Advice
The performance of your algorithm is dependent on (i) parameter tuning and (ii)
number of particles. While increasing the number of particles gives you better
performance, it also leads to increased computational time. An ideal implementation
has a reasonable number of particles while also not being terribly slow. Consider these factors when deciding on your language of choice, e.g. choosing between a faster implementation in C++ or vectorized Python vs. using the raw Python skeleton code.
References
[1] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT
press, 2005.


