
Deploying on Mars: Rock solid Odometry for Wheeled Robots

This is the third blog in the build series of my NASA-JPL Open Source Rover. It walks through setting up wheel odometry and what that means in ROS, amongst other things. You can find part 1 and part 2 of my series by following those links.

Setting up a navigation stack on a mobile robot can be a nightmare. I’ve had to do it on various robots, spending countless hours figuring out why my robot got stuck, smashed into a wall, or lost localization. There are hundreds of tuning parameters and settings, and multiple sensors to fuse. Most of these problems, it turns out, are due to bad odometry. With all the exciting developments going on, odometry certainly isn’t the hottest topic in robotics, but it’s not a topic to gloss over. While setting up and tuning odometry on my Mars Rover, I realized I could share some tips and tricks I’ve learned over the years on how to get rock-solid odometry on any wheeled robot.


Tracks from the Opportunity rover. Extremely accurate odometry is the difference between barely being able to drive in a straight line and being able to autonomously perform complex maneuvers safely and accurately, like NASA’s Opportunity rover.

Odometry can be thought of as estimation of relative location based on speed measurements. Imagine biking with your eyes closed and using how fast you pedal to estimate how far you’ve traveled (this is called dead reckoning). You’ll probably be off, and the farther you bike and the faster you go, the more off you will be on your estimate. A lot of robot sensors lend themselves very well to providing an estimate of velocity, which is used to localize the robot in the world, or build a map of the environment (SLAM).

 

Earth, world, map, odom, odom_combined, base_link, base_footprint

In REP 105, ROS defines a standard for the transformation frames that describe where a mobile robot is located in the world, using the default transformation library, tf2. The description there can be a bit confusing, but it starts making sense once you know what all the frames are for. The two most straightforward frames are the map frame and the base_link frame. base_link represents a fixed pose on the robot, typically its center. I always put it at the height of the wheel axles, but it can be the top of your base platform as well. The map frame is a fixed frame in the world, typically placed at the origin. The robot’s pose, base_link, is ultimately expressed with respect to the map frame.

The odom frame, on the other hand, doesn’t correspond to any real point in the world but rather floats around, connecting map to base_link. The transformation odom -> base_link represents where the robot thinks it is based on the odometry source alone. Of course, odometry drifts over time, and so a second sensor, typically a lidar, is used to correct the estimate (the update step in a Kalman or particle filter). This hopefully small correction is applied as a transformation between map and odom, so that the full estimate map -> base_link is accurate.

So that means:

  • When odometry tracks the actual position of the robot perfectly, which doesn’t happen in reality, map and odom will coincide because no corrections are needed.
  • When odometry is bad, you will see the odom frame continuously jumping around and drifting away from the map frame.

Unlike the absolute estimate of the robot’s location (map -> base_link), the odom -> base_link estimate doesn’t jump and is continuous, which turns out to be quite useful. Imagine having to write a controller for navigating the robot between waypoints on slippery terrain. Wheel odometry will be bad, and the robot’s absolute position estimate will continually jump around as the update step corrects it using the laser scans. If the robot is moving fast, these jumps can be large, and your (PID) controller might produce values all over the place, causing sharp moves, making localization even worse, or even bumping into an obstacle. When you switch to controlling in the odom frame, your controller output will be much smoother, and your robot will converge to the waypoint’s location much faster.
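To make the frame roles concrete, here is a minimal sketch of the node that would own the odom -> base_link transform, assuming ROS 1 with rospy (the x, y, yaw inputs are placeholders for your integrated wheel odometry):

```python
import rospy
import tf2_ros
import tf_conversions
from geometry_msgs.msg import TransformStamped

def broadcast_odom_tf(broadcaster, x, y, yaw):
    """Broadcast odom -> base_link from an integrated wheel odometry pose."""
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "odom"      # parent: the continuous, drifting frame
    t.child_frame_id = "base_link"  # child: the fixed pose on the robot
    t.transform.translation.x = x
    t.transform.translation.y = y
    q = tf_conversions.transformations.quaternion_from_euler(0.0, 0.0, yaw)
    t.transform.rotation.x, t.transform.rotation.y = q[0], q[1]
    t.transform.rotation.z, t.transform.rotation.w = q[2], q[3]
    broadcaster.sendTransform(t)

if __name__ == "__main__":
    rospy.init_node("odom_tf_broadcaster")
    br = tf2_ros.TransformBroadcaster()
    rate = rospy.Rate(20)
    while not rospy.is_shutdown():
        # In a real node, x, y and yaw come from integrating wheel velocities.
        broadcast_odom_tf(br, 0.0, 0.0, 0.0)
        rate.sleep()
```

The localization or SLAM node, not this one, is then responsible for publishing the map -> odom correction.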

Another frame you might see on some robots is the odom_combined frame. As the name suggests, it smoothly combines odometry from multiple sources, for example using a complementary filter. It’s a great way to assign less weight to a particular sensor when it fails to produce good odometry, for example when you are using visual odometry and the camera is looking at a featureless wall. base_footprint is simply the projection of base_link onto the floor, and earth allows representing multiple maps, which is useful if you’re transitioning between maps, or when your (jet-powered) robot moves so fast you have to start incorporating the curvature of the earth.

With that terminology out of the way, let’s jump into how you can make your wheel odometry better!

 

Measuring angular velocity: often overlooked, actually fun

There are several ways to improve wheel odometry, but the one to address first arises from a simple problem: the reported wheel angular velocity doesn’t match reality, and the error falsely gets classified as slip or drift.

Often when using encoders, there is some conversion that needs to be done to get from encoder ticks to an angular velocity in rad/s. The documentation isn’t very clear, and you’re not sure if you need to divide your values by 4 because you’re using a quadrature encoder. Since you’re using the same calculation to convert commanded speeds to encoder ticks, the velocity readings match the commands nicely, but is that the actual speed your wheels are turning at?
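Writing the conversion out explicitly helps catch these mistakes. A minimal sketch; the tick count, quadrature multiplier, and gear ratio are hypothetical values you’d replace from your encoder’s datasheet:

```python
import math

TICKS_PER_MOTOR_REV = 12   # hypothetical: encoder pulses per motor revolution
QUADRATURE_EDGES = 4       # a quadrature decoder counts 4 edges per pulse
GEAR_RATIO = 30.0          # hypothetical: motor revolutions per wheel revolution

TICKS_PER_WHEEL_REV = TICKS_PER_MOTOR_REV * QUADRATURE_EDGES * GEAR_RATIO

def ticks_to_rad_per_sec(delta_ticks, dt):
    """Wheel angular velocity (rad/s) from a tick delta over dt seconds."""
    wheel_revs = delta_ticks / TICKS_PER_WHEEL_REV
    return wheel_revs * 2.0 * math.pi / dt
```

Deriving commanded tick rates from the same constants guarantees that commands and readings agree with each other, but not with reality, which is exactly why the physical checks below matter.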

Luckily, there are a few simple ways to check that.

 

A Tachometer

For about $20, you can get this wonderful velocity-measuring device, which uses a laser to measure how fast something is spinning.


Using a tachometer allows you to cheaply and accurately measure rpm and tune your models quickly

I recommend adding multiple reflective strips to your wheel and dividing the result by the number of strips you added. You’ll get more accurate values at low speeds, and fewer spikes when the velocity changes drastically.

 


Using the Stream tab in Freedom Robotics’ web app, I copied the last drive command I sent to the wheels

 


I then pasted the command and modified it to set one wheel spinning at 15.71 rad/s, or 150 rpm

 

I sent an angular velocity command to one of the wheels through Freedom’s command interface (you can sign up for a free Freedom Robotics account here) and measured the actual velocity of the wheel. I know my drive motors can spin at a maximum speed of 200 rpm, and so I picked 150 rpm, or 15.71 rad/s, as my command. When measuring the actual velocity using the tachometer, I was happy to see it wasn’t far off, running slightly faster than commanded at ~153 rpm. If I had added more reflective markers, I would have divided the result by the total number of markers to get the same result.
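For reference, the rpm to rad/s conversion used here:

$$\omega\,[\text{rad/s}] = \text{rpm} \cdot \frac{2\pi}{60}, \qquad 150\,\text{rpm} \cdot \frac{2\pi}{60} \approx 15.71\,\text{rad/s}$$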

 

A Strobe App On Your Phone

A free, very cool, but less convenient way to measure the angular velocity of your wheels is to make use of the wagon-wheel effect, also referred to as aliasing in computer vision. When driving on the highway, the spokes of a nearby car’s wheels sometimes seem to stand still or even move backwards. That happens because our brain (or a camera) can only register one image at a time, at a certain frequency, and the wheels spin so fast that we don’t register the motion that happens between successive images. As it turns out, if we can control that frequency, we can directly infer the speed, within some range.

Download a strobe app on your phone that lets you finely (and accurately!) control the frequency of the strobe, turn off the lights, and set your wheels to run at a fixed speed. Now turn on your strobe light at the highest possible frequency. As you turn down the frequency, you should notice a point at which the spokes or marker on your wheel start slowing down and almost come to a standstill. Simply multiply that frequency by 2π to get the angular velocity in rad/s, or by 60 to get the velocity in rpm. Make sure that you’re not skipping an entire revolution between flashes (hence starting at a high frequency), or freezing the wheel at a fraction of its actual speed if you’re using the spokes. The latter should be easy to spot, as small changes in frequency should throw off the slowed-down motion drastically.


The pattern stopped spinning at a frequency of around 2.5 Hz, and was actually pretty sensitive to small changes. Multiply that value by 60 and you get a perfect 150 rpm. Impressive.

 

Now try the same experiment while adding friction to the setup. The wheels should slow down briefly but quickly recover to run at the same speed. If the response is too slow, or the wheels settle at a slower speed than commanded, consider making the PID settings more aggressive. If the odometry is correct but the wheels don’t spin at exactly the commanded speed, you’ll probably be fine; if it’s off by a lot, though, you’ll waste a lot of time later debugging your navigation stack.


If you notice a substantial discrepancy between the tachometer or strobe readings and the velocities your odometry reports, you’re likely looking at a bug. Find and fix the cause before you continue.

 

A Measured Track

If you have an odometry topic and are using the nav_msgs/Odometry message (which I strongly recommend), you can also measure out a fixed distance, drive the robot down that length, and at the end compare the measured distance with where the robot thinks it is. You might be surprised how hard this is for your robot.

Test several distances: first a short one, where your robot goes from zero to max speed and back to zero in a short time, then several longer ones, the longer the better. There are a few possible outcomes:

  1. The robot does reasonably well on the short distance, but is way off on the longer distances. This means that there’s gradual error accumulation happening along the way.
  2. It actually performs fairly well on the longer distances, but is way off on the short distance. This likely means that acceleration incurs significant slip which isn’t represented well in your odometry model.
  3. It doesn’t perform well in either case. Check that your wheels are spinning at the speed you think they are using the tachometer/strobe methods above.

Perform this experiment on the terrain(s) your robot will find itself navigating, as results may differ drastically.
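Here’s a minimal sketch of a helper for this check, assuming ROS 1 and an /odom topic publishing nav_msgs/Odometry; it simply diffs the first and last reported positions:

```python
import math
import rospy
from nav_msgs.msg import Odometry

start = None
latest = None

def odom_cb(msg):
    """Remember the first and the most recent reported positions."""
    global start, latest
    if start is None:
        start = msg.pose.pose.position
    latest = msg.pose.pose.position

def report():
    """On shutdown (Ctrl-C), print odometry's estimate of the distance driven."""
    if start is not None and latest is not None:
        d = math.hypot(latest.x - start.x, latest.y - start.y)
        print("Odometry thinks the robot traveled %.3f m" % d)

if __name__ == "__main__":
    rospy.init_node("track_check")
    rospy.on_shutdown(report)
    rospy.Subscriber("/odom", Odometry, odom_cb)
    rospy.spin()
```

Start it, drive the robot down your measured track, hit Ctrl-C, and compare the printed value to your tape measure.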

 

Overlapping Lidar Scans

Another quick check that I like to use when I have a lidar is to drive the robot towards a wall and check that the wall shows up in the same spot in successive lidar readings as the robot moves towards it. This fast method from the ROS nav stack tuning guide requires you to set up a static tf transform between base_link and the lidar frame, but as long as you’re moving forward, you don’t need to know the exact location of the lidar on the robot (a throwaway broadcaster is sketched after the list below). Set the decay time on the lidar scans in rviz so that you can see multiple successive scans at the same time. Based on how the lidar scans move, you can analyze how well your odometry performs:

  • As you drive forward, the line representing the wall moves forward as well, away from you. That means odometry is overestimating the actual movement, which typically happens because of slip. A compensating slip factor based on acceleration is a great quick hack for this.
  • As you drive forward, the line moves closer to the robot. That means odometry is underestimating the actual movement. We’ll take a look at possible fixes in the next section, but one thing to try is to slightly increase the wheel diameter in code.

You can use successive lidar readings as a rough check of odometry. If the lidar views are staying in one place nicely while the robot moves, that is a good indicator that your odometry is working well.
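If you don’t have the base_link -> lidar transform set up yet, a throwaway static broadcaster is enough for this check. A minimal sketch, assuming ROS 1; the "laser" frame name and the offsets are placeholders for your setup:

```python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == "__main__":
    rospy.init_node("lidar_static_tf")
    broadcaster = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_link"
    t.child_frame_id = "laser"       # placeholder: your lidar's frame name
    t.transform.translation.x = 0.1  # rough guesses are fine for this check,
    t.transform.translation.z = 0.2  # as long as the robot drives forward
    t.transform.rotation.w = 1.0     # identity rotation
    broadcaster.sendTransform(t)
    rospy.spin()
```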

 

Now that your robot’s linear motion is tuned, let’s analyze its turns. Turning is harder to analyze and depends strongly on the platform used. A four wheel skid steer will behave very differently from an Ackermann steering robot, and as before, I recommend investing time in making sure your math behaves the way you expect it to. On the Mars Rover and other Ackermann steering platforms, like any car, remember that the faster you drive, the straighter your wheels should be to maintain the same angular velocity, so don’t treat angular velocity like a steering angle.

If you have a differential drive, omni-directional base, or skid-steer, take your robot to a large open space and program it to go to four waypoints forming a square trajectory, a couple of meters on each side. Use the odometry frame as the reference, so make sure any localization or SLAM node is turned off. The robot likely won’t end up exactly where it started. Based on the shape the robot tracked, you can infer whether your robot is under- or overestimating its turns; a sketch of such a test follows below. You can try a similar experiment for Ackermann steering vehicles by having the robot track a large circle instead.
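A minimal sketch of the square test, assuming ROS 1, an /odom topic, and a /cmd_vel interface (speeds and tolerances are hypothetical). It closes the loop on odometry, so the physical offset between where the robot ends up and where it started is a direct measure of accumulated odometry error:

```python
import math
import rospy
import tf_conversions
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

pose = None  # (x, y, yaw) from the latest /odom message

def odom_cb(msg):
    global pose
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    yaw = tf_conversions.transformations.euler_from_quaternion(
        [q.x, q.y, q.z, q.w])[2]
    pose = (p.x, p.y, yaw)

def drive_until(pub, cmd, done):
    """Publish cmd at 20 Hz until the done() predicate turns true."""
    rate = rospy.Rate(20)
    while not rospy.is_shutdown() and not done():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    rospy.init_node("square_test")
    rospy.Subscriber("/odom", Odometry, odom_cb)
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    while pose is None and not rospy.is_shutdown():
        rospy.sleep(0.1)  # wait for the first odometry message
    forward, turn = Twist(), Twist()
    forward.linear.x = 0.2  # m/s
    turn.angular.z = 0.4    # rad/s
    for _ in range(4):
        x0, y0, _ = pose    # drive a 2 m edge, as measured by odometry
        drive_until(pub, forward,
                    lambda: math.hypot(pose[0] - x0, pose[1] - y0) >= 2.0)
        _, _, yaw0 = pose   # then turn 90 degrees, as measured by odometry
        drive_until(pub, turn,
                    lambda: abs((pose[2] - yaw0 - math.pi / 2 + math.pi)
                                % (2 * math.pi) - math.pi) < 0.05)
    pub.publish(Twist())  # stop; now measure the real offset from the start
```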

 

Motion Tracking

If you have access to one, the most accurate way to verify your odometry is a motion capture (MoCap) setup. This approach, often found in research labs, uses multiple very precisely calibrated cameras and a few visual markers on the robot to track it over time in a small space. Unlike the previous methods, it makes it possible to measure the robot’s angular velocity precisely.


Motion capture systems, often used to track UAVs, use an array of calibrated cameras and markers to track an object accurately.

An alternative to MoCap is to implement and tune SLAM using a different set of sensors on the robot, and use the twist calculated at each update step as ground truth. Needless to say, this only works if the SLAM stack performs well and produces accurate twist estimates.

 

Tuning the Odometry Model

Now that you have a sense of the quality of your odometry, let’s look at some methods for improving it.

 

Tweaking Physical Parameters

When your angular velocity is as you’d expect but the linear velocity of your robot doesn’t match, try slightly adjusting the wheel diameter in your calculations to see if that helps. It’s a quick and dirty way to adjust for any slip. To make that clear in code, I suggest adding this factor as a tuning parameter applied to the expected wheel diameter, as sketched below.
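A minimal sketch of that parameter; the diameter and the 0.98 starting value are hypothetical numbers you’d tune against the track tests above:

```python
NOMINAL_WHEEL_DIAMETER = 0.12     # meters, from the spec sheet (hypothetical)
WHEEL_DIAMETER_CORRECTION = 0.98  # tune: >1 if odometry underestimates distance,
                                  #       <1 if it overestimates

def wheel_surface_speed(omega):
    """Linear speed (m/s) of the wheel contact point from its angular
    velocity (rad/s), using the corrected effective diameter."""
    effective_diameter = NOMINAL_WHEEL_DIAMETER * WHEEL_DIAMETER_CORRECTION
    return omega * effective_diameter / 2.0
```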

Note that this affects both forward motion and turns, so always check that both are improved by the adjustment. Also keep in mind that the wheel diameter can actually change depending on the load the robot is carrying, especially with air-filled tires, so don’t blindly copy the wheel diameter from the spec sheet.

If your odometry is great when the robot is moving straight but performs worse when turning, try adjusting the length of the wheel base. Wheels that are farther apart result in less sharp turns. Try making this dimension in your model very large or very small, and see how it affects odometry.


Linear velocity IO: top shows the commanded input velocities in blue along with the linear odometry. Bottom shows the angular velocities reported by each of the six wheels. You can try this out by signing up for a free trial on Freedom Robotics.

 

Including Dynamics

This section is fun, but not necessarily worth the effort. Instead, I fuse wheel odometry with a source that is less affected by slip and friction, such as visual-inertial odometry. You can also accept friction as a hard-to-predict fact of life and account for it in your covariance estimates, as explained in the next section.

There are two main ways to include dynamics: one is model-based, where you explicitly model out the physics of your robot and environment, and estimate motion based on your dynamics model, and the other is data-driven, where you learn a representation that minimizes error over time. The latter is an active research topic.

When your robot doesn’t incur significant drift while moving at constant speed but shows large discrepancies while accelerating, a simpler yet effective trick is to use a slip factor in your odometry that depends on the acceleration. For this, you keep a running estimate of how fast your robot is accelerating (the velocity difference divided by the time step), and reduce velocity updates by a factor proportional to the acceleration, as sketched below. As always, more complexity potentially means more time spent debugging...
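A minimal sketch of that trick; the gain is a hypothetical starting point you’d tune against the track tests above:

```python
SLIP_GAIN = 0.05  # hypothetical: velocity fraction discounted per m/s^2

class SlipCompensator:
    """Scale down velocity updates while the robot is accelerating hard."""

    def __init__(self):
        self.prev_v = 0.0

    def compensate(self, v, dt):
        """Return the velocity to integrate into odometry for this step."""
        accel = (v - self.prev_v) / dt  # running acceleration estimate
        self.prev_v = v
        # Discount the measured velocity in proportion to |acceleration|,
        # clamped so the factor never goes negative.
        factor = max(0.0, 1.0 - SLIP_GAIN * abs(accel))
        return v * factor
```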

 

Other Approaches

Another potential culprit is that your odometry updates aren’t happening fast enough: odometry can only be as good as its update rate. Aim for at least 10 Hz updates, and note that you don’t have to publish at that frequency, just update the internal model. The faster the robot moves, the higher the frequency the update loop should run at. It can be a good idea to offload this computation onto a separate compute platform like a Raspberry Pi or Arduino. Luckily, the ROS community realized that roboticists were starting from scratch for each new robot even though most mobile bases fall into one of a few categories, and built ros_control with ready-made controllers such as diff_drive_controller and ackermann_steering_controller. You do have to create a hardware interface if no one has done that for your robot already, which can be confusing, but then all the work of converting commands and doing odometry updates has been done for you. Even if you are using a special type of robot, it’s possible to extend the existing controllers to match your kinematic model.

If the odometry parameters of your robot change over time, or vary strongly from one robot to another, they can also be estimated on the fly by propagating errors back to the odometry model. This approach (Simultaneous Parameter Calibration, Localization, and Mapping) unfortunately isn’t yet available in the form of a robust open source package.

 

What do I set covariance to?

Think of covariance as the confidence the odometry has in itself. In a nav_msgs/Odometry message, it’s two 6 by 6 matrices, one for the pose estimate and one for the twist. The confidence in the twist is relatively constant over time (and often just set to constant values), but the confidence in the pose decreases over time, and so its covariance grows larger and larger. Certain algorithms need a good estimate of covariance to work well. If you’re familiar with Kalman filters: the covariances are used to weigh your motion model’s prediction against the measurement update, for example from your lidar.

You can think of covariance as how much error is induced by a motion, but split up by the dimension of the motion. The elements of the matrix relate one dimension of movement to another: for example when you command your robot to spin around its axis, it’ll likely start deviating from that center point fairly soon. How much your robot deviates will inform the entry in your covariance matrix that describes how much angular velocity affects linear velocity. To be exact: the element in row x, column y, or cov(x, y) describes the joint variability between x and y, an expression of how linearly related these two random variables are.
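Formally, for two random variables x and y with means μ_x and μ_y:

$$\mathrm{cov}(x, y) = \mathbb{E}\big[(x - \mu_x)(y - \mu_y)\big], \qquad \mathrm{cov}(x, x) = \mathrm{var}(x) = \sigma_x^2$$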

$$\Sigma_{\mathrm{twist}} = \begin{bmatrix} \mathrm{var}(v_x) & \mathrm{cov}(v_x, v_y) & \cdots & \mathrm{cov}(v_x, \omega_z) \\ \mathrm{cov}(v_y, v_x) & \mathrm{var}(v_y) & \cdots & \mathrm{cov}(v_y, \omega_z) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}(\omega_z, v_x) & \mathrm{cov}(\omega_z, v_y) & \cdots & \mathrm{var}(\omega_z) \end{bmatrix}$$

General form of a twist covariance, over (vx, vy, vz, ωx, ωy, ωz). The pose covariance, if required, has the same form, except that its entries grow over time. Note that a covariance matrix is always symmetric, and that on the diagonal cov(x, x) = var(x).

 

Start with a simple model, however: assume that a change in forward velocity only affects the error on forward velocity, not on angular velocity, and vice versa. For non-holonomic bases, that means your covariance matrix will have only two non-zero elements, both on the diagonal. This works well in most cases. These diagonal elements, the covariances of variables with themselves, are also known as variances. They have units of (m/s)^2 or (rad/s)^2, which aren’t very intuitive, but taking the square root gives the standard deviation, and that might ring a bell. The standard deviation has physical meaning and describes the spread of your data (assuming a normal distribution). Remember that one standard deviation encompasses about 68% of your data, so if we have a feeling for that spread, we can fill in our matrix by simply squaring that value!

Before you determine your covariance, make sure that the algorithm you plan on using for localization actually uses the covariance information. The most commonly used 2D localization package in ROS, the particle filter amcl, won’t use it, but the Kalman-based filters robot_pose_ekf and robot_localization do.

Generally algorithms don’t look at the covariance associated with your pose estimate, and only take the twist and its covariance into account. So let’s look at my Mars Rover’s twist covariance:

$$\Sigma_{\mathrm{twist}} = \begin{bmatrix} 0.0225 & 0 & 0 & 0 & 0 & 0.01 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0.01 & 0 & 0 & 0 & 0 & 0.04 \end{bmatrix}$$

Covariance on the Mars Rover, with the values derived below. Most values are zero because no motion exists along those axes.

 

When I send my robot a linear velocity, I expect the actual velocity the robot moves at to be within 15% of that value in about 68% of cases. So at a commanded velocity of 1 m/s, most of the actual velocities range from 0.85 m/s to 1.15 m/s. This is a rough estimate that I might refine later, and I’ll set it to a larger value when the robot is on slippery terrain. So one standard deviation equals 0.15 m/s and my covariance is (0.15)^2 = 0.0225. Since my Mars Rover can’t rotate in place, I can’t easily measure rotation independently, but rotation depends on all the wheels being aligned perfectly, so I have a bit less faith in it and will pick 20% for one standard deviation, or cov(ωz, ωz) = (0.20)^2 = 0.04.

The off-diagonal values indicate how much pure rotation around the vertical axis (ωz) is related to forward motion (vx). We can’t use the standard deviation trick anymore, but we can use the intuition we just built up to estimate these values. Since the Mars Rover uses Ackermann steering, these values are much smaller than for a robot that uses skid steering, which relies on slip to turn. The covariance matrix is always symmetric, so I set both values to 0.01 (rad·m/s^2).
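In code, nav_msgs/Odometry stores each covariance as a flat, row-major 36-element array over (vx, vy, vz, ωx, ωy, ωz). A minimal sketch filling in the values above:

```python
from nav_msgs.msg import Odometry

odom = Odometry()
odom.header.frame_id = "odom"
odom.child_frame_id = "base_link"

# Row-major 6x6 over (vx, vy, vz, wx, wy, wz), starting from all zeros.
cov = [0.0] * 36
cov[0] = 0.15 ** 2       # var(vx): one std dev = 15% of a 1 m/s command
cov[35] = 0.20 ** 2      # var(wz): one std dev = 20%
cov[5] = cov[30] = 0.01  # cov(vx, wz) and cov(wz, vx): keep it symmetric
odom.twist.covariance = cov
```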

If you have a probabilistic motion model, you can extract covariances from there. You can also calculate covariance from data recorded from driving your robot around, although that is a more involved process.

Lastly, if you’re not sure, either overestimate the covariance or set it to a large value. When you decide to fuse multiple odometry sources, you can also set the covariance high for any dimensions that you know are prone to large errors or drift. Remember that an algorithm that sees a low covariance will trust that measurement to be accurate, and misplaced trust is hard to recover from.

 

Conclusion

Setting up odometry and verifying it isn’t as straightforward as it may initially seem, and the process can take several days of effort to get right. But the effort is bounded and will save you a lot of time and hassle later on, when you’re setting up mapping and localization. Covariance in odometry is often glossed over and set to arbitrary values. Understanding what it means helps you pick the right values, which can greatly improve localization.

I’m always excited to hear any feedback or help answer any questions: achille@freedomrobotics.ai. I also highly encourage taking advantage of the free trial on Freedom Robotics to greatly speed up improvements on your robot’s odometry.
