
Deploying on Mars: How to Pick Sensors to Enable Navigation for Your Autonomous Mobile Robot (AMR)

This is the second blog in the build series of my NASA-JPL Open Source Rover. For part 1 please click here. This post walks through hard-learned lessons on how to simplify your sensor stack so it works in the real world and enables accurate odometry, mapping, navigation and teleoperation. By making clear tradeoffs, you can significantly cut your cost, code complexity and failure edge cases in production, and increase the accuracy of your positioning estimates by 10x or more.

Robotics startups often ask us for sensor recommendations, and it turns out there are a few very common setups among our customers’ fleets that work very well across different applications, while others fail for cost, accuracy or reliability reasons.

It’s surprising how little information is available on which sensors are most commonly used. A Google search will tell you about tactile sensors, temperature sensors, and photovoltaic sensors, but those are definitely not my first pick when envisioning what my robot will use to sense the world while traveling at 2 meters per second. I recently went through this exercise when picking sensors for my Mars Rover, and for the most part, these are the same sensors you can consider for your wheeled autonomous mobile robot.

 

Sensors… the Fewer, the Better

It’s tempting, I know. You find out odometry is bad on your robot and immediately you (like all robotics engineers) get excited about integrating a new sensor: “An IMU would compensate for errors due to wheel slip” and “a monocular camera with optical flow would be way more accurate”.

 

NASA-JPL Open Source Mars Rover

Sensor stack on my Mars Rover: A Logitech webcam allows for taking over remotely, person following, and visual odometry. The SICK TIM561 provides great outdoor mapping while a small MPU6050 in the Rover’s body improves odometry on rough terrain.

It’s true that there is a lot of value in having complementary sensors. On slippery terrain, wheel odometry becomes almost useless; cameras are sensitive to changes in illumination; IMUs drift; but they all fail in different scenarios, so they can cover for each other. It was easy for me to get excited about adding multiple cameras to the Mars Rover, but then I spent three hours trying to figure out why they couldn’t be launched at the same time, and I also had to keep recalibrating their alignment. More often than not, adding more sensors is actually the wrong answer. Why?

More sensors are the wrong answer:

  1. Tune your odometry model instead - Chances are your existing odometry could be better. Good odometry from one source goes a long way. For many of the companies we have worked with, odometry is the culprit - odometry tuning is so often a skipped step - and in the next blog I walk through how you can supercharge your odometry by doing a few simple things right.
  2. More sensors don’t guarantee better localization - Don’t let that fool you - there’s no such thing as ‘slapping on another sensor and improving SLAM’. If your new sensor produces large outliers or drifts significantly and you haven’t tuned the fusion or filtering algorithm for it properly, you’ll end up with worse odometry. You’ll have to deal with extrinsic calibration, outlier removal, covariance tuning, and complex fusion algorithms. Often this integration adds more knobs to tune, for accuracy that is marginally improved or, more commonly, worse (see the sketch after this list).
  3. Sensor fusion and stability get exponentially more complex - More sensors mean more libraries to sift through to get them to work properly. They mean more cables and more udev rules. They consume more power from your already limited battery life, because you need to power the sensor and you’ll need more compute. Depending on the sensor, they also make your robot more expensive and harder to manufacture. Something everyone runs into is the limits of the USB bus, for both power and bandwidth, when using multiple cameras, lidar and other sensors at the same time, and those issues are often hard to debug.
  4. More data does not usually equal better autonomy - Ask yourself: If I add this extra sensor, will I be significantly closer to autonomy? Can I remotely intervene when an edge case happens using teleoperation (Freedom Pilot) instead? Many sensors give more data, but they are not necessary to get your robot to perform autonomous tasks.
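To make point 2 concrete, here is a minimal sketch (not from the original build; every name and number is illustrative) of fusing a gyro yaw rate with a wheel-odometry yaw rate using a simple complementary filter. Even this toy fusion has three knobs that must be tuned per robot - the blend factor, the gyro bias, and the outlier gate - and getting any one of them wrong makes the fused heading worse than wheel odometry alone. A full EKF such as robot_localization multiplies that tuning burden by a covariance matrix per input.

```python
# Toy complementary filter for heading: blends a gyro yaw rate with a
# wheel-odometry yaw rate. Illustrative only - every constant below is a
# made-up default that has to be tuned for a real robot.
import math

class HeadingFilter:
    def __init__(self, alpha=0.98, gyro_bias=0.0, outlier_gate=1.0):
        self.alpha = alpha                # trust in the gyro (0..1)
        self.gyro_bias = gyro_bias        # rad/s, estimate while standing still
        self.outlier_gate = outlier_gate  # rad/s, reject wild wheel-odometry rates
        self.heading = 0.0                # rad

    def update(self, gyro_yaw_rate, wheel_yaw_rate, dt):
        gyro_rate = gyro_yaw_rate - self.gyro_bias
        # If the wheel estimate disagrees wildly (e.g. wheel slip), ignore it.
        if abs(wheel_yaw_rate - gyro_rate) > self.outlier_gate:
            fused_rate = gyro_rate
        else:
            fused_rate = self.alpha * gyro_rate + (1.0 - self.alpha) * wheel_yaw_rate
        self.heading = (self.heading + fused_rate * dt) % (2.0 * math.pi)
        return self.heading
```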

 

A Hard-Earned Go-To Sensor Stack

Before you pick a sensor, decide on what it will do for you. The winners are sensors that fulfill multiple functions so you get the most out of them.

For example, a depth sensor used for object pose estimation can double as your teleop camera and also detect obstacles during autonomous navigation. Aside from price and functionality, look for sensors with strong, active community support, preferably with open-source libraries. Strong support typically means good documentation and a robust driver library exist, which will save you hours or days of pain.

 

Fix Your Wheel Odometry

In each of these setups, wheel odometry is a given. There are cases where wheel odometry doesn’t work as well, but it rarely hurts more than it helps. A very easy spot check is the accuracy of your odometry for linear driving and for turning on the surfaces you expect to perform tasks on. If your robot uses a differential drive and operates on surfaces it skids on significantly, you will need different tuning to make it accurate. This is the cheapest way to significantly improve your accuracy, and I wrote extensively on how to do that here.
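The details are in that post, but as a minimal sketch of what “tuning” means for a differential drive: introduce one correction factor for linear travel and one for rotation, then calibrate them by driving a measured straight line and a measured in-place rotation. The constants below are illustrative assumptions, not values from my rover.

```python
# Minimal differential-drive odometry update with two calibration factors.
# LINEAR_CORRECTION and ANGULAR_CORRECTION are hypothetical knobs: compare a
# tape-measured straight drive and a measured in-place rotation against what
# the integrated odometry reports, then scale until they agree.
import math

WHEEL_SEPARATION = 0.40    # m, example value for your robot
LINEAR_CORRECTION = 1.00   # measured distance / odometry distance
ANGULAR_CORRECTION = 1.00  # measured rotation / odometry rotation

def update_pose(x, y, theta, d_left, d_right):
    """Integrate one odometry step.

    d_left, d_right: wheel travel since the last update, in meters
    (encoder ticks already converted to meters).
    """
    d_center = LINEAR_CORRECTION * (d_left + d_right) / 2.0
    d_theta = ANGULAR_CORRECTION * (d_right - d_left) / WHEEL_SEPARATION
    # Integrate along the average heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta
```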

 

Use a Monocular Camera No Matter What

A simple camera is the one exception to the rule “the fewer, the better”. I still encounter robots every once in a while without a camera mounted on them, but I strongly recommend having at least one. Even cheap cars today all have at least a backup camera. Your robot should too.

Cameras are essential for assessing what’s up with your robot remotely, and most are plug-and-play with Linux. If you’re not using them to assess fruit ripeness, you won’t need high resolution either. Here are my top picks, based on price, quality, and ease of setup:

  • The Spinel 2MP 30FPS is a great bare board camera with interchangeable lenses.
  • If you’re looking for a camera with a case, check out the Logitech C270. Don’t pick just any Logitech camera, though; I’ve found some to have issues due to the way the stream is encoded.
  • If you’re using a Raspberry Pi, I’ve found the Raspicam with interchangeable lenses to be great. However, it does come with a ribbon cable which can be hard to wire. A fisheye lens works really well for teleoperating a robot, as you can see more of the robot’s surroundings.

A note on cameras: there is a very large difference in USB bus bandwidth depending on whether the camera sends compressed or uncompressed images. Unless you need uncompressed video (most people don’t), you can save significant bus bandwidth by setting the format to MJPEG rather than RGB or YUYV. If you connect over USB 2, this is mandatory. If you connect over USB 3, it can still really help when you have many devices on the same bus.

Additionally, most applications don’t actually need more than 640x480 pixels. If you have a beautiful 4K camera, use it, but set a lower resolution for output.
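As a rough example of both settings, here is how you might request MJPEG at 640x480 from a UVC webcam with OpenCV. Whether the camera honors the request depends on the camera and driver; v4l2-ctl --list-formats-ext will show you what formats it actually offers.

```python
# Request MJPEG at 640x480 from a UVC webcam via OpenCV.
# The camera/driver may silently fall back to another format, so verify.
import cv2

cap = cv2.VideoCapture(0)  # /dev/video0
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    print("Got frame:", frame.shape)  # e.g. (480, 640, 3)
cap.release()
```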

 

2D Lidar Enables Navigation, Simple Safety and Other Key Things

Lidars have come down in cost significantly and will continue to do so. They can be great for odometry (e.g. the ROS laser_scan_matcher package) when your wheel odometry isn’t very good, for example when you’re using a four-wheel skid steer with a large payload or your rover is on slippery terrain and moving fast, like a sidewalk robot. They are commonly used for loop closure or, in gmapping, to update particles and build the map.

They don’t work well, however, when there are few features: long hallways and open spaces mean that you’ll have to rely on odometry completely. That is only a problem if your robot can’t recover its location later, so ask whether you really need accurate localization in those situations. When your robot is on uneven terrain, consider a 3D lidar instead, but those are much pricier.

I recommend the RPLidar lineup for users starting out. The A2 and A3 models are great picks, and the more recent S1 can be made waterproof and has an impressive 40 m range. There are several versions of each, depending on the range required. Note that the marketed range is a maximum, and you’ll want to make sure the features your robot needs to see accurately are well within that range. A bonus is that you can interface with them through TTL/UART, especially if you’re like me and try to stay away from USB as much as possible. For outdoor vehicles, I recommend SICK’s wide selection of lidars, which are a step up in terms of quality and price but, like the RPLidar, have a great community of roboticists and solid ROS packages. My Mars Rover has the SICK TIM561.
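If you are on ROS, one quick sanity check is to subscribe to the lidar’s scan topic and look at what fraction of returns land well inside the rated maximum. The topic name and threshold below are assumptions - adjust them for your lidar and environment.

```python
#!/usr/bin/env python
# Report what fraction of a LaserScan's valid returns fall within a
# "comfortable" range. Topic name and threshold are assumptions.
import rospy
from sensor_msgs.msg import LaserScan

COMFORTABLE_RANGE = 12.0  # m, well inside e.g. an RPLidar A3's rated maximum

def on_scan(scan):
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if not valid:
        rospy.logwarn("No valid returns - wide open space or a sensor issue?")
        return
    close = sum(1 for r in valid if r < COMFORTABLE_RANGE)
    rospy.loginfo("%.0f%% of returns within %.1f m",
                  100.0 * close / len(valid), COMFORTABLE_RANGE)

if __name__ == "__main__":
    rospy.init_node("scan_range_check")
    rospy.Subscriber("scan", LaserScan, on_scan)
    rospy.spin()
```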

 

Depth Cameras

I’ve tried quite a few depth sensors since they stopped making the Kinect, and although there are good sensors out there, the RealSense lineup is the strongest in its price category. There are a few different methods of producing depth, and understanding them is crucial for picking a sensor that works well for your robot’s environment; the RealSense folks wrote a great blog about this. Specifically, I would recommend the D435i or D415. The D435i has lower resolution than the D415, but has a larger field of view and comes with a built-in IMU.

These depth sensors are much noisier than 3D lidars and have shorter range (even the ToF-based RealSense L515 has a maximum range of 9 m), but they produce a denser point cloud than most 3D lidars (unless you’re willing to shell out a lot of money for one), and you don’t have to do any extrinsic calibration to a separate camera to get the depth information aligned with RGB images. This makes them a great tool for scene understanding, pose estimation, and manipulation tasks.
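As a sketch of how little work the alignment takes, this is roughly what getting depth already registered to the color image looks like with the pyrealsense2 SDK. The stream settings are illustrative and assume a D415/D435-class camera.

```python
# Rough sketch: depth aligned to the color stream with pyrealsense2.
# Resolutions and frame rates are illustrative; adjust for your camera.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.color)  # reproject depth into the color frame
try:
    frames = align.process(pipeline.wait_for_frames())
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    # Depth at the center pixel of the color image, in meters.
    print("Center depth: %.2f m" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```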

Still, more data is not always better. Do you really need a depth camera, or can a 2D lidar and a $60 board-level RGB camera give your robot what it needs to succeed? Depth data significantly increases the CPU and GPU compute required, and it can also lead to significantly more complex algorithms, which rarely perform stably without significant tuning.

 

Other Sensors To Consider

  • Millimeter-wave radar - less commonly used on small robots because of lower accuracy and resolution, but it performs better than visual-inertial odometry (VIO) in challenging conditions such as rain, smoke, and dust, and typically boasts longer range than lidar.
  • Bumpers - often overlooked as a sensor, but a great last line of safety and in some cases a great way to localize to objects, by gently bumping into them.
  • IMUs - tiny, cheap, low-power, and nearly free 6-DOF inertial sensing. They consist of an accelerometer and a gyroscope, and sometimes a magnetometer as well. They are very prone to drift, but are a great complement for low-cost robots that suffer from wheel slip during acceleration. SparkFun’s MPU-6050 breakout has good support and should work well for most robots.
  • 3D lidar - if you will be manufacturing several robots, these will likely be too expensive, but if you’re set on one, Ouster, Velodyne, and SICK all produce great 3D lidars.

Even if you don’t use ROS, we recommend using its message standards so your data is rendered nicely out of the box in the Stream tab of Freedom’s web app.
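For example, an IMU driver that publishes the standard sensor_msgs/Imu message will render in RViz, rosbag tooling, and the Stream tab without any custom work. The sketch below uses a hypothetical read_imu() function standing in for your own driver code; the topic and frame names are just common conventions.

```python
#!/usr/bin/env python
# Publish IMU readings as the standard sensor_msgs/Imu message.
# read_imu() is a placeholder for your own driver code.
import rospy
from sensor_msgs.msg import Imu

def read_imu():
    """Placeholder: return (accel_xyz in m/s^2, gyro_xyz in rad/s)."""
    return (0.0, 0.0, 9.81), (0.0, 0.0, 0.0)

if __name__ == "__main__":
    rospy.init_node("imu_publisher")
    pub = rospy.Publisher("imu/data_raw", Imu, queue_size=10)
    rate = rospy.Rate(100)  # Hz
    while not rospy.is_shutdown():
        accel, gyro = read_imu()
        msg = Imu()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "imu_link"
        (msg.linear_acceleration.x,
         msg.linear_acceleration.y,
         msg.linear_acceleration.z) = accel
        (msg.angular_velocity.x,
         msg.angular_velocity.y,
         msg.angular_velocity.z) = gyro
        # No orientation estimate from a raw 6-axis IMU: flag it per convention.
        msg.orientation_covariance = [-1.0] + [0.0] * 8
        pub.publish(msg)
        rate.sleep()
```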

Freedom Robotics Stream Tab - IMU, Steering Angle, GPS

Above is a view from the Freedom Stream tab showing the IMU, steering angle, GPS, cameras, and other key sensors on an autonomous car stack. Being able to fuse and view all the data together makes it significantly easier to debug sensor issues and to figure out how to properly align the sensors in time and space.

 

Tips For Positioning Sensors

Now that you’ve picked your sensors, it’s time to think about where to attach them to the robot. I’ve found that this is a tricky and iterative process, regardless of the sensor. Mobile robots wiggle and bounce around while they accelerate, bump into things, and navigate tight environments.

 

Position monocular cameras to enable easy navigation

For a monocular camera used for remote operation, consider installing a ‘third-person camera’ which allows you to see part of the robot while you’re operating it. I’ve found that it makes navigating tight spaces a lot easier since you have a better sense of the robot’s relative size.

One way to think through positioning is to think back to first- and third-person perspectives in your favorite video games. The camera needs a wide enough view that you don’t keep hitting walls, but not so wide that you can’t clearly see things far away in the center, and it needs the right height and placement so that you understand how the physical robot corresponds to its environment.

NASA-JPL Open Source Mars Rover 3rd Person Camera

My Mars Rover has a third-person camera attached to the back, and a first-person camera in front that doesn’t see any part of the robot. The third-person camera is much easier to use when piloting.

 

Position depth cameras and 3D lidar to capture your key area of interest

For any 3D lidar or depth cameras, I would recommend positioning them so that their fields of view (FOV) overlap. Although you can calibrate from ego-motion, overlap allows you to calibrate much more accurately and to verify the calibration quickly using calibration targets. If you use a sensor for localization or mapping, point it at the features you are localizing to. If you want really good global localization within a room, make sure your sensors are pointing at fixed features on the walls. If instead you’d like your robot to localize to fixtures or furniture that might move slightly over time, you want those to show up prominently in your sensor data; you might not be perfectly localized with respect to the walls, but those matter less.
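Wherever the sensors end up, record their mounting poses somewhere machine-readable so every node agrees on the extrinsics. In ROS that usually means a URDF or a static transform; below is a small sketch of broadcasting a camera’s mounting pose with tf2, with made-up frame names and offsets.

```python
#!/usr/bin/env python
# Broadcast a sensor's mounting pose as a static transform.
# Frame names and the offsets/angle below are illustrative.
import math
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

if __name__ == "__main__":
    rospy.init_node("camera_static_tf")
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = "base_link"
    t.child_frame_id = "camera_link"
    t.transform.translation.x = 0.20   # m forward of base_link
    t.transform.translation.y = 0.0
    t.transform.translation.z = 0.35   # m above base_link
    pitch = math.radians(15.0)         # camera pitched slightly downward
    t.transform.rotation.x = 0.0
    t.transform.rotation.y = math.sin(pitch / 2.0)
    t.transform.rotation.z = 0.0
    t.transform.rotation.w = math.cos(pitch / 2.0)

    broadcaster.sendTransform(t)
    rospy.spin()
```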

 

Maximize 2D lidar view

If you attach your 2D lidar to the top of the robot so that it has a full 360-degree view, it might miss low obstacles and bump into them. I would recommend mounting the lidar on the front of your robot at knee height with at least 200 degrees of view - the more the better. Stick it out too far and you might damage it if the robot bumps into something, but if it stays inside the robot’s footprint, navigating tight spaces becomes much more difficult; as a result, most robots using 2D lidar have a thin slit at knee height that lets the laser rays pass through the body. Also consider the environment the robot will be operating in and make sure that the lidar height will pick up the most common obstacles the robot will encounter.

On more safety-oriented robots, you will often see two 2D lidars: one on a front corner and the other on the diagonally opposite corner.

 

What Does Your Hard-Earned Sensor Stack Look Like?

There’s a lot of value in carefully examining which sensors you need, but the sensors above are a great start mostly due to their excellent community support. Many algorithms are designed with a combination of these common sensors in mind, allowing you to cut development time and focus on your application.

I’m always excited to hear about ingenious ways to use new sensors or interesting setups. Let me know if there are any sensors that need to be added to this list, or if there are any visualizations you would love to see in the Stream tab (achille@freedomrobotics.ai).
