Welcome to the second entry in the Zupt Autonomous Products and Technologies (ZAPT) blog series: Stories From Our Engineers.
As a technology company, our engineers have combined their knowledge from the mechanical, navigation, localization, perception, and electrical engineering fields to build our first fully autonomous product, a robotic commercial lawnmower known as Nomad. During this series, several of our lead engineers will share their take on how they played a role in designing Nomad, what makes it unique to them, and the future they see for it. The next engineer we would like you to meet is Lilly, one of the Navigation Engineers responsible for perception and object detection, LiDAR map matching, and stereo-imagery depth perception.
Pictured: Matt (Left) and Lilly (Right) detected by Nomad
What She Brings
Lilly has an extensive background in object and obstacle detection using cameras and LiDAR. While completing her doctorate in Mechanical Engineering, she conducted in-depth experiments using localization and perception algorithms to perform object detection at high accuracy. She says, "Working on Nomad is a chance to implement what I have learned by bringing the academic scene into the real world." Our object detection algorithm for Nomad is a version of YOLO, a well-known object detection system that can run in real time, with TensorRT accelerating the model inference. Lilly is excited to apply her deep understanding of this niche area to ensure a safe, fully autonomous, robotic lawnmower.
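For readers curious what that looks like in practice, below is a minimal, illustrative sketch of the post-processing step that follows any YOLO-style detector: keeping confident boxes and suppressing heavily overlapping ones. The function name and thresholds are assumptions for illustration, not Nomad's production code (which additionally runs the network itself through TensorRT for speed).

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5, conf_thresh=0.4):
    """Keep the highest-scoring boxes and drop overlaps (illustrative sketch).

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    """
    mask = scores >= conf_thresh                 # drop low-confidence boxes
    boxes, scores = boxes[mask], scores[mask]
    order = scores.argsort()[::-1]               # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Overlap (IoU) of the best remaining box with all the others
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                    (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_rest - inter)
        order = order[1:][iou < iou_thresh]      # discard duplicates of box i
    return boxes[keep], scores[keep]
```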
What She Does
Lilly focuses on developing localization and perception algorithms using point cloud and imaging sensors such as LiDAR and cameras. Our localization uses LiDAR scans to provide a real-time set of aiding observations to the integrated (Kalman filter) solution when our GNSS (GPS) system is degraded by trees or buildings. Essentially, Lilly works on how Nomad can navigate precisely within its challenging environment by allowing it to "see" and know where to mow.
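As a rough illustration of what an "aiding observation" means here, the sketch below shows a textbook Kalman measurement update fed by a LiDAR-derived position. The state layout, noise values, and GNSS-quality flag are assumptions for illustration, not the actual integrated solution.

```python
import numpy as np

def lidar_position_update(x, P, z_lidar, meas_var=0.05 ** 2):
    """Kalman measurement update using a LiDAR-derived 2D position (sketch).

    x: assumed state [px, py, vx, vy]; P: 4x4 covariance;
    z_lidar: position from LiDAR map matching, in the navigation frame.
    """
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])   # we observe position only
    R = np.eye(2) * meas_var               # assumed LiDAR position noise
    y = z_lidar - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# The aiding observation would only be applied when GNSS quality degrades:
# if not gnss_quality_ok:
#     x, P = lidar_position_update(x, P, lidar_position)
```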
Lilly has three primary responsibilities: designing LiDAR SLAM algorithms, designing camera-based obstacle detection algorithms based on deep learning, and designing LiDAR-based segmentation algorithms:
1. Designing LiDAR SLAM algorithms:
There are two main modules for using LiDAR for navigation: Mapping and Localization.
In the Mapping module, if the mower enters a new area for the first time, onboard LiDAR data is collected and integrated with GNSS, resolver, and IMU measurements to build the base map to define the mowing boundary and forbidden areas.
This map is then post-processed to correct small errors and transformed into a global reference frame (UTM). The steps involved are:
Build a base map using SLAM techniques in real time.
Post-process the map to add loop and plane constraints and remove distortion using scan-matching techniques and RANSAC-based plane detection methods (a sketch of the plane fit follows this list).
Transform this pre-computed base map into the navigation reference frame.
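To make the plane-detection step concrete, here is a toy RANSAC plane fit of the kind used when cleaning up a base map. The iteration count and distance threshold are illustrative assumptions, not the tuned values used on Nomad.

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC (sketch).

    Returns (normal, d) for the plane n.p + d = 0 and the inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                     # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)  # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```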
In the Localization module, after the base map is built, LiDAR is used to find the precise real-time position using multi-threaded frame-to-frame and frame-to-global scan matching methods. The estimated LiDAR odometry is then transformed into the global frame, with the odometry and absolute positions published in UTM. This process is implemented on a GPU and runs very quickly. Lilly says it is essential that "our SLAM algorithm includes both real-time frame-to-frame motion estimation and frame-to-global pose estimation to increase the localization accuracy and reduce the drift."
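The sketch below shows the frame-to-frame half of that idea using ICP registration from the open-source Open3D library; this is an assumed stand-in for illustration, while the production pipeline is GPU-accelerated and also matches each frame against the pre-computed global base map.

```python
import numpy as np
import open3d as o3d  # assumed dependency for this illustration

def frame_to_frame_odometry(prev_scan, curr_scan, init=np.eye(4), max_dist=1.0):
    """Estimate the relative motion between two LiDAR scans with ICP (sketch).

    prev_scan, curr_scan: (N, 3) arrays of points in the sensor frame.
    Returns a 4x4 transform mapping the current frame into the previous one.
    """
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(curr_scan))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(prev_scan))
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation

# Chaining these relative transforms gives LiDAR odometry; a periodic
# frame-to-global match against the base map (not shown) re-anchors the
# pose in UTM and keeps the drift bounded.
```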
2. Designing a camera-based obstacle detection algorithm based on deep learning:
The obstacle detection function is responsible for identifying obstacles like people, animals, and vehicles within our environment, finding the distance between the mower and each obstacle, and generating warnings for the central control module and the local planner.
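A simplified sketch of that flow is shown below, assuming a list of detections plus a per-pixel depth image from the stereo cameras; the warning thresholds and the message format are hypothetical.

```python
import numpy as np

# Hypothetical warning distances in metres (illustrative assumptions only).
SLOW_DOWN_DIST = 8.0
STOP_DIST = 3.0

def obstacle_warnings(detections, depth_map):
    """Attach a distance and warning level to each detected obstacle (sketch).

    detections: list of (label, (x1, y1, x2, y2)) boxes in pixel coordinates.
    depth_map: per-pixel range image (e.g. from stereo), same resolution.
    """
    warnings = []
    for label, (x1, y1, x2, y2) in detections:
        patch = depth_map[y1:y2, x1:x2]
        distance = float(np.nanmedian(patch))   # robust range to the obstacle
        if distance < STOP_DIST:
            level = "stop"
        elif distance < SLOW_DOWN_DIST:
            level = "slow_down"
        else:
            level = "monitor"
        warnings.append({"label": label, "distance_m": distance, "level": level})
    return warnings  # forwarded to the central control module / local planner
```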
3. Designing LiDAR-based segmentation algorithm:
The segmentation module clusters the environment into ground and non-ground groups. The ground segments are mowable areas, and the non-ground segments are terrain the mower cannot traverse. If the mower approaches a non-ground area, the segmentation module warns the control module to slow down and avoid possible collisions.
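In its simplest form, ground/non-ground clustering can be pictured as thresholding heights above a fitted ground plane, as in the toy sketch below; the real segmentation module is considerably more sophisticated.

```python
import numpy as np

def segment_ground(points, ground_normal, ground_d, height_thresh=0.15):
    """Split an (N, 3) scan into ground and non-ground points (sketch).

    ground_normal, ground_d: plane n.p + d = 0, e.g. from a RANSAC fit.
    height_thresh: assumed height above the plane still treated as mowable.
    """
    heights = points @ ground_normal + ground_d   # signed distance to plane
    ground_mask = np.abs(heights) < height_thresh
    return points[ground_mask], points[~ground_mask]

# Non-ground points near the planned path would prompt the control module
# to slow down and steer clear.
```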
What She Likes
Compared to other companies designing robotic mowers, Nomad is unique because its onboard sensors cover the entire 360-degree surroundings instead of focusing only on the front, or travel, direction. This makes Nomad very safe and reliable. The LiDAR we use is a MEMS spinning LiDAR that can detect objects up to 120 meters away. The cameras then take care of any possible blind spots in the LiDAR coverage. Lilly says, "Anything near our mower from any direction can be detected and, if appropriate, will trigger the obstacle avoidance function."
Moreover, Nomad is designed to adapt to its environment and is expected to change modes based on its surroundings. Lilly says this adaptive function shows up as "switching between fat and skinny mode when narrow access or wide spaces are detected."
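As a deliberately tiny sketch of what that kind of mode switch could look like (the clearance threshold is made up for illustration):

```python
# Hypothetical clearance threshold in metres (illustrative assumption).
SKINNY_MODE_CLEARANCE = 1.5

def select_mode(lateral_clearance_m):
    """Pick a driving mode from the measured clearance around the mower."""
    return "skinny" if lateral_clearance_m < SKINNY_MODE_CLEARANCE else "fat"
```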
What She Sees
Lilly's vision for the future is seeing Nomad "reliably autonomously mow in GNSS-denied areas based purely on LiDAR navigation." With the implementation of localization and perception algorithms for object detection and LiDAR map matching, Nomad can autonomously mow challenging spaces by understanding its environment through our sensor suite and, more importantly, through the advanced algorithms Lilly has implemented. The end goal of Nomad is not only to speed up operational efficiency, safely mowing more area with the same number of people, but also to mow in difficult areas without human intervention.
Next time, our Stories From Our Engineers blog will feature Vash, our Control Engineer responsible for our Global and Local Path planners.