Robotics: Science and Systems XX
Demonstrating CropFollow++: Robust Under-Canopy Navigation with Keypoints
Arun Narenthiran Sivakumar, Mateus Valverde Gasparino, Michael McGuire, Vitor Akihiro Hisano Higuti, M. Ugur Akcal, Girish Chowdhary
Abstract:
We present an empirically robust vision-based navigation system for under-canopy agricultural robots using semantic keypoints. Autonomous under-canopy navigation is challenging due to the tight spacing between crop rows (∼0.75 m), degraded RTK-GPS accuracy caused by multipath error, and noise in LiDAR measurements from excessive clutter. Earlier work, CropFollow, addressed these challenges with a learning-based visual navigation system built on end-to-end perception. However, that approach has two limitations: a lack of an interpretable representation, and sensitivity to outlier predictions during occlusion due to the absence of a confidence measure. Our system, CropFollow++, introduces a modular perception architecture with a learned semantic keypoint representation. This representation is more modular and more interpretable than CropFollow's, and it provides a confidence measure for detecting occlusions. CropFollow++ significantly outperformed CropFollow in terms of the number of collisions (13 vs. 33) in field tests spanning ∼1.9 km each in challenging late-season fields with significant occlusions. We also deployed CropFollow++ on multiple under-canopy cover-crop planting robots at large scale (25 km in total) across varied field conditions, and we discuss the key lessons learned from these deployments.
Bibtex:
@INPROCEEDINGS{Sivakumar-RSS-24,
AUTHOR = {Arun Narenthiran Sivakumar AND Mateus Valverde Gasparino AND Michael McGuire AND Vitor Akihiro Hisano Higuti AND M. Ugur Akcal AND Girish Chowdhary},
TITLE = {{Demonstrating CropFollow++: Robust Under-Canopy Navigation with Keypoints}},
BOOKTITLE = {Proceedings of Robotics: Science and Systems},
YEAR = {2024},
ADDRESS = {Delft, Netherlands},
MONTH = {July},
DOI = {10.15607/RSS.2024.XX.023}
}
