Abstract

In autonomous driving systems, advanced sensing technologies (such as Light Detection and Ranging (LIDAR) devices and cameras) can capture high volumes of data for real-time traversability analysis. Off-road autonomy is more challenging than other autonomous applications due to the highly unstructured environment with various types of vegetation. Understory of unknown density can create extremely challenging scenarios (such as negative obstacles masked by dense vegetation) by concealing potential obstacles in the terrain, leading to severe vehicle damage, significant financial loss, and even operator injury or death. This paper investigates the impact of understory vegetation density on obstacle detection in off-road traversability analysis. By leveraging a physics-based autonomous driving simulator, a machine learning–based framework is proposed for obstacle detection based on point cloud data captured by LIDAR. It is observed that increasing understory vegetation density adversely affects classification performance in correctly detecting solid obstacles. With the cumulative approach used in this paper, however, sensitivity results for different density levels converge as the vehicle incorporates more time-frame data into the classification algorithm.
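To illustrate the cumulative approach the abstract describes, the sketch below merges successive LIDAR frames so that sparse returns from a partially occluded obstacle accumulate into a detectable cluster. This is a minimal illustration only: the function names, the grid-cell representation, and the hit threshold are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch of cumulative LIDAR frame aggregation.
# All names and thresholds below are illustrative, not from the paper.

def accumulate_frames(frames):
    """Merge per-frame point lists into one cumulative point cloud."""
    cloud = []
    for points in frames:
        cloud.extend(points)
    return cloud

def detect_obstacle(cloud, cell, min_hits=5):
    """Flag a ground-grid cell as a solid obstacle once enough
    returns have landed in it across the accumulated frames."""
    hits = sum(1 for (x, y, z) in cloud if (int(x), int(y)) == cell)
    return hits >= min_hits

# Each frame yields only two sparse returns near grid cell (3, 4),
# as if vegetation were occluding most of the obstacle.
frames = [[(3.2, 4.1, 0.5), (3.5, 4.6, 0.7)] for _ in range(4)]

print(detect_obstacle(accumulate_frames(frames[:1]), (3, 4)))  # single frame: too few hits
print(detect_obstacle(accumulate_frames(frames), (3, 4)))      # four frames: obstacle detected
```

With one frame the cell collects only two returns and falls below the threshold; after four frames the accumulated returns exceed it, mirroring how sensitivity at different density levels converges as more time frames are incorporated.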
