Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving
Abstract
- Improves the LiDAR and camera fusion (extrinsic calibration)
- Uses intensity discontinuities plus erosion and dilation of the edge image for increased robustness against shadows and visual patterns, a recurring problem in point-cloud-related work
- Uses a gradient-free optimizer
- Fusion pipeline is lightweight and able to run in real time
- Object detector modified from Faster R-CNN
- Evaluated on the KITTI dataset
- Outlook on how radar can be added to the fusion pipeline via velocity matching
- Real-time at 10 Hz
Comparison
Sensors
- RGB cameras:
- color and texture information
- good for object classification
- limited detection range
- perform poorly in limited lighting or adverse weather conditions
- cheap
- LiDARs
- provide precise distance info
- wider detection range
- can detect small objects
- no color info
- performance decreases in heavy rain
- expensive
- Radars
- provide precise distance and velocity info
- work well in inclement weather
- rather low resolution
- poor for object detection
Fusion techniques
High-level fusion (HLF)
- Distributed system
- Detects objects with each sensor separately
- Subsequently combines these detections
- Fusion has only limited information available
Low-level fusion (LLF)
- Centralized system
- Combines all the data at the raw-data level
- More difficult to combine the data
Key Sensing Requirements
- Sensor data must be time-synchronized and compensated for ego-motion (see the sketch below)
- Fusion and detection algorithms need to run in real time
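As a rough illustration of the time-synchronization point above, here is a minimal sketch of nearest-timestamp matching between a camera frame and LiDAR sweeps; the 50 ms tolerance and the data layout are assumptions for illustration, not from the paper:

```python
import bisect

def match_nearest(lidar_stamps, camera_stamp, tolerance=0.05):
    """Index of the LiDAR sweep closest in time to the camera frame,
    or None if the gap exceeds the tolerance (seconds)."""
    i = bisect.bisect_left(lidar_stamps, camera_stamp)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_stamps)]
    if not candidates:
        return None
    best = min(candidates, key=lambda j: abs(lidar_stamps[j] - camera_stamp))
    return best if abs(lidar_stamps[best] - camera_stamp) <= tolerance else None

# Example: pair a camera frame at t=0.12 s with 10 Hz LiDAR sweeps
print(match_nearest([0.0, 0.1, 0.2, 0.3], 0.12))  # -> 1
```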
Related Work
Use 3D markers (inconvenient) REF
Deep learning based end-to-end architecture for feature extraction, feature matching and global regression REF
- A large amount of data is required for training
- Separate data collection for each vehicle
Edge alignments between optical camera and LiDAR data using reflectivity values REF
- Uses an exhaustive grid search to fit edges in image and point cloud data
- takes too much time → not real-time (see the rough cost estimate below)
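A back-of-the-envelope estimate of why an exhaustive grid search over the 6 extrinsic parameters cannot run online; the grid resolution and evaluation rate below are illustrative assumptions, not numbers from the referenced work:

```python
# Exhaustive grid over 6 extrinsic parameters (3 rotation, 3 translation)
steps_per_param = 20                 # assumed grid resolution per parameter
evals = steps_per_param ** 6         # 64,000,000 similarity evaluations
evals_per_second = 10_000            # assumed rate for one edge-similarity check
hours = evals / evals_per_second / 3600
print(f"{evals:,} evaluations -> {hours:.1f} hours per calibration")  # ~1.8 h
```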
Target-less camera LiDAR calibration REF (the approach this paper applies)
Methodology
Uses LLF
- prevents the occurrence of aberrations and duplicate objects
Uses MLF (mid-level fusion)
- an abstraction sitting on top of LLF, where features extracted from multiple sensors' data are fused
Sensor
- Velodyne HDL-64E S3
![](https://i.imgur.com/PpcfZsI.png =300x300)
Fusion of LiDAR and Camera (extrinsic calibration)
- Relies on accurate intrinsic calibration of the sensors
- Uses a ring-pattern calibration board REF
- Achieves a 70% reduction of the average reprojection error
- Finds the 4x4 transformation matrix TL
- TL consists of a rotation and a translation (6 degrees of freedom); see the first sketch below
- Finds edges in camera images and matches them in the point cloud with a similarity function S
- Uses the inverse distance transformation (IDT) and erosion and dilation (ED) to increase robustness to shadows in the scene; see the second sketch below
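First, a minimal sketch of how the six parameters map to the 4x4 matrix TL; using SciPy's Rotation with an xyz Euler convention is an assumption for illustration, not necessarily the paper's parameterization:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def make_T_L(roll, pitch, yaw, tx, ty, tz):
    """Build the 4x4 LiDAR-to-camera transform from 6 degrees of freedom:
    three rotation angles (rad) and three translation components (m)."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [tx, ty, tz]
    return T

# Map one homogeneous LiDAR point into the camera frame:
p_lidar = np.array([5.0, 0.2, -1.0, 1.0])
p_camera = make_T_L(0.01, -0.02, 1.57, 0.3, 0.0, -0.1) @ p_lidar
```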
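Second, a sketch of the edge-similarity idea with IDT and erosion/dilation, reusing make_T_L from the sketch above; the Canny edge detector, kernel size, projection details, and Nelder-Mead as the gradient-free optimizer are all illustrative assumptions, not the paper's exact implementation:

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def edge_idt(gray):
    """Edge image -> inverse distance transform (IDT): pixels on or near
    an edge score high and fall off smoothly with distance."""
    edges = cv2.Canny(gray, 50, 150)
    kernel = np.ones((3, 3), np.uint8)
    # Erosion then dilation suppresses thin spurious edges
    # caused by shadows and visual patterns.
    edges = cv2.dilate(cv2.erode(edges, kernel), kernel)
    dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)
    return 1.0 / (1.0 + dist)  # 1.0 on edges, -> 0 far away

def similarity(params, pts_lidar, gray, K):
    """S: project LiDAR edge points (N x 4, homogeneous) with candidate
    extrinsics and sum the IDT values they land on (higher = better)."""
    pc = (make_T_L(*params) @ pts_lidar.T)[:3]   # points into camera frame
    pc = pc[:, pc[2] > 0]                        # keep points in front of camera
    uv = K @ pc                                  # pinhole projection with intrinsics K
    uv = (uv[:2] / uv[2]).T.astype(int)
    idt = edge_idt(gray)
    h, w = idt.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return float(idt[uv[ok, 1], uv[ok, 0]].sum())

# Gradient-free refinement of an initial guess x0 (6 parameters):
# result = minimize(lambda x: -similarity(x, pts, gray, K), x0, method="Nelder-Mead")
```

In practice the IDT would be precomputed once per image rather than recomputed inside every similarity evaluation.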