
When would you use a 24×24 LiDAR depth sensor instead of stereo vision?
I’ve been looking at compact LiDAR options for embedded vision and robotics applications, and the Sony AS-DT1 is interesting precisely because it isn’t a high-resolution 3D mapping sensor. It seems aimed instead at obstacle detection, proximity sensing, navigation, and spatial awareness.
Key specs that stand out:
- dToF SPAD distance sensing
- 24 × 24 depth grid / 576 ranging points
- Up to 30 fps in standard modes
- Up to 40 m range indoors (shorter outdoors)
- 940 nm VCSEL
- USB-C host connection
- UART and external trigger support
- Compact 29 × 29 × 31 mm housing
My take is that this type of sensor makes sense when you need compact, low-overhead distance data rather than dense 3D reconstruction. For robotics or UAVs, it could be useful as a lightweight obstacle/proximity sensor alongside cameras or other perception hardware.
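To make the "low-overhead" point concrete, here's a minimal sketch of the kind of processing a 576-point grid needs for proximity alerting. Everything here is an assumption for illustration: the AS-DT1's actual host protocol and data format aren't shown, so the frame is just a 24×24 nested list of distances in metres, and the zone split and threshold are arbitrary.

```python
# Hypothetical sketch: zone-based proximity check over a 24x24 depth frame.
# Frame layout, units, and dropout convention (0 = no return) are assumptions,
# not the AS-DT1's real output format.

ROWS = COLS = 24
ZONE = 8  # split the grid into a 3x3 set of 8x8 zones

def min_distance_per_zone(frame):
    """frame: 24x24 nested list of distances in metres (0 = no return).
    Returns a 3x3 nested list of the nearest valid return in each zone."""
    zones = [[float("inf")] * 3 for _ in range(3)]
    for r in range(ROWS):
        for c in range(COLS):
            d = frame[r][c]
            if d > 0:  # ignore dropouts / out-of-range points
                zr, zc = r // ZONE, c // ZONE
                zones[zr][zc] = min(zones[zr][zc], d)
    return zones

def too_close(zones, threshold_m=1.5):
    """List the (row, col) indices of zones with an obstacle inside the threshold."""
    return [(zr, zc) for zr in range(3) for zc in range(3)
            if zones[zr][zc] < threshold_m]

# Toy usage: flat 10 m scene with a 1.0 m obstacle in the centre zone
frame = [[10.0] * COLS for _ in range(ROWS)]
frame[12][12] = 1.0
print(too_close(min_distance_per_zone(frame)))  # -> [(1, 1)]
```

That's the whole pipeline at this resolution: no rectification, no disparity matching, no point-cloud registration, which is exactly the trade against stereo or dense LiDAR.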
Spec/source page I was looking at:
https://aegis-elec.com/sony-as-dt1-lidar-depth-sensor.html
Curious how others here would compare this kind of compact dToF module against stereo vision or higher-density LiDAR for robotics navigation.