u/Old_Course9537

▲ 3 r/ROS

Looking for an open-source XYZ gantry (pick & place style) with full 3D CAD (STEP / SolidWorks compatible).

Requirements:

Cartesian XYZ gantry (rails + belts or screws)

Full assembly (not just STL files)

Prefer something buildable, not just a concept

Bonus: BOM + drawings

I’ve checked:

OpenPnP ecosystem — good references but CAD is scattered

LumenPnP — closest thing to a full machine, but CAD is in FreeCAD and not super clean to reuse

Surprised there’s no “Voron-style” clean parametric project for this.

Anyone know:

A complete open-source design?

Or even just a solid XY gantry module I can adapt?

reddit.com
u/Old_Course9537 — 7 days ago
▲ 30 r/ROS

Hey everyone,

I’m looking to dive into self-driving / autonomous systems using ROS and wanted to ask for some guidance from people with more experience.

What would you say is the best open-source self-driving project overall right now (in terms of maturity, community support, and real-world applicability)? I’m hoping to find something solid to study and possibly build on.

Also, I’m especially interested in agricultural applications (like autonomous tractors, field robots, etc.). Are there any standout open-source projects or frameworks focused on agriculture that you’d recommend?

Appreciate any suggestions, experiences, or pointers 🙏

reddit.com
u/Old_Course9537 — 8 days ago
▲ 13 r/ROS

Hey everyone,

I'm building an autonomous agricultural robot using ROS2 and I'm hitting a wall on the perception and navigation side of things. Would love some advice from people who've tackled similar problems.

The Setup

A field of trees planted in rows. I mark the start and end GPS waypoints of each row and the robot needs to drive from the start waypoint to the end waypoint. The critical requirement is that the robot must stay exactly 30 cm from the tree trunks, not more, not less. Budget is tight, so expensive sensor arrays are off the table.

My Current Thinking on Perception

I'm planning to use YOLO for tree detection via camera. My reasoning is that it lets me detect tree trunks specifically and ignore everything else in the environment, like weeds, rocks, or uneven ground, which I think rules out 2D LiDAR for this use case (more on that below). Once I detect the trunks, I can use their position in the image to estimate lateral offset and keep the robot at exactly 30 cm. Does this approach make sense for this level of precision? What camera would you recommend: monocular, stereo, or a depth camera like a RealSense? And would YOLO alone be sufficient for the distance estimation, or does it need to be paired with a depth sensor?
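To show what I mean by getting distance out of a detection: with a monocular camera, a 2D bounding box alone has no range, so the usual back-of-envelope is the pinhole model with an assumed real-world trunk diameter. A minimal sketch (the trunk diameter, focal length, and pixel width below are made-up illustration values, not measurements from my setup):

```python
def distance_from_bbox(bbox_width_px: float,
                       real_width_m: float,
                       focal_length_px: float) -> float:
    """Pinhole-camera range estimate: Z = f * W / w.

    bbox_width_px   -- width of the YOLO bounding box in pixels
    real_width_m    -- assumed real trunk diameter in metres
    focal_length_px -- focal length in pixels (from camera calibration)
    """
    return focal_length_px * real_width_m / bbox_width_px

# Illustration: a 0.15 m trunk, 600 px focal length, 300 px wide box
# comes out at 0.30 m range.
z = distance_from_bbox(300.0, 0.15, 600.0)
```

The obvious weakness is that trunk diameter varies tree to tree, which is partly why I'm asking whether a stereo or depth camera is worth it.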

Remaining Questions

  1. Is 2D LiDAR a bad fit for farm environments?

I'm leaning away from 2D LiDAR because in a field with weeds and ground vegetation it seems like it would detect everything as an obstacle and become unusable. If mounted higher to clear the weeds, it might miss the lower parts of the trunks. Is this a fair assessment, or are there ways to make 2D LiDAR work in this kind of environment?

  2. Localization — is GPS + EKF enough?

GPS alone won't give me 30 cm accuracy. I'm thinking of using GPS for coarse positioning (waypoint navigation), an EKF fusing GPS + IMU + wheel odometry for dead reckoning between trees, and then the YOLO-based camera pipeline for the fine-grained lateral offset to enforce the 30 cm constraint. Does this architecture make sense, or am I overcomplicating or undercomplicating it?

  3. The 30 cm lateral constraint — how do people usually solve this?

Even with YOLO detecting the trunks, I'm not sure how to reliably convert a bounding box into a real-world distance of exactly 30 cm, especially as lighting changes throughout the day. Is visual servoing the right approach here? Is there a standard method for agricultural row following at this precision level?
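To make question 3 concrete: my understanding is that visual servoing here boils down to estimating the lateral offset from the nearest trunk and feeding the error (offset minus 0.30 m) into a steering controller. A toy proportional sketch of that loop, where the gain and steering limit are numbers I made up for illustration and the offset estimate is assumed to come from the perception pipeline:

```python
import math

TARGET_OFFSET_M = 0.30          # desired lateral distance from the trunk line
KP = 1.5                        # proportional gain (would need tuning on the robot)
MAX_STEER = math.radians(25)    # assumed mechanical steering limit

def steering_command(measured_offset_m: float) -> float:
    """Map lateral-offset error to a steering angle in radians.

    Positive error -> robot is too far from the row, steer toward it.
    Negative error -> too close, steer away.
    """
    error = measured_offset_m - TARGET_OFFSET_M
    steer = KP * error
    # clamp to the steering limit
    return max(-MAX_STEER, min(MAX_STEER, steer))
```

What I'm unsure about is whether plain P (or PID) on the offset is enough at this precision, or whether people use a proper image-based visual servoing formulation.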

Constraints Summary

Using ROS2, YOLO for tree detection, low budget (ideally sub-$300 for sensors), outdoor environment with variable lighting, and trees at roughly uniform spacing in rows.

Any advice on sensor selection, fusion architecture, distance estimation from YOLO detections, or ROS2 specific packages that could help here would be massively appreciated. Also happy to hear "you're thinking about this wrong" if that's the case!
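For the fusion architecture part, here's roughly how I'd imagined wiring it up with the robot_localization package's EKF node, fusing wheel odometry, IMU, and GPS. Topic names are placeholders for whatever my drivers would publish, and the boolean vectors follow the package's [x, y, z, roll, pitch, yaw, then velocities, then accelerations] ordering — happy to be told this is the wrong shape entirely:

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true              # field robot, planar motion
    odom0: /wheel/odometry        # placeholder topic name
    odom0_config: [false, false, false,    # x, y, z
                   false, false, false,    # roll, pitch, yaw
                   true,  true,  false,    # vx, vy, vz
                   false, false, true,     # vroll, vpitch, vyaw
                   false, false, false]    # ax, ay, az
    imu0: /imu/data               # placeholder topic name
    imu0_config: [false, false, false,
                  false, false, true,      # fuse yaw
                  false, false, false,
                  false, false, true,      # fuse yaw rate
                  false, false, false]
    odom1: /gps/odometry          # GPS via navsat_transform_node
    odom1_config: [true,  true,  false,   # fuse x, y position
                   false, false, false,
                   false, false, false,
                   false, false, false,
                   false, false, false]
```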

Thanks 🙏

reddit.com
u/Old_Course9537 — 14 days ago
