Finished the hardware of my ROS 2 bimanual diff-drive robot
So far it has:
- 2 LeRobot arms
- Pan & tilt with a RealSense camera
- Diff drive with ros2_control
- Magnetic charging
Next, I want it to pick up some clothes like socks and put them into the washing machine autonomously 😄
Hey r/robotics community,
A couple weeks back, I asked about how you all were managing AI development in robotics and I got a bunch of great responses. To summarize:
My problems
Your solutions
I implemented four changes to my setup:
After making these changes, I've seen a pretty sizeable increase in my development speed using AI in robotics.
Previously, I was trying to fill the context window with the code I'd already written, but that wasn't enough for Claude to actually understand the software architecture or data pipeline in my codebase. With the changes mentioned above, I can now let Claude develop new nodes and software, and there are significantly fewer problems integrating Claude's code with my existing code, from what I've seen so far.
One thing that was always an annoyance for me was Claude's lack of understanding of what was ROS 1 and what was ROS 2. I ended up creating a RAG database that pulls in relevant documentation for whatever Claude is working on, and that's worked incredibly well. Paired with some custom tool calls I've made, my setup no longer has any confusion about what's ROS 2 and which commands I have access to, running ROS 2 Jazzy and Gazebo Harmonic in particular.
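If anyone wants to replicate the retrieval piece, the core of it is just embedding documentation chunks and pulling the nearest ones into the prompt. Here's a minimal sketch; the model name and the doc chunks are placeholders, not my exact setup:

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # any embedding model works

# Placeholder corpus: pre-chunked ROS 2 Jazzy / Gazebo Harmonic documentation
chunks = [
    "ros2 launch <package> <launch_file> starts a launch file in ROS 2.",
    "ROS 2 parameters are set via --ros-args -p name:=value.",
    "Gazebo Harmonic uses the gz CLI, e.g. gz sim world.sdf.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k doc chunks most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q  # cosine similarity (vectors are normalized)
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

# Prepend the retrieved chunks to the coding prompt before handing it to Claude
context = "\n".join(retrieve("How do I set a node parameter in ROS 2?"))
```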
Thanks for all of your help! I thought I'd leave this post here for anyone who runs into something similar trying to use Claude Code for robotics. I'm even considering doing some custom evals for this setup on robotics-specific coding problems, given how much more consistent it seems to be. If anyone's already done something similar, I'd love to hear about it in the comments. Cheers!
We are releasing rbot, an open-source Autonomous Mobile Robot simulation stack for ROS 2 Jazzy and Gazebo Harmonic.
The project is built for teams, students, and ROS users who want a practical AMR baseline they can run, study, and adapt. It packages the core simulation workflow into one ROS 2 workspace: robot description, Gazebo simulation, ros2_control, teleoperation, sensors, localization, mapping, and Nav2 navigation.
What is included:
The quick workflow follows the same path a user would take with a real AMR project: map the environment, save the map, localize against it, and send navigation goals in RViz.
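rbot ships its own launch files for each of these steps, so the commands below are only the generic slam_toolbox / Nav2 shape of that workflow, not the exact invocations from the repo:

```sh
ros2 launch slam_toolbox online_async_launch.py use_sim_time:=true   # 1. drive around and build the map
ros2 run nav2_map_server map_saver_cli -f ~/maps/lab                 # 2. save the map
ros2 launch nav2_bringup localization_launch.py map:=/path/to/lab.yaml use_sim_time:=true   # 3. localize (AMCL)
ros2 launch nav2_bringup navigation_launch.py use_sim_time:=true     # 4. send goals via the RViz "Nav2 Goal" tool
```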
Gazebo Harmonic is the supported simulator today. Isaac Sim integration is planned.
Repository: https://github.com/rlxai/rbot
Demo video: YouTube Link
We would welcome feedback from the ROS and robotics community, especially around navigation tuning, reproducible simulation scenarios, launch validation, and teaching workflows.
Try it today! We would appreciate a ⭐️ if you like this work and want to support it.
Cheers!
Good day everyone,
I've got a rather complex question about how to structure my (our) ROS 2 workspace. I am a member of a student research group focused on autonomous driving. We are using ROS 2 and are just starting out. The goal is to create a sustainable infrastructure for our code that will be intuitive for future generations to use as well. Our team is a bit torn about which structure to use. I will list all the proposed ideas below and would like to ask you (seasoned devs) for an opinion.
Some additional context: We work with multiple model cars (scale 1:10 / 1:8) that differ in construction. So some of our code will be tailored to a specific car (for example, steering/odometry) and some will be generic (for example, navigation logic on maps). To keep our code portable, we make heavy use of Docker and run basically everything we can in containers. In addition to the real cars, we also work with a Gazebo simulation, whose configs etc. have to live somewhere as well.
Option 1 – A single repo
EVERYTHING lands in a single repo. Car-specific and generic code alike. The simulation lands in there as well. We design a sophisticated structure.
Option 2 – (n+2) repos
n is the number of cars, +1 for a generic repo, +1 for the simulation. Each car gets a repo with its specific code; the simulation configs get a repo. The specific repos include the generic repo via git submodules or a .repos file.
Option 3 – One repo per ROS package
Very modular, self-explanatory. We create multiple repos; each package gets its own. To combine them, we create a meta-repo per car that includes required code via git submodules or .repos files.
The final question: Should we use git submodules or .repos files?
What are your thoughts about this?
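For what it's worth, here is roughly what a per-car .repos file (vcstool format) would look like for Options 2/3; the repo names and URLs are made up:

```yaml
# car_alpha.repos  (pulled in with: vcs import src < car_alpha.repos)
repositories:
  generic_stack:               # shared navigation / mapping code
    type: git
    url: https://github.com/our-group/generic_stack.git
    version: main
  car_alpha_base:              # car-specific steering / odometry
    type: git
    url: https://github.com/our-group/car_alpha_base.git
    version: main
  simulation:                  # Gazebo worlds and configs
    type: git
    url: https://github.com/our-group/simulation.git
    version: main
```

(A .repos file is just a manifest that vcstool reads: `vcs import` clones everything listed, and `vcs pull` updates it.)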
Update since: https://www.reddit.com/r/robotics/comments/1sq4rip/comment/oioxsel/
Resolved two subtle issues that I would never have been aware of without pondering for days about what the heck could go wrong with a manually coded walking gait. Hopefully, it can help someone in the future who tries to implement the same thing:
Will add directionality control next (still manually), and finally tidy up the GitHub repo for those who are interested. Afterward, I really want to try out RL on ROS 2, but I'll need to enable the IMU and probably strengthen the second servo joint first (the LIDAR so far is just a counterweight, lol).
Full ROS 2 code + all commercial/3D-printed parts:
https://github.com/SphericalCowww/CubicDoggo
Just a rant. I guess ROS 2 relies heavily on template metaprogramming. ROS 1 definitely built faster and made the development loop way quicker and more enjoyable.
https://reddit.com/link/1tbuxmk/video/rpjibnzfxv0h1/player
I'm using ROS2 Humble with RPLidar and RViz2.
The robot TF moves correctly, but the LaserScan also rotates/moves together with the robot instead of staying fixed to the environment.
What I checked:
- /scan frame_id = laser_link
- base_link -> laser_link TF is correct

# A new ROS2-native, SUPERFAST visualizer written in Rust — `fastviz`
Hi everyone,
I've been hacking on a side project called **fastviz**: a Rust-based 3D visualizer that runs as a native ROS2 node, built on `wgpu` and `egui`. RViz has been the workhorse of the community for many years and isn't going anywhere — fastviz is just an experiment to see how much smoothness and headroom we can get out of a pure-Rust + GPU-native pipeline, and I wanted to share where it's at in case it's useful to others.
It's at a preliminary stage — only a handful of message types are wired up so far — but the core architecture is in place and it already renders things like TurtleBot 4 in Gazebo end-to-end.
**Repo:** https://github.com/ksatyaki/fastviz
---
## The bits I'm most excited about
### 1. It IS a ROS2 node
No bridge, no middleware, no separate process. fastviz subscribes directly to topics via `r2r`, so there's nothing extra to wire up between your robot and the visualizer.
### 2. The render thread never touches ROS2
The `r2r` executor runs on a dedicated thread; the renderer talks to it through an `Arc<RwLock<SceneGraph>>` with brief, write-only handoffs. The UI never blocks on DDS — frames stay smooth even when a noisy topic is flooding the graph.
### 3. GPU-accelerated via `wgpu`
Vulkan on Linux, Metal on macOS, DX12 on Windows, and WebGPU is on the menu too. Same renderer everywhere.
### 4. Revision-cached render passes
A `revision()` counter on the scene graph drives pass-level caching, so an idle scene costs ~zero CPU. Walking away from the visualizer doesn't pin a core.
### 5. GPU-side per-entity transforms for point clouds
The point-cloud pipeline is instanced, per-entity transforms happen on the GPU, and the prepare step is revision-cached with buffer reuse. PointCloud2 streams stay cheap.
### 6. TF tree reimplemented in Rust
No `tf2` C++ dependency — TF maintenance lives in pure Rust alongside the rest of the ingestion layer.
### 7. TOML config as the source of truth
Layouts are declared in a TOML file — diff-friendly, version-controllable, and easy to commit alongside your robot's launch config.
### 8. Polled wildcard topic discovery
Drop `"*"` into a topic list and every matching message type in the ROS graph gets auto-subscribed within about a second. Handy when you're exploring an unfamiliar bag or sim and don't want to enumerate topics by hand.
### 9. Per-topic QoS overrides in config
`reliability`, `durability`, and `depth` are all settable per topic from the same TOML file.
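For a feel of what that looks like, here's an illustrative snippet; the key names are a sketch, and `configs/turtlebot4.toml` in the repo is the actual reference:

```toml
# Illustrative only; see configs/turtlebot4.toml for the real schema.
[scans]
topics = ["/scan"]

[points]
topics = ["*"]                 # wildcard: auto-subscribe every matching topic in the graph
reliability = "best_effort"    # per-topic QoS overrides
durability  = "volatile"
depth       = 5
```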
### 10. URDF support with STL / OBJ / DAE meshes
URDF parsing via `urdf-rs`; mesh loading covers STL, OBJ, and Collada. `package://` URIs resolve through `AMENT_PREFIX_PATH`, and `JointState` drives the FK.
### 11. Dev container + release Docker image
The `.devcontainer/` ships an Ubuntu 24.04 + ROS2 Jazzy image with `r2r` build deps, the Vulkan loader, and NVIDIA passthrough already wired up. A root `Dockerfile` also builds a release image you can `docker run`.
---
## What's supported today (early days!)
This is very preliminary — only a few message types are supported right now:
| Topic kind | Message |
|---|---|
| `[map]` | `nav_msgs/OccupancyGrid` |
| `[poses]` | `geometry_msgs/PoseStamped` |
| `[pose_arrays]` | `geometry_msgs/PoseArray` |
| `[paths]` | `nav_msgs/Path` |
| `[scans]` | `sensor_msgs/LaserScan` |
| `[points]` | `sensor_msgs/PointCloud2` |
| `[tf]` | `tf2_msgs/TFMessage` |
| `[urdf]` | `std_msgs/String` + `JointState` |
`MarkerArray`, `Image`, `Imu`, `Odometry`, and friends are on the near-term roadmap. ROS2 Jazzy is the only distro currently tested.
---
## Try it
```sh
git clone https://github.com/ksatyaki/fastviz
cd fastviz
source /opt/ros/jazzy/setup.bash
cargo build --release
cargo run -p app -- --config configs/turtlebot4.toml
```
Or via the dev container — open the folder in VS Code / Cursor and pick "Reopen in Container".
---
## Help wanted
If you give it a spin, I'd genuinely love to hear:
- which message types you'd want supported next,
- what kinds of bags would make good benchmarks,
- any architectural input on plugins, MCAP playback, or multi-window layouts.
Issues, PRs, and "this completely broke on my robot" reports are all very welcome.
Hopefully this can grow into something useful for the community. Thanks for taking a look!
**GitHub:** https://github.com/ksatyaki/fastviz
Hey everyone,
Building a SO-101 6-DOF arm for autonomous pick and place with drop recovery. Using LeRobot + ACT policy + ROS2 Jazzy on Ubuntu 24.04.
My setup:
- Single SO-101 follower arm (can't afford the leader arm)
- Lenovo i3 laptop, Intel UHD only, no NVIDIA GPU
- PyBullet and MuJoCo working, Isaac Lab is out for me
What I wanted to know:
1. Single arm training: LeRobot normally needs leader + follower. Has anyone trained ACT with just one arm? Keyboard teleoperation? Gamepad? Sim-to-real from MuJoCo?
2. Simulation without GPU: Isaac Lab is unusable on my machine. Is Webots or Genesis viable on Intel UHD? Any ROS2-friendly sim that actually works on CPU?
3. Virtual demo collection: any tool or GitHub repo that lets you move a virtual arm with keyboard/mouse and export as a LeRobot-compatible dataset?
4. Drop recovery: using STS3215 servo load register + YOLOv8 wrist camera fusion to detect drops, then FoundationPose for re-grasp. Has anyone done anything similar on cheap hardware? Any gotchas?
Any GitHub repos, Discord servers, or tips appreciated 🙏
Stack: ROS2 Jazzy | LeRobot | ACT | PyBullet | MuJoCo | YOLOv8 | FoundationPose | MoveIt2
Hey everyone, I've been banging my head against this for a while and could really use some experienced eyes on my setup.
The robot: Segway RMP platform, differential drive. It's a ground robot that does long outdoor runs, typically 30-60 minutes, covering several kilometers on a university campus. Mix of open areas, tree-lined paths, and some areas with buildings nearby causing GPS multipath.
Sensors:
- nav_msgs/Odometry on /odom/wheels
- navsat_transform_node outputting to /gps/odometry
- ekf_node at 150 Hz

The problem: Most of the time the filter works okay-ish, but on some runs it completely falls apart. Like, the estimated position jumps to somewhere millions of meters away after what I think is a GPS spike getting accepted, and then the filter never recovers. It just keeps publishing nonsense from that point on. On other runs the path length ratio is completely off (the filter thinks the robot traveled either way more or way less than it actually did).
Also running ukf_node in parallel to compare, and that one is just spitting out "Critical Error, NaNs were detected in the output state of the filter" almost constantly. So the UKF option seems totally broken for my setup.
Current config:
rl_ekf:
  ros__parameters:
    frequency: 150.0
    sensor_timeout: 0.5
    two_d_mode: true
    transform_time_offset: 0.0
    transform_timeout: 0.1
    print_diagnostics: false
    debug: false
    map_frame: map
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom

    # Wheel odometry: fuse Vx and Vyaw only (differential drive, no lateral velocity)
    odom0: /odom/wheels
    odom0_config: [false, false, false,
                   false, false, false,
                   true,  false, false,
                   false, false, true,
                   false, false, false]
    odom0_differential: false
    odom0_relative: false
    odom0_queue_size: 10
    odom0_twist_rejection_threshold: 4.03

    # GPS via navsat_transform: fuse XY position
    odom1: /gps/odometry
    odom1_config: [true,  true,  false,
                   false, false, false,
                   false, false, false,
                   false, false, false,
                   false, false, false]
    odom1_differential: false
    odom1_relative: false
    odom1_queue_size: 10
    odom1_pose_rejection_threshold: 3.72

    # IMU: roll/pitch only (no magnetometer, no absolute yaw), angular vel, accel
    imu0: /imu/data
    imu0_config: [false, false, false,
                  true,  true,  false,
                  false, false, false,
                  true,  true,  true,
                  true,  true,  true]
    imu0_differential: false
    imu0_relative: false
    imu0_remove_gravitational_acceleration: true
    imu0_queue_size: 50

    process_noise_covariance: [0.05, 0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.05, 0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.06, 0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.03, 0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.03, 0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.06, 0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.025, 0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.025, 0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.04, 0.0,  0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.01, 0.0,  0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.01, 0.0,  0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.02, 0.0,  0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.01, 0.0,  0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.01, 0.0,
                               0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,   0.0,   0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.015]
Process noise covariance is diagonal, values roughly in the 0.01-0.06 range for position/orientation and 0.01-0.025 for velocity/acceleration. I basically eyeballed these from examples online.
Specific questions:
1. GPS rejection gating: I set odom1_pose_rejection_threshold to sqrt(chi2(2, 0.999)) = 3.72. Is this too loose? Too tight? When a GPS spike comes in and passes the gate, is there any way to recover without restarting the node?
2. Heading: there is no absolute yaw source (no magnetometer, so yaw is false in that position). The filter figures out heading from GPS travel. How long does this take to converge, and is there a better way to handle it?
3. I have odom0_differential: false with wheel odometry publishing absolute twist... is this correct or should it be true? I've seen conflicting advice on this.

Running ROS 2 Jazzy on Ubuntu 24.04.
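(For anyone checking the math on question 1: the 3.72 is just the square root of the chi-squared 99.9% quantile with 2 degrees of freedom, e.g. in scipy:)

```python
from scipy.stats import chi2

threshold = chi2.ppf(0.999, df=2) ** 0.5   # ~3.717, Mahalanobis gate for a 2-DOF (x, y) measurement
```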
Hi there,
I'm currently considering installing CachyOS, but before I go on I'd like to get some feedback about this OS. I've heard that CachyOS is pretty fast and good for gaming, but is it also good for development? Is there someone who could help me?
This is a list of tools I use the most:
- Flutter (with VSCode)
- Android Studio (with Java)
- ROS2
As far as I know, Flutter as well as Android Studio are probably no problem, but how about ROS2? Is there anyone who has some experience with this? Because I've heard that ROS2 officially only supports Ubuntu, and I just don't want to use Ubuntu...
I’m curious how serious robotics teams are handling real-world data collection in ROS 2.
For small experiments, recording topics with rosbag2 and writing custom scripts seems fine.
But once a team starts collecting data for robot learning, teleoperation, debugging, or fleet analytics, the workflow can get messy quickly.
For example:
- camera streams
- joint states
- force / torque data
- operator commands
- robot state topics
- failure cases
- teleop sessions
- rosbag2 replay data
The hard part seems to be less about recording data, and more about turning it into something clean, searchable, synchronized, and useful later.
For teams using ROS 2 in production or serious research:
Are you mostly using rosbag2 + custom scripts, or has your team built a more structured internal data pipeline for collection, storage, filtering, replay, and training?
Hey everyone,
I’m looking to dive into self-driving / autonomous systems using ROS and wanted to ask for some guidance from people with more experience.
What would you say is the best open-source self-driving project overall right now (in terms of maturity, community support, and real-world applicability)? I’m hoping to find something solid to study and possibly build on.
Also, I’m especially interested in agricultural applications (like autonomous tractors, field robots, etc.). Are there any standout open-source projects or frameworks focused on agriculture that you’d recommend?
Appreciate any suggestions, experiences, or pointers 🙏
I'm a full-stack C# dev with a mechanical and electrical background.
Experience in reverse engineering (e.g. hijacking PS5 controllers, etc.)
Building robots and IoT systems
Building fully secure APIs, with DevOps experience
And much more...
Looking to team up, make friends, and share knowledge.
Collaborate on projects; I've had enough of working solo.
Designed the chassis in Fusion 360, exported to URDF, and built the full stack using ROS 2.
Stack:
Nav2 for navigation & path planning
ArUco-based visual docking for precise alignment
Custom waypoint sequencing for multi-shelf tasks
Gazebo + RViz for simulation & visualization
Challenge:
LiDAR point cloud rotated with the robot in RViz, breaking the mapping and navigation.
Root cause:
odom/TF mismatch during turns.
Fix:
Developed a GroundTruthOdom node using Gazebo pose data to publish stable /odom and consistent TF, including handling ROS-Gazebo timestamp issues.
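A stripped-down sketch of that node (rclpy); the Gazebo pose topic name here is illustrative, and the real node handles the timestamp reconciliation more carefully:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped, TransformStamped
from nav_msgs.msg import Odometry
from tf2_ros import TransformBroadcaster

class GroundTruthOdom(Node):
    """Republish Gazebo ground-truth pose as /odom plus a stable odom->base_link TF."""

    def __init__(self):
        super().__init__('ground_truth_odom')
        # Pose bridged from Gazebo (e.g. via ros_gz_bridge); topic name is illustrative
        self.create_subscription(PoseStamped, '/model/robot/pose', self.on_pose, 10)
        self.odom_pub = self.create_publisher(Odometry, '/odom', 10)
        self.tf_broadcaster = TransformBroadcaster(self)

    def on_pose(self, msg: PoseStamped):
        now = self.get_clock().now().to_msg()   # restamp to sidestep ROS-Gazebo clock mismatches

        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose = msg.pose
        self.odom_pub.publish(odom)

        tf = TransformStamped()
        tf.header.stamp = now
        tf.header.frame_id = 'odom'
        tf.child_frame_id = 'base_link'
        tf.transform.translation.x = msg.pose.position.x
        tf.transform.translation.y = msg.pose.position.y
        tf.transform.translation.z = msg.pose.position.z
        tf.transform.rotation = msg.pose.orientation
        self.tf_broadcaster.sendTransform(tf)

def main():
    rclpy.init()
    rclpy.spin(GroundTruthOdom())

if __name__ == '__main__':
    main()
```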
In the video: robot autonomously services requests for Shelf B and Shelf C and delivers them to the drop-off zone.
Happy to discuss the system or challenges!
I want to know whether using ROS and Gazebo on WSL (Windows Subsystem for Linux) causes any problems, or whether it works just as it does on Ubuntu.
Been building Altara, a React component library that connects to rosbridge and ships with real robotics components — attitude indicator, occupancy grid, LiDAR point cloud viewer, time-series charts, and more.
Live demo: https://jayasaikishanchapparam.github.io/altara/demo/
GitHub: https://github.com/JayaSaiKishanChapparam/altara
Tired of seeing people rebuild the same GCS components from scratch. Would genuinely love feedback on what's missing or wrong.
I've been working on robotics systems (autonomy stacks) for ~5 years. I keep seeing the same pattern across different startups: the tech is usually "good enough", but things break when it comes to real deployments, customers, and real-world usage. Demos work, but real environments don't. Suddenly timelines slip, teams get stuck fixing edge cases, and sales slow down or stop.
I’m trying to understand this better from the inside: What actually breaks first when you try to deploy your system in the real world?
I'd love to hear real examples more than theory.
(If you’re open to sharing more, I added a short questionnaire in the comments and I'll be happy to share results later.)