u/Hairy_Strawberry7028

Autopilot research airframe advice: X500 V2 vs X650 for Jetson + sensors?

I know this is an ArduPilot community; I’m not trying to start a PX4 vs ArduPilot debate. I’m asking here because this community tends to have real Pixhawk/autopilot airframe and payload experience.

I’m choosing a multirotor platform for an indoor-first autonomy/research project:

- open autopilot stack, currently planning PX4 + ROS 2

- Jetson companion computer

- depth/perception sensors

- future small gripper/aerial manipulation payload

- not FPV, not a camera drone

The hardware choice is **Holybro X500 V2 vs Holybro X650**.

For people who have built or tuned autonomy-capable Pixhawk multirotors:

- Would you start with the X500 V2 or go straight to X650 for Jetson + sensors + future payload?

- Does the X500 class become too weight/power/space-limited once companion compute is added?

- Is the X650’s payload and flight-time margin worth the size/indoor safety tradeoff?

- Any powertrain, PDB/BEC, vibration, EMI, or failsafe gotchas you’d watch for?

- Any US vendors with reliable Holybro stock/fast shipping, or trusted used sources for research-grade setups?

- Anything you’d avoid: Pixhawk clones, underpowered ESC/motor combos, bad 5V rails, misleading vendor stock, etc.?

Real setup details would help a lot: frame, FC, battery, companion computer, payload, and indoor/outdoor use.
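For reference, here is the rough mass and hover-margin budget I have been penciling in. Every number below is my own guess from skimming spec sheets, not a measurement, so corrections are very welcome:

```python
# Back-of-envelope takeoff-weight margin check (all masses are assumed, not measured).
frames = {
    "X500 V2": {"empty_kg": 1.0, "max_takeoff_kg": 2.0},  # assumed from spec-sheet skimming
    "X650":    {"empty_kg": 1.8, "max_takeoff_kg": 4.0},  # assumed
}

payload_kg = {
    "battery":        0.60,  # 4S/6S pack, assumed
    "jetson_carrier": 0.25,  # Orin Nano/NX + carrier + heatsink, assumed
    "depth_camera":   0.08,
    "wiring_misc":    0.15,
    "future_gripper": 0.30,  # placeholder
}

added = sum(payload_kg.values())
for name, f in frames.items():
    auw = f["empty_kg"] + added
    margin = f["max_takeoff_kg"] - auw
    print(f"{name}: AUW ~{auw:.2f} kg, margin to max takeoff ~{margin:+.2f} kg")
```

If my assumed numbers are anywhere near right, the X500 V2 ends up with very little margin once the gripper shows up, which is exactly the thing I want sanity-checked.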

u/Hairy_Strawberry7028 — 5 days ago

Jetson on a PX4 drone: X500 V2 vs X650 payload/power headroom?

I’m choosing a multirotor platform specifically because I want to carry a Jetson companion computer for real-time autonomy/perception, and I’d like advice from people who have put Jetsons on moving robots or UAVs.

Context:

- PX4 + ROS 2 autonomy stack

- indoor-first robotics testbed

- Jetson companion computer, likely Orin Nano/NX class

- depth camera/perception sensors

- future small gripper/aerial manipulation payload

- not FPV and not a camera drone

Current frame decision: **Holybro X500 V2 vs Holybro X650**.

Jetson-specific things I’m trying to sanity check:

- Which platform leaves enough power, cooling, mounting, and payload margin for Jetson + sensors?

- Does an X500 V2 get cramped/weight-limited too quickly once you add regulators, wiring, depth camera, telemetry, and safety hardware?

- Is an X650 meaningfully better for Jetson development, or does the larger aircraft become too awkward for indoor-first testing?

- Any power setup you’d avoid for Jetson on a drone? Separate BEC, battery tap, isolated DC-DC, brownout gotchas, EMI/noise problems, etc.?

- Any US vendor/source tips for getting Holybro/PX4 kits or Jetson-drone hardware quickly to San Francisco?

Real build details would be ideal: Jetson model, power supply, frame, flight controller, battery, sensors, and what actually failed or worked.
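On the power side, this is the rough companion-computer budget I am assuming (all figures are placeholders, not measured draw). Mostly I want to know whether a dedicated battery-tap DC-DC sized like this is the right shape of solution:

```python
# Rough Jetson power-rail budget (all figures assumed, not measured).
battery_v_nominal = 22.2   # 6S LiPo, assumed
jetson_w          = 25.0   # Orin NX-class board at a mid power mode, assumed
sensors_w         = 5.0    # depth camera + USB peripherals, assumed
dcdc_efficiency   = 0.90   # isolated DC-DC from a battery tap, assumed

load_w  = jetson_w + sensors_w
input_w = load_w / dcdc_efficiency
input_a = input_w / battery_v_nominal

print(f"Companion load: {load_w:.1f} W -> ~{input_w:.1f} W / {input_a:.2f} A from the 6S pack")
# Extra pack draw relative to an assumed 5 Ah battery, on top of whatever the motors pull.
print(f"Extra pack draw: ~{input_a / 5.0:.2f} C on an assumed 5 Ah battery")
```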

u/Hairy_Strawberry7028 — 5 days ago

Aerial robotics platform choice for ROS2/PX4 + Jetson + future gripper: X500 V2 or X650?

I’m trying to choose a programmable aerial robotics platform, not an FPV or camera drone.

Project constraints:

- indoor-first autonomy/robotics experiments

- ROS 2 + PX4 stack

- Jetson companion computer

- depth/perception sensors

- future small gripper or aerial manipulation payload

- enough room/power headroom to iterate without immediately rebuilding the whole aircraft

The two obvious Holybro options seem to be **X500 V2** and **X650**.

My concern is the tradeoff:

- X500 V2: smaller, cheaper, common PX4 dev kit, likely easier indoors, but maybe too payload-limited after Jetson + sensors + manipulation hardware.

- X650: more payload and flight-time margin, probably better for real manipulation experiments, but maybe too large/dangerous/awkward for indoor-first testing.

For people with real aerial robotics/autonomy experience, which would you pick and why?

I’m also trying to buy in the US/San Francisco area and get it within about a week. If you know vendors that currently handle Holybro/PX4 kits well, or trusted used sources for research-grade multirotor setups, I’d appreciate pointers.

Anything you would avoid is useful too: underpowered powertrains, old Pixhawk clones, frames that are miserable indoors, vendors with slow stock/shipping, payload assumptions that look fine on paper but fail in practice, etc.

u/Hairy_Strawberry7028 — 5 days ago
▲ 5 r/ROS

ROS 2 + PX4 offboard testbed: Holybro X500 V2 or X650 for Jetson indoor autonomy?

I’m choosing a hardware platform for a ROS 2 + PX4 indoor-first autonomy project and would value advice from people who have actually run ROS 2 offboard control on real multirotors.

This is **not** for FPV and not for camera/cinematic work. Goal is a robotics research/testbed platform:

- PX4 autopilot with ROS 2 offboard control

- Jetson companion computer

- depth camera / perception sensors

- eventually a small gripper or aerial manipulation payload

- indoor testing first, outdoor testing later

I’m currently comparing **Holybro X500 V2** vs **Holybro X650**.

Questions for ROS/PX4 users:

- Which frame would you choose for ROS 2 + PX4 + Jetson work today?

- Is the X500 V2 payload/space/power headroom enough once you add Jetson + sensors, or does it become limiting quickly?

- Is the X650 worth the added payload and flight-time headroom, or is it too big/awkward for indoor-first experiments?

- Any gotchas around vibration, power distribution, companion-computer mounting, Micro XRCE-DDS / MAVLink links, or estimator setup?

- Any hardware/vendor choices I should avoid if I want a reliable ROS 2 research stack rather than an FPV hobby build?

I’m in San Francisco and trying to buy from a US vendor with actual stock/fast shipping, so any recent vendor experience is also useful. Please mention your setup if you reply: frame, FC, companion computer, sensors, and whether you have flown ROS 2 offboard on hardware.
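For context on what I mean by "ROS 2 offboard", this is roughly the shape of node I plan to run on the Jetson over the uXRCE-DDS bridge. It is a minimal sketch assuming the standard px4_msgs topics, not a tested build, and it deliberately omits arming and mode switching via VehicleCommand:

```python
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy
from px4_msgs.msg import OffboardControlMode, TrajectorySetpoint


class OffboardHover(Node):
    """Streams offboard heartbeats plus a fixed position setpoint at ~20 Hz."""

    def __init__(self):
        super().__init__('offboard_hover')
        qos = QoSProfile(reliability=ReliabilityPolicy.BEST_EFFORT,
                         history=HistoryPolicy.KEEP_LAST, depth=1)
        self.mode_pub = self.create_publisher(
            OffboardControlMode, '/fmu/in/offboard_control_mode', qos)
        self.sp_pub = self.create_publisher(
            TrajectorySetpoint, '/fmu/in/trajectory_setpoint', qos)
        self.timer = self.create_timer(0.05, self.tick)

    def tick(self):
        now_us = int(self.get_clock().now().nanoseconds / 1000)

        mode = OffboardControlMode()
        mode.timestamp = now_us
        mode.position = True          # position setpoints only
        self.mode_pub.publish(mode)

        sp = TrajectorySetpoint()
        sp.timestamp = now_us
        sp.position = [0.0, 0.0, -1.0]  # NED: hold ~1 m altitude
        sp.yaw = 0.0
        self.sp_pub.publish(sp)


def main():
    rclpy.init()
    rclpy.spin(OffboardHover())


if __name__ == '__main__':
    main()
```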

u/Hairy_Strawberry7028 — 5 days ago

PX4/ROS2 research drone buying advice: Holybro X650 vs X500 V2 for indoor Jetson + future gripper?

Looking for advice from people who have actually built/flown PX4/ROS2 autonomy or research platforms, especially with a companion computer and offboard control. I am explicitly **not** looking for FPV or camera-drone recommendations.

Current goal:

- Indoor-first robotics/autonomy project; outdoor testing later.

- PX4 + ROS 2 + Jetson companion computer.

- Future gripper/aerial manipulation payload, plus likely depth camera/sensors.

- More research testbed than cinematic platform.

Main decision: **Holybro X500 V2 vs Holybro X650**.

My current read is that the X500 V2 looks cheaper, more common as a PX4 dev kit, and less painful indoors. The X650 looks like it has much better payload and flight-time headroom for Jetson + sensors + future gripper, but it may be too large/dangerous/awkward for indoor-first work.

For people who have actually carried Jetson/sensors/manipulation payloads, which would you buy today and why?

Buying/logistics questions:

- I am in San Francisco, US, and need something actually in stock that can arrive within about a week.

- Best US vendors right now for Holybro PX4 kits/parts? GetFPV, Advexure, RobotShop, etc.?

- Any vendor you trust for Holybro kits specifically, or should I order direct from Holybro despite international shipping/import-duty/tariff uncertainty?

- Any trusted used/second-hand sources for PX4 research setups? University/lab surplus, PX4 Discord, RC Groups classifieds, eBay sellers, etc.?

- Anything to avoid: old Pixhawk clones, mismatched power systems, underpowered X500 setups, discontinued components, weak support, unsafe indoor frame choices, etc.?

If you reply, please mention your real setup/experience: frame, flight controller, battery, companion computer, payload, indoor vs outdoor use, and whether you have done ROS2/PX4 offboard or aerial manipulation work.

u/Hairy_Strawberry7028 — 5 days ago

Do old PCs / mini PCs make sense as edge AI boxes for physical-world deployments?

Most edge AI hardware discussions focus on Jetson, NPUs, or dedicated accelerators. I’m curious about the less elegant option: old PCs and cheap mini PCs as local inference boxes for physical-world systems.

For use cases like cameras, inspection, lab automation, small robots, home automation, or field devices, a local x86 box can avoid cloud latency/cost without requiring specialized embedded hardware.

Questions:

- Are people using old PCs / mini PCs for local vision or multimodal inference?

- What makes them better or worse than Jetson/SBC-style hardware?

- Is power draw the main downside, or is deployment reliability/software maintenance harder?

- Any practical setups that have worked well for camera-to-decision workloads?
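To make "camera-to-decision" concrete, the kind of loop I have in mind is just this: OpenCV plus ONNX Runtime on CPU, where `model.onnx` is a placeholder for whatever small classifier you actually deploy:

```python
import time

import cv2
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder for whatever small classifier/detector you deploy.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)  # any USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Typical 224x224 NCHW float preprocessing; adjust to the model you actually use.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))[None, ...]

    t0 = time.perf_counter()
    scores = session.run(None, {input_name: x})[0]
    latency_ms = (time.perf_counter() - t0) * 1000

    print(f"decision={int(np.argmax(scores))} latency={latency_ms:.1f} ms")
```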

u/Hairy_Strawberry7028 — 6 days ago

What are the best hardware startup wedges in physical AI right now?

Curious how hardware founders are thinking about physical AI.

The obvious categories are crowded or capital-intensive: humanoids, autonomous vehicles, warehouse robotics, drones. But there may be better startup wedges around the surrounding stack.

Examples I’m thinking about:

- edge inference hardware / deployment layers

- low-cost sensing for robots and industrial equipment

- retrofit kits for existing machines

- inspection or QA systems for specific verticals

- fleet data capture and evaluation

- safety / monitoring / compliance layers

- devtools for robotics teams

For people who have built hardware companies: where do you see opportunities that are painful enough for customers to pay for, but still narrow enough for a startup to attack?

u/Hairy_Strawberry7028 — 6 days ago

Anyone running multimodal / vision models on edge hardware instead of desktop GPUs?

Most local LLM/VLM discussion I see is around desktop GPUs, Macs, or servers. I’m curious about deployments on much more constrained hardware: Jetsons, mobile NPUs, ARM CPUs, SBCs, drones/robots, or old PCs.

Recent datapoint from a deployment I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

For people doing local multimodal inference outside normal workstation setups:

- What hardware are you targeting?

- Which models are practical today?

- Are you using llama.cpp-style stacks, ONNX/TensorRT, vendor SDKs, or custom runtimes?

- What breaks first: RAM/VRAM, latency, cold start, unsupported ops, quality after quantization, or packaging?

Mostly looking to compare notes on what actually works in the ugly edge cases.

u/Hairy_Strawberry7028 — 6 days ago

Are industrial automation teams running vision AI fully on-device yet?

Curious what people in industrial automation are actually doing with larger vision / multimodal models at the edge.

A lot of factory/field deployments seem like bad fits for cloud inference: latency, network reliability, data privacy, safety, and cost all push toward local inference on an industrial PC, Jetson, ARM box, or some vendor accelerator.

Recent datapoint from a deployment I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

For people deploying vision AI in industrial settings:

- What hardware are you using near the line / machine?

- Are you running cloud, on-prem server, or fully on-device?

- What breaks first: latency, camera/preprocessing, model accuracy after quantization, power/thermal, network, or integration with PLC/SCADA/MES?

- Are larger VLM-style models useful yet, or is most production work still classical CV + smaller models?

u/Hairy_Strawberry7028 — 6 days ago

How should edge deployment be evaluated after quantizing a vision model?

Question for people who have shipped ML models onto constrained hardware.

When you quantize/prune/distill a vision or multimodal model for edge deployment, how do you decide the compressed model is still good enough?

A recent datapoint from a deployment I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

The obvious eval is final task accuracy, but I’m wondering if people also track:

- per-class degradation after quantization

- edge-case / long-tail slices

- latency percentiles and cold start

- camera/sensor-specific evals

- hardware-specific regressions

- production feedback loops or human review

What eval setup has worked best for you when model quality and hardware latency both matter?
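By "per-class degradation" I mean something this simple, where `float_preds` and `int8_preds` come from running both model variants on the same eval set (a sketch of my own tracking, not a tool recommendation):

```python
import numpy as np


def per_class_accuracy(y_true, y_pred, num_classes):
    acc = {}
    for c in range(num_classes):
        mask = (y_true == c)
        acc[c] = float((y_pred[mask] == c).mean()) if mask.any() else float("nan")
    return acc


def quantization_report(y_true, float_preds, int8_preds, num_classes, latencies_ms, budget_ms):
    fp = per_class_accuracy(y_true, float_preds, num_classes)
    q = per_class_accuracy(y_true, int8_preds, num_classes)
    for c in range(num_classes):
        drop = fp[c] - q[c]
        flag = "  <-- check" if drop > 0.02 else ""  # arbitrary 2-point threshold
        print(f"class {c}: float={fp[c]:.3f} int8={q[c]:.3f} drop={drop:+.3f}{flag}")

    lat = np.asarray(latencies_ms)
    print(f"p50={np.percentile(lat, 50):.1f} ms  p99={np.percentile(lat, 99):.1f} ms  "
          f"within {budget_ms:.0f} ms budget: {(lat <= budget_ms).mean():.1%}")
```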

u/Hairy_Strawberry7028 — 6 days ago

Anyone running vision / multimodal inference on Orange Pi or similar SBCs?

Most edge AI discussion I see is around Jetson, but I’m curious how far people have gotten with Orange Pi / RK3588-class boards or similar SBCs.

Recent reference point from a Jetson deployment I worked on: multimodal classifier on Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

For Orange Pi / RKNN / similar setups:

- What vision models are practical today?

- Are you using RKNN, ONNX Runtime, NCNN, OpenCV DNN, or something else?

- What hurts most: NPU op support, quantization quality, memory bandwidth, preprocessing, or packaging?

- Have you gotten any larger multimodal/VLM-style model to run usefully, or is the sweet spot still smaller CV models?

u/Hairy_Strawberry7028 — 6 days ago

For autonomy stacks, where do large vision models actually run: onboard, cloud, or offline only?

I’m trying to understand the production reality for larger vision / multimodal models in autonomous systems.

A lot of demos can use workstation/cloud inference, but production autonomy has harder constraints: latency, connectivity, safety, power/thermal, and deterministic behavior. That seems to push more inference onboard, but the hardware envelope is painful.

Recent datapoint from a deployment I worked on outside AV: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

For people working around autonomy:

- Are larger vision/VLM-style models running onboard yet, or mostly offline labeling/debugging?

- What hardware class is realistic for production inference?

- What breaks first: latency, memory, thermal/power, model quality after compression, sensor/imaging mismatch, or evaluation?

- Do you see hybrid cloud ever being acceptable for safety-critical perception, or only non-critical features?

u/Hairy_Strawberry7028 — 6 days ago
▲ 19 r/CUDA

For edge inference, when do you drop below TensorRT/ONNX and write custom CUDA kernels?

Question for people who do CUDA work on production inference paths.

For large vision / multimodal models running on edge devices, the first pass is usually export/compile/quantize with TensorRT, ONNX Runtime, vendor SDKs, etc. But sometimes a small set of operators or pre/post-processing steps dominates the latency trace enough that custom CUDA kernels become worth it.

Recent datapoint from a Jetson Orin NX deployment I worked on: multimodal classifier, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

Curious how CUDA folks decide when custom kernels are worth the maintenance cost:

- What trace/profile signs make you reach for custom CUDA?

- Do you usually target model ops, preprocessing, memory layout/conversion, or batching?

- How do you keep custom kernels portable across Jetson vs larger NVIDIA GPUs?

- Any profiling workflow you trust for this kind of edge latency work?
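For the profiling-workflow question, what I currently do is wrap each pipeline stage in NVTX ranges so the phases show up by name in an nsys timeline before deciding anything needs a custom kernel. Sketch assuming a PyTorch pipeline; the stage names are mine:

```python
import numpy as np
import torch


def run_frame(model: torch.nn.Module, frame: np.ndarray) -> torch.Tensor:
    # CPU preprocessing (placeholder: real resize/normalize would go here).
    torch.cuda.nvtx.range_push("preprocess")
    x = torch.from_numpy(frame).float().div_(255.0).permute(2, 0, 1).unsqueeze(0)
    torch.cuda.nvtx.range_pop()

    # Host-to-device copy as its own named range.
    torch.cuda.nvtx.range_push("h2d_copy")
    x = x.to("cuda", non_blocking=True)
    torch.cuda.nvtx.range_pop()

    # Model forward pass.
    torch.cuda.nvtx.range_push("inference")
    with torch.inference_mode():
        y = model(x)
    torch.cuda.nvtx.range_pop()

    torch.cuda.synchronize()  # flush async work so the ranges reflect real time
    return y

# Capture with something like: nsys profile -t cuda,nvtx -o trace python your_script.py
```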

u/Hairy_Strawberry7028 — 6 days ago

Orin NX edge inference: what usually dominates your latency trace?

I’m comparing notes with people deploying larger vision / multimodal models on Jetson hardware.

Recent datapoint from a deployment I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

The obvious work is quantization / TensorRT / pruning, but in practice the trace often comes down to a few hot operators, data movement, preprocessing, or startup behavior.

Curious what people here usually see on Orin / Jetson deployments:

- Are you bottlenecked more by inference kernels, preprocessing, memory copies, cold start, or power/thermal limits?

- Are you mostly using TensorRT, ONNX Runtime, DeepStream, custom CUDA, or vendor examples?

- Which model classes have been most annoying to optimize?

- Do you measure cold start as part of your target latency budget?

u/Hairy_Strawberry7028 — 6 days ago
▲ 3 r/mlops

How are teams treating edge model deployment in their MLOps pipeline?

I’m trying to compare notes on MLOps for edge / physical AI deployments.

For cloud models, the loop is fairly mature: train, eval, deploy, monitor, roll back. For edge models running on robots, Jetsons, mobile NPUs, ARM CPUs, etc., the deployment process seems much less standardized.

The issues I keep seeing:

- model works on workstation/cloud GPU but misses latency on-device

- quantization/pruning changes behavior in ways the normal eval set does not catch

- cold start matters separately from steady-state latency

- unsupported ops or vendor SDK differences force target-specific work

- monitoring is hard when the device has to stay offline or meet privacy constraints

Recent datapoint from a deployment I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

How are people handling this in practice?

- Is edge compilation a separate release gate?

- Do you maintain hardware-specific evals?

- Are model + runtime + target device versioned together?

- What tools are you using for regression testing after compression?
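On the "versioned together" question, the minimum that has made sense to me is one manifest per edge release that pins model, compression recipe, runtime, and exact target device in a single artifact. A sketch, not a tool recommendation; every identifier below is a placeholder:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class EdgeRelease:
    model_id: str      # training run / checkpoint hash
    compression: str   # e.g. "int8-ptq" or "distill+int8"
    runtime: str       # engine + version the artifact was compiled against
    target: str        # exact device + OS image, since compiled engines rarely transfer
    eval_report: str   # path/URI of the hardware-specific eval results


release = EdgeRelease(
    model_id="classifier@a1b2c3",           # placeholder identifiers throughout
    compression="int8-ptq",
    runtime="tensorrt-10.x",
    target="jetson-orin-nx-jetpack-6.x",
    eval_report="evals/orin_nx/report.json",
)
print(json.dumps(asdict(release), indent=2))
```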

u/Hairy_Strawberry7028 — 6 days ago

Lessons from getting a multimodal classifier under 150ms on Jetson Orin NX

Sharing a practical deployment datapoint since a lot of ML learning material stops at training/eval and doesn’t cover the messier edge deployment phase.

A recent system I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

A few things that mattered more than expected:

- Optimize for the actual target device early. A model that looks fine on a workstation can fail badly on the deployment hardware.

- Measure cold start separately from steady-state latency. It can dominate user-visible behavior.

- Compression is not one trick. Distillation, quantization, pruning, compilation, and operator-level work each hit different bottlenecks.

- Hardware-specific kernels matter when a few ops dominate the trace.

- Offline inference changes the product constraints: no fallback, no telemetry dependency, no cloud latency hiding bad local performance.

Curious what people here want to learn more about: quantization tradeoffs, TensorRT/ONNX export pain, latency profiling, or edge eval setup?
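On the cold-start point, the measurement I mean is just keeping the first inference separate from the steady-state distribution instead of averaging it away. Rough sketch; `infer` stands in for whatever end-to-end engine call you actually make:

```python
import time

import numpy as np


def latency_profile(infer, sample, warm_runs=200, budget_ms=150.0):
    """infer: callable that runs one end-to-end decision; sample: one representative input."""
    t0 = time.perf_counter()
    infer(sample)  # includes first-call setup costs (engine load, allocation, JIT, caches)
    cold_ms = (time.perf_counter() - t0) * 1000

    steady = []
    for _ in range(warm_runs):
        t0 = time.perf_counter()
        infer(sample)
        steady.append((time.perf_counter() - t0) * 1000)
    steady = np.asarray(steady)

    return {
        "cold_start_ms": cold_ms,
        "p50_ms": float(np.percentile(steady, 50)),
        "p99_ms": float(np.percentile(steady, 99)),
        "within_budget": float((steady <= budget_ms).mean()),
    }
```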

u/Hairy_Strawberry7028 — 6 days ago
▲ 2 r/ollama

Anyone pushing local VLM inference onto Jetson / mobile NPUs / older PCs?

Most local inference threads I see are about desktop GPUs, Macs, or servers. I’m curious about the weirder hardware edge: Jetson, mobile NPUs, ARM CPUs, older PCs, drones/robots/field devices, etc.

A recent deployment datapoint I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

Questions for people here:

- Are you running VLMs locally outside normal workstation/server setups?

- Are Ollama-style stacks part of that, or do you switch to ONNX/TensorRT/vendor SDKs for edge targets?

- What breaks first: RAM/VRAM, latency, cold start, unsupported ops, quality after quantization, or packaging/deployment?

Mostly looking to compare notes on practical local multimodal inference.

u/Hairy_Strawberry7028 — 6 days ago

What are people using for edge deployment of large vision / multimodal models?

I’m trying to compare notes on the deployment side of deep learning, specifically large vision / multimodal models that need to run on constrained hardware instead of a cloud GPU.

The hard parts I keep seeing are less about model architecture and more about the production envelope: latency budget, memory pressure, cold start, unsupported ops, power/thermal limits, and quality drop after quantization.

A recent datapoint from a deployment I worked on: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

For people doing this in production or serious prototypes:

- What hardware are you targeting?

- Are you using ONNX/TensorRT/vendor SDKs/custom kernels/something else?

- Which compression step usually hurts quality the most: distillation, quantization, pruning, operator replacement?

- Do you eval only final task success, or also intermediate per-step behavior?

Would love to hear what stacks people trust right now.
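For calibration on what I mean by the quantization step: the baseline I start from is plain ONNX Runtime post-training quantization before anything fancier, roughly like this (model paths are placeholders):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Weight-only int8 as a cheap first pass; static quantization with a calibration
# set is usually the next step when activation quantization starts to matter.
quantize_dynamic(
    model_input="model_fp32.onnx",   # placeholder paths
    model_output="model_int8.onnx",
    weight_type=QuantType.QInt8,
)
```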

u/Hairy_Strawberry7028 — 6 days ago
▲ 8 r/ROS

ROS teams running VLM / vision perception nodes on-device: what are your deployment bottlenecks?

I’m working on edge deployment infrastructure for robotics and trying to understand what ROS teams are actually running into when deploying larger vision / multimodal perception models on-device.

The cases I’m most interested in are robots where cloud inference is a bad fit because of latency, connectivity, privacy/safety constraints, or cost.

A few questions for people doing this in ROS/ROS2 stacks:

- Are you running inference as ROS nodes, separate services, or outside ROS entirely?

- What hardware are you using: Jetson, x86 + GPU, ARM CPU, mobile NPU, something custom?

- What breaks first in practice: latency, memory, startup time, thermal/power, unsupported ops, message-passing overhead, or debugging/evaluation?

- Are larger VLM-style models actually making it into production robots yet, or are they still mostly used for demos / offline labeling?

Recent datapoint from our side: multimodal classifier on Jetson Orin NX, 111ms cold start, 100% of decisions inside a 150ms budget, zero cloud calls.

Mostly looking to compare notes with people who have shipped this or tried to.
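For the first bullet, the "inference as a ROS node" shape I keep seeing, and am asking about, is roughly this: an image callback wrapping an ONNX Runtime session. Sketch only; the topic names and `model.onnx` are placeholders, and on a Jetson you would more likely load a TensorRT engine:

```python
import cv2
import numpy as np
import onnxruntime as ort
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import Int32


class PerceptionNode(Node):
    def __init__(self):
        super().__init__('perception_node')
        # "model.onnx" is a placeholder; swap in your compiled engine of choice.
        self.session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
        self.input_name = self.session.get_inputs()[0].name
        self.bridge = CvBridge()
        self.pub = self.create_publisher(Int32, 'perception/class_id', 10)
        self.sub = self.create_subscription(Image, 'camera/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
        x = np.transpose(x, (2, 0, 1))[None, ...]
        scores = self.session.run(None, {self.input_name: x})[0]
        self.pub.publish(Int32(data=int(np.argmax(scores))))


def main():
    rclpy.init()
    rclpy.spin(PerceptionNode())


if __name__ == '__main__':
    main()
```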

u/Hairy_Strawberry7028 — 6 days ago