u/Straight_Fill7086

Built a micro SaaS that scores GitHub repos using live activity data, looking for honest feedback

Hey everyone,

I’ve been working on a micro SaaS project called RepoRank.co and wanted to share it here to get some real feedback from other founders/builders.

The platform analyzes GitHub repositories and generates a performance score using live repo data such as:

  • commit activity
  • contributor consistency
  • growth trends
  • overall project momentum

The idea came from wanting a faster way to evaluate open-source projects without manually checking dozens of GitHub metrics individually.
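
To give a rough idea of what goes into the score, here's a toy sketch of the kind of calculation I mean (illustrative only, not the actual scoring model; it assumes a GitHub token in a GITHUB_TOKEN env var):

    # Very rough sketch of the kind of score I mean (illustrative only, not the
    # actual RepoRank model). Uses GitHub's public REST API.
    import os
    import requests

    API = "https://api.github.com/repos"
    HEADERS = {"Authorization": f"Bearer {os.environ.get('GITHUB_TOKEN', '')}"}

    def toy_repo_score(owner: str, repo: str) -> float:
        meta = requests.get(f"{API}/{owner}/{repo}", headers=HEADERS).json()
        # Weekly commit totals for the past year (GitHub may answer 202 while it
        # computes the stats; a real version would retry).
        weeks = requests.get(f"{API}/{owner}/{repo}/stats/commit_activity",
                             headers=HEADERS).json()
        recent_commits = sum(w["total"] for w in weeks[-12:]) if isinstance(weeks, list) else 0
        stars = meta.get("stargazers_count", 0)
        forks = meta.get("forks_count", 0)
        # Hypothetical weights, just to show the shape of the calculation.
        return (0.5 * min(recent_commits, 300)
                + 0.3 * min(stars / 50, 300)
                + 0.2 * min(forks / 10, 300))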

One thing I’ve also been experimenting with is integrating a Solana-based utility layer into the platform through our native token, $REPO.

The long-term goal is for $REPO to support things like:

  • premium platform features
  • contributor reputation mechanics
  • repository discovery incentives
  • ecosystem participation rewards

Still actively refining everything, so I’d genuinely appreciate feedback from other SaaS builders on:

  • whether the scoring system makes sense
  • whether the UI feels intuitive
  • whether the token integration actually adds value or just complexity
  • and how you’d approach monetization for something like this

Would love for you guys to check it out and give your honest feedback. Thanks!

u/Straight_Fill7086 — 7 hours ago
▲ 4 · r/betatests (+1 crosspost)

Built a tool to speed up repetitive typing across any app. Any beta testers?

I built a desktop tool that lets you save snippets and trigger them with hotkeys in any app (email, Slack, IDEs, terminals).

It also has AI rewrite shortcuts and voice-to-text features.
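
If it helps picture the core idea, snippet expansion boils down to something like this toy Python script (illustration only; the actual tool is a native desktop app, and this example leans on the third-party keyboard package):

    # Toy illustration of abbreviation-driven snippet expansion using the
    # third-party `keyboard` package (pip install keyboard; needs admin/root on
    # some OSes). The real tool is a native desktop app, not this script.
    import keyboard

    SNIPPETS = {
        ";sig": "Best,\nAlex",
        ";meet": "Happy to jump on a quick call this week if that's easier.",
    }

    for trigger, replacement in SNIPPETS.items():
        # Replaces the trigger text with the snippet once you type it and hit space.
        keyboard.add_abbreviation(trigger, replacement)

    keyboard.wait()  # keep listening system-wide until the script is killed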

Trying to figure out if the workflow actually saves people time or just adds complexity.

You can test it here:
lightning-assist.com

Main things I’d love feedback on:

  • Is onboarding confusing anywhere?
  • Do the hotkeys feel natural?
  • Would you actually use this daily?
  • Which feature feels most useful vs unnecessary?

Especially looking for feedback from developers, support teams, or people doing repetitive typing all day.

u/Straight_Fill7086 — 4 days ago

My workflow slowly became a mess of small tools over time

I had snippets in one place, shortcuts somewhere else, a few automation scripts I barely touched, and a bunch of utilities I kept switching between during the day.

Eventually I just started consolidating everything into one setup (Lightning-assist) so I wasn’t constantly bouncing between apps for small repetitive actions.

It mostly just reduced how many things I have to think about while working.

u/Straight_Fill7086 — 5 days ago

Looking for beta testers for DeepAlphaBot, a crypto execution system (grid + DCA)

I built DeepAlphaBot, an automated crypto execution layer for running grid and DCA strategies across exchanges like Binance and Bybit.
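
For anyone unfamiliar with the term, "grid" here just means evenly spaced limit orders across a price range, roughly like this toy sketch (illustrative only, not DeepAlphaBot's code):

    # Toy sketch of what "grid" means here: evenly spaced limit-order price
    # levels across a range. Illustrative only.
    def grid_levels(lower: float, upper: float, n_levels: int) -> list[float]:
        step = (upper - lower) / (n_levels - 1)
        return [round(lower + i * step, 2) for i in range(n_levels)]

    print(grid_levels(60_000, 70_000, 11))  # one buy/sell level every $1,000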

You can try it here: DeepAlphaBot.com

I’m looking for feedback on how stable the execution feels in real use, especially around order tracking, multi-exchange behavior, and overall reliability in live conditions.

u/Straight_Fill7086 — 5 days ago

My productivity setup slowly turned into a pile of disconnected tools

My workflow somehow turned into a mix of way too many little tools over time.

I had one thing for text snippets, another for shortcuts, separate AI tools, random clipboard utilities, and a few automation scripts I barely remembered setting up.

Everything technically worked, but the setup itself started feeling harder to manage than the actual work.

Lately I’ve been experimenting with Lightning-assist to keep more of those workflows in one place instead of constantly bouncing between apps.

Still figuring out what setup works best long-term, but having fewer moving parts already feels nicer.

u/Straight_Fill7086 — 5 days ago
▲ 0 r/Python

I’ve been benchmarking a real-time Python inference pipeline using an ensemble of XGBoost and LightGBM models and found that the primary bottleneck wasn’t model execution itself.

Most of the slowdown actually came from serialization overhead when moving data between the WebSocket ingestion thread and the prediction engine through standard multiprocessing queues.

After switching to shared memory buffers for inter-process communication, the latency improvement was significantly larger than any model-side optimization I tested.

The local-first setup also seems useful from a privacy/security perspective since model logic and API credentials never leave the hardware, although managing shared state across processes adds a lot more architectural complexity.
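
For reference, the kind of change I mean looks roughly like this simplified sketch (shapes and names are made up, and the real pipeline needs locking and message framing on top):

    # Simplified sketch of handing feature vectors to the prediction process via
    # multiprocessing.shared_memory instead of pickling them through a Queue.
    import numpy as np
    from multiprocessing import Process, shared_memory

    N_FEATURES = 64

    def producer(shm_name: str):
        shm = shared_memory.SharedMemory(name=shm_name)
        features = np.ndarray((N_FEATURES,), dtype=np.float64, buffer=shm.buf)
        features[:] = np.random.rand(N_FEATURES)  # write in place, no serialization
        shm.close()

    def consumer(shm_name: str):
        shm = shared_memory.SharedMemory(name=shm_name)
        features = np.ndarray((N_FEATURES,), dtype=np.float64, buffer=shm.buf)
        print("toy score:", float(features.sum()))  # model.predict(...) in reality
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=N_FEATURES * 8)
        for target in (producer, consumer):
            p = Process(target=target, args=(shm.name,))
            p.start()
            p.join()
        shm.close()
        shm.unlink()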

Curious if others working on high-throughput Python streaming systems have moved toward:

  • shared memory
  • memory-mapped files
  • zero-copy approaches

Or is the standard multiprocessing queue system still the preferred trade-off despite the serialization overhead?

u/Straight_Fill7086 — 7 days ago

I’ve been thinking about the trade-off between power and privacy in algorithmic trading, especially when using models like LightGBM or XGBoost. Most people use cloud-based bots because they are easy to set up, but that usually means handing over your API keys and your entire strategy logic to a third-party server.

By moving to a local, non-custodial framework, you can run high-level machine learning models on your own hardware while maintaining read-only API security.

This approach keeps your specific "alpha" private and prevents the platform owner from seeing your trade logic or front-running your moves. I’ve been documenting this architecture and the ensemble system.

It seems like the move toward edge computing is the only real way to ensure your strategy doesn't get leaked or exploited by a central provider.

u/Straight_Fill7086 — 7 days ago
▲ 0 r/zapier

Been running into reliability issues with webhooks when using Zapier for higher-volume workflows, especially around retries and missed events.

I initially built a small internal tool to queue, store, and replay webhook events reliably (with things like delayed retries and fallback delivery).

But over time it’s grown beyond just webhooks into handling event flows in general, rather than acting purely as a webhook layer.

It made a noticeable difference in dealing with edge cases where Zapier alone wasn’t enough.
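
To make "queue, store, replay" concrete, the pattern is roughly this (rough sketch only; Flask and SQLite stand in for whatever stack you're on, and the downstream URL is hypothetical):

    # Rough sketch of the store-then-replay pattern. The real version needs
    # signature checks, backoff, and a proper queue.
    import json
    import sqlite3

    import requests
    from flask import Flask, request

    app = Flask(__name__)
    db = sqlite3.connect("events.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS events "
               "(id INTEGER PRIMARY KEY, body TEXT, delivered INTEGER DEFAULT 0)")

    TARGET = "https://example.internal/handler"  # hypothetical downstream consumer

    @app.post("/webhook")
    def receive():
        # Persist first and ack immediately; delivery happens out of band.
        db.execute("INSERT INTO events (body) VALUES (?)",
                   (json.dumps(request.get_json(force=True)),))
        db.commit()
        return "", 200

    def replay_pending():
        # Run on a schedule: push anything that never made it downstream.
        for event_id, body in db.execute(
                "SELECT id, body FROM events WHERE delivered = 0").fetchall():
            try:
                resp = requests.post(TARGET, data=body,
                                     headers={"Content-Type": "application/json"},
                                     timeout=5)
                if resp.ok:
                    db.execute("UPDATE events SET delivered = 1 WHERE id = ?", (event_id,))
                    db.commit()
            except requests.RequestException:
                pass  # leave it undelivered; the next scheduled run retries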

Curious how others here are handling failures or retries at scale: are you relying fully on Zapier, or adding something on top?

u/Straight_Fill7086 — 8 days ago

I’ve been looking into ML-based trading bots for futures (especially ones using things like LightGBM/XGBoost and regime detection models).

One thing I’ve noticed is that some newer systems claim to combine order book data, funding rates, and volatility features to improve directional accuracy compared to traditional indicator-based bots.

I’m curious if anyone here has actually used ML-driven bots in live markets. In practice, do these approaches hold up in choppy conditions, or do they tend to overfit and degrade quickly?

Also interested in how people here think about risk management in automated systems—especially stop-loss logic like ATR-based approaches.
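
For context, by ATR-based stops I mean rules roughly like this (toy pandas sketch, not trading advice):

    # Toy sketch of an ATR-based initial stop for a long position.
    import pandas as pd

    def atr(df: pd.DataFrame, period: int = 14) -> pd.Series:
        # df has columns: high, low, close
        prev_close = df["close"].shift(1)
        true_range = pd.concat([
            df["high"] - df["low"],
            (df["high"] - prev_close).abs(),
            (df["low"] - prev_close).abs(),
        ], axis=1).max(axis=1)
        return true_range.rolling(period).mean()

    def initial_stop(df: pd.DataFrame, entry_price: float, multiplier: float = 2.0) -> float:
        # Place the stop `multiplier` ATRs below the entry price.
        return entry_price - multiplier * atr(df).iloc[-1]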

u/Straight_Fill7086 — 9 days ago
▲ 16 r/stripe

I had a complete disaster yesterday and I’m curious if anyone else has moved away from the "standard" integration. We had a brief database timeout right as a bunch of checkout sessions were completing. Stripe tried to send the webhooks, my server choked with a 500 error, and everything got out of sync.

The worst part isn't even the server crash, it’s the recovery. Trying to figure out exactly which events failed by digging through logs and then manually checking them against the Stripe dashboard took me hours. I felt like I was doing data entry instead of engineering.
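
In case anyone else ends up in the same spot, that reconciliation step can at least be scripted rather than done by hand; something along these lines with the official stripe library (the handled-IDs set and the handler are placeholders for whatever your app already tracks):

    # Rough sketch: pull recent events back from Stripe's API and re-run any we
    # never marked as handled.
    import os
    import time

    import stripe

    stripe.api_key = os.environ["STRIPE_API_KEY"]

    def reconcile(handled_ids: set[str], since_hours: int = 24) -> None:
        cutoff = int(time.time()) - since_hours * 3600
        events = stripe.Event.list(
            type="checkout.session.completed",
            created={"gte": cutoff},
            limit=100,
        )
        for event in events.auto_paging_iter():
            if event.id not in handled_ids:
                handle_checkout_completed(event)  # your normal, idempotent handler

    def handle_checkout_completed(event) -> None:
        session = event.data.object
        print("re-processing", session.id)  # real fulfillment logic goes here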

I've decided I need a middle layer. I want something that just catches the webhook immediately, stores it, and then lets me retry it or inspect it through a clean UI if my app fails. I’m tired of the "silent failures" and the stress of deployments potentially breaking our payment flow.

Is everyone just building their own custom queue with SQS or something similar to act as a buffer? Or is there a simpler way to manage this that doesn't involve me maintaining more infrastructure?

u/Straight_Fill7086 — 10 days ago