
r/laravel

Lumen (API-only in Laravel) replacement?
I want to build a purely-API app in Laravel, and I miss Lumen. Is there anything comparable?
Since there will never be a front-end to this, pulling in the whole framework is overkill.
! $thing vs !$thing - minor pint microrant
Who is really putting a space after the ! in conditions? The Laravel pint rules just seem a bit off on this point. Am I alone?
if (! $thing) { } // ??
if (!$thing) { }  // The way of the 99%
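For anyone who prefers no space: the Laravel preset's behavior comes from PHP-CS-Fixer's `not_operator_with_successor_space` rule, which can be overridden in `pint.json` (a sketch; verify the rule name against your Pint version):

```json
{
    "preset": "laravel",
    "rules": {
        "not_operator_with_successor_space": false
    }
}
```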
Live Walkthrough: What's new in Laravel Starter Kits w/ Wendell Adriel
The Laravel starter kits (React, Vue, Svelte, and Livewire) have had a huge run of updates recently, including team support, Inertia v3 support, and toast notifications.
I'll be going live tomorrow (04/21) at 12pm EDT (4pm UTC) with Wendell Adriel for a walkthrough of what's new. We'll also touch on Maestro, the orchestrator that powers how all the kits stay in sync.
Would love to see you there! If you have any questions, feel free to drop them here ahead of time or ask in chat during the stream!
[Showcase] I got tired of building SaaS billing from scratch, so I made an open-source Laravel package with a Filament admin panel. Sets up in 15 mins.
I just released Laravel subscriptions with a ready-to-use Filament UI.
This allows developers to set up subscription sales in their projects in 15 minutes, or an hour at most.
It comes with:
- Pre-built pricing pages (Tailwind)
- Filament admin dashboard for managing subscriptions
- Built-in webhook handling
The idea came to me when I had to implement subscriptions myself: there were plenty of pitfalls, and debugging was painful. Off-the-shelf solutions were cumbersome and expensive, and integrating them with an existing admin panel used to be practically impossible. Filament solves that problem.
Does this solve a problem for you?
I’d love to hear your feedback on the code architecture or features I should add next!
Live Demo: https://subkit.noxls.net/
Your job "succeeded" but did nothing. How do you even catch that?
Had an interesting conversation recently about queue monitoring in Laravel. Someone came to me with a production case: a job was supposed to create 10,000 users, created 400, and still reported as successful. No errors, no exceptions, everything green. And I realized, right now my system can't even tell whether a job actually did what it was supposed to. I started looking at other monitoring tools, and most of them just say "it ran" or "it failed". But what about when it runs, doesn't crash, and just ... does the wrong thing?
Started thinking about tracking execution time baselines, if a job that normally takes 30 seconds suddenly finishes in 2, something's probably off. But that only catches the obvious cases. The harder question is: should the job itself validate its own result? Like "I was supposed to create 10,000 records, I created 400, that's not right"? Or is that already business logic and doesn't belong in monitoring?
Because the moment you start checking results, you're basically writing tests for every job, and that feels like a rabbit hole.
Curious how you guys handle this. Do you just trust "no error = success" or do you actually verify what happened after the job ran?
Is it even worth digging into this or is it overengineering?
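One middle ground, short of writing tests for every job, is a post-run invariant check inside the job itself: the job declares its own success criteria and throws if they aren't met, so the monitor sees a real failure instead of a silent green. A minimal plain-PHP sketch (names like `ImportResultMismatch` and `runImport` are hypothetical, not from any package):

```php
<?php

// Thrown when a job finishes without errors but its result
// doesn't match what it was supposed to produce.
class ImportResultMismatch extends RuntimeException {}

function runImport(array $rows, callable $createUser): int
{
    $created = 0;
    foreach ($rows as $row) {
        // $createUser returns false on a silent, non-throwing failure.
        if ($createUser($row)) {
            $created++;
        }
    }

    // Post-run invariant: "no exception" is not enough, the job
    // also checks that it actually did the work it claimed to do.
    if ($created !== count($rows)) {
        throw new ImportResultMismatch(
            sprintf('expected %d users, created %d', count($rows), $created)
        );
    }

    return $created;
}
```

The check lives in the job because only the job knows its expected outcome; the monitoring layer then gets a genuine failure signal for free.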
GitHub: https://github.com/RomaLytar/yammi-jobs-monitoring-laravel
Just released Laravel Sluggable
Hi r/laravel,
I built a package called Laravel Sluggable; it's basically my opinionated take on automatic slug generation for Eloquent models.
It's the exact pattern I've ended up using across a bunch of projects (including Laravel Cloud), and I finally wrapped it up into a package.
Usage is intentionally minimal: just drop a single #[Sluggable] attribute on your model and you're done. No traits, no base classes, no extra wiring.
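Based on that description, usage would look something like this (the `from:` option and attribute namespace are my assumptions about the package's API, so check the README for the real signature):

```php
use Illuminate\Database\Eloquent\Model;

// Hypothetical usage sketch, not confirmed package API.
#[Sluggable(from: 'title')]
class Post extends Model
{
    // The slug is generated automatically when the model is saved.
}
```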
It handles a lot of the annoying edge cases out of the box: slug collisions (even with soft-deleted models), Unicode + CJK transliteration, scoped uniqueness (per-tenant, per-locale), multi-column sources, etc.
Let me know what you think.
I built HorizonHub: monitor multiple Laravel Horizon services in one place
Hey everyone,
I wanted to share something I built for myself called HorizonHub.
I work with several Laravel services using Horizon, and I kept feeling the same pain: checking queues/jobs/workers across services was messy and annoying.
So I started building a small tool to make my own life easier.
Right now, HorizonHub lets me:
- Monitor jobs from multiple Laravel services in one place
- Restart jobs in batch
- Receive alerts
- View all jobs at a glance
It’s still early and very much a real "built-from-need" project.
If you run several Laravel apps with Horizon and are tired of switching between dashboards, this might be useful.
If anyone wants to try it, check out the GitHub repository: https://github.com/enegalan/horizonhub.
Any feedback (good or bad) helps me improve it 🙏
PagibleAI 0.10: Laravel CMS for developers AND editors
We just released Pagible 0.10, an open-source AI-powered CMS built for Laravel developers:
What's new in 0.10
- MCP Server — Pagible ships with a built-in Model Context Protocol server. AI agents can create pages, manage content, and search your site programmatically. This makes Pagible one of the first CMS platforms where AI can directly manage your content through a standardized protocol.
- Customizable architecture — The codebase has been split into 9 independent sub-packages (core, admin, AI, GraphQL, search, MCP, theme, etc.). Install only what you need.
- Vuetify 4 admin panel — The admin backend has been upgraded to Vuetify 4 and optimized for WCAG accessibility, keyboard navigation and reduced bundle size.
- Significant performance work — This release focused heavily on database performance: optimized indexes, reduced query count, eager loading, optimized column selection, and faster page tree fetching.
- Rewritten fulltext search — Custom Scout engine supporting fulltext search in SQLite, MySQL/MariaDB, PostgreSQL, and SQL Server. Paginated results with improved relevance ranking.
- Named roles & JSON permissions — Moved from bitmask permissions to a readable JSON array system with configurable roles (e.g. editor, publisher, viewer, etc).
- Security hardening — Rate limiting on all endpoints, strict input validation, and DoS protection.
What makes Pagible different
- Laravel-native — Not a CMS bolted onto Laravel. It uses Blade, Eloquent, migrations, Scout, service providers — everything you already know.
- AI-first — MCP server for agent-driven content management, plus built-in AI features for content generation, translation, and image manipulation via Prism/Prisma.
- Hierarchical pages — Nested set tree structure with versioning. Editors see drafts, visitors see published content.
- Multi-tenant — Global tenant scoping on all models out of the box.
- Small footprint — The entire codebase is deliberately kept small. No bloat, no unnecessary abstractions.
- LGPL-3.0 — Fully open source.
Links
- Demo: https://demo.pagible.com/
- GitHub: https://github.com/aimeos/pagible
- Website: https://pagible.com/laravel-cms
Would love to hear your feedback and if you like it, give a star :-)
I built a CLI tool that lets your AI agents improve your query performance in a loop
Hey there everyone.
When working with libraries like Filament, many queries aren't explicit in code, since only Model classes are passed around. That makes debugging and improving query performance harder. Existing debugging tools like Debugbar or Telescope require a browser UI and aren't very accessible to coding agents.
I built LaraPerf.dev because I wanted to let my agents run in a loop and continue trying to find query speed improvements.
The tool is optimized for having an agent call it as a tool call.
The agent calls `artisan perf:watch --seconds 20`, which listens to all queries for 20 seconds. The command exits immediately so the agent can perform more actions within those 20 seconds. The agent can then page through the captured queries with the `perf:query` command to find slow or N+1 queries, and use `perf:explain` to run `EXPLAIN ANALYZE` for a given query.
It also comes with a premade skill to let your agent run in a loop.
Find it at https://laraperf.dev or checkout the code at https://github.com/mateffy/laraperf
Laravel's wildcard validation is O(n²); here's a fix
I was profiling a slow import endpoint. 100 items, 47 fields each with exclude_unless and required_if. Endpoint took 3.4 seconds. I assumed database queries. Validation alone was 3.2s.
When you write `'items.*.name' => 'required|string|max:255'`, Laravel's `explodeWildcardRules()` flattens the data with `Arr::dot()` and matches regex patterns against every key. 500 items × 7 fields = 3,500 concrete rules, and the expansion is O(n²). Conditional rules like `exclude_unless` make it worse because they trigger dependent-rule resolution on every attribute.
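To make the mechanism concrete, here's a toy reproduction of the pattern (not Laravel's actual code): each wildcard rule is regex-matched against every dot-flattened key, so rule count × key count grows quadratically with input size.

```php
<?php

// Toy illustration of wildcard rule expansion, NOT Laravel's
// implementation: every wildcard rule is matched against every
// dot-flattened data key, so n rules over m keys cost n × m
// regex matches.
function expandWildcardRules(array $flatKeys, array $wildcardRules): array
{
    $concrete = [];
    foreach ($wildcardRules as $rule => $validators) {
        // 'items.*.name' becomes the pattern /^items\.[^.]+\.name$/
        $pattern = '/^' . str_replace('\*', '[^.]+', preg_quote($rule, '/')) . '$/';
        foreach ($flatKeys as $key) { // scans ALL keys for every rule
            if (preg_match($pattern, $key)) {
                $concrete[$key] = $validators;
            }
        }
    }
    return $concrete;
}
```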
I submitted 10 performance PRs to laravel/framework. Four merged, the six validation ones were all closed. So I built it as a package: laravel-fluent-validation.
Add `use HasFluentRules` to your FormRequest and keep your existing rules. The wildcard expansion is replaced with O(n) tree traversal. For 25 common rules it compiles PHP closures (`is_string($v) && strlen($v) <= 255` instead of rule parsing + method dispatch + BigNumber). If the value passes, Laravel's validator never sees it; failures go through Laravel for the correct error message. It also pre-evaluates `exclude_unless`/`exclude_if` before validation starts, so instead of 4,700 rules each checking conditions, the validator only sees the ~200 that actually apply.
class ImportRequest extends FormRequest
{
    use HasFluentRules;
}
Benchmarks (CI, PHP 8.4, OPcache, median of 3 runs):
| Scenario | Laravel | With trait | Speedup |
|---|---|---|---|
| 500 items × 7 simple fields | ~200ms | ~2ms | 97x |
| 500 items × 7 mixed fields (string + date) | ~200ms | ~20ms | 10x |
| 100 items × 47 conditional fields | ~3,200ms | ~83ms | 39x |
It's already noticeable with a handful of wildcard inputs that each have a few rules. The package works with Livewire and Filament, is Octane-safe and has a large set of tests.
https://github.com/SanderMuller/laravel-fluent-validation
Performance issue tracked upstream: laravel/framework issue 49375
I built an open source WebSocket server in Go that's Pusher-compatible — self-host free forever, or use the managed cloud tier
Hey r/laravel,
I built Relay — an open source WebSocket server written in Go. Sharing it here since most of you are the target audience.
Why build this when Reverb exists?
Reverb is great and I want to be upfront about that. The real reason Relay exists is different: Reverb runs inside a Laravel PHP application — you need a Laravel app running to host it. Relay is a standalone Go binary with zero dependencies. No PHP, no Composer, no Laravel. You drop it on any server and run it.
That matters if you want to self-host WebSockets without owning a Laravel app, or if you want a server that starts in milliseconds and uses minimal resources regardless of what stack you're on.
What Relay actually does differently:
- Single Go binary — no runtime, no dependencies, drop it anywhere
- Performance — at 1,000 concurrent connections: ~18% CPU, ~38MB RAM vs Reverb's ~95% CPU, ~63MB RAM on equivalent hardware
- Built-in Channel Inspector — live view of active channels, subscriber counts, and event payloads with syntax highlighting. Nothing like it exists in Reverb.
- Open source exit ramp — Relay Cloud is the managed tier, but the binary is MIT licensed. Self-host free forever, or move between cloud and self-hosted with two env var changes.
What Relay does NOT uniquely offer:
Being honest here since I got called out on this elsewhere — Relay, like Reverb and every other Pusher-compatible server, works with any Pusher client. That's not unique. It's just how the Pusher protocol works. Reverb also supports multiple apps and Laravel Cloud now has a fully managed Reverb offering.
The stack:
Server: Go binary, MIT licensed — github.com/DarkNautica/Relay
Managed cloud (optional): relaycloud.dev — free hobby plan, $19/mo Startup
Laravel package: composer require darknautica/relay-cloud-laravel
Benchmark post: relaycloud.dev/blog/relay-vs-reverb-benchmark
Happy to answer questions and take criticism — clearly still learning what makes this actually unique.
I built a lightweight alternative to Laravel Horizon that works without Redis (SQS / DB / sync supported)
I built a small package for Laravel to monitor queues without being tied to Redis.
Horizon is great, but:
- it requires Redis
- it's a bit heavy for small projects
- and it doesn’t really work if you're using SQS, database or sync drivers
In many cases, I just wanted to know:
- did my jobs run?
- which ones failed?
- why did they fail?
So I made a lightweight solution:
- works with any queue driver (Redis, SQS, database, sync)
- tracks full job lifecycle (processing / success / failed)
- shows retries and execution time
- simple Blade dashboard out of the box
- JSON API included (for custom frontends)
Setup is super simple:
composer require romalytar/yammi-jobs-monitoring-laravel
php artisan migrate
That’s it — you immediately get a UI at `/jobs-monitor`.
Would really appreciate any feedback
Especially what’s missing or what could be improved.
GitHub: https://github.com/RomaLytar/yammi-jobs-monitoring-laravel
I used multiple Claude Code instances to build and test a Laravel package across 3 production codebases
I posted recently on Reddit about building a fluent validation rule builder for Laravel (laravel-fluent-validation). Since then I also released a Rector companion package for automated migration. Instead of the usual pre-release-and-wait cycle, I ran Claude Code on the package repo and on three production Laravel codebases simultaneously and let the Claude instances work together.
The workflow
claude-peers is an MCP server for Claude Code. Each instance running on your machine can discover other instances, see what they're working on, and send messages. They don't share context. Each has its own conversation with full codebase access.
In practice it works like this: the package peer tags a new release. It sends a message to the three codebase peers saying "0.4.5 tagged, fixes the parallel-worker race, please re-verify." Each codebase peer receives the message, pulls the new version, runs the migration, runs their tests, and sends back results. If something breaks, the response includes the exact error, the file, and usually a theory about why. The package peer reads that, asks follow-up questions if needed, fixes the issue, and the loop continues.
One thing I didn't expect was how quickly the peers developed their own review dynamic. They would challenge each other's assumptions, ask for evidence, and sometimes reach consensus before coming back with a recommendation.
I had four terminals open:
- The package repo, building features, writing tests, shipping releases
- Three production codebases, each a real Laravel app with its own validation patterns, framework integrations, and test suites
Everything runs locally. Claude Code works on local clones of each codebase, with the same filesystem access you'd have in your terminal. No production servers, no remote environments, no secrets exposed to AI.
The interesting part was what the peers caught that tests and synthetic fixtures couldn't:
- One app has 108 FormRequests and uses `rules()` as a naming convention on Actions and Collections. The skip log grew to 2,988 entries / 777KB. On a smaller codebase you'd never notice.
- Another app runs 15 parallel Rector workers. The skip log's truncate flag was per-process, so every worker wiped the others' entries. Synthetic fixtures run single-process. This bug doesn't exist there.
- The same app runs Filament alongside Livewire. Five components use Filament's `InteractsWithForms` trait, which defines its own `validate()`. Inserting the package's trait would have been a fatal collision on first render.
- A third app found that 5/7 of its Livewire files had dead `#[Validate]` attributes coexisting with explicit `validate([...])` calls. Nobody anticipated that pattern.
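The Filament collision is worth unpacking: in plain PHP, two traits that define the same method produce a fatal error unless the class resolves the conflict explicitly. A minimal standalone sketch (stub trait names, not the real Filament or package traits):

```php
<?php

// Stub standing in for Filament's form trait, which defines validate().
trait InteractsWithFormsStub
{
    public function validate(): string { return 'filament'; }
}

// Stub standing in for a package trait that also defines validate().
trait HasFluentRulesStub
{
    public function validate(): string { return 'fluent'; }
}

// Using both traits without resolution is a fatal error in PHP.
// The class must pick a winner with an explicit insteadof clause:
class Component
{
    use InteractsWithFormsStub, HasFluentRulesStub {
        InteractsWithFormsStub::validate insteadof HasFluentRulesStub;
    }
}
```

An automated migration that blindly inserts the second trait would hit the unresolved-conflict fatal on first render, which is exactly what the codebase peer caught.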
Wrote up the full workflow, what worked, and when I'd use it (link in comments).