Event-driven architecture for scalable dashboards — does this approach make sense?
Hey folks, I’d like to get some feedback on an architecture approach for dashboards in systems that need to scale in terms of read load and data volume.
We’re moving away from a model where most things are computed in real time directly on raw data, and considering a more decoupled setup. The idea is roughly:
1. domain services persist data as usual
2. in the same operation, they also write an event to an outbox table/collection
3. a worker publishes those events to a broker (e.g., Kafka)
4. independent consumers build read models/projections optimized for queries
5. dashboards read from those projections instead of recalculating everything on demand
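To make the flow concrete, here's a minimal sketch of steps 1–5 using SQLite as a stand-in for the domain store, a plain callback as a stand-in for a Kafka producer, and a dict as the read model. All the names (`place_order`, `drain_outbox`, `apply_event`, the `outbox` schema) are hypothetical, just to illustrate the shape of the pattern:

```python
import json
import sqlite3

def init_db(conn):
    # Minimal schema for the sketch: one domain table plus the outbox.
    conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, amount REAL)")
    conn.execute(
        "CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "topic TEXT, payload TEXT, published INTEGER DEFAULT 0)"
    )

def place_order(conn, order_id, amount):
    # Steps 1-2: domain write and outbox event in ONE transaction,
    # so they commit (or roll back) together.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("orders", json.dumps(
                {"type": "OrderPlaced", "order_id": order_id, "amount": amount})),
        )

def drain_outbox(conn, publish):
    # Step 3: a worker relays unpublished events to the broker
    # (`publish` stands in for e.g. a Kafka producer) and marks them sent.
    rows = conn.execute(
        "SELECT id, topic, payload FROM outbox "
        "WHERE published = 0 ORDER BY id").fetchall()
    for row_id, topic, payload in rows:
        publish(topic, payload)
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()

def apply_event(read_model, event):
    # Step 4: a consumer folds events into a query-optimized projection;
    # step 5: the dashboard reads this projection directly, no raw-data scan.
    if event["type"] == "OrderPlaced":
        read_model["revenue"] = read_model.get("revenue", 0.0) + event["amount"]
        read_model["order_count"] = read_model.get("order_count", 0) + 1
```

In production the drain step would typically be an always-on relay (or CDC via something like Debezium) rather than a polling loop, but the transactional boundary in `place_order` is the core of the outbox idea.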
We’re also considering the ability to reprocess projections (scoped, e.g., per tenant or time range) in case of bugs or changes in business logic, by replaying the retained event stream into a fresh read model.
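Reprocessing then amounts to discarding the old projection and re-folding the retained events, optionally filtered to one scope. A hedged sketch (the `tenant` field and `OrderPlaced` event shape are illustrative assumptions, not a fixed schema):

```python
def rebuild_projection(events, scope=None):
    # Reprocessing: start from an empty read model and re-apply the
    # retained event stream. `scope` optionally limits the rebuild,
    # e.g. to a single tenant, so a bug fix doesn't force a full replay.
    read_model = {"revenue": 0.0, "order_count": 0}
    for event in events:
        if scope is not None and event.get("tenant") != scope:
            continue
        if event["type"] == "OrderPlaced":
            read_model["revenue"] += event["amount"]
            read_model["order_count"] += 1
    return read_model
```

The key prerequisite is that events are retained (compacted topic, event store, or the outbox itself) long enough to replay; otherwise projections can only be fixed going forward.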
Main questions:
Is this kind of setup (outbox + event-driven projections + read models) a common pattern for scalable dashboards?
Does it make sense to separate write and read concerns like this even if the system isn’t super complex yet?
Have you seen simpler approaches work well in similar scenarios?
At what point would you consider this overengineering?
Just trying to sanity check if we’re heading in a reasonable direction or missing something simpler/more standard.