Synthetic data and UX research
I’ve been thinking about why UX and research firms like Qualtrics have become so keen on synthetic data. When they started pivoting to synthetic data, the stated justification was efficiency and democratisation, but the underlying story seems a bit different.
Companies like Qualtrics introduce synthetic data as the baseline, clients know it’s not reliable enough for serious decisions so they look for ways to de-risk it, and the natural answer becomes a hybrid model. Real customer insights, which used to be the default, become a discretionary upgrade.
Firms advertise lower entry-level pricing, which is technically true, but anyone serious about their research ends up using a hybrid option.
Over time, real customer research shifts from baseline to optional premium tier. What was standard becomes a luxury. It’s about understanding that once you give people a cheaper-but-risky option, they’ll pay extra to remove the risk. The whole thing looks like a pricing re-baselining exercise for research...