What does building a metadata synchronization interface actually look like?
I'm a stats student with no software-engineering background who got assigned a project that involves building a synchronization interface between a metadata-providing service and an internal dataset platform, as part of a larger distributed computing system. I understand the end goal, but I have no mental model of what this actually looks like in practice. Are there any similar projects or examples online I can look at, to get a feel for how this kind of integration is typically built?
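To make the question concrete, here is my rough guess at the shape of the thing, which may be completely off. I imagine a loop that pulls dataset metadata records from the provider and upserts any new or newer ones into our platform's store. Every name here (`DatasetMeta`, `fetch_provider_metadata`, the in-memory `dict` standing in for the platform) is made up by me for illustration:

```python
from dataclasses import dataclass

@dataclass
class DatasetMeta:
    """One metadata record as I imagine the provider exposing it."""
    dataset_id: str
    version: int
    description: str

def fetch_provider_metadata() -> list[DatasetMeta]:
    # In a real system this would call the metadata service's API;
    # here it just returns canned records so the sketch runs.
    return [
        DatasetMeta("sales_2024", 3, "Quarterly sales figures"),
        DatasetMeta("users", 1, "User dimension table"),
    ]

def sync(platform_store: dict[str, DatasetMeta]) -> list[str]:
    """Upsert provider records into the platform's store.

    Returns the ids of records that were inserted or updated,
    skipping anything the platform already has at the same or
    a newer version.
    """
    changed = []
    for meta in fetch_provider_metadata():
        existing = platform_store.get(meta.dataset_id)
        if existing is None or existing.version < meta.version:
            platform_store[meta.dataset_id] = meta
            changed.append(meta.dataset_id)
    return changed

store = {"users": DatasetMeta("users", 1, "User dimension table")}
print(sync(store))  # "users" is already current, so only "sales_2024" changes
```

Is this roughly the right mental model (poll, diff, upsert), or do real integrations like this work differently (e.g. the provider pushing change events instead of us polling)?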