
Direct Lake Throttling?
I have a single semantic model in my F2 capacity that seems to be consuming a large share of CU and causing throttling. I'm in the process of stripping it down to improve performance, but I'm wondering: is there a set of strategies for systematically vetting a semantic model and setting it up for Direct Lake success? Currently I process everything in notebooks, store the results in lakehouses, and run a final stored procedure to write to a Gold Layer Warehouse.
u/wjwilson206 — 5 hours ago