Hi Community
We have recently moved our semantic models from "Direct Lake on SQL" to "Direct Lake on OneLake". The models work fine, but we have noticed that VertiPaq Analyzer is no longer able to show the memory consumption of the model and its columns. We can see the temperature of columns but no longer the memory consumption. We see the same behavior if we try using DAX Studio, Tabular Editor 3, or a notebook with semantic link labs (labs.vertipaq_analyzer).
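For reference, this is roughly how we run it from a Fabric notebook; a minimal sketch assuming the semantic-link-labs package, with placeholder dataset and workspace names:

```python
# Minimal sketch of how we run Vertipaq Analyzer from a Fabric notebook,
# assuming the semantic-link-labs package; dataset/workspace names are placeholders.
import sempy_labs as labs

labs.vertipaq_analyzer(
    dataset="Sales Model",      # placeholder semantic model name
    workspace="Analytics",      # placeholder workspace name
)
# For the Direct Lake on OneLake model the output still shows column
# temperature, but the memory/size figures are no longer populated.
```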
It seems the model size is not something that can be reported when the model is in "Direct Lake on OneLake" mode. However, knowing the size of the model and its columns is rather important for judging whether you are close to the SKU limits for your capacity; for instance, F128 has a 50 GB memory limit.
Has anyone had the same issue, and if so, how do you then find out how much memory your models and columns consume?
Friendly regards
Kim Tutein
Hi @KimTutein,
Thank you for reaching out to the Microsoft Fabric community forum.
If I have misunderstood your needs or you still have problems, please feel free to let us know.
Best regards,
Ganesh Singamshetty.
Thank you for your answer. As far as I know, the Capacity Metrics app does not expose memory consumption but "only" capacity units (CUs). We are explicitly interested in how much memory a given semantic model uses, with a per-column breakdown. This is possible for "Import" models and "Direct Lake on SQL", but it seems it is not possible for "Direct Lake on OneLake".
Hello @KimTutein,
Thank you for clarifying your need for per-model and per-column memory consumption details, which are currently available for Import models and Direct Lake on SQL but not for Direct Lake on OneLake. As you noted, the Fabric Capacity Metrics App tracks Capacity Units (CUs) but doesn’t provide granular memory stats.
This limitation in Direct Lake on OneLake is due to its architecture: data is queried on demand from Delta tables, loading only the necessary segments into a temporary VertiPaq cache, so tools like VertiPaq Analyzer report only the metadata size. This is expected behavior, as outlined in the Direct Lake overview.
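If you want an indication of what is actually resident in memory at a point in time, one hedged option (not something I have verified specifically for Direct Lake on OneLake) is to warm the model with a few report queries and then read the column-segment statistics from a notebook with semantic link. The dataset/workspace names and the result column names in this sketch are assumptions to adapt to your environment:

```python
# Hedged sketch: read column-segment statistics after warming the model.
# Assumes semantic-link (sempy) and the DAX INFO.STORAGETABLECOLUMNSEGMENTS
# function; the result column names (ISRESIDENT, USED_SIZE, COLUMN_ID) follow
# the underlying DMV and should be verified against your own output.
import sempy.fabric as fabric

df = fabric.evaluate_dax(
    dataset="Sales Model",      # placeholder semantic model name
    workspace="Analytics",      # placeholder workspace name
    dax_string="EVALUATE INFO.STORAGETABLECOLUMNSEGMENTS()",
)

# Rough lower bound on the in-memory footprint: sum the size of segments
# that are currently resident, grouped per column.
resident = df[df["ISRESIDENT"] == True]
print(
    resident.groupby("COLUMN_ID")["USED_SIZE"]
    .sum()
    .sort_values(ascending=False)
    .head(20)
)
```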
Workaround to Estimate Memory Usage:
To better understand resource usage during query execution, you can use Power BI Desktop’s Performance Analyzer to check if queries process in Direct Lake mode or fall back to Direct Query, which may increase resource consumption. This can provide indirect clues about memory usage patterns. For setup, see Performance Analyzer documentation.
Best regards,
Ganesh Singamshetty