Hi,
I have been testing Fabric for the last month and I really like the Direct Lake connection.
It made me wonder: if we use Direct Lake and no actual data is stored in the semantic model, how does this relate to the storage format chosen for that workspace?
At the workspace level I can choose between "Small semantic model storage format" and "Large semantic model storage format". If I am only going to use Direct Lake connections, are there benefits or constraints to choosing one over the other?
Hi @morz3d
Regarding the choice between "Small semantic model storage format" and "Large semantic model storage format" at the workspace level: this setting primarily affects how data is stored and managed within Fabric's semantic models themselves, rather than directly influencing the performance or capabilities of Direct Lake queries.
Because Direct Lake reads Delta tables directly from OneLake instead of importing the data into the semantic model, the storage format choice does not directly affect the performance of those queries. Your decision should therefore be guided by how you plan to use semantic models alongside Direct Lake. If your strategy leans heavily on Direct Lake with minimal reliance on import-mode semantic models, the "Small semantic model storage format" may be sufficient and more cost-effective. Conversely, if you anticipate complex operations within import-mode semantic models for a subset of your analytics needs, the "Large semantic model storage format" may offer the necessary scale and performance benefits.
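As a side note, the storage format can also be changed per semantic model (not just at the workspace level) through the Power BI REST API's Update Dataset In Group operation, which accepts a `targetStorageMode` of `"Abf"` (small) or `"PremiumFiles"` (large). A minimal sketch of building that PATCH request follows; the workspace and dataset IDs are placeholders, and actually sending the request would additionally require an Azure AD access token:

```python
import json

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_storage_mode_request(workspace_id: str, dataset_id: str, large_format: bool):
    """Build the URL and JSON body for the PATCH request that switches a
    semantic model between small ("Abf") and large ("PremiumFiles") storage format."""
    url = f"{API_BASE}/groups/{workspace_id}/datasets/{dataset_id}"
    body = {"targetStorageMode": "PremiumFiles" if large_format else "Abf"}
    return url, json.dumps(body)

# Placeholder IDs for illustration; substitute real workspace/dataset GUIDs.
url, payload = build_storage_mode_request("ws-guid", "ds-guid", large_format=True)
```

Sending this payload with an authenticated `PATCH` (for example via the `requests` library) would move the model to large storage format without touching the workspace-wide default.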
Best Regards,
Jayleny
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.