Hi everyone,
I'm currently working on a scenario where we have semantic models in DirectQuery mode, connected to an on-premises SSAS cube. These models are already in production and widely used in our Power BI reports.
We want to make the data from our semantic model available in OneLake, so that business users, analysts, and developers can:
Access it from Notebooks, Lakehouses, Spark, T-SQL, etc.
Reuse it outside Power BI for their own insights and use cases
The OneLake Integration for Semantic Models feature, as documented here, only supports Import mode models.
However, in our case, switching to Import mode is not a viable option because:
We don’t know the full impact on performance and business continuity
Rebuilding the entire semantic model would be too time-consuming and resource-heavy
Our data volume makes Import mode difficult to scale and manage long term
Given that OneLake Integration isn’t available for DirectQuery models, what are the recommended alternatives to expose semantic model data in OneLake or Lakehouse environments?
I'd love to hear your feedback, ideas, or experience if you've dealt with a similar scenario.
Any guidance on how to strike the right balance between performance, governance, and self-service would be greatly appreciated!
Thanks a lot in advance!
Hi,
Thank you for sharing your scenario—this is a common challenge in the current Microsoft Fabric ecosystem given the limitations around OneLake Integration for DirectQuery semantic models.
Key considerations and recommended approaches:
Data replication via Lakehouses or Dataflows
Since OneLake Integration supports only Import mode, a practical alternative is to replicate or aggregate the key data from your SSAS cube into a dedicated Lakehouse, for example with Dataflow Gen2 or a data pipeline. This approach enables broad accessibility (Notebooks, Spark, T-SQL) while maintaining centralized governance and scheduled refreshes.
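One hedged way to sketch that replication from a Fabric notebook is the semantic-link (sempy) library, which can evaluate a DAX query against a published semantic model and return a pandas DataFrame that you then persist as a Delta table in the Lakehouse. Whether this works through the gateway for a DirectQuery-over-SSAS model is something to validate in your tenant; the model name, DAX query, and table name below are placeholders:

```python
# Hypothetical sketch: intended to run inside a Microsoft Fabric Spark
# notebook, where the semantic-link library (sempy) and the notebook's
# `spark` SparkSession are available. "SalesModel" and "sales_by_year"
# are placeholder names, not from the original thread.
DAX_QUERY = """
EVALUATE
SUMMARIZECOLUMNS('Date'[Year], "Total Sales", [Sales Amount])
"""

def materialize_dax_to_lakehouse(dataset: str, dax: str, table: str) -> None:
    """Evaluate a DAX query against a semantic model and save it as a Delta table."""
    import sempy.fabric as fabric  # ships preinstalled in Fabric notebooks

    pdf = fabric.evaluate_dax(dataset, dax)   # returns a pandas DataFrame
    sdf = spark.createDataFrame(pdf)          # `spark` is notebook-provided
    sdf.write.mode("overwrite").format("delta").saveAsTable(table)

# In a Fabric notebook you would call, for example:
# materialize_dax_to_lakehouse("SalesModel", DAX_QUERY, "sales_by_year")
```

Scheduling this notebook gives you a refreshed Delta table in OneLake that downstream Spark, T-SQL, and Notebook users can query without touching the DirectQuery model.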
Leverage data virtualization or Synapse Link
If your on-premises data platform can be integrated with Azure Synapse or a similar data virtualization layer, syncing or exposing data via these platforms into Lakehouses can provide scalable, performant access for analytics beyond Power BI.
API or service-based data access
For use cases requiring programmatic access, consider exposing the semantic model data through APIs or OData feeds, which developers and analysts can consume directly.
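As one illustration of the programmatic route, the Power BI REST API exposes an executeQueries endpoint that accepts DAX against a published semantic model. A minimal sketch in Python; the dataset ID, DAX query, and Azure AD token acquisition are placeholders you would supply, and the endpoint's availability for your DirectQuery model should be verified in your tenant:

```python
import json
import urllib.request

PBI_API = "https://api.powerbi.com/v1.0/myorg"

def build_execute_queries_payload(dax: str) -> dict:
    """Request body expected by the Power BI executeQueries REST endpoint."""
    return {
        "queries": [{"query": dax}],
        "serializerSettings": {"includeNulls": True},
    }

def execute_dax(dataset_id: str, dax: str, access_token: str) -> dict:
    """POST a DAX query to a dataset; needs an AAD token with Dataset.Read.All."""
    req = urllib.request.Request(
        f"{PBI_API}/datasets/{dataset_id}/executeQueries",
        data=json.dumps(build_execute_queries_payload(dax)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Analysts can then consume the JSON result from scripts or services without opening Power BI, though API rate limits and row caps on executeQueries make this better suited to targeted extracts than bulk replication.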
Hybrid architecture
Maintain DirectQuery models for real-time operational reporting, while simultaneously building Import-based Lakehouse datasets for broader analytical and self-service scenarios. This balances performance, data freshness, and usability.
If this answered your question, please consider clicking Accept Answer and Yes if you found it helpful.
If you have any other questions or need further assistance, feel free to let us know — we’re here to help.
Hi @AntoineW ,
Thanks for reaching out to the Microsoft Fabric community forum.
@Rufyda, thanks for your prompt response.
In addition to tagging @Rufyda, I’ve included previously resolved threads and a relevant blog post that may help you better understand and resolve the issue.
Semantic Link: OneLake integrated Semantic Models | Microsoft Fabric Blog | Microsoft Fabric
Solved: Data update in report combining Direct Query and O... - Microsoft Fabric Community
We appreciate your engagement and thank you for being an active part of the community.
Best Regards,
Lakshmi Narayana