AntoineW
Resolver I

OneLake Integration for DirectQuery Semantic Models – How to expose data in OneLake?

Hi everyone,

I'm currently working on a scenario where we have semantic models in DirectQuery mode, connected to an on-premises SSAS cube. These models are already in production and widely used in our Power BI reports.

 

🎯 Business Goal

We want to make the data from our semantic models available in OneLake, so that business users, analysts, and developers can:

  • Access it from Notebooks, Lakehouses, Spark, T-SQL, etc.

  • Reuse it outside Power BI for their own insights and use cases

 

Limitation encountered

The OneLake Integration for Semantic Models feature, as documented here, only supports Import mode models.

However, in our case, switching to Import mode is not a viable option because:

  • We don't yet know the full impact on performance and business continuity

  • Rebuilding the entire semantic model would be too time-consuming and resource-heavy

  • Our data volume makes Import mode difficult to scale and manage long term

 

My questions

Given that OneLake Integration isn’t available for DirectQuery models, what are the recommended alternatives to expose semantic model data in OneLake or Lakehouse environments?

 

💬 Open to community input

I'd love to hear your feedback, ideas, or experience if you've dealt with a similar scenario.
Any guidance on how to strike the right balance between performance, governance, and self-service would be greatly appreciated!

 

Thanks a lot in advance!

1 ACCEPTED SOLUTION
Rufyda
Kudo Kingpin

Hi,

Thank you for sharing your scenario. This is a common challenge in the current Microsoft Fabric ecosystem, given the limitation that OneLake Integration does not support DirectQuery semantic models.

Key considerations and recommended approaches:
Data replication via Lakehouses or Dataflows
Since OneLake Integration supports only Import mode, a common practice is to create dedicated Lakehouse tables or Dataflow Gen2 datasets that replicate or aggregate the key data from your SSAS cube. This makes the data broadly accessible (Notebooks, Spark, T-SQL) while keeping governance and refresh scheduling centralized.
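One way to seed such a Lakehouse table is to query the existing semantic model from a Fabric notebook with Semantic Link (sempy) and write the result as a Delta table. A minimal sketch follows; the model and table names ("SalesModel", "Sales") are placeholders, and the sempy/Spark calls only run inside a Fabric notebook, so they are shown as comments:

```python
def table_dax(table: str) -> str:
    """Build a DAX query that returns every row of a model table.
    Single quotes around the table name allow spaces in the name.
    For large tables, filter or aggregate in DAX instead of
    pulling every row."""
    return f"EVALUATE '{table}'"

# Inside a Fabric notebook, you would run something like:
#   import sempy.fabric as fabric
#   df = fabric.evaluate_dax("SalesModel", table_dax("Sales"))
#   spark.createDataFrame(df).write.mode("overwrite").saveAsTable("sales")
```

Scheduling that notebook keeps the Lakehouse copy reasonably fresh without touching the production DirectQuery model.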

Leverage data virtualization or Synapse Link
If your on-premises data platform can be integrated with Azure Synapse or a similar data virtualization layer, syncing or exposing data via these platforms into Lakehouses can provide scalable, performant access for analytics beyond Power BI.

API or service-based data access
For use cases requiring programmatic access, consider exposing the semantic model data through the Power BI REST API or OData feeds, which developers and analysts can consume directly.
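For example, the documented Power BI REST endpoint `POST /datasets/{datasetId}/executeQueries` accepts a DAX query and also works against DirectQuery models, subject to service limits on rows and request size. A hedged sketch of the request construction (the dataset ID and the HTTP client in the comment are illustrative):

```python
import json

PBI_API = "https://api.powerbi.com/v1.0/myorg"

def execute_queries_request(dataset_id: str, dax: str) -> tuple[str, str]:
    """Build the (url, json_body) pair for the Power BI
    executeQueries REST endpoint."""
    url = f"{PBI_API}/datasets/{dataset_id}/executeQueries"
    body = json.dumps({
        "queries": [{"query": dax}],
        "serializerSettings": {"includeNulls": True},
    })
    return url, body

# With an Azure AD access token, a caller would POST it, e.g.:
#   requests.post(url, data=body, headers={
#       "Authorization": f"Bearer {token}",
#       "Content-Type": "application/json"})
```

This keeps consumers on a stable HTTP interface while the model itself stays in DirectQuery mode.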

Hybrid architecture
Maintain DirectQuery models for real-time operational reporting, while simultaneously building Import-based Lakehouse datasets for broader analytical and self-service scenarios. This balances performance, data freshness, and usability.


If this answered your question, please consider clicking Accept Answer and Yes if you found it helpful.
If you have any other questions or need further assistance, feel free to let us know — we’re here to help.



2 REPLIES
v-lgarikapat
Community Support

Hi @AntoineW ,

Thanks for reaching out to the Microsoft Fabric community forum.

@Rufyda Thanks for your prompt response.

 

In addition to tagging @Rufyda, I’ve included previously resolved threads and a relevant blog post that may help you better understand and resolve the issue.

Semantic Link: OneLake integrated Semantic Models | Microsoft Fabric Blog | Microsoft Fabric

Solved: Data update in report combining Direct Query and O... - Microsoft Fabric Community

 

We appreciate your engagement and thank you for being an active part of the community.

 

Best Regards,

Lakshmi Narayana


