Ostrzak
Helper II

Problems with refreshing of semantic model - continuation

Hi,

 

I'm experiencing problems similar to those described in this post:

Solved: Problems with refreshing of semantic model - Microsoft Fabric Community

 

For whatever reason, the semantic model is not refreshing properly. The only fix is to rebuild it from scratch, which is laborious and inefficient. I tried the "DirectQuery only" approach, but with the capacity I have at my disposal it works very slowly.

 

It seems that the Direct Lake connection malfunctions at times. I get that this is a great feature, but not knowing exactly how it works under the hood makes it difficult to troubleshoot. For example: how does the "in-memory querying" actually work? Are there additional Parquet files stored somewhere for the model's sake? If a model refresh only fetches the newest metadata, when does it reload any data, so that I know the data in the model is really refreshed?

 

When it works seamlessly, Direct Lake seems to be the best of both worlds (DirectQuery and Import). But when it lags, there is no way to know exactly why, and I really wish there were a way to look into it.

 

Am I missing something? I guess I could create an Import semantic model from the Power BI experience, but I would rather avoid that.

 

Thank you in advance for any help.

 

3 Replies
Ostrzak
Helper II

Hi @Anonymous 

 

That is a good overview of the Microsoft documentation on the issue, kudos for that.

 

I read through several articles and I don't think there is a clear-cut solution to the problem. It seems to me that the automatic refresh of the semantic model causes problems, since there is no proper handling of bigger ETL processes (several tables being overwritten at different stages). It is even mentioned in the documentation (see screenshot):

[Screenshot: excerpt from the Microsoft documentation]

 

After I disabled the automatic update and switched to manual/pipeline refreshes only, it seems to work better.
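
For reference, here is a minimal sketch of what such a pipeline refresh can look like when run from a Fabric notebook as the last step of the ETL (assuming the semantic-link / sempy package available in Fabric notebooks; the model name is a placeholder):

```python
# Minimal sketch: trigger the semantic model refresh as the last step of the
# ETL, only after all tables have been overwritten.
# Assumes the semantic-link (sempy) package available in Fabric notebooks;
# "Sales Model" is a placeholder name.
import sempy.fabric as fabric

fabric.refresh_dataset(
    dataset="Sales Model",
    refresh_type="full",  # for a Direct Lake model this reframes it to the latest data
)
```

Running the refresh as an explicit final step avoids the mid-ETL reframing that the automatic update can trigger while tables are still being overwritten.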

 

 

Anonymous
Not applicable

Thank you for your kudos.

 

As for the result you observed, my understanding is that manual or pipeline refreshes might change the priority of the refresh tasks.

 

Typically, manual refreshes (on-demand refreshes) have a higher priority to allow users to see the changes or results of their actions more quickly. On the other hand, automatic refreshes usually occur as background operations, and their execution priority is automatically arranged by the backend based on available resources and other factors.

 

However, as users, it is difficult for us to know the specific execution mechanism of the backend.

 

Best Regards,
Jing

Anonymous
Not applicable

Hi @Ostrzak 

 

Power BI suspends automatic updates when a non-recoverable error is encountered during refresh. A non-recoverable error can occur, for example, when a refresh fails after several attempts. So, make sure your semantic model can be refreshed successfully. You can go to Refresh history to check the status of refreshes for a semantic model. Check if there are any failures there. 

[Screenshot: Refresh history for a semantic model]
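
If you prefer to check this programmatically, here is a hedged sketch (assuming the semantic-link / sempy package in a Fabric notebook; the model name is a placeholder):

```python
# Hedged sketch: read the refresh history of a semantic model and look for
# failed attempts. Assumes the semantic-link (sempy) package;
# "Sales Model" is a placeholder dataset name.
import sempy.fabric as fabric

history = fabric.list_refresh_requests(dataset="Sales Model")
# The returned DataFrame includes one row per refresh request with its
# status; inspect it for failures.
print(history.head(10))
```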

For a custom semantic model in Direct Lake mode, you can try refreshing it manually or configuring a scheduled refresh for it when you notice that the data is not updated.

[Screenshot: manual refresh and scheduled refresh options for a semantic model]

 

A Direct Lake semantic model refresh operation might evict all resident columns from memory. That means the first queries after a refresh of a Direct Lake semantic model could experience some delay as columns are loaded into memory. Delays might only be noticeable when you have extremely large volumes of data. To avoid such delays, consider warming the cache by programmatically sending a query to the semantic model. A convenient way to send a query is to use semantic link. This operation should be done immediately after the refresh operation finishes.

 

But notice that warming the cache might only make sense when delays are unacceptable. Take care not to unnecessarily load data into memory that could place pressure on other capacity workloads, causing them to throttle or become deprioritized. Reference: Manage Direct Lake semantic models - Microsoft Fabric | Microsoft Learn
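
As a sketch of what such cache warming could look like with semantic link (assuming the sempy package; the model, table, and column names below are placeholders, so substitute the columns your reports actually use):

```python
# Hedged cache-warming sketch: send a lightweight DAX query that touches the
# most-used columns immediately after the refresh finishes.
# Assumes the semantic-link (sempy) package; all names are placeholders.
import sempy.fabric as fabric

warm_query = """
EVALUATE
ROW(
    "customers", DISTINCTCOUNT('Sales'[CustomerKey]),
    "first_order", MIN('Sales'[OrderDate])
)
"""
# Referencing the columns forces them to be transcoded back into memory.
fabric.evaluate_dax(dataset="Sales Model", dax_string=warm_query)
```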

 

To learn more about how Direct Lake works and the data storage behind it, please refer to the following documents:

Understand storage for Direct Lake semantic models - Microsoft Fabric | Microsoft Learn

How Direct Lake works 

 

In my understanding, Direct Lake semantic models only load the needed columns into memory, not all the data, and columns might be evicted from memory for various reasons. Please refer to the Column loading (transcoding) section.
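
If you want to see this for yourself, one hedged option is to query the storage segment metadata and check the residency flag (this assumes the sempy package and an engine version that supports the DAX INFO functions; the column names below mirror the underlying DMV, so verify them on your engine version, and the model name is a placeholder):

```python
# Hedged sketch: inspect which column segments are currently resident in
# memory via the DAX INFO.STORAGETABLECOLUMNSEGMENTS function.
# Assumes the semantic-link (sempy) package and a recent engine that exposes
# the INFO functions; "Sales Model" is a placeholder name.
import sempy.fabric as fabric

residency = fabric.evaluate_dax(
    dataset="Sales Model",
    dax_string="""
EVALUATE
SELECTCOLUMNS(
    INFO.STORAGETABLECOLUMNSEGMENTS(),
    "Table", [TABLE_ID],
    "Column", [COLUMN_ID],
    "Resident", [ISRESIDENT],
    "Temperature", [TEMPERATURE]
)
""",
)
print(residency.head(20))
```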

 

Best Regards,
Jing
If this post helps, please Accept it as Solution to help other members find it. Appreciate your Kudos!
