Anonymous
Not applicable

Dataflow updated without errors but the result is not complete

Hi,

We created several Dataflows Gen2 that are executed regularly by a Data Pipeline.
While setting up the Pipeline, we checked several times that all dependencies, such as data sources, data sinks, connections, and table schemas, were set up correctly.

Nevertheless, at irregular intervals, some of our resulting tables are incomplete or remain in the state from before the update,
without any errors or problems being recorded in the Data Pipeline log or in the Dataflow logs.
SQL queries against our Lakehouses revealed that individual tables, and therefore individual dataflow queries, were not executed.
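For reference, a minimal sketch of the kind of staleness check we ran, assuming the tables are Delta tables queried from a Fabric notebook (where a `spark` session is already available); the table names and refresh window are placeholders:

```python
# Flag Lakehouse tables whose Delta commit history shows no write since the
# last scheduled pipeline run. Runs in a Fabric notebook; `spark` is provided
# by the notebook runtime. Table names and the window are placeholders.
from datetime import datetime, timedelta, timezone

TABLES = ["bronze_source_a", "silver_source_a", "gold_combined"]  # placeholders
cutoff = datetime.now(timezone.utc) - timedelta(hours=6)  # expected max age

for name in TABLES:
    # DESCRIBE HISTORY is standard Delta Lake SQL; the first row returned
    # is the most recent commit (WRITE, MERGE, ...) on the table.
    last = spark.sql(f"DESCRIBE HISTORY {name} LIMIT 1").collect()[0]
    last_write = last["timestamp"]
    if last_write.tzinfo is None:
        last_write = last_write.replace(tzinfo=timezone.utc)
    if last_write < cutoff:
        print(f"STALE: {name} last written {last_write} by {last['operation']}")
```

Checking the commit history rather than row counts matters here, because a row count can stay unchanged even after a refresh that ran successfully.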

We checked that there was sufficient capacity available at the time of the incomplete Pipeline executions.
The execution time of the Pipeline activities also did not deviate from the expected average.

Normally, restarting the Pipeline manually fixes the error, but this workaround is not sufficient for our requirements.

Has anyone had similar experiences?
Are there any known solutions or at least troubleshooting measures?
1 ACCEPTED SOLUTION
v-karpurapud
Community Support

Hi @Anonymous 

May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution, so that other community members with similar problems can find it faster.

Thank you

7 REPLIES
v-karpurapud
Community Support

Hi @Anonymous 

I hope this information is helpful. Please let me know if you have any further questions or if you'd like to discuss this further. If this answers your question, please Accept it as a solution and give it a 'Kudos' so others can find it easily.

Thank you.

v-karpurapud
Community Support

Hi @Anonymous 

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.

Thank you.

Anonymous
Not applicable

Hi,
I apologize for the delay in replying.
I have been monitoring our dataflows recently, and there have been no further unlogged errors. During this period, development was slowed down so that less capacity was used. Our assumption seems to be correct: high utilization of our capacity, and the resulting deprioritization of individual activities, is related to this behavior.

v-karpurapud
Community Support

Hi @Anonymous 

Thank you for reaching out to the Microsoft Community Forum.
 

We understand you are experiencing an intermittent issue where Dataflows Gen2 in Microsoft Fabric fail to update some tables when triggered via a Data Pipeline, despite showing no errors in the logs.
 

If multiple Dataflows Gen2 are running in parallel within the pipeline, some dependencies might not be correctly respected. This could be due to race conditions, where one Dataflow starts before the previous Dataflow has finished updating the necessary tables.

Ensure each Dataflow has a clear dependency in the pipeline and use "Wait for completion" in the Execute Dataflow Gen2 activity. If Dataflow B depends on Dataflow A, explicitly set Dataflow A's successful completion as a prerequisite before executing Dataflow B, as sketched below.
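For illustration, a chain of two such activities looks roughly like the fragment below in the pipeline's JSON definition. This is a hedged sketch rather than an exported definition: the activity type name and property layout follow the Data Factory-style schema that Fabric pipelines share, and all names and IDs are placeholders.

```json
{
  "name": "Run Dataflow B",
  "type": "RefreshDataflow",
  "dependsOn": [
    {
      "activity": "Run Dataflow A",
      "dependencyConditions": [ "Succeeded" ]
    }
  ],
  "typeProperties": {
    "workspaceId": "<workspace-guid>",
    "dataflowId": "<dataflow-b-guid>"
  }
}
```

The "Succeeded" dependency condition is what holds Dataflow B back until Dataflow A's activity reports success; with "Wait for completion" enabled, that report only happens once the refresh itself has finished, so failures propagate to the pipeline run.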

 

Microsoft Fabric does not always log partial failures if the execution itself does not return an explicit error. Even if capacity was available, dynamic resource allocation may cause some Dataflows to be deprioritized. Instead of triggering all Dataflows at once, stagger them using pipeline dependencies.

Dataflows Gen2 store temporary intermediate results in a staging area before writing to the final destination. If the staging step encounters an issue (e.g., a Lakehouse connection interruption), the final table might not be written without any error being logged. Enable "Store staging data in OneLake" in the Dataflow settings to persist intermediate results.


If my response has resolved your query, please mark it as the Accepted Solution to assist others. Additionally, a 'Kudos' would be appreciated if you found my response helpful.

Thank You.

Anonymous
Not applicable

Hi,

In my opinion, race conditions should not be a problem in our system.
We use a data pipeline in which we execute 15 interdependent activities.
Our system follows the medallion architecture: we transform data from two independent sources at the bronze and silver levels in parallel and then combine them at our gold level.
The pipeline covers the entire workflow from data ingestion in Azure Data Factory to transformations in several data flows and updating the semantic model.
All dataflows wait for the (successful) completion of their predecessors.
Lakehouse tables are never read or written to by multiple dataflows at the same time in our pipeline.

You mentioned that Fabric does not always log partial errors. Is there a way to get more detailed insights into the execution of dataflows or pipeline activities beyond the logs in the monitoring tab? Is it possible to see the prioritization of dynamic resource allocation? We already use the Microsoft Fabric Capacity Metrics App.

v-karpurapud
Community Support

Hi @Anonymous


We understand that race conditions are unlikely in your case, and we appreciate your detailed explanation of the medallion architecture and pipeline execution dependencies. Since Fabric does not always log partial errors, we recommend the following steps:

Open Dataflow Gen2 → Monitor → Execution History to review execution times, warnings, and table updates. Since the Monitoring tab only shows high-level logs, enable detailed execution logging at the pipeline level; the run history can also be inspected programmatically, as sketched below.
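As one option for such deeper inspection, here is a minimal sketch that pulls an item's run history through the Fabric REST API's job scheduler endpoint (List Item Job Instances). The IDs and token are placeholders; in practice you would acquire the token through your own Entra ID / MSAL flow.

```python
# Minimal sketch: list the recent run history of a pipeline (or dataflow)
# item via the Fabric job scheduler REST API. Assumes a valid bearer token
# with Fabric API scope; all IDs below are placeholders.
import requests

WORKSPACE_ID = "<workspace-guid>"        # placeholder
ITEM_ID = "<pipeline-or-dataflow-guid>"  # placeholder
TOKEN = "<bearer-token>"                 # placeholder; acquire via MSAL etc.

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{ITEM_ID}/jobs/instances"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for job in resp.json().get("value", []):
    # Each instance reports status and timing; failureReason can surface
    # details that are not shown in the Monitoring tab's high-level view.
    print(job.get("startTimeUtc"), job.get("endTimeUtc"),
          job.get("status"), job.get("failureReason"))
```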

 

Check throttle events and CPU/memory spikes in the Fabric Capacity Metrics App to detect resource prioritization issues. Compare execution order and delays across multiple runs to see if specific Dataflows are consistently deprioritized; a sketch of such a comparison follows.
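A small, self-contained sketch of that comparison. The run data here is hypothetical sample data; in practice you would populate it from the Monitoring hub or from the job-instance listing sketched above.

```python
# Compare the start order of dataflow activities across two pipeline runs
# to spot activities that are consistently pushed back (deprioritized).
# Activity names and timestamps are hypothetical sample data.
from datetime import datetime

runs = {
    "run_1": {"DF_Bronze_A": "2025-07-01T02:00:05",
              "DF_Bronze_B": "2025-07-01T02:00:07",
              "DF_Gold": "2025-07-01T02:14:30"},
    "run_2": {"DF_Bronze_A": "2025-07-02T02:00:04",
              "DF_Bronze_B": "2025-07-02T02:09:51",  # notably later
              "DF_Gold": "2025-07-02T02:25:12"},
}

for run, activities in runs.items():
    # Sort each run's activities by start time to see the effective order.
    order = sorted(activities, key=lambda a: datetime.fromisoformat(activities[a]))
    print(run, "start order:", " -> ".join(order))

# Flag activities whose start delay, relative to the earliest activity in
# the same run, grew between the two runs.
for name in runs["run_1"]:
    d1 = datetime.fromisoformat(runs["run_1"][name]) - min(
        datetime.fromisoformat(t) for t in runs["run_1"].values())
    d2 = datetime.fromisoformat(runs["run_2"][name]) - min(
        datetime.fromisoformat(t) for t in runs["run_2"].values())
    if d2 > d1:
        print(f"{name}: start delay grew from {d1} to {d2}")
```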

 

Thank you.
