ChristopherKing
Regular Visitor

Dataflow Gen 1 Refresh Failures starting Last Week

We have a number of dataflows in one workspace (Premium capacity) that started failing during refresh last week, after working for months without issue. So far, I have seen two different error messages:

 

Error #1

Error: Compute engine failed: Insufficient memory. Please increase your compute engine/workload memory and try again. (8645). Details: A timeout occurred while waiting for memory resources to execute the query in resource pool 'default' (2). Rerun the query.

 

Error #2

There was a problem refreshing your dataflow. {"RootActivityId":"d8d04e68-36f7-4dd5-abeb-75ee885de2b1", "ErrorMessage":"Cannot acquire lock for model '797dbfa0-3988-499a-a807-e1b49a7c3377/1c1c7039-733f-4157-a2b0-387f536e5c22' because it is currently in use."}

 

For Error #1, from everything I have seen, dataflow memory should be managed by the service in premium capacity, and I don't see any settings I can change anyway.

 

For Error #2, the dataflow that produces this error is part of a chain: several flows all feed into one final dataflow for end-user use. However, there are no other refreshes occurring that should be locking the child dataflow when the error occurs.

 

I have other dataflows in another workspace using the same On-prem gateway and the same premium capacity that are refreshing without issues.

 

I have been able to work around both of these issues by removing the workspace from Premium capacity and then adding it back, but since these refreshes run late at night, I would rather not have to keep doing this. Are there any known issues that may be causing this?
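
For anyone needing to repeat this workaround, the capacity swap can in principle be scripted against the Power BI REST API ("Groups - Assign To Capacity"). A minimal sketch, assuming an Azure AD access token with admin rights on the workspace; the token and IDs are placeholders:

# Minimal sketch: move a workspace off Premium capacity and back,
# via the "Groups - Assign To Capacity" REST endpoint.
# The token, workspace ID, and capacity ID are placeholders.
import time
import requests

TOKEN = "<aad-access-token>"             # placeholder
WORKSPACE_ID = "<workspace-guid>"        # placeholder
PREMIUM_CAPACITY_ID = "<capacity-guid>"  # placeholder
SHARED = "00000000-0000-0000-0000-000000000000"  # all-zeros GUID = shared capacity

def assign_capacity(workspace_id: str, capacity_id: str) -> None:
    """Assign the workspace to the given capacity (all-zeros GUID unassigns to shared)."""
    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}/AssignToCapacity",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"capacityId": capacity_id},
    )
    resp.raise_for_status()

assign_capacity(WORKSPACE_ID, SHARED)               # drop to shared capacity
time.sleep(60)                                      # give the assignment time to settle
assign_capacity(WORKSPACE_ID, PREMIUM_CAPACITY_ID)  # move back to Premium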

1 ACCEPTED SOLUTION
v-veshwara-msft
Community Support

Hi @ChristopherKing ,
Thanks for using Microsoft Fabric Community and apologies for the delayed response.
Hope your issue got resolved. If not, please consider the following response.

Error #1 : "Insufficient memory (8645)"
Even though you're using Premium capacity, if the dataflow connects through an on-premises gateway, the hardware specs of the gateway can impact performance. It's recommended to have:

At least 2 cluster members with 16 GB RAM, or ideally 4 with 32 GB

Good CPU, disk speed, and stable network

This could help reduce memory-related refresh failures, especially if your data size or query complexity has grown.
Solved: Dataflow process error - Microsoft Fabric Community

 

Error #2 : "Cannot acquire lock for model"
This usually happens when a refresh is interrupted or fails to complete cleanly. The lock often clears after 24 hours, but in rare cases it may persist. Microsoft support can clear it manually if needed.
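
One way to check whether a stale refresh is still holding the lock is to list the dataflow's recent refresh transactions through the REST API ("Dataflows - Get Dataflow Transactions"). A minimal sketch, assuming an Azure AD access token; the token and IDs are placeholders:

# Minimal sketch: list recent refresh transactions for a dataflow to spot
# an old refresh that is still in progress and may be holding the lock.
# The token and IDs are placeholders.
import requests

TOKEN = "<aad-access-token>"       # placeholder
WORKSPACE_ID = "<workspace-guid>"  # placeholder
DATAFLOW_ID = "<dataflow-guid>"    # placeholder

resp = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/dataflows/{DATAFLOW_ID}/transactions",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for tx in resp.json().get("value", []):
    # Each transaction carries a status (e.g. InProgress, Success, Failed).
    print(tx.get("id"), tx.get("status"), tx.get("startTime"), tx.get("endTime"))

A transaction still showing as in progress long after its window would point to an interrupted refresh that never released the lock.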

Workaround suggestion:
Reassign the dataflow connection to a different gateway (even temporarily). This helps reset the internal connection mapping and resolve lock errors. 
Here is the detailed workaround:

Solved: Re: "CDSALockAcquireError" when attempting to save... - Microsoft Fabric Community

 

If the issue still persists even after the above steps, please raise a support ticket with Microsoft through this link.

Hope this helps. Please reach out for further assistance.
If this post helps, then please consider accepting it as the solution to help other members find it more quickly; kudos are always appreciated.

 

Thank you.

 

 


5 REPLIES

Error #1 has not occurred again since I removed the workspace from the premium capacity and added it back.

 

For Error #2, I have made some changes to the dataflows that were presenting the issue, and it seems to be corrected; however, I will keep in mind the idea of changing the gateway connection if it occurs again in the future. I use a deployment pipeline; will changing the connection in the Production stage of the pipeline cause any issues if I need to deploy changes in the future? I would think not, but I'd just like to verify before I do.

 

Thank you,

Chris

Hi Chris,

Thanks for the update. Glad to hear Error #1 hasn't reappeared and that the changes you made seem to have helped with Error #2 as well.

Regarding your question about the deployment pipeline:
If you change the connection in the Production stage, it won't break future deployments as long as the connection settings (like name and type) are consistent across stages.

 

If the connection in Production differs from Test/Dev, the deployment won't overwrite the connection itself.

You may need to manually update or reassign the connection in Production after a deployment if anything changes at the source level.

So it's safe to do, just worth keeping track of in case you need to realign connections after deploying updates.

 

Hope this helps. Please reach out for further assistance.

Would you mind marking any helpful reply (or your own) as the Accepted Solution, to help others with similar issues find the answer quickly? Kudos are always appreciated.

Thank you.

Poojara_D12
Super User

Hi @ChristopherKing

The issues you're encountering with your dataflows—particularly Error #1 (compute engine memory timeout) and Error #2 (lock contention)—are known to occur in Power BI Premium when the capacity is under pressure or not optimized for concurrent workloads. While Premium capacity is designed to handle large datasets and manage memory dynamically, it does have resource limits, especially when multiple complex dataflows or chained dependencies are scheduled closely together. Error #1 suggests that the capacity is running out of memory while executing your dataflow, which can happen if the workload is too high during refresh windows or if recent model/dataflow changes increased memory demands. Error #2 points to a conflict in accessing shared resources, likely caused by overlapping dataflow refresh schedules, particularly in chained dataflows where one depends on another completing first. Although removing and re-adding the workspace to premium resets the capacity state temporarily, this is not sustainable.

To resolve this, first review the refresh schedules in the workspace and stagger them to ensure that dependent dataflows do not run concurrently. Use the Power BI Capacity Metrics app to monitor memory usage, query durations, and refresh overlaps to identify peak load times. Additionally, optimize your dataflows by reducing transformations where possible, using computed tables efficiently, and splitting large dataflows into smaller, modular ones that can refresh independently. If capacity pressure remains high, consider assigning a dedicated capacity or increasing your memory allocation per workload in the Power BI Admin Portal under Capacity Settings → Workloads → Dataflow workload. Lastly, ensure the Power BI service is up-to-date by checking the Power BI Support site for any ongoing service incidents, as backend changes or bugs can occasionally impact stability.
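
If you prefer to manage the staggering programmatically, dataflow refresh schedules can also be updated through the REST API ("Dataflows - Update Refresh Schedule"). A minimal sketch, assuming an Azure AD access token; the token, IDs, and times are placeholders, and the request shape mirrors the documented refresh-schedule payload:

# Minimal sketch: stagger refresh times across several dataflows so that
# dependent flows do not run concurrently. Token, IDs, and times are placeholders.
import requests

TOKEN = "<aad-access-token>"       # placeholder
WORKSPACE_ID = "<workspace-guid>"  # placeholder

# Hypothetical mapping of dataflow IDs to staggered start times.
SCHEDULES = {
    "<dataflow-1-guid>": "23:00",
    "<dataflow-2-guid>": "23:30",
    "<dataflow-3-guid>": "00:00",
}

for dataflow_id, start_time in SCHEDULES.items():
    resp = requests.patch(
        f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
        f"/dataflows/{dataflow_id}/refreshSchedule",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "value": {
                "enabled": True,
                "days": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
                "times": [start_time],
                "localTimeZoneId": "UTC",
            }
        },
    )
    resp.raise_for_status()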

 

Did I answer your question? Mark my post as a solution; this will help others!
If my response(s) assisted you in any way, don't forget to drop me a "Kudos"

Kind Regards,
Poojara - Proud to be a Super User
Data Analyst | MSBI Developer | Power BI Consultant
Consider subscribing to my YouTube channel for beginner/advanced concepts: https://youtube.com/@biconcepts?si=04iw9SYI2HN80HKS

Thank you for the response. Fortunately, Error #1 has not occurred again since the first time I removed the workspace from premium capacity and added it back. Also, even when that specific workspace was experiencing the issue, I had 2 other workspaces in the same capacity that had no issue with dataflow refreshes. I have looked at the Fabric Capacity Metrics app, and I see no issues with our usage. I don't think I have ever seen our usage exceed 50%.

[Screenshot: Fabric Capacity Metrics report showing usage below 50%]

I have also looked in Capacity settings, and I do not see a section under Power BI workloads to manage the dataflow workload. We are in GCC, so I don't know if that is different, but these are the options I see:

[Screenshot: available Power BI workload options in Capacity settings]

As for Error #2, I have 5 dataflows that extract data from our SQL servers. These feed into 1 dataflow for end users to consume. The only one of these dataflows that has a scheduled refresh is the first one, which refreshes nightly at 11 pm. I then have Power Automate cloud flows that begin the refresh of the next flow in the series one minute after the previous flow completes.

The issue I am seeing, however, occurs with the first flow of the night. It normally runs for about 20 to 30 minutes before completing. When this locking issue occurs, it will run for 30 minutes, the refresh log will show complete for all tables, but the refresh does not finish. A few minutes later, it fails, and the log then only has one line with the above error message. Since the last refresh usually completes around 2 in the morning, this is occurring about 21 hours after there has been any refresh activity on any of the dataflows. This issue occurred last week, then cleared up over the weekend, then began again last night.
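
For reference, the chaining described above is roughly equivalent to triggering each refresh and polling its transaction log via the REST API ("Dataflows - Refresh Dataflow" plus "Dataflows - Get Dataflow Transactions"). A minimal sketch with a placeholder token and IDs; the status strings are assumptions, and GCC tenants use a different API base URL (e.g. api.powerbigov.us):

# Minimal sketch: refresh a chain of dataflows sequentially by triggering
# each refresh and polling its latest transaction until it completes.
# Token and IDs are placeholders; status strings are assumed, and GCC
# tenants use a different base URL (e.g. api.powerbigov.us).
import time
import requests

TOKEN = "<aad-access-token>"       # placeholder
WORKSPACE_ID = "<workspace-guid>"  # placeholder
CHAIN = ["<dataflow-1-guid>", "<dataflow-2-guid>", "<final-dataflow-guid>"]  # placeholders

BASE = f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/dataflows"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def refresh_and_wait(dataflow_id: str, poll_seconds: int = 60) -> None:
    # Trigger the refresh; the service runs it asynchronously.
    resp = requests.post(
        f"{BASE}/{dataflow_id}/refreshes",
        headers=HEADERS,
        json={"notifyOption": "MailOnFailure"},
    )
    resp.raise_for_status()
    # Poll the newest transaction (assumed to be listed first) until done.
    while True:
        time.sleep(poll_seconds)
        tx = requests.get(f"{BASE}/{dataflow_id}/transactions", headers=HEADERS)
        tx.raise_for_status()
        latest = (tx.json().get("value") or [None])[0]
        if latest and latest.get("status") != "InProgress":
            if latest.get("status") != "Success":
                raise RuntimeError(f"Dataflow {dataflow_id} refresh failed")
            return

for dataflow_id in CHAIN:
    refresh_and_wait(dataflow_id)
    time.sleep(60)  # one-minute gap, matching the Power Automate setup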

 

Thank you,

Chris

 

 
