To test the AutoML feature, I created a new dataflow in the workspace and trained a model with it. The results were satisfactory, so I scheduled the dataflow for refresh.
Today, a BI error was detected, and upon investigation, the following message was found: "ErrorMessage":"Dataflow refresh failed because your organization's Fabric compute capacity has exceeded its limits. Try again later. Learn more at https://aka.ms/capacitymessages"
I have identified three problematic dataflows in the Fabric Capacity Metrics app, and I'm now considering how to resolve the situation.
Currently, the dataflows in this workspace are unusable, and I'm unable to update any related BI materials.
Is there anyone who can help with these questions?
Solved!
- Learn about Overages and Burndowns. Capacity throttling will resolve itself after 24 hours, unless you keep accruing CU debt.
- Consider enabling Autoscale (it comes at additional cost!).
- Abandoning the workspace will resolve nothing. What would help is moving the workspace to another, non-overloaded capacity - assuming you have that option, of course.
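While the burndown completes, one way to stop accruing more CU debt is to disable the refresh schedules of the problematic dataflows. A minimal sketch below, assuming the Power BI REST API "Dataflows - Update Refresh Schedule" endpoint; the workspace ID, dataflow IDs, and token acquisition are placeholders you must supply yourself (e.g. via MSAL or `az account get-access-token`):

```python
import json
import urllib.request

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_disable_request(workspace_id: str, dataflow_id: str):
    """Return (url, body) for a PATCH that turns a dataflow's refresh schedule off."""
    url = f"{API_BASE}/groups/{workspace_id}/dataflows/{dataflow_id}/refreshSchedule"
    body = {"value": {"enabled": False}}
    return url, body

def disable_refresh_schedule(token: str, workspace_id: str, dataflow_id: str) -> None:
    """PATCH the refresh schedule to disabled. `token` is an AAD bearer token
    with Dataflow.ReadWrite.All scope (placeholder - acquire it however your
    tenant allows)."""
    url, body = build_disable_request(workspace_id, dataflow_id)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    # Raises urllib.error.HTTPError on a non-2xx response.
    with urllib.request.urlopen(req) as resp:
        print(f"{dataflow_id}: HTTP {resp.status}")
```

Re-enable the schedules the same way (`"enabled": True`) once the capacity has burned down its overage.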