To test the AutoML feature, I created a new dataflow in the workspace and trained a model with it. The results were satisfactory, so I scheduled the dataflow for refresh.
Today, a BI error was detected, and upon investigation, the following message was found: "ErrorMessage":"Dataflow refresh failed because your organization's Fabric compute capacity has exceeded its limits. Try again later. Learn more at https://aka.ms/capacitymessages"
I have identified three problematic dataflows in the Fabric Capacity Metrics app, and I'm contemplating how to resolve this situation.
Currently, the dataflows in this workspace are unusable, and I'm unable to update any related BI materials.
Is there anyone who can help with these questions?
- Learn about overages and burndowns. The capacity throttling will resolve itself after 24 hours unless you keep accruing CU debt.
- Consider enabling Autoscale (it comes at additional cost!).
- Abandoning the workspace will resolve nothing. What would help is moving the workspace to another, non-overloaded capacity; of course, this assumes you have that option.
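If moving the workspace is an option, the reassignment can also be scripted via the Fabric REST API's Assign To Capacity endpoint. A minimal sketch of building that request (the IDs are placeholders; a real call additionally needs a Microsoft Entra bearer token and an HTTP client such as `requests`):

```python
# Sketch: construct the Assign To Capacity request used to move a
# workspace onto a different (non-overloaded) Fabric capacity.
# The workspace/capacity IDs below are placeholders, not real values.
import json

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_assign_request(workspace_id: str, capacity_id: str) -> tuple[str, str]:
    """Return the (url, json_body) for POSTing the capacity reassignment."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/assignToCapacity"
    body = json.dumps({"capacityId": capacity_id})
    return url, body

url, body = build_assign_request("<workspace-id>", "<target-capacity-id>")
print(url)
print(body)
```

Sending the request requires capacity admin rights on the target capacity; until the reassignment (or the 24-hour burndown) completes, refreshes on the throttled capacity will keep failing with the same message.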