I've scheduled just a few dataflows in Fabric to transfer data from parquet files to a Fabric lakehouse. While running this schedule every hour, the background CU usage keeps growing until it finally reaches 100% and the service crashes. The dataflows are not very complicated, so I don't know why this happens or how to prevent it.
Hi @SaiTejaTalasila,
Thanks for weighing in on this issue. Your approach is certainly one solution; I'd like to share some additional suggestions below.
If I understand correctly, the question is what causes this excessive CU growth for just a few simple dataflows. Here are a few things to check:
Check the complexity of the dataflow: Even if a dataflow looks simple, make sure there is no hidden complexity or inefficiency. Some transformation steps are more resource-intensive than they appear, so review the applied steps and optimize where appropriate.
Throttling policies: Understand and configure throttling policies to balance performance and reliability.
Monitor CU usage: Use the Microsoft Fabric Capacity Metrics app to monitor CU consumption. This helps determine whether specific times or operations cause spikes (a back-of-the-envelope example follows this list).
Adjust scheduling: Instead of running the dataflows every hour, try staggering the refresh times or running them during off-peak hours to distribute the load more evenly.
Increase capacity: If your current capacity is consistently hitting its limits, consider scaling up your Fabric capacity to better handle the load.
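To make the monitoring point concrete, here is some rough budget math (a sketch only: it assumes the 24-hour smoothing of background operations described in the throttling documentation linked below, and the per-run cost is a hypothetical figure, not a measurement):

```python
# Rough CU budget math for an hourly dataflow schedule (illustrative only).
# Background operations in Fabric are smoothed over 24 hours, so a capacity's
# daily budget can be approximated as CUs * 86,400 CU-seconds.
capacity_cus = 64                      # e.g. an F64 capacity
daily_budget = capacity_cus * 86_400   # ~5.53M CU-seconds per day
run_cost = 50_000                      # hypothetical CU-seconds per refresh
runs_per_day = 24                      # hourly schedule
consumed = run_cost * runs_per_day
print(f"Smoothed usage: {consumed / daily_budget:.0%} of daily capacity")
# -> Smoothed usage: 22% -- a handful of such dataflows could saturate the capacity
```

The point is that an hourly schedule multiplies every run's cost by 24, so even a modest per-run consumption adds up quickly against the smoothed daily budget.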
Here's the official documentation, hope it helps:
Understand your Fabric capacity throttling - Microsoft Fabric | Microsoft Learn
I hope these suggestions give you some good ideas. If you have any more questions, please clarify in a follow-up reply.
Best Regards,
Fen Ling,
If this post helps, please consider accepting it as the solution to help other members find it more quickly.
Hi,
This is not the answer to my question. I've read all the documentation, and as I said, it's just a very simple scheduled dataflow that causes this issue.
We're using an F64 capacity, which I would expect to be enough?
Hi, @remconicolai-
You mentioned that you are using F64; the two documents below may help you learn more about what an F64 provides.
Microsoft Fabric features by SKU - Microsoft Fabric | Microsoft Learn
Capacity metrics in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric
I hope these suggestions give you some good ideas. If you have any more questions, please clarify in a follow-up reply.
Best Regards,
Fen Ling,
If this post helps, please consider accepting it as the solution to help other members find it more quickly.
Hi @remconicolai-,
Instead of Dataflow Gen2, you can try using Spark notebooks for the transformations/transfer. Then you will be able to limit the number of executors at the workspace level, so the task uses a bounded amount of resources and avoids the CU issue.
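As an illustration, per-session resources can also be capped with the %%configure magic at the top of a notebook (a sketch; these keys follow the Fabric/Livy session-configuration convention, and the values are placeholders you'd tune to your capacity):

```
%%configure
{
    "driverCores": 4,
    "driverMemory": "28g",
    "executorCores": 4,
    "executorMemory": "28g",
    "numExecutors": 2
}
```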
What is the source of your parquet files?
Thanks,
Sai Teja
The source is Azure Blob Storage. I'd like a solution for the dataflows, not a switch to notebooks.
Hi @remconicolai-,
As far as I know, notebooks give you more flexibility over resource utilisation, but with Power Query, as I understand it, it's not possible to limit the resources. At the tenant level you can set query limitations, but those apply to the entire tenant.
https://microsoftlearning.github.io/mslearn-fabric/Instructions/Labs/10-ingest-notebooks.html
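For reference, a minimal notebook version of the blob-to-lakehouse copy might look like this (a sketch only: the storage path and table name are placeholders, authentication setup is omitted, and spark is the session Fabric provides in a notebook):

```python
# Read the parquet files from Azure Blob Storage (authentication config
# omitted; the container/account names are placeholders).
df = spark.read.parquet(
    "wasbs://<container>@<account>.blob.core.windows.net/path/to/parquet/"
)

# Write to the attached lakehouse as a Delta table.
df.write.mode("append").format("delta").saveAsTable("ingested_parquet")
```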
Thanks,
Sai Teja
Still, my question is: what causes this excessive CU growth for just a few simple dataflows?
We're experiencing the same insane usage for basically nothing. Messy notebooks are not a viable option for now. Any progress on this on your side?