Hello,
We have an F64 capacity and I have been receiving emails about reaching 75% of our capacity.
I would like to run a stress test that can be monitored, to see when I run out of capacity and what happens.
Has anyone done this? How did you do it?
Thanks
Astrid
May I ask if you have resolved this issue? Could you please confirm? This will help other community members with similar problems solve them faster.
If we don't hear back, we'll go ahead and close this thread. For any further discussions or questions, please start a new thread in the Microsoft Fabric Community Forum and we'll be happy to assist.
Thank you for being part of the Microsoft Fabric Community.
Good morning,
Thanks to everyone. In the end I will not do the stress testing, but there is a lot of useful information here. Thanks again!
Hi @AstridM,
We would like to confirm whether our community members' answers resolve your query or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.
@BalajiL, @frithjof_v & @KevinChant, thanks for your prompt responses.
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support
We faced a similar challenge in sizing production capacity. After baseline testing with 39GB of compressed data, F64 couldn’t handle loads beyond 40GB—leading to pipeline hangs and low throughput. F128 performed reliably. For accurate sizing, stress test with realistic loads and monitor capacity usage via the Fabric Metrics App.
Below is a sample comparison from the stress test in our scenario. Hope this helps.
| Metric | F64 | F128 |
| --- | --- | --- |
| Total Capacity Units | 1920 | 3840 |
| File size in GB (compressed) | 39 | 39 |
| Total files | 12528 | 12528 |
| Files processed | 12444 | 12528 |
| Files not processed | 84 | 0 |
| CU % utilization | 82% | 42% |
| Performance | Slow once it reached 70% | Fast, no slowness |
| Pipeline succeeded | Not completed fully | Completed successfully |
| Projected CU % utilization at 50 GB (compressed) | 105.10% | 54% |
| Projected CU % utilization at 52 GB (compressed) | 110% | 56% |
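For reference, the projected figures in the last two rows are consistent with simply scaling the measured CU% linearly with data volume. A minimal sketch of that arithmetic (Python; the linear-scaling assumption is illustrative only, not official Microsoft sizing guidance):

```python
# Rough capacity projection: scale the measured CU% linearly with data volume.
# Baseline values come from the table above; the linear-scaling assumption is
# illustrative only, not official Microsoft sizing guidance.

baseline_gb = 39                               # compressed data volume of the baseline run
measured_cu_pct = {"F64": 82.0, "F128": 42.0}  # measured CU% utilization at baseline

def project_cu_pct(target_gb: float, sku: str) -> float:
    """Project CU% utilization at a larger data volume, assuming linear scaling."""
    return measured_cu_pct[sku] * target_gb / baseline_gb

for sku in measured_cu_pct:
    for gb in (50, 52):
        print(f"{sku} @ {gb} GB (compressed): ~{project_cu_pct(gb, sku):.1f}% CU")

# F64  @ 50 GB: ~105.1% CU  (over capacity, matching the pipeline hang observed above)
# F128 @ 50 GB: ~53.8% CU   (comfortable headroom)
```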
On a trial capacity, I scheduled 20 Dataflow Gen2 refreshes to see what happens when I enter throttling: Is Fabric throttling directly related to the cumulative overages - or not? : r/MicrosoftFabric
But I would not do that in production, because you will get throttled, meaning the capacity may be unavailable for minutes, hours, or even days.
Hi @AstridM ,
Antoine's explanation is accurate: there's no need to run a stress test yourself, since Microsoft Fabric automatically manages capacity limits. When thresholds are exceeded, workloads are throttled or queued, and you won't incur additional charges.
If you want to proactively monitor your environment’s performance, I recommend using the Fabric Capacity Metrics App to track utilization and identify throttling events. For more comprehensive insights, Fabric Unified Admin Monitoring (FUAM) provides detailed dashboards highlighting which workloads consume the most capacity.
These tools will empower you to understand both official throttling behavior and your specific usage patterns.
I'm also sharing the link with you for reference: https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app
Thank you,
Tejaswi.
Hello @AstridM,
You have a Microsoft Fabric F64 capacity and are receiving emails indicating the capacity reached 75%.
You don't have to run a stress test to understand throttling in the capacity. Here is the link describing the consequences when your capacity exceeds its limits: https://learn.microsoft.com/en-us/fabric/enterprise/throttling
Throttling in Fabric does not generate extra cost. It is not like cloud auto-scaling where you “pay more.” Instead, workloads are slowed down or queued (background and interactive tasks) to remain within the purchased capacity.
Typical effects of throttling include:
Queries taking longer to start or execute.
Refreshes/pipelines queued until resources are available.
Spark notebooks waiting longer to acquire a kernel.
In rare cases, timeouts if the queue delay exceeds limits.
As a first step, you can review your current usage:
Use the Fabric Capacity Metrics App to analyze historical CU consumption.
Identify the heaviest workloads (e.g., Power BI refreshes, SQL queries, Spark notebooks).
Look at who/what consumes the most CUs and at what times.
Hope this helps!
Best regards,
Antoine
Hello @AstridM ,
In addition, you can also use FUAM to find out exactly what is generating your load.
Here you can find it:
https://github.com/renefuerstenberg/fabric-toolbox
Best regards
If you've just updated your Capacity Metrics app to dig deeper into the issue, then I recommend installing the latest version of FUAM, as it has just been updated to cater for the new version of the Capacity Metrics app:
https://github.com/microsoft/fabric-toolbox/tree/main/monitoring/fabric-unified-admin-monitoring
To stress test your Fabric capacity:
- Use the Capacity Metrics app to monitor usage.
- Trigger multiple large dataset refreshes, heavy queries, and chained dataflows at once.
- Automate with PowerShell or the REST API for parallel load (see the sketch after this list).
- Watch for throttling, failures, or slow performance as usage nears 100%.
- Document what happens at 75%, 85%, 95% to understand limits.
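If you want to script the parallel load, here is a minimal Python sketch that triggers several dataset refreshes at once via the Power BI REST API. It assumes you already have an Azure AD access token with permission to refresh the datasets; the workspace and dataset IDs are placeholders to replace with your own:

```python
# Minimal parallel-load sketch: fire several dataset refreshes at once via the
# Power BI REST API while watching the Capacity Metrics app.
# Assumes a valid Azure AD access token with rights to refresh these datasets;
# all IDs below are placeholders.
import concurrent.futures
import requests

ACCESS_TOKEN = "<aad-access-token>"      # e.g. acquired via MSAL or az cli
WORKSPACE_ID = "<workspace-guid>"
DATASET_IDS = ["<dataset-guid-1>", "<dataset-guid-2>", "<dataset-guid-3>"]

HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def trigger_refresh(dataset_id: str) -> int:
    """Start a refresh and return the HTTP status code (202 means accepted)."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
           f"/datasets/{dataset_id}/refreshes")
    resp = requests.post(url, headers=HEADERS, json={"notifyOption": "NoNotification"})
    return resp.status_code

# Submit all refreshes in parallel to generate concurrent load on the capacity.
with concurrent.futures.ThreadPoolExecutor(max_workers=len(DATASET_IDS)) as pool:
    for ds, status in zip(DATASET_IDS, pool.map(trigger_refresh, DATASET_IDS)):
        print(ds, "->", status)
```

Scale the list of datasets (or rerun the script) while the Capacity Metrics app is open, and note the CU% at which throttling, queuing, or failures start to appear.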