Hi everyone,
We’re working on a project where we’ve built a public website containing embedded Power BI reports. These reports will be accessible by citizens across an entire country (~5–10 million potential users).
We need to determine the appropriate Fabric capacity to ensure our reports perform reliably under high load and don’t crash or stop rendering.
Currently, we’re using an F16 capacity in our development environment, but we expect to scale up for production.
To estimate the needed capacity, we used Microsoft’s tools from this repository:
🔗Power BI Tools for Capacities
Using PowerShell, we simulated report rendering by launching Chrome instances for the chosen reports. We then tried to monitor performance using the Fabric Capacity Metrics App.
However, we’ve run into two main issues:
Simulation limitations:
Since the test generates separate Chrome sessions, our local CPU can't handle a realistic number of instances. We can only simulate around 50–100 users, which is far from our expected traffic.
Monitoring challenges:
We tried running smaller tests (10, 50, 100 users) and extrapolating behavior, but we’re finding the Capacity Metrics App difficult to interpret. Visual refreshes seem delayed, and we can’t clearly capture the impact of each simulation load.
Has anyone successfully used this methodology for capacity planning?
Are there alternative approaches or tools for stress testing Fabric/Power BI capacities at scale?
Any insights or experiences would be greatly appreciated!
Thanks in advance,
Hi @Rafaela07,
You can also use the Fabric Capacity Estimator to get an idea: Fabric Capacity Estimator | Microsoft Fabric
For the capacity metrics, you are correct that it is not real time. For items that existed before the last semantic model refresh, there is usually a 15–20 minute delay before data appears in Capacity Metrics. New items only appear after the semantic model has been refreshed (usually once a day).
On the roadmap is an eventstream source for capacity utilization events. Once this releases, real-time data will be available.
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
Hi @Rafaela07 ,
Could you please let us know whether the issue has been resolved on your end? Your feedback can assist others in the community who may encounter a similar problem.
@Rafaela07 , is the issue resolved now, or are you still facing any difficulties? If you need any additional details or support, please feel free to share.
Thank you.
Hi @Rafaela07 ,
Thanks for reaching out to the Fabric community. You’re absolutely on the right path using the Load Assessment Tool and Metrics App for capacity testing.
Here are a few suggestions that might help.
1. Use the official Load Assessment Tool: it is designed to simulate concurrent users with randomized filter values, which helps avoid caching and gives a more realistic view of capacity behavior.
2. Local machines quickly hit CPU limits. Running the Load Assessment Tool from Azure VMs allows you to simulate hundreds or thousands of users without hardware bottlenecks.
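To illustrate the scale-out idea, the fan-out itself doesn't have to be one browser instance per user. The sketch below (in Python as an assumption; the actual tool in the repository is PowerShell-based) drives many simulated sessions from one machine with a thread pool, where `render_once` is a placeholder for whatever triggers a report render (e.g. a real HTTP request in practice):

```python
# Hedged sketch: fan out simulated user sessions with a thread pool instead
# of launching one Chrome instance per user. render_once is a placeholder
# (assumption) for the call that actually triggers a report render.
import concurrent.futures
import time

def simulate_session(user_id, render_once):
    """Run one simulated user: render the report, return (user_id, seconds taken)."""
    start = time.perf_counter()
    render_once(user_id)
    return user_id, time.perf_counter() - start

def run_load_test(num_users, render_once, max_workers=50):
    """Simulate num_users sessions concurrently; returns per-user latencies."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(simulate_session, i, render_once)
                   for i in range(num_users)]
        return dict(f.result() for f in concurrent.futures.as_completed(futures))

# Example with a stub renderer (replace the sleep with a real HTTP call):
latencies = run_load_test(100, lambda uid: time.sleep(0.01))
print(len(latencies))  # 100
```

Running this from a few Azure VMs, each with a few hundred workers, gets you well past what a single laptop spawning browsers can do.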
3. While the app’s visuals can appear slightly delayed, filtering by workspace and tracking key metrics like CPU, memory usage, and query duration helps isolate the test impact. You can also export the underlying dataset and analyze it in Power BI Desktop for deeper insights.
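Once the data is exported, even a small script can bucket the rows by test window and surface peak versus average load per run. The field names below (`timestamp`, `cu_seconds`) are illustrative assumptions about the export shape, not the Metrics App's actual schema, so adjust them to match your export:

```python
# Hedged sketch: bucket exported capacity-metrics rows into test windows and
# report peak/average CU consumption per window. Field names are assumptions.
def summarize_windows(rows, windows):
    """rows: dicts with 'timestamp' and 'cu_seconds'; windows: (label, start, end)."""
    summary = {}
    for label, start, end in windows:
        vals = [r["cu_seconds"] for r in rows if start <= r["timestamp"] < end]
        summary[label] = {
            "peak": max(vals) if vals else 0.0,
            "avg": sum(vals) / len(vals) if vals else 0.0,
        }
    return summary

# Tiny synthetic example: two test windows, four metric rows.
rows = [{"timestamp": t, "cu_seconds": cu}
        for t, cu in [(0, 2.0), (5, 6.0), (12, 3.0), (18, 9.0)]]
print(summarize_windows(rows, [("10 users", 0, 10), ("50 users", 10, 20)]))
# {'10 users': {'peak': 6.0, 'avg': 4.0}, '50 users': {'peak': 9.0, 'avg': 6.0}}
```

Comparing the per-window peaks across your 10/50/100-user runs gives you a cleaner extrapolation basis than eyeballing the delayed visuals.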
Regarding capacity planning:
F16 is suitable for development, but for production scenarios with high concurrency, you’ll typically need F64 or higher, depending on report complexity and user interaction patterns.
Reference:
Power BI Embedded Analytics Capacity Planning - Power BI | Microsoft Learn
Plan your capacity size - Microsoft Fabric | Microsoft Learn
Understand the metrics app compute page - Microsoft Fabric | Microsoft Learn
Best practices for faster performance in Power BI embedded analytics - Power BI | Microsoft Learn
Regards,
Yugandhar.
Hi @V-yubandi-msft!
First of all, thank you for your response.
1. You mention the "official Load Assessment Tool". Is this something different from, or in another repository than, the one mentioned in the original post? To my understanding, no further information is provided (for instance, how long the simulation should run), even in the included video.
2. Regarding the Fabric Capacity Metrics App, the issue is not just that the visuals refresh with a delay, but that the app doesn't seem to work in near real time at all. Should I manually refresh the semantic model to get more immediate results?
Thank you again,
Hi @Rafaela07 ,
Thanks for your response.
1. Yes, we are using the same Load Assessment Tool from the GitHub repository. Our main challenge was simply the limits of the local machine, so we’ll move the testing to Azure VMs to properly scale up the number of concurrent users.
2. Regarding the Capacity Metrics App, your explanation makes perfect sense. It’s good to know the usage data isn’t real time and that a 15–20 minute delay is expected. And for new items, we’ll make sure to refresh the semantic model so they appear sooner. This aligns with what we were seeing during our tests.
Also, @tayloramy, thanks for highlighting the upcoming Eventstream option for real-time capacity monitoring; that will be really helpful once it becomes available.
Thanks for your time @Rafaela07 .