Good afternoon,
Reviewing the P1/A4 SKUs for Power BI Premium capacity, I see that "Max concurrent pages" is listed as 55. I'm interpreting this as the maximum number of concurrent users viewing the same page, but I think I'm wrong about that.
Some of the capacity-estimating calculators I've seen indicate a need for a second P1 node at 1499+1 users. I believe I'm misunderstanding something, because at 1499 users it would be pretty easy to have more than 55 concurrent views of a page.
Looking for some help understanding this, and a general perspective on how to gauge/estimate when one would "run out of capacity" and need more. I understand you can start estimating based on max memory sizing, but if I go with a P1 and adoption expands, at what point would I know I need another node? When people are consistently blocked from viewing because of the 55-concurrent-page-view limit, or when performance slows because 8 cores can't handle the traffic?
Thanks in advance.
In terms of concurrency: if you have a lot of users all viewing the same report, Power BI caches the results, so when a report is rendered from the cache it is served straight from memory and requires no CPU. CPU is only needed when a DAX query has to fetch data from the model.
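The effect of that caching on concurrency can be sketched conceptually like this (a toy illustration, not Power BI's actual implementation — the cache, query string, and counter are all made up for the example):

```python
# Conceptual sketch (not Power BI internals): cached report renders are
# served from memory; only cache misses run a DAX query and consume CPU.
cache = {}
cpu_queries = 0

def render_report(query):
    """Return a cached result if available; otherwise 'run' the query."""
    global cpu_queries
    if query in cache:           # cache hit: served from memory, no CPU
        return cache[query]
    cpu_queries += 1             # cache miss: query executes on CPU
    result = f"result of {query}"
    cache[query] = result
    return result

# 100 users viewing the same report page issue the same query...
for _ in range(100):
    render_report("SUM(Sales[Amount]) BY Date[Month]")

print(cpu_queries)  # only the first view hit the CPU -> 1
```

This is why many concurrent viewers of the same page do not translate into many concurrent queries: after the first render, the rest are cache hits.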
If you design your data as a star schema (fact and dimension tables), your DAX measures will be easy to calculate and will not require much CPU, which means a much lower demand on the capacity.
With the best-practice design above, a single report or multiple reports can scale to many users, because they all share the cache, and when a new DAX query does have to run it will not require much CPU. So it depends on how your dataset is modelled and how efficiently the DAX measures are written.
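To make the star-schema point concrete, here is a minimal sketch (illustrative table and column names, plain Python rather than DAX) of why a fact/dimension layout keeps aggregations cheap:

```python
# Conceptual star schema (made-up data): fact rows carry only keys and
# numeric measures; the dimension carries the descriptive attributes.
dim_product = {1: "Bikes", 2: "Helmets"}          # product_id -> category
fact_sales = [(1, 100.0), (1, 250.0), (2, 40.0)]  # (product_id, amount)

# "Total Sales by category" is just a key lookup plus a running sum per
# fact row -- analogous to SUM(Sales[Amount]) sliced by a dimension
# column, the kind of cheap aggregation a star schema enables.
totals = {}
for product_id, amount in fact_sales:
    category = dim_product[product_id]
    totals[category] = totals.get(category, 0.0) + amount

print(totals)  # {'Bikes': 350.0, 'Helmets': 40.0}
```

A measure over a wide, denormalised table (or one needing many relationships traversed per row) would do far more work per query, which is where CPU pressure on a P1's 8 cores starts to show.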