I am currently facing a lot of heat over high capacity consumption on a capacity that is shared between accounts globally.
I am not the tenant admin, I do not have access to the Capacity Metrics app, and I will not be getting access to the capacity. Hence, I have a few questions for the community so that I can tune my work based on the feedback received here.
The doc mentions that requests to OneLake, such as reading, writing, or listing, consume your Fabric capacity.
A. Reading - I am guessing that reading from a lakehouse, data warehouse, or Kusto affects capacity consumption. What about reading from a datamart or a semantic model? Does reading data from an external server hit consumption as well? Also, which of these offers the most efficient reads?
I believe the following are currently possible:
Reading lakehouse / DW (shortcut + abfss path) / Kusto (shortcut + maybe the Delta table API) tables via a notebook (transformation + machine learning) - see the sketch after this list
Reading DW / lakehouse tables via the SQL endpoint (transformation only)
Reading Kusto via the Kusto connector in Power BI (transformation only)
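For concreteness, here is a minimal sketch of the first option, assuming a Fabric notebook where spark is the pre-created session; the workspace, lakehouse, and table names are placeholders:

```python
# Minimal sketch: reading Delta tables from a Fabric notebook.
# "MyWorkspace", "MyLakehouse" and "my_table" are placeholders.

# Option 1: read via the full OneLake abfss path (works for items in other
# workspaces you have access to, including shortcut targets)
df = spark.read.format("delta").load(
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
    "MyLakehouse.Lakehouse/Tables/my_table"
)

# Option 2: read a table of the lakehouse attached to this notebook by name
df2 = spark.table("my_table")

df.limit(10).show()
```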
Also, is there a chart that shows who (notebook, pipeline, df gen2, df gen1, datamart) reads at what rate?
B. Writing - I am guessing writing includes writing to a lakehouse, data warehouse, or Kusto only. Please correct me if I am wrong. Since consumption is running hot, I need to pick the store that takes the least time to write to. Does anyone know which one that would be?
Also, is there a chart that shows who (pipeline, df gen2) writes at what rate?
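In the absence of such a chart, one crude option is timing the writes myself from a notebook. A minimal sketch, assuming df is the DataFrame to persist and the table name is a placeholder; wall-clock duration is of course not the same thing as CU consumption, which only the metrics app reports:

```python
import time

# Rough timing of a Delta write from a Fabric notebook. "my_table" is a
# placeholder; real throughput depends on data volume, partitioning and
# how loaded the capacity is at the time.
start = time.perf_counter()
df.write.format("delta").mode("append").saveAsTable("my_table")
print(f"Write took {time.perf_counter() - start:.1f} s")
```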
C. Is there any way to check capacity consumption in (near) real time for the activities I am performing (notebook execution, pipeline execution, machine learning) other than through the Capacity Metrics app?
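To illustrate what I mean: since the metrics app is backed by a semantic model, in principle someone who does have access could grant Build permission on it and let me query it with the Power BI executeQueries REST API. A hypothetical sketch - the dataset ID, token, and the table name in the DAX query are all placeholders, as I do not know the model's actual table names:

```python
import requests

# Hypothetical: query the Capacity Metrics semantic model via the Power BI
# executeQueries REST API. Requires Build permission on that dataset.
DATASET_ID = "<metrics-dataset-id>"   # placeholder
TOKEN = "<aad-access-token>"          # placeholder

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries",
    headers={"Authorization": f"Bearer {TOKEN}"},
    # 'TimePoints' is a guessed table name, purely for illustration
    json={"queries": [{"query": "EVALUATE TOPN(100, 'TimePoints')"}]},
)
resp.raise_for_status()
print(resp.json())
```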
D. Listing - Listing consumes capacity. What exactly does listing mean here?
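My guess, since OneLake exposes ADLS Gen2-compatible APIs, is that "listing" means enumeration calls against the OneLake endpoint, i.e. each directory listing is itself a billable transaction. Something like this, with placeholder workspace and lakehouse names:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Each call like get_paths() below is a "list" operation against OneLake.
# "MyWorkspace" and "MyLakehouse" are placeholders.
service = DataLakeServiceClient(
    "https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("MyWorkspace")  # file system = workspace
for p in fs.get_paths(path="MyLakehouse.Lakehouse/Files"):
    print(p.name)
```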
E. Is there anything else that affects the consumption?
The big picture here is to ingest the data into the store that takes the least time to write to. For reading, I need to work out whether the final product will require machine learning or not, and make the call based on that.
Overall, I am trying to understand what strategies I need to follow to minimize capacity consumption.
Thank you in advance and sorry for the long post.
Hi @smpa01
As of now, the Fabric Capacity Metrics app is where you can monitor capacity consumption. You must be a capacity admin to install and view this app.
When requests are sent to OneLake, the transactions are classified into five categories, regardless of the source and destination of their reads and writes. If you read data from an external server into a Fabric item in OneLake, this will consume your Fabric capacity.
The app provides a page to help analyze which operations and users contributed the most to your capacity's usage over a time range. It provides details such as workspace, user, item, operation, duration, status, etc. Maybe we can analyze that data to see which operations consume less capacity. However, it is difficult to predict in advance that one operation will necessarily consume less capacity than another.
These documents may provide helpful insights:
OneLake consumption - Microsoft Fabric | Microsoft Learn
Map each REST operation to a price - Azure Blob Storage | Microsoft Learn
Understand the metrics app compute page - Microsoft Fabric | Microsoft Learn
Understand the metrics app timepoint page - Microsoft Fabric | Microsoft Learn
Best Regards,
Jing
If this post helps, please Accept it as Solution to help other members find it. Appreciate your Kudos!
@Anonymous thank you for this.
I came across this. As per this, Redirect is safer than Proxy.
Can you please elaborate on how I can ensure that all my read and write operations go through Redirect and not Proxy?
A typical case would be bulk ingestion of tables (upsert) from an external data vendor through an on-prem server, then transformation, machine learning + exploratory analysis, and reporting back to end users through Power BI - see the sketch below.
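For reference, the upsert step I have in mind is a plain Delta merge in a notebook, roughly like this; stage_df, the table name, and the id key column are placeholders:

```python
from delta.tables import DeltaTable

# Sketch of the upsert step, assuming the vendor extract has already been
# staged as stage_df and "target_table" exists. Names are placeholders.
target = DeltaTable.forName(spark, "target_table")
(
    target.alias("t")
    .merge(stage_df.alias("s"), "t.id = s.id")  # assumes an "id" key column
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```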
Does a chart/doc exist somewhere that clearly shows Redirect vs Proxy by read and write, and by the agents available (df gen1, df gen2, notebook, pipeline, etc.)? Also, is there any way for the developer to know, either through the Monitoring hub or any other API, whether an operation results in Redirect or Proxy? This is a blocker currently.
Also, since it is a global account, the workspace-level admins will never be given access to that. How can we get exposure to those valuable pieces of information without relying on the app?