Shreya_Barhate
Advocate I

How Is "Compute Pool Capacity Usage CU" Calculated in Microsoft Fabric Billing?

Hi everyone,
I'm trying to understand the "Compute Pool Capacity Usage CU" line item that appears on my Azure bill under Microsoft Fabric capacity. I’ve reviewed the documentation here: Azure Billing for Fabric, but I’m still unclear on a few points:

  1. What specific workloads or services contribute to this meter?
  2. How is the CU calculated - does it reflect provisioned capacity, actual usage, or both?
  3. How does this differ from other meters like Spark Capacity Usage CU?
  4. Is the Compute Pool Capacity Usage CU based on available unused capacity or consumed capacity by Spark compute?
  5. Is there a way to break down this usage further to identify which workloads are consuming the most CUs?
    Any guidance or examples would be greatly appreciated. Thanks!

 

2 REPLIES
v-dineshya
Community Support

Hi @Shreya_Barhate ,

Thank you for reaching out to the Microsoft Community Forum.

 

1. What Is "Compute Pool Capacity Usage CU"?

 

This meter represents the available compute capacity allocated to your Microsoft Fabric environment. It tracks the usage of general-purpose compute resources that are not tied to specific workloads like Spark or Data Warehouse.

 

Note: This meter's state is marked as GA (Generally Available) in the billing documentation, meaning it is fully supported and production-ready.


2. How is the CU calculated - does it reflect provisioned capacity, actual usage, or both?

 

CU is a normalized measure of compute power in Microsoft Fabric. The Compute Pool Capacity Usage CU reflects actual consumption of compute resources by the various workloads that use the shared compute pool. It does not represent provisioned capacity alone; it tracks consumed capacity based on active operations.
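To illustrate the "normalized measure" idea: a CU can be thought of as a fixed rate of compute, so an operation's consumption is its CU rate multiplied by its runtime (CU-seconds). A minimal Python sketch with made-up rates; real per-operation rates come from the Metrics App, while 64 CU is the published allocation of an F64 SKU:

```python
# Hypothetical illustration of how CU consumption is normalized.
# The operation rates below are invented; real rates come from the
# Fabric Capacity Metrics App / billing meters.

def cu_seconds(cu_rate: float, duration_s: float) -> float:
    """CU consumption of one operation: its CU rate times its runtime."""
    return cu_rate * duration_s

# Example: three operations sharing the compute pool.
operations = [
    ("dataflow_refresh", 4.0, 300),   # 4 CU for 5 minutes
    ("kql_query",        2.0, 30),    # 2 CU for 30 seconds
    ("sql_query",        8.0, 12),    # 8 CU for 12 seconds
]

total = sum(cu_seconds(rate, dur) for _, rate, dur in operations)
print(f"Total pool consumption: {total:,.0f} CU-seconds")

# An F64 capacity provides 64 CU, i.e. 64 * 3600 = 230,400 CU-seconds per hour.
utilization = total / (64 * 3600)
print(f"Share of one F64 hour: {utilization:.2%}")
```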


3. What specific workloads or services contribute to this meter?

 

This meter aggregates usage from workloads that use shared compute resources rather than dedicated Spark or SQL compute. These include services such as Dataflows, Eventstream processing, KQL databases, ML model endpoints, OneLake operations, Copilot and AI features, GraphQL APIs, and Apache Airflow jobs. Each of these has its own meter, but its usage may also contribute to the general compute pool if it is not offloaded to dedicated resources.

 

4. How does this differ from other meters like Spark Capacity Usage CU?


Compute Pool Capacity Usage CU: Tracks usage of shared compute resources across multiple workloads.

Spark Capacity Usage CU: Specifically tracks Spark job execution using dedicated or autoscaled serverless Spark compute.

 

Note: If Autoscale Billing for Spark is enabled, Spark jobs use serverless resources and are billed separately under Autoscale for Spark CU, not the general compute pool.

 

5. Is the Compute Pool Capacity Usage CU based on available unused capacity or consumed capacity by Spark compute?


It is based on consumed capacity, that is, the actual compute used by workloads during execution. Idle or unused capacity is not billed under this meter.


6. Is there a way to break down this usage further to identify which workloads are consuming the most CUs?

 

Use the Microsoft Fabric Capacity Metrics App: navigate to the Compute page, view usage trends over the past 14 days, and drill down by workspace, workload type, and item name.

 

Note: This helps identify which workloads are consuming the most CUs and optimize accordingly.
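A hedged sketch of the kind of drill-down described above, assuming you have exported per-item usage rows from the Metrics App (the rows and field names below are invented for illustration):

```python
# Hypothetical sketch: ranking workloads by CU consumption from exported
# per-item usage rows. All data below is made up.
from collections import defaultdict

# Invented export rows: (workspace, workload_type, item_name, cu_seconds)
rows = [
    ("Sales",   "Dataflow",    "DailyRefresh",   5200.0),
    ("Sales",   "KQLDatabase", "Telemetry",      1800.0),
    ("Finance", "Dataflow",    "MonthEndLoad",  12400.0),
    ("Finance", "Eventstream", "Transactions",   3100.0),
]

# Aggregate consumption per workload type.
by_workload = defaultdict(float)
for _, workload, _, cu in rows:
    by_workload[workload] += cu

# Print workloads from heaviest to lightest consumer.
for workload, cu in sorted(by_workload.items(), key=lambda kv: -kv[1]):
    print(f"{workload:12s} {cu:10,.0f} CU-seconds")
```

The same grouping can be repeated by workspace or item name to localize the heaviest consumers.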

 

Please refer to the Microsoft articles below.

Troubleshooting guide - Monitor and identify capacity usage - Microsoft Fabric | Microsoft Learn

Plan your capacity size - Microsoft Fabric | Microsoft Learn

Understand the metrics app compute page - Microsoft Fabric | Microsoft Learn

Evaluate and optimize your Microsoft Fabric capacity - Microsoft Fabric | Microsoft Learn

Billing and utilization reports in Apache Spark for Fabric - Microsoft Fabric | Microsoft Learn

Monitor Apache Spark capacity consumption - Microsoft Fabric | Microsoft Learn

Compute Management in Fabric Environments - Microsoft Fabric | Microsoft Learn

Billing and Utilization Reporting - Microsoft Fabric | Microsoft Learn

Configure Autoscale Billing for Spark in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

 

AntoineW
Resolver II

Hello @Shreya_Barhate,

 

Thanks for the question; here is a summary of the answer:

 

What is Compute Pool Capacity Usage CU?
This meter in Azure billing represents the provisioned compute pool capacity in your Fabric SKU (e.g., F64). It shows the available capacity you’ve purchased, not the real workload consumption.

 

Which workloads contribute to it?
The Compute Pool covers non-Spark workloads such as:

  • Power BI (dataset refresh, queries, DirectQuery, semantic models)

  • Dataflows Gen2

  • Pipelines orchestration

  • SQL queries in Lakehouse/Warehouse

  • Other lightweight Fabric services

Spark workloads are tracked separately under Spark Capacity Usage CU.

 

How it works in billing

  • Billing is based on provisioned capacity (your SKU), whether idle or fully used.

  • No extra charges if you exceed capacity: workloads are throttled (slowed or queued) rather than billed higher.
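A toy sketch of that "throttle, don't bill more" behavior, assuming a simple admission check (real Fabric smoothing and throttling are considerably more nuanced than this):

```python
# Hypothetical sketch: when smoothed consumption would exceed the
# provisioned CU budget, new work is queued rather than charged extra.

CAPACITY_CU = 64  # e.g. an F64 SKU

def admit(current_smoothed_cu: float, request_cu: float) -> str:
    """Toy admission check; Fabric's actual smoothing logic is more complex."""
    if current_smoothed_cu + request_cu <= CAPACITY_CU:
        return "run"
    return "queue"  # throttled, not billed higher

print(admit(40.0, 10.0))  # fits within capacity
print(admit(60.0, 10.0))  # would exceed capacity
```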

 

How to see real usage

  • For actual workload consumption, use the Fabric Capacity Metrics App, which you can install from https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app.

  • It breaks down Fabric capacity consumption by workload (Power BI, SQL, Pipelines, Spark, etc.), by workspace, and by user, detail that Azure billing itself doesn't expose.

 

Hope it helps!

Best regards,

Antoine
