I'm having a hard time understanding the SKUs of Fabric capacity. I worked on a project where I needed to move data from point A (source) to point B (lakehouse). I created a Fabric capacity resource in Azure, chose a specific SKU, and assigned the capacity to a workspace. After a few weeks, the pipeline stopped showing outputs, the monitor stopped showing active pipelines (it only updates once a run finishes), and I wasn't able to load lakehouse tables using the SQL endpoint.
Does choosing a specific Fabric SKU limit storage capacity? It seems like Fabric performance degraded as we ingested a few weeks of data, as if it needed a higher SKU. We even stopped the pipeline to check whether it was just a compute issue, but it could not load even 1,000 records.
If the SKU does cap storage for the lakehouse and warehouse, that would mean compute and storage are tightly coupled, similar to traditional data warehouses. What advantage do we have here, then?
Databricks has separate storage and compute resources, which gives the flexibility to scale these resources up and down independently. Don't we have the same concept in Fabric? I feel like we don't, but I'm in denial and need confirmation to accept it 🙂
Did you get a specific error message?
Did you have a look in the Fabric Capacity Metrics App to see if your capacity's compute has been over utilized?
If your capacity is in a throttling or rejection state, you may not be able to access the data via the Fabric workloads, because reading and writing data consumes capacity compute resources.
https://learn.microsoft.com/en-us/fabric/onelake/onelake-consumption#transactions
https://learn.microsoft.com/en-us/fabric/enterprise/throttling
If you are indeed experiencing throttling, there are ways to pay to get out of it, or you can wait it out. Note that if you choose to pause your capacity to clear the throttling, you will be billed extra.
https://learn.microsoft.com/en-us/fabric/enterprise/pause-resume
"When you pause your capacity, the remaining cumulative overages and smoothed operations on your capacity are summed, and added to your Azure bill. You can monitor a paused capacity using the Microsoft Fabric Capacity Metrics app.
If your capacity is being throttled, pausing it stops the throttling and returns your capacity to a healthy state immediately. This behavior enables you to pause your capacity as a self-service mechanism that ends throttling."
So if you are experiencing throttling and don't want to pay extra, you may just need to wait it out. Otherwise, you can pay to exit the throttling state; for example, pausing and resuming the capacity is a way of paying extra to end throttling immediately.
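For reference, the pause/resume step can also be done programmatically rather than through the portal. This is a minimal sketch, assuming the Azure Resource Manager `suspend`/`resume` actions on the `Microsoft.Fabric/capacities` resource type and an `api-version` of `2023-11-01`; the subscription, resource group, and capacity names are placeholders, and the actual POST call (which needs an Azure AD bearer token) is left commented out:

```python
# Sketch: pausing/resuming a Fabric capacity via the ARM REST API
# to clear a throttling state. Assumed api-version; verify against
# the current Microsoft.Fabric/capacities REST reference.
BASE = "https://management.azure.com"
API_VERSION = "2023-11-01"  # assumption, not verified here


def capacity_action_url(subscription_id: str, resource_group: str,
                        capacity_name: str, action: str) -> str:
    """Build the ARM URL for the 'suspend' or 'resume' action."""
    if action not in ("suspend", "resume"):
        raise ValueError(f"Unsupported action: {action}")
    return (
        f"{BASE}/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Fabric/capacities/{capacity_name}"
        f"/{action}?api-version={API_VERSION}"
    )


# Usage (placeholder names; requires an Azure AD token):
# import requests
# url = capacity_action_url("<sub-id>", "<rg>", "<capacity>", "suspend")
# requests.post(url, headers={"Authorization": f"Bearer {token}"})
```

Remember that, per the docs quoted above, pausing sums any remaining overages onto your Azure bill.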
I don't think there is a OneLake storage limit which depends on F SKU size. I haven't heard anything about that, at least.
Obviously the compute capacity is less on an F2 vs. an F64.
But I think the storage capacity is the same on both ("unlimited").
However you pay separately for OneLake storage, so don't go nuts ☺️
Reads and writes on the data consume compute CUs. So having more data can put more load on the F SKU capacity if you're doing reads or writes against it.
https://learn.microsoft.com/en-us/fabric/onelake/onelake-capacity-consumption
https://learn.microsoft.com/en-us/fabric/onelake/onelake-consumption
https://azure.microsoft.com/en-us/pricing/details/microsoft-fabric/
Thank you @frithjof_v for sharing your thoughts and the documentation links. The one thing that really bothers me is that when I started experiencing compute issues, I stopped the pipeline to prevent any further data ingestion. After that, I should have had all those resources available simply to preview the top 1,000 records of a lakehouse table, but it couldn't even do that. I checked a notebook, another lakehouse (one with a single table containing a single record), and the data warehouse, and I see the same capacity issue there as well. So I'm wondering whether a capacity SKU is somehow linked to the amount of data in OneLake (lakehouses, warehouses), and it only works when the amount of data is below some threshold?