
schneiw
Advocate I

Lakehouse high CU usage, unknown source

Hi

Can anyone please help me understand what is causing this high usage? The ratio between seconds and CUs consumed seems very skewed, and I do not know what is causing it. Here are some screenshots from the monitoring app. How can 85 seconds cause 1.2 million CUs to be consumed??

schneiw_0-1753097517540.png

Here is the detail:

schneiw_1-1753097746209.png
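The skew can be made concrete with a little arithmetic. A minimal sketch, assuming the CU(s) figure in the metrics app is CU-seconds and using the 85 s / 1.2M numbers from the screenshots:

```python
# Rough sanity check on the ratio described above (values from the
# screenshots; the CU(s) column is assumed to be CU-seconds).
cu_seconds = 1_200_000   # total CU(s) reported
duration_s = 85          # wall-clock duration reported

# Average CUs consumed per second of runtime
print(cu_seconds / duration_s)  # ≈ 14,118 CUs sustained for the whole run
```

That would mean roughly 14,000 capacity units running flat-out for the entire 85 seconds, which is what makes the reported ratio look implausible.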

 

 

1 ACCEPTED SOLUTION
schneiw
Advocate I

Support finally got back to me with this reply. They admitted it was a backend Microsoft issue, but there was no word on whether and how they would fix it, or whether I would be reimbursed the lost CUs. The abbreviation 'DMS' stands for Data Movement Services:

 

The increased CU usage was attributed to a disabled file system (FS) on the DMS side, which resulted in a significant number of “GetBlobProperties” API calls from DMS to the One Lake Client (OLC). This led to the observed spike in CU usage. After the FS was re-enabled on July 29th, CU usage decreased accordingly.

 

The spike occurred on 07/18 and subsided on 07/29, aligning with the period when the FS was disabled and subsequently re-enabled on the DMS side, which reduced the number of calls from DMS to OLC.


17 REPLIES
schneiw
Advocate I


They won't reimburse unless you push them for it. Make sure you have the receipts.

schneiw
Advocate I

Just an update: stopping and restarting the capacity seems to have stopped whatever rogue background processes were consuming the CUs. The support ticket is still open and under investigation, so no word yet on the root cause.

Hi @schneiw,

Thanks for your patience, and please do post any update on the above issue. This will help community members with similar issues find answers easily.

 

 

Thanks,

Prashanth

schneiw
Advocate I

I have done nothing to or with these lakehouses today (i.e. no pipelines, notebooks, or queries reading or writing), yet the consumed CUs are exceptionally high. I would have expected them to be zero.

 

schneiw_4-1753122116102.png

 

Very interesting.

Tomorrow I'll take a look at my lakehouses to see if we also have such high CU utilization rates. I would have noticed that, though.

I'll get back to you tomorrow.

 

Best regards

Since you have a Pro license you should open a Pro ticket at https://admin.powerplatform.microsoft.com/newsupportticket/powerbi

Thanks, I was trying to log the support ticket in the wrong place! I did successfully log a ticket and will update this post with Microsoft's findings.

I am having the same issue on the same timeline. It is not resolved.

 

The dramatic increase for me came from BCDR operations on lakehouses with shortcuts. These were not charged for BCDR before Saturday, and now we are seeing a dramatic increase in CU usage. Nearly a quarter of our capacity is committed to these new BCDR operations as of Saturday.

I had a call with support today; they are investigating, and I will post their findings. The support tech said they had never seen spikes like this before, so hopefully they can correlate it to a change they made to the backend.

Hi @schneiw,

We are following up regarding your query. Could you please confirm whether the issue has been addressed through the support ticket with Microsoft?

 

@zzthatcher, @spaceman127, @lbendlin, thanks for your prompt responses.

 


If the issue has been resolved, we kindly request that you share the resolution or key insights here to help others in the community.

 

 

Thanks,

Prashanth

Microsoft Fabric Community Forum

No, support has still not resolved it.

 

We keep hitting 100% usage now, to the point where I can't really get any work done. It's a bad situation to say the least: we have hard deadlines for our project, we are paying for a service we cannot use, and I am NOT happy right now.

 

Last night I deleted two lakehouses that contained the most data to try to free up some CUs, but even that seemed not to help, as an hour later I was still getting 100% CU usage emails. I paused the capacity for three hours last night, enabled it again, and am now watching to see what the effect is.

 

To see how RIDICULOUS this is, have a look at my metrics chart. When I looked at it I thought they had lost my data; then I noticed the Y-axis values:

schneiw_0-1753355221985.png

 

I checked on a lakehouse that is practically not used at all. The CU usage is virtually zero.

I think opening a ticket with MS is also the right way to go.

 

Best regards

schneiw
Advocate I

To take it one step further, Iterative Read via Proxy is 4,798 CUs per 10,000 operations vs. the 104 per 10,000 for Other Operations via Redirect (see below).

 

Yet take one day of ours as an example, where we have 1.4 million Iterative Read via Proxy operations vs. 692K Other Operations via Redirect, so about half. Yet the total CUs are 607 thousand vs. 4.2 million respectively.

 

If I do the math, 692,326 operations / 10,000 × 104 = 7,200 CUs. Not sure where they are getting 4,217,837 from? That would mean we are being charged 692,326 / 10,000 × x = 4,217,837, therefore x ≈ 60,918 CUs per 10,000???
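The arithmetic above can be checked with a small sketch. The rate is the one quoted from the OneLake consumption documentation earlier in the thread; the operation and CU counts are from the metrics screenshots:

```python
# Check the billed CUs against the documented per-operation rates.

def expected_cus(operations: int, rate_per_10k: float) -> float:
    """CUs that should be billed at the documented rate."""
    return operations / 10_000 * rate_per_10k

def implied_rate(operations: int, billed_cus: float) -> float:
    """Effective CUs per 10,000 operations actually charged."""
    return billed_cus / operations * 10_000

# "Other Operations via Redirect": documented at 104 CUs per 10,000 ops
ops, billed = 692_326, 4_217_837
print(expected_cus(ops, 104))     # ≈ 7,200 CUs expected
print(implied_rate(ops, billed))  # ≈ 60,923 — ~586x the documented rate
```

Whichever way you slice it, the billed figure implies an effective rate hundreds of times the documented one, which matches the backend-issue explanation support later gave.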

 

  

schneiw_2-1753119954747.png

 

schneiw_3-1753120731278.png

 

 

schneiw
Advocate I

Thank you for the links to help me understand. It's still not clear what is causing this. From the documentation it's based on "operations", which are supposed to be 104 CUs per 10,000:

schneiw_0-1753118047762.png

 

Using the FAUM app, I can see how many operations are being performed on a given day per lakehouse. For example, most of these lakehouses average around 2,900 operations over 3 seconds, yet the total CUs are vastly different for some. Something isn't adding up here...

schneiw_1-1753118214286.png
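A minimal check using the numbers quoted above shows why the totals look inconsistent: at the documented rate, ~2,900 operations should cost almost nothing.

```python
# Expected CUs for a lakehouse averaging ~2,900 operations/day
# at the documented 104 CUs per 10,000 operations.
rate_per_10k = 104
operations = 2_900
expected = operations / 10_000 * rate_per_10k
print(expected)  # ≈ 30 CUs — far below some of the totals shown in the app
```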

 

spaceman127
Helper II

Hello @schneiw ,

Here you can find a Microsoft article explaining what all these processes are.

 

https://learn.microsoft.com/en-us/fabric/onelake/onelake-consumption

 

Here is another article with an example; with it, you should be able to figure out what generates these CUs.

 

https://learn.microsoft.com/en-us/fabric/onelake/onelake-capacity-consumption

 

And here is the general description of the Metrics app.

 

https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app

 

I hope the information helps you.

 

Best regards
