
ebjim
Resolver I

Deteriorating warehouse and lakehouse performance throughout the week

I am not sure if anyone else has observed this: for the past 3 or 4 weeks, the warehouse and lakehouse in Fabric work fine on Monday, but by Wednesday they crawl. Whether it's a dataflow refreshing, DML/DDL in SSMS, or data previewing in the dataflow UI, things just keep spinning until they fail with a timeout. Everything stays very slow on Thursday and Friday, then begins improving over the weekend.

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @ebjim ,
I have received an update from the internal team. After consulting with them, they have created an internal ICM with Incident No. 431933400.
It would be great if you could create a Support Ticket, since the ICM is already created. When creating the support ticket, you can reference this incident number, which will help bridge support to the engineer more quickly.
Thanks.





8 REPLIES
DennesTorres
Impactful Individual

Hi,

About the lakehouse: have you checked whether this could be related to VACUUM problems?

Check these links:

https://www.red-gate.com/simple-talk/blogs/microsoft-fabric-and-the-delta-tables-secrets/
https://www.youtube.com/watch?v=BluZJxfwfCM&t=854s

There is more related material along those lines as well.

Kind Regards,

 

Dennes

@DennesTorres I have not checked that possibility. I do have a couple of questions for you, though:

 

1. Is the spark.databricks.delta.retentionDurationCheck.enabled property for lakehouse tables set to true by default?

 

2. The source lakehouse tables I am dealing with don't get updated often. As for destination lakehouse tables, I would delete existing ones and have the Dataflow create new ones. I have dataflows failing by timing out after 8 hours and data previews in the dataflow UI spinning forever. Even accounting for changes in data, does this sound like a VACUUM type issue?

DennesTorres
Impactful Individual

Hi,

1. Yes, because it is very dangerous to run a VACUUM while other parallel activities are executing. You change this setting at your own risk.
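The reason this check defaults to on can be illustrated with a toy model. This is plain Python, not the actual Delta Lake implementation; the function and file names are illustrative, though 168 hours (7 days) is Delta's documented default retention window:

```python
# Illustrative sketch of why the retention-duration check is enabled by
# default: VACUUM physically deletes unreferenced files older than the
# retention window, and a window that is too short can delete files that
# a concurrent reader or long-running query still needs.

DEFAULT_RETENTION_HOURS = 168  # Delta's default: 7 days

def vacuum(unreferenced_files, retain_hours, check_enabled=True):
    """Return the files a VACUUM would delete.

    unreferenced_files maps filename -> age in hours. With the check
    enabled, a retention window below the default is refused outright.
    """
    if check_enabled and retain_hours < DEFAULT_RETENTION_HOURS:
        raise ValueError(
            "retain < 168h refused; disable "
            "spark.databricks.delta.retentionDurationCheck.enabled "
            "at your own risk"
        )
    return [f for f, age in unreferenced_files.items() if age > retain_hours]

files = {"old.parquet": 200, "recent.parquet": 10}
print(vacuum(files, 168))   # ['old.parquet'] -- only files past retention
# vacuum(files, 0) raises; vacuum(files, 0, check_enabled=False) deletes
# both files, including one an in-flight query may still be reading.
```

With the check disabled, an aggressive retention of 0 hours would reclaim everything unreferenced, which is exactly the scenario that corrupts concurrent readers.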

2. Consider the configuration Fabric uses to avoid the small-files problem. It sets the default file size to 1 GB: Spark will try to maintain this size, so if you deliver that much data or more, it will break the data into 1 GB files; if you deliver less, the files will be smaller.

Then consider whether your tables are partitioned, which obliges Spark to break the files down into different folders.

When you say "deleting existing ones", I will assume you mean rows. If so, consider that each file containing at least one of the rows you are deleting will be rewritten in its entirety, minus those rows. The delta log will be updated to reflect this, and the inserts will create new files.

You can imagine the work this involves. Then consider what happens when the delta log needs to be extensively checked because there are lots of unlinked files.
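The copy-on-write behaviour described above can be sketched in a few lines of plain Python, with lists standing in for Parquet files (all names here are illustrative, not Delta internals):

```python
# Minimal sketch of Delta Lake's copy-on-write delete: any file containing
# at least one matching row is rewritten in full, and the old copy becomes
# an unreferenced file that only a later VACUUM removes.

def delete_rows(files, predicate):
    """Apply a delete and return (new_files, rewritten_count).

    files is a list of lists, each inner list standing in for one
    Parquet file. Files with no matching row are kept untouched; files
    with any matching row are fully rewritten without those rows.
    """
    new_files, rewritten = [], 0
    for f in files:
        if any(predicate(row) for row in f):
            rewritten += 1
            kept = [row for row in f if not predicate(row)]
            if kept:                    # an all-deleted file yields no new file
                new_files.append(kept)
        else:
            new_files.append(f)         # untouched: no rewrite cost
    return new_files, rewritten

# Deleting a single row still rewrites the entire file that holds it:
files = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
after, rewritten = delete_rows(files, lambda r: r == 5)
print(after, rewritten)   # [[1, 2, 3], [4, 6], [7, 8, 9]] 1
```

Scale the inner lists up to 1 GB files and the write amplification of small, scattered deletes becomes clear, as does the pile of unreferenced files left behind for VACUUM.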

However, I would in no way claim that this problem alone would cause your dataflows to time out after 8 hours. It may have some impact, but it cannot be the only cause.

Kind Regards,

Dennes

@DennesTorres Actually, "existing ones" are tables. I would drop the whole tables and have the dataflow create new ones. That way, I would not need to be overly concerned about a large number of log files accumulating.

 

Just this morning, I filed a support ticket for an issue that happens quite a bit: when I delete a lakehouse table through the lakehouse explorer, the table still appears in the SQL endpoint. This out-of-sync behavior is another symptom of larger issues that I hope the Fabric product team resolves soon.


Anonymous
Not applicable

Hi @ebjim ,
Thanks for using the Fabric Community and reporting your observation to us.
I have reached out to the internal team for help with this. I will update you once I hear back from them.
Appreciate your patience.

@Anonymous Thank you!

Anonymous
Not applicable

Hi @ebjim  ,
Could you send me your exact workspace ID (it is in the HTTP URL), so we can see if we can find more information from telemetry? This would help us pinpoint the exact issue. I would need the HTTPS URL as shown below; copy the URL link and send it.

 

[Screenshot: vnikhilanmsft_1-1697123406150.png, showing where the workspace ID appears in the URL]

Thank you.
