JorgeMarmol
Regular Visitor

Table Sync State: Failure after Pipeline execution

Hi all,

 

I have been facing a strange issue with my SQL Endpoint after my pipeline execution, where I get the data from a SQL Server database and my sink is a Delta table in a Lakehouse. After a few days I suddenly ran into this problem:

 

I find a red cross next to my table names:

 

[Screenshot: JorgeMarmol_0-1733389758749.png]

 

When I open the error details, I find this:

 

[Screenshot: JorgeMarmol_1-1733389808615.png]

 

However, when I read the data with PySpark in a notebook I can see the data without any problem, but when I query through the SQL Endpoint the latest data I see is from Fri Nov 29 2024 (the day of the failure). I know some people are facing the same issue and they say the solution is to manually set the table to be overwritten, but it doesn't work for me: Solved: Re: Lakehouse table has an error: An internal erro... - Microsoft Fabric Community.
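
For context, this is roughly the check I do in the notebook (just a sketch; "my_table" and the "load_date" column are placeholders for my actual table and date column):

# Sketch of the notebook-side check; "my_table" and "load_date" are placeholders.
# The spark session is already available in a Fabric notebook.
from pyspark.sql import functions as F

df = spark.read.table("my_table")      # reading the Lakehouse Delta table with Spark
df.select(F.max("load_date")).show()   # here I see the latest data
# The same MAX() query through the SQL Endpoint only returns data up to Nov 29 2024.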

 

By the way, it used to work in the past, but suddenly this started happening.

 

Thanks!

Jorge.

 

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @JorgeMarmol ,

I think you can try these steps below:

1. Sometimes, the metadata might be out of sync. You can try refreshing the table metadata in your SQL Endpoint.

 

2. Since you can read the data with PySpark but not through the SQL Endpoint, there might be an issue with how the data is being indexed or cached. Try running a REFRESH TABLE <table_name> command in your SQL Endpoint.

 

3. Ensure there are no locks or long-running transactions on the table that might be causing the issue. You can check this by running SHOW TRANSACTIONS or SHOW LOCKS commands.

 

4. If manually setting the overwrite option didn't work, try updating the table properties to ensure they are correctly configured. You can use the following PySpark command to set the properties:

spark.sql("ALTER TABLE <table_name> SET TBLPROPERTIES ('delta.autoOptimize.optimizeWrite' = 'true', 'delta.autoOptimize.autoCompact' = 'true')")

 

 

Best Regards

Yilong Zhou

If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.


5 REPLIES
JorgeMarmol
Regular Visitor

@Anonymous Thanks a lot! I tried points 1 to 3 before, but I hadn't tried point 4. It works for me; I have been monitoring it since last Thursday.

 

Anonymous
Not applicable

Hi @JorgeMarmol ,

Have you solved your problem? If so, can you share your solution here and mark the correct answer as a standard answer to help other members find it faster? Thank you very much for your kind cooperation!

 

 

Best Regards

Yilong Zhou

If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.


JorgeMarmol
Regular Visitor

Hi @FabianSchut, thanks for your answer. No, it doesn't work; I've tried it. It only works if I overwrite the table, but a few days later, after executing the pipeline in append mode, it fails again.

 

It's strange because it happens even with tables that are empty.

 

Thanks, Jorge.

FabianSchut
Super User

Does it work when you manually refresh the metadata of your lakehouse? Microsoft just posted a blog showing the improvements to the SQL endpoint, and there is a 'Metadata sync' button available. You can read about it at point 2 here:
https://blog.fabric.microsoft.com/en-us/blog/whats-new-in-the-fabric-sql-analytics-endpoint/
