Hi all,
I have been facing a strange issue with my SQL Endpoint after running a pipeline that copies data from a SQL Server database into a Delta table in a Lakehouse (the sink). After a few days I suddenly run into this problem:
I see a red cross next to my table names, and when I open the error details I find an internal error message.
However, when I read the data with PySpark in a notebook I can see it without any problem, but when I query through the SQL Endpoint the most recent data I see is from Fri Nov 29 2024 (the day of the failure). I know other people are facing the same issue, and for them the fix was to manually set the table load to overwrite, but it doesn't work for me: Solved: Re: Lakehosue table has an error: An internal erro... - Microsoft Fabric Community.
By the way, it used to work fine in the past; the failure appeared suddenly.
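For reference, this is roughly what the check looks like from a notebook (the lakehouse, table, and column names are only examples; `spark` is the session that Fabric notebooks provide automatically):

# Illustrative check only; names below are placeholders.
df = spark.read.table("my_lakehouse.sales")        # reading the Delta table directly works
df.orderBy(df["load_date"].desc()).show(5)         # the latest rows are visible here

# The equivalent T-SQL against the SQL analytics endpoint (run from the Fabric
# query editor or SSMS) only returns rows up to the stale date:
#   SELECT TOP 5 * FROM sales ORDER BY load_date DESC;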
Thanks!
Jorge.
Hi @JorgeMarmol ,
I think you can try these steps below:
1. Sometimes, the metadata might be out of sync. You can try refreshing the table metadata in your SQL Endpoint.
2. Since you can read the data with PySpark but not through the SQL Endpoint, there might be an issue with how the data is being indexed or cached. Try running a REFRESH TABLE <table_name> command in your SQL Endpoint.
3. Ensure there are no locks or long-running transactions on the table that might be causing the issue. You can check this by running SHOW TRANSACTIONS or SHOW LOCKS commands.
4. If manually setting the overwrite option didn't work, try updating the table properties to ensure they are correctly configured. You can use the following PySpark command to set the properties (a fuller sketch combining this with step 2 follows below):
spark.sql("ALTER TABLE <table_name> SET TBLPROPERTIES ('delta.autoOptimize.optimizeWrite' = 'true', 'delta.autoOptimize.autoCompact' = 'true')")
Best Regards
Yilong Zhou
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
@Anonymous Thanks a lot! I tried points 1 to 3 before, but I hadn't tried point 4. It works for me; I have been monitoring it since last Thursday.
Hi @JorgeMarmol ,
Have you solved your problem? If so, can you share your solution here and mark the correct answer as the accepted solution to help other members find it faster? Thank you very much for your kind cooperation!
Best Regards
Yilong Zhou
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hi @FabianSchut, thanks for your answer. No, it doesn't work; I've tried it. It only works if I overwrite the table, but a few days after executing the pipeline in append mode it fails again.
It's strange because it happens even on tables that are empty.
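For context, the two sink modes in question look roughly like this in PySpark (the table name and sample data are only illustrative; `spark` is the notebook's built-in session):

# Illustrative sketch of the two write modes discussed above.
df = spark.createDataFrame([(1, "2024-11-29")], ["id", "load_date"])

# Overwrite: rewrites the whole Delta table; this is what temporarily restores the endpoint.
df.write.format("delta").mode("overwrite").saveAsTable("my_lakehouse.sales")

# Append: only adds new rows; after a few of these runs the endpoint stops picking up new data.
df.write.format("delta").mode("append").saveAsTable("my_lakehouse.sales")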
Thanks, Jorge.
Does it work when you manually refresh the metadata of your lakehouse? Microsoft just posted a blog describing improvements to the SQL endpoint, and there is now a 'Metadata sync' button available. You can read about it at point 2 here:
https://blog.fabric.microsoft.com/en-us/blog/whats-new-in-the-fabric-sql-analytics-endpoint/