I have been using notebooks to create and modify delta tables for several months. I run the notebooks in the browser app and then query the table through the SQL endpoint to validate the changes. I use SSMS 19 to execute the queries.
I have noticed that recently it takes much longer for the changes to show up through the SQL endpoint. I just did a test where I modified a table in a notebook and then repeatedly ran a query every few seconds until I saw the changes. It took almost 10 minutes from notebook completion until the table changes were reflected in the results of the query.
I'm curious whether other users have noticed this delay recently, and what delay should be considered normal. It has never been instant, but the current delay is a major constraint on testing and validating changes.
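For reference, here is roughly how I ran the timing test (a minimal sketch; the connection string, table name, and expected row count are placeholders you would need to adapt):

import time
import pyodbc

# Placeholder connection string for the lakehouse SQL endpoint (adapt server/database).
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<your-lakehouse>;"
    "Authentication=ActiveDirectoryInteractive;"
)

expected = 12345  # row count the notebook just wrote (placeholder)
start = time.time()
with pyodbc.connect(conn_str) as conn:
    while True:
        row = conn.cursor().execute("SELECT COUNT(*) FROM dbo.MyTable").fetchone()
        if row[0] == expected:
            break
        time.sleep(5)  # poll every few seconds
print(f"Change visible after {time.time() - start:.0f} seconds")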
Microsoft support told me that it is a known issue and is in their internal issues tracker. However, it will not be added to the public Fabric Known Issues page. There is currently no timeline for a fix.
This is the go-to resource for this problem; it covers how to fix this SQL endpoint delay issue end to end -
We have had the same problem since the beginning of the year; we have not received any answers about it or any plans for a fix, and we have raised several tickets. The problem shows up for us in Dataflows v2 and in Notebooks that write data to a Lakehouse.
I found this blog post that triggers a refresh of the lakehouse SQL endpoint with a Python script. It works like a charm for me. I had a lag of 30+ minutes on the lakehouse; this Python script (which I run in a notebook) reduced it to 6 minutes, and it waits until the SQL endpoint is refreshed, so you can be sure that any activities executed after the notebook completes will see the refreshed SQL endpoint.
https://www.obvience.com/blog/fix-sql-analytics-endpoint-sync-issues-in-microsoft-fabric-data-not-sh...
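The gist of the approach is a REST call asking Fabric to re-sync the endpoint's metadata, then polling until the sync completes. Below is a minimal sketch of that idea; the exact API route, query parameter, and response shape are assumptions on my part, so check the blog and the current Fabric REST docs for the real details:

import time
import requests
from notebookutils import mssparkutils  # available inside Fabric notebooks

# Token for the Fabric REST API, from the notebook's own identity.
token = mssparkutils.credentials.getToken("https://api.fabric.microsoft.com")
headers = {"Authorization": f"Bearer {token}"}

# Assumed refresh route; substitute your own GUIDs and verify the path in the docs.
workspace_id = "<workspace-guid>"
endpoint_id = "<sql-endpoint-guid>"
url = (f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
       f"/sqlEndpoints/{endpoint_id}/refreshMetadata?preview=true")

resp = requests.post(url, headers=headers, json={})
resp.raise_for_status()

# If Fabric accepted the request asynchronously (202), poll the operation it
# points at until it reports a terminal status (standard Fabric LRO pattern).
if resp.status_code == 202:
    op_url = resp.headers["Location"]
    while True:
        op = requests.get(op_url, headers=headers)
        op.raise_for_status()
        if op.json().get("status") in ("Succeeded", "Failed"):
            break
        time.sleep(5)
print("SQL endpoint metadata sync finished")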
Great, here's another one I wrote a few days back.
This is a known issue with the metadata sync process, which results in stale data when querying through the SQL endpoint.
The fix was scheduled for July 31 but has unfortunately been pushed back to September.
The workaround is to create a separate workspace and Lakehouse, shortcut your existing Lakehouse tables into the new Lakehouse, and the data will be fresh; a sketch for scripting the shortcuts is below.
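If you have many tables, the shortcuts can be scripted rather than clicked through in the UI. A rough sketch using the OneLake shortcuts REST API (the GUIDs and table name are placeholders, and you should confirm the request shape against the current API docs):

import requests
from notebookutils import mssparkutils  # available inside Fabric notebooks

token = mssparkutils.credentials.getToken("https://api.fabric.microsoft.com")
headers = {"Authorization": f"Bearer {token}"}

# Placeholder GUIDs: the new workspace/lakehouse and the source workspace/lakehouse.
new_ws, new_lh = "<new-workspace-guid>", "<new-lakehouse-guid>"
src_ws, src_lh = "<source-workspace-guid>", "<source-lakehouse-guid>"

# Create a shortcut named MyTable under Tables in the new lakehouse,
# pointing at Tables/MyTable in the source lakehouse.
body = {
    "path": "Tables",
    "name": "MyTable",
    "target": {"oneLake": {"workspaceId": src_ws, "itemId": src_lh,
                           "path": "Tables/MyTable"}},
}
url = f"https://api.fabric.microsoft.com/v1/workspaces/{new_ws}/items/{new_lh}/shortcuts"
requests.post(url, headers=headers, json=body).raise_for_status()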
Or you can refer to this blog for a quick fix.
Thanks for the update and also providing a workaround!
Is this issue about the case where we make changes to the table schema?
(E.g. add columns, remove columns, rename columns, etc.)
Or is it about regular update of the data in the table (with no alterations to columns)?
Or both scenarios?
There are a few reasons why you can get stale data when querying your SQL endpoint. If you are loading data into a table via a notebook, the first query you run on the endpoint will freeze the data for all other queries until it completes. If that initial query is long-running, all subsequent queries will be served stale data. This is the fix Microsoft is implementing.
We have also observed cases where data lags because the metadata sync job is blocked by statistics generation.
I'm experiencing the same delay.
See also related issue:
https://community.fabric.microsoft.com/t5/General-Discussion/Delayed-data-refresh-in-SQL-Analytical-...
Were you able to get this fixed? I haven't been able to get it to refresh at all for the past several hours.
Adding here that I have experienced the same thing: when a column is added or renamed in the Lakehouse, the SQL endpoint does not reflect it immediately, something that previously worked with no issue. We are also unable to see access granted on Lakehouses reflected in the SQL endpoint for report consumers.
It's been about 4 hours and the SQL endpoint still hasn't picked up the lakehouse schema changes. Is this problem getting worse? I remember it taking about 10-30 minutes to refresh.
Any updates regarding this issue?
Any news on this? I routinely have to wait 20-30 minutes.
It's particularly irritating when I want to test my results: I refresh the model hoping the data will be there, waste time looking, worry that my changes or fixes have failed, only to try again later and find it working.
It is a bit rubbish I think.
Or maybe I'm doing something wrong?
Hello, I found a workaround that you might want to try (example code below the list):
1. Take your Pandas df, either by converting a Spark DataFrame to Pandas (it does not have to be loaded from SQL; it could also be loaded via spark.read.csv('file') or spark.read.parquet('file'), etc.).
2. Process your data as usual.
3. Get a list of columns and a list of tuples of the data. IMPORTANT: no special characters (not even whitespace) are allowed in column names; you might need to rename them.
4. Convert the list of tuples back into a Spark DataFrame by parallelizing it into an RDD and calling toDF.
5. Save your data. IMPORTANT: option('delta.columnMapping.mode','name') will create the table, but it won't be accessible at the SQL endpoint.
6. If the write fails, you might need to manually change the data types of the columns. This has to be done before step 3.
7. Refresh your lake/warehouse and use your tables :D.
# 1.
import re

df = spark.sql("SELECT * FROM Lakehouse.SQLData").toPandas()
# 2.
test_df = df  # ### your processing(df) goes here
# 3.
cols = list(test_df.columns.values)  # gets list of columns
ncols = []
for c in cols:
    # replace special characters (-, %, #, whitespace, /) with underscores
    ncols.append(re.sub(r'[-%#\s/]', '_', c))
data = list(test_df.itertuples(index=False, name=None))  # gets data as list of tuples
# 4.
rdd = spark.sparkContext.parallelize(data)
n_df = rdd.toDF(ncols)  # use the sanitized column names
# 5.
# Make sure that your column names do not include any special character, not even whitespace
n_df.write.format("delta").saveAsTable('auto_test_3')
Hi @alozovoy ,
Thanks for using Fabric Community.
While I wasn't able to replicate the long delay you're experiencing, here are two things you can try:
If the issue persists after trying these suggestions, please let me know.
The issue continues even after refreshing.
I have opened a Microsoft Support ticket #2402160010002646.