Hello,
Recently we noticed that the automatic metadata discovery for our lakehouse has stopped working. As a result, we have been manually refreshing the SQL endpoint for our lakehouse each morning to ensure that the tables are up to date. I have been unable to find a way to do this automatically, and I wonder if anyone has a method of forcing the automatic metadata discovery, or of programmatically triggering the on-demand refresh button.
Below is a snippet from the SQL analytics endpoint performance considerations guidance that I have referenced. We have completed the necessary maintenance, and it would be an extreme lift to move these lakehouses to individual workspaces.
Guidance
https://learn.microsoft.com/en-us/fabric/data-warehouse/sql-analytics-endpoint-performance#guidance
Hi @Bill_J99294 ,
You can create a notebook that connects to the lakehouse behind your SQL endpoint and run the following commands in a cell:

from pyspark.sql import SparkSession

# Create the Spark session (in a Fabric notebook this returns the
# session that is already running)
spark = SparkSession.builder \
    .appName("Refresh SQL Endpoint Metadata") \
    .getOrCreate()

# Refresh Spark's cached metadata for one table
# (replace salesorders with your table name)
spark.sql("REFRESH TABLE salesorders")
print("Metadata refresh triggered successfully.")
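Note that in a Fabric notebook a Spark session is already available as spark, so the builder call above simply returns the existing session, and REFRESH TABLE invalidates Spark's cached metadata for that one named table only.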
Set up a daily refresh in your notebook settings so that this refresh command is executed every day.
If you have any other questions please feel free to contact me.
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
@Anonymous Can we refresh the whole Lakehouse SQL endpoint using the Spark notebook? It looks like that command will only refresh a single table.
Our customers are facing an issue where, after we upload a Delta file to the Lakehouse, the table is not visible even after waiting five minutes. Is there a way we can do this via a notebook? Please suggest.
When we discussed this with Microsoft, they said we should run a dummy query (e.g. SELECT count(*) ...) and then wait a bit (they didn't say how long, but try experimenting with 30 or 60 seconds). This should start the serverless SQL endpoint, which then runs the automatic refresh.
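As a rough illustration of that suggestion (the server and database names below are placeholders, and the connection details are my assumptions rather than values from this thread):

import time
import pyodbc  # requires the Microsoft ODBC Driver 18 for SQL Server

# Placeholder connection string for the lakehouse's SQL analytics endpoint;
# copy the real server name from the endpoint's settings in Fabric.
conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "DATABASE=<your-lakehouse>;"
    "Authentication=ActiveDirectoryInteractive;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Dummy query to wake the serverless SQL endpoint
    cursor.execute("SELECT COUNT(*) FROM salesorders")
    print("Row count:", cursor.fetchone()[0])

# Give the automatic metadata sync time to run (experiment with 30-60 s)
time.sleep(60)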
We've also tried calling the undocumented API, which worked for us. https://gist.github.com/MarkPryceMaherMSFT/853cdc0d9d421482814b8195aba55434
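For reference, a minimal sketch of what such a call can look like from a Fabric notebook using sempy's FabricRestClient; the route follows the metadata-refresh pattern used in the linked gist, but both GUIDs are placeholders and the exact route may differ from what the gist uses:

# Sketch only: trigger an on-demand metadata sync for the whole SQL endpoint
import sempy.fabric as fabric

client = fabric.FabricRestClient()
workspace_id = "<workspace-guid>"        # placeholder
sql_endpoint_id = "<sql-endpoint-guid>"  # placeholder

response = client.post(
    f"/v1/workspaces/{workspace_id}/sqlEndpoints/{sql_endpoint_id}/refreshMetadata?preview=true",
    json={},
)
print(response.status_code)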
Hi @Bill_J99294 ,
Thanks for the reply from R1k91 .
I'm just following up to ask whether the problem has been solved.
If so, could you accept the correct answer as the solution, or share your own solution, to help other members find it faster?
Thank you very much for your cooperation!
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
This is also being discussed here:
https://www.reddit.com/r/MicrosoftFabric/comments/1ercfha/sql_analytics_endpoint_performance/
Hi,
I have a lakehouse using schema shortcuts, and I want to update the whole lakehouse, which contains multiple schemas. Is there a command to refresh by schema, or the entire lakehouse, equivalent to the manual Metadata Sync button?
Regards
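There doesn't appear to be a single command for a whole lakehouse, but as a rough workaround (a sketch only; whether REFRESH TABLE also nudges the SQL endpoint sync is exactly what's debated above) you could loop over every schema and table from Spark:

# Sketch: refresh Spark's cached metadata for every table in every schema
# of the attached lakehouse (runs in a Fabric notebook where `spark` exists)
for db in spark.catalog.listDatabases():
    for table in spark.catalog.listTables(db.name):
        spark.sql(f"REFRESH TABLE `{db.name}`.`{table.name}`")
        print(f"Refreshed {db.name}.{table.name}")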