Hello everyone.
I have a lakehouse connected to Dataverse through the Link to Fabric feature, and I'm also connected to the Customer Voice preview feature.
The problem: I run a Logic App that starts the Fabric capacity in the morning, waits 1 minute, refreshes a Power BI semantic model, and then suspends the capacity.
But after a few days (2-3), the semantic model fails to refresh with this error:
"Data source error: DataSource.Error: Microsoft SQL: Failed to complete the command because the underlying location does not exist. Underlying data description: table 'dbo.msevtmgt_checkin', file 'https://onelake.dfs.fabric.microsoft.com/id-here/id-here/Tables/msevtmgt_checkin/PartitionId=2024/pa...'."
I extended the delay to 10 minutes but still get the error. Only after manually refreshing the SQL endpoint am I able to query it again.
I've searched and seen that other users have the same problem; a few manage by waiting 10-15 minutes, but honestly that's a lottery.
Any ideas what I can do? Is there a way to refresh the SQL endpoint using notebooks or something else?
Thanks in advance.
Solved! Go to Solution.
From this documentation, I learned that Dataverse generates a lakehouse, a SQL endpoint, and a default Power BI semantic model in the Fabric workspace. Do you run a Logic App to refresh the default semantic model generated by Dataverse, or do you create another semantic model and use a Logic App to refresh that?
May I know how you manually refresh the SQL endpoint? To my knowledge, the associated SQL endpoint is updated automatically after the lakehouse has been updated. I can't find a place to refresh a SQL endpoint myself. Can you point it out? Maybe I have missed it.
Additionally, if you want to refresh a semantic model at a scheduled time, there is currently a Semantic model refresh activity (preview) in Data Factory. You can create a data pipeline in Data Factory, add a Semantic model refresh activity for the specific semantic model, and then schedule the pipeline to run. Maybe this could be an alternative.
Data Factory Spotlight: Semantic model refresh activity | Microsoft Fabric Blog | Microsoft Fabric
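If the pipeline activity doesn't fit your setup, the refresh can also be triggered from a script or notebook through the documented Power BI REST API. A rough sketch, assuming a service principal that has been granted access to the workspace (all IDs and secrets below are placeholders):

```python
# Minimal sketch: trigger a semantic model (dataset) refresh via the Power BI REST API.
# TENANT_ID, CLIENT_ID, CLIENT_SECRET, WORKSPACE_ID and DATASET_ID are placeholders.
import requests
from azure.identity import ClientSecretCredential

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-id>"
CLIENT_SECRET = "<app-secret>"
WORKSPACE_ID = "<workspace-id>"
DATASET_ID = "<semantic-model-id>"

# Acquire a token for the Power BI resource with a service principal.
credential = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default").token

# Kick off an asynchronous refresh of the semantic model.
url = (f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
       f"/datasets/{DATASET_ID}/refreshes")
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"},
                     json={"notifyOption": "NoNotification"})
resp.raise_for_status()  # 202 Accepted only means the refresh was queued
print("Refresh requested:", resp.status_code)
```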
Best Regards,
Jing
If this post helps, please Accept it as Solution to help other members find it. Appreciate your Kudos!
Hello!
I refresh the Lakehouse semantic model first, then the Power BI semantic model.
I read in another thread that the Lakehouse semantic model doesn't affect the SQL endpoint refresh. Is that true? If so, refreshing it would not help with the error on the endpoint.
Edit: I refresh the SQL endpoint by opening it and manually clicking the refresh button.
Sorry, I misunderstood what you meant. So you have created a Power BI dataset that gets data from a Fabric lakehouse by connecting to the SQL endpoint, and then you use a Logic App to refresh that dataset, right?
Regards,
Jing
Let me explain better:
I created a Link to Fabric connection to get data from D365 CE (Dataverse).
After that, I created some views that read a few tables from the lakehouse, connected those views to a Power BI report (in Desktop, to be specific), and then published the report.
Then I created a Logic App automation that starts the capacity in the morning, waits around 20 minutes, and then refreshes the report.
Even after waiting 20 minutes, I get the error described in this case.
If I manually refresh the SQL endpoint (by opening it and clicking the refresh button), I can refresh the report, but going that way defeats the whole point of automating the process.
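For reference, the equivalent sequence as a plain script would look roughly like this. My actual implementation is in Logic Apps, and the Microsoft.Fabric capacity management URL, api-version, and IDs below are placeholders/assumptions to verify against the Azure REST reference:

```python
# Rough sketch of the Logic App sequence as a script: resume the Fabric capacity,
# wait, refresh the semantic model, then suspend the capacity again.
# The capacity resume/suspend URLs and api-version are assumptions to verify.
import time
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential("<tenant-id>", "<app-id>", "<app-secret>")

CAPACITY = ("https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Fabric/capacities/<capacity-name>")
API_VERSION = "2023-11-01"  # assumption: check the current api-version

def mgmt_token() -> str:
    return credential.get_token("https://management.azure.com/.default").token

def capacity_action(action: str) -> None:
    # action is "resume" or "suspend"
    r = requests.post(f"{CAPACITY}/{action}?api-version={API_VERSION}",
                      headers={"Authorization": f"Bearer {mgmt_token()}"})
    r.raise_for_status()

capacity_action("resume")
time.sleep(20 * 60)  # fixed wait; this is exactly the unreliable part
# ... trigger the semantic model refresh here (see the refresh API sketch earlier) ...
capacity_action("suspend")
```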
Sorry for the late response. I couldn't find a specific API or anything else that can refresh the SQL endpoint of a lakehouse automatically. I captured the trace of the "refresh" action in Explorer; it seems to trigger a metadata refresh of the SQL endpoint, but I failed to convert it into a valid call we can use for the automation process.
You may try setting a longer waiting time after starting the capacity. If this still doesn't work, I would suggest creating a support ticket to ask escalation engineers to help check the failure. This looks more like a synchronization issue.
In addition, a lakehouse generates a default semantic model, and you can manage it or create customized semantic models from it. Maybe you can consider using the default semantic model or a customized one for your reports. Default Power BI semantic models - Microsoft Fabric | Microsoft Learn
Best Regards,
Jing
I tested waiting for 45 minutes and it seems to be working. It's not the best scenario, but at least we're not facing any issues.
There is also a suggested way to check whether the SQL analytics endpoint has synced: write a small "canary" table to the lakehouse at the end of the load and wait until it becomes queryable on the endpoint.
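A minimal sketch of the polling side of such a check, assuming pyodbc and the lakehouse's SQL endpoint connection string (server, database, authentication method, and the canary table name are placeholders):

```python
# Minimal sketch: poll the SQL analytics endpoint until a canary table that was
# just written to the lakehouse becomes queryable, i.e. metadata sync caught up.
# Server, database, auth method and table name are placeholders/assumptions.
import time
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<lakehouse-name>;"
    "Authentication=ActiveDirectoryInteractive;"  # use a service principal for automation
)

def wait_for_canary(table_name: str, timeout_s: int = 3600, poll_s: int = 60) -> bool:
    """Return True once the canary table is visible on the SQL endpoint."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with pyodbc.connect(CONN_STR) as conn:
                conn.execute(f"SELECT TOP 1 * FROM dbo.[{table_name}]").fetchall()
            return True            # table is queryable -> sync has caught up
        except pyodbc.Error:
            time.sleep(poll_s)     # not synced yet (or the endpoint is still waking up)
    return False

if wait_for_canary("canary_20250101_0600"):
    print("SQL endpoint is in sync; safe to refresh the semantic model.")
```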
I like the idea of the canary table - easy enough to include in the ETL process and then easy enough to drop - unless that also has a delay?
I am thinking of using a timestamp as part of the dummy table name if I try this method.
That way, I could easily identify the dummy table and distinguish it from other dummy tables.
I would try to delete the dummy table at the end of each pipeline run, or once a day (delete all dummy tables created during the previous day).
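A small sketch of how the timestamped canary tables could be created and cleaned up from a Fabric notebook; the prefix and the one-day retention are just illustrative choices:

```python
# Sketch for a Fabric notebook: create a timestamped canary table at the end of
# the ETL run, and drop canary tables whose timestamp is older than one day.
from datetime import datetime, timedelta
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already available as `spark` in a Fabric notebook

PREFIX = "canary_"
stamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")

# Write a tiny one-row Delta table into the lakehouse at the end of the load.
spark.createDataFrame([(stamp,)], ["created_utc"]) \
     .write.format("delta").saveAsTable(f"{PREFIX}{stamp}")

# Daily cleanup: drop canary tables created before the cutoff.
cutoff = (datetime.utcnow() - timedelta(days=1)).strftime("%Y%m%d_%H%M%S")
for t in spark.catalog.listTables():
    if t.name.startswith(PREFIX) and t.name[len(PREFIX):] < cutoff:
        spark.sql(f"DROP TABLE IF EXISTS {t.name}")
```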
I am wondering if the existence of the dummy table in the SQL Analytics Endpoint guarantees that all write operations, performed on the Lakehouse up until the time of creating the dummy table, have also been synced to the SQL Analytics Endpoint.
I guess that's what the suggested canary table method is implying. So I am assuming the answer is "Yes".
However I didn't get a reply to my follow-up question - at least not yet.
This has been discussed in other threads. The SQL endpoints refresh, but not immediately. There is a random delay that makes things unpredictable.