alfBI
Helper V

Refresh SQL Endpoint using semantic link labs: Intermittent failures

Hi,

 

Recently we saw that Microsoft has released Items - Refresh Sql Endpoint Metadata - REST API (SQLEndpoint) | Microsoft Learn, as well as its corresponding implementation in the semantic-link-labs library:

https://github.com/microsoft/semantic-link-labs/wiki/Code-Examples#refresh-sql-endpoint-metadata

 

We preferred the semantic-link-labs implementation because of its simplicity, but after sorting out all the library-related problems (the packages have to be included in a Fabric environment in order for the notebook to be usable in a pipeline), we noticed that the notebook execution fails intermittently with the following error message:

 

Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details
- 'Error name - KeyError, Error value - "['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time'] not in index"' :

 

[screenshot: alfBI_0-1752131739513.png]

 

 

The Run logs of the notebook do not give much detail.

 

Any idea about what is going wrong here?

 

Thanks,

 

Alfons

 

6 REPLIES
alfBI
Helper V

Hi,

 

Adding the time delay did not make any difference. It's quite clear that, for some reason, the call to the API that manages the refresh of the SQL Endpoint fails, but I have no idea why.
I have tested scheduling the notebook to run at different times; sometimes it works fine, other times it does not.

 

[screenshot: alfBI_0-1752424170125.png]

 

Successful execution

[screenshot: alfBI_1-1752424209838.png]

 

 

Failed execution

 

[screenshot: alfBI_2-1752424260844.png]

[screenshot: alfBI_3-1752424369502.png]

[screenshot: alfBI_4-1752424381417.png]

 

but I am not able to understand what makes the difference that causes it to fail. Same lakehouse, same tables, ...

 

 

 

Alfons 

 

Hi @alfBI ,

Thank you for reaching out to the Microsoft Community Forum.

 

You are seeing intermittent failures when using the refresh_sql_endpoint_metadata function from the semantic-link-labs library in Microsoft Fabric, in particular a KeyError about missing DataFrame columns.

 

Please refer to the workarounds below.

 

1. Validate the DataFrame columns before accessing them. Add a check before accessing the columns:


expected_cols = ['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time']
if all(col in x.columns for col in expected_cols):
    display(x[expected_cols])
else:
    print("Expected columns not found. DataFrame is likely empty.")

 

Note: This prevents the notebook from failing when the DataFrame is empty.

 

2. Make sure at least one table exists in the Lakehouse before triggering the refresh. You can add a pre-check using the semantic-link-labs API to list the tables and confirm their presence.
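As a hedged sketch of such a pre-check, the polling helper below waits until a table listing reports at least one table. `list_tables` is a placeholder callable that you would wire up to the semantic-link-labs table-listing call of your choice; the helper name, retry count, and delay are assumptions for illustration, not the library's API.

```python
import time


def wait_for_tables(list_tables, min_tables=1, retries=5, delay_seconds=10):
    """Poll until the lakehouse reports at least `min_tables` tables.

    `list_tables` is any zero-argument callable returning the current table
    names (hypothetical wiring; adapt it to the listing API you actually use).
    Returns the table list on success, raises TimeoutError otherwise.
    """
    for attempt in range(retries):
        tables = list_tables()
        if len(tables) >= min_tables:
            return tables
        # Tables not visible yet; give the endpoint time to catch up.
        time.sleep(delay_seconds)
    raise TimeoutError(
        f"Lakehouse still reports fewer than {min_tables} tables "
        f"after {retries} attempts"
    )
```

In a pipeline you would call this once before the refresh step, so that the refresh only fires after the lakehouse actually exposes its tables.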

 

3. Wrap the refresh logic in a try-except block


try:
    x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
    display(x)
except KeyError as e:
    print(f"KeyError encountered: {e}")

 

Note: This helps log errors and optionally retry or skip execution.

 

4. If you are using a multi-step ETL/ELT pipeline, consider forcing a sync of the T-SQL endpoint using Semantic link.

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

alfBI
Helper V

I forgot to add that, curiously, if I open a failed execution

[screenshot: alfBI_0-1752213700550.png]
and I rerun from the failed refresh

 

[screenshot: alfBI_1-1752213735662.png]

 

 

it works, so it looks like, just after the ingestion of tables into the lakehouse, the API needs some time to notice that the lakehouse has tables. I will try again, adding a wait activity (30 seconds) in front of the refresh.
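Since a manual rerun succeeds, a retry loop may work better than a single fixed delay. A minimal sketch, assuming the refresh is passed in as a callable (for example a lambda wrapping `labs.refresh_sql_endpoint_metadata`); the retry count, delay, and caught exception are placeholders to adapt, not the library's behaviour:

```python
import time


def refresh_with_retries(refresh, retries=3, delay_seconds=30):
    """Call `refresh` and retry on KeyError, which in this thread is the
    symptom of the endpoint metadata not being ready yet.

    `refresh` is any zero-argument callable, e.g.
    lambda: labs.refresh_sql_endpoint_metadata(...) (hypothetical wiring).
    Returns whatever the callable returns; re-raises after the last attempt.
    """
    for attempt in range(1, retries + 1):
        try:
            return refresh()
        except KeyError as e:
            if attempt == retries:
                raise  # out of attempts; surface the original error
            print(f"Attempt {attempt} failed ({e}); retrying in {delay_seconds}s")
            time.sleep(delay_seconds)
```

This replaces the fixed 30-second wait activity with "wait only as long as needed", at the cost of a few extra lines in the notebook.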

 

 

 

Hi @alfBI ,

Thank you for your response. As you mentioned, rerunning from the failed activity succeeds, and you want to check the issue again after adding a time activity before the refresh. Once your testing is done, please do let us know if you have any further queries.

 

Regards,

Dinesh

v-dineshya
Community Support

Hi @alfBI ,

Thank you for reaching out to the Microsoft Community Forum.

 

The error message "KeyError, Error value - ['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time'] not in index" typically indicates that the notebook is trying to access columns in a DataFrame that don't exist, because the Lakehouse is empty or the SQL Endpoint metadata has not been initialized properly.

 

Please check below things to fix the issue.

 

1. Before accessing columns in the notebook, check whether the DataFrame contains the expected columns. Please refer to the sample Python script below.


expected_cols = ['Table Name', 'Status', 'Start Time', 'End Time', 'Last Successful Sync Time']
if all(col in df.columns for col in expected_cols):
    df = df[expected_cols]
else:
    print("Expected columns not found. DataFrame is likely empty.")

 

2. Check that the Lakehouse has at least one table or object before triggering the SQL Endpoint refresh. An empty Lakehouse will cause the API to return an empty response.

 

3. Place the notebook execution in a try-except block and log errors to help with debugging. Please refer to the sample Python code below.

 

try:
    # notebook execution logic, e.g. the refresh call
    x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
except KeyError as e:
    print(f"KeyError encountered: {e}")
    # optionally skip or retry

 

I hope this information helps. Please do let us know if you have any further queries.

 

Regards,

Dinesh

Hi v-dineshya,

 

Using semantic-link-labs, the notebook code is extremely simple:

 

 

#%pip install semantic-link-labs

# Welcome to your new notebook
# Type here in the cell editor to add code!
import sempy_labs as labs

item = 'Stage' # Enter the name or ID of the Fabric item
type = 'Lakehouse' # Enter the item type
workspace = 'a0ad263f-c689-480b-bcd2-cc1a5cc9169f' # Enter the name or ID of the workspace

# Example 1: Refresh the metadata of all tables
tables = None
x = labs.refresh_sql_endpoint_metadata(item=item, type=type, workspace=workspace, tables=tables)
display(x)
 

honestly I have no idea about how to apply your workaround here
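For what it's worth, the column-check workaround can be folded into that simple notebook with one small helper. A sketch assuming `x` is the pandas DataFrame returned by `labs.refresh_sql_endpoint_metadata` and that the column names match the ones in the error message; the helper name is made up for illustration:

```python
import pandas as pd

# Column names taken from the KeyError in the original post.
EXPECTED_COLS = ['Table Name', 'Status', 'Start Time', 'End Time',
                 'Last Successful Sync Time']


def checked_result(x: pd.DataFrame) -> pd.DataFrame:
    """Return only the expected columns, or raise a clear error instead of
    the bare KeyError when the refresh returned an empty/partial frame."""
    missing = [c for c in EXPECTED_COLS if c not in x.columns]
    if missing:
        raise RuntimeError(
            f"Refresh returned an incomplete result; missing columns: {missing}"
        )
    return x[EXPECTED_COLS]
```

In the notebook you would then replace `display(x)` with `display(checked_result(x))`, optionally inside a try-except that retries the refresh when the RuntimeError fires.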

 

Thx

 
