Hello,
In a DEV notebook, I use this code to get the ID of the DEV Lakehouse and the ID of the DEV Workspace:
df_Lakehouses = labs.list_lakehouses()
lakehouse_row = df_Lakehouses[df_Lakehouses["Lakehouse Name"] == "Lakehouse"]
lakehouse_id = lakehouse_row.iloc[0]["Lakehouse ID"]
workspace_id = spark.conf.get("trident.workspace.id")
I'd like to pass these IDs into the configuration below, but unfortunately I can't get the variables set correctly for the code to work.
%%configure -f
{
"defaultLakehouse": {
"name": 'Lakehouse',
"id": lakehouse_id,
"workspace": workspace_id
}
}
Does anyone have any ideas?
Hi @Charline_74
To change the default lakehouse in a Microsoft Fabric notebook, you can use the %%configure magic command with the defaultLakehouse parameter. However, the issue in your code is that you're trying to use Python variables directly within the JSON configuration, which isn't possible. Instead, you need to format the JSON string with the variable values.
Could you please give this a try:
df_Lakehouses = labs.list_lakehouses()
lakehouse_row = df_Lakehouses[df_Lakehouses["Lakehouse Name"] == "Lakehouse"]
lakehouse_id = lakehouse_row.iloc[0]["Lakehouse ID"]
workspace_id = spark.conf.get("trident.workspace.id")
%%configure -f
{
"defaultLakehouse": {
"name": "Lakehouse",
"id": "%s",
"workspaceId": "%s"
}
}
""" % (lakehouse_id, workspace_id)
Please give kudos and mark this as solution if this helps.
Thanks
Thank you for your feedback. Do you know how to use this API? https://learn.microsoft.com/en-us/rest/api/fabric/notebook/items/update-notebook-definition?tabs=HTT...
I don't understand how to define the API body.
Hi @Charline_74 ,
I just tried it out and you can change the notebook definition, including its metadata, using notebookutils, so we don't even need to explicitly invoke an API call. 🙂
This is what worked for me:
import sempy.fabric as fabric
# INSERT YOUR WORKSPACE-SPECIFIC INFORMATION HERE
workspace_name = "workspace_name"
item_name = "notebook_name"
replacement_dict = {
"lakehouse_id" : {
"old" : "LH_ID_old",
"new" : "LH_ID_new",
},
"lakehouse_name" : {
"old" : "LH_old",
"new" : "LH_new",
},
"workspace_id_of_lakehouse" : {
"old" : "workspace_id_old",
"new" : "workspace_id_new",
},
}
# Resolve the workspace and find the target notebook's item ID and type
workspace_id = fabric.resolve_workspace_id(workspace_name)
items = fabric.list_items(workspace=workspace_name)
item_id = items.where(items["Display Name"] == item_name).dropna().Id.item()
item_type = items.where(items["Display Name"] == item_name).dropna().Type.item()
# Fetch the current notebook definition, apply the replacements, and write it back
definition = notebookutils.notebook.getDefinition(item_name, workspace_id)
for replacement in replacement_dict.keys():
    definition = definition.replace(replacement_dict[replacement]["old"], replacement_dict[replacement]["new"])
notebookutils.notebook.updateDefinition(name=item_name, content=definition, workspaceId=workspace_id)
This is essentially doing a search and replace in the current definition of your notebook and overwrites the specific part concerning the default lakehouse. If your items are in the same workspace you could skip the "workspace_id_of_lakehouse" key in the dict since it doesn't need to be changed.
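For reference, here is an illustrative (not copied from a real notebook, and the exact layout may differ between runtime versions) fragment of what the default-lakehouse reference looks like inside the # META header of a Fabric notebook's notebook-content.py. This is the part of the definition that the search-and-replace above targets:

```
# META   "dependencies": {
# META     "lakehouse": {
# META       "default_lakehouse": "LH_ID_old",
# META       "default_lakehouse_name": "LH_old",
# META       "default_lakehouse_workspace_id": "workspace_id_old"
# META     }
# META   }
```

Because these values are plain strings in the definition, replacing the old IDs and names with the new ones is enough to repoint the default lakehouse.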
We found great success using these commands to change information which, for instance, cannot be parameterized during deployment via (the current form of) deployment pipelines.
Hope this helps you out. 🙂
Hi @Charline_74 ,
To add to what @nilendraFabric said: using the %%configure command to overwrite notebook metadata, such as the default lakehouse, requires you to restart the running session. This leads to problems when executing the notebook from a scheduler such as a Data Pipeline, and should be taken into account when relying on this option.
As an alternative to changing the default lakehouse within the same notebook, you could try to use the Fabric API and change the default lakehouse reference from a separate notebook.
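To make the Update Notebook Definition API body from the linked docs page more concrete, here is a minimal sketch of how it could be built. Assumptions on my part: the notebook source lives in a definition part named notebook-content.py, and sempy's FabricRestClient is available for the call itself (verify both against the docs before relying on this):

```python
import base64


def build_update_definition_body(notebook_source: str) -> dict:
    """Build the request body for the Update Notebook Definition API.

    The definition is a list of 'parts'; the notebook source is sent
    base64-encoded with payloadType 'InlineBase64'.
    """
    payload = base64.b64encode(notebook_source.encode("utf-8")).decode("utf-8")
    return {
        "definition": {
            "parts": [
                {
                    "path": "notebook-content.py",  # assumed part name
                    "payload": payload,
                    "payloadType": "InlineBase64",
                }
            ]
        }
    }


body = build_update_definition_body("# new notebook source here")

# Sending it (untested sketch; endpoint shape taken from the linked docs page):
# from sempy.fabric import FabricRestClient
# client = FabricRestClient()
# client.post(
#     f"v1/workspaces/{workspace_id}/notebooks/{notebook_id}/updateDefinition",
#     json=body,
# )
```

The source string you pass in could come from notebookutils.notebook.getDefinition (as in the earlier post), with the lakehouse IDs already swapped.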
Hi,
I'm getting the following error when trying this solution:
Cell In[17], line 14
""" % (lakehouse_id, workspace_id)
^
SyntaxError: incomplete input
When moving the configure magic to its own cell I get this error:
MagicUsageError: Configuration should be a valid JSON object expression.
--> JsonReaderException: After parsing a value an unexpected character was encountered: ". Path 'defaultLakehouse', line 7, position 0.
Tried multiple variations of it, but I can't seem to figure it out. Appreciate any help.
Thanks!