Hello,
In a notebook in my DEV workspace, I use this code to get the ID of the DEV lakehouse and the ID of the DEV workspace:
import sempy_labs as labs  # semantic-link-labs

df_lakehouses = labs.list_lakehouses()
lakehouse_row = df_lakehouses[df_lakehouses["Lakehouse Name"] == "Lakehouse"]
lakehouse_id = lakehouse_row.iloc[0]["Lakehouse ID"]
workspace_id = spark.conf.get("trident.workspace.id")
I'd like to feed these IDs into the %%configure cell below, but unfortunately I can't set the variables correctly for the code to work.
%%configure -f
{
    "defaultLakehouse": {
        "name": 'Lakehouse',
        "id": lakehouse_id,
        "workspace": workspace_id
    }
}
Does anyone have any ideas?
Hi @Charline_74
To change the default lakehouse in a Microsoft Fabric notebook, you can use the %%configure magic command with the defaultLakehouse parameter. However, the issue in your code is that you're trying to use Python variables directly within the JSON configuration, which isn't possible. Instead, you need to format the JSON string with the variable values.
Could you please give this a try:
# Look up the IDs first (semantic-link-labs)
import sempy_labs as labs

df_lakehouses = labs.list_lakehouses()
lakehouse_row = df_lakehouses[df_lakehouses["Lakehouse Name"] == "Lakehouse"]
lakehouse_id = lakehouse_row.iloc[0]["Lakehouse ID"]
workspace_id = spark.conf.get("trident.workspace.id")

# Build the configuration JSON with the variable values
config_json = """
{
    "defaultLakehouse": {
        "name": "Lakehouse",
        "id": "%s",
        "workspaceId": "%s"
    }
}
""" % (lakehouse_id, workspace_id)

# The %%configure -f cell only accepts literal JSON and must run as its own cell,
# so print the result and use these values in that cell.
print(config_json)
Please give kudos and mark this as solution if this helps.
Thanks
Thank you for your feedback. Do you know how to use this API? https://learn.microsoft.com/en-us/rest/api/fabric/notebook/items/update-notebook-definition?tabs=HTT...
I don't understand how to define the API request body.
Hi @Charline_74 ,
I just tried it out and you can change the notebook definition, including its metadata, using notebookutils, so we don't even need to explicitly invoke an API call. 🙂
This is what worked for me:
import sempy.fabric as fabric
# notebookutils is available by default in Fabric notebooks

# INSERT YOUR WORKSPACE-SPECIFIC INFORMATION HERE
workspace_name = "workspace_id"   # name of the workspace containing the notebook
item_name = "notebook_name"       # name of the notebook whose default lakehouse should change

# Old/new value pairs to search and replace in the notebook definition
replacement_dict = {
    "lakehouse_id": {
        "old": "LH_ID_old",
        "new": "LH_ID_new",
    },
    "lakehouse_name": {
        "old": "LH_old",
        "new": "LH_new",
    },
    "workspace_id_of_lakehouse": {
        "old": "workspace_id_old",
        "new": "workspace_id_new",
    },
}

# Resolve the workspace and the notebook item
workspace_id = fabric.resolve_workspace_id(workspace_name)
items = fabric.list_items(workspace=workspace_name)
item_id = items.where(items["Display Name"] == item_name).dropna().Id.item()
item_type = items.where(items["Display Name"] == item_name).dropna().Type.item()

# Get the current notebook definition, replace the old values, and write it back
definition = notebookutils.notebook.getDefinition(item_name, workspace_id)
for replacement in replacement_dict.keys():
    definition = definition.replace(replacement_dict[replacement]["old"], replacement_dict[replacement]["new"])
notebookutils.notebook.updateDefinition(name=item_name, content=definition, workspaceId=workspace_id)
This essentially does a search and replace in the current definition of your notebook and overwrites the part that references the default lakehouse. If your items are in the same workspace, you can skip the "workspace_id_of_lakehouse" key in the dict, since it doesn't need to be changed (see the trimmed dict below).
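For example, in the same-workspace case the dict would simply drop that key (the old/new values here are placeholders, as above):

# Same-workspace variant: only the lakehouse reference changes (placeholder values)
replacement_dict = {
    "lakehouse_id": {"old": "LH_ID_old", "new": "LH_ID_new"},
    "lakehouse_name": {"old": "LH_old", "new": "LH_new"},
}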
We have had great success using these commands to change information that, for instance, cannot be parameterized during deployment via (the current form of) deployment pipelines.
Hope this helps you out. 🙂
Hi @Charline_74 ,
to add to what @nilendraFabric said, using the configure command to overwrite notebook metadata, such as the default lakehouse, will require you to restart the running session. This leads to problems when executing the notebook from a scheduler such as a Data Pipeline, so it should be taken into account when relying on this option.
As an alternative to changing the default lakehouse within the same notebook, you could try to use the Fabric API and change the default lakehouse reference from a separate notebook.
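For the REST API route you asked about, the request body follows the generic Fabric item-definition shape: a "definition" object containing a list of "parts", each with a path, a Base64 payload and a payloadType. Below is a minimal, untested sketch of that pattern run from a separate notebook. The workspace_id, notebook_id and old/new lakehouse IDs are placeholders you would fill in, sempy's FabricRestClient is used here purely for authentication, and both calls may come back as long-running operations (HTTP 202) that need polling.

import base64
import sempy.fabric as fabric

client = fabric.FabricRestClient()

workspace_id = "<target workspace id>"   # placeholder
notebook_id = "<target notebook id>"     # placeholder

# 1) Fetch the current definition of the notebook
resp = client.post(f"v1/workspaces/{workspace_id}/notebooks/{notebook_id}/getDefinition")
parts = resp.json()["definition"]["parts"]

# 2) Decode the notebook content and swap the lakehouse reference
for part in parts:
    if part["path"] == "notebook-content.py":
        content = base64.b64decode(part["payload"]).decode("utf-8")
        content = content.replace("LH_ID_old", "LH_ID_new")  # placeholder IDs
        part["payload"] = base64.b64encode(content.encode("utf-8")).decode("utf-8")

# 3) Send the modified definition back - this is the API body from your question
body = {"definition": {"parts": parts}}
client.post(f"v1/workspaces/{workspace_id}/notebooks/{notebook_id}/updateDefinition", json=body)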