todd-wilson
Regular Visitor

Cannot reference a Notebook "that attaching to a different default lakehouse"

I'm running a pipeline that starts with a notebook DAG, and I'm getting an error that I can't reference a notebook "that attaching to a different default lakehouse". I've tried passing "useRootDefaultLakehouse": True on runMultiple to suppress the error (although I'd rather not) and configuring a default lakehouse for every script. I also tried removing and reattaching the lakehouse on all my notebooks. Any suggestions on how to fix this?
 
mssparkutils.notebook.runMultiple(DAG, {"displayDAGViaGraphviz": False, "DAGLayout": "spectral"})
 
%%configure -f
{
    "defaultLakehouse": {
        "name": "<lakehouse name>",
        "id": "<lakehouse id>",
        "workspaceId": "<workspace id>"
    }
}

 

Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - Py4JJavaError, Error value - An error occurred while calling z:notebookutils.notebook.runMultiple.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: Cannot reference a Notebook that attaching to a different default lakehouse. You can pass the parameter useRootDefaultLakehouse to ignore it, for example in run API: mssparkutils.notebook.run('child_nb', 90, {'useRootDefaultLakehouse': True}), in runMultiple API, please run mssparkutils.notebook.help('runMultiple') for more details. You can check driver log or snapshot for detailed error info! See how to check logs: https://go.microsoft.com/fwlink/?linkid=2157243 .

 

4 REPLIES
todd-wilson
Regular Visitor

Thank you for the reply. runMultiple with a plain list doesn't work for me because of the dependencies between the notebooks. I did run all of them in sequence, and the notebook that loads from the bronze lakehouse to the silver lakehouse is the issue.
 
This error occurs whether I use the notebook in the DAG with runMultiple or run the notebook via a single run command. If I just run the notebook in a session without using mssparkutils, it completes without error.
 
How can I make sure references to both my bronze and silver lakehouses are present in my silver notebook? I have added the bronze lakehouse to my silver notebook, but the issue persists. 😞
 
 

So I fixed this by adding the "useRootDefaultLakehouse": True arg to my DAG under my silver script.

 

            "args": {
                "useRootDefaultLakehouse": True
            }
Larger snippet...
 {
            "name": "notebook_silver",
            "path": "Notebook Silver",
            "timeoutPerCellInSeconds": 3600,
            "dependencies": ["Notebook Bronze"],
            "args": {
                "useRootDefaultLakehouse": True
            },
        }
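
For anyone hitting the same thing, here is a minimal sketch of how that fragment fits into the full DAG passed to runMultiple (the activity names and timeouts mirror my snippet above; everything else is trimmed):

# Minimal sketch; activity names/timeouts mirror the snippet above.
DAG = {
    "activities": [
        {
            "name": "Notebook Bronze",  # activity name, referenced by "dependencies"
            "path": "Notebook Bronze",  # notebook to run
            "timeoutPerCellInSeconds": 3600,
        },
        {
            "name": "notebook_silver",
            "path": "Notebook Silver",
            "timeoutPerCellInSeconds": 3600,
            "dependencies": ["Notebook Bronze"],        # run after the bronze notebook
            "args": {"useRootDefaultLakehouse": True},  # the fix
        },
    ],
}

mssparkutils.notebook.runMultiple(DAG, {"displayDAGViaGraphviz": False, "DAGLayout": "spectral"})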
 
My notebook uses absolute abfss paths to access the bronze lakehouse and the silver lakehouse, so I think it should have worked according to the docs:
 
"To specify the location to read from, you can use the relative path if the data is from the default lakehouse of your current notebook. Or, if the data is from a different lakehouse, you can use the absolute Azure Blob File"
 
 
 

Thank you for sharing the solution with us!

v-jingzhan-msft
Community Support

Hi @todd-wilson 

 

If you run a notebook directly with the code below (not calling it from a data pipeline), will it run successfully?

%%configure -f
{
    "defaultLakehouse": {
        "name": "<lakehouse name>",
        "id": "<lakehouse id>",
        "workspaceId": "<workspace id>"
    }
}

 

In addition, if you don't use a DAG and just run multiple notebooks concurrently like below, will it run successfully?

mssparkutils.notebook.runMultiple(["notebook1", "notebook2", "notebook3"])

 

How many notebooks are running concurrently? Is it possible to remove some notebooks or run them separately, to check whether this error occurs on one specific notebook or on every notebook?
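
For example, a quick way to run them one at a time and isolate the failing notebook could look like this (the notebook names below are placeholders):

# Run each notebook sequentially to see which one triggers the error
# (notebook names are placeholders)
for nb in ["notebook1", "notebook2", "notebook3"]:
    exit_value = mssparkutils.notebook.run(nb, 3600)  # 3600s timeout per notebook
    print(nb, "->", exit_value)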

 

Best Regards,
Jing

Community Support Team
