Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - Py4JJavaError, Error value - An error occurred while calling z:notebookutils.notebook.runMultiple.
: com.microsoft.spark.notebook.msutils.NotebookExecutionException: Cannot reference a Notebook that attaching to a different default lakehouse. You can pass the parameter useRootDefaultLakehouse to ignore it, for example in run API: mssparkutils.notebook.run('child_nb', 90, {'useRootDefaultLakehouse': True}), in runMultiple API, please run mssparkutils.notebook.help('runMultiple') for more details. You can check driver log or snapshot for detailed error info! See how to check logs: https://go.microsoft.com/fwlink/?linkid=2157243 .
I fixed this by adding the "useRootDefaultLakehouse": True argument to the entry for my silver script in the DAG.
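For reference, a minimal sketch of what that DAG can look like; the notebook names and timeout here are placeholders, and only the silver activity carries the "useRootDefaultLakehouse" argument:

```python
# Sketch of a runMultiple DAG where one activity (the silver notebook)
# opts out of the default-lakehouse check via "useRootDefaultLakehouse".
# Notebook names and timeouts are placeholders.
dag = {
    "activities": [
        {
            "name": "bronze_nb",
            "path": "bronze_nb",
            "timeoutPerCellInSeconds": 90,
        },
        {
            "name": "silver_nb",
            "path": "silver_nb",
            "timeoutPerCellInSeconds": 90,
            # Ignore the mismatch with the parent notebook's default lakehouse
            "args": {"useRootDefaultLakehouse": True},
            # Run only after the bronze notebook finishes
            "dependencies": ["bronze_nb"],
        },
    ]
}

# Inside a Fabric notebook, this DAG would then be executed with:
# mssparkutils.notebook.runMultiple(dag)
```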
Thank you for sharing the solution with us!
Hi @todd-wilson
If you run a notebook directly with the code below (instead of calling it from a data pipeline), does it run successfully?
%%configure -f
{
"defaultLakehouse": {
"name": "<lakehouse name>",
"id": "<lakehouse id>",
"workspaceId": "<workspace id>"
}
}
In addition, if you don't use a DAG in the data pipeline and instead just run multiple notebooks concurrently as below, does it run successfully?
mssparkutils.notebook.runMultiple(["notebook1", "notebook2", "notebook3"])
How many notebooks are running concurrently? Could you reduce the number of notebooks, or run them separately, to check whether this error occurs only on a specific notebook or on each of them?
Best Regards,
Jing
Community Support Team