I am having an issue with a pipeline executing a notebook in a different workspace.
The flow is basically this: a pipeline in Workspace1 has a notebook activity that runs a notebook in Workspace2, and that notebook writes to a Delta table in a lakehouse in Workspace2. This is the only lakehouse attached to the notebook.
The intention is to have the Workspace2 notebook executed from many workspaces so the common logic can be shared.
From the following error, it appears that the notebook doesn't run against its default lakehouse:
Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - AnalysisException, Error value - org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Spark SQL queries are only possible in the context of a lakehouse. Please attach a lakehouse to proceed.)' :
Is there a way to achieve this?
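One workaround I've seen suggested (not confirmed for this exact setup) is to avoid relying on the default lakehouse entirely and address the Delta table by its full OneLake ABFSS path, so the notebook works no matter which workspace's pipeline invokes it. The workspace, lakehouse, and table names below are placeholders, and the helper function is just an illustration:

```python
# Sketch: build an explicit OneLake path to a Delta table so the notebook
# does not depend on a default lakehouse being attached.
# All names here are placeholders, not values from the original post.
def onelake_table_path(workspace: str, lakehouse: str, table: str) -> str:
    """Return the ABFSS path to a Delta table in a Fabric lakehouse."""
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Tables/{table}"
    )

path = onelake_table_path("Workspace2", "MyLakehouse", "my_table")

# In the Workspace2 notebook (PySpark), write using the full path
# instead of a bare table name, e.g.:
# df.write.format("delta").mode("append").save(path)
```

With this approach, Spark reads and writes go straight to the lakehouse files, so the "Spark SQL queries are only possible in the context of a lakehouse" error tied to the default-lakehouse context shouldn't apply to path-based writes.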
I'm not sure what changed, but this now works appropriately.