I am having an issue with a pipeline executing a notebook in a different workspace.
The flow is: a pipeline in Workspace1 has a notebook activity that runs a notebook in Workspace2, and that notebook writes to a delta table in a lakehouse in Workspace2. This is the only lakehouse attached to the notebook.
The intention is to have the Workspace2 notebook executed from many workspaces so the common logic can be shared.
From the following error, it appears that the notebook doesn't run against its default lakehouse.
Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - AnalysisException, Error value - org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Spark SQL queries are only possible in the context of a lakehouse. Please attach a lakehouse to proceed.)' :
Is there a way to achieve this?
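One workaround I have seen suggested is to avoid depending on a default lakehouse at all and write to the table by its absolute OneLake path instead. A minimal sketch, assuming placeholder workspace/lakehouse/table names (not the real ones from my setup):

```python
def onelake_table_path(workspace: str, lakehouse: str, table: str) -> str:
    """Build the ABFSS path to a table in a Fabric lakehouse,
    so the notebook does not rely on an attached default lakehouse."""
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Tables/{table}"
    )

# In the Workspace2 notebook, the write would then be something like:
# df.write.format("delta").mode("append").save(
#     onelake_table_path("Workspace2", "MyLakehouse", "my_table"))
```

Spark SQL against the default lakehouse would still fail without an attachment, but path-based reads/writes like this should work regardless of which workspace the pipeline runs from.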
Update: I'm not sure what changed, but this now works as expected.