I am having an issue with a pipeline executing a notebook in a different workspace.
The flow is basically this: a pipeline in Workspace1 has a notebook activity that runs a notebook in Workspace2, and that notebook writes to a delta table in a lakehouse in Workspace2. This is the only lakehouse attached to the notebook.
The intention is to have the Workspace2 notebook executed from many workspaces so the common logic can be shared.
From the following error, it appears that the notebook doesn't run against its default lakehouse.
Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - AnalysisException, Error value - org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Spark SQL queries are only possible in the context of a lakehouse. Please attach a lakehouse to proceed.)' :
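One thing that may help here (a sketch, not confirmed for this exact scenario): Fabric notebooks support a `%%configure` cell magic that can pin the default lakehouse explicitly at session start, so the binding doesn't depend on how the notebook is invoked. The lakehouse name, id, and workspaceId below are placeholders.

```
%%configure
{
    "defaultLakehouse": {
        "name": "MyLakehouse",
        "id": "<lakehouse-guid>",
        "workspaceId": "<workspace2-guid>"
    }
}
```

With the default lakehouse set this way in the first cell, Spark SQL queries in the notebook should resolve against the Workspace2 lakehouse even when the run is triggered by a pipeline in another workspace.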
Is there a way to achieve this?
Update: I'm not sure what changed, but this now works as expected.