This post is part issue-report, part workaround suggestion.
We are running Spark Jobs written in Scala and use
com.microsoft.azure.synapse:synapseutils_2.12:1.6.2 to access (among other things) notebookutils. I can find no official documentation stating that synapseutils is meant to be used with Fabric as well, but since it contains notebookutils (which is a Fabric utility library), that seems implied.
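For reference, wiring in that dependency might look like this (a build.sbt sketch based on the coordinates above; the Provided scope is my assumption, since the real implementation ships with the Fabric runtime):

```scala
// build.sbt sketch: coordinates taken from the post; Provided scope is an
// assumption, since the Fabric runtime supplies the real implementation.
libraryDependencies += "com.microsoft.azure.synapse" % "synapseutils_2.12" % "1.6.2" % Provided
```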
This library exposes dummy stubs, so calls to notebookutils can compile locally.
In particular we want to call the following function on notebookutils.lakehouse:
def list(workspaceId: String = ???): Array[Artifact]
However, when running the job in Fabric, this function call triggers the following error:
java.lang.NoSuchMethodError: notebookutils.lakehouse$.list(java.lang.String)
The (Python) signature in the official documentation is
# List all Lakehouse artifacts
list(workspaceId: String = "", maxResults: Int = 1000): Array[Artifact]
Apparently, the default arguments are not translated correctly in the Scala version. In particular, the Scala stub does not expose a two-argument version of list.
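For illustration, a stub matching the documented runtime signature might look like this (a guess at the shape based on the Python documentation, not the actual library source; Artifact is a placeholder type here):

```scala
// Illustration only: the two-argument signature the stub would need to expose
// to match the documented runtime API. Artifact is a placeholder type.
class Artifact

object lakehouseStubSketch {
  def list(workspaceId: String = "", maxResults: Int = 1000): Array[Artifact] = ???
}
```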
Note: Calling notebookutils.lakehouse.list(wsid) within a Spark Scala Notebook works without issue, the problem lies solely within the (dummy) function stubs provided by the synapseutils library.
A very ugly workaround is to apply reflection to the notebookutils.lakehouse object at runtime to find the two-argument version of list:
object FabricLakehouseCompat {
  // Look up the two-argument runtime overload of list via reflection,
  // bypassing the incomplete compile-time stub.
  def listLakehouses(workspaceId: String, maxResults: Int = 1000): Array[AnyRef] = {
    val lh = notebookutils.lakehouse
    val method = lh.getClass.getMethod(
      "list",
      classOf[String],
      java.lang.Integer.TYPE
    )
    method
      .invoke(lh, workspaceId, Int.box(maxResults))
      .asInstanceOf[Array[AnyRef]]
  }
}
Further reflection can be applied to convert the list of AnyRefs to Artifacts again.
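That second reflection step could be sketched like this (the accessor names id and displayName are my assumptions based on the documented Artifact shape, not a confirmed API):

```scala
// Hypothetical sketch: mapping the reflective Array[AnyRef] results back into
// a typed value. The accessor names (id, displayName) are assumptions based
// on the documented Artifact shape.
case class ArtifactInfo(id: String, displayName: String)

object ArtifactCompat {
  // Invoke a zero-argument accessor on an arbitrary runtime object by name.
  private def prop(obj: AnyRef, name: String): String =
    obj.getClass.getMethod(name).invoke(obj).toString

  def toInfo(raw: AnyRef): ArtifactInfo =
    ArtifactInfo(prop(raw, "id"), prop(raw, "displayName"))
}
```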
Hi @s_hampe ,
Great investigation! Your reflection-based workaround is solid, and I can confirm this is a known limitation rather than a simple bug.
Root Cause: NotebookUtils is not supported in Spark Job Definitions
The official Microsoft documentation for NotebookUtils explicitly states:
"Notebook utilities aren't applicable for Apache Spark job definitions (SJD)."
This means the notebookutils.lakehouse.list() method (and other notebookutils APIs) are designed to run inside Fabric Notebooks only, not in compiled Spark Job Definitions. The synapseutils_2.12:1.6.2 library provides compilation stubs, but these stubs are intentionally incomplete; they were never meant to fully replicate the runtime API available inside Notebooks. That is why the method signature mismatch exists.
Your Java reflection approach is actually the correct pattern for this scenario. Since the stubs don't match the runtime, reflection bypasses the compilation check and calls the method directly at runtime.
Alternative: Use the Fabric REST API
For Spark Job Definitions, the recommended approach is to use the Fabric REST API instead of notebookutils:
The Lakehouse Management REST API provides full CRUD operations and works reliably in both Notebooks and Spark Job Definitions.
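A minimal sketch of that approach from Scala, using the JDK HTTP client. The endpoint path follows the List Lakehouses operation as I understand it, and token acquisition is left out; verify both against the Lakehouse Management REST API documentation:

```scala
// Hedged sketch: listing lakehouses through the Fabric REST API instead of
// notebookutils. Endpoint path and auth handling are assumptions to verify
// against the official REST API documentation.
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object LakehouseRestClient {
  private val client = HttpClient.newHttpClient()

  // Build the List Lakehouses URI for a workspace (assumed endpoint shape).
  def listUri(workspaceId: String): URI =
    URI.create(s"https://api.fabric.microsoft.com/v1/workspaces/$workspaceId/lakehouses")

  // `token` must be a Microsoft Entra ID access token for the Fabric API.
  def listLakehousesJson(workspaceId: String, token: String): String = {
    val request = HttpRequest.newBuilder()
      .uri(listUri(workspaceId))
      .header("Authorization", s"Bearer $token")
      .GET()
      .build()
    client.send(request, HttpResponse.BodyHandlers.ofString()).body()
  }
}
```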
Suggestion to Microsoft
It would be helpful if the synapseutils library either matched the runtime method signatures exactly or documented which notebookutils APIs are unavailable in Spark Job Definitions.
This would save developers significant debugging time.
Hope this helps! Feel free to ask if you have follow-up questions.
Hi @s_hampe,
Thank you for reaching out to the Microsoft Fabric Community Forum. Also, thanks to @Tamanchu for the inputs on this thread.
Has your issue been resolved? If the response provided by the community member @Tamanchu addressed your query, could you please confirm? It helps us ensure that the solutions provided are effective and beneficial for everyone.
Hope this helps clarify things. Let me know what you find after giving these steps a try; happy to help you investigate this further.
Thank you for using the Microsoft Community Forum.
Hi @Tamanchu !
Thank you for your feedback.
As for the support for Spark Jobs: the notice you mention ("Notebook utilities aren't applicable for Apache Spark job definitions (SJD).") is found in this section:
https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities#notebook-utilities
which refers to a subset of functions of NotebookUtils intended specifically for Notebooks (notebookutils.notebook). It makes complete sense that this particular subset is not applicable to Spark Jobs.
For the other functions, no such notice is present - so I would still consider it a bug in the synapseutils library.
I agree with your suggestion though: synapseutils should match the correct signatures.
Hello @s_hampe,
You’re right that the documentation explicitly mentions only the notebook-specific namespace. My interpretation was based on the observed behavior in SJD where lakehouse APIs are partially stubbed.
If the behavior isn’t officially documented as unsupported, then this likely deserves clarification or a fix in the synapseutils library.