Hello,
I want to develop and debug Spark job definitions on my local machine using VS Code, and I want to use mssparkutils and notebookutils. So I'm wondering how I can select the Fabric kernel in VS Code. It works for notebooks, but it should also work for Spark job definitions if I understand the documentation correctly: Fabric runtime support in VS Code - Microsoft Fabric | Microsoft Learn
I can currently only select conda kernels:
Thanks for your response!
Martijn
Currently, Spark Job Definitions cannot be debugged using the Fabric kernel in VSCode. The Fabric kernel is mainly designed for running notebooks, whereas Spark Job Definitions require execution on the remote Spark Runtime. If you try to debug a Spark Job Definition using the Fabric kernel, it won’t function as expected since it lacks the necessary Spark execution environment.
To properly develop and debug Spark jobs, you need to select the remote Spark Runtime, which provides the required Spark components and dependencies. This ensures that your job runs in an environment similar to the one it will be deployed in, allowing for accurate testing and debugging.
If you were expecting this functionality within the Fabric kernel itself, it would be worth checking Microsoft’s official roadmap or documentation for any future support. Having the ability to debug Spark Job Definitions directly in the Fabric kernel would be a valuable enhancement, as it would streamline development and testing workflows without requiring an external runtime.
Are there any specific issues you’re facing while debugging your Spark Job Definition? Let me know how I can assist further!
Hello Apikpo09,
Thanks for your response!
Not the answer I was hoping for, but this explains why I couldn't select the Fabric kernel.
Is there a way for me to request this as a new feature? It would enable us to develop a maintainable and modular codebase.
I want to develop a modular codebase to facilitate our ETL process. It is currently spread over multiple notebooks, which aren't very useful when developing/debugging functions/classes across multiple notebooks. Also, merging notebooks is a bit of a hassle (putting it mildly :P).
However, with the conda runtime I am able to develop/debug against a Spark environment.
The specific issues I'm facing are:
- Wanting to use the latest Spark/Delta features, which are available in Spark runtime 1.3
- Wanting to use notebookutils within my functions/classes (see the sketch below for how I'm working around this locally)
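For the second point, the pattern I'm currently experimenting with is to guard the notebookutils import and fall back to plain Python/Spark APIs when it isn't there, so the same functions/classes work both locally (conda kernel) and on Fabric. A minimal sketch; the helpers are illustrative and I'm assuming notebookutils.fs.ls is available on the Fabric runtime:

import os
from pyspark.sql import SparkSession, DataFrame

try:
    import notebookutils  # available on the Fabric runtime, not in a local conda env
    ON_FABRIC = True
except ImportError:
    notebookutils = None
    ON_FABRIC = False

def list_files(path: str) -> list:
    """List file paths under a directory, wherever the code runs."""
    if ON_FABRIC:
        # Assumption: notebookutils.fs.ls returns entries with a .path attribute.
        return [f.path for f in notebookutils.fs.ls(path)]
    return [os.path.join(path, name) for name in os.listdir(path)]

def load_table(spark: SparkSession, table: str) -> DataFrame:
    """Pure-Spark helper with no utils dependency, so it debugs fine locally."""
    return spark.read.table(table)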
Hi Broeks,
We sincerely appreciate your update.
We are reaching out to confirm whether the support ticket you raised has successfully resolved the issue. If so, we would appreciate it if you could share the solution with the community and mark it as the accepted solution. This will help others encountering similar challenges and contribute to the broader community’s benefit.
Thank you.
Hello,
I've raised a ticket and I'm still waiting for a response. Once I've got the solution I'll share it here.
greetings,
Martijn
Hi Broeks,
We are following up to check whether the support ticket raised from your end has provided a resolution to the issue. If it has, we kindly request you to share the solution with the community and mark it as the accepted solution, as this will assist others facing similar challenges and benefit the broader community.
Thank you.
Hello,
Yes, I'm in contact with the support team and will keep you posted. There are no updates for now.
Hi @Broeks,
Apologies for the inconvenience.
I regret the misunderstanding of the scenario. As mentioned in the documentation, it should also work for Spark job definitions.
However, I am unable to reproduce the scenario. Therefore, I kindly request you to raise a support ticket for resolution using the link provided below. A dedicated backend team will assist you in resolving the issue.
Microsoft Fabric Support and Status | Microsoft Fabric
Thank you for your patience and understanding.
Thanks for your response.
I've created a support ticket.
This would help our use case of developing our own PySpark solution.
Are there any insights you can give me on what to expect for any further integration between Fabric/Spark and VS Code?
Hi Broeks,
Thank you for reaching out with your query. Kindly follow the steps below to address the issue:
1. Ensure that the latest Synapse VS Code extension is installed.
2. Verify that OpenJDK 8 is correctly installed and added to the system PATH.
3. Select the appropriate Fabric runtime (e.g., fabric-synapse-runtime-1-2) in VS Code.
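As a quick sanity check that the JVM VS Code will pick up is OpenJDK 8, you can run something along these lines from the same Python interpreter (a minimal sketch; paths and environment variables depend on your setup):

import os
import subprocess

print("JAVA_HOME =", os.environ.get("JAVA_HOME"))
# Note: `java -version` writes its output to stderr, not stdout.
result = subprocess.run(["java", "-version"], capture_output=True, text=True)
print(result.stderr.strip())  # expect a version string starting with 1.8.0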
Currently, VS Code does not support the Fabric kernel for debugging Spark job definitions. Spark job definitions run locally and do not connect to the Fabric Spark cluster in the cloud. Tools such as mssparkutils and notebookutils are specifically designed for use in Notebooks and Fabric-managed environments, and they are not compatible with local Spark job definitions.
As an alternative, you may develop and debug the code in a Fabric-enabled notebook in VS Code, where mssparkutils and notebookutils are accessible. Once the code is finalized, test the Spark job definition in the Fabric portal to validate its behavior in the cloud.
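One way to soften this limitation is to keep the business logic in plain functions that take the SparkSession as a parameter, so the notebook and the Spark job definition become thin entry points over the same module. A rough sketch under that assumption (all names here are illustrative):

# etl_logic.py - environment-agnostic transformations. Nothing here touches
# mssparkutils/notebookutils, so the module can be debugged with a local
# SparkSession and reused unchanged from a notebook or a Spark job definition.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def clean_orders(orders: DataFrame) -> DataFrame:
    """Pure transformation: easy to exercise with a local SparkSession."""
    return (
        orders.dropDuplicates(["order_id"])
              .withColumn("order_date", F.to_date("order_date"))
    )

def run(spark: SparkSession, source_table: str, target_table: str) -> None:
    """Shared entry point for the notebook and the Spark job definition."""
    cleaned = clean_orders(spark.read.table(source_table))
    cleaned.write.mode("overwrite").saveAsTable(target_table)

In the notebook you would call run(spark, ...) interactively (with mssparkutils still available for notebook-specific tasks), while the Spark job definition's main file obtains its own SparkSession and calls the same run.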
For further reference, please refer to the following documentation:
If you find this response helpful, kindly mark it as the accepted solution and provide kudos. This will assist other community members facing similar queries.
Thank you.
Thanks! This explains why I'm not able to run Spark job definitions on the Fabric kernel. My suggestion would be to change the following line: "When you run the notebook or Spark Job Definition, you can choose the local conda environment or remote Spark Runtime." on page https://learn.microsoft.com/en-us/fabric/data-engineering/fabric-runtime-in-vscode#considerations-fo... as it suggests it's possible to run the Spark job definition on the Fabric kernel.
Another question: will this be a feature in the (near) future? It would open the door for us to develop Spark job definitions in a proper way.
Thanks for your feedback! You're absolutely right: this line could be misleading, as it implies that Spark Job Definitions can run on the Fabric kernel, which is not currently supported. This could cause confusion for users who expect seamless execution of Spark jobs in this environment. Updating this wording to clarify the distinction between running notebooks and Spark Job Definitions would definitely improve the documentation and help users understand the limitations of the current setup.
Additionally, I’d like to inquire whether there are any plans to support running Spark Job Definitions directly on the Fabric kernel in the future. Having this capability would be a major enhancement, as it would allow developers to create, test, and debug Spark jobs more efficiently within the Fabric ecosystem without relying on external runtimes. The ability to execute Spark Job Definitions seamlessly within Fabric would streamline development workflows and improve productivity, especially for teams that rely on structured Spark workloads.
If this feature is on the roadmap, it would be great to get some insights into its expected timeline or any upcoming improvements in this area. Looking forward to your thoughts on this!
Thanks for your response.
I think I've got my VS Code configured correctly, as I'm able to select the Microsoft Fabric runtime within a notebook in VS Code. However, I'm not able to do the same for a Spark job definition.
Can you please confirm that the Java version you are using is OpenJDK 8?
Also, select the right Spark runtime version. You can set the environment for a notebook in the portal; if you use Fabric Spark 1.2, you need to choose the fabric-synapse-runtime-1-2 kernel when running code in VS Code.
Hope this helps
I've got OpenJDK 8 installed and on the PATH. It's pointing to the OpenJDK path: "C:\Program Files\Eclipse Adoptium\jre-8.0.442.6-hotspot\bin"
When I get the Java version it says:
So I would assume it's correct.
When running the PySpark code in VS Code I get the following error for the Fabric 1.2 runtime:
And that leaves me with another question: can I debug my Spark job definition code against the Fabric Spark cluster in the cloud? If I understand correctly, I'm running against a local version of Spark and cannot use, for example, mssparkutils? Or Fabric runtime 1.3?
Hello @Broeks
From the documentation, it should work for Spark job definitions as well.
Try these:
1. Ensure you have the Synapse VS Code extension installed and properly configured.
2. Open your Spark job definition in VS Code.
3. In the VS Code Explorer, look for the "Python Environments" option.
4. Select the "synapse-spark-kernel" from the list of available kernels.
Referenced docs:
https://github.com/microsoft/SynapseVSCode/issues/13
https://learn.microsoft.com/vi-vn/fabric/data-engineering/author-sjd-with-vs-code
Please keep us posted here with the outcome.
If this works, please accept the solution.
Thanks