We are still just testing Fabric, and because the trial is over we have moved to the most basic Fabric capacity.
I'm the only one using it.
I run through a notebook, and at the end I have a bit of code
My impression is that using spark.stop() doesn't work in Fabric notebooks.
You could try to use mssparkutils.session.stop() instead.
Even though that documentation is for Azure Synapse Analytics, it seems to work, at least for the moment.
Here is the NotebookUtils documentation for Fabric; it doesn't mention this function:
https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities
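For example, a minimal sketch of what the last cell could look like, assuming mssparkutils is pre-loaded in the Fabric notebook runtime (which it normally is, no pip install needed):

```python
# End of the notebook: release the interactive Spark session so it stops
# consuming capacity. Assumes mssparkutils is available by default in the
# Fabric notebook runtime.

# ... your transformation / load logic runs above ...

# Instead of spark.stop(), stop the whole interactive session:
mssparkutils.session.stop()
```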
You can also click the Stop button in the Notebook user interface.
If you want to programmatically run the next notebook after the first notebook has finished, you can use the notebookutils.notebook.run() function in NotebookUtils.
https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities#reference-a-notebook
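A hedged sketch of chaining notebooks this way ("NextNotebook" and "param1" are placeholder names, not anything from this thread):

```python
# Run a second notebook from the current one.
# Arguments: notebook name, timeout in seconds, parameters passed to the called notebook.
exit_value = notebookutils.notebook.run("NextNotebook", 90, {"param1": "value1"})

# The returned value is whatever the called notebook passed to
# notebookutils.notebook.exit(...); use it to decide what to do next.
print(exit_value)
```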
Any update on this, how to stop a Spark session in MS Fabric notebook?
I get this error all the time. Nothing is running in Monitor but I still get the error. I have no idea what I have to stop so that I can be allowed to run my notebooks.
I have the lowest capacity F2. We are also evaluating whether or not Fabric could be our future data platform.
I see this in the NotebookUtils (former MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn documentation.
Seems to work.
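Presumably that refers to the session utilities in NotebookUtils; assuming so, the newer naming would be along these lines (my assumption, based on the linked documentation rather than on the post above):

```python
# Newer NotebookUtils equivalent of mssparkutils.session.stop().
# Assumes notebookutils is pre-loaded in the Fabric notebook runtime.
notebookutils.session.stop()
```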
Hi @DebbieE,
Any update on this? For the job queue limitation, you can refer to the following document:
Job queueing in Apache Spark for Fabric - Microsoft Fabric | Microsoft Learn
If these processes hit the limits, you can consider reducing the number of queued jobs or upgrading the capacity tier.
Regards,
Xiaoxin Sheng
Hi @DebbieE,
As the error message mentions, the Spark job can't run because this operation hit the compute or API rate limit.
What type of capacity are you using to host these processes? How did you configure the Spark environment and pool settings? Please share some more detailed information to help us clarify your scenario so we can test and troubleshoot.
Reference links:
Configure and manage starter pools in Fabric Spark - Microsoft Fabric | Microsoft Learn
Compute management in Fabric environments - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng
As above. It's the lowest capacity.
However, I have
Hi @DebbieE,
You can also refer to the following link to trace the notebook usages:
Notebook contextual monitoring and debugging - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng