Hello,
I'm getting the error below when I try to run two notebooks in parallel.
The first notebook runs fine with a standard session.
When I try to run the second notebook, I get the error.
I also tried using a high concurrency session, but I'm still seeing the same error.
Error:
"[TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430 {Learn more} HTTP status code: 430."
Note: I'm using a free trial account.
Do you still see this error? Did adjusting the Spark compute settings in the workspace help?
Best Regards,
Jing
You could try decreasing the Spark compute for the entire workspace in these settings:
https://learn.microsoft.com/en-us/fabric/data-engineering/create-custom-spark-pools.
If you switch to a small node size, disable autoscale, and set a fixed number of executors, you can experiment to find a configuration that lets both notebooks run at the same time while your code still finishes in reasonable time.
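As a per-notebook alternative to the workspace-wide pool settings, Fabric notebooks also accept a `%%configure` magic in the first cell to shape that session's resources. A minimal sketch is below; the specific numbers (2 executors, small node) are illustrative assumptions, not recommended values — what actually fits depends on your capacity SKU, and the trial capacity is small, so keep both sessions modest.

```
%%configure -f
{
    "numExecutors": 2,
    "conf": {
        "spark.dynamicAllocation.enabled": "false"
    }
}
```

Disabling dynamic allocation and fixing a low executor count keeps the session from claiming more compute than the capacity can grant to two concurrent jobs, which is what the 430 TooManyRequestsForCapacity error is complaining about.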
Instead of changing this for the whole workspace, you could also create a custom environment and attach your notebooks to it. You can find the documentation for custom environments here: https://learn.microsoft.com/en-us/fabric/data-engineering/create-and-use-environment