Hello,
I am currently working with Microsoft Fabric and I am running into resource limitations when executing notebooks from a pipeline ForEach loop.
From the documentation, I understood that job types are classified into two categories: interactive (notebook and Lakehouse-based) and batch (Spark job definitions). Since we are running a notebook from the pipeline, it is treated as an interactive job, and after a couple of iterations we hit the capacity limit: "Response code 430: Unable to submit this request because all the available capacity is currently being used. The suggested solutions are to cancel a currently running job, increase the available capacity, or try again later."
Batch jobs, on the other hand, are added to the queue when queueing is enabled and are automatically retried once capacity is freed up. But it seems we cannot trigger batch jobs from a pipeline, only manually or on a schedule.
One workaround is to run the loop sequentially (which doesn't work when invoking a child pipeline that contains a notebook activity), but I was wondering if there is a better way to do this.
Any help or guidance would be greatly appreciated.
Thank you,
Amnay
Hi @akanane, unfortunately there is no ability to execute notebooks in a high concurrency session via pipelines. The only ways I've been able to solve your challenge are either to consolidate the work into a single notebook, or to use a "master" orchestration notebook that calls the child notebooks itself, so the pipeline only submits one notebook job (a sketch follows below).
You could also have a read through this post by Lilliam.
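For illustration, a minimal sketch of the "master notebook" option, assuming a hypothetical child notebook named ProcessItem that accepts an item_id parameter (substitute your own names). Because the children run from inside the parent's Spark session, the pipeline itself only ever submits one interactive job:

```python
# Minimal sketch of an orchestration ("master") notebook in Fabric.
# "ProcessItem" and the item list below are hypothetical placeholders.

items = ["customers", "orders", "invoices"]  # hypothetical loop inputs

for item in items:
    # notebookutils is preinstalled in Fabric notebooks;
    # notebook.run(name, timeout_seconds, parameters) executes the child
    # notebook in the same session and returns its exit value.
    result = notebookutils.notebook.run("ProcessItem", 600, {"item_id": item})
    print(f"{item}: {result}")
```

If sequential runs are too slow, notebookutils also provides a notebook.runMultiple method for running several child notebooks in parallel inside the same session.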
Hello again,
I appreciate your answers @AndyDDC , @Anonymous as they represent valid workarounds to achieve the goal. However, loading/transforming is considered a batch job, so according to the documentation it shouldn't be done with notebooks, as they're interactive and not designed for batch jobs.
What should be done in production, though? Will we be able to trigger Spark job definitions from pipelines in the future?
Hi @akanane,
We can also use notebooks as batch jobs depending on the requirement, as they can be used for both interactive and batch operations.
At present we cannot trigger Spark job definitions from pipelines.
We'd appreciate it if you could share this feedback on our feedback channel, where it will be open for the user community to upvote and comment on. This allows our product teams to effectively prioritize your request against the existing feature backlog and gives insight into the potential impact of implementing the suggested feature.
In the meantime, I will check with the team whether these activities are already on the internal roadmap.
Hope this helps. Please let me know if you have any further queries.
Hi @AndyDDC, I had those two options in mind, but the notebook becomes a "black box" and requires additional development effort for error handling, logging, etc.
Anyway, thank you for your insight! I will definitely check out the post 🙂
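For anyone else reading, a rough sketch of the extra wrapper code I mean, using the same hypothetical ProcessItem child notebook as above:

```python
# Rough sketch of the error handling/logging the "black box" approach needs.
# "ProcessItem" and the input list are hypothetical placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

results, failures = {}, {}

for item in ["customers", "orders", "invoices"]:
    try:
        # notebookutils is preinstalled in Fabric notebooks
        results[item] = notebookutils.notebook.run("ProcessItem", 600, {"item_id": item})
        log.info("%s succeeded", item)
    except Exception as exc:  # a failed child notebook surfaces as an exception
        failures[item] = str(exc)
        log.error("%s failed: %s", item, exc)

# Surface the failures so the pipeline run is marked as failed too
if failures:
    raise RuntimeError(f"{len(failures)} item(s) failed: {failures}")
```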
Have a good day 🙂
Hi @akanane,
Thanks for using Fabric Capacity.
Notebooks are considered interactive jobs, and there is a limit on the number of interactive jobs that can run simultaneously. When a pipeline ForEach loop runs a notebook activity, each iteration is submitted as a separate interactive job. If the number of concurrent iterations exceeds the capacity limit, you will receive the error message you described.
There are a few workarounds that you can use:
- Set the ForEach loop to run sequentially, so only one notebook job is submitted at a time.
- Consolidate the per-iteration logic into a single notebook (or an orchestration notebook), so the pipeline submits only one interactive job.
- Increase the available capacity.
- Wait and retry once running jobs have freed up capacity (see the retry sketch after this list).
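For the last option, a minimal retry-with-backoff sketch. Here submit_notebook is a hypothetical callable standing in for however you submit the notebook run, and matching "430"/"capacity" in the exception text is an assumption about how that call surfaces the limit:

```python
import time

def run_with_backoff(submit_notebook, max_attempts=5, base_delay_s=60):
    """Retry a notebook submission while capacity is saturated.

    submit_notebook is a placeholder callable that submits the job and
    raises on failure; the 430/"capacity" check below is an assumption
    about how that call reports the capacity error.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return submit_notebook()
        except Exception as exc:
            capacity_full = "430" in str(exc) or "capacity" in str(exc).lower()
            if not capacity_full or attempt == max_attempts:
                raise  # not a capacity error, or out of retries
            delay = base_delay_s * 2 ** (attempt - 1)  # exponential backoff
            print(f"Capacity busy (attempt {attempt}); retrying in {delay}s")
            time.sleep(delay)
```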
Hope this is helpful.
Hi @akanane,
Glad to know you got some insights. Please continue using the Fabric Community in case of any queries.