I'm running 4 background jobs:
- An event stream.
- 2 pipelines, each running a notebook that uses PySpark streaming.
- A pipeline running SQL to copy from the Lakehouse to the Warehouse.

I have been careful to add timeouts in the pipelines and notebooks, and the data volume is tiny, around 20 messages an hour.

What I'm seeing is that the CU consumption of these background jobs increases over the course of the day. However, when I stop and start the Fabric capacity, it goes back down again.

How do I go about solving this?
Staggering did not help.

Two of the notebooks follow the pattern sketched below.
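(The original snippet wasn't included in the post, so this is a minimal illustrative sketch of the pattern described: a PySpark Structured Streaming query with a 2-minute trigger and an overall timeout. The source path, table names, checkpoint location, and timeout value are assumptions, not the actual code.)

```python
# Minimal sketch of the streaming-notebook pattern described above.
# Paths, table names, and the timeout value are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the incoming events as a stream (source path is hypothetical).
events = (
    spark.readStream
    .format("delta")
    .load("Tables/raw_events")
)

# Write to a destination table, polling the source every 2 minutes.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/raw_events")
    .trigger(processingTime="2 minutes")
    .toTable("processed_events")
)

# Stop the query after ~10 minutes so the run finishes before the
# next scheduled pipeline execution (every 11 minutes) kicks in.
query.awaitTermination(timeout=600)
query.stop()
```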
Restarting during the day gives the same result. I can't believe this is just a coincidence of more jobs running at the same peak load: there are only 4 jobs, and it was doing the same when there were only 2 (both pipelines running every 11 minutes, each executing a notebook with a 2-minute trigger).

I know I can probably work around it by running the notebook once per day and leaving it running, but I want to know why this is happening and how to get more detail.
Hi @bklooste,

Based on the description, try staggering the job start times to distribute the load more evenly throughout the day. Besides that, try reducing the polling frequency of the event stream and the pipelines. To see exactly where the CUs are going, the Microsoft Fabric Capacity Metrics app breaks consumption down by item and operation over time.
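At ~20 messages an hour, a continuously running streaming query mostly polls an empty source. Assuming the notebooks use Structured Streaming over Delta (the exact source isn't shown in the thread), one way to reduce polling is an availableNow trigger, which processes whatever has arrived and then stops, letting the scheduled pipeline do the polling instead of an always-on query. A minimal sketch, with illustrative table and checkpoint names:

```python
# Hedged sketch: drain currently available input, then stop, instead of
# polling continuously. Names below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

query = (
    spark.readStream
    .format("delta")
    .load("Tables/raw_events")
    .writeStream
    .format("delta")
    .option("checkpointLocation", "Files/checkpoints/raw_events_batch")
    .trigger(availableNow=True)  # process available data, then terminate
    .toTable("processed_events")
)

# Returns once all currently available input has been processed.
query.awaitTermination()
```

With this pattern, each scheduled pipeline run does a bounded amount of work and exits, so there is no long-lived streaming query accumulating CU usage between runs.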
Best Regards,
Wisdom Wu
If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.