Why does the Capacity Metrics app say that all my Synapse notebooks are cancelled? Is this a "known issue"? I haven't found it in the known issues list yet.
The other parts of Fabric do not seem to believe that these notebooks failed. In the ADF pipeline UI and in the monitoring blade, all of these notebooks are shown as successful. But in the Capacity Metrics app, everything appears to be cancelled. Below is the timepoint detail from the metrics app.
On a related note, why does the Capacity Metrics app even care whether background operations succeed or fail? The PG probably shouldn't be presenting that column at all, since it is unrelated to performance and cost. I'm guessing nobody tested this column, and even if the developers saw the bug, they probably considered it low priority, near the bottom of the list of things to fix.
Please let me know if there is a reason for "cancelled", or if anyone has ever seen a different status in this column for Synapse notebooks.
Hi @dbeavon3 , thank you for reaching out to the Microsoft Fabric Community Forum.
Ensure that your Capacity Metrics app and other Fabric components are up to date, and please consider reaching out to Microsoft Support. You can provide them with all the troubleshooting steps you've already taken, which will help them understand the issue and work toward a resolution. They might be able to identify something specific about your admin account setup or provide a solution that isn't immediately obvious.
Below is the link to create a Microsoft Support ticket:
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
If this helps, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details.
Thank you.
@v-hashadapu
Mindtree (professional support) is already engaged, but nobody at the Microsoft PG seems to be interacting on the topic yet. I think there is an ICM open between Mindtree and Microsoft, but there are no updates; they are continuing to "check with the back-end team".
I don't expect a bug-fix right away, but I would like to see this added to the "known issues" list at the very least.
I will probably close the case by the end of the week, and possibly move it over to a "unified" contract. Microsoft seems willing to fix bugs reported through their "unified" support organization, but ones reported through "professional" support can easily become starved for attention. (This is typically not the fault of the Mindtree engineers.)
Hi @dbeavon3 , thank you for reaching out to the Microsoft Fabric Community Forum.
We understand your concern about the uncertainty surrounding the General Availability (GA) timeline. At this time, we are still unsure when this feature will reach GA status. However, your feedback and participation in our preview features are incredibly valuable. By using these features and reporting any issues or bugs, you actively contribute to improving the product for everyone.
We sincerely appreciate your patience and support throughout this process. Your input is essential in ensuring that when the feature does reach GA, it meets the highest standards of quality and reliability. Thank you for being part of this journey with us!
A failure comes with an additional price: an automatic retry, which will more than likely fail again, piling up the CUs like there's no tomorrow.
Hi @lbendlin
I wanted to share a tip that you may already know.
Just because a pool is defined to be massive doesn't mean a notebook needs to use ALL of it.
Last night I helped a user reduce the CUs consumed by their scheduled notebooks in my capacity. The trick is to conservatively edit the so-called "Environment" definition used by the notebook.
See below that the (yellow) pool is a "Medium", which is FAR bigger than any single notebook needs. The users were accidentally consuming the whole pool from their notebooks, and that was soaking up almost all of our CUs.
... but by changing the "Environment" to decrease the driver and executor sizes (down to 28 GB) and reducing the max executors to 2, they saved on CUs.
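If you can't edit a shared Environment (or want a per-notebook override), Fabric notebooks, like Synapse, also accept a session-level `%%configure` magic cell. The key names below are the common Livy-style ones, and the specific values are just an illustration of shrinking the session; verify the supported keys against your runtime before relying on them:

```
%%configure -f
{
    "driverMemory": "28g",
    "driverCores": 4,
    "executorMemory": "28g",
    "executorCores": 4,
    "numExecutors": 2
}
```

This cell must run first in the notebook, since it (re)starts the Spark session with the requested sizes.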
As of today, my CUs for notebooks are half what they were: 200K per day instead of 400K. Yet the notebooks run for exactly the same amount of time. The extra resources were costing money while remaining idle; 56 GB of RAM was way over-provisioned for the work being done. I'd guess that over 90% of the Python users in Fabric are using ONLY the driver node, ONLY one core at a time, and ONLY about 10 GB of RAM or less. However, it is EASY for these users to accidentally consume more resources than they actually need.
After my change, the notebooks are still the biggest consumer of CUs in my capacity, but at least it is 50% less than it was yesterday.
In three years or so, Microsoft will probably have a CU "optimizer" tool that tells users how to give their notebooks a smaller footprint on their Spark resources, but for now I doubt they are in a big hurry to add that feature. At the very least they could have made the smallest option the default in the "Environment", and presented a warning when increasing the core count to say that eight cores will cost twice the CUs of four. These are simple things that would save customers a lot of money.
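The cost intuition above can be sketched with back-of-the-envelope arithmetic. This assumes CU consumption scales linearly with allocated Spark VCores over runtime; the 2-VCores-per-CU ratio and the node counts below are illustrative assumptions, not official billing figures:

```python
# Rough CU estimate: cores held by the session, multiplied by runtime.
# The vcores_per_cu ratio and pool shapes are assumptions for illustration.

def estimated_cu_seconds(total_cores: int, runtime_seconds: float,
                         vcores_per_cu: int = 2) -> float:
    """CU-seconds for a session holding total_cores for runtime_seconds."""
    return total_cores * runtime_seconds / vcores_per_cu

# Hypothetical full "Medium" pool: 8-core driver + 9 executors x 8 cores, one hour.
before = estimated_cu_seconds(8 + 9 * 8, 3600)   # 144000.0

# Trimmed environment: 4-core driver + 2 executors x 4 cores, same runtime.
after = estimated_cu_seconds(4 + 2 * 4, 3600)    # 21600.0

print(f"before={before:.0f} CU-s, after={after:.0f} CU-s, saved {1 - after / before:.0%}")
```

The point is that runtime is unchanged; only the idle, over-provisioned cores stop accruing cost.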