I have used a Fabric notebook before with an SKU2 capacity, and it seemed fine. But now that I have purchased Fabric SKU2 with my personal account, the notebook seems very slow. For example, I tried to install the pdfplumber package into a new environment, and it took more than 10 minutes to publish the package.
I tried to optimize the Spark settings in the workspace settings, but it won't let me change the autoscale setting. I also tried to change the Spark settings in my custom environment, but that doesn't work either.
Hi @Jeanxyz ,
Thank you for reaching out to us on Microsoft Fabric Community Forum!
Even though you are using SKU2, performance can vary between organizational and personal environments due to differences in how Spark resources are provisioned. Slow package installs like pdfplumber are common if the environment is initializing or the pool is small. To improve this, consider reusing the environment after the first install and increasing the number of nodes if your SKU allows.
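If you only need the package for the current session, an inline install from the notebook itself is usually much faster than publishing it to a custom environment, because there is no environment publish step to wait for. A minimal sketch (the Lakehouse file path is just a placeholder for illustration):

```python
# Session-scoped install: applies only to the current Spark session,
# so nothing has to be published to the environment.
%pip install pdfplumber

import pdfplumber

# Quick smoke test on a sample PDF (replace with a real file in your Lakehouse).
with pdfplumber.open("/lakehouse/default/Files/sample.pdf") as pdf:
    print(pdf.pages[0].extract_text())
```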
Regarding the autoscale settings being greyed out in both the workspace and environment, this happens if the pool is already in use or was created without autoscale enabled. To resolve this, create a new Spark pool with autoscale configured during setup, then assign it to your environment.
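Once the new pool is assigned, you can sanity-check from the notebook whether the session is actually running with dynamic executor allocation by reading the session's Spark configuration. A small sketch using standard Apache Spark config keys (whether Fabric exposes every key for your pool is an assumption worth verifying):

```python
# Inspect the Spark configs that reflect the pool's scaling behaviour.
# spark is the session object that Fabric notebooks provide by default.
for key in (
    "spark.dynamicAllocation.enabled",
    "spark.dynamicAllocation.minExecutors",
    "spark.dynamicAllocation.maxExecutors",
):
    try:
        print(key, "=", spark.conf.get(key))
    except Exception:
        # spark.conf.get raises if the key is not set and no default is given.
        print(key, "is not set for this session")
```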
Refer to the links here:
Stock Keeping Unit (SKU) considerations - Microsoft Fabric | Microsoft Learn
Understand Autoscale compute for Spark detail page - Microsoft Fabric | Microsoft Learn
I hope this resolves your query. If so, give us kudos and consider accepting it as the solution.
Regards,
Pallavi.
Thanks, I have activated autoscale and the problem is solved. However, just for your information, Fabric is still very slow. I published the same Power BI report to two workspaces: a Fabric workspace with SKU2 and a Pro workspace. It takes 30 minutes to refresh the dataset in the Fabric workspace, while in the Pro workspace it takes only 12 minutes. So my conclusion is: if the semantic model is in import mode, using a data source from Fabric will slow down the import speed unless you have a large Fabric capacity.