I'm using a Fabric notebook to read data from a Power BI semantic model using Spark. The table only has around 180k rows, but it took 45 minutes to retrieve the data.
In the past this didn't take long - typically just a few minutes. I'm trying to write this data to a Lakehouse destination, and the job never finishes. This table is one of the smallest, and I have other, bigger tables to work on. I'm on Fabric capacity. Did Microsoft change something last week, or is there anything I can do?
Solved! Go to Solution.
Hello @mybarbie9917_LI
Thank you @nilendraFabric for your response!
Thank you for reaching out to the Microsoft Fabric Community. We understand you are experiencing a significant performance drop when querying a Power BI Semantic Model via Spark in Fabric.
This issue could be due to Fabric updates, Spark settings, or inefficient query execution. Since it worked in the past but has slowed down recently, it might be caused by Fabric capacity overload: if many users are running workloads at the same time, the capacity can be exhausted and everything slows down. Could you please check your capacity utilization (for example, in the Fabric Capacity Metrics app) and review your Spark settings, then retry the query?
If the issue persists, it may be helpful to open a support ticket with Microsoft Fabric for further investigation.
How to create a Fabric and Power BI Support ticket - Power BI | Microsoft Learn
If my response has resolved your query, please mark it as the Accepted Solution to assist others. Additionally, a 'Kudos' would be appreciated if you found my response helpful.
Thank you!
Which Fabric SKU are you using?
Hello @mybarbie9917_LI
I hope this information is helpful. Please let me know if you have any further questions or if you'd like to discuss this further. If this answers your question, please Accept it as a solution and give it a 'Kudos' so others can find it easily.
Thank you.
Hello @mybarbie9917_LI
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.
@mybarbie9917_LI Have you tried running a DAX query against the semantic model?
Try the code below. If you have issues with the column names, tweak them in Data Wrangler.
from pyspark.sql import SparkSession
import sempy.fabric as fabric
import re

# Step 1: Initialize the Spark session (required for PySpark)
spark = SparkSession.builder.appName("FabricDAXQuery").getOrCreate()

# Ensure the target database exists; create it if it doesn't
spark.sql("CREATE DATABASE IF NOT EXISTS --- ANYTHING YOU WANT ---")

# Step 2: List available workspaces
df_workspaces = fabric.list_workspaces()

# Convert to a PySpark DataFrame and display
df_spark_workspaces = spark.createDataFrame(df_workspaces)
df_spark_workspaces.show(truncate=False)

# Step 3: List datasets in the specific workspace
workspace_name = "----- WORKSPACE NAME ----"  # Change this to your workspace name
df_datasets = fabric.list_datasets(workspace=workspace_name)

# Step 4: Define the dataset and DAX query
dataset_name = "---- SEMANTIC MODEL NAME -----"  # Ensure this matches exactly
dax_string = """
INSERT HERE YOUR DAX QUERY
"""

# Step 5: Run the DAX query (note the argument name is dax_string, not dax_query)
df_dax = fabric.evaluate_dax(
    dataset=dataset_name,
    dax_string=dax_string,
    workspace=workspace_name  # Specify the correct workspace
)

# Convert the result to a PySpark DataFrame
df_spark_dax = spark.createDataFrame(df_dax)

# Clean up column names such as 'Table[Column]' -> 'Column'
for col_name in df_spark_dax.columns:
    match = re.search(r'\[(.*?)\]', col_name)
    if match:
        # Extract the text within brackets and replace spaces with underscores
        new_col_name = match.group(1).replace(" ", "_")
        df_spark_dax = df_spark_dax.withColumnRenamed(col_name, new_col_name)

# Step 6: Save the DataFrame as a Lakehouse table, e.g. a table named 'DAX' under the 'LakeHouse_Sales_Report' database
df_spark_dax.write.format("delta") \
    .mode("overwrite") \
    .saveAsTable("----DATABASE NAME ----.----TABLE NAME ----")
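For illustration only, here is how the placeholders above might be filled in, reusing the 'LakeHouse_Sales_Report' / 'DAX' names from the final save comment and the d_person table mentioned later in this thread (hypothetical values, adjust to your own names):

# Hypothetical example values for the placeholders above (adjust to your environment)
workspace_name = "Workspace"        # workspace hosting the semantic model
dataset_name = "Semantic Model"     # semantic model name
dax_string = """
EVALUATE 'd_person'    -- returns the whole table; add TOPN/filters to limit rows
"""
# ...and the final save, for example:
# df_spark_dax.write.format("delta").mode("overwrite").saveAsTable("LakeHouse_Sales_Report.DAX")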
Hello @mybarbie9917_LI
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the answer faster.
Thank you.
Hello @mybarbie9917_LI
Not sure if something changed, but it's worth trying the query using sempy:
from sempy.fabric import FabricDataFrame
d_person = FabricDataFrame.read_table("Semantic Model", "d_person")
d_person = d_person[(d_person.on_leave == 'No') &
                    (d_person.fte_cw.isin(['FTE', 'CW', 'Other']))]
`FabricDataFrame` propagates Power BI metadata (relationships, hierarchies) for optimized execution.
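If the read works in your environment, landing the filtered result in a Lakehouse should be a small extra step. Here is a minimal sketch, assuming the filtered frame fits in driver memory, a Lakehouse is attached to the notebook, and the target table name is illustrative:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# FabricDataFrame is pandas-compatible, so it can be converted to a Spark DataFrame
# and written to the attached Lakehouse as a Delta table (table name is illustrative)
df_spark = spark.createDataFrame(d_person)
df_spark.write.format("delta").mode("overwrite").saveAsTable("d_person_filtered")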
Hi @nilendraFabric! Thanks for your recommendation. I tried your code, but the FabricDataFrame class in Fabric doesn't have a read_table method.
I also tried the regular fabric.read_table to read the whole table, but it kept running for 15 minutes without returning a result.
import sempy.fabric as fabric

d_person = fabric.read_table(
    dataset="Semantic Model",
    table="d_person",
    workspace="Workspace",
    num_rows=100,
    verbose=1)
Hello @mybarbie9917_LI
Try with different modes:
import sempy.fabric as fabric

df_onelake = fabric.read_table(
    dataset="YourDatasetName",
    table="YourTableName",
    mode='onelake',
    onelake_import_method='spark'
)
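As I understand it, mode='onelake' reads the table from the semantic model's OneLake copy instead of going through the XMLA query path, so it can behave very differently performance-wise; it also requires OneLake integration to be enabled on the model. If that parameter is available in your sempy version, a quick comparison of the two paths could look like this (a sketch using the dataset, table, and workspace names from this thread):

import time
import sempy.fabric as fabric

# Hypothetical comparison of the two read paths (names from this thread);
# 'onelake' mode assumes OneLake integration is enabled on the semantic model
for mode in ["xmla", "onelake"]:
    start = time.time()
    df = fabric.read_table(
        dataset="Semantic Model",
        table="d_person",
        workspace="Workspace",
        mode=mode,
    )
    print(f"{mode}: {time.time() - start:.1f}s, {len(df)} rows")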