How to Efficiently Process and Load Multiple .xer Files from a Lakehouse into a Tabular Format for Power BI in Microsoft Fabric?
Hello Fabric Community,
I am working on a project where I need to process multiple .xer files stored in a Microsoft Fabric Lakehouse. The goal is to append the data from these files into a single tabular format and subsequently integrate the processed data with Power BI for analysis. Here's a detailed breakdown of my workflow and the challenges I'm facing:
- File Location:
- Required Data Transformation:
- Integration with Power BI:
Hi @adarshthouti141 , thank you for reaching out to the Microsoft Fabric Community Forum.
Please consider the steps below. Note that Spark has no built-in "xer" format; .xer files are tab-delimited text, so they can be read with the CSV reader using a tab separator:

from pyspark.sql import SparkSession
from pyspark.sql.functions import input_file_name

spark = SparkSession.builder.appName("ProcessXerFiles").getOrCreate()

# Read all .xer files in the folder as tab-delimited text
df = (spark.read
      .option("sep", "\t")
      .option("header", "true")
      .csv("path/to/lakehouse/folder"))

# Record which file each row came from
df = df.withColumn("source_file", input_file_name())

# Example transformation: filter specific rows
df = df.filter("your_condition_here")

# Write the combined result as a Delta table for use in Power BI
df.write.format("delta").mode("overwrite").save("path/to/delta/table")
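Because Spark cannot interpret the .xer structure itself, you may first want to split each file into its constituent tables. This is a minimal plain-Python sketch, assuming the standard Primavera P6 layout where %T marks a table name, %F its column names, and %R each data row (all tab-separated); the sample data is hypothetical:

```python
from collections import defaultdict

def parse_xer(text):
    """Split XER content into {table_name: list of row dicts}.

    Assumes the Primavera P6 convention: %T starts a table,
    %F lists its fields, %R carries one tab-separated data row.
    """
    tables = defaultdict(list)
    current_table, fields = None, []
    for line in text.splitlines():
        parts = line.split("\t")
        tag = parts[0]
        if tag == "%T":
            current_table = parts[1]
        elif tag == "%F":
            fields = parts[1:]
        elif tag == "%R" and current_table is not None:
            tables[current_table].append(dict(zip(fields, parts[1:])))
    return dict(tables)

# Hypothetical one-table sample
sample = "%T\tPROJECT\n%F\tproj_id\tproj_name\n%R\t1\tDemo\n"
print(parse_xer(sample))
```

Each resulting list of row dicts can then be turned into a Spark DataFrame (e.g. `spark.createDataFrame(rows)`) and appended to the matching Delta table.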
If this helps, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details.
Thank you.
Hi @adarshthouti141 , Hope your issue is solved. If it is, please consider marking the answer 'Accept as solution', so others with similar issues may find it easily. If it isn't, please share the details.
Thank you.