How to Efficiently Process and Load Multiple .xer Files from a Lakehouse into a Tabular Format for Power BI in Microsoft Fabric?
Hello Fabric Community,
I am working on a project where I need to process multiple .xer files stored in a Microsoft Fabric Lakehouse. The goal is to append the data from these files into a single tabular format and subsequently integrate the processed data with Power BI for analysis. Here's a detailed breakdown of my workflow and the challenges I'm facing:
File Location:
Required Data Transformation:
Integration with Power BI:
Hi @adarshthouti141 , thank you for reaching out to the Microsoft Fabric Community Forum.
Please consider the steps below:
from pyspark.sql import SparkSession
from pyspark.sql.functions import input_file_name

spark = SparkSession.builder.appName("ProcessXerFiles").getOrCreate()

# .xer files (Primavera P6 exports) are tab-delimited text; Spark has no
# built-in "xer" data source, so read the folder as tab-separated files
df = spark.read.option("sep", "\t").csv("path/to/lakehouse/folder")

# Record which file each row came from
df = df.withColumn("source_file", input_file_name())

# Example transformation: filter specific rows
df = df.filter("your_condition_here")

# Write the appended result as a Delta table that Power BI can read
df.write.format("delta").mode("overwrite").save("path/to/delta/table")
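Note that a single .xer file typically bundles several tables: a `%T` line names a table, `%F` lists its column headers, and each `%R` line is one data row. A minimal sketch of splitting that structure in plain Python before loading into Spark (the table and field names below are illustrative, not from your files):

```python
import csv
import io

def parse_xer(text):
    """Split a Primavera P6 .xer export into its component tables.

    Returns a dict mapping table name -> list of row dicts, based on the
    %T (table), %F (field names), %R (row) record markers.
    """
    tables = {}
    current_table, fields = None, []
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if not row:
            continue
        marker = row[0]
        if marker == "%T":
            current_table = row[1]
            tables[current_table] = []
        elif marker == "%F":
            fields = row[1:]
        elif marker == "%R" and current_table is not None:
            tables[current_table].append(dict(zip(fields, row[1:])))
    return tables

# Illustrative sample with one TASK table and two rows
sample = "%T\tTASK\n%F\ttask_id\ttask_name\n%R\t100\tDesign\n%R\t101\tBuild\n"
print(parse_xer(sample)["TASK"][0]["task_name"])  # prints Design
```

Each parsed table can then be turned into its own Spark DataFrame (e.g. `spark.createDataFrame(tables["TASK"])`) and appended across files before the Delta write.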
If this helps, please consider marking it 'Accept as Solution' so others with similar queries may find it more easily. If not, please share the details.
Thank you.
Hi @adarshthouti141 , Hope your issue is solved. If it is, please consider marking the answer 'Accept as solution', so others with similar issues may find it easily. If it isn't, please share the details.
Thank you.