I can loop through the output using:

files = mssparkutils.fs.ls('Files/orders/')
for file in files:
    print(file.name, file.isDir, file.isFile, file.path, file.size)
But how do I send the output to a dataframe instead?
Solved! Go to Solution.
Hi @PeteSpillane ,
You can do this with the following code:

from notebookutils import mssparkutils

# Initialise variables
data = []
columns = ["File Name", "Is Dir", "Is File", "File Path", "File Size"]

files = mssparkutils.fs.ls('Files/orders/')

# Add a row to the list for each file entry
for file in files:
    data.append([file.name, file.isDir, file.isFile, file.path, file.size])

# Create a dataframe from the collected rows
dataframe = spark.createDataFrame(data, columns)

# Show the dataframe
dataframe.show()
Tested on my side in a Fabric notebook and everything worked okay.
Hope it helps,
Kris
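For readers without a Spark session handy, the row-building step above can be sketched in plain Python. The `FileInfo` namedtuple and the sample entries below are hypothetical stand-ins for the objects `mssparkutils.fs.ls` returns; only the loop logic mirrors the answer.

```python
from collections import namedtuple

# Hypothetical stand-in for the entries mssparkutils.fs.ls returns
FileInfo = namedtuple("FileInfo", ["name", "isDir", "isFile", "path", "size"])

# Example listing (made-up file names and sizes)
files = [
    FileInfo("orders_2024.csv", False, True, "Files/orders/orders_2024.csv", 1024),
    FileInfo("archive", True, False, "Files/orders/archive", 0),
]

# Build one row per file entry, matching the column order in the answer
data = []
for file in files:
    data.append([file.name, file.isDir, file.isFile, file.path, file.size])

# data is now a list of rows ready to pass to spark.createDataFrame(data, columns)
print(data)
```

Passing a list of lists plus a column-name list, as in the accepted answer, lets Spark infer the column types from the values.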
Works perfectly. Thanks Kris!