I can loop through the output using
files = mssparkutils.fs.ls('Files/orders/')
for file in files:
    print(file.name, file.isDir, file.isFile, file.path, file.size)
But how do I send the output to a dataframe instead?
Hi @PeteSpillane ,
You can do this with the following code:

from notebookutils import mssparkutils

# Initialise variables
data = []
columns = ["File Name", "Is Dir", "Is File", "File Path", "File Size"]
files = mssparkutils.fs.ls('Files/orders/')

# Add a row per file
for file in files:
    data.append([file.name, file.isDir, file.isFile, file.path, file.size])

# Create a DataFrame
dataframe = spark.createDataFrame(data, columns)

# Show the DataFrame
dataframe.show()
Tested on my side in a Fabric notebook and it all worked okay.
Hope it helps,
Kris
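If you prefer something more compact, the row-building loop collapses into a single list comprehension. A minimal sketch of that transformation, using a stand-in FileInfo namedtuple in place of the real listing (assumption: the objects returned by mssparkutils.fs.ls expose the same name/isDir/isFile/path/size attributes; the file names and sizes below are made up for illustration):

```python
from collections import namedtuple

# Stand-in for the objects returned by mssparkutils.fs.ls (assumed attributes)
FileInfo = namedtuple("FileInfo", ["name", "isDir", "isFile", "path", "size"])

# Hypothetical listing, in place of mssparkutils.fs.ls('Files/orders/')
files = [
    FileInfo("2019.csv", False, True, "Files/orders/2019.csv", 1024),
    FileInfo("archive", True, False, "Files/orders/archive", 0),
]

# Same rows as the append loop, built in one expression
data = [[f.name, f.isDir, f.isFile, f.path, f.size] for f in files]
print(data)
```

The resulting `data` list is identical to what the loop produces, so it can be passed straight to spark.createDataFrame(data, columns) as in the answer above.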
Works perfectly. Thanks Kris!