Hi All,
While running a query on a table or running PySpark code, I can only see the first 1000 records, and I am not able to export more records to CSV.
Is there any way to export more than 1000 records to CSV from a Fabric notebook?
We haven't heard from you since the last response and were just checking back to see if you have a resolution yet.
If you do have a resolution, please share it with the community, as it can be helpful to others.
Thanks
Hello @priyanksingh
I am not sure whether that setting is available in Databricks; as per this thread, I think they are still working on it. Apologies if it has since shipped and the thread simply was not updated.
I do not think this can be configured in Fabric at this time, but you can use df.head(n), e.g. df.head(10), and that should do the trick.
Thanks
Himanshu
Welcome to Microsoft Fabric Community and thanks for posting your question here.
As I understand it, you want to export more than 1000 records to CSV from a Fabric notebook.
When you run a PySpark query in a Fabric notebook, the displayed results are limited to the first 1000 records by default. However, you can still export more than 1000 records in CSV format. Please follow the workaround below to achieve this.
I have published 5000 records into my Lakehouse. Run the code below to save the file into your Lakehouse (backticks are needed around the table name because it starts with a digit; the header option keeps column names in the output):
df = spark.sql("SELECT * FROM ShekarLH.`5000_Records`")
df.write.option("header", True).csv("abfss://42965892-5819-4760-ad68-8983467a9df3@msit-onelake.dfs.fabric.microsoft.com/5023fdc9-fd65-4ef6-bb9f-bfdb82bd646f/Files/Destination/SH5000")
After running the code above, the CSV file is created at the given path.
To access this file on your local desktop, please download OneLake Explorer using this link: Download Onelake. Make sure it is synced with your workspace.
You will then be able to download the CSV file containing all 5000 records.
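As an alternative to the Spark write above, if the result set fits in driver memory you can convert the Spark DataFrame to pandas and write a single CSV file directly, since pandas has no 1000-row display or export cap. A minimal sketch follows; the `toPandas()` step is shown as a comment because it needs a live Fabric Spark session, and the stand-in DataFrame and file name are hypothetical:

```python
import pandas as pd

# In a Fabric notebook you would first pull the Spark result to the driver, e.g.:
#   pdf = spark.sql("SELECT * FROM ShekarLH.`5000_Records`").toPandas()
# Stand-in pandas DataFrame used here for illustration (hypothetical data):
pdf = pd.DataFrame({"id": range(5000), "value": [i * 2 for i in range(5000)]})

# pandas to_csv writes every row -- all 5000 records land in one file
pdf.to_csv("all_records.csv", index=False)
```

Note that `toPandas()` collects the entire result onto the driver, so this is only suitable for moderately sized tables; for very large data the distributed `df.write.csv(...)` approach above is safer.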
Hope this helps, let us know in case of any further queries.
Following up to see if the above suggestion was helpful. If you have any further queries, do let us know.
Hi,
Thank you for the suggestion. Yes, I had tried this method, but I was looking for a setting to change the default value. This setting is present in Databricks.