priyanksingh
New Member

Can only see first 1000 records while running query in fabric notebook

Hi All,

 

While running any query on a table, or running PySpark code, I can only see the first 1000 records, and I am also not able to export more records to CSV.

 

Is there any way to export more than 1000 records to CSV from a Fabric notebook?

5 REPLIES
v-cboorla-msft
Microsoft Employee

Hi @priyanksingh 


We haven't heard back from you on the last response and wanted to check whether you have found a resolution yet.
If you have, please do share it with the community, as it can be helpful to others.

Thanks 

HimanshuS-msft
Microsoft Employee

Hello @priyanksingh
I am not sure whether that setting is in place in Databricks; as per this thread, I think they are still working on it. Apologies if it is already available and they simply did not update the thread.
I do not think we can change that setting in Fabric at this time, but you can use .head(), and that should do the trick.

df.head(10)  # returns the first 10 rows as a list of Row objects


Thanks
Himanshu 

 

v-cboorla-msft
Microsoft Employee

Hi @priyanksingh 

 

Welcome to Microsoft Fabric Community and thanks for posting your question here.

 

As I understand it, you want to export more than 1000 records to CSV from a Fabric notebook.

 

When you run a PySpark query in a Fabric notebook, the displayed results are limited to the first 1000 records by default. However, you can still export more than 1000 records in CSV format. Please follow the workaround below to achieve this.

 

I have published 5000 records into my Lakehouse. Run the code below to save the file into your Lakehouse.

 

# Read the full source table from the Lakehouse (not limited to 1000 rows)
df = spark.sql("SELECT * FROM ShekarLH.5000_Records")
# Write every row of the DataFrame to CSV in the OneLake Files area
df.write.csv("abfss://42965892-5819-4760-ad68-8983467a9df3@msit-onelake.dfs.fabric.microsoft.com/5023fdc9-fd65-4ef6-bb9f-bfdb82bd646f/Files/Destination/SH5000")

 


 

After running the above code, the CSV file is created in the specified path.


 

To access this file on your local desktop, please download OneLake Explorer using this link: Download Onelake. Make sure to sync it with your respective workspace.


 

Now you will be able to download the CSV file, which contains all 5000 records.


 

Hope this helps. Let us know in case of any further queries.

Hi @priyanksingh 

 

Following up to see if the above suggestion was helpful. If you have any further queries, do let us know.

Hi,

Thank you for the suggestion. Yes, I had tried this method, but I was looking for a setting to change the default value. This setting is present in Databricks.
