Ayushman1997
Frequent Visitor

QuestionForGroup
Hi All.

I have a dataset of 2.9M rows which, after unpivoting and merging with one table, grows to roughly 246M rows. I then have to remove duplicates from the table (based on Col1, Col2, Col3 … Col7). After this step, when I Close & Apply, the dataset loads for 3-4 hours and then the error "buffer memory out of space" pops up.

Could anyone please help me out with this?

Possible solutions tried so far:
1. Took a dataset of 75,000 rows, applied all the steps, and checked that everything worked. After getting satisfactory results, I published the dashboard to the Power BI service, renamed the 2.9M-row file to the same name as the 75,000-row file, and refreshed it in the service. Result: this didn't work, as it does not take the 2.9M rows.

2. Tried joining the tables instead, but that doesn't work either (it's a many-to-one join, but the corresponding values are unique), so the row counts don't match.

1 REPLY
Shahid12523
Community Champion

Your dataset blows up to 246M rows, and the deduplication happens after query folding has broken, so Power Query has to buffer the whole table in memory, which is what causes the error.
Fix: push the unpivot, merge, and deduplication to the source (SQL or a Dataflow), remove unneeded columns as early as possible, and keep query folding alive. If needed, use incremental refresh or switch to DirectQuery.
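
As a minimal sketch of what pushing those steps to the source could look like, assuming a SQL Server source (dbo.WideTable, dbo.LookupTable, the Metric columns, and JoinKey are all hypothetical names, since the post only names the dedup keys Col1 … Col7):

```sql
-- Hypothetical source-side query: unpivot, merge, and dedupe in SQL
-- so Power BI only imports the final, already-reduced result.
SELECT DISTINCT                     -- the "remove duplicates" step,
       w.Col1, w.Col2, w.Col3,      -- applied to exactly the 7 key columns
       u.AttributeName AS Col4,     -- unpivoted attribute/value pair
       u.AttributeValue AS Col5,
       l.Col6, l.Col7               -- columns from the merged table
FROM dbo.WideTable AS w
CROSS APPLY (VALUES                 -- set-based unpivot of the wide columns
    ('Metric1', w.Metric1),
    ('Metric2', w.Metric2),
    ('Metric3', w.Metric3)
) AS u (AttributeName, AttributeValue)
INNER JOIN dbo.LookupTable AS l     -- the "merge with 1 table" step
    ON l.JoinKey = w.JoinKey;
```

Importing the result of a query like this (or the same logic built in a Dataflow) keeps the 246M-row intermediate table out of Power Query entirely, so nothing has to be buffered on the Desktop side.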

Shahed Shaikh
