I have a Dataflow (Gen1) that contains a large volume of data, both in columns and in rows. I have already profiled the data and deleted redundant and unnecessary data.
Now I would like to further optimize the refresh.
The dataflow contains only the sales of the last 3 years, and it is updated weekly with the sales of the previous week.
What concepts / approaches are there to further optimize the refresh?
Hi joshua1990,
One thing to consider: have you already followed good data processing practices? What is your data source? You might benefit a lot from query folding. I don't know your data source or whether query folding is available there, but placing transformations as early as possible in the query lets them run in the source system, which can speed up the refresh considerably.
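To illustrate the point about query folding, here is a minimal Power Query M sketch. The server, database, table, and column names (`myserver`, `SalesDb`, `dbo.Sales`, `SaleDate`) are placeholders, assuming a SQL source; the idea is that a row filter placed directly after the source step can fold back as a `WHERE` clause:

```powerquery
let
    // Connect to the source (names are hypothetical)
    Source = Sql.Database("myserver", "SalesDb"),
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    // Filter BEFORE any non-folding steps so the predicate
    // is pushed down to the source as a WHERE clause
    LastThreeYears = Table.SelectRows(
        Sales,
        each [SaleDate] >= Date.AddYears(Date.From(DateTime.LocalNow()), -3)
    )
in
    LastThreeYears
```

In Power Query you can right-click a step and check whether a native query is shown to verify that the step still folds; steps that break folding should come as late as possible.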
Regards,
Oktay
Did I answer your question? Then please mark my post as the solution.
If I helped you, click on the Thumbs Up to give Kudos.
Hi @joshua1990 !
Please try setting up incremental refresh, so that only new or changed data is loaded instead of a full load:
https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-overview
https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-configure
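As a rough sketch of what the query side of incremental refresh looks like: Power BI supplies two reserved `datetime` parameters, `RangeStart` and `RangeEnd`, and substitutes the partition boundaries into them at refresh time. The source and column names below are hypothetical, assuming a SQL source with a `SaleDate` datetime column:

```powerquery
let
    Source = Sql.Database("myserver", "SalesDb"),
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    // Filter on the datetime column using the reserved parameters;
    // use >= on one boundary and < on the other so rows are not
    // loaded into two partitions
    Filtered = Table.SelectRows(
        Sales,
        each [SaleDate] >= RangeStart and [SaleDate] < RangeEnd
    )
in
    Filtered
```

For this to pay off, the filter should fold to the source; the configuration details (archive window, refresh window, licensing requirements) are covered in the docs linked above.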
Regards,
Hasham