I'm using a large table as a dataset in Power BI. I applied an incremental refresh to this dataset, but after publishing with the initial scope defined by the RangeStart and RangeEnd, it requires a one-time back fill of archived data. I am running into connection and timeout issues with this back fill. Is there a way to run this back fill in smaller chunks?
Use CSV or Parquet files as fake partitions and then append them in Power Query. Yes, you will reload all the data each time, but both CSV and Parquet ingest very quickly.
Use the XMLA endpoint with tools like SSMS and then refresh individual partitions selectively.
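For the second option, a refresh of a single partition can be scripted as a TMSL command and run from an XMLA window in SSMS. This is a minimal sketch: the database, table, and partition names below are placeholders, and the partition names for an incremental refresh policy are auto-generated, so check the actual names under the table's Partitions dialog in SSMS first.

```json
{
  "refresh": {
    "type": "dataOnly",
    "objects": [
      {
        "database": "SalesDataset",
        "table": "FactSales",
        "partition": "2022Q1"
      }
    ]
  }
}
```

Running one such command per historical partition lets you back fill the archive in chunks instead of one long refresh.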
This is a Premium Per User workspace, but we do not have Premium capacity. From what I've read, the XMLA endpoint requires Premium capacity. Is this true? If so, is there a way to partition the data load outside of Premium capacity?
Sorry if this is a novice question, but how would I create separate CSV files from a SQL table/view? This needs to be an automated, scheduled refresh.
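One hedged way to do this is a small Python script that slices the table by date range and writes one CSV per slice, scheduled with whatever you already have (SQL Server Agent, Windows Task Scheduler, cron). The table and column names below are placeholders; the demo uses an in-memory SQLite table standing in for the real source, so for SQL Server you would swap in a pyodbc connection and your own query.

```python
import sqlite3
import pandas as pd

def export_yearly_csvs(conn, table, date_col, years, out_dir="."):
    """Write one CSV 'partition' per year from a SQL table/view.

    The table/column names passed in are assumed to be safe identifiers;
    for a real source, parameterize or validate them.
    """
    paths = []
    for year in years:
        query = (
            f"SELECT * FROM {table} "
            f"WHERE {date_col} >= '{year}-01-01' AND {date_col} < '{year + 1}-01-01'"
        )
        df = pd.read_sql(query, conn)       # works with any DBAPI connection for SQLite;
        path = f"{out_dir}/{table}_{year}.csv"  # use SQLAlchemy/pyodbc for SQL Server
        df.to_csv(path, index=False)
        paths.append(path)
    return paths

# Demo: an in-memory SQLite table standing in for the real SQL source.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE FactSales (OrderDate TEXT, Amount REAL);
    INSERT INTO FactSales VALUES
        ('2022-03-15', 100.0), ('2022-11-02', 250.0), ('2023-06-20', 75.0);
    """
)
files = export_yearly_csvs(conn, "FactSales", "OrderDate", [2022, 2023])
```

The resulting files can then be combined in Power Query with a folder source or Table.Combine, as suggested above.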
There is another option: use bootstrapping to create empty partitions and then fill them individually via XMLA.
https://learn.microsoft.com/en-us/power-bi/connect-data/incremental-refresh-xmla
Troubleshoot incremental refresh and real-time data - Power BI | Microsoft Learn
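Related to the timeout problem itself, the refresh command also accepts batching options. This is a sketch only, assuming the enhanced-refresh options (`applyRefreshPolicy`, `commitMode`, `maxParallelism`) documented for datasets on Premium/PPU XMLA endpoints; the database name is a placeholder. With `commitMode` set to `partialBatch`, the initial full refresh is committed in batches, so a connection drop or timeout does not throw away all completed work.

```json
{
  "refresh": {
    "type": "full",
    "applyRefreshPolicy": true,
    "commitMode": "partialBatch",
    "maxParallelism": 1,
    "objects": [
      { "database": "SalesDataset" }
    ]
  }
}
```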