I am working with a database that doesn't save historical data, i.e. rows get overwritten every once in a while and the old record no longer exists.
Is there a way to have Query Editor import the data and save it to, say, another table every time, rather than refreshing that data again? Upon refresh, I'd lose all the rows that were overwritten between two refreshes. The process would be something like this: import new data -> append it to the data from the last import -> remove duplicates.
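That append-and-dedupe step could look something like this in Power Query M. This is a minimal sketch, assuming an Excel host (where the previously loaded table persists between refreshes and can be read back), a server/database/table named "MyServer"/"MyDb"/"Orders", a loaded table named "History", and a key column "OrderID" — all hypothetical names:

```
// Hypothetical sketch: "History" is the table the previous refresh was
// loaded to; "OrderID" is an assumed key column.
let
    // live data from the database (connection details assumed)
    Current  = Sql.Database("MyServer", "MyDb"){[Schema="dbo", Item="Orders"]}[Data],
    // the data as it stood after the last refresh, read back from the workbook
    Previous = Excel.CurrentWorkbook(){[Name="History"]}[Content],
    // new snapshot first: Table.Distinct keeps the first occurrence,
    // so the newest version of each row wins
    Combined = Table.Combine({Current, Previous}),
    Deduped  = Table.Distinct(Combined, {"OrderID"})
in
    Deduped
```

Note that this self-referencing pattern relies on the loaded table persisting between refreshes, which works in Excel; in Power BI Desktop a query cannot read its own previous output back.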
So far the process has been to do a CSV export out of the database on a regular basis, save it on the local drive with a timestamp, and then let Query Editor import all the CSV files and kill the duplicates. But I want to avoid the CSV file step and do it directly in Query Editor.
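For reference, the combine-the-CSV-folder part of that workaround is straightforward in M. A sketch, assuming a hypothetical snapshot folder "C:\Snapshots" and key column "OrderID":

```
let
    // all timestamped snapshot files in the folder (path assumed)
    Source   = Folder.Files("C:\Snapshots"),
    CsvFiles = Table.SelectRows(Source, each [Extension] = ".csv"),
    // parse each file and promote its header row
    Parsed   = Table.AddColumn(CsvFiles, "Data",
                   each Table.PromoteHeaders(Csv.Document([Content]))),
    Combined = Table.Combine(Parsed[Data]),
    Deduped  = Table.Distinct(Combined, {"OrderID"})
in
    Deduped
```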
Another way is to have my DBA create a custom table like this, but I want to see if I can do this on my own before rattling that cage.
Check out if this works for you:
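One way to sketch the R-export idea: add an R script step that writes the current query result to a timestamped CSV on every refresh, so the snapshots accumulate without a manual export. All names here (server, database, table, folder path) are assumptions, and a local R installation is required:

```
let
    // assumed source query
    Source = Sql.Database("MyServer", "MyDb"){[Schema="dbo", Item="Orders"]}[Data],
    // the R script writes a timestamped snapshot and passes the data
    // straight through; returning "output" below ensures the export step
    // is actually evaluated (M is lazy and would otherwise skip it)
    RunR = R.Execute(
        "write.csv(dataset, paste0(""C:/Snapshots/snapshot_"", " &
        "format(Sys.time(), ""%Y%m%d_%H%M%S""), "".csv""), row.names = FALSE)" &
        "#(lf)output <- dataset",
        [dataset = Source]
    ),
    Output = RunR{[Name="output"]}[Value]
in
    Output
```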
Unfortunately, the R export doesn't run in the Power BI service, so you cannot schedule the refresh automatically.
Imke Feldmann (The BIccountant)
If you liked my solution, please give it a thumbs up. And if I did answer your question, please mark this post as a solution. Thanks!
How to integrate M-code into your solution -- How to get your questions answered quickly -- How to provide sample data -- Check out more PBI learning resources here -- Performance tips for M queries