Hello all,
Currently we have the problem that we cannot perform our full upload of Salesforce data into Power BI using a Python script.
The goal is to load the past 100 weeks of 2 backup files (opportunity.csv and account.csv), so 200 files in total, through a Python script that transforms them into a file with 4 rows per week and a number of columns.
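For context, here is a minimal sketch of the kind of script we use (the folder layout, column names, and aggregation are simplified placeholders, not our exact code):

```python
# Simplified placeholder sketch of our weekly aggregation, not the real script.
from pathlib import Path
import pandas as pd

BACKUP_DIR = Path("C:/SalesforceBackups")   # placeholder: one subfolder per week

weekly_summaries = []
for week_folder in sorted(p for p in BACKUP_DIR.iterdir() if p.is_dir()):
    opportunity = pd.read_csv(week_folder / "opportunity.csv")
    account = pd.read_csv(week_folder / "account.csv")

    # Reduce each week's backup to a few summary rows (4 per week in our real script).
    summary = opportunity.groupby("StageName", as_index=False)["Amount"].sum()
    summary["Week"] = week_folder.name
    summary["AccountCount"] = len(account)
    weekly_summaries.append(summary)

result = pd.concat(weekly_summaries, ignore_index=True)
```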
If we perform this action directly from Spyder (a Python IDE), it works out fine and takes about 15 minutes. But if we paste the same script into Power BI Desktop (where we make the visuals from our data) and do the refresh, it keeps running endlessly and does not seem to complete. This is a new situation for us and we cannot figure out where it goes wrong, because a year ago we had no trouble at all with this process.
One thing we know for sure is that our backup files grow over time. Currently Account.csv is 6,273 KB and Opportunity.csv is 10,514 KB, whereas in December 2022, for instance, Account.csv was 4,649 KB and Opportunity.csv was 7,027 KB. We suspect that the combination of the number of files and their size creates this problem.
As a temporary workaround we exclude some "big weeks" of files, but for some time now this has been distorting the outcomes in our visuals a lot.
The current situation is that we can load 14 files from the past weeks, and if we try to load more, the refresh does not finish. If, on the other hand, we include an extra 100 files from 2022 and earlier, the refresh completes without any problem.
Does anybody have an idea where this process goes wrong, or can you point us in a direction to work towards a solution?
Thanks in advance for thinking along with us,
Annette
Run your data operations outside of Power Query/Power BI, save the results as CSV or Parquet, and then ingest them in Power BI.
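For example, a minimal sketch of that approach (paths, file layout, and the Parquet engine are assumptions; any scheduler, such as Windows Task Scheduler, could run it):

```python
# Sketch: run the existing transform as a standalone Python job outside Power BI,
# save a small result file, and let Power BI ingest that file natively.
from pathlib import Path
import pandas as pd

BACKUP_DIR = Path("C:/SalesforceBackups")              # assumption: the weekly CSV backups live here
OUTPUT_FILE = Path("C:/PowerBIData/weekly_summary.parquet")

def build_weekly_table(backup_dir: Path) -> pd.DataFrame:
    # Placeholder for the existing aggregation that produces 4 rows per week.
    frames = [pd.read_csv(f) for f in sorted(backup_dir.rglob("opportunity.csv"))]
    return pd.concat(frames, ignore_index=True)

result = build_weekly_table(BACKUP_DIR)
result.to_parquet(OUTPUT_FILE, index=False)            # requires pyarrow or fastparquet
# In Power BI Desktop, connect to OUTPUT_FILE with the built-in Parquet (or CSV)
# connector instead of running the Python script during refresh.
```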
Why Python? Why not place the CSV files on a SharePoint and ingest natively into Power BI?
We decided, based on the operations we perform on the data, that this would be the best solution: building the table we want outside of Power BI, because it caused too many difficulties when we tried it in Power BI itself.
Run your data operations outside of Power Query/Power BI, save the results as CSV or Parquet, and then ingest them in Power BI.
That is definitely a good option to explore and could be a suitable solution.
Do you have any idea why the problem occurs when I run it through Power BI?
Python scripting in Power BI is limited in the variety of supported libraries and in performance, and it requires a personal gateway. Python is not designed for massive data transforms. It's just not the right tool.