Hi Community,
I have an issue with Power BI (Power Query).
I have several SAP tables which I can import with "Denodo" (a kind of middleware) via dataflows. One table, for example, is "EKPO"; in our configuration it has around 300,000 rows and around 500 columns.
My task is to create a data quality report, so I have to build several checks. Each check requires certain steps (filters, joins with other tables, group-bys, and so on); one such check is sketched below.
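For illustration, a single check might look roughly like this in Power Query (M); the query name EKPO_Main and the rule itself (net value without quantity) are placeholders, not my actual logic:

```
let
    // Start from the prepared main EKPO query (placeholder name)
    Source = EKPO_Main,
    // Flag order items that have a net value but no quantity
    Suspect = Table.SelectRows(Source, each [NETWR] <> 0 and [MENGE] = 0),
    // Count failing items per purchasing document
    Grouped = Table.Group(
        Suspect,
        {"EBELN"},
        {{"FailedItems", each Table.RowCount(_), Int64.Type}}
    )
in
    Grouped
```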
I created a main EKPO table, prepared it properly, and tried the reference approach.
But now refreshing the data is practically impossible and Power BI crashes. After some research, I've learned that each reference causes the root table to be re-evaluated, so the reference approach is effectively ruled out.
DAX and/or calculated columns might be another approach, but I fear Power BI might collapse (calculated tables duplicate the data, and the PBIX might become too big).
Modifying the source is not an option, as there is no capacity available for that.
Now my question: what would you suggest to handle this issue? What is the best approach?
Thank you in advance for any hints!
Hi @DaGuggi
Here are a few suggestions to help you manage your data quality report more efficiently:
Consider using DirectQuery mode instead of importing the data. DirectQuery lets you work with large datasets without loading all the data into Power BI, which can significantly reduce memory pressure and refresh time. Queries are sent directly to the data source, and only the results are returned to Power BI.
DirectQuery for SAP HANA in Power BI - Power BI | Microsoft Learn
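If you stay on import mode, a related idea is to push each check down to Denodo as a native query, so only the small result set reaches Power BI. A rough sketch, assuming Denodo is reachable through an ODBC DSN called "Denodo" and exposes EKPO as a view (both names are assumptions):

```
let
    // Run the entire check inside Denodo; only the aggregated
    // result rows are transferred to Power BI
    Result = Odbc.Query(
        "dsn=Denodo",
        "SELECT EBELN, COUNT(*) AS FailedItems
         FROM EKPO
         WHERE NETWR <> 0 AND MENGE = 0
         GROUP BY EBELN"
    )
in
    Result
```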
Implementing incremental refresh can help manage large datasets by refreshing only the data that has changed. This reduces the load on Power BI and can prevent crashes. You can set up incremental refresh policies in Power BI to handle large tables more efficiently.
Configure incremental refresh and real-time data for Power BI semantic models - Power BI | Microsoft...
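Incremental refresh requires the query to be filtered on the reserved datetime parameters RangeStart and RangeEnd, which Power BI uses to build partitions. A minimal sketch, assuming EKPO's record-creation date AEDAT is a usable watermark column (EKPO_Raw and the column choice are assumptions):

```
let
    // EKPO_Raw stands for the existing Denodo/dataflow source query
    Source = EKPO_Raw,
    // Filter on the reserved parameters so Power BI can partition
    // the table and refresh only recent partitions
    Filtered = Table.SelectRows(
        Source,
        each [AEDAT] >= Date.From(RangeStart) and [AEDAT] < Date.From(RangeEnd)
    )
in
    Filtered
```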
Best Regards,
Jayleny
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hello @Anonymous, you will find these useful:
Your scenario seems to be suitable for scheduled refresh from SAP.
Microsoft's recommended architecture for Power BI / Fabric integration with SAP (see the cached method):
Questions? Please reach out.
Anupam