I am new to Power BI, so this post may seem naive.
All of our reporting uses the replicated data server for our accounting system. We do not have a data warehouse yet; we are working on one, but it will be several months before we have even a rudimentary one set up. So our fact tables are extensive and include the typical accounting types of data and entries. We currently use Cognos for some of our reporting, and we have created some Excel reports that use ODBC data connections to retrieve data. The SQL code used by Cognos and Excel has been tested and verified, so we know that code is accurate.
Now that we are starting up with Power BI, I have built several test models by referring directly to the tables in our accounting system. I have also used the Advanced options when getting data to run SQL queries that return a specific set of columns.
Which method would be faster: using the SQL queries we already have, or using the native Power BI capability to retrieve and manipulate the tables and columns?
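To make the comparison concrete, this is roughly what the second approach looks like when I paste one of our tested SQL statements into the Advanced options box (the server, database, table, and column names here are made up for illustration):

// Reusing an existing, tested SQL statement via the Advanced options box.
// "ReplicaServer", "AccountingDb", and the column names are placeholders.
let
    Source = Sql.Database(
        "ReplicaServer",
        "AccountingDb",
        [Query = "SELECT OrderID, OrderDate, CustomerID, DocType FROM dbo.Orders"]
    )
in
    Source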
For example, our accounting system has many lookup or dimension tables that serve several purposes. Our order table contains both quotes and orders. Our SQL queries join to this table along with filters to report on quotes only or on orders only. If I build a model that pulls in the orders table, I will have to add a filter on the type column to eliminate the quotes, and vice versa when I want to report on quotes.
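With the native approach, as I understand it, that same filter becomes a Power Query step instead of a WHERE clause. A rough sketch, again with made-up table and column names:

// Navigating to the table and filtering in Power Query instead of in SQL.
// Schema, table, and column names are placeholders for our real schema.
let
    Source     = Sql.Database("ReplicaServer", "AccountingDb"),
    Orders     = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    OrdersOnly = Table.SelectRows(Orders, each [DocType] = "Order")  // use "Quote" instead for the quotes model
in
    OrdersOnly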
The biggest challenge when we wrote the queries was that some columns need to process multiple records in other tables to determine the most recent record to use. As an example, our orders have a status. As an order is changed, a new status record might be written to a table, so any order might have multiple status records, of which only the most recent is current. How would this be handled in Power BI without writing SQL code?
Any thoughts on this would be greatly appreciated.
It depends. It depends on the size of your Power BI capacity, it depends on the performance of your SQL database, it depends on the connection type and it depends on your UX expectations.
The general guidance is that you want to push transforms as far upstream as possible, but not farther.
The DAX part of Power BI is managed by a duo: the storage engine (SE) and the formula engine (FE). The storage engine is dumb but fast (very fast, as everything is in memory); the formula engine is smart but slow.
For each of your queries you can see which part is executed by the SE and which by the FE. Ideally you want to see more SE load than FE load, but then there's the question of parallelism, etc.
Long story short - try it out both ways and then decide which approach is more appropriate in your scenario.
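For instance, the "most recent status record" question above can usually be handled upstream in Power Query rather than in hand-written SQL or in DAX. A rough sketch, assuming a status table keyed by OrderID with a StatusDate column (all names below are made up):

// Keep only the latest status row per order by grouping and taking the row with the max date.
// "OrderStatus", "OrderID", "StatusDate", and "StatusCode" are assumed names.
let
    Source     = Sql.Database("ReplicaServer", "AccountingDb"),
    StatusRows = Source{[Schema = "dbo", Item = "OrderStatus"]}[Data],
    Latest     = Table.Group(
        StatusRows,
        {"OrderID"},
        {{"LatestStatus", each Table.Max(_, "StatusDate"), type record}}
    ),
    Expanded   = Table.ExpandRecordColumn(Latest, "LatestStatus", {"StatusCode", "StatusDate"})
in
    Expanded

Whether steps like this fold back to the source as a single SQL statement depends on the connector and the transforms used, which is one more reason to test both approaches and compare.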