
Most Recent
tharunkumarRTK
Super User

Power Query Editor needs no introduction for anyone who works with Excel’s Get Data experience, Power BI Desktop, Power BI and Fabric dataflows, or Power Platform dataflows. This same interface has even been introduced recently in Paginated Reports.

tharunkumarRTK_0-1766760801799.png

 

Referencing Power Query queries is one of the most commonly used techniques in the Power Query Editor. Developers use it to reuse transformation logic, create staging queries, and build layered query structures that are easier to maintain.

Many developers believe that when Query B references Query A, Power Query evaluates Query A first and then reuses its results, avoiding additional database queries.

In this blog, I will test this assumption using a few simple scenarios and show how Power Query actually evaluates queries when one query depends on another.

Scenario 1: Direct Query Referencing

I created a query named dimaccount in Power Query that reads data from a Fabric Lakehouse using the SQL Analytics endpoint. I then created another query named dimaccountReference, which references the dimaccount query.

dimaccount

 

let
    Source = Sql.Database(HostName, DatabaseName),
    dbo_dimaccount = Source{[Schema="dbo",Item="dimaccount"]}[Data]
in
    dbo_dimaccount

dimaccountReference

let
    Source = dimaccount
in
    Source

Query Dependency View

tharunkumarRTK_1-1766760871846.png

 

After loading both queries into Power BI, I used the Query activity view to observe the database queries executed during refresh.

Results

Two queries hit the database:

tharunkumarRTK_2-1766760893467.png

 

  • Several metadata and preview queries are executed first. These queries validate object existence, schema access, and permissions.
  • The dimaccount table is then queried twice, once for dimaccount and once for dimaccountReference.

Even though dimaccountReference references dimaccount, both queries independently execute against the data source.

There is another common assumption that using the Table.Buffer function on the referenced query (dimaccount) will prevent the referencing query (dimaccountReference) from hitting the database again. This assumption is also incorrect. Buffered results are only reused within the scope of a single query evaluation and are not shared across different queries.

In the scenario described above, buffering dimaccount can actually degrade performance, because the results would be loaded into memory twice, once for each query evaluation. More details on this behavior can be found here and here.
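To make the scoping concrete, here is a minimal sketch (the query shape is mine, not from the post): buffering pays off only when the buffered step is read more than once within the same query's evaluation, as below, where both filter steps reuse the in-memory copy.

let
    Source = Sql.Database(HostName, DatabaseName),
    // Buffer once; both steps below reuse this in-memory table
    // during this single evaluation.
    dbo_dimaccount = Table.Buffer(Source{[Schema="dbo",Item="dimaccount"]}[Data]),
    Assets = Table.SelectRows(dbo_dimaccount, each [AccountType] = "Assets"),
    Liabilities = Table.SelectRows(dbo_dimaccount, each [AccountType] = "Liabilities"),
    Combined = Table.Combine({Assets, Liabilities})
in
    Combined

Keep in mind that Table.Buffer also prevents query folding for downstream steps, so even within a single query it should be used deliberately.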

 

Let’s explore a few more scenarios.

 

Scenario 2: Appending Queries

In this scenario, I created two queries, dimaccountAssets and dimaccountLiabilities. Both queries read from the same dimaccount table but apply different filters.

dimaccountAssets

let
  Source           = Sql.Database(HostName, DatabaseName),
  dbo_dimaccount   = Source{[Schema = "dbo", Item = "dimaccount"]}[Data],
  #"Filtered Rows" = Table.SelectRows(dbo_dimaccount, each ([AccountType] = "Assets"))
in
  #"Filtered Rows"

dimaccountLiabilities

 

let
  Source           = Sql.Database(HostName, DatabaseName),
  dbo_dimaccount   = Source{[Schema = "dbo", Item = "dimaccount"]}[Data],
  #"Filtered Rows" = Table.SelectRows(dbo_dimaccount, each ([AccountType] = "Liabilities"))
in
  #"Filtered Rows"

I then created a third query, dimaccountAppended, which appends these two queries using Table.Combine.

tharunkumarRTK_3-1766760981450.png
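The appended query in the screenshot follows this general shape (a sketch, since the original M is visible only in the image):

let
    Source = Table.Combine({dimaccountAssets, dimaccountLiabilities})
in
    Source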

Query Dependency View

tharunkumarRTK_4-1766760999059.png

All three queries were loaded into Power BI without disabling any of them.

Results

Three queries hit the database:

tharunkumarRTK_5-1766761019681.png

 

  • dimaccountAssets queried the table once to retrieve Asset accounts.
  • dimaccountLiabilities queried the table once to retrieve Liability accounts.
  • dimaccountAppended executed a separate query that scanned the table twice and combined the results using UNION ALL.

tharunkumarRTK_6-1766761054313.png

The generated SQL clearly shows two filtered scans of the same table, appended together.

Once again, this indicates that the appended query does not reuse the results of the referenced queries. It independently evaluates the logic defined in those queries.

In this case, the append operation successfully folded, which prevented an even worse outcome. If folding had failed, Power Query would have retrieved full datasets and performed the append locally.

Scenario 3: Merging Queries

In this scenario, I merged the dimaccount table with a factfinance table to bring an aggregated value into the dimension.

tharunkumarRTK_7-1766761086518.png

This example is intentional. In a tabular model, this logic should be handled through relationships and measures, not in Power Query. The purpose here is to illustrate query evaluation behavior, not to recommend this pattern.

factfinance

let
    Source = Sql.Database(HostName, DatabaseName),
    dbo_factfinance = Source{[Schema="dbo",Item="factfinance"]}[Data]
in
    dbo_factfinance

dimaccount (after merge)

let
    Source = Sql.Database(HostName, DatabaseName),
    dbo_dimaccount = Source{[Schema = "dbo", Item = "dimaccount"]}[Data],
    #"Merged Queries" = Table.NestedJoin(
        dbo_dimaccount,
        {"AccountKey"},
        factfinance,
        {"AccountKey"},
        "factfinance",
        JoinKind.LeftOuter
    ),
    #"Aggregated factFinance" = Table.AggregateTableColumn(
        #"Merged Queries",
        "factfinance",
        {{"Amount", List.Sum, "Sum of Amount"}}
    )
in
    #"Aggregated factFinance"

Query Dependency View

tharunkumarRTK_8-1766761114794.png

To measure query execution accurately, I disabled the standalone factfinance query and loaded only dimaccount.

Results

100 queries hit the database:

  • dimaccount was queried once.
  • factfinance was queried 99 times.

The reason is straightforward. The dimaccount table contains 99 distinct account keys, and for each key, Power Query executed a separate query against the factfinance table with a filter on that key.

tharunkumarRTK_9-1766761181530.png

While Power Query avoided scanning the entire fact table for each query, this execution pattern is still highly inefficient and illustrates how query dependencies can explode database activity.

You could handle this scenario by writing a native SQL statement with a join condition and avoid hitting the database so many times. Still, I want to suggest an option I learned from a Chris Webb blog post. Here it is.

Using the Table.AddKey function, mark the AccountKey column as a primary key in dimaccount, and then perform the merge between factfinance and dimaccount in a separate query. In my case, I created a query called dimaccountMerged; the M code is below.

dimaccount

 

let
    Source = Sql.Database(HostName, DatabaseName),
    dbo_dimaccount = Source{[Schema="dbo",Item="dimaccount"]}[Data],
    Custom1 = Table.AddKey(dbo_dimaccount, {"AccountKey"}, true)
in
    Custom1

dimaccountMerged

 

let
  #"Merged Queries" = Table.NestedJoin(
    dimaccount,
    {"AccountKey"},
    factfinance,
    {"AccountKey"},
    "factfinance",
    JoinKind.LeftOuter
  ),
  #"Aggregated factfinance" = Table.AggregateTableColumn(
    #"Merged Queries",
    "factfinance",
    {{"Amount", List.Sum, "Sum of Amount"}}
  )
in
  #"Aggregated factfinance"

Query Dependency View

tharunkumarRTK_10-1766761207566.png

This reduced the number of queries from 100 to 5.

tharunkumarRTK_11-1766761241470.png

One query reads the data from the factfinance table:

tharunkumarRTK_12-1766761273893.png

The other four queries perform the complete join operation: Power Query grouped the 99 AccountKeys into batches of 25 and executed one query per batch. As Chris Webb mentioned in his article, this significantly reduces the number of queries and can improve performance.

tharunkumarRTK_13-1766761307952.png

Key Observations Based on the Three Scenarios

The key takeaway is this: each query is evaluated independently, and referencing one query in another reuses the transformation logic, not the query results.

Queries also run in parallel; the maximum number of queries that run concurrently depends on the maximum number of concurrent jobs setting in Power BI Desktop.

If query referencing does not reduce database calls, how can we reduce them effectively?

Using Power BI Dataflows

You can create a dataflow and move into it all source queries and any other queries that contain repetitive transformation logic, such as dimaccount and factfinance in the scenarios discussed above. In your Power BI semantic model, connect to the dataflow and consume the data from there.

With this approach, even if multiple queries reference the same data in the semantic model, the requests are handled by the dataflow rather than being sent to the underlying data source.
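For illustration, a query in the semantic model that consumes a dataflow entity looks roughly like this (the workspace and dataflow IDs are placeholders):

let
    Source = PowerPlatform.Dataflows(null),
    Workspaces = Source{[Id="Workspaces"]}[Data],
    Workspace = Workspaces{[workspaceId="<workspace-id>"]}[Data],
    Dataflow = Workspace{[dataflowId="<dataflow-id>"]}[Data],
    dimaccount = Dataflow{[entity="dimaccount", version=""]}[Data]
in
    dimaccount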

While a dataflow introduces one more artifact into the solution and increases maintenance effort, it is one of the most effective ways to reduce the number of queries that hit the database.

Additional Optimization Recommendations

If dataflows are not an option, the following practices can still help reduce database load and refresh time to some extent:

  1. Prefer native SQL queries for complex transformations. This reduces dependency on query folding and avoids inefficient execution patterns. In the merge scenario above, a single SQL query could have replaced dozens of individual queries (see the M sketch after this list).
  2. Uncheck “Enable load” for staging queries and any queries that are not directly consumed by the report.
  3. For truly static tables whose data never changes, uncheck the “Include in report refresh” option. This avoids unnecessary database reads while still allowing the table to be used in the model.
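As an example of the first recommendation, the whole merge scenario could be handled by one hand-written statement passed to the source (a sketch; the SQL mirrors the tables used above and is an assumption about your schema):

let
    Source = Sql.Database(HostName, DatabaseName, [Query = "
        SELECT d.*, f.[Sum of Amount]
        FROM dbo.dimaccount AS d
        LEFT JOIN (
            SELECT AccountKey, SUM(Amount) AS [Sum of Amount]
            FROM dbo.factfinance
            GROUP BY AccountKey
        ) AS f ON f.AccountKey = d.AccountKey
    "])
in
    Source

Note that hand-written queries like this generally prevent folding of any further steps, so keep subsequent transformations minimal.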

In this blog, I shared my observations on how Power Query evaluates queries when using a Fabric Lakehouse SQL Analytics endpoint as the data source. Based on my experience, the behavior is largely consistent across other SQL Server–based sources, though it may vary with sources such as Databricks, Snowflake, and Redshift.

 

Credits: Some of the topics discussed in this blog are inspired by Microsoft documentation and the excellent articles written by Chris Webb. I hope this blog helped clarify some common assumptions and provided a few useful insights. Feel free to share your thoughts, observations, or feedback in the comments section.

 

Happy learning!!!

SachinNandanwar
Super User

The WINDOW function in DAX has made life easier when dealing with complex calculations—especially those that require running totals, ranking, and dynamic windowed aggregations across different contexts and partitions.

 

I am particularly intrigued by the ABS and REL arguments of the WINDOW function, as they provide a high level of flexibility when working with running totals across window partitions. In this post, I will share some insights on how to leverage these two arguments effectively.

Definition

ABS: The boundary is an absolute position within the partition (for example, 1 is the first row).

REL: The boundary is an offset relative to the current row.

Either boundary of the window, start or end, can use either type, as the examples below show.

 

You can find the official definition here.
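Paraphrasing that documentation, the function has roughly this shape (square brackets mark optional arguments; treat this as a sketch, not the authoritative signature):

WINDOW ( from, [from_type], to, [to_type], [relation or axis], [orderBy], [blanks], [partitionBy], [matchBy], [reset] )

Here from_type and to_type each accept ABS or REL, independently of each other.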

 

Sounds simple, right? Maybe. Let’s look at a few examples using the WINDOW function.

 

We have a dataset with the following columns: Region_id, Region, Sale Date, Total Sales

 

SachinNandanwar_0-1766075574070.png

With the new WINDOW function, calculating a running total has become easier across window partitions. The partition for this data is based on Region.

Window_Fn =
CALCULATE (
    SUM ( Table[total_sales] ),
    WINDOW (
        0,
        ABS,
        1,
        REL,
        ALLEXCEPT ( Table, Table[region] ),
        ORDERBY ( Table[Date].[Date], ASC ),
        PARTITIONBY ( Table[region_id] )
    )
)

The measure Window_Fn above, with 0, ABS and 1, REL, means that for the current row the aggregate SUM(total_sales) is calculated over:

  • All previous rows in the partition
  • The current row
  • The immediate next row relative to the current row

Let’s focus on the output for North America and Europe.

 

SachinNandanwar_0-1766076379373.png

 

For North America, for the date 1/1/2024, the calculated window value is 1500 + 1550 = 3050. This is because the absolute position 0 (which defaults to 1, the first row) of the partition is 1/1/2024, and the relative position, the row immediately after the current row, is 1/5/2024. This repeats for each row, with the window growing by one row each time.

At row 1/20/2024 the relative row is 1/25/2024, and at 1/25/2024 there is no next row for North America, so for those two rows the measure returns the sum of all total_sales values in the partition.

This continues until the ABS and REL boundaries converge, for example at 1/20/2024 and 1/25/2024 for North America, and at 1/23/2024 and 1/28/2024 for Europe.

 

Let’s change the ABS and REL values to 2 and -1, respectively. In this case, the window starts at the second row of the partition and, for any given current row, ends one row before the current row.

 

Window_Fn =
CALCULATE (
    SUM ( Table[total_sales] ),
    WINDOW (
        2,
        ABS,
        -1,
        REL,
        ALLEXCEPT ( Table, Table[region] ),
        ORDERBY ( Table[Date].[Date], ASC ),
        PARTITIONBY ( Table[region_id] )
    )
)

SachinNandanwar_0-1766076741075.png

 

Here, the window starts at the second row of the partition and ends one row prior to the current row. If the relative boundary is before the absolute boundary, the result is blank.

 

The ABS boundary of the partition is 1/5/2024, the second row. For the first row of the North America partition, the ABS and REL values don’t produce a meaningful window: there is no row before 1/1/2024, and the row matching the ABS value of 2 is 1/5/2024.

To put it in perspective: if an interval starts at 10 and ends at 2, it is impossible to find a value such as 3 within that range, which makes the proposition illogical or meaningless. The same is the case for the 1/5/2024 row for North America and the 1/8/2024 row for Europe. Remember, the window start for North America is 1/5/2024, and the relative (end) boundary for each current row must be at or after the absolute (start) boundary.

Let’s move to the third row, 1/10/2024, for North America. For this row the window both starts and ends at 1/5/2024: the ABS boundary is 1/5/2024, and one row before the current row of 1/10/2024 is also 1/5/2024.

Let’s perform the same checks for Europe. The window start, the second row of the partition, is 1/8/2024, and for the first row, 1/3/2024, the relative boundary does not exist. Moving to the next row, 1/8/2024, the window would start at 1/8/2024 and end at 1/3/2024, which is meaningless. For the third row, 1/13/2024, the window starts at 1/8/2024 and ends at 1/8/2024.

For the last row for Europe, the window starts at 1/8/2024 and ends at 1/23/2024. The output for this row is the sum of all Total Sales values from 1/8/2024 to 1/23/2024, i.e. 1250 + 1300 + 1350 + 1400 = 5300.

In the previous two examples, the start of the window was defined by ABS and the end by REL. Let us now reverse the logic, using REL to define the start and ABS to define the end, and see how the summation is calculated.

Window_Fn =
CALCULATE (
    SUM ( Table[total_sales] ),
    WINDOW (
        -2,
        REL,
        2,
        ABS,
        ALLEXCEPT ( Table, Table[region] ),
        ORDERBY ( Table[Date].[Date], ASC ),
        PARTITIONBY ( Table[region_id] )
    )
)

 

SachinNandanwar_0-1766076882880.png

 

For North America, the window for any given current row starts two rows before the current row and ends at the second row of the partition. For the first row in the partition, the output of the window function is 1500 + 1550, as there are no rows before 1/1/2024. For the second row it is the same, 1500 + 1550, and likewise for the third row.

For the fourth row, 1/15/2024, two rows before the current row is 1/5/2024, which is exactly the end boundary defined by ABS, so the window contains the single row 1/5/2024 and the sum is 1550.

The output of the measure for the fifth row is blank, as the window would start at 1/10/2024 and end at 1/5/2024, which is meaningless. The same is true for the sixth and final row, 1/25/2024.

To sum it up:

  • The window starts two rows before the current row
  • The window ends at the second row of the partition
  • If the window boundaries are invalid, the measure returns blank

For North America:

  • First few rows include all available rows up to the ABS boundary
  • Later rows return blank when the window becomes illogical

Conclusion

The WINDOW function significantly simplifies complex analytical calculations in DAX. A solid understanding of the ABS and REL parameters provides powerful flexibility for performing row-by-row calculations across window partitions.

NHariGouthami
Microsoft Employee

How I Built a Daily Monitoring Dashboard in Under 10 Minutes

In today’s data‑driven world, building monitoring dashboards in Power BI often feels repetitive: drag fields… configure axes… format markers… copy styles… fix errors… repeat. Creating even a handful of visuals can easily take hours.

But what if the entire process could be automated, consistent, and done in minutes?

Recently, I built a complete monitoring dashboard for our A3P Analytics platform using GitHub Copilot Agent Mode and the Power BI Project (.pbip) format—and what would normally take 2–3 hours was completed in ~10 minutes.

Here’s exactly how.

Read more...

AnkitaaMishra
Super User

Struggling to decide whether a transformation belongs in Power Query or DAX? In this practical guide, we break down when and why to use each tool, with clear decision rules, real scenarios, performance considerations, and best practices every Power BI author should know.

Read more...

aysegulyigit
Regular Visitor

Building a data-driven culture requires more than just modern tools—it requires trust. This article explores the three pillars of Fabric governance: Endorsement, Metadata Scanning, and Data Lineage, helping you turn data chaos into a managed, transparent environment.

Read more...

burakkaragoz
Community Champion

The November 2025 Feature Summary is out! While everyone talks about AI, I'm diving into the features that actually matter for report builders: The "Grow to Fit" Matrix, the GA New Card Visual, and TMDL. Here is why you should care.

Read more...

Abhilash_P
Impactful Individual

As Power BI adoption grows across organizations, maintaining control over reports, datasets, and access becomes increasingly complex. Without proper governance, data duplication, inconsistent reports, and security risks can emerge quickly.

 

A Power BI Governance Framework is a structured approach to manage data, users, and content in a consistent, secure, and scalable way. In this blog we will discuss how to build a practical, easy-to-follow governance model for Power BI in an organization.

Read more...

techies
Super User

Understanding user behavior in learning systems, SaaS platforms, or product telemetry often starts with semi-additive calculations. 

But semi-additive metrics only tell what happened at a point in time. They do not reveal behavioral continuity. To address this, a solution requires sequence analysis, specifically using run-length encoding through DAX or SQL window functions.

Read more...

slindsay
Community Admin

Join our final QuickViz challenge of Fabric Data Days! Spend 30 minutes with us building a single Power BI visual.

Read more...

slindsay
Community Admin

Meet the four talented finalists of the Fabric Data Days Student Dataviz Contest - each selected for their creativity and insight. Their work will take the spotlight as the audience votes live to crown the winner.

Read more...

slindsay
Community Admin

We’re excited to celebrate the top notebooks from this year’s Fabric Data Days contest! These creators impressed the judges with their clean code, thoughtful explanations, and insightful analyses.

Read more...

slindsay
Community Admin

The results are in! Discover the winning Power BI CVs that stood out for their clarity, creativity, and ability to showcase personal experience through data.

Read more...

ajaybabuinturi
Memorable Member

When you are working in Power BI, one of the fundamental aspects of building a robust data model is defining how different tables are related to each other. These relationships determine how filters and slicers affect the data, and one of the key settings you will encounter is the Cross Filter Direction. But what does this actually mean, and how does changing it impact your report’s behaviour?

 

In this blog, I will walk you through the Cross Filter Direction feature in Power BI: the types of filtering relationships available, the potential effects of changing these directions, and the advantages and considerations of each approach.

Blog Cover Image.png

Read more...

SachinNandanwar
Super User

Ever wondered how to replicate SQL APPLY clauses in DAX?

In the following article, I propose an ingenious method to replicate the APPLY behavior in DAX.

Read more...

tharunkumarRTK
Super User

Before the introduction of TMDL (Tabular Model Definition Language), modeling tasks in Power BI Desktop were almost entirely GUI-driven. Any advanced tabular modeling tasks required external tools like Tabular Editor, ALM Toolkit, or SQL Server Management Studio.

These tools are not created by Microsoft, many organizations do not allow them, and some require additional licensing. In this blog, let’s talk about a few tasks that the new TMDL view enables directly inside Power BI Desktop: tasks that were not possible using the GUI at the time of writing.
Changing the Storage Mode
Before TMDL, switching a table from Import to DirectQuery was not possible in the GUI. TMDL makes it simple.
 
  1. Open your semantic model.
  2. Switch to the TMDL view.
  3. Select the table in the Model Explorer on the right.
    tharunkumarRTK_0-1763897289647.png
4. Click Script TMDL to → Script tab.
tharunkumarRTK_0-1763898268281.png

 

5. A new tab will open with the TMDL script for creating or replacing the table.
6. Scroll down to the partition definition and find the mode property.
tharunkumarRTK_0-1763900955222.png

7. Change the value to the desired storage mode and click Apply.

tharunkumarRTK_0-1763899368664.png

That's it; the storage mode is now updated.

 
tharunkumarRTK_1-1763898384000.png

 

 
Note:
If you switch from DirectQuery to Import, remember to refresh the table so that data loads into the model.
You can use the same approach to switch to other modes like Dual.
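For reference, the partition definition you edit in step 6 looks roughly like this in the TMDL script (a sketch; the table name and source are illustrative):

createOrReplace
    table dimaccount
        partition dimaccount = m
            mode: import
            source =
                let
                    Source = Sql.Database(HostName, DatabaseName),
                    dbo_dimaccount = Source{[Schema="dbo",Item="dimaccount"]}[Data]
                in
                    dbo_dimaccount

Changing mode: import to mode: directQuery (or dual) and clicking Apply is the entire edit.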

Object-Level Security Implementation
Before TMDL, implementing OLS required Tabular Editor or SSMS. Now it can be done directly in Power BI Desktop.
Below is an example that creates a role called RestrictEmployeeId and hides the employeeId column in the Employee table:
tharunkumarRTK_2-1763899916107.png
Another example that hides the entire Employee table:
tharunkumarRTK_1-1763899889902.png
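If the screenshots are hard to read, the two scripts follow this general shape (a sketch based on TMDL's role syntax; verify the property names against a role scripted from your own model):

createOrReplace
    role RestrictEmployeeId
        modelPermission: read
        tablePermission Employee
            columnPermission employeeId: none

and, to hide the entire table:

createOrReplace
    role RestrictEmployee
        modelPermission: read
        tablePermission Employee
            metadataPermission: none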

 

Write your own script based on your model and run it in the TMDL view.
tharunkumarRTK_0-1763898429062.png

 

You can see the created roles in Model Explorer.
tharunkumarRTK_1-1763898442127.png

 

Creation of Perspectives
If you have used the Personalized Visuals feature, you may already know what Perspectives are. They are used to provide a simplified, focused view of a model.
Earlier, creating perspectives required Tabular Editor. TMDL now supports it directly.
The example below creates a perspective called EmployeePerspective that includes selected columns from two tables:
tharunkumarRTK_0-1763899995615.png
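The script in the screenshot follows this general shape (a sketch; the tables and columns are illustrative):

createOrReplace
    perspective EmployeePerspective
        perspectiveTable Employee
            perspectiveColumn EmployeeId
            perspectiveColumn EmployeeName
        perspectiveTable Department
            perspectiveColumn DepartmentName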
Run a similar script based on your model in the TMDL view.
tharunkumarRTK_0-1763899460036.png

 

Your perspective will appear in Model Explorer.
tharunkumarRTK_1-1763899479666.png

 

Addition of Cultures and Translations
If you have created multi-language Power BI reports, you may already know about Cultures.
Previously, adding a new language or translation required Translation Builder or Tabular Editor. With TMDL, you can define cultures directly.
Example: Adding Spanish (es-ES) language and translating several objects:
tharunkumarRTK_1-1763900040544.png

 

Run the script in the TMDL view.
tharunkumarRTK_2-1763900090466.png

 

The culture will appear in Model Explorer.
 tharunkumarRTK_3-1763900111837.png

 

Configuration of Detail Rows Expressions
If you build self-service datasets for Excel users, you may already know about Detail Rows. This property controls exactly which columns are returned when users drill through in Excel PivotTables connected to your Power BI semantic model.
Before TMDL, setting this property required Tabular Editor. Now it can be done in Desktop.
The example below creates a measure EmployeeCount and defines a Detail Rows expression for it:
tharunkumarRTK_0-1763900195949.png
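As a sketch (the column names are illustrative; in practice, script your existing table and add the property rather than recreating the table), the definition looks roughly like this:

createOrReplace
    table Employee
        measure EmployeeCount = COUNTROWS('Employee')
            detailRowsDefinition =
                SELECTCOLUMNS(
                    'Employee',
                    "EmployeeId", 'Employee'[EmployeeId],
                    "EmployeeName", 'Employee'[EmployeeName]
                )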

 

Run this script in the TMDL view.
tharunkumarRTK_0-1763899554392.png

 

To validate the property, you can run a DAX query:
EVALUATE DETAILROWS([EmployeeCount])
tharunkumarRTK_1-1763899580243.png

 

Creating Custom Table Partitions
Before TMDL, splitting a table into custom partitions required Tabular Editor or other external tools. Now this can be done directly in Desktop.
Script your table in the TMDL view, locate the partition section, and modify it as needed.
Example: Splitting the Employee table into two partitions:
tharunkumarRTK_1-1763900267509.png
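The partition section follows this general shape (a sketch; the filter column and boundary value are illustrative):

createOrReplace
    table Employee
        partition EmployeePart1 = m
            mode: import
            source =
                let
                    Source = Sql.Database(HostName, DatabaseName),
                    dbo_employee = Source{[Schema="dbo",Item="employee"]}[Data],
                    Part1 = Table.SelectRows(dbo_employee, each [EmployeeKey] <= 100)
                in
                    Part1
        partition EmployeePart2 = m
            mode: import
            source =
                let
                    Source = Sql.Database(HostName, DatabaseName),
                    dbo_employee = Source{[Schema="dbo",Item="employee"]}[Data],
                    Part2 = Table.SelectRows(dbo_employee, each [EmployeeKey] > 100)
                in
                    Part2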
 
Run the script in the TMDL view.
tharunkumarRTK_2-1763899611008.png

 

You will see the new partitions under the table in Model Explorer.
tharunkumarRTK_3-1763899629453.png
Note: After updating the partitions, refresh the data in the table.
I hope you find this information useful. If I missed any other TMDL capabilities that were not possible through the Power BI Desktop GUI, please share them with me.
Happy Learning!
