Hi,
I'm debugging a Power BI report that is running very slowly. I found that the Power Query code contains 4 lines of Table.NestedJoin(...), which means it's merging the first table with a second, then with a third, and then with a fourth.
It seems that because it's merging so many tables one after the other, it's taking a long time to load. I was wondering if there are any tips on speeding the report up? (e.g. is it possible to use views, similar to SQL, or another way of doing the merges?)
Jason
Hello @jasonyeung87 ,
I work with DevOps data quite a bit and have encountered this challenge in the past. When working with joins in Power Query against Azure DevOps OData, I find that the shape of the data and the query design tend to have a greater impact on performance than the specific join functions used. I'm including a few different sections in my response so I can address the specific question asked and also provide solution options, since the two are not really the same thing in this scenario.
General best practices (foldable and non-foldable sources)
Avoid joins altogether if a single entity or pre-shaped view will meet your requirement.
Recommendations for DevOps queries:
Model DevOps analytics views or OData queries to return the shape you need, or as close to it as possible.
Analytics views simplify a lot of the work. Here is a query that returns an analytics view saved in my Favorites folder in DevOps.
Be sure to replace the placeholders with your actual values: {org name} and {project name}.
let
Source = VSTS.AnalyticsViews("{org name}", "{project name}", []),
Favorites_Folder = Source{[Id="Favorites",Kind="Folder"]}[Data],
data = Favorites_Folder{[Id="eecf4346-71bf-4f93-815d-b663fd0c193c",Kind="Table"]}[Data]
in
data
OData query for velocity by team based on stories:
Be sure to replace the placeholders with your actual values: {org name}, {project name}, and {team name}. I have also included commented lines that show how to apply other types of filters and aggregations if needed. I needed to show the velocity for each team separately, which is why you see two separate queries (team_data1 and team_data2) that are then appended together.
let
team_data1 = OData.Feed ( "https://analytics.dev.azure.com/{org name}/{project name}/_odata/v3.0-preview/Iterations?"
&"$apply=filter( "
&"Teams/any(t:t/TeamName eq '{team name}') "
&"and StartDate lt now() "
// &"and BoardName eq 'Stories' "
// ------------------------------------------------------------------------
// Examples of additional filters if needed
// &"and startswith(Area/AreaPath,'{team name}\{area path}') "
// &"and DateValue ge Iteration/StartDate "
// &"and DateValue le Iteration/EndDate "
// &"and Iteration/StartDate le now() "
// &"and Iteration/EndDate ge now() "
// &"and BoardName eq 'Stories' "
// &"and DateValue ge 2023-01-01Z "
// &") "
// &"/groupby( "
// &"(DateValue,ColumnName,LaneName,State,WorkItemType,AssignedTo/UserName,Area/AreaPath), "
// &"aggregate($count as Count) "
// &") "
// ------------------------------------------------------------------------
&") &$orderby=IsEnded,StartDate desc,EndDate desc,IterationName "
&"&$select=IterationSK,IterationName,StartDate,EndDate,IsEnded,IterationPath "
&"&$top=100 ",
null, [Implementation="2.0",OmitValues = ODataOmitValues.Nulls,ODataVersion = 4] ),
// repeat for every team
team_data2 = OData.Feed ( "https://analytics.dev.azure.com/{org name}/{project name}/_odata/v3.0-preview/Iterations?"
&"$apply=filter( "
&"Teams/any(t:t/TeamName eq '{team name}') "
&"and StartDate lt now() "
&") &$orderby=IsEnded,StartDate desc,EndDate desc,IterationName "
&"&$select=IterationSK,IterationName,StartDate,EndDate,IsEnded,IterationPath "
&"&$top=100 ",
null, [Implementation="2.0",OmitValues = ODataOmitValues.Nulls,ODataVersion = 4] ),
Result = Table.Combine ( { team_data1, team_data2 } ),
#"Extracted Date" = Table.TransformColumns(Result,{{"StartDate", DateTime.Date, type date}, {"EndDate", DateTime.Date, type date}}),
#"Added Custom" = Table.AddColumn(#"Extracted Date", "DateRange", each Date.ToText ( [StartDate], "MM.dd.yy" ) & " - " & Date.ToText ( [EndDate], "MM.dd.yy" ), type text),
#"Inserted Text Before Delimiter" = Table.AddColumn(#"Added Custom", "Project", each Text.BeforeDelimiter([IterationPath], "\"), type text),
#"Added Custom1" = Table.AddColumn(#"Inserted Text Before Delimiter", "ProjectIterationStartDate", each [Project] & "_" & Date.ToText ( [StartDate], "yyyyMMdd" ), type text)
in
#"Added Custom1"
OData query to return Users as a dimension:
let
Source = OData.Feed ("https://analytics.dev.azure.com/{org name}/{project name}/_odata/v4.0-preview/Users?"
&"$count=true&$select=UserName,UserEmail"
,null, [Implementation="2.0",OmitValues = ODataOmitValues.Nulls,ODataVersion = 4])
in
Source
If you must join locally, do so on as little data as possible.
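To illustrate that last point, here is a minimal sketch of a local merge done on as little data as possible: each side is trimmed to only the needed columns before the join. The query names (WorkItems, Iterations) and column names are placeholders, not part of the original report.

```
// Hypothetical sketch: trim both tables to just the columns you need
// before merging locally, so less data has to be buffered for the join.
let
    LeftSmall  = Table.SelectColumns(WorkItems, {"WorkItemId", "Title", "IterationSK"}),
    RightSmall = Table.SelectColumns(Iterations, {"IterationSK", "IterationName"}),
    Joined = Table.NestedJoin(LeftSmall, {"IterationSK"}, RightSmall, {"IterationSK"}, "Iteration", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Joined, "Iteration", {"IterationName"})
in
    Expanded
```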
Please let me know if you have questions about this info.
If this post helps to answer your questions, please consider giving it a kudo and/or marking it as a solution so others can find it more quickly when faced with a similar challenge.
Proud to be a Microsoft Fabric Super User
Hi @jasonyeung87,
Have you had a chance to review the solution shared by @MarkLaf? If the issue persists, feel free to reply so we can help further.
Thank you.
Another approach: don't perform the joins.
Instead, load the tables separately, then use relationships in calculations (or, if you must, pull data over with RELATED in calculated columns).
If you can, leverage dataflows to stage the data, and ensure the join columns are distinct on at least one side (i.e., the join column is a primary key on the table you are joining to; check with Table.Keys).
If you have Fabric capacity, see if you can set up a copy activity to pull the data into a lakehouse. I haven't done it personally, but from what I've found you can do it with an OData connector.
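The key-distinctness check mentioned above can be sketched in M as follows. This is a hypothetical example: the Users query name and UserEmail column are placeholders, and the pattern simply verifies uniqueness before declaring a primary key the engine can use.

```
// Hypothetical sketch: verify the join column is unique on the dimension side,
// then register it as a primary key so merges against it can be optimized.
let
    IsUnique = Table.IsDistinct(Users, {"UserEmail"}),
    Keyed = if IsUnique
            then Table.AddKey(Users, {"UserEmail"}, true)
            else error "UserEmail is not unique on Users",
    Keys = Table.Keys(Keyed)  // inspect the registered keys to confirm
in
    Keys
```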
Hi @anilelmastasi ,
It's using "Azure DevOps (Boards only)". I noticed it's using quite a few Table.NestedJoin() calls.
Hi all,
Thanks for all your tips! Here's some more details on the report:
- the data sources are mainly from DevOps, which don't appear to fold. When I right-clicked the first step, the "View Native Query" option was greyed out.
- I'm not sure whether I can create views in DevOps; I thought I could do something similar in Power Query.
- I'll try using the other ways to join the tables as @cengizhanarslan mentioned.
Sincerely,
Jason
Hi @jasonyeung87,
As you mentioned earlier, have you implemented the solution @cengizhanarslan suggested for your issue?
Thank you.
Nested joins don't necessarily eat up performance. Table.NestedJoin folds, as long as the rest of your query is still folding.
But if your query has stopped folding before the join step, then plain Table.Join is faster, especially for inner joins where your data is already sorted on the join keys: you can pass the JoinAlgorithm.SortMerge option and stream that data in nice and fast.
But yeah, it's not necessarily the nested joins that are slowing your query down, if it's still folding.
--Nate
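A minimal sketch of the SortMerge approach Nate describes, assuming both sides are already sorted on the join key (the WorkItems and Iterations query names are placeholders). Note that Table.Join, unlike Table.NestedJoin, requires non-overlapping column names, which is why one side is prefixed here.

```
// Hypothetical sketch: both tables sorted on the join key, so
// JoinAlgorithm.SortMerge can stream the rows instead of buffering them.
let
    LeftSorted  = Table.Sort(WorkItems, {{"IterationSK", Order.Ascending}}),
    RightSorted = Table.Sort(Iterations, {{"IterationSK", Order.Ascending}}),
    Joined = Table.Join(
        LeftSorted, "IterationSK",
        Table.PrefixColumns(RightSorted, "Iter"), "Iter.IterationSK",
        JoinKind.Inner, JoinAlgorithm.SortMerge)
in
    Joined
```

A word of caution on this pattern: SortMerge assumes the data really is sorted on the key; if it isn't, the join can silently return wrong results, so only use it when the sort order is guaranteed.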
If the source is SQL/Databricks/Fabric Warehouse/Lakehouse SQL endpoint, merges can fold into one server-side query only if you haven’t broken folding earlier. In Power Query, right-click the step just before each merge → View Native Query. If it’s disabled, folding is broken → merges run locally → slow.
If your data is coming from a SQL-capable source, the fastest approach is usually to create a view (or a single SQL query) that does all the joins, then import the already-joined result into Power BI.
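For the SQL-capable case, one way to import an already-joined result in a single query is Value.NativeQuery. This is a hypothetical sketch: the server, database, table, and column names are all placeholders, and EnableFolding lets later steps fold on top of the native query where the connector supports it.

```
// Hypothetical sketch: push all joins to the SQL source and import
// the pre-joined result in one query.
let
    Source = Sql.Database("myserver.database.windows.net", "MyDb"),
    Joined = Value.NativeQuery(
        Source,
        "SELECT w.WorkItemId, w.Title, i.IterationName
         FROM dbo.WorkItems w
         JOIN dbo.Iterations i ON i.IterationSK = w.IterationSK",
        null,
        [EnableFolding = true])
in
    Joined
```

This doesn't apply to the Azure DevOps (Boards only) connector, which isn't a SQL source; there, shaping the OData query itself is the equivalent move.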
Hi @jasonyeung87, yes, nested joins will eat up Power BI performance if the query is not folding properly.
It is always recommended to push data transformations as close to the source as possible.
As you mentioned, using a view is a good option for these requirements.
Unless it cannot be accomplished on the source side, aim to push all changes there.
Always bring in only the necessary data. Proper formatting and ensuring the correct data types will all help improve performance.
If you still want to do the join inside Power BI, check whether you can use Table.Join instead of Table.NestedJoin, and also check whether your query is being folded.
https://blog.crossjoin.co.uk/2020/06/07/optimising-the-performance-of-power-query-merges-in-power-bi...
Thanks
If this response was helpful in any way, I’d gladly accept a kudo.
Please mark it as the correct solution. It helps other community members find their way faster