
Anonymous
Not applicable

Why does my query run so slow? How do I read a diagnostic report? (diagnostic report attached)

I have a query that is really slow, and I want to identify which step in my query is the problem. I ran it for a couple of days' worth of data (I want to run it for a year's worth of data, but it won't finish). I ran the query with the diagnostic report turned on, but I'm unclear on how to interpret the results.

 

I'm trying to identify where in my query I have logic that is causing it to run slow. Below is a link to the diagnostic report.

 

Diagnostic Report 


9 REPLIES
edhans
Super User

That report doesn't help a lot, as some of the info is behind the [record] fields.

 

Can you show us what your M code is doing and explain a little about the model? For example, is it merging two text files that are 2 million rows each? That would run slow...



Anonymous
Not applicable

I put the M code below. The report is merging three data sources together and does have a ton of data. There is a text file that has a few hundred thousand lines, and two different Business Warehouse data pulls that each also have a few hundred thousand lines of data.

AlB
Super User

Hi @Anonymous 

Can you please share the pbix file so that we can have a look in detail? Or at least the M code of your query plus a sample of (or all of) the data the query is acting on.

 


Anonymous
Not applicable

Thanks. Below is the M code.

Regarding attaching the actual pbix file, it does contain confidential information. Is there a good way to randomize the data?

 

 

let
    Source = SapBusinessWarehouse.Cubes("vsapb90", "90", "010", [Implementation="2.0"]),
    ZSD_O02 = Source{[Name="ZSD_O02"]}[Data],
    #"ZSD_O02/Y_ZSD_O02_DELIVER_PBI_TRAX_JUD" = ZSD_O02{[Id="ZSD_O02/Y_ZSD_O02_DELIVER_PBI_TRAX_JUD"]}[Data],
    #"Added Items (Pull by Delivery by Sales - weight calc at Matr level)" = Cube.Transform(#"ZSD_O02/Y_ZSD_O02_DELIVER_PBI_TRAX_JUD",
        {
            {Cube.ApplyParameter, "[0I_FPER]", {"001/2020", "001/2020"}},
            {Cube.AddAndExpandDimensionColumn, "[0ACT_GI_DTE]", {"[0ACT_GI_DTE].[LEVEL01]"}, {"Act.goods issue date.Act.goods issue date Level 01"}},
            {Cube.AddAndExpandDimensionColumn, "[0DELIV_NUMB]", {"[0DELIV_NUMB].[LEVEL01]"}, {"Delivery.Delivery Level 01"}},
            {Cube.AddAndExpandDimensionColumn, "[0DOC_NUMBER]", {"[0DOC_NUMBER].[LEVEL01]"}, {"Sales document.Sales document Level 01"}},
            {Cube.AddMeasureColumn, "Delivery quantity", "[Measures].[2M3WEA8ZK0QFKU517D8CGZLEJ]"},
            {Cube.AddMeasureColumn, "Weight Formula (Matr Wgt X Del Qty)", "[Measures].[2M3WEA8ZK0QFKUN2RU84U8AOE]"},
            {Cube.AddAndExpandDimensionColumn, "[0MATERIAL]", {"[0MATERIAL].[LEVEL01]"}, {"Material.Material Level 01"}},
            {Table.AddColumn, "Material.Material Level 01.Key", each Cube.AttributeMemberProperty([Material.Material Level 01], "[20MATERIAL]")},
            {Cube.AddAndExpandDimensionColumn, "[0MATERIAL__ZPHCAT]", {"[0MATERIAL__ZPHCAT].[LEVEL01]"}, {"PH Category*.PH Category* Level 01"}},
            {Cube.AddAndExpandDimensionColumn, "[0PRODH3]", {"[0PRODH3].[LEVEL01]"}, {"Prod. hier. level 3.Prod. hier. level 3 Level 01"}},
            {Cube.AddMeasureColumn, "Delivery USD net val", "[Measures].[2M3WEA8ZK0QDULVK77UTBG079]"},
            {Cube.AddAndExpandDimensionColumn, "[0DOC_TYPE]", {"[0DOC_TYPE].[LEVEL01]"}, {"Sales doc. type.Sales doc. type Level 01"}},
            {Cube.AddAndExpandDimensionColumn, "[0S_ORD_ITEM]", {"[0S_ORD_ITEM].[LEVEL01]"}, {"Item.Item Level 01"}},
            {Cube.AddAndExpandDimensionColumn, "[0DOC_CURRCY]", {"[0DOC_CURRCY].[LEVEL01]"}, {"Document currency.Document currency Level 01"}},
            {Cube.ApplyParameter, "[0I_DAYS]", {"01/15/2020", "01/16/2020"}}
        }),
    #"Changed Type" = Table.TransformColumnTypes(#"Added Items (Pull by Delivery by Sales - weight calc at Matr level)",{{"Act.goods issue date.Act.goods issue date Level 01", type date}, {"Delivery USD net val", Currency.Type}}),
    #"Calculate Sumif weight of Delivery" = Table.AddColumn(#"Changed Type", "Sumif weight of Delivery", each let _item = [Delivery.Delivery Level 01] in
        List.Sum(Table.SelectRows(#"Changed Type", each [Delivery.Delivery Level 01] = _item)[#"Weight Formula (Matr Wgt X Del Qty)"])),
    #"Merged Queries: Tie Trax data to BW Deliery" = Table.NestedJoin(#"Calculate Sumif weight of Delivery", {"Delivery.Delivery Level 01"}, Trax, {"Delivery Document"}, "Trax data ship date 7 19-3 20 fbid", JoinKind.LeftOuter),
    #"*new* Merged Queries Sales and Delivery to bring in freight rev" = Table.NestedJoin(#"Merged Queries: Tie Trax data to BW Deliery", {"Sales document.Sales document Level 01", "Item.Item Level 01"}, Sales, {"Sales Doc", "Item"}, "Sales", JoinKind.LeftOuter),
    #"Expanded Trax data" = Table.ExpandTableColumn(#"*new* Merged Queries Sales and Delivery to bring in freight rev", "Trax data ship date 7 19-3 20 fbid", {"Master Fb Id", "%T019", "GL", "Cost Center", "Day of Ship Date", "Weight Actual Lb", "Weight Fncl Lb", "Orig State Prov", "Dest State Prov", "CA Amount USD", "Number of Freight Bills", "Mode", "Orig Post Code", "Dest Post Code", "Source", "Delivery Document"}, {"Trax data ship date 7 19-3 20 fbid.Master Fb Id", "Trax data ship date 7 19-3 20 fbid.%T019", "Trax data ship date 7 19-3 20 fbid.GL", "Trax data ship date 7 19-3 20 fbid.Cost Center", "Trax data ship date 7 19-3 20 fbid.Day of Ship Date", "Trax data ship date 7 19-3 20 fbid.Weight Actual Lb", "Trax data ship date 7 19-3 20 fbid.Weight Fncl Lb", "Trax data ship date 7 19-3 20 fbid.Orig State Prov", "Trax data ship date 7 19-3 20 fbid.Dest State Prov", "Trax data ship date 7 19-3 20 fbid.CA Amount USD", "Trax data ship date 7 19-3 20 fbid.Number of Freight Bills", "Trax data ship date 7 19-3 20 fbid.Mode", "Trax data ship date 7 19-3 20 fbid.Orig Post Code", "Trax data ship date 7 19-3 20 fbid.Dest Post Code", "Trax data ship date 7 19-3 20 fbid.Source", "Trax data ship date 7 19-3 20 fbid.Delivery Document"}),
    #"Expanded Sales" = Table.ExpandTableColumn(#"Expanded Trax data", "Sales", {"Freight Discount", "Freight Revenue", "Out Freight Costs"}, {"Sales.Freight Discount", "Sales.Freight Revenue", "Sales.Out Freight Costs"}),
    #"Filtered Rows: remove rows where no Trax match" = Table.SelectRows(#"Expanded Sales", each [#"Trax data ship date 7 19-3 20 fbid.Delivery Document"] <> null and [#"Trax data ship date 7 19-3 20 fbid.Delivery Document"] <> ""),
    #"Renamed Columns" = Table.RenameColumns(#"Filtered Rows: remove rows where no Trax match",{{"Act.goods issue date.Act.goods issue date Level 01", "Act Goods Issue Date"}, {"Delivery.Delivery Level 01", "Delivery"}, {"Sales document.Sales document Level 01", "Sales Doc"}, {"Trax data ship date 7 19-3 20 fbid.Master Fb Id", "Master FB ID"}, {"Trax data ship date 7 19-3 20 fbid.%T019", "Trax  fbid.%T019"}, {"Trax data ship date 7 19-3 20 fbid.GL", "GL"}, {"Trax data ship date 7 19-3 20 fbid.Cost Center", "Cost Center"}, {"Trax data ship date 7 19-3 20 fbid.Day of Ship Date", "Ship Date.Trax"}, {"Trax data ship date 7 19-3 20 fbid.Weight Actual Lb", "Actual Lb"}, {"Trax data ship date 7 19-3 20 fbid.Weight Fncl Lb", "Weight Fncl Lb"}, {"Trax data ship date 7 19-3 20 fbid.Orig State Prov", "Orig State Prov"}, {"Trax data ship date 7 19-3 20 fbid.Dest State Prov", "Dest State Prov"}, {"Trax data ship date 7 19-3 20 fbid.CA Amount USD", "CA Amount USD"}, {"Trax data ship date 7 19-3 20 fbid.Number of Freight Bills", "Number of Freight Bills"}, {"Trax data ship date 7 19-3 20 fbid.Delivery Document", "Delivery Document.Trax"}, {"Material.Material Level 01", "Material Name"}, {"Material.Material Level 01.Key", "Material"}, {"Sales doc. type.Sales doc. type Level 01", "Sales Doc Type"}, {"Trax data ship date 7 19-3 20 fbid.Mode", "Mode"}, {"Trax data ship date 7 19-3 20 fbid.Orig Post Code", "Orig Post Code"}, {"Trax data ship date 7 19-3 20 fbid.Dest Post Code", "Dest Post Code"}, {"Document currency.Document currency Level 01", "Delivery Currency"}}),
    #"Added column: divide" = Table.AddColumn(#"Renamed Columns", "Sales Doc Weight % of Delivery", each [#"Weight Formula (Matr Wgt X Del Qty)"]/[Sumif weight of Delivery]),
    #"Changed Type1" = Table.TransformColumnTypes(#"Added column: divide",{{"Sales Doc Weight % of Delivery", Percentage.Type}}),
    #"Add column: allocate weight based on %" = Table.AddColumn(#"Changed Type1", "CA Amt (Allocated)", each [#"Sales Doc Weight % of Delivery"]*[CA Amount USD]),
    #"Changed Type2" = Table.TransformColumnTypes(#"Add column: allocate weight based on %",{{"CA Amt (Allocated)", Currency.Type}}),
    #"Concatenate Del + Sales Doc + Item" = Table.AddColumn(#"Changed Type2", "Del + Sales Doc + Item", each Text.Combine({[Delivery], [Sales Doc], [Item.Item Level 01]}, "~"), type text),
    #"Replaced null w/zero (if no match to sales doc b/c free delivery make the value zero instead of null)" = Table.ReplaceValue(#"Concatenate Del + Sales Doc + Item",null,0,Replacer.ReplaceValue,{"Sales.Freight Discount", "Sales.Freight Revenue", "Sales.Out Freight Costs"}),
    #"Inserted Start of Month to match FX rate" = Table.AddColumn(#"Replaced null w/zero (if no match to sales doc b/c free delivery make the value zero instead of null)", "Act Godds Issue Month", each Date.StartOfMonth([Act Goods Issue Date]), type date),
    #"Merged Queries: tie to FX to correct net sales" = Table.NestedJoin(#"Inserted Start of Month to match FX rate", {"Act Godds Issue Month", "Delivery Currency"}, #"FX Rates", {"INPUTCURRENCY", "BW to BPC Currency Name.Delivery Currency"}, "FX Rates", JoinKind.LeftOuter),
    #"Expanded FX Rates" = Table.ExpandTableColumn(#"Merged Queries: tie to FX to correct net sales", "FX Rates", {"FX"}, {"FX Rates.FX"}),
    #"Renamed Columns3" = Table.RenameColumns(#"Expanded FX Rates",{{"FX Rates.FX", "FX"}}),
    #"Grouped Rows to count double counts" = Table.Group(#"Renamed Columns3", {"Del + Sales Doc + Item"}, {{"Count of Del + Sales Doc + Item", each Table.RowCount(_), type number}, {"All Rows", each _, type table [Act Goods Issue Date=date, Delivery=text, Sales Doc=text, Delivery quantity=number, #"Weight Formula (Matr Wgt X Del Qty)"=number, Material Name=text, Material=text, #"PH Category*.PH Category* Level 01"=text, Prod. hier. level 3.Prod. hier. level 3 Level 01=text, Delivery USD net val=number, Sales Doc Type=text, Item.Item Level 01=text, Delivery Currency=text, Sumif weight of Delivery=number, Master FB ID=text, #"Trax  fbid.%T019"=text, GL=text, Cost Center=text, Ship Date.Trax=date, Actual Lb=number, Weight Fncl Lb=number, Orig State Prov=text, Dest State Prov=text, CA Amount USD=number, Number of Freight Bills=text, Mode=text, Orig Post Code=text, Dest Post Code=text, #"Trax data ship date 7 19-3 20 fbid.Source"=text, Delivery Document.Trax=text, Sales.Freight Discount=number, Sales.Freight Revenue=number, Sales.Out Freight Costs=number, #"Sales Doc Weight % of Delivery"=number, #"CA Amt (Allocated)"=number, #"Del + Sales Doc + Item"=text, Act Godds Issue Month=date, FX=number]}}),
    #"Expanded All Rows" = Table.ExpandTableColumn(#"Grouped Rows to count double counts", "All Rows", {"Act Goods Issue Date", "Delivery", "Sales Doc", "Delivery quantity", "Weight Formula (Matr Wgt X Del Qty)", "Material Name", "Material", "PH Category*.PH Category* Level 01", "Prod. hier. level 3.Prod. hier. level 3 Level 01", "Delivery USD net val", "Sales Doc Type", "Item.Item Level 01", "Delivery Currency", "Sumif weight of Delivery", "Master FB ID", "Trax  fbid.%T019", "GL", "Cost Center", "Ship Date.Trax", "Actual Lb", "Weight Fncl Lb", "Orig State Prov", "Dest State Prov", "CA Amount USD", "Number of Freight Bills", "Mode", "Orig Post Code", "Dest Post Code", "Trax data ship date 7 19-3 20 fbid.Source", "Delivery Document.Trax", "Sales.Freight Discount", "Sales.Freight Revenue", "Sales.Out Freight Costs", "Sales Doc Weight % of Delivery", "CA Amt (Allocated)", "Act Godds Issue Month", "FX"}, {"Act Goods Issue Date", "Delivery", "Sales Doc", "Delivery quantity", "Weight Formula (Matr Wgt X Del Qty)", "Material Name", "Material", "PH Category*.PH Category* Level 01", "Prod. hier. level 3.Prod. hier. level 3 Level 01", "Delivery USD net val", "Sales Doc Type", "Item.Item Level 01", "Delivery Currency", "Sumif weight of Delivery", "Master FB ID", "Trax  fbid.%T019", "GL", "Cost Center", "Ship Date.Trax", "Actual Lb", "Weight Fncl Lb", "Orig State Prov", "Dest State Prov", "CA Amount USD", "Number of Freight Bills", "Mode", "Orig Post Code", "Dest Post Code", "Trax data ship date 7 19-3 20 fbid.Source", "Delivery Document.Trax", "Sales.Freight Discount", "Sales.Freight Revenue", "Sales.Out Freight Costs", "Sales Doc Weight % of Delivery", "CA Amt (Allocated)", "Act Godds Issue Month", "FX"}),
    #"Fix double count by divide by double count column #1" = Table.AddColumn(#"Expanded All Rows", "Freight Discount (adj)", each [Sales.Freight Discount]/[#"Count of Del + Sales Doc + Item"]),
    #"....same as above but another column" = Table.AddColumn(#"Fix double count by divide by double count column #1", "Freight Rev adj", each [Sales.Freight Revenue]/[#"Count of Del + Sales Doc + Item"]),
    #"Added Custom2" = Table.AddColumn(#"....same as above but another column", "Sales Freight Out (adj)", each [Sales.Out Freight Costs]/[#"Count of Del + Sales Doc + Item"]),
    #"Added Custom3" = Table.AddColumn(#"Added Custom2", "Delivery Qty (adj)", each [Delivery quantity]/[#"Count of Del + Sales Doc + Item"]),
    #"Added Custom" = Table.AddColumn(#"Added Custom3", "Weight Formula (Matr Wgt X Del Qty) Adj", each [#"Weight Formula (Matr Wgt X Del Qty)"]/[#"Count of Del + Sales Doc + Item"]),
    #".....same as above but also X FX to fix Delivery USD" = Table.AddColumn(#"Added Custom", "Delivery USD net val (adj)", each [Delivery USD net val]/[#"Count of Del + Sales Doc + Item"]*[FX]),
    #"Changed Type3" = Table.TransformColumnTypes(#".....same as above but also X FX to fix Delivery USD",{{"Delivery USD net val (adj)", Currency.Type}, {"Delivery Qty (adj)", Currency.Type}, {"Sales Freight Out (adj)", Currency.Type}, {"Freight Rev adj", Currency.Type}, {"Freight Discount (adj)", Int64.Type}}),
    #"Renamed Delivery net USD to Local curr b/c not really in USD" = Table.RenameColumns(#"Changed Type3",{{"Delivery USD net val", "Delivery net val (local curr)"}})
in
    #"Renamed Delivery net USD to Local curr b/c not really in USD"

edhans
Super User

You are doing a number of joins, which may or may not be the issue, except that one of them is against a text file, which is absolutely breaking any folding your SAP connection may otherwise be doing. While I am certain there are optimizations that could be done, which could yield varying levels of improvement, there is no way I can begin to make suggestions from just one query without understanding your overall query model. Even with the PBIX file I couldn't help, because none of the connections would work for me. Someone else more versed in M, especially with SAP data sources, may be able to assist, but this is the kind of thing I'd need to get my hands on and play with.

 

These lines concern me, but again, I have no clue how much is actually going on here: 10 records, 10,000, or 1,000,000?

   #"Calculate Sumif weight of Delivery" = Table.AddColumn(#"Changed Type", "Sumif weight of Delivery", each let _item = [Delivery.Delivery Level 01] in List.Sum(Table.SelectRows(#"Changed Type", each [Delivery.Delivery Level 01] = _item)[#"Weight Formula (Matr Wgt X Del Qty)"])),

Your grouped rows step can be a problem too if it happens after folding has broken.
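For the sumif step specifically, one thing I would try first (untested against your model) is buffering the table once with Table.Buffer, so the List.Sum scan reads an in-memory copy instead of re-evaluating all of the upstream SAP steps for every row. A rough sketch using your step names:

    #"Buffered Changed Type" = Table.Buffer(#"Changed Type"),
    #"Calculate Sumif weight of Delivery" = Table.AddColumn(#"Buffered Changed Type", "Sumif weight of Delivery",
        each let _item = [Delivery.Delivery Level 01] in
            // scan the buffered copy rather than the live #"Changed Type" step
            List.Sum(Table.SelectRows(#"Buffered Changed Type", each [Delivery.Delivery Level 01] = _item)[#"Weight Formula (Matr Wgt X Del Qty)"])),

That only treats the symptom, though; the scan is still row-by-row, so replacing the sumif with a grouped total is the structural fix.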

 



Anonymous
Not applicable

You were right about the sumif formula making the query very slow. I removed the sumif and changed it to a grouping followed by an expand, and that got the query to finally run. Thanks!
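In case it helps anyone else, the pattern I ended up with looks roughly like this (simplified; these are not my exact step names):

    #"Grouped Deliveries" = Table.Group(#"Changed Type", {"Delivery.Delivery Level 01"},
        {{"Sumif weight of Delivery", each List.Sum([#"Weight Formula (Matr Wgt X Del Qty)"]), type number},
         {"All Rows", each _, type table}}),
    // expand the detail rows back out; each row now carries its delivery-level total
    #"Expanded Detail" = Table.ExpandTableColumn(#"Grouped Deliveries", "All Rows",
        List.RemoveItems(Table.ColumnNames(#"Changed Type"), {"Delivery.Delivery Level 01"}))

The sum is evaluated once per delivery instead of once per row, which is why it finishes now.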

edhans
Super User

Great @Anonymous - sometimes it is just a matter of trial and error to figure out the most efficient way to get Power Query to do something.



Anonymous
Not applicable

Thank you for your ideas.

 

You mention the text file could be a problem. What would be a better format? I could change the format. Is .csv better?

 

For the sumif line... would it be better to group the data in a separate query? I originally had the query group itself and then merge back, but I changed to a sumif because I thought merging would be too slow.

 

To answer your other question, I'm dealing with about 100,000 lines of data.

ACCEPTED SOLUTION

edhans
Super User

Any CSV is text, and the issue isn't that it is a text file; it is that it is a flat file, not a table in a database like SQL Server or SAP. Any Excel, CSV, TSV, TXT, or other flat file will always run slower, because Power Query must do 100% of the work, whereas if the table is in a database, there is a good chance you can optimize your queries to push as much of the work as possible back to the server.

 

For example, I am working on a project and this table has a few million rows, probably 3.5-4.0 million. It takes about 3 seconds to run, because all of the steps I did in Power Query (a few filters, column selections, etc.) were sent back to the SQL Server as a SQL statement:

 

select [_].[t0_0] as [Order Type],
    [_].[t1_0] as [Order Number],
    [_].[entered_dt] as [EnteredDateKey],
    [_].[ord_dt] as [OrderDateKey],
    [_].[t2_0] as [OE PO Number],
    [_].[t3_0] as [Customer Number],
    [_].[t4_0] as [AR Reference],
    [_].[tot_sls_amt] as [Total Sales Amount],
    [_].[inv_no] as [Invoice Number],
    [_].[inv_dt] as [InvoiceDateKey],
    [_].[t5_0] as [PaymentDateKey]
from 
(
    select [_].[entered_dt] as [entered_dt],
        [_].[ord_dt] as [ord_dt],
        [_].[tot_sls_amt] as [tot_sls_amt],
        [_].[inv_no] as [inv_no],
        [_].[inv_dt] as [inv_dt],
        ltrim(rtrim([_].[ord_type])) as [t0_0],
        ltrim(rtrim([_].[ord_no])) as [t1_0],
        ltrim(rtrim([_].[oe_po_no])) as [t2_0],
        ltrim(rtrim([_].[cus_no])) as [t3_0],
        ltrim(rtrim([_].[ar_reference])) as [t4_0],
        ltrim(rtrim([_].[user_def_fld_1])) as [t5_0]
    from 
    (
        select [_].[ord_type],
            [_].[ord_no],
            [_].[entered_dt],
            [_].[ord_dt],
            [_].[oe_po_no],
            [_].[cus_no],
            [_].[ar_reference],
            [_].[tot_sls_amt],
            [_].[inv_no],
            [_].[inv_dt],
            [_].[user_def_fld_1]
        from [dbo].[OEHDRHST_SQL] as [_]
        where [_].[cus_no] like 'SFN100%'
    ) as [_]
) as [_]
where (([_].[t1_0] <> '00379675' or [_].[t1_0] is null) and ([_].[t1_0] <> '00379320' or [_].[t1_0] is null)) and ([_].[t1_0] <> '00379674' or [_].[t1_0] is null)

 

That would take minutes to run, perhaps tens of minutes, if it were a text file, because I would have had to load all 3.5M rows into Power Query to filter them down to the 7 records I need in this work in progress.
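For reference, the M behind that is nothing exotic. A rough sketch of the shape (the server, database, and step names here are placeholders, not my actual project):

let
    // placeholder connection: substitute your own server and database
    Source = Sql.Database("myserver", "mydatabase"),
    OrderHistory = Source{[Schema="dbo", Item="OEHDRHST_SQL"]}[Data],
    // these steps fold: Power Query translates them into the WHERE clause and column list in the SQL above
    #"Filtered Rows" = Table.SelectRows(OrderHistory, each Text.StartsWith([cus_no], "SFN100")),
    #"Selected Columns" = Table.SelectColumns(#"Filtered Rows",
        {"ord_type", "ord_no", "entered_dt", "ord_dt", "oe_po_no", "cus_no",
         "ar_reference", "tot_sls_amt", "inv_no", "inv_dt", "user_def_fld_1"})
in
    #"Selected Columns"

You can check whether a step is still folding by right-clicking it in the Applied Steps pane; if View Native Query is enabled, everything up to that step is being sent to the server.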

 

As to your question on the SUMIFS, I don't know; I'd have to test it. I suspect a merge will be faster, but I'd try it both ways to see. A rough sketch of the merge pattern is below.
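Something along these lines, reusing your column names (untested, so treat it as a starting point rather than a drop-in step):

    // one row per delivery with its total weight
    #"Delivery Totals" = Table.Group(#"Changed Type", {"Delivery.Delivery Level 01"},
        {{"Sumif weight of Delivery", each List.Sum([#"Weight Formula (Matr Wgt X Del Qty)"]), type number}}),
    // join the totals back onto the detail rows, then expand the single total column
    #"Merged Totals" = Table.NestedJoin(#"Changed Type", {"Delivery.Delivery Level 01"},
        #"Delivery Totals", {"Delivery.Delivery Level 01"}, "Totals", JoinKind.LeftOuter),
    #"Expanded Totals" = Table.ExpandTableColumn(#"Merged Totals", "Totals", {"Sumif weight of Delivery"})

The total is computed once per delivery instead of once per row, which is why this tends to beat the row-by-row sumif.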



