Anonymous
Not applicable

Remove duplicates by prioritizing rows based on another column

Hi, everyone.

 

I am having the following problem and can't seem to find a way to implement this in Power Query.

I have a dataset of loading lists of orders. It looks like this:

 

SergeyShelest_1-1630059451503.png

 

For the given loading list (Ladeliste) I want to remove duplicate NVE numbers (shipping unit numbers), but only the rows that have empty values in the LHM number column (loading unit ID). If the corresponding cell in the LHM-Nr. column is not empty, I want to keep it. For example, rows 2 and 6 are duplicates, and I want to remove row 2 since its LHM-Nr. cell is empty. Correspondingly, between rows 3 and 7 I want to remove row 3.

 

This is how the resulting table should look:

SergeyShelest_2-1630059719434.png

Removing duplicates in Power Query with Remove Rows > Remove Duplicates simply keeps the first row of each duplicate group. Sorting by loading list and then removing duplicates might work, but I am not sure it wouldn't also delete rows that do have values in the LHM-Nr. column.

 

Does anyone have any idea how to implement this?


25 REPLIES
Anonymous
Not applicable

@Anonymous 

jennratten's solution works in my test. If it doesn't return the correct output, please share a short sample .pbix with the expected output so we can provide an exact solution.

 

 

Paul Zheng _ Community Support Team

Anonymous
Not applicable

Hi, @Anonymous 

 

Here is a sample data file. I did the grouping exactly as per jennratten's solution. If you expand the table you will get exactly the same table as before the grouping.

 

The file can be found here. I created a .pbix file, but it would produce an error when opened on another PC since the source files (PDFs) are on my PC. I put the resulting table into the same folder:

 

https://docs.google.com/spreadsheets/d/1DxeOckz_5eTte3wUdyDUcmXC5zCGXIbW/edit?usp=sharing&ouid=107609193436761136051&rtpof=true&sd=true,  

 

https://drive.google.com/file/d/1kLj8vwUHDV14X2S273cIGtkbVLVLMXpc/view?usp=sharing 

 

Please let me know if you need anything else.

 

Regards

Hello again - here is the same query as the last response with your source data plugged in.  Note - I am not able to download your files (blocker on my end), so I converted the image of your original table to an actual table.  

 

The only difference between the last query and this one is the value of the Source step.  Everything else is the same.  In case you are not already doing so, I recommend you copy this script, paste it into a blank query and then view the output.

 

BEFORE:

jennratten_0-1630341929109.png

 

AFTER:

jennratten_1-1630341977727.png

 

SCRIPT:

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("vZVPa4MwGIe/i+ci778k5ihsDEYZwjZ2KEUctWPQw2D7/uxN5oyhxZMaQjAa8jzmF+PhUBAQAoK04MWRb/fdqb98fv/0LQtR+XU6F7sCpAQsw1DtvNUobAS8XnuPxGIgFATnOdzU2tR7bcPopvvo9VnsHXdL84imPLcgz4C5wYNqyqtW56GZ8vzq6xnz0wm1OIfWpijD2r5075cBDmuFOcB1pd0IJ14SPpNsevMJXNaHx5gTvEpwMwtHRgsuh5tb8Pr1+QGB2WHoPT7dGUBk0UmAtIrlv31239TaMs9ttOWglEFlGyhmULMNFDKo3QbK46YiQrSJP/8tL5vvuKtjvoNAtZEAjgJ6ssWsBwG/kQAkAWsoCQjMCNgg4BEzAbH/AvZaYDi+rgWcHQWAIVIHgfzfrPzjLw==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Ladeliste = _t, Datum = _t, #"Auftragsnr." = _t, #"NVE-Nr." = _t, #"LNM-Nr." = _t, #"LHM-Typ" = _t, Index = _t, Page001 = _t, DeleteRowIndex = _t]),
    grouped = Table.Group(
        Source, 
        // Column(s) containing the values from which you'd like duplicates removed.  
        {"NVE-Nr."},
        {
            { "Table", // Name of the new column.
                each 
                //---------------------------------------------------------
                // Option 1: Sort descending, then select the first result.
                //---------------------------------------------------------
                // This will work if there is a maximum of only two rows per NVE-Nr.
                //let varTable = Table.Sort ( _, {{"LHM-Nr.", Order.Descending}}) in
                //Table.FirstN ( varTable, 1 ),
                //---------------------------------------------------------
                // Option 2: Select the last LHM-Nr. for each NVE-Nr.
                //---------------------------------------------------------
                // This will work if the row to keep always appears last in the group.
                Table.LastN ( _, 1 ),                
                type table
            }
        }
    ),
    expand = Table.ExpandTableColumn ( 
        grouped, 
        "Table",                                  // Expand the tables in this column
        List.Difference (                         // New column names
            Table.ColumnNames (                   // are the column names 
                Table.Combine ( grouped[Table] )  // in the nested tables 
            ), 
            Table.ColumnNames ( grouped )         // that do not appear in the grouped table.
        ) 
    )

in
    expand

 

 


Anonymous
Not applicable

Hi, @jennratten .

 

Thank you very much for your help. I don't seem to understand the Source step. So far I always left it alone since I thought it was irrelevant for my case. Could you please explain what it does and what these random characters do? (Do I also need to adjust it if I am applying this to a bigger table, and if yes, how?)

 

Regards

Sure thing - this value is generated by pasting copied data into Power Query. For example, I converted your image into an Excel table. Then I copied the Excel data and, in Power Query, chose Home > Enter Data > paste > OK (the options will be slightly different if you are using Excel). After doing so, a new query is created whose only step is Source, with a similar script. See the image below. To integrate the sample script I provided with your query: copy the grouped and expand steps from my script, open the Advanced Editor for your query, paste them after the step which generated the table in your first post (likely the last step in your query), then replace "Source" in the grouped step with the name of your last step.

 

jennratten_0-1630343577737.png

 

 

Anonymous
Not applicable

Hi,

 

I am now a bit confused, since I first import lots of PDFs into Power Query and clean them up. The images/samples I provided are from after the cleanup, meaning the query already has multiple previous steps. I could copy and paste an Excel table into Power Query, but the thing is, my table will keep growing as I get new PDFs.

 

Is there an automatic way to remove the duplicates without having to copy the cleaned table into Excel and open it again in a new query, so I can continue right where I left off processing my data?

Duplicates-1.PNG

 

Thanks and regards,

 

 

I apologize if my last response was confusing.  I will separate it clearly.

 

Your question:

I don't seem to understand the Source step. So far I always left it alone since I thought it was irrelevant for my case. Could you please explain what it does and what these random characters do?

 

The first part of the response (answering your question).

This value is generated by pasting copied data into Power Query. For example, I converted your image into an Excel table. Then I copied the Excel data and, in Power Query, chose Home > Enter Data > paste > OK (the options will be slightly different if you are using Excel). After doing so, a new query is created whose only step is Source, with a similar script.

 

The second part of the response (explains how to add this to your existing query). 

To integrate the sample script I provided with your query, copy the grouped and expand steps from my script, open the Advanced Editor for your query, paste them after the step which generated the table in your first post (likely the last step in your query), then replace "Source" in the grouped step with the name of your last step.
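As a rough sketch of what that integration could look like (the step name CleanedTable and the sample rows below are placeholders, not your real data; in your query, the grouped step would instead reference the name of your actual last step):

```
let
    // Placeholder for the last step of your existing query.
    // In your real query this would be the step that produces the cleaned table.
    CleanedTable = #table (
        { "NVE-Nr.", "LHM-Nr." },
        { { "340123", null }, { "340123", "L-77" }, { "340456", "L-12" } }
    ),

    // Group by the column whose duplicates should be removed,
    // keeping one row (here: the last) per NVE-Nr.
    grouped = Table.Group (
        CleanedTable,
        { "NVE-Nr." },
        { { "Table", each Table.LastN ( _, 1 ), type table } }
    ),

    // Expand every column except the grouping column back out.
    expand = Table.ExpandTableColumn (
        grouped,
        "Table",
        List.Difference (
            Table.ColumnNames ( Table.Combine ( grouped[Table] ) ),
            Table.ColumnNames ( grouped )
        )
    )
in
    expand
```

Only the grouped and expand steps need to be copied into your query; the Source/CleanedTable step is just there so the sketch runs on its own.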

 

If this post helps to answer your questions, please consider marking it as a solution so others can find it more quickly when faced with a similar challenge.

Proud to be a Microsoft Fabric Super User

Anonymous
Not applicable

Still doesn't answer my question:

 

What happens when new data comes in and the table expands? Will I always have to copy and paste my whole table to adjust the code?

 

Second question:

At the moment I am not able to copy the whole table (it contains over 12,500 rows) and paste it into the Enter Data dialog within Power Query. How would you go about that?

 

Regards,

No, you don't have to copy and paste at all. Copying and pasting was only applicable to my sample data, since I could not download your file.

 

You must integrate the steps I provided into your script in the Advanced Editor. 

 

from the prior message

The second part of the response (explains how to add this to your existing query). 

To integrate the sample script I provided with your query, copy the grouped and expand steps from my script, open the Advanced Editor for your query, paste them after the step which generated the table in your first post (likely the last step in your query), then replace "Source" in the grouped step with the name of your last step.

 

If you will post your script, I will show you how to integrate it. To post your script: go to Power Query, select your query, click Advanced Editor on the ribbon, copy the entire script from the Advanced Editor window, come back here, reply, click the script/code button in the message, and paste.

 

If this post helps to answer your questions, please consider marking it as a solution so others can find it more quickly when faced with a similar challenge.

Proud to be a Microsoft Fabric Super User



jennratten
Super User
Super User

Hello - here is an option for returning the result.

 

I have created a sample table with similar specs.  The goal is to keep rows 2, 3, 5, 7, 8.

 

BEFORE

jennratten_0-1630088951723.png

 

AFTER

jennratten_1-1630089014823.png

 

SCRIPT

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXJ0cgaSSrE6yFzfxJKMYrCYEZDn5OwCJF3z0nMyizPQRMFcYyDD2cUVRSNCDMw1ATJcXN1gXFOQgW7uyMbGAgA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"Student ID" = _t, #"Student Name" = _t, Subject = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Student ID", Int64.Type}, {"Student Name", type text},  {"Subject", type text}}),
    #"Grouped Rows" = Table.Group(
        #"Changed Type", 
        {"Student ID", "Student Name"}, 
        {
            { "Subject", each List.First ( List.Sort ( _[Subject], Order.Descending ) ) },
            { "All", each _, type table [#"Student ID" = nullable number, #"Student Name" = nullable text, Subject = nullable text]}}
    )
in
    #"Grouped Rows"

 

 

If this post helps to answer your questions, please consider marking it as a solution so others can find it more quickly when faced with a similar challenge.

Proud to be a Microsoft Fabric Super User

Anonymous
Not applicable

Hi, @jennratten .

 

Thank you for your solution.

 

I am not that advanced a user in Power Query; that's why I don't fully understand what the code does. Could you please explain what the code does so that I can adjust it to my case? Why do I actually need the Student ID index column, and what does it do? Do I also need to create an index column in my case?

 

Regards  

Hi - no problem at all.  You don't need an index column.  That just happened to be the column in my sample data that I wanted to use as the column from which duplicates should be removed.  See this new script below.  I have added comments and examples of what you need to change to make it work for you.  Please let me know if you have any other questions.

 

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXJ0cgaSSrE6yFzfxJKMYrCYEZDn5OwCJF3z0nMyizPQRMFcYyDD2cUVRSNCDMw1ATJcXN1gXFOQgW7uyMbGAgA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"Student ID" = _t, #"Student Name" = _t, Subject = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Student ID", Int64.Type}, {"Student Name", type text},  {"Subject", type text}}),
    #"Grouped Rows" = Table.Group(
        #"Changed Type", 
        // Column(s) containing the values from which you'd like duplicates removed.  
        // Replace this with the NVE numbers column. {"NVE-Nr."}
        {"Student ID", "Student Name"}, 
        {
            // Column that contains blank/empty/null values that should be removed if 
            // another row exists that is not blank/empty/null.
            // "Subject" will be the name of the new column.
            // _[Subject] is the column in the current table whose values should be evaluated.
            // Your column name in the current table will need to be referenced a little differently
            // since it contains a hyphen.
            // { "LHM-Nr.", each List.First ( List.Sort ( _[#"LHM-Nr."], Order.Descending ) ) },            
            { "Subject", each List.First ( List.Sort ( _[Subject], Order.Descending ) ) },
            { "All", each _, type table }}
    )
in
    #"Grouped Rows"

If this post helps to answer your questions, please consider marking it as a solution so others can find it more quickly when faced with a similar challenge.

Proud to be a Microsoft Fabric Super User

Anonymous
Not applicable

Hey,

 

thanks again for your help.

 

It worked, but my goal is to remove the duplicates and also retain all the information in the other columns. Right now I only have the two columns of NVE and LHM numbers, plus grouped tables that still contain the duplicate values along with all the information I need. When I expand the tables I get all the duplicates again -- it seems like I have to go over it all again. Is there a way to remove the duplicates (as you already showed) and keep all the other columns next to the deduplicated data, so that I end up with a clean table with all the information I need?

SergeyShelest_0-1630179176344.png

 

Regards

You can include all columns that you want to retain as a list in the second argument of the Table.Group function, like this: 

{"NVE-Nr.", "Next Column", "Next Column"} 

 

What you are doing there is creating a list of the column names to return, where each column name is inside double-quotes, the columns are separated by a comma, and wrapped in curly braces.

 

Alternatively, if you want to keep all columns, you could use this as the 2nd argument. Table.ColumnNames returns the list of a table's column names, and List.RemoveItems does just what it says - it removes the given names from that list - so you can add a new column for that value.

 

List.RemoveItems(Table.ColumnNames(PriorStep), {"LHM-Nr."})
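Putting those two pieces together, a sketch of the grouped step could look like this (PriorStep is a placeholder for whatever your previous step is actually named; the sort idiom mirrors the earlier reply, and because null sorts below any value in M, a descending sort prefers filled LHM-Nr. values over empty ones):

```
// PriorStep stands in for the last step of your existing query.
GroupKeys = List.RemoveItems ( Table.ColumnNames ( PriorStep ), { "LHM-Nr." } ),
Grouped = Table.Group (
    PriorStep,
    GroupKeys,   // group by every column except LHM-Nr.
    {
        // Keep the "largest" LHM-Nr. per group; with a descending sort,
        // non-null values come before nulls, so a filled value is preferred.
        { "LHM-Nr.", each List.First ( List.Sort ( _[#"LHM-Nr."], Order.Descending ) ) }
    }
)
```

Note that grouping by every other column only collapses rows that are identical except for LHM-Nr.; if other columns can also differ between duplicates, group only by the key column(s) instead.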

If this post helps to answer your questions, please consider marking it as a solution so others can find it more quickly when faced with a similar challenge.

Proud to be a Microsoft Fabric Super User

Anonymous
Not applicable

Hi,

 

Your first solution produced exactly the same table as before grouping, with all the duplicates I wanted to get rid of. The second one led to an error saying the Date column can't be converted to a function.

 

Do you have any hint as to why that happened? I feel even more lost now.

 

Here is a screenshot.

SergeyShelest_0-1630242592842.png

 

Regards

Okay, please try this.  I have included comments that explain each step.

 

BEFORE: 

The goal is to remove rows 1, 3, 6, 8 and to have all columns present in the result.

jennratten_0-1630267603190.png

 

RESULT:

jennratten_1-1630267728521.png

 

SCRIPT:

There are two different options in the grouped step, along with comments on when each applies based on the specifics of the source data. Currently, option 2 is in use. To switch to option 1, add two forward slashes in front of Table.LastN and remove the two slashes at the beginning of the let varTable and Table.FirstN lines.

let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("jcy5DcAgEETRXjZGaHbBVwi+irDovw0vJiFAmOQjxGOeh0IIZIgB6BHiroW3YCsQ1kt+pGT+ISNXKiu9UTcEGaL1n40x5n/F7sfZGJ0q6HtwHoIMp10qOxV7XndjdB2CDK/dKKUX", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Latest = _t, #"NVE-Nr." = _t, Item = _t, Date = _t, #"LHM-Nr." = _t, Index = _t]),
    grouped = Table.Group(
        Source, 
        // Column(s) containing the values from which you'd like duplicates removed.  
        {"NVE-Nr."},
        {
            { "Table", // Name of the new column.
                each 
                //---------------------------------------------------------
                // Option 1: Sort descending, then select the first result.
                //---------------------------------------------------------
                // This will work if there is a maximum of only two rows per NVE-Nr.
                //let varTable = Table.Sort ( _, {{"LHM-Nr.", Order.Descending}}) in
                //Table.FirstN ( varTable, 1 ),
                //---------------------------------------------------------
                // Option 2: Select the last LHM-Nr. for each NVE-Nr.
                //---------------------------------------------------------
                // This will work if the row to keep always appears last in the group.
                Table.LastN ( _, 1 ),                
                type table
            }
        }
    ),
    expand = Table.ExpandTableColumn ( 
        grouped, 
        "Table",                                  // Expand the tables in this column
        List.Difference (                         // New column names
            Table.ColumnNames (                   // are the column names 
                Table.Combine ( grouped[Table] )  // in the nested tables 
            ), 
            Table.ColumnNames ( grouped )         // that do not appear in the grouped table.
        ) 
    )

in
    expand
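If the row you want to keep is not guaranteed to be last in its group, a third option (my own hedged variant, not part of the script above) is to filter each group for rows with a non-empty LHM-Nr. and fall back to the first row when none exists:

```
// Drop-in replacement for the aggregation inside the grouped step:
{ "Table",
    each
        let
            // Rows of this group that actually have an LHM-Nr. value.
            nonEmpty = Table.SelectRows ( _, each [#"LHM-Nr."] <> null and [#"LHM-Nr."] <> "" )
        in
            // Prefer a row with a value; otherwise keep the first row of the group.
            if Table.RowCount ( nonEmpty ) > 0
                then Table.FirstN ( nonEmpty, 1 )
                else Table.FirstN ( _, 1 ),
    type table
}
```

This variant does not depend on row order at all, so it keeps working even if later cleanup steps reshuffle the rows.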

 

 

 

If this post helps to answer your questions, please consider marking it as a solution so others can find it more quickly when faced with a similar challenge.

Proud to be a Microsoft Fabric Super User

Anonymous
Not applicable

Hi, @jennratten .

 

Please accept my apologies for my late reply, but I would like to get back to my thread and leave feedback regarding my question.

 

I want to thank you for your help and say that this solution worked for me. I had to figure out a few things, but your help and insights were a great help. Happy to be a member of this wonderful community.

 

Regards,
