Hi guys,
I'm trying to find a better way to do what I'm currently doing.
I want to transform and visualize data like the following, from an Excel file that is generated automatically by a system:
| Project ID | Steep | Date started | Date Completed |
|---|---|---|---|
| 1 | Contact | 01/06/2020 | 01/06/2020 |
| 1 | Assesment | 05/06/2020 | 05/06/2020 |
| 2 | Contact | 15/06/2020 | 15/06/2020 |
| 2 | Assesment | 20/06/2020 | 20/06/2020 |
| 3 | Contact | 01/07/2020 | 01/07/2020 |
| 3 | Assesment | 01/07/2020 | 02/07/2020 |
| 1 | Teste | 01/07/2020 | 01/07/2020 |
| 3 | Teste | 02/07/2020 | 02/07/2020 |
| 1 | Contract | 02/07/2020 | 02/07/2020 |
| 1 | Implement | 03/07/2020 | 03/07/2020 |
| 2 | Teste | 04/07/2020 | 04/07/2020 |
| 3 | Contract | 05/07/2020 | 05/07/2020 |
| 2 | Contract | 08/07/2020 | 08/07/2020 |
| 3 | Implement | 10/07/2020 | 10/07/2020 |
| 1 | KPI OK | 14/07/2020 | 14/07/2020 |
| 3 | KPI OK | 15/07/2020 | 15/07/2020 |
| 2 | Implement | | |
| 2 | KPI OK | | |
I want to show it in Power BI as a table like this:
| Project ID | Contact | Assesment | Teste | Contract | Implement | KPI OK |
|---|---|---|---|---|---|---|
| 1 | 01/06/2020 | 05/06/2020 | 01/07/2020 | 02/07/2020 | 03/07/2020 | 14/07/2020 |
| 2 | 15/06/2020 | 20/06/2020 | 04/07/2020 | 08/07/2020 | | |
| 3 | 01/07/2020 | 02/07/2020 | 02/07/2020 | 05/07/2020 | 10/07/2020 | 15/07/2020 |
What I'm doing at the moment is importing the same file six times and filtering "Steep" in each import by Contact, Assesment, etc.
Then I connect the tables with relationships on "Project ID" and build the table shown above.
I believe there is a better way to do this, so that I don't have to import the same file six times, filter each copy, and create relationships between the six imports.
Do you have any suggestions?
Hi @wolfy_ ,
I believe the best option would be to use a matrix visualization, placing the Steep column on the matrix columns and doing the calculation and filtering in a measure.
However, if you want a single pivoted table from your one file, what you need is some advanced pivoting.
Check the code below:
```
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("jc8xD4IwEAXgv0KYSegdVFiNE2HQwY0wGNNN0Nj+/2AR8F0uUQcSrv3y7rXrUkqz9HAfw+UaXn+GcrPL2bCRQ5+95d575wc3ztaitcKySCWUpCWmsgELQ7SF6lph10pK0VVYFja+6+x8cH9kbo6/58WWz6XmD9oMj5tbaxZoC2EZ15foSlUT11ukVkUirZHWKhWbkgELw/qq9tQkxzbeYVXSVT8Qi5IuituT+VtvtojluJ8A", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [#"Project ID" = _t, Steep = _t, #"Date started" = _t, #"Date Completed" = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source,{{"Project ID", Int64.Type}, {"Steep", type text}, {"Date started", type date}, {"Date Completed", type date}}),
    #"Removed Columns1" = Table.RemoveColumns(#"Changed Type",{"Date Completed"}),
    #"Unpivoted Columns" = Table.UnpivotOtherColumns(#"Removed Columns1", {"Project ID", "Steep"}, "Attribute", "Value"),
    #"Removed Duplicates" = Table.Distinct(#"Unpivoted Columns"),
    #"Added Index" = Table.AddIndexColumn(#"Removed Duplicates", "Index", 0, 1, Int64.Type),
    #"Grouped Rows" = Table.Group(#"Added Index", {"Steep", "Project ID"}, {{"Count", each _, type table [Project ID=nullable number, Steep=nullable text, Attribute=text, Value=date, Index=number]}}),
    #"Duplicated Column" = Table.DuplicateColumn(#"Grouped Rows", "Count", "Count - Copy"),
    #"Aggregated Count" = Table.AggregateTableColumn(#"Duplicated Column", "Count", {{"Index", List.Min, "Min of Count.Index"}}),
    #"Expanded Count - Copy" = Table.ExpandTableColumn(#"Aggregated Count", "Count - Copy", {"Attribute", "Index", "Value"}, {"Count - Copy.Attribute", "Count - Copy.Index", "Count - Copy.Value"}),
    #"Added Custom" = Table.AddColumn(#"Expanded Count - Copy", "Custom", each [#"Count - Copy.Index"]-[Min of Count.Index]+1),
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom",{"Min of Count.Index", "Count - Copy.Index"}),
    #"Pivoted Column" = Table.Pivot(#"Removed Columns", List.Distinct(#"Removed Columns"[Steep]), "Steep", "Count - Copy.Value"),
    #"Removed Columns2" = Table.RemoveColumns(#"Pivoted Column",{"Count - Copy.Attribute", "Custom"})
in
    #"Removed Columns2"
```
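As a cross-check of the result (not part of the Power BI solution), the same reshape can be sketched in pandas: unpivot is unnecessary there because the source is already long-format, so a single pivot with a "first date per project/step" aggregation does the job. Column and step names are copied from the thread, including the original spellings "Steep", "Assesment", and "Teste".

```python
import pandas as pd

# Rows mirroring the Excel export (dd/mm/yyyy dates); the two project-2 rows
# without dates are omitted, matching the blanks in the source table.
rows = [
    (1, "Contact", "01/06/2020"), (1, "Assesment", "05/06/2020"),
    (2, "Contact", "15/06/2020"), (2, "Assesment", "20/06/2020"),
    (3, "Contact", "01/07/2020"), (3, "Assesment", "01/07/2020"),
    (1, "Teste", "01/07/2020"), (3, "Teste", "02/07/2020"),
    (1, "Contract", "02/07/2020"), (1, "Implement", "03/07/2020"),
    (2, "Teste", "04/07/2020"), (3, "Contract", "05/07/2020"),
    (2, "Contract", "08/07/2020"), (3, "Implement", "10/07/2020"),
    (1, "KPI OK", "14/07/2020"), (3, "KPI OK", "15/07/2020"),
]
df = pd.DataFrame(rows, columns=["Project ID", "Steep", "Date started"])
df["Date started"] = pd.to_datetime(df["Date started"], dayfirst=True)

# One pivot replaces the six filtered imports: each distinct step becomes a
# column, keeping the first start date per (project, step) pair; missing
# steps (e.g. project 2's Implement / KPI OK) come out as NaT.
wide = df.pivot_table(index="Project ID", columns="Steep",
                      values="Date started", aggfunc="first")
print(wide)
```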
I've also attached the PBIX file.
Regards
Miguel Félix
Proud to be a Super User!
Check out my blog: Power BI em Português