Hi all,
Could I get some help with this table transformation?
ID | Name | Value |
1 | Category | Shipment |
1 | Item | A |
1 | Price | 100 |
1 | Item | B |
1 | Price | 200 |
1 | Item | C |
2 | Category | Delivery |
2 | Item | A |
2 | Price | 150 |
Desired Output:
ID | Category | Item | Price |
1 | Shipment | A | 100 |
1 | Shipment | B | 200 |
1 | Shipment | C | |
2 | Delivery | A | 150 |
Your help would be much appreciated!
Given your most recent set of data with four columns, you can modify my code to shift the Cost column by 2 to get your desired results.
let
Source = Excel.CurrentWorkbook(){[Name="Table5"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", Int64.Type}, {"Name", type text}, {"Value", type any}}),
//add index column which will retain rows in original order
#"Added Index" = Table.AddIndexColumn(#"Changed Type", "Index", 0, 1, Int64.Type),
//custom pivot
pivot = fnPivotAll(#"Added Index","Name","Value"),
//There will be some extra nulls as a result of above
//Each column will be offset by one (1) from the previous column
//So we readjust back
offsetPrice = Table.FromColumns(
Table.ToColumns(Table.SelectColumns(pivot,{"ID","Category","Item"})) &
{List.RemoveFirstN(pivot[Price],1) & {null}} &
{List.RemoveFirstN(pivot[Cost],2) & {null,null}},
type table[ID=Int64.Type, Category=text, Item=text,Price=Currency.Type,Cost=Currency.Type]),
//Now we can fill down the category column and filter out the nulls
#"Filled Down" = Table.FillDown(offsetPrice,{"Category"}),
#"Filtered Rows" = Table.SelectRows(#"Filled Down", each ([Item] <> null)),
//Merge the Category and Item columns for final report
#"Merged Columns" = Table.CombineColumns(#"Filtered Rows",{"Category", "Item"},
Combiner.CombineTextByDelimiter(" ", QuoteStyle.None),"Category Item")
in
#"Merged Columns"
However, if the number of columns is dynamic, then we should write a custom function that shifts each column by the appropriate amount, depending on how many "extra" columns there are.
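To illustrate the idea outside of M, here is a rough Python sketch of that shift (the function name and the dict-of-lists table representation are just for illustration): the i-th value column produced by the non-aggregated pivot lags by i rows, so we drop its first i entries and pad the end with nulls.

```python
def shift_value_columns(table, value_cols):
    """table: dict mapping column name -> list of cell values.
    value_cols: the pivoted value columns, in order; the i-th one
    (1-based) is offset by i rows after the non-aggregated pivot,
    so we drop its first i entries and pad with None at the end,
    mirroring List.RemoveFirstN(...) & {null, ...} in the M code."""
    out = dict(table)
    for i, name in enumerate(value_cols, start=1):
        out[name] = table[name][i:] + [None] * i
    return out

# Example: after the pivot, Price lags Item by one row.
pivoted = {
    "Item":  ["A", None, "B", None],
    "Price": [None, 100, None, 200],
}
aligned = shift_value_columns(pivoted, ["Price"])
# aligned["Price"] is now lined up with the Item column
```

With a fourth column such as Cost, the same loop would shift it by 2, which is exactly what the M code above does by hand.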
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXJOLElNzy+qBDKDMzILclPzSpRidSByniWpuUDKES4QUJSZnAqkDQ0M0BU5YSgywlTkjFOREapTXFJzMstSgUyYHIpTjJCdYgo0IBYA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Name = _t, Value = _t]),
Custom1 = Table.ExpandTableColumn(Table.Group(Source,"ID",{"n",each Table.Combine(List.Transform(Table.Split(Table.Skip(_),2),(x)=>Table.AddColumn(Table.PromoteHeaders(Table.Skip(Table.Transpose(x))),[Name]{0},(x)=>[Value]{0})))}),"n",{"Category","Item","Price"})
in
Custom1
@wdx223_Daniel The script looks great, but it is not working in my dataset. In my dataset, as in the revised question, Item C may not have a "Price".
Besides, there are 22 distinct values in "Name".
How could I modify your code in this case? Greatly appreciated!
It seems my code is OK for your case.
@wdx223_Daniel , I found the data get scrambled if there are more than 4 columns in the script.
I would appreciate it if you could take a look.
Given:
The problem with a non-aggregated pivot in Power Query is that it errors when there are multiple entries for the same category.
This is handled nicely by a custom function developed by Cam Wallace, which I renamed fnPivotAll for use in other queries:
//credit: Cam Wallace https://www.dingbatdata.com/2018/03/08/non-aggregate-pivot-with-multiple-rows-in-powerquery/
(Source as table,
ColToPivot as text,
ColForValues as text)=>
let
PivotColNames = List.Buffer(List.Distinct(Table.Column(Source,ColToPivot))),
#"Pivoted Column" = Table.Pivot(Source, PivotColNames, ColToPivot, ColForValues, each _),
TableFromRecordOfLists = (rec as record, fieldnames as list) =>
let
PartialRecord = Record.SelectFields(rec,fieldnames),
RecordToList = Record.ToList(PartialRecord),
Table = Table.FromColumns(RecordToList,fieldnames)
in
Table,
#"Added Custom" = Table.AddColumn(#"Pivoted Column", "Values", each TableFromRecordOfLists(_,PivotColNames)),
#"Removed Other Columns" = Table.RemoveColumns(#"Added Custom",PivotColNames),
#"Expanded Values" = Table.ExpandTableColumn(#"Removed Other Columns", "Values", PivotColNames)
in
#"Expanded Values"
So in your problem it would be used like:
let
//data was pasted into a table
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlTSUXJOLElNzy+qBDKDMzILclPzSpRidSByniWpuUDKES4QUJSZnAqkDQ0M0BU5YSgywlTkjFOREapTXFJzMstSgUyYHIpTjJCdYgo0IBYA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [ID = _t, Name = _t, Value = _t]),
//custom function to do a non-aggregated pivot
pivotAll = fnPivotAll(Source,"Name","Value"),
//fill down the Category column then merge with the Item column
#"Filled Down" = Table.FillDown(pivotAll,{"Category"}),
#"Merged Columns" = Table.CombineColumns(#"Filled Down",{"Category", "Item"},
Combiner.CombineTextByDelimiter(" ", QuoteStyle.None),"Category Item"),
//set the data type for the Price column
typePrice = Table.TransformColumnTypes(#"Merged Columns",{"Price",Currency.Type})
in
typePrice
Resulting in:
@ronrsnfld , thanks, but it is not working in my model.
I found that the problem is that my dataset looks like this: there is no Price row for Item C,
and the data get scrambled after running the function.
ID | Name | Value |
1 | Category | Shipment |
1 | Item | A |
1 | Price | 100 |
1 | Item | B |
1 | Price | 200 |
1 | Item | C |
2 | Category | Delivery |
2 | Item | A |
2 | Price | 150 |
Do you think it is possible to fix? Much appreciated!
Your example with a missing `C Price` doesn't seem to show the problem.
Before:
After:
I tried again, but it shows like that: there is a "Price" for "Shipment C". Would you mind looking into any steps I may have missed? Sample_Pivot Function
Really appreciate that!
I don't understand why there are different results in Excel PQ vs Power BI PQ, but here is code that should work in any event.
let
Source = Excel.CurrentWorkbook(){[Name="Table5"]}[Content],
#"Changed Type" = Table.TransformColumnTypes(Source,{{"ID", Int64.Type}, {"Name", type text}, {"Value", type any}}),
//add index column which will retain rows in original order
#"Added Index" = Table.AddIndexColumn(#"Changed Type", "Index", 0, 1, Int64.Type),
//custom pivot
pivot = fnPivotAll(#"Added Index","Name","Value"),
//There will be some extra nulls as a result of above
//Each column will be offset by one (1) from the previous column
//This is only an issue with the Price column so we will offset that back
offsetPrice = Table.FromColumns(
Table.ToColumns(Table.SelectColumns(pivot,{"ID","Category","Item"})) &
{List.RemoveFirstN(pivot[Price],1) & {null}},
type table[ID=Int64.Type, Category=text, Item=text,Price=Currency.Type]),
//Now we can fill down the category column and filter out the nulls
#"Filled Down" = Table.FillDown(offsetPrice,{"Category"}),
#"Filtered Rows" = Table.SelectRows(#"Filled Down", each ([Item] <> null)),
//Merge the Category and Item columns for final report
#"Merged Columns" = Table.CombineColumns(#"Filtered Rows",{"Category", "Item"},
Combiner.CombineTextByDelimiter(" ", QuoteStyle.None),"Category Item")
in
#"Merged Columns"
Results in Excel
This is weird and will require further investigation.
If the query is run using Power BI desktop, it comes out OK. This is a Power BI forum, after all <g>
If the query is run using Excel, it is messed up.
I don't have time today to examine it more closely, but I will look into it later.
@ngct1112 you can transform it this way
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8nRR0lHyS8xNBVJhiTmlqUqxOtFKhkCec2JJanp+USWQGZyRWZCbmlcCl/MsSc0FUo5wgYCizGSQEYYGBuiKnDAUGWEqcsapyAjVKS6pOZllqUAmTA7FKUbITjEFGhALAA==", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Column1 = _t, Column2 = _t, Column3 = _t]),
#"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
#"Filtered Rows" = Table.SelectRows(#"Promoted Headers", each ((try Number.From([Value]) otherwise -999) =-999)),
#"Added Custom" = Table.AddColumn(#"Filtered Rows", "Custom", each if [Name]="Category" then [Value] else null),
#"Filled Down" = Table.FillDown(#"Added Custom",{"Custom"}),
#"Filtered Rows1" = Table.SelectRows(#"Filled Down", each ([Name] = "Item")),
#"Removed Columns" = Table.RemoveColumns(#"Filtered Rows1",{"Name"}),
#"Merged Columns" = Table.CombineColumns(#"Removed Columns",{"Custom", "Value"},Combiner.CombineTextByDelimiter(" ", QuoteStyle.None),"Attribute"),
_1 = Table.ToColumns(#"Merged Columns"),
_2 = Table.SelectRows(#"Promoted Headers", each ((try Number.From([Value]) otherwise -999) <>-999))[Value],
colName = List.Combine({Table.ColumnNames(#"Merged Columns"),{"Value"}}),
Custom1 = Table.FromColumns(_1&{_2},colName),
#"Changed Type" = Table.TransformColumnTypes(Custom1,{{"Value", Int64.Type}})
in
#"Changed Type"