Hi!
I combined many CSV files, and now I need to compute (in M, because I'll have to pivot a column afterwards) a new column that increases by 1 after every row where the [Stop] value is 1. The Calc1 column below shows the expected result. I tried = Table.AddColumn(Source, "Event Index", each List.Sum(List.Range(Source[Stop], 0, [Index]))), but since the table is a combination of CSVs it takes too long and crashes: the expression re-scans the column for every row, behaving like a cross join across every file combination. So I need an alternative, ideally a function that only checks the immediately preceding row and adds 1 when its Stop value is 1.
Note that there isn't a fixed number of rows between one Stop = 1 and the previous one.
Thank you very much
Stop | Calc1 |
---|---|
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
0 | 0 |
1 | 0 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
0 | 1 |
1 | 1 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
0 | 2 |
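To make the expected result explicit: Calc1 is a running count that increments on the row *after* each Stop = 1, which is why a single pass over the column is enough. The List.Range approach re-reads the column for every row, so it is quadratic in the row count. A quick sketch of the single-pass logic in Python (illustrative only, not M; the function name is mine):

```python
def running_event_index(stop):
    """Running count that increments on the row AFTER each Stop = 1."""
    level, out = 0, []
    for flag in stop:
        out.append(level)   # emit the current level first ...
        if flag == 1:
            level += 1      # ... then bump it for all subsequent rows
    return out

# The sample data above: 10 zeros, a 1, 11 zeros, a 1, 11 zeros
stop = [0] * 10 + [1] + [0] * 11 + [1] + [0] * 11
calc1 = running_event_index(stop)
```

This reproduces the Calc1 column shown in the table in one linear scan.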
Hi @AGo
please check out this solution (paste the code into the advanced editor and follow the steps):
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WMlCK1SGXNKRAL9XNiQUA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Stop = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Stop", Int64.Type}}),
    Custom1 = List.Buffer( #"Changed Type"[Stop] ),
    Calc1 = List.Generate( () =>
        [Level = 0, Counter = 0],
        each [Counter] < List.Count(Custom1),
        each [
            Level = if Custom1{[Counter]} = 1 then [Level] + 1 else [Level],
            Counter = [Counter] + 1
        ],
        each [Level] ),
    AutomaticConversionWithFullTable = Table.FromColumns( Table.ToColumns(#"Changed Type") & {Calc1}, Table.ColumnNames(#"Changed Type") & {"Calc1"} )
in
    AutomaticConversionWithFullTable
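The List.Generate step above is effectively a single-pass state machine over the buffered Stop list: each iteration emits the current Level and bumps it after seeing a 1, so the whole column is produced in one linear scan instead of one List.Range scan per row. The same one-pass idea can also be expressed as a shifted cumulative sum; here is that variant sketched in Python (illustrative only, not part of the M solution):

```python
from itertools import accumulate

stop = [0, 0, 0, 1, 0, 0, 1, 0]

# Cumulative sum of the flags, shifted down one row so that each
# increment takes effect on the row AFTER the Stop = 1 row.
calc1 = [0] + list(accumulate(stop))[:-1]
```

The shift (prepend 0, drop the last element) is what makes the count increase *after* the flagged row rather than on it.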
Imke Feldmann (The BIccountant)
If you liked my solution, please give it a thumbs up. And if I did answer your question, please mark this post as a solution. Thanks!
How to integrate M-code into your solution -- How to get your questions answered quickly -- How to provide sample data -- Check out more PBI learning resources here -- Performance Tips for M-queries
Hi @ImkeF ,
at first your solution worked, but it caused the error "Expression.Error: Evaluation resulted in a stack overflow and cannot continue." right after I used the pivot function on another column of the same table (in "Don't Aggregate" mode).
Then I applied your function in the example-file transformation step, before combining the files, and concatenated the result with the file-name column, and that worked.
I don't know why it returned that error; maybe the function is too heavy for a massive calculation after combining the files.
Thanks
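The workaround described here, computing the running index inside the per-file transform so each file carries its own counter before the files are combined, can be sketched as follows in Python (data and names are hypothetical, purely to illustrate the per-file grouping):

```python
# Hypothetical combined rows of (source file, Stop flag),
# mimicking the output of a combine-files step.
rows = [("a.csv", 0), ("a.csv", 1), ("a.csv", 0),
        ("b.csv", 0), ("b.csv", 1), ("b.csv", 0)]

levels = {}    # current running index per source file
indexed = []
for fname, stop in rows:
    level = levels.get(fname, 0)
    indexed.append((fname, stop, level))  # emit before incrementing
    if stop == 1:
        levels[fname] = level + 1         # takes effect on the file's next row
```

Each file restarts at 0 and increments independently, which matches applying the transform per file and then concatenating with the file-name column.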