Spotto
Helper IV

SHOW ITEM QUANTITY PER RULE

Hello everyone, I need help in Power Query. I have two tables.

I'll put a simplified example below.
The ITEM column is the result of a calculation: it gives the number of CODE rows that must be allocated to each NAME and REGION.

--> RULES:
a) - The amount allocated to each NAME and REGION cannot be greater than the ITEM column.
Example: each name must be given two (2) CODEs (ITEM column).
b) - A CODE cannot repeat across different names; each CODE must be allocated to at most one NAME.
c) - If the region has fewer rows than ITEM, you must allocate rows from DIFFERENT REGIONS to make up the difference.

d) - The distribution of REGION and CODE can be done randomly for each NAME and REGION.

 

REGION  VALUE  CODE    ITEM
MS      54     QQWWW   2
MS      78     UUIUIU  2
MS      667    EEERHT  2
MS      54     YYRYRY  2
MS      87     TTTRRT  2
MS      98     GGGF    2
MS      355    BBBFBF  2
AB      54     ADR4    2

 

 

NAME      REGION
MARCELO   MS
ROBERTA   AB
JONATHAN  MS

 

 

RESULT

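To make the rules concrete, here is a hypothetical Python sketch of an allocation that satisfies rules a)-d) (the data mirrors the two tables above; the accepted solution below does this in Power Query M):

```python
# Hypothetical greedy allocation per rules a)-d): each name gets ITEM rows,
# preferring its own region (rule a), never reusing a CODE (rule b), and
# topping up from other regions when its own runs short (rule c).
regions = [
    ("MS", 54, "QQWWW", 2), ("MS", 78, "UUIUIU", 2), ("MS", 667, "EEERHT", 2),
    ("MS", 54, "YYRYRY", 2), ("MS", 87, "TTTRRT", 2), ("MS", 98, "GGGF", 2),
    ("MS", 355, "BBBFBF", 2), ("AB", 54, "ADR4", 2),
]
names = [("MARCELO", "MS"), ("ROBERTA", "AB"), ("JONATHAN", "MS")]

def allocate(regions, names):
    pool = list(regions)              # a CODE row is handed out at most once
    result = []
    for name, region in names:
        item = 2                      # ITEM: rows each name must receive
        own = [r for r in pool if r[0] == region][:item]
        for r in own:
            pool.remove(r)
        extra = pool[:item - len(own)]  # rule c): borrow from other regions
        for r in extra:
            pool.remove(r)
        result += [(name, region, *r) for r in own + extra]
    return result
```

With the sample data, every name ends up with exactly two rows and no CODE is repeated; ROBERTA gets the single AB row plus one borrowed MS row.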

 

I hope I explained it as well as possible.

 

Thank you for your help

1 ACCEPTED SOLUTION
jbwtp
Memorable Member

Hi @Spotto,

 

This is the code to do the above:

let
    dtRegions = 
        let
            Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8g1W0lEyNQESgYHh4eFA2kgpVgcqbm4BJEJDPYEQVcLMzBxIurq6BnmEoMqAjYqMDAJCVAkLkI6QkJCgIDQdliBL3N3d3VCFjU1NgaSTk5ObE0LG0QlmhaNLEIgyVoqNBQA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [REGION = _t, VALUE = _t, CODE = _t, ITEM = _t]),
            #"Changed Type" = Table.TransformColumnTypes(Source,{{"REGION", type text}, {"VALUE", Int64.Type}, {"CODE", type text}, {"ITEM", Int64.Type}})
        in
            #"Changed Type",
    dtRegionsToNames = 
        let
            Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8nUMcnb18VfSUfINVorViVYK8ndyDQpxBAo4OoEFvPz9HEM8HP2gSmIB", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [NAME = _t, REGION = _t]),
            #"Changed Type" = Table.TransformColumnTypes(Source,{{"NAME", type text}, {"REGION", type text}})
        in
            #"Changed Type",
    dtRegionsToNamesEx = Table.AddColumn(dtRegionsToNames, "ITEM", each Number.RoundUp(Table.RowCount(dtRegions)/Table.RowCount(dtRegionsToNames), 0), Int64.Type),
    f = (regs as list, regName as text, regCount as number) as list => 
        let 
            // take up to regCount rows from the name's own region first
            direct = List.Buffer(List.FirstN(List.Select(regs, each [REGION] = regName), regCount)),
            regsPass = List.RemoveItems(regs, direct),
            // rule (c): top up from other regions when the own region runs short
            indirect = if List.Count(direct) < regCount then List.FirstN(regsPass, regCount - List.Count(direct)) else {},
            pass = List.RemoveItems(regsPass, indirect),
            // out = {remaining unallocated rows, rows allocated to this name}
            out = {pass, direct & indirect}
        in out,
    process = List.Skip(List.Accumulate(Table.ToRecords(dtRegionsToNamesEx), {[Data = {Table.ToRecords(dtRegions)}]}, (a,n)=> a & {n & [Data = f(List.Last(a)[Data]{0}, n[REGION], n[ITEM]), Records = Table.FromRecords(Data{1})]})),
    FromRecords = Table.FromRecords(process),
    #"Expanded Records" = Table.ExpandTableColumn(FromRecords, "Records", {"REGION", "VALUE", "CODE"}, {"REGION.ACTUAL", "VALUE", "CODE"}),
    #"Removed Other Columns" = Table.SelectColumns(#"Expanded Records",{"NAME", "REGION", "ITEM", "VALUE", "REGION.ACTUAL", "CODE"})
in
    #"Removed Other Columns"
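For readers less familiar with `List.Accumulate`, the `f`/`process` pattern above can be mirrored in Python (a hypothetical, simplified analogue using (REGION, CODE) pairs, not part of the actual M solution): each fold step carves one name's allocation out of the pool and passes the leftover pool to the next name.

```python
# Hypothetical Python analogue of the List.Accumulate step in the M code:
# state is (remaining pool, rows allocated so far); each step allocates
# one name and threads the shrunken pool forward.
from functools import reduce

def f(pool, region, count):
    # prefer rows from the name's own region, then top up from the rest
    direct = [r for r in pool if r[0] == region][:count]
    rest = [r for r in pool if r not in direct]
    indirect = rest[:count - len(direct)]
    leftover = [r for r in rest if r not in indirect]
    return leftover, direct + indirect

def step(acc, name_row):
    pool, rows = acc
    name, region, count = name_row
    pool, taken = f(pool, region, count)
    return pool, rows + [(name, r) for r in taken]

pool = [("MS", "QQWWW"), ("MS", "UUIUIU"), ("MS", "EEERHT"), ("AB", "ADR4")]
names = [("MARCELO", "MS", 2), ("ROBERTA", "AB", 2)]
_, allocated = reduce(step, names, (pool, []))
# MARCELO gets the first two MS rows; ROBERTA gets ADR4 plus one MS row
```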

 

Cheers,

John


9 REPLIES
jbwtp
Memorable Member

Hi @Spotto,

 

Sorry, could you please clarify? In your example, I could not figure out:

  1. Why has the first line (CODE = QQWWW) disappeared?
  2. I think I understand the next two lines as follows: (following the sequence in the Names table) we allocate AB to ROBERTA, and because there is only one AB row in the table (I guess the row count is what you call the amount), we allocate the next consecutively available row in the Region table (CODE = YYRYRY) to ROBERTA to make up the 2 lines per name that the AB region suggests.
    1. I guess by rule (d) the row which we assign to ROBERTA could be anything (as long as it is not already assigned by other rules to someone else), not necessarily the next consecutive one?
  3. If the above is correct, what would happen if REGION = MS had ITEM = 1 or 3? In the case of 3, would we then need another line to make up for MS's ITEM rule? Or are we always guided by the REGION assigned to the NAME?
  4. Why do we skip the next available row (CODE = TTTRRT) and start assigning rows to JONATHAN from CODE = GGGF?

 

Thanks,

John

@jbwtp thanks for the help. Answering your questions:

 

1. Why has the first line (CODE = QQWWW) disappeared?

--> The code has not disappeared. The main rule is to distribute the CODEs by the amount defined in the ITEM column, per NAME and REGION, which can be done randomly. In the example ITEM = 2, so each name must receive any CODEs that are from its REGION.
Example:
MARCELO receives two (2) CODEs, only from REGION MS; it can be any code from the MS region.

 

2. I think I understand the next two lines as follows: (following the sequence in the Names table) we allocate AB to ROBERTA, and because there is only one AB row in the table (I guess the row count is what you call the amount), we allocate the next consecutively available row in the Region table (CODE = YYRYRY) to ROBERTA to make up the 2 lines per name that the AB region suggests.

  1. I guess by rule (d) the row which we assign to ROBERTA could be anything (as long as it is not already assigned by other rules to someone else), not necessarily the next consecutive one?

--> Yes, that's right. In the example I have to distribute 2 ITEMs (this value changes with each BI update) to Roberta in the AB region; however, we have only one CODE for AB, so we can take any other CODE to reach a total of 2 (ITEM = 2).

 

  3. If the above is correct, what would happen if REGION = MS had ITEM = 1 or 3? In the case of 3, would we then need another line to make up for MS's ITEM rule? Or are we always guided by the REGION assigned to the NAME?

--> In the example, MARCELO receives only from the MS REGION. If MS had only 1 line, the names MARCELO (MS) and JONATHAN (MS) would receive other lines from other REGIONs randomly, but a CODE cannot repeat for more than one person.

 

 

  4. Why do we skip the next available row (CODE = TTTRRT) and start assigning rows to JONATHAN from CODE = GGGF?

---> Because the distribution can be random; it doesn't necessarily have to go line by line.

I hope this clears up the doubts, and thank you very much.

jbwtp
Memorable Member

Hi @Spotto,

 

The only thing that I forgot to clarify: do we have any priority for assigning CODEs? E.g. if ROBERTA with REGION = AB came first and had ITEMS = 100, would she steal all the records from MARCELO and JONATHAN? Or are matching regions assigned first, as far as we can, and then we fill the gaps with the unallocated CODEs?

 

Below I assume the former: lines are assigned on a first come, first served basis, potentially "stealing" from their apparent "owners".
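That assumption can be shown with a small, hypothetical Python sketch (pool entries are (REGION, CODE) pairs; none of these helper names exist in the M code): an early name with an oversized quota empties the pool before later names get a turn.

```python
# Hypothetical first-come-first-served allocation: earlier requests may
# "steal" rows that nominally belong to later names' regions.
def greedy(pool, requests):
    pool, out = list(pool), {}
    for name, region, count in requests:
        own = [r for r in pool if r[0] == region][:count]
        for r in own:
            pool.remove(r)
        extra = pool[:count - len(own)]   # fill the rest from any region
        for r in extra:
            pool.remove(r)
        out[name] = own + extra
    return out

pool = [("MS", "QQWWW"), ("MS", "UUIUIU"), ("AB", "ADR4")]
# ROBERTA comes first and asks for 100 rows: she takes everything,
# leaving nothing for MARCELO.
result = greedy(pool, [("ROBERTA", "AB", 100), ("MARCELO", "MS", 2)])
```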

 

let
    dtRegions = 
        let
            Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8g1W0lEyNQESgYHh4eFA2kgpVgcqbm4BJEJDPYEQVcLMzBxIurq6BnmEoMqAjYqMDAJCVAkLkI6QkJCgIDQdliBL3N3d3VCFjU1NgaSTk5ObE0LG0QlmhaNLEIgyVoqNBQA=", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [REGION = _t, VALUE = _t, CODE = _t, ITEM = _t]),
            #"Changed Type" = Table.TransformColumnTypes(Source,{{"REGION", type text}, {"VALUE", Int64.Type}, {"CODE", type text}, {"ITEM", Int64.Type}})
        in
            #"Changed Type",
    dtRegionsToNames = 
        let
            Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45W8nUMcnb18VfSUfINVorViVYK8ndyDQpxBAo4OoEFvPz9HEM8HP2gSmIB", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [NAME = _t, REGION = _t]),
            #"Changed Type" = Table.TransformColumnTypes(Source,{{"NAME", type text}, {"REGION", type text}})
        in
            #"Changed Type",
    MoveITEMStoRegionsToNamesWhereTheySupposedToBe = Table.ExpandTableColumn(Table.NestedJoin(dtRegionsToNames, {"REGION"}, Table.Group(dtRegions, {"REGION"}, {{"ITEM", each List.Max([ITEM]), type nullable number}}), {"REGION"}, "dtRegionsToNames", JoinKind.LeftOuter), "dtRegionsToNames", {"ITEM"}, {"ITEM"}),
    dtRegionsToNamesEx = MoveITEMStoRegionsToNamesWhereTheySupposedToBe,
    f = (regs as list, regName as text, regCount as number) as list => 
        let 
            // take up to regCount rows from the name's own region first
            direct = List.Buffer(List.FirstN(List.Select(regs, each [REGION] = regName), regCount)),
            regsPass = List.RemoveItems(regs, direct),
            // rule (c): top up from other regions when the own region runs short
            indirect = if List.Count(direct) < regCount then List.FirstN(regsPass, regCount - List.Count(direct)) else {},
            pass = List.RemoveItems(regsPass, indirect),
            // out = {remaining unallocated rows, rows allocated to this name}
            out = {pass, direct & indirect}
        in out,
    process = List.Skip(List.Accumulate(Table.ToRecords(dtRegionsToNamesEx), {[Data = {Table.ToRecords(dtRegions)}]}, (a,n)=> a & {n & [Data = f(List.Last(a)[Data]{0}, n[REGION], n[ITEM]), Records = Table.FromRecords(Data{1})]})),
    FromRecords = Table.FromRecords(process),
    #"Expanded Records" = Table.ExpandTableColumn(FromRecords, "Records", {"REGION", "VALUE", "CODE"}, {"REGION.ACTUAL", "VALUE", "CODE"}),
    #"Removed Other Columns" = Table.SelectColumns(#"Expanded Records",{"NAME", "REGION", "ITEM", "VALUE", "REGION.ACTUAL", "CODE"})
in
    #"Removed Other Columns"

 

The code seems to be quite complex and I wonder how it would perform on a large dataset.

However, it seems to do the job. Please try it.

 

Kind regards,

John

@jbwtp thank you very much for your help. Today I received information from my director, and he asked me to make a small change to the rule:
- He wants to count the total rows of the table and split them equally among the NAMEs, always observing each one's REGION.

 

EXAMPLE:


We have 8 lines for 3 people (Marcelo, Roberta and Jonathan), so 8/3 rounds up to 3 lines per name.

** There is no code priority. **
- Marcelo will receive 3 lines from the MS REGION (any line of the MS region)
- Jonathan will receive 3 lines from the MS REGION (any line of the MS region)
- Roberta will receive 2 lines from the AB REGION (any line of the AB region)

TOTAL = 8

---> If a region doesn't have enough lines (Roberta's case), she will receive lines from any other region to complete her quota (it cannot be a line that has already been given to someone else).
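The arithmetic of the revised rule can be sketched as follows (hypothetical Python; `quotas` is an illustration, not part of the solution): each name gets ceil(total/names) rows, and the last name simply takes whatever remains, so the quotas sum to the row count.

```python
import math

# Hypothetical quota split: give each name ceil(total/len(names)) rows,
# but never more than what is still left, so the quotas sum to total.
def quotas(total_rows, names):
    per_name = math.ceil(total_rows / len(names))
    out, remaining = {}, total_rows
    for name in names:
        take = min(per_name, remaining)
        out[name] = take
        remaining -= take
    return out
```

With 8 rows and the three names above this yields 3 + 3 + 2, matching the example.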

 

IMPORTANT:
--> The name and region tables will always change on each Power BI update.

 

Could you please check this change against your solution? It was too complex for me and I couldn't insert this new rule myself 🙂

 

I managed to split the total lines per analyst, but I could not split them by REGION.
Could you check my solution in the file below, and maybe add the split by region per analyst?

 

example.pbix 

 

jbwtp
Memorable Member

How do you define that ROBERTA should get 2 lines, not MARCELO?

Because she comes last? Or because we need to distribute matching regions first, to maximum capacity, and then pick up the missing bits as far as we can?

 

Cheers,

John

@jbwtp That's right, we need to distribute the total rows to the names following their regions; if a name's region runs out of rows, it must be topped up to its total with rows from any other region.

Of course, it will not always be the same for everyone, because if the division (total lines / total names) gives a decimal value, one of the names will receive more lines so that the totals add up.

jbwtp
Memorable Member

Hi @Spotto,

 

Sorry, to illustrate my question:

NAME      REGION

MARCELO   MS
ROBERTA   AB
JONATHAN  MS

In the case above, we have 8 rows (i.e. up to 8/3 => 3 rows per person). One of them is AB and the others are all MS. In this case, should the number of rows allocated to each be:

NAME      REGION  ROWS
MARCELO   MS      3
ROBERTA   AB      1
ROBERTA   MS      2
JONATHAN  MS      2

 

Making ROBERTA have 3 in total and JONATHAN have only 2, as he comes last?

 

Thanks,

John

@jbwtp Who comes last doesn't matter; some name will always be last and will receive slightly fewer or more rows.
It doesn't matter whether Jonathan has 3 and Roberta 2, or Roberta has 3 and Jonathan 2. The main rule is to distribute all the rows among the names, observing each name's region.
I hope I have resolved your doubt 😊

