Hi,
I am a bit confused about the use of variables in this case:
Case with two measures.
Hey, from what you have provided I created this:
The first measure:
Diff min.max = max('Table1'[value])-min('Table1'[value])
The 2nd measure that creates the correct result:
ms correct = sumx(values('Table1'[type]),[Diff min.max])
The measure that creates the wrong result:
ms incorrect = var Diffminmax = max('Table1'[value])-min('Table1'[value]) return sumx(values('Table1'[type]),Diffminmax)
In addition I also created the following measures, to make things more obvious:
ms incorrect just the max value = var Diffminmax = max('Table1'[value]) return Diffminmax
and
ms incorrect just the min value = var Diffminmax = min('Table1'[value]) return Diffminmax
and this
no of types = COUNTROWS(VALUES('Table1'[type]))
All this allows me to create this little table:
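Reconstructed from the sample data in this thread (type a: 1, 2; type b: 3, 10), the table should look roughly like this ("just max" and "just min" abbreviate the two helper measures above):

| type | Diff min.max | ms correct | ms incorrect | just max | just min | no of types |
|---|---|---|---|---|---|---|
| a | 1 | 1 | 1 | 2 | 1 | 1 |
| b | 7 | 7 | 7 | 10 | 3 | 1 |
| Total | 9 | 8 | 18 | 10 | 1 | 2 |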
Why does the measure ms incorrect return the wrong value 18?
In the Total line there is no filter from the column type. For this reason the max value equals 10, the min value equals 1, and the difference equals 9. This value is stored inside the variable
... var Diffminmax = max('Table1'[value])-min('Table1'[value]) ...
As there is no filter present at the Total line, VALUES('Table1'[type]) contains two rows, a and b. For each of these rows the already computed value 9 of the variable is used, which produces the wrong result 9 + 9 = 18.
You have to be aware that variables are not evaluated when they are referenced, but when they are defined.
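To make that evaluation order visible, here is the same incorrect measure again as a sketch (the name "ms incorrect annotated" is just for illustration), annotated with what each part evaluates to in the Total line of the sample data:

ms incorrect annotated =
VAR Diffminmax =
    // evaluated once, while the variable is defined, in the Total filter context: 10 - 1 = 9
    MAX ( 'Table1'[value] ) - MIN ( 'Table1'[value] )
RETURN
    // VALUES returns the two rows a and b; the already computed 9 is reused for both: 9 + 9 = 18
    SUMX ( VALUES ( 'Table1'[type] ), Diffminmax )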
One can rewrite the incorrect measure like so:
ms incorrect rewritten = sumx( values('Table1'[type]), var Diffminmax = CALCULATE(max('Table1'[value]))-CALCULATE(min('Table1'[value])) return Diffminmax )
Now this measure will also return the correct value 8.
An important fact that sometimes moves out of sight is this: a measure reference automatically transforms an existing row context into a filter context. This explains why the measure ms correct returns the expected value 8, the sum of a = (2-1) and b = (10-3).
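One way to picture this context transition (just a sketch, logically equivalent to ms correct, not another measure you need to create): the measure reference inside SUMX behaves as if its expression were wrapped in CALCULATE, which turns the current row of VALUES into a filter:

ms correct expanded =
SUMX (
    VALUES ( 'Table1'[type] ),
    // the implicit CALCULATE added by the measure reference performs the context transition,
    // so MAX/MIN only see the rows of the current type: a = 2 - 1 = 1, b = 10 - 3 = 7
    CALCULATE ( MAX ( 'Table1'[value] ) - MIN ( 'Table1'[value] ) )
)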
Hopefully this explains the behaviour much better.
Regards,
Tom
Hey,
the second example evaluates the value of the variable before the iteration (SUMX iterates over the table returned by VALUES(...)). This means SUMX will create the result like so: value of the variable * number of rows returned by VALUES.
Besides the fact that SUMX is one of the table iterator functions, it also creates a row context: for each row of VALUES() the measure is evaluated in the context of Table1[type], and then these results are summed.
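Using the sample data posted later in this thread (a: 1, 2 and b: 3, 10), that works out to:
ms incorrect at the Total line: the variable is computed once over all rows (10 - 1 = 9) and then added for each of the two types, 9 + 9 = 18.
ms correct at the Total line: the measure is re-evaluated per type (a: 2 - 1 = 1, b: 10 - 3 = 7), 1 + 7 = 8.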
Hopefully this provides the explanation you are looking for.
Regards,
Tom
@TomMartens well explained.
But what is the difference between the 2 cases?
Both look the same, but why is case 1 not correct if I want to sum the min/max diff of the 2 types?
Hey,
please create a pbix file that contains sample data, upload the pbix file to OneDrive or SharePoint, and share the link.
Maybe I will find some time to explain my writing in more detail based on your sample data.
Regards,
Tom
Here is the sample data:

| type | value |
|---|---|
| a | 1 |
| a | 2 |
| b | 3 |
| b | 10 |
Diff min.max: a = 1, b = 7
Sum diff = 8
2 separate measures, correct answer (8):
Diff min.max = max(value)-min(value)
sumx(values(type); [Diff min.max])
Variable/measure, not correct answer (18):
var Diffminmax = max(value)-min(value)
return
sumx(values(type); Diffminmax)