Hi,
I want to use the "Correlation Plot" visual from the Power BI visuals store to get a better understanding of my dataset.
However, when using measures in the plot, I'm uncertain at what level of granularity Power BI / DAX computes the result, and I want to better understand what is actually used as the input for the correlation plot.
I primarily want to compute correlations at an individual customer level. I can achieve this by adding a calculated column with the desired measure logic to my customer table. However, this doesn't seem ideal, as I'm not planning to use these values for filtering or categorization purposes.
Thus: how can I ensure that the measure I pass to the Correlation Plot is computed at the desired granularity, i.e. the customer level?
The plot only accepts a column or measure as input, and I'm therefore uncertain if I can apply the same filtering logic that is suggested by Daxpatterns: Dax Patterns
IF(
    NOT( ISFILTERED( Customer_Key ) ),
    CALCULATE( X )
)
Does anyone know how to ensure the right results with a measure, or do I need to stick to a calculated column on my customer table?
@Sharon
Hi Lydia,
I believe that I've gotten the answer I needed from this blogpost: BPI Community Blog
The key for me was to understand which dataframe is loaded into the R visual when using measures, as measures are calculated differently depending on the granularity. To get the dataframe loaded at the right granularity level, I need to add categorical columns, which would return an error if plotted directly in the R correlation plot available from the store.
The blog describes the importance of this, and also shows how to remove the first three columns before the dataframe is loaded into the R visual.
This means I can use measures instead of calculated columns, thus optimizing performance as I don't need to add multiple columns to my customer or product table to compute my correlation at the right granularity level.
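As a rough illustration of the approach described above (a sketch only: the dataframe contents and column names below are invented, and the real visual uses R, where Power BI exposes the summarized data as a dataframe named `dataset`), the leading categorical key columns are dropped before the correlation matrix is computed over the measure columns:

```python
import pandas as pd

# Hypothetical stand-in for the summarized dataframe an R visual receives:
# the first three columns are categorical keys, the rest are measure values
# evaluated at that granularity. Names are invented for illustration.
dataset = pd.DataFrame({
    "Customer_Key": [1, 2, 3, 4],
    "Region":       ["N", "N", "S", "S"],
    "Segment":      ["A", "B", "A", "B"],
    "Revenue":      [100.0, 120.0, 90.0, 150.0],
    "Margin":       [20.0, 30.0, 15.0, 45.0],
})

# Drop the first three (key) columns, then correlate the measure columns.
numeric_only = dataset.iloc[:, 3:]
corr = numeric_only.corr(method="pearson")
```

The key columns are needed to force the dataframe to the right grain, but they must be excluded from the correlation itself since they are not numeric inputs.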
@Anonymous,
Could you please share dummy data of your table and post expected result here?
Regards,
Lydia
Anyone able to share insights on how R-Scripts handles data from measures?