tamerj1
Super User

Power BI Challenge - Round 2

Hello Community Champions!

This is definitely not a common requirement, but it is a real one. People working in industrial sectors will probably encounter similar situations. Engineering companies that design Central Monitoring Systems or Building Management Systems usually shape the output data in a way that makes it easy to handle in Excel. However, more and more businesses are shifting to Power BI today, and industrial firms are no exception, especially since they probably have even greater concerns about their data security.

 

Enough of introduction. The challenge this time is about handling aggregations in horizontally expanded tables, i.e. tables that contain a huge number of value columns. In our case the CMS records a temperature measurement every 10 minutes, accumulating 144 columns in a single row every day.

[Image: 2.png]

Power Query experts, do not get excited: the table, which is supposed to be retrieved via a Live Connection due to data control measures, contains one row per temperature sensor per day for the current year, 2024. Every day, one more row is added for each sensor. The 144 columns contain the temperature measurements recorded every 10 minutes. This daily variation in temperature is important, but it is not our problem for today's challenge.

 

Today we are interested in Daily Averages. The requirement is simple: create a line chart of the average daily temperature, with [Date] on the x-axis.
[Image: 1.png]
Attached is the sample data file.
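
For a sense of scale: a brute-force measure would have to reference every one of the 144 value columns by name. A minimal sketch, assuming the 'T_MEASURES' table and "hh:mm" column names that appear in the solutions below:

Avg Temp (Naive) =
AVERAGEX (
    'T_MEASURES',
    // Extend the sum to all 144 value columns; only the first three are shown here.
    ( 'T_MEASURES'[00:00] + 'T_MEASURES'[00:10] + 'T_MEASURES'[00:20] ) / 3
)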


@lbendlin  @Greg_Deckler @Jihwan_Kim  @sjoerdvn @OwenAuger @DataNinja777  @quantumudit @ThxAlot @johnbasha33 @Dangar332 @ValtteriN @AntrikshSharma @AlexisOlson @Martin_D 

4 ACCEPTED SOLUTIONS

@tamerj1 Hey, here's a version that doesn't require hardcoding, and you can theoretically add as many pre-columns as you want. It's basically Measure 2, but with dynamic detection of the number of commas before the first value column.

Measure 4 = 
    VAR __Text = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 1 ), UNICHAR(10), "|" ) // TOCSV with headers; rows become path items
    VAR __1 = PATHITEM( __Text, 1 ) // header row
    VAR __FirstColon = FIND(":", __1) // the first colon marks the first value column (e.g. "00:00")
    VAR __Left = LEFT( __1, __FirstColon ) // everything up to and including that colon
    VAR __Commas = LEN( __Left ) - LEN( SUBSTITUTE( __Left, ",", "" ) ) // number of pre-columns before the data
    VAR __2 = PATHITEM( __Text, 2 ) // first data row
    VAR __3 = SUBSTITUTE( __2, ",", "|", __Commas ) // replace the last pre-column comma with a path separator
    VAR __4 = PATHITEM( __3, 2 ) // keep only the value columns
    VAR __Path = SUBSTITUTE( __4, ",", "|" ) // convert the values into a path
    VAR __Table = 
        ADDCOLUMNS(
            GENERATESERIES(1, 144, 1),
            "__Value", PATHITEM( __Path, [Value]) + 0 // + 0 coerces the text to a number
        )
    VAR __Result = AVERAGEX( __Table, [__Value] )
RETURN
    __Result
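
To see what this measure is actually parsing, a hypothetical probe like the following (run as a query in DAX Studio, for example; not part of the solution) returns the header row plus the first data row as a single pipe-delimited string:

EVALUATE
ROW (
    "CSV Preview",
    // TOCSV with max_rows = 1 and headers on: one header line plus one data row,
    // with the newline replaced by "|" so it reads as one PATH-style string.
    SUBSTITUTE ( TOCSV ( 'T_MEASURES', 1, ",", 1 ), UNICHAR ( 10 ), "|" )
)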

 




View solution in original post

@tamerj1 OK, one "final" version. This one *should* work with multiple rows of sensor data on the same day and assumes essentially nothing about how many columns of data there are, only that the name of the first data column includes a ":" and that the data columns after it are contiguous, with no non-data columns mixed in.

 

I don't have the original data file, so I can't actually test it with multiple rows of data on the same day (although that doesn't make sense in this scenario, it might in other similar circumstances, so I wanted to try it).

Measure 5 = 
    // Deal with the header to figure out what columns need to be removed
    VAR __TableWHeaders = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 1 ), UNICHAR(10), "|" ) // TOCSV with Headers
    VAR __Headers = PATHITEM( __TableWHeaders, 1 ) //headers
    VAR __FirstColon = FIND(":", __Headers) // find the first colon, which should be the first column of data
    VAR __Left = LEFT( __Headers, __FirstColon ) // grab everything to the left of the first colon
    VAR __Commas = LEN( __Left ) - LEN( SUBSTITUTE( __Left, ",", "" ) ) // figure out how many commas (columns) are in the "pre-columns" before the data

    // Now deal with the data which might have multiple rows
    VAR __TableWOHeaders = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 0 ), UNICHAR(10), "|" ) // TOCSV without Headers
    VAR __Count = COUNTROWS('T_MEASURES') // number of rows in table
    VAR __Data = // Generate a table with a column that strips out the pre-columns
        ADDCOLUMNS(
            GENERATESERIES(1, __Count, 1),
            "__Data", 
                VAR __Text = PATHITEM( __TableWOHeaders, [Value])
                VAR __Path = SUBSTITUTE( __Text, ",", "|", __Commas )
                VAR __Result = PATHITEM( __Path, 2 )
            RETURN
                __Result
        )
    VAR __DataText = CONCATENATEX( __Data, [__Data], "|" ) // convert the table back into a text string with each row as a path item
    VAR __Path = SUBSTITUTE( __DataText, ",", "|" ) // convert all columns to path items
    VAR __DataColumns = LEN( __DataText) - LEN( SUBSTITUTE( __DataText, ",", "" ) ) // figure out how many columns of data we are dealing with

    VAR __Table = // Convert the path into a table
        ADDCOLUMNS(
            GENERATESERIES(1, __DataColumns, 1),
            "__Value", PATHITEM( __Path, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table, [__Value] ) // run an aggregation against the table
RETURN
    __Result
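
As a quick sanity check of the per-day behavior (the shape the requested line chart needs), something like this hypothetical query could be run in DAX Studio, assuming the [Date] column described in the challenge:

EVALUATE
SUMMARIZECOLUMNS (
    'T_MEASURES'[Date],           // one row per day
    "Avg Daily Temp", [Measure 5] // the measure evaluated in each day's filter context
)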

 

 




View solution in original post

AlexisOlson
Super User

The other obvious hard-coded solution (manual unpivoting, shown below) works too. It has more overhead, but performance is still acceptable. The overhead is largely fixed, and performance is barely affected by going from one month of data to five, unlike the text-parsing solutions, whose timing scales roughly linearly with the number of months of data selected and which quickly become impractical for larger amounts of data.

Testing twenty cold-cache trials on the full set of data (still only 876 rows):

My simple solution above:

[Timing screenshot: AlexisOlson_0-1717187928029.png]

 

A more general unpivot solution (below):

[Timing screenshot: AlexisOlson_1-1717187955408.png]

 

@tamerj1's solution from above:

[Timing screenshot: AlexisOlson_2-1717188241505.png]

 

Avg Temp (Unpivot) = 
VAR __Unpivot__ =
    UNION (
        SELECTCOLUMNS ( T, "@Time", "00:00", "@Temp", T[00:00] ),
        SELECTCOLUMNS ( T, "@Time", "00:10", "@Temp", T[00:10] ),
        SELECTCOLUMNS ( T, "@Time", "00:20", "@Temp", T[00:20] ),
        SELECTCOLUMNS ( T, "@Time", "00:30", "@Temp", T[00:30] ),
        SELECTCOLUMNS ( T, "@Time", "00:40", "@Temp", T[00:40] ),
        SELECTCOLUMNS ( T, "@Time", "00:50", "@Temp", T[00:50] ),
        SELECTCOLUMNS ( T, "@Time", "01:00", "@Temp", T[01:00] ),
        SELECTCOLUMNS ( T, "@Time", "01:10", "@Temp", T[01:10] ),
        SELECTCOLUMNS ( T, "@Time", "01:20", "@Temp", T[01:20] ),
        SELECTCOLUMNS ( T, "@Time", "01:30", "@Temp", T[01:30] ),
        SELECTCOLUMNS ( T, "@Time", "01:40", "@Temp", T[01:40] ),
        SELECTCOLUMNS ( T, "@Time", "01:50", "@Temp", T[01:50] ),
        SELECTCOLUMNS ( T, "@Time", "02:00", "@Temp", T[02:00] ),
        SELECTCOLUMNS ( T, "@Time", "02:10", "@Temp", T[02:10] ),
        SELECTCOLUMNS ( T, "@Time", "02:20", "@Temp", T[02:20] ),
        SELECTCOLUMNS ( T, "@Time", "02:30", "@Temp", T[02:30] ),
        SELECTCOLUMNS ( T, "@Time", "02:40", "@Temp", T[02:40] ),
        SELECTCOLUMNS ( T, "@Time", "02:50", "@Temp", T[02:50] ),
        SELECTCOLUMNS ( T, "@Time", "03:00", "@Temp", T[03:00] ),
        SELECTCOLUMNS ( T, "@Time", "03:10", "@Temp", T[03:10] ),
        SELECTCOLUMNS ( T, "@Time", "03:20", "@Temp", T[03:20] ),
        SELECTCOLUMNS ( T, "@Time", "03:30", "@Temp", T[03:30] ),
        SELECTCOLUMNS ( T, "@Time", "03:40", "@Temp", T[03:40] ),
        SELECTCOLUMNS ( T, "@Time", "03:50", "@Temp", T[03:50] ),
        SELECTCOLUMNS ( T, "@Time", "04:00", "@Temp", T[04:00] ),
        SELECTCOLUMNS ( T, "@Time", "04:10", "@Temp", T[04:10] ),
        SELECTCOLUMNS ( T, "@Time", "04:20", "@Temp", T[04:20] ),
        SELECTCOLUMNS ( T, "@Time", "04:30", "@Temp", T[04:30] ),
        SELECTCOLUMNS ( T, "@Time", "04:40", "@Temp", T[04:40] ),
        SELECTCOLUMNS ( T, "@Time", "04:50", "@Temp", T[04:50] ),
        SELECTCOLUMNS ( T, "@Time", "05:00", "@Temp", T[05:00] ),
        SELECTCOLUMNS ( T, "@Time", "05:10", "@Temp", T[05:10] ),
        SELECTCOLUMNS ( T, "@Time", "05:20", "@Temp", T[05:20] ),
        SELECTCOLUMNS ( T, "@Time", "05:30", "@Temp", T[05:30] ),
        SELECTCOLUMNS ( T, "@Time", "05:40", "@Temp", T[05:40] ),
        SELECTCOLUMNS ( T, "@Time", "05:50", "@Temp", T[05:50] ),
        SELECTCOLUMNS ( T, "@Time", "06:00", "@Temp", T[06:00] ),
        SELECTCOLUMNS ( T, "@Time", "06:10", "@Temp", T[06:10] ),
        SELECTCOLUMNS ( T, "@Time", "06:20", "@Temp", T[06:20] ),
        SELECTCOLUMNS ( T, "@Time", "06:30", "@Temp", T[06:30] ),
        SELECTCOLUMNS ( T, "@Time", "06:40", "@Temp", T[06:40] ),
        SELECTCOLUMNS ( T, "@Time", "06:50", "@Temp", T[06:50] ),
        SELECTCOLUMNS ( T, "@Time", "07:00", "@Temp", T[07:00] ),
        SELECTCOLUMNS ( T, "@Time", "07:10", "@Temp", T[07:10] ),
        SELECTCOLUMNS ( T, "@Time", "07:20", "@Temp", T[07:20] ),
        SELECTCOLUMNS ( T, "@Time", "07:30", "@Temp", T[07:30] ),
        SELECTCOLUMNS ( T, "@Time", "07:40", "@Temp", T[07:40] ),
        SELECTCOLUMNS ( T, "@Time", "07:50", "@Temp", T[07:50] ),
        SELECTCOLUMNS ( T, "@Time", "08:00", "@Temp", T[08:00] ),
        SELECTCOLUMNS ( T, "@Time", "08:10", "@Temp", T[08:10] ),
        SELECTCOLUMNS ( T, "@Time", "08:20", "@Temp", T[08:20] ),
        SELECTCOLUMNS ( T, "@Time", "08:30", "@Temp", T[08:30] ),
        SELECTCOLUMNS ( T, "@Time", "08:40", "@Temp", T[08:40] ),
        SELECTCOLUMNS ( T, "@Time", "08:50", "@Temp", T[08:50] ),
        SELECTCOLUMNS ( T, "@Time", "09:00", "@Temp", T[09:00] ),
        SELECTCOLUMNS ( T, "@Time", "09:10", "@Temp", T[09:10] ),
        SELECTCOLUMNS ( T, "@Time", "09:20", "@Temp", T[09:20] ),
        SELECTCOLUMNS ( T, "@Time", "09:30", "@Temp", T[09:30] ),
        SELECTCOLUMNS ( T, "@Time", "09:40", "@Temp", T[09:40] ),
        SELECTCOLUMNS ( T, "@Time", "09:50", "@Temp", T[09:50] ),
        SELECTCOLUMNS ( T, "@Time", "10:00", "@Temp", T[10:00] ),
        SELECTCOLUMNS ( T, "@Time", "10:10", "@Temp", T[10:10] ),
        SELECTCOLUMNS ( T, "@Time", "10:20", "@Temp", T[10:20] ),
        SELECTCOLUMNS ( T, "@Time", "10:30", "@Temp", T[10:30] ),
        SELECTCOLUMNS ( T, "@Time", "10:40", "@Temp", T[10:40] ),
        SELECTCOLUMNS ( T, "@Time", "10:50", "@Temp", T[10:50] ),
        SELECTCOLUMNS ( T, "@Time", "11:00", "@Temp", T[11:00] ),
        SELECTCOLUMNS ( T, "@Time", "11:10", "@Temp", T[11:10] ),
        SELECTCOLUMNS ( T, "@Time", "11:20", "@Temp", T[11:20] ),
        SELECTCOLUMNS ( T, "@Time", "11:30", "@Temp", T[11:30] ),
        SELECTCOLUMNS ( T, "@Time", "11:40", "@Temp", T[11:40] ),
        SELECTCOLUMNS ( T, "@Time", "11:50", "@Temp", T[11:50] ),
        SELECTCOLUMNS ( T, "@Time", "12:00", "@Temp", T[12:00] ),
        SELECTCOLUMNS ( T, "@Time", "12:10", "@Temp", T[12:10] ),
        SELECTCOLUMNS ( T, "@Time", "12:20", "@Temp", T[12:20] ),
        SELECTCOLUMNS ( T, "@Time", "12:30", "@Temp", T[12:30] ),
        SELECTCOLUMNS ( T, "@Time", "12:40", "@Temp", T[12:40] ),
        SELECTCOLUMNS ( T, "@Time", "12:50", "@Temp", T[12:50] ),
        SELECTCOLUMNS ( T, "@Time", "13:00", "@Temp", T[13:00] ),
        SELECTCOLUMNS ( T, "@Time", "13:10", "@Temp", T[13:10] ),
        SELECTCOLUMNS ( T, "@Time", "13:20", "@Temp", T[13:20] ),
        SELECTCOLUMNS ( T, "@Time", "13:30", "@Temp", T[13:30] ),
        SELECTCOLUMNS ( T, "@Time", "13:40", "@Temp", T[13:40] ),
        SELECTCOLUMNS ( T, "@Time", "13:50", "@Temp", T[13:50] ),
        SELECTCOLUMNS ( T, "@Time", "14:00", "@Temp", T[14:00] ),
        SELECTCOLUMNS ( T, "@Time", "14:10", "@Temp", T[14:10] ),
        SELECTCOLUMNS ( T, "@Time", "14:20", "@Temp", T[14:20] ),
        SELECTCOLUMNS ( T, "@Time", "14:30", "@Temp", T[14:30] ),
        SELECTCOLUMNS ( T, "@Time", "14:40", "@Temp", T[14:40] ),
        SELECTCOLUMNS ( T, "@Time", "14:50", "@Temp", T[14:50] ),
        SELECTCOLUMNS ( T, "@Time", "15:00", "@Temp", T[15:00] ),
        SELECTCOLUMNS ( T, "@Time", "15:10", "@Temp", T[15:10] ),
        SELECTCOLUMNS ( T, "@Time", "15:20", "@Temp", T[15:20] ),
        SELECTCOLUMNS ( T, "@Time", "15:30", "@Temp", T[15:30] ),
        SELECTCOLUMNS ( T, "@Time", "15:40", "@Temp", T[15:40] ),
        SELECTCOLUMNS ( T, "@Time", "15:50", "@Temp", T[15:50] ),
        SELECTCOLUMNS ( T, "@Time", "16:00", "@Temp", T[16:00] ),
        SELECTCOLUMNS ( T, "@Time", "16:10", "@Temp", T[16:10] ),
        SELECTCOLUMNS ( T, "@Time", "16:20", "@Temp", T[16:20] ),
        SELECTCOLUMNS ( T, "@Time", "16:30", "@Temp", T[16:30] ),
        SELECTCOLUMNS ( T, "@Time", "16:40", "@Temp", T[16:40] ),
        SELECTCOLUMNS ( T, "@Time", "16:50", "@Temp", T[16:50] ),
        SELECTCOLUMNS ( T, "@Time", "17:00", "@Temp", T[17:00] ),
        SELECTCOLUMNS ( T, "@Time", "17:10", "@Temp", T[17:10] ),
        SELECTCOLUMNS ( T, "@Time", "17:20", "@Temp", T[17:20] ),
        SELECTCOLUMNS ( T, "@Time", "17:30", "@Temp", T[17:30] ),
        SELECTCOLUMNS ( T, "@Time", "17:40", "@Temp", T[17:40] ),
        SELECTCOLUMNS ( T, "@Time", "17:50", "@Temp", T[17:50] ),
        SELECTCOLUMNS ( T, "@Time", "18:00", "@Temp", T[18:00] ),
        SELECTCOLUMNS ( T, "@Time", "18:10", "@Temp", T[18:10] ),
        SELECTCOLUMNS ( T, "@Time", "18:20", "@Temp", T[18:20] ),
        SELECTCOLUMNS ( T, "@Time", "18:30", "@Temp", T[18:30] ),
        SELECTCOLUMNS ( T, "@Time", "18:40", "@Temp", T[18:40] ),
        SELECTCOLUMNS ( T, "@Time", "18:50", "@Temp", T[18:50] ),
        SELECTCOLUMNS ( T, "@Time", "19:00", "@Temp", T[19:00] ),
        SELECTCOLUMNS ( T, "@Time", "19:10", "@Temp", T[19:10] ),
        SELECTCOLUMNS ( T, "@Time", "19:20", "@Temp", T[19:20] ),
        SELECTCOLUMNS ( T, "@Time", "19:30", "@Temp", T[19:30] ),
        SELECTCOLUMNS ( T, "@Time", "19:40", "@Temp", T[19:40] ),
        SELECTCOLUMNS ( T, "@Time", "19:50", "@Temp", T[19:50] ),
        SELECTCOLUMNS ( T, "@Time", "20:00", "@Temp", T[20:00] ),
        SELECTCOLUMNS ( T, "@Time", "20:10", "@Temp", T[20:10] ),
        SELECTCOLUMNS ( T, "@Time", "20:20", "@Temp", T[20:20] ),
        SELECTCOLUMNS ( T, "@Time", "20:30", "@Temp", T[20:30] ),
        SELECTCOLUMNS ( T, "@Time", "20:40", "@Temp", T[20:40] ),
        SELECTCOLUMNS ( T, "@Time", "20:50", "@Temp", T[20:50] ),
        SELECTCOLUMNS ( T, "@Time", "21:00", "@Temp", T[21:00] ),
        SELECTCOLUMNS ( T, "@Time", "21:10", "@Temp", T[21:10] ),
        SELECTCOLUMNS ( T, "@Time", "21:20", "@Temp", T[21:20] ),
        SELECTCOLUMNS ( T, "@Time", "21:30", "@Temp", T[21:30] ),
        SELECTCOLUMNS ( T, "@Time", "21:40", "@Temp", T[21:40] ),
        SELECTCOLUMNS ( T, "@Time", "21:50", "@Temp", T[21:50] ),
        SELECTCOLUMNS ( T, "@Time", "22:00", "@Temp", T[22:00] ),
        SELECTCOLUMNS ( T, "@Time", "22:10", "@Temp", T[22:10] ),
        SELECTCOLUMNS ( T, "@Time", "22:20", "@Temp", T[22:20] ),
        SELECTCOLUMNS ( T, "@Time", "22:30", "@Temp", T[22:30] ),
        SELECTCOLUMNS ( T, "@Time", "22:40", "@Temp", T[22:40] ),
        SELECTCOLUMNS ( T, "@Time", "22:50", "@Temp", T[22:50] ),
        SELECTCOLUMNS ( T, "@Time", "23:00", "@Temp", T[23:00] ),
        SELECTCOLUMNS ( T, "@Time", "23:10", "@Temp", T[23:10] ),
        SELECTCOLUMNS ( T, "@Time", "23:20", "@Temp", T[23:20] ),
        SELECTCOLUMNS ( T, "@Time", "23:30", "@Temp", T[23:30] ),
        SELECTCOLUMNS ( T, "@Time", "23:40", "@Temp", T[23:40] ),
        SELECTCOLUMNS ( T, "@Time", "23:50", "@Temp", T[23:50] )
    )
VAR __Result = AVERAGEX ( __Unpivot__, [@Temp] )
RETURN
    __Result

 

View solution in original post

@tamerj1 I have thought about an alternate approach, instead of DAX, to solve the above challenge.

 

Since in the original file we do not have the option to model the data, we publish it as-is and set the maximum number of refreshes possible (depending on the licence).

 

We then open a new .pbix file and, through the XMLA endpoint, bring the published data into it. There we unpivot all the value columns, so we get rid of the multi-column aggregations. From that point on, it is pretty simple to build the visuals.

 

I have cheated a bit, as it has not been solved with DAX, but it works. I am also attaching a video link showing the steps.

 

 

https://1drv.ms/v/s!AkczoVaXetcahki934iuuSZHMUTB?e=etN80R 

View solution in original post

26 REPLIES

@Greg_Deckler 

I don't have enough words to describe my happiness while reading this. I really didn't expect to learn something from this challenge, but you are indeed one of the best teachers I have learned from. The way you approached the problem left me speechless. It opened my eyes to many ideas that can lead to new methods of solving problems. I realized that I still have so much to learn.

You are right: at some point you have to hardcode something. True, the column order might change, and true, more information columns can be added, but the number of value columns and their NAMES remain unchanged. TOCSV offers more than just the values of the columns.


@Greg_Deckler, just FYI, your Measure 5 is a bit off. I'd guess some off-by-one bug somewhere.

[Screenshot: AlexisOlson_3-1717185718971.png]

I'm attaching a pbix that has the data baked in (rather than referencing an Excel file) in case you or others want to tinker with the source data. I used my custom Table.ToM function to produce this (and discovered I hadn't accounted for column names that need to be escaped with a #" " wrapper, so I'll need to update my function to handle that).

Greg_Deckler
Super User

@tamerj1 So, to be clear, we are not supposed to transform the table in any way with Power Query? At first glance, it looks like MC Aggregations (Multi-Column Aggregations (MC Aggregations) - Microsoft Fabric Community) or maybe DAX Unpivot (DAX Unpivot - Microsoft Fabric Community).




@Greg_Deckler 

I think your other solution makes more sense. Unpivoting 144 columns manually using DAX is not expected to be a pleasant experience, unless you insist on proving the opposite and produce the required DAX for the provided sample data 😅
