tamerj1
Super User

Power Bi Challenge - Round 2

Hello Community Champions!

This is definitely not a common requirement, but it is a real one. People working in industrial sectors will probably encounter similar situations. Engineering companies that design Central Monitoring Systems or Building Management Systems usually shape the output data in a way that makes it easy to handle in Excel. However, more and more businesses are shifting to Power BI today, and industrial firms are no exception, especially since they probably have even greater concerns about their data security.

 

Enough of the introduction. The challenge this time is about handling aggregations in horizontally expanded tables, i.e. tables that contain a huge number of value columns. In our case, the CMS records a temperature measurement every 10 minutes, accumulating 144 columns in a single row every day.

2.png

Power Query experts, do not get excited: the table, which is supposed to be retrieved via a live connection due to data-control measures, contains one row per temperature sensor per day for the current year, 2024. Every day, one more row is added for each sensor. The 144 columns contain the temperature measurements recorded every 10 minutes. This daily variation in temperature is important, but it is not our problem for today's challenge.

 

Today we are interested in daily averages. The requirement is simple: create a line chart of the average daily temperature with [Date] on the x-axis.
1.png
Attached is the sample data file.
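To make the shape of the problem concrete, here is a small sketch (in Python, with made-up numbers, since the actual sample file is attached separately) of what a single sensor-day row looks like and what the required daily average is:

```python
# One row per sensor per day: a few ID columns followed by 144 readings,
# one per 10-minute interval ("00:00", "00:10", ..., "23:50").
row = {
    "Sensor": "TT-101",          # hypothetical sensor name
    "Date": "2024-01-01",
    # 144 readings; a repeating pattern here, just for illustration
    **{f"{h:02d}:{m:02d}": 20.0 + (h % 3) for h in range(24) for m in range(0, 60, 10)},
}

# The value columns are exactly the ones whose names contain ":".
reading_cols = [k for k in row if ":" in k]
daily_avg = sum(row[c] for c in reading_cols) / len(reading_cols)
print(len(reading_cols), daily_avg)  # → 144 21.0
```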


@lbendlin  @Greg_Deckler @Jihwan_Kim  @sjoerdvn @OwenAuger @DataNinja777  @quantumudit @ThxAlot @johnbasha33 @Dangar332 @ValtteriN @AntrikshSharma @AlexisOlson @Martin_D 

4 ACCEPTED SOLUTIONS

@tamerj1 Hey, here's a version that doesn't require hardcoding, and you can theoretically add as many pre-columns as you want. It is basically Measure 2, but with dynamic detection of the number of commas before the first value column.

Measure 4 = 
    VAR __Text = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 1 ), UNICHAR(10), "|" )
    VAR __1 = PATHITEM( __Text, 1 )
    VAR __FirstColon = FIND(":", __1)
    VAR __Left = LEFT( __1, __FirstColon )
    VAR __Commas = LEN( __Left ) - LEN( SUBSTITUTE( __Left, ",", "" ) )
    VAR __2 = PATHITEM( __Text, 2 )
    VAR __3 = SUBSTITUTE( __2, ",", "|", __Commas )
    VAR __4 = PATHITEM( __3, 2 )
    VAR __Path = SUBSTITUTE( __4, ",", "|" )
    VAR __Table = 
        ADDCOLUMNS(
            GENERATESERIES(1, 144, 1),
            "__Value", PATHITEM( __Path, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table, [__Value] )
RETURN
    __Result
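For readers less familiar with the PATHITEM trick: the measure serializes the table to CSV, treats each line as a path item, counts the commas in the header before the first column name containing ":", and uses that count to split off the pre-columns. A rough Python equivalent of that string logic (the two-pre-column CSV literal is made up for illustration):

```python
csv_text = "Sensor,Date,00:00,00:10,00:20\nTT-101,2024-01-01,20.5,20.7,21.0"
lines = csv_text.split("\n")                  # SUBSTITUTE(..., UNICHAR(10), "|") + PATHITEM

header = lines[0]
first_colon = header.find(":")                # FIND(":", __1)
left = header[: first_colon + 1]              # LEFT(__1, __FirstColon), 1-based in DAX
n_commas = left.count(",")                    # number of pre-columns before the data

# SUBSTITUTE with an instance argument replaces only the n_commas-th comma,
# so PATHITEM(__3, 2) keeps everything after the pre-columns:
parts = lines[1].split(",")
values = [float(v) for v in parts[n_commas:]]

avg = sum(values) / len(values)
print(n_commas, values, round(avg, 4))
```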

 


Follow on LinkedIn
@ me in replies or I'll lose your thread!!!
Instead of a Kudo, please vote for this idea
Become an expert!: Enterprise DNA
External Tools: MSHGQM
YouTube Channel!: Microsoft Hates Greg
Latest book!:
The Definitive Guide to Power Query (M)

DAX is easy, CALCULATE makes DAX hard...

View solution in original post

@tamerj1 OK, one "final" version. This one *should* work with multiple rows of sensor data on the same day and assumes basically nothing about how many columns of data there are: only that the name of the first data column includes a ":" and that, after it, the data columns are contiguous with no non-data columns mixed in.

 

I don't have the original data file, so I can't actually test it with multiple rows of data on the same day. Although that doesn't make sense in this scenario, it might in other similar circumstances, so I wanted to try handling it.

Measure 5 = 
    // Deal with the header to figure out what columns need to be removed
    VAR __TableWHeaders = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 1 ), UNICHAR(10), "|" ) // TOCSV with Headers
    VAR __Headers = PATHITEM( __TableWHeaders, 1 ) //headers
    VAR __FirstColon = FIND(":", __Headers) // find the first colon, which should be the first column of data
    VAR __Left = LEFT( __Headers, __FirstColon ) // grab everything to the left of the first colon
    VAR __Commas = LEN( __Left ) - LEN( SUBSTITUTE( __Left, ",", "" ) ) // figure out how many commas (columns) are in the "pre-columns" before the data

    // Now deal with the data which might have multiple rows
    VAR __TableWOHeaders = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 0 ), UNICHAR(10), "|" ) // TOCSV without Headers
    VAR __Count = COUNTROWS('T_MEASURES') // number of rows in table
    VAR __Data = // Generate a table with a column that strips out the pre-columns
        ADDCOLUMNS(
            GENERATESERIES(1, __Count, 1),
            "__Data", 
                VAR __Text = PATHITEM( __TableWOHeaders, [Value])
                VAR __Path = SUBSTITUTE( __Text, ",", "|", __Commas )
                VAR __Result = PATHITEM( __Path, 2 )
            RETURN
                __Result
        )
    VAR __DataText = CONCATENATEX( __Data, [__Data], "|" ) // convert the table back into a text string with each row as a path item
    VAR __Path = SUBSTITUTE( __DataText, ",", "|" ) // convert all columns to path items
    VAR __DataColumns = LEN( __DataText) - LEN( SUBSTITUTE( __DataText, ",", "" ) ) // figure out how many columns of data we are dealing with

    VAR __Table = // Convert the path into a table
        ADDCOLUMNS(
            GENERATESERIES(1, __DataColumns, 1),
            "__Value", PATHITEM( __Path, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table, [__Value] ) // run an aggregation against the table
RETURN
    __Result
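Measure 5 extends the same idea to multiple rows: serialize without headers, strip the pre-columns from each line, concatenate everything back into one long path, and average over it. The same logic in Python terms (a hypothetical two-row, two-pre-column CSV for illustration):

```python
csv_no_header = "TT-101,2024-01-01,20.0,21.0\nTT-102,2024-01-01,22.0,23.0"
n_pre = 2  # number of pre-columns, detected from the header in the measure

stripped = []
for line in csv_no_header.split("\n"):        # one PATHITEM per data row
    parts = line.split(",")
    stripped.append(",".join(parts[n_pre:]))  # drop the pre-columns

# CONCATENATEX back into one string, then the final SUBSTITUTE turns
# every remaining comma into a path separator:
all_values = ",".join(stripped).split(",")
avg = sum(map(float, all_values)) / len(all_values)
print(avg)  # → 21.5
```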

 

 



View solution in original post

AlexisOlson
Super User

The other obvious hard-coded solution (manual unpivoting -- shown below) works too. It has more overhead but performance is still acceptable. The overhead is largely fixed and performance is barely affected by changing from one month of data to five months of data, unlike the text-parsing solutions whose timing scales roughly linearly with the number of months of data selected and quickly become impractical for larger amounts of data.

Testing twenty cold cache trials on the full set of data (still only 876 rows):

My simple solution above:

AlexisOlson_0-1717187928029.png

 

A more general unpivot solution (below):

AlexisOlson_1-1717187955408.png

 

@tamerj1's solution from above:

AlexisOlson_2-1717188241505.png

 

Avg Temp (Unpivot) = 
VAR __Unpivot__ =
    UNION (
        SELECTCOLUMNS ( T, "@Time", "00:00", "@Temp", T[00:00] ),
        SELECTCOLUMNS ( T, "@Time", "00:10", "@Temp", T[00:10] ),
        SELECTCOLUMNS ( T, "@Time", "00:20", "@Temp", T[00:20] ),
        SELECTCOLUMNS ( T, "@Time", "00:30", "@Temp", T[00:30] ),
        SELECTCOLUMNS ( T, "@Time", "00:40", "@Temp", T[00:40] ),
        SELECTCOLUMNS ( T, "@Time", "00:50", "@Temp", T[00:50] ),
        SELECTCOLUMNS ( T, "@Time", "01:00", "@Temp", T[01:00] ),
        SELECTCOLUMNS ( T, "@Time", "01:10", "@Temp", T[01:10] ),
        SELECTCOLUMNS ( T, "@Time", "01:20", "@Temp", T[01:20] ),
        SELECTCOLUMNS ( T, "@Time", "01:30", "@Temp", T[01:30] ),
        SELECTCOLUMNS ( T, "@Time", "01:40", "@Temp", T[01:40] ),
        SELECTCOLUMNS ( T, "@Time", "01:50", "@Temp", T[01:50] ),
        SELECTCOLUMNS ( T, "@Time", "02:00", "@Temp", T[02:00] ),
        SELECTCOLUMNS ( T, "@Time", "02:10", "@Temp", T[02:10] ),
        SELECTCOLUMNS ( T, "@Time", "02:20", "@Temp", T[02:20] ),
        SELECTCOLUMNS ( T, "@Time", "02:30", "@Temp", T[02:30] ),
        SELECTCOLUMNS ( T, "@Time", "02:40", "@Temp", T[02:40] ),
        SELECTCOLUMNS ( T, "@Time", "02:50", "@Temp", T[02:50] ),
        SELECTCOLUMNS ( T, "@Time", "03:00", "@Temp", T[03:00] ),
        SELECTCOLUMNS ( T, "@Time", "03:10", "@Temp", T[03:10] ),
        SELECTCOLUMNS ( T, "@Time", "03:20", "@Temp", T[03:20] ),
        SELECTCOLUMNS ( T, "@Time", "03:30", "@Temp", T[03:30] ),
        SELECTCOLUMNS ( T, "@Time", "03:40", "@Temp", T[03:40] ),
        SELECTCOLUMNS ( T, "@Time", "03:50", "@Temp", T[03:50] ),
        SELECTCOLUMNS ( T, "@Time", "04:00", "@Temp", T[04:00] ),
        SELECTCOLUMNS ( T, "@Time", "04:10", "@Temp", T[04:10] ),
        SELECTCOLUMNS ( T, "@Time", "04:20", "@Temp", T[04:20] ),
        SELECTCOLUMNS ( T, "@Time", "04:30", "@Temp", T[04:30] ),
        SELECTCOLUMNS ( T, "@Time", "04:40", "@Temp", T[04:40] ),
        SELECTCOLUMNS ( T, "@Time", "04:50", "@Temp", T[04:50] ),
        SELECTCOLUMNS ( T, "@Time", "05:00", "@Temp", T[05:00] ),
        SELECTCOLUMNS ( T, "@Time", "05:10", "@Temp", T[05:10] ),
        SELECTCOLUMNS ( T, "@Time", "05:20", "@Temp", T[05:20] ),
        SELECTCOLUMNS ( T, "@Time", "05:30", "@Temp", T[05:30] ),
        SELECTCOLUMNS ( T, "@Time", "05:40", "@Temp", T[05:40] ),
        SELECTCOLUMNS ( T, "@Time", "05:50", "@Temp", T[05:50] ),
        SELECTCOLUMNS ( T, "@Time", "06:00", "@Temp", T[06:00] ),
        SELECTCOLUMNS ( T, "@Time", "06:10", "@Temp", T[06:10] ),
        SELECTCOLUMNS ( T, "@Time", "06:20", "@Temp", T[06:20] ),
        SELECTCOLUMNS ( T, "@Time", "06:30", "@Temp", T[06:30] ),
        SELECTCOLUMNS ( T, "@Time", "06:40", "@Temp", T[06:40] ),
        SELECTCOLUMNS ( T, "@Time", "06:50", "@Temp", T[06:50] ),
        SELECTCOLUMNS ( T, "@Time", "07:00", "@Temp", T[07:00] ),
        SELECTCOLUMNS ( T, "@Time", "07:10", "@Temp", T[07:10] ),
        SELECTCOLUMNS ( T, "@Time", "07:20", "@Temp", T[07:20] ),
        SELECTCOLUMNS ( T, "@Time", "07:30", "@Temp", T[07:30] ),
        SELECTCOLUMNS ( T, "@Time", "07:40", "@Temp", T[07:40] ),
        SELECTCOLUMNS ( T, "@Time", "07:50", "@Temp", T[07:50] ),
        SELECTCOLUMNS ( T, "@Time", "08:00", "@Temp", T[08:00] ),
        SELECTCOLUMNS ( T, "@Time", "08:10", "@Temp", T[08:10] ),
        SELECTCOLUMNS ( T, "@Time", "08:20", "@Temp", T[08:20] ),
        SELECTCOLUMNS ( T, "@Time", "08:30", "@Temp", T[08:30] ),
        SELECTCOLUMNS ( T, "@Time", "08:40", "@Temp", T[08:40] ),
        SELECTCOLUMNS ( T, "@Time", "08:50", "@Temp", T[08:50] ),
        SELECTCOLUMNS ( T, "@Time", "09:00", "@Temp", T[09:00] ),
        SELECTCOLUMNS ( T, "@Time", "09:10", "@Temp", T[09:10] ),
        SELECTCOLUMNS ( T, "@Time", "09:20", "@Temp", T[09:20] ),
        SELECTCOLUMNS ( T, "@Time", "09:30", "@Temp", T[09:30] ),
        SELECTCOLUMNS ( T, "@Time", "09:40", "@Temp", T[09:40] ),
        SELECTCOLUMNS ( T, "@Time", "09:50", "@Temp", T[09:50] ),
        SELECTCOLUMNS ( T, "@Time", "10:00", "@Temp", T[10:00] ),
        SELECTCOLUMNS ( T, "@Time", "10:10", "@Temp", T[10:10] ),
        SELECTCOLUMNS ( T, "@Time", "10:20", "@Temp", T[10:20] ),
        SELECTCOLUMNS ( T, "@Time", "10:30", "@Temp", T[10:30] ),
        SELECTCOLUMNS ( T, "@Time", "10:40", "@Temp", T[10:40] ),
        SELECTCOLUMNS ( T, "@Time", "10:50", "@Temp", T[10:50] ),
        SELECTCOLUMNS ( T, "@Time", "11:00", "@Temp", T[11:00] ),
        SELECTCOLUMNS ( T, "@Time", "11:10", "@Temp", T[11:10] ),
        SELECTCOLUMNS ( T, "@Time", "11:20", "@Temp", T[11:20] ),
        SELECTCOLUMNS ( T, "@Time", "11:30", "@Temp", T[11:30] ),
        SELECTCOLUMNS ( T, "@Time", "11:40", "@Temp", T[11:40] ),
        SELECTCOLUMNS ( T, "@Time", "11:50", "@Temp", T[11:50] ),
        SELECTCOLUMNS ( T, "@Time", "12:00", "@Temp", T[12:00] ),
        SELECTCOLUMNS ( T, "@Time", "12:10", "@Temp", T[12:10] ),
        SELECTCOLUMNS ( T, "@Time", "12:20", "@Temp", T[12:20] ),
        SELECTCOLUMNS ( T, "@Time", "12:30", "@Temp", T[12:30] ),
        SELECTCOLUMNS ( T, "@Time", "12:40", "@Temp", T[12:40] ),
        SELECTCOLUMNS ( T, "@Time", "12:50", "@Temp", T[12:50] ),
        SELECTCOLUMNS ( T, "@Time", "13:00", "@Temp", T[13:00] ),
        SELECTCOLUMNS ( T, "@Time", "13:10", "@Temp", T[13:10] ),
        SELECTCOLUMNS ( T, "@Time", "13:20", "@Temp", T[13:20] ),
        SELECTCOLUMNS ( T, "@Time", "13:30", "@Temp", T[13:30] ),
        SELECTCOLUMNS ( T, "@Time", "13:40", "@Temp", T[13:40] ),
        SELECTCOLUMNS ( T, "@Time", "13:50", "@Temp", T[13:50] ),
        SELECTCOLUMNS ( T, "@Time", "14:00", "@Temp", T[14:00] ),
        SELECTCOLUMNS ( T, "@Time", "14:10", "@Temp", T[14:10] ),
        SELECTCOLUMNS ( T, "@Time", "14:20", "@Temp", T[14:20] ),
        SELECTCOLUMNS ( T, "@Time", "14:30", "@Temp", T[14:30] ),
        SELECTCOLUMNS ( T, "@Time", "14:40", "@Temp", T[14:40] ),
        SELECTCOLUMNS ( T, "@Time", "14:50", "@Temp", T[14:50] ),
        SELECTCOLUMNS ( T, "@Time", "15:00", "@Temp", T[15:00] ),
        SELECTCOLUMNS ( T, "@Time", "15:10", "@Temp", T[15:10] ),
        SELECTCOLUMNS ( T, "@Time", "15:20", "@Temp", T[15:20] ),
        SELECTCOLUMNS ( T, "@Time", "15:30", "@Temp", T[15:30] ),
        SELECTCOLUMNS ( T, "@Time", "15:40", "@Temp", T[15:40] ),
        SELECTCOLUMNS ( T, "@Time", "15:50", "@Temp", T[15:50] ),
        SELECTCOLUMNS ( T, "@Time", "16:00", "@Temp", T[16:00] ),
        SELECTCOLUMNS ( T, "@Time", "16:10", "@Temp", T[16:10] ),
        SELECTCOLUMNS ( T, "@Time", "16:20", "@Temp", T[16:20] ),
        SELECTCOLUMNS ( T, "@Time", "16:30", "@Temp", T[16:30] ),
        SELECTCOLUMNS ( T, "@Time", "16:40", "@Temp", T[16:40] ),
        SELECTCOLUMNS ( T, "@Time", "16:50", "@Temp", T[16:50] ),
        SELECTCOLUMNS ( T, "@Time", "17:00", "@Temp", T[17:00] ),
        SELECTCOLUMNS ( T, "@Time", "17:10", "@Temp", T[17:10] ),
        SELECTCOLUMNS ( T, "@Time", "17:20", "@Temp", T[17:20] ),
        SELECTCOLUMNS ( T, "@Time", "17:30", "@Temp", T[17:30] ),
        SELECTCOLUMNS ( T, "@Time", "17:40", "@Temp", T[17:40] ),
        SELECTCOLUMNS ( T, "@Time", "17:50", "@Temp", T[17:50] ),
        SELECTCOLUMNS ( T, "@Time", "18:00", "@Temp", T[18:00] ),
        SELECTCOLUMNS ( T, "@Time", "18:10", "@Temp", T[18:10] ),
        SELECTCOLUMNS ( T, "@Time", "18:20", "@Temp", T[18:20] ),
        SELECTCOLUMNS ( T, "@Time", "18:30", "@Temp", T[18:30] ),
        SELECTCOLUMNS ( T, "@Time", "18:40", "@Temp", T[18:40] ),
        SELECTCOLUMNS ( T, "@Time", "18:50", "@Temp", T[18:50] ),
        SELECTCOLUMNS ( T, "@Time", "19:00", "@Temp", T[19:00] ),
        SELECTCOLUMNS ( T, "@Time", "19:10", "@Temp", T[19:10] ),
        SELECTCOLUMNS ( T, "@Time", "19:20", "@Temp", T[19:20] ),
        SELECTCOLUMNS ( T, "@Time", "19:30", "@Temp", T[19:30] ),
        SELECTCOLUMNS ( T, "@Time", "19:40", "@Temp", T[19:40] ),
        SELECTCOLUMNS ( T, "@Time", "19:50", "@Temp", T[19:50] ),
        SELECTCOLUMNS ( T, "@Time", "20:00", "@Temp", T[20:00] ),
        SELECTCOLUMNS ( T, "@Time", "20:10", "@Temp", T[20:10] ),
        SELECTCOLUMNS ( T, "@Time", "20:20", "@Temp", T[20:20] ),
        SELECTCOLUMNS ( T, "@Time", "20:30", "@Temp", T[20:30] ),
        SELECTCOLUMNS ( T, "@Time", "20:40", "@Temp", T[20:40] ),
        SELECTCOLUMNS ( T, "@Time", "20:50", "@Temp", T[20:50] ),
        SELECTCOLUMNS ( T, "@Time", "21:00", "@Temp", T[21:00] ),
        SELECTCOLUMNS ( T, "@Time", "21:10", "@Temp", T[21:10] ),
        SELECTCOLUMNS ( T, "@Time", "21:20", "@Temp", T[21:20] ),
        SELECTCOLUMNS ( T, "@Time", "21:30", "@Temp", T[21:30] ),
        SELECTCOLUMNS ( T, "@Time", "21:40", "@Temp", T[21:40] ),
        SELECTCOLUMNS ( T, "@Time", "21:50", "@Temp", T[21:50] ),
        SELECTCOLUMNS ( T, "@Time", "22:00", "@Temp", T[22:00] ),
        SELECTCOLUMNS ( T, "@Time", "22:10", "@Temp", T[22:10] ),
        SELECTCOLUMNS ( T, "@Time", "22:20", "@Temp", T[22:20] ),
        SELECTCOLUMNS ( T, "@Time", "22:30", "@Temp", T[22:30] ),
        SELECTCOLUMNS ( T, "@Time", "22:40", "@Temp", T[22:40] ),
        SELECTCOLUMNS ( T, "@Time", "22:50", "@Temp", T[22:50] ),
        SELECTCOLUMNS ( T, "@Time", "23:00", "@Temp", T[23:00] ),
        SELECTCOLUMNS ( T, "@Time", "23:10", "@Temp", T[23:10] ),
        SELECTCOLUMNS ( T, "@Time", "23:20", "@Temp", T[23:20] ),
        SELECTCOLUMNS ( T, "@Time", "23:30", "@Temp", T[23:30] ),
        SELECTCOLUMNS ( T, "@Time", "23:40", "@Temp", T[23:40] ),
        SELECTCOLUMNS ( T, "@Time", "23:50", "@Temp", T[23:50] )
    )
VAR __Result = AVERAGEX ( __Unpivot__, [@Temp] )
RETURN
    __Result
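The UNION-of-SELECTCOLUMNS pattern above is a manual melt: each branch turns one time column into rows of (@Time, @Temp) pairs, and the branches are stacked. A compact Python analogue of that unpivot (with a miniature four-column version of T, made up for illustration):

```python
# Two days of a miniature T: two ID columns + four time columns.
rows = [
    {"Sensor": "TT-101", "Date": "2024-01-01", "00:00": 20.0, "00:10": 21.0, "00:20": 22.0, "00:30": 23.0},
    {"Sensor": "TT-101", "Date": "2024-01-02", "00:00": 24.0, "00:10": 25.0, "00:20": 26.0, "00:30": 27.0},
]
time_cols = [c for c in rows[0] if ":" in c]

# Each UNION branch projects one time column into (@Time, @Temp) rows:
unpivoted = [(r["Date"], t, r[t]) for r in rows for t in time_cols]

avg = sum(v for _, _, v in unpivoted) / len(unpivoted)   # AVERAGEX over the union
print(len(unpivoted), avg)  # → 8 23.5
```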

 

View solution in original post

@tamerj1 I have thought about an alternate approach to solve the above challenge, using Power Query instead of DAX.

 

As in the original file we do not have the option to model the data, we publish it as-is and set the maximum number of refreshes possible (depending on the licence).

 

We open a new .pbix file and, through the XMLA endpoint, bring the published data into the new .pbix file, where we unpivot all the value columns to get rid of the multi-column aggregations. From there it is pretty simple to build the visuals.

 

I have cheated a bit, as it has not been solved with DAX, but it works. I am also attaching a video link showing the steps.

 

 

https://1drv.ms/v/s!AkczoVaXetcahki934iuuSZHMUTB?e=etN80R 

View solution in original post

26 REPLIES
AlexisOlson
Super User

Yep. If Import is allowed rather than a live connection or DirectQuery, then unpivoting in Power Query is a great solution.

 

For future readers, here's my summary of @BIstvan's approach: Connect to the Power BI semantic model as an Analysis Services database and then do whatever you like (like unpivot) in Power Query. Here's a sample query:

let
  Source = AnalysisServices.Database(
    "powerbi://api.powerbi.com/v1.0/myorg/WorkspaceName", 
    "Power Bi Challenge - Round2", 
    [Query = "EVALUATE T_MEASURES"]
  )
in
  Source

In the above, WorkspaceName should be replaced with the name of the workspace where the semantic model is stored, "Power Bi Challenge - Round2" is the name of the dataset/semantic model, and Query is a simple EVALUATE statement that loads the T_MEASURES table.

nyarlathotep
Regular Visitor

The way I see it, I am 100% with @AlexisOlson's solution, simply because of the stellar performance, since we are talking about a production environment. Hardcoding 144 columns is a breeze with ChatGPT. But kudos to Greg for a smart approach; converting to CSV and parsing text is a great learning suggestion.

 

@AlexisOlson Right, this is simply MC Aggregations and/or DAX Unpivot that I mentioned in the first post. Kudos to you for coding it all out!



@AlexisOlson

This one allows for double aggregation if a GROUPBY step is added. I mention this because, in the original real-life problem, the data was energy-consumption values and the requirement was to get the average of the daily total consumption (the average of the sum). By adding a GROUPBY step, you have the flexibility to double-aggregate as you like.
Although I was not looking for a hardcoded solution, that was never listed as one of the conditions, and neither was the double-aggregation requirement. Therefore this solution is marked as an accepted solution.

AlexisOlson
Super User

These text-parsing solutions are clever but really slow. These are the timings I got for just the 31 days of January 2024.

@tamerj1 solution

AlexisOlson_0-1717184213222.png


@Greg_Deckler's [Measure 5] solution:

AlexisOlson_1-1717184292996.png

 

The hard-coded solution to this problem is super fast:

AlexisOlson_2-1717184421542.png

 

Avg Temp = 
DIVIDE (
    SUMX (
        T,
        T[00:00] + T[00:10] + T[00:20] + T[00:30] + T[00:40] + T[00:50] +
        T[01:00] + T[01:10] + T[01:20] + T[01:30] + T[01:40] + T[01:50] +
        T[02:00] + T[02:10] + T[02:20] + T[02:30] + T[02:40] + T[02:50] +
        T[03:00] + T[03:10] + T[03:20] + T[03:30] + T[03:40] + T[03:50] +
        T[04:00] + T[04:10] + T[04:20] + T[04:30] + T[04:40] + T[04:50] +
        T[05:00] + T[05:10] + T[05:20] + T[05:30] + T[05:40] + T[05:50] +
        T[06:00] + T[06:10] + T[06:20] + T[06:30] + T[06:40] + T[06:50] +
        T[07:00] + T[07:10] + T[07:20] + T[07:30] + T[07:40] + T[07:50] +
        T[08:00] + T[08:10] + T[08:20] + T[08:30] + T[08:40] + T[08:50] +
        T[09:00] + T[09:10] + T[09:20] + T[09:30] + T[09:40] + T[09:50] +
        T[10:00] + T[10:10] + T[10:20] + T[10:30] + T[10:40] + T[10:50] +
        T[11:00] + T[11:10] + T[11:20] + T[11:30] + T[11:40] + T[11:50] +
        T[12:00] + T[12:10] + T[12:20] + T[12:30] + T[12:40] + T[12:50] +
        T[13:00] + T[13:10] + T[13:20] + T[13:30] + T[13:40] + T[13:50] +
        T[14:00] + T[14:10] + T[14:20] + T[14:30] + T[14:40] + T[14:50] +
        T[15:00] + T[15:10] + T[15:20] + T[15:30] + T[15:40] + T[15:50] +
        T[16:00] + T[16:10] + T[16:20] + T[16:30] + T[16:40] + T[16:50] +
        T[17:00] + T[17:10] + T[17:20] + T[17:30] + T[17:40] + T[17:50] +
        T[18:00] + T[18:10] + T[18:20] + T[18:30] + T[18:40] + T[18:50] +
        T[19:00] + T[19:10] + T[19:20] + T[19:30] + T[19:40] + T[19:50] +
        T[20:00] + T[20:10] + T[20:20] + T[20:30] + T[20:40] + T[20:50] +
        T[21:00] + T[21:10] + T[21:20] + T[21:30] + T[21:40] + T[21:50] +
        T[22:00] + T[22:10] + T[22:20] + T[22:30] + T[22:40] + T[22:50] +
        T[23:00] + T[23:10] + T[23:20] + T[23:30] + T[23:40] + T[23:50]
    ),
    144 * COUNTROWS ( T )
)

 

This doesn't work if you want a distinct count aggregation, but SUM and MIN/MAX work just fine.
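The arithmetic behind that measure is worth spelling out: summing the 144 value columns per row and dividing by 144 times the row count is exactly the plain mean over every cell. A minimal Python sketch with hypothetical rows of 4 readings each (4 stands in for the real 144 columns):

```python
# Two hypothetical daily rows, 4 readings each (144 in the real table).
rows = [
    [21.0, 22.0, 23.0, 24.0],
    [20.0, 21.0, 22.0, 21.0],
]
n_cols = 4

# SUMX over rows of the per-row column sum, divided by n_cols * COUNTROWS:
avg = sum(sum(r) for r in rows) / (n_cols * len(rows))

# This equals the plain mean over every individual cell.
all_cells = [v for r in rows for v in r]
assert avg == sum(all_cells) / len(all_cells)
```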

tamerj1
Super User

@Greg_Deckler 
I cannot thank you enough for the great input you have provided on this subject. You really did a fantastic job that made me more than happy; I enjoyed reading every single paragraph and every single line of code. I personally learned much more than expected, and I hope others did/will too.
Here is my original solution to the problem, with the explanation:

Average Temperature = 
AVERAGEX (
    T_MEASURES,
    VAR T1 = CALCULATETABLE ( T_MEASURES )
    VAR String1 = TOCSV ( T1, 1, ",", TRUE )
    VAR Items1 = SUBSTITUTE ( String1, UNICHAR ( 10 ), "|" )
    VAR T2 = SELECTCOLUMNS ( { ( 1, 2 ) }, "@Headers", PATHITEM ( Items1, 1 ), "@Details", PATHITEM ( Items1, 2 ) )
    VAR T3 =
        GENERATE ( 
            T2, 
            VAR HeaderString = [@Headers]
            VAR HeaderItems = SUBSTITUTE ( HeaderString, ",", "|" )
            VAR DetailString = [@Details]
            VAR DetailItems = SUBSTITUTE ( DetailString, ",", "|" )
            VAR Length = PATHLENGTH ( HeaderItems )
            VAR T4 = GENERATESERIES ( 1, Length, 1 )
            RETURN
                FILTER ( 
                    SELECTCOLUMNS ( 
                        T4, 
                        "@Header", PATHITEM ( HeaderItems, [Value] ),
                        "@Detail", PATHITEM ( DetailItems, [Value] ) 
                    ),
                    CONTAINSSTRING ( [@Header], ":" )
                )
        )
    RETURN
        AVERAGEX ( T3, VALUE ( [@Detail] ) )
)
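For readers who want to trace what the measure is doing, here is a loose Python analogy (not the author's code) of the TOCSV parsing: split the CSV text into a header row and a detail row, pair the items up position by position, and keep only the columns whose header contains ":". The sensor tag and readings below are hypothetical:

```python
# A one-sensor, one-day CSV as TOCSV with headers would render it:
# attribute columns first, then the "HH:MM" value columns (3 shown here).
raw = "Sensor Tag,Date,00:00,00:10,00:20\nTT-101,2024-01-01,21.5,21.7,21.9"

header_row, detail_row = raw.split("\n")
headers = header_row.split(",")
details = detail_row.split(",")

# Keep only columns whose header contains ":", mirroring the
# CONTAINSSTRING([@Header], ":") filter in the measure.
values = [float(d) for h, d in zip(headers, details) if ":" in h]
avg = sum(values) / len(values)
```

The filter on ":" is what lets the measure ignore however many attribute columns precede the time columns, which is exactly why this version survives column reordering.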

4.png


@tamerj1  I have thought about an alternative approach to solve the above challenge.

 

As in the original file we do not have the option to model the data, we publish it as-is and set the maximum number of refreshes possible (depending on the licence).

 

We open a new .pbix file and then, through the XMLA endpoint, we bring the published data into the new .pbix file, where we unpivot all the value columns so we get rid of the multi-column aggregations. From there on it is pretty simple to build the visuals.
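The unpivot this approach performs in Power Query can be sketched in plain Python (hypothetical sensor and readings): each wide row becomes one long row per time column, after which the daily average is an ordinary group-by mean.

```python
from collections import defaultdict

# A wide row: two attribute columns, then the time columns (2 shown).
headers = ["Sensor Tag", "Date", "00:00", "00:10"]
rows = [["TT-101", "2024-01-01", 21.0, 23.0]]

# Unpivot: one output row per (sensor, date, time, value).
long_rows = [
    (r[0], r[1], h, v)
    for r in rows
    for h, v in zip(headers[2:], r[2:])
]

# Daily average per date is now a trivial aggregation.
totals = defaultdict(list)
for sensor, date, time, temp in long_rows:
    totals[date].append(temp)
daily_avg = {d: sum(v) / len(v) for d, v in totals.items()}
```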

 

I have cheated a bit, as it has not been solved with DAX, but it works. I am also attaching a video link showing the steps.

 

 

https://1drv.ms/v/s!AkczoVaXetcahki934iuuSZHMUTB?e=etN80R 

@BIstvan 

This is absolutely amazing!!! I didn't know you could do that! Actually I was hoping to receive this type of solution, and you just made my wish come true. Thank you so much for your participation.

I will add all the accepted solutions to the original post along with links and attachments. I might also post the same on LinkedIn. I hope you don't mind my tagging you and including your solution?

@tamerj1 it would be my pleasure.

 

@tamerj1 Very cool, a lot of the same concepts and patterns! I need to review this solution in the file!


Follow on LinkedIn
@ me in replies or I'll lose your thread!!!
Instead of a Kudo, please vote for this idea
Become an expert!: Enterprise DNA
External Tools: MSHGQM
YouTube Channel!: Microsoft Hates Greg
Latest book!:
The Definitive Guide to Power Query (M)

DAX is easy, CALCULATE makes DAX hard...
Greg_Deckler
Super User

@tamerj1 Here's another DAX approach that seems better than the other two I listed:

Measure = 
    VAR __Text = SUBSTITUTE( TOCSV('T_MEASURES',,,0), ",", "|" )
    VAR __Table = 
        ADDCOLUMNS(
            GENERATESERIES(3, 144, 1),
            "__Value", PATHITEM( __Text, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table, [__Value] )
RETURN
    __Result


@Greg_Deckler 

This one looks fantastic! 😍 I would like to see it working in the chart. Have you tried it? Otherwise, I'll do it tomorrow morning.

@tamerj1 I did try it and it seems to work great. Maybe a tad slow I suppose. I'll attach PBIX below signature.



@Greg_Deckler 

Please allow until tomorrow morning to check the sample file. I'm about to jump to my bed. I'm pretty sure it does satisfy the challenge requirement in full. Yet I have some comments. Will get back to you soon. Have a great day!

@tamerj1 Sounds good, minor tweak with PBIX attached again. I have to have the counter go up to 146, not 144. Duh.

Measure = 
    VAR __Text = SUBSTITUTE( TOCSV('T_MEASURES',,,0), ",", "|" )
    VAR __Table = 
        ADDCOLUMNS(
            GENERATESERIES(3, 146, 1),
            "__Value", PATHITEM( __Text, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table, [__Value] )
RETURN
    __Result
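The 3-to-146 range follows from the flattening: with headers suppressed, each TOCSV row contributes 2 attribute items plus 144 readings, so the readings sit at 1-based positions 3 through 146. A small Python sketch of that index arithmetic, with hypothetical values:

```python
# Two attribute columns ahead of the value columns; readings occupy
# 1-based positions n_attr+1 .. n_attr+n_values in the flattened row.
n_attr, n_values = 2, 144
items = ["TT-101", "2024-01-01"] + ["21.5"] * n_values
path = "|".join(items)  # the SUBSTITUTE(..., ",", "|") step

def path_item(path, i):
    """1-based lookup, like DAX PATHITEM."""
    return path.split("|")[i - 1]

readings = [float(path_item(path, i)) for i in range(n_attr + 1, n_attr + n_values + 1)]
avg = sum(readings) / len(readings)
```

This also makes the later fix obvious: adding a third attribute column shifts every position by one, which is why the series bounds have to move in step.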


@Greg_Deckler 

Looks like you are the only one who has definitely hit the point by utilizing TOCSV to handle this situation. Your genius solution deserves, without a doubt, to be marked as the accepted solution for this challenge.

 

However, I'm not going to make it easy for you 😁. Your solution works like magic in your file, but seems to have a little trouble in mine. It looks like someone has changed the order of the columns, or added another attribute column like "System ID", "Sensor Location" or just "Row ID". Not sure who did that 🤣

1.png

@tamerj1 Yeah, the order of the columns is important, or at least you need to know which column goes where. I would need to know the specifics of your file to come up with a specific solution, but I will try giving a general solution a shot. For example, if you just added an additional column of information to the "front", you could change your GENERATESERIES from (3, 146) to (4, 147). I have some ideas on a general "can't fail" solution but need to test it out.



@Greg_Deckler 

True, but I was hoping not to have to change the code. Does that sound greedy? Cause I am greedy 🤑

@tamerj1 Yeah, the problem is that there isn't a SELECTCOLUMNSEXCEPT function that would select all the columns except those you specify. Even if you had that function, you would still have to modify the code whenever you add an additional column. The only way to achieve no code revisions would be to hard-code all of the desired columns into a SELECTCOLUMNS statement, or to come up with a SELECTCOLUMNSIN-style function where you could pass a table and select any column whose name appears in a row value. It is possible to create your time table (one is included in the PBIX), but I couldn't pull off the last piece, where I used it as the selection criteria for columns.

 

I came up with a couple of additional approaches to lessen the burden of modifying the code. The PBIX is attached as well. These approaches also seem to make the code more stable across chart types in my testing.

 

This one abuses the PATH functions even more and uses SUBSTITUTE to find the 2nd comma and replace it. If you add an additional column to the front, all you need to do is change which instance of the comma you are replacing. Note that this version is limited to cases where you will only ever have a single row of data to average.

Measure 2 = 
    VAR __Text = SUBSTITUTE( TOCSV( 'T_MEASURES', , ",", 1 ), UNICHAR(10), "|" )
    VAR __1 = PATHITEM( __Text, 1 )
    VAR __2 = PATHITEM( __Text, 2 )
    VAR __3 = SUBSTITUTE( __2, ",", "|", 2 )
    VAR __4 = PATHITEM( __3, 2 )
    VAR __Path = SUBSTITUTE( __4, ",", "|" )
    VAR __Table = 
        ADDCOLUMNS(
            GENERATESERIES(1, 144, 1),
            "__Value", PATHITEM( __Path, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table, [__Value] )
RETURN
    __Result
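Measure 2's trick, restated in a plain-Python analogy (the attribute values below are hypothetical): in a detail row with two leading attribute columns, replace the 2nd comma with a different delimiter, and everything after that delimiter is the block of readings.

```python
# One detail row: two attribute columns, then the readings (4 for brevity).
detail = "TT-101,2024-01-01," + ",".join(["22.0"] * 4)

def substitute_nth(text, old, new, n):
    # Python's str.replace has no "nth occurrence" argument, so locate it
    # manually; this mimics SUBSTITUTE(text, old, new, n) in DAX.
    idx = -1
    for _ in range(n):
        idx = text.find(old, idx + 1)
    return text[:idx] + new + text[idx + len(old):]

# Replace the 2nd comma, split on it, and keep the part after it,
# which drops the attribute columns in one stroke.
tail = substitute_nth(detail, ",", "|", 2).split("|")[1]
values = [float(v) for v in tail.split(",")]
avg = sum(values) / len(values)
```

With a third attribute column, the only change is replacing the 3rd comma instead of the 2nd, which is the "modify the instance" adjustment described above.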

 

This version moves all the "pre" columns to the end before grabbing the data. So, if you add a "pre" column, you will need to add an additional DISTINCT and a corresponding SUBSTITUTEWITHINDEX. The index is added to the end of the table, so once this is done you can just grab items 1-144 every time. I tried multiple ways to eliminate several columns at a time with this approach, but the function refused to cooperate.

Measure 3 = 
    VAR __Table = 'T_MEASURES'
    VAR __Sensors = DISTINCT('T_MEASURES'[Sensor Tag])
    VAR __Dates = DISTINCT('T_MEASURES'[Date])
    VAR __Table1 = SUBSTITUTEWITHINDEX( __Table, "Index", __Sensors, [Sensor Tag], ASC)
    VAR __Table2 = SUBSTITUTEWITHINDEX( __Table1, "Index1", __Dates, [Date], ASC)
    VAR __Text = SUBSTITUTE( TOCSV(__Table2,,,0), ",", "|" )
    VAR __Table3 = 
        ADDCOLUMNS(
            GENERATESERIES(1, 144, 1),
            "__Value", PATHITEM( __Text, [Value]) + 0
        )
    VAR __Result = AVERAGEX( __Table3, [__Value] )
RETURN
    __Result
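Loosely, the SUBSTITUTEWITHINDEX step amounts to reordering the row so the attribute columns always land after the readings; once that holds, the first 144 flattened items are always the values. A plain-Python analogy with a hypothetical column set:

```python
# A wide row: two attribute columns plus two time columns.
headers = ["Sensor Tag", "Date", "00:00", "00:10"]
row = ["TT-101", "2024-01-01", "21.0", "23.0"]

# Stable sort: headers containing ":" (the time columns) come first,
# attribute columns are pushed to the end.
order = sorted(range(len(headers)), key=lambda i: ":" not in headers[i])
reordered = [row[i] for i in order]

# The readings now always occupy the leading positions.
n_values = sum(":" in h for h in headers)
values = [float(v) for v in reordered[:n_values]]
avg = sum(values) / len(values)
```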

 


