Hello
I am trying to move from QlikSense to Power BI. I am looking for an alternative to pick(wildmatch(4010,40??,50??,30*),1,2,3) in QlikSense.
With this formula I can do wildcard mapping. For example, the result of pick(wildmatch(4010,40??,50??,30*),1,2,3) is 1: 4010 falls in the first group (40??), so the first element of the pick list is returned.
Is there anything in Power BI that works like this?
In VBA I was able to do this the other way around, with a loop like:
For Each member In patternArray
    If "4010" Like member Then
        ' return the result for this pattern
    End If
Next member
I would appreciate it if you could help me.
regards
hi, @smo
You could use the SEARCH function together with a nested IF or SWITCH function to recreate the pick/wildmatch logic in a single formula.
for example:
New Column 2 =
SEARCH(
    SWITCH(
        TRUE(),
        SEARCH( "40", [Column], 1, 0 ) > 0, 1,
        SEARCH( "50", [Column], 1, 0 ) > 0, 2,
        SEARCH( "30", [Column], 1, 0 ) > 0, 3
    ),
    "123", 1, 0
)
Result:
And here is a sample pbix file; please try it.
Regards,
Lin
Hello, thanks for the response. I managed to use the following DAX formula to get things working:
PB_Account_F9991 =
VAR Matches =
    TOPN(
        1,
        CALCULATETABLE(
            GENERATE(
                'SUN_YTD',
                FILTER(
                    ACCOUNT_F9991,
                    SEARCH(
                        [ACCOUNT_F9991_EXTERNAL],
                        SUN_YTD[KeyAccount],
                        1,
                        0
                    ) > 0
                )
            )
        )
    )
RETURN
    CONCATENATEX( Matches, [ACCOUNT_F9991_INTERNAL], "," )
But I would like to move this to M, as the function takes too much time in DAX and I need to reference other calculated fields from it.
Is there any way to move this to Power Query?
Is there any way to take the resulting query, including calculated columns from another Power BI import, and modify it further with Power Query?
Regards
Sofia
hi, @smo
It is difficult to do this in Power Query, but we may be able to optimize the formula. Could you share some sample data and your expected output so we can test?
Regards,
Lin
Hello
I am trying to do this with R at the moment.
I have a table with 2 columns. It is a long list of options; I will show 4 examples, but the real list is longer.
WildCardColumn | Mapping |
---|---|
*_I*_???_007870000000100 | 4111 |
*_???_007870000000100 | 4112 |
*_I?*_???_006330000000005 | 5111 |
*_I?*_???_0063300000????? | 5116 |
My other table holds the data:
Entity | Account | Counterparty | Amount |
---|---|---|---|
B22 | 007870000000100 | IR22 | 2000 |
B22 | 007870000000100 | T222 | 4000 |
B22 | 006330000000005 | R222 | 4500 |
B22 | 006330000000007 | IR22 | 5000 |
I want this as the result:
Entity | Account | Counterparty | Amount | Result |
---|---|---|---|---|
B22 | 007870000000100 | IR22 | 2000 | 4111 |
B22 | 007870000000100 | T222 | 4000 | 4112 |
B22 | 006330000000005 | R222 | 4500 | 5111 |
B22 | 006330000000007 | IR22 | 5000 | 5116 |
I am thinking of using R.
To convert the wildcards to regular expressions I am using the formula below:
outputRegEx <- within(dataset, {RegEx_WildCardColumn <- glob2rx(dataset$WildCardColumn)})
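glob2rx() (from the base utils package) turns every * into .* and every ? into a single-character match, so the first wildcard above should convert roughly like this:
# glob2rx() rewrites shell-style wildcards as a regular expression
glob2rx("*_I*_???_007870000000100")
# expected result: "^.*_I.*_..._007870000000100$"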
I am able to add a column with the matched pattern in the way below (note that after glob2rx the ? wildcards become . in the regular expression):
library("stringr")
# regular expression form of one of the converted wildcards
pattern <- "^.*_..._0062102040.....$"
# extract the part of KeyAccount that matches the pattern (NA when there is no match)
cleanRegEx <- str_extract(dataset$KeyAccount, pattern)
outputRegEx <- within(dataset, {RegEx <- cleanRegEx})
My idea is to loop through the wildcard table, take each regular expression, filter the data table with it, and build a new data frame with the mapping as a new column.
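Something like the sketch below is what I have in mind (the column names, the KeyAccount format and the sample values are only made-up examples, not my real data, and I have not tested this):
# illustrative only: map each key to the code of the first wildcard pattern it matches
mapping <- data.frame(
    WildCardColumn = c("*_I*_???_007870000000100", "*_???_007870000000100"),
    Mapping = c("4111", "4112"),
    stringsAsFactors = FALSE)
keys <- data.frame(
    KeyAccount = c("B22_IR22_001_007870000000100", "B22_T222_001_007870000000100"),
    stringsAsFactors = FALSE)
# convert every wildcard to a regular expression once
mapping$RegEx <- glob2rx(mapping$WildCardColumn)
# for every key, take the code of the first pattern that matches (NA when nothing matches)
keys$Result <- sapply(keys$KeyAccount, function(k) {
    hits <- which(sapply(mapping$RegEx, grepl, x = k))
    if (length(hits) > 0) mapping$Mapping[hits[1]] else NA_character_})
With this the order of the mapping rows matters, since the first matching pattern wins.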
Could you guide me on this?
Regards
Hello
I do not know if it is the best solution from a performance point of view, but I solved the issue using the R code below.
First, on the mapping table I apply the following R function to transform the wildcards into regular expressions:
# 'dataset' holds the input data for this script (here: the mapping table)
# convert the wildcard patterns into regular expressions
outputRegEx <- within(dataset, {RegEx_ACCOUNT_F9991_EXTERNAL <- glob2rx(dataset$ACCOUNT_F9991_EXTERNAL)})
Then, on the table with the data, I use the following code. It loops through the mapping table, takes each regular expression, filters the data table with it, and tags the filtered rows with the mapping code in a second column. The matched rows are then removed from the data frame so they cannot be mapped again by a later pattern, and at the end the rows without any match are appended.
Hope this helps other people with the same issue.
# 'dataset' holds the input data for this script (the table with the data)
# 'dataset2' holds the mapping table: regular expression in column 1, code in column 2
library("stringr")
# empty frame that collects the matched rows
subData <- data.frame(
    ENTITY = character(),
    KeyAccount = character(),
    BCP_ACCOUNT = character())
for (i in 1:nrow(dataset2)) {
    patternLook <- dataset2[i, 1]    # regular expression to look for
    patternResult <- dataset2[i, 2]  # mapping code to assign on a match
    isValid <- function(x) {grepl(patternLook, as.character(x), ignore.case = TRUE)}
    # keep the matching rows and tag them with the mapping code
    subData <- rbind(subData, subset(within(dataset, {BCP_ACCOUNT <- toString(patternResult)}), isValid(dataset$KeyAccount)))
    # drop the matched rows so a later, less specific pattern cannot map them again
    dataset <- dataset[!(dataset$KeyAccount %in% subset(dataset, isValid(dataset$KeyAccount))$KeyAccount), ]
}
# whatever is left never matched any pattern
subData <- rbind(subData, within(dataset, {BCP_ACCOUNT <- "NA"}))
It is important to pass the second data frame in next to the first one: just add a column step and then add the second data frame, so the Run R Script step receives both tables, e.g. [dataset=#"Added Custom", dataset2=ACCOUNT_F9991].
Make sure both of them go into the same Run R Script step.
Refer to: https://dax.tips/2017/05/23/power-bi-and-regular-expressions/