Hi there,
I am using Power BI Desktop version 2.130.930.0 64-bit (June 2024).
I have a table (data set) of about 60K records, and I need to create a new column to see which IDs were removed by comparing two dates' data sets (see the raw tables below for examples).
I tried to create the new column using DAX (syntax below) in Table view, but even for such a small data set I keep getting the "not enough memory" error.
Is there a way to increase the memory used by Power BI? If so, how?
Or should I create the new column in "Transform Data" (Power Query Editor / Power Query M) instead?
DAX syntax (tested on a smaller data set and proven to work):
The new column is
Here is the raw table:

| id | bu | type | date |
|---|---|---|---|
| 208 | 10 | red | 6/18/2024 |
| 209 | 10 | red | 6/18/2024 |
| 210 | 10 | red | 6/18/2024 |
| 211 | 10 | red | 6/18/2024 |
| 209 | 10 | red | 6/19/2024 |
| 210 | 20 | red | 6/19/2024 |
| 211 | 10 | blue | 6/19/2024 |
| 212 | 10 | red | 6/19/2024 |
Here is the final table I want:

| id | bu | type | date | New_Column |
|---|---|---|---|---|
| 208 | 10 | red | 6/18/2024 | -1 |
| 209 | 10 | red | 6/18/2024 | 0 |
| 210 | 10 | red | 6/18/2024 | 0 |
| 211 | 10 | red | 6/18/2024 | 0 |
| 209 | 10 | red | 6/19/2024 | 0 |
| 210 | 20 | red | 6/19/2024 | 0 |
| 211 | 10 | blue | 6/19/2024 | 0 |
| 212 | 10 | red | 6/19/2024 | 0 |
id - whole number
bu - whole number
type - text
date - date
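For illustration, one way such a calculated column could be written; this is only a sketch, not the formula that raised the error, and the table name 'Table (7)' is taken from the accepted reply below:

New_Column =
VAR _current_id = 'Table (7)'[id]
VAR _next_date = 'Table (7)'[date] + 1
VAR _last_date = MAX ( 'Table (7)'[date] ) // no filter context in a calculated column, so this is the table-wide maximum
VAR _appears_next_day =
    NOT ISEMPTY (
        FILTER (
            'Table (7)',
            'Table (7)'[id] = _current_id
                && 'Table (7)'[date] = _next_date
        )
    )
RETURN
    IF ( 'Table (7)'[date] = _last_date || _appears_next_day, 0, -1 )

A per-row FILTER over the full 60K-row table like this is the kind of pattern that can exhaust memory during refresh, which is what the replies below address.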
Solved!
@zenz - Instead of a column, you could create this as a measure. The DAX needs to change slightly, but the below should work for you:
VAR _id =
    SELECTEDVALUE ( 'Table (7)'[id] )
VAR next_date =
    SELECTEDVALUE ( 'Table (7)'[date] ) + 1
VAR current_date =
    SELECTEDVALUE ( 'Table (7)'[date] )
// ids that also appear on the following date
VAR id_next =
    CALCULATETABLE (
        VALUES ( 'Table (7)'[id] ),
        REMOVEFILTERS ( 'Table (7)' ),
        'Table (7)'[date] = next_date,
        'Table (7)'[id] = _id
    )
// latest date in the whole table
VAR max_date =
    CALCULATE ( MAX ( 'Table (7)'[date] ), REMOVEFILTERS ( 'Table (7)' ) )
RETURN
    SWITCH (
        TRUE (),
        _id IN id_next, 0,            // id still present on the next date
        current_date = max_date, 0,   // last date: nothing later to compare against
        NOT ( _id IN id_next ), -1    // id disappeared on the next date
    )
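As a side note, a small DAX query like the following (in DAX query view or DAX Studio) can be used to check the measure against the sample data; the measure name [New_Column Measure] is only an assumed name for the definition above:

EVALUATE
ADDCOLUMNS (
    SUMMARIZE ( 'Table (7)', 'Table (7)'[id], 'Table (7)'[date] ),
    "New_Column", [New_Column Measure] // context transition gives each row its own id/date filter
)
ORDER BY 'Table (7)'[date], 'Table (7)'[id]

Each row should come back with 0 or -1, matching the "final table" in the question.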
Screenshot below shows it's working for me:
If this works for you, please mark it as the solution.
Thank you so much. You are a superstar!
This should also solve another question I had for you. Really appreciate it!
@zenz - Please remember to accept this as the solution; it helps others learning DAX find answers to their problems.
Hello @zenz ,
I believe adjusting memory usage should be an option.
However, I am just sharing my experience from when I faced this memory issue before. If you have a big data set with complex calculations, you probably want to slice the data first.
In your id_prior variable, you are comparing 'table'[date] with date_prior and 'table'[id] with _id, where both date_prior and _id are variables you defined earlier. This has a big performance impact, since that DAX compares every value in those columns (this is my understanding from my previous issue; I might be wrong, though).
I wrote exactly the same DAX as yours and got the low-memory issue as well.
To solve this, in my case I sliced the data first with SUMMARIZE before applying or comparing the variables, so the comparison would not run against all the data.
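A minimal sketch of this kind of SUMMARIZE-based slicing, reusing the next-date check from the example tables above (the table name and the exact logic are assumptions, and whether it actually reduces memory pressure depends on the model):

New_Column =
VAR _current_id = 'Table (7)'[id]
VAR _next_date = 'Table (7)'[date] + 1
VAR _last_date = MAX ( 'Table (7)'[date] )
// Slice the table down to distinct id/date pairs before comparing,
// instead of filtering the full table with all of its columns.
VAR _id_dates =
    SUMMARIZE ( 'Table (7)', 'Table (7)'[id], 'Table (7)'[date] )
VAR _appears_next_day =
    NOT ISEMPTY (
        FILTER (
            _id_dates,
            'Table (7)'[id] = _current_id
                && 'Table (7)'[date] = _next_date
        )
    )
RETURN
    IF ( 'Table (7)'[date] = _last_date || _appears_next_day, 0, -1 )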
But if your data is already clean, then perhaps adding RAM might help.
Other Power BI experts might have a better solution.
Hope this will help you.
Thank you.
Thank you, Irwan.
I am very new to DAX (one week, self-taught). Would you mind recomposing the DAX code to show how you would slice the data using SUMMARIZE? I appreciate it!