Hi, first of all sorry for my English, I will try my best.
Our system has a bug: it sometimes duplicates almost the same information two or three times. It looks something like this:
| ID | created_at | customer_id | Start_date | End_date | work_time |
|---|---|---|---|---|---|
| 1 | 2021.01.01 10:48 | 13 | 2021.01.01 | 2021.01.21 | 21 |
| 2 | 2021.01.01 10:48 | 13 | 2021.01.01 | 2021.01.21 | 21 |
| 3 | 2021.01.01 10:50 | 13 | 2021.01.01 | 2021.01.21 | 21 |
| 4 | 2021.01.05 11:12 | 5 | 2021.01.05 | 2021.01.08 | 4 |
| 5 | 2021.01.05 11:13 | 5 | 2021.01.05 | 2021.01.08 | 4 |
For work_time I created a new column with =DATEDIFF(Start_date, End_date, DAY) + 1.
The task is to calculate the average work time, but I can't use the AVERAGE function because the math doesn't work out:
(21 + 4) / 2 = 12.5, while (21 + 21 + 21 + 4 + 4) / 5 = 14.2
Maybe you have ideas on how to remove the duplicated rows, or maybe there is another solution for my task.
And again, sorry for my English. :)
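The arithmetic above can be checked with a short script (Python used purely for illustration; the rows are the sample data from the table, keyed on customer_id, Start_date and End_date):

```python
# Sample rows from the table above: (customer_id, start_date, end_date, work_time)
rows = [
    (13, "2021.01.01", "2021.01.21", 21),
    (13, "2021.01.01", "2021.01.21", 21),
    (13, "2021.01.01", "2021.01.21", 21),
    (5, "2021.01.05", "2021.01.08", 4),
    (5, "2021.01.05", "2021.01.08", 4),
]

# A naive average counts every duplicate: (21*3 + 4*2) / 5 = 14.2
naive = sum(r[3] for r in rows) / len(rows)

# De-duplicating first gives the intended result: (21 + 4) / 2 = 12.5
unique = set(rows)
deduped = sum(r[3] for r in unique) / len(unique)

print(naive, deduped)  # 14.2 12.5
```

This is exactly why AVERAGE over the raw column gives 14.2 instead of 12.5: the duplicates inflate both the sum and the row count.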
@AndrejZevzikov , create a measure like:
AVERAGEX ( SUMMARIZE ( 'Table', 'Table'[customer_id], 'Table'[created_at], 'Table'[work_time] ), [work_time] )
Or you can delete the duplicates in Power Query:
https://www.youtube.com/watch?v=Hc5bIXkpGVE
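Note that SUMMARIZE here keeps created_at as a grouping column, so the two 10:48 rows collapse into one while the 10:50 row survives as its own group. With the sample data the measure still returns 12.5, as this sketch shows (Python for illustration; grouping columns as in the measure above):

```python
# Grouping columns used by the SUMMARIZE: (customer_id, created_at, work_time)
rows = [
    (13, "2021.01.01 10:48", 21),
    (13, "2021.01.01 10:48", 21),
    (13, "2021.01.01 10:50", 21),
    (5, "2021.01.05 11:12", 4),
    (5, "2021.01.05 11:13", 4),
]

# Distinct combinations: (13, 10:48, 21), (13, 10:50, 21), (5, 11:12, 4), (5, 11:13, 4)
groups = set(rows)

# AVERAGEX over the summarized rows: (21 + 21 + 4 + 4) / 4 = 12.5
avg = sum(g[2] for g in groups) / len(groups)
print(avg)  # 12.5
```

If created_at should not split near-duplicates into separate groups, drop it from the SUMMARIZE column list.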
Your best option is to delete the duplicate rows in Power Query, as @amitchandak rightly suggests.
If you cannot access Power Query, you can create a new table to use as your "work" table (and ignore the original completely) using DAX. In the ribbon, under Modeling, select "New table" and type the DAX equivalent of your table (do not include the ID column, since it is unique and would therefore just recreate the table you already have):
New Table =
SUMMARIZE (
    'Old Table',
    'Old Table'[created_at],
    'Old Table'[customer_id],
    'Old Table'[Start_date],
    'Old Table'[End_date],
    'Old Table'[work_time]
)
and you will get the de-duplicated table.
Beware that you will still have two rows for customer 13 (created at 10:48 and 10:50) which are the same except for the time they were created. If you know these are duplicates, you will have to define a business logic to identify them and then delete them.
To add a new ID column, choose "New column" in the ribbon and type:
ID =
RANK.EQ ( 'New Table'[created_at], 'New Table'[created_at], ASC )
Now you have a clean table to work with, and you can ignore the original.
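One caveat: RANK.EQ assigns equal ranks to ties, so this ID is only unique if created_at is unique in the new table (which it is in the sample data). A rough Python sketch of the same ascending, ties-share-lowest-rank behaviour, for illustration only:

```python
from datetime import datetime

# created_at values from the de-duplicated table above
created_at = [
    "2021.01.01 10:48",
    "2021.01.01 10:50",
    "2021.01.05 11:12",
    "2021.01.05 11:13",
]

def rank_eq(values):
    """Ascending RANK.EQ-style ranks: ties share the lowest rank."""
    parsed = [datetime.strptime(v, "%Y.%m.%d %H:%M") for v in values]
    ordered = sorted(parsed)
    # list.index returns the first position, so equal values get equal ranks
    return [ordered.index(p) + 1 for p in parsed]

print(rank_eq(created_at))  # [1, 2, 3, 4]
```

If two rows shared the same created_at, they would get the same ID, so make sure any remaining duplicates are resolved first.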
Proud to be a Super User!
Paul on Linkedin.
Nice, it seems to be working perfectly.
Thanks!