Hi all
I have two tables in my dataset. The first, called VW_FIRE_IDEAL_TICKETS, contains a list of documents in which the field SEQID is a unique identifier. This table has about 25 columns and currently 16,000 records (growing by approx. 20 per day).
The second table, FIRE_FIELD_TICKET_CHARGES_ONLY, has around 3.5m rows, most of which are irrelevant because they are either old or relate to another division of our company. The only rows I want to keep are those whose SEQID matches back to the first table. That should pull approx. 200,000 rows, growing by around 100-300 per day.
Does anybody know how to filter this table efficiently? Most of the solutions I've found online suggest merging the tables or looking up against a master list, but because of the size of the dataset these have either resulted in errors or sent Power Query into a seemingly endless loop. As a starting point I've filtered on the date, which cut out millions of rows, but I'm still left with around 450,000, the majority of which I know are not relevant.
Any help would be appreciated! Thank you!
Hey @davidoz,
You can try Table.Contains.
Refer to: Filter Table by Another in Power Query (aaronjgrossman.com)
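If it helps, here is a minimal sketch of that idea, assuming both tables are already loaded as separate queries and SEQID is the key column in each. It uses a List.Buffer/List.Contains variant of the filter-by-another-table approach in the link; buffering the key list holds it in memory so it isn't re-evaluated for every row of the large table:

let
    // Pull the SEQID column from the small tickets table and buffer it in memory
    // (table names are taken from the post; adjust to match your own query names)
    ValidIds = List.Buffer(VW_FIRE_IDEAL_TICKETS[SEQID]),

    // The large charges table
    Source = FIRE_FIELD_TICKET_CHARGES_ONLY,

    // Keep only the rows whose SEQID appears in the buffered key list
    Filtered = Table.SelectRows(Source, each List.Contains(ValidIds, [SEQID]))
in
    Filtered

An alternative is a Merge (inner join) on SEQID against the tickets query; either way, keeping your existing date filter as the first step reduces the number of rows that have to be checked.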