Hello, I am very new to Synapse and have been tasked with getting data from an Azure cloud database into a dedicated SQL pool. So far I have created a copy task to bring the data in, and it is now populating around 20 tables. The issue now is duplication: the data keeps growing with every run. What is the best option for deduplicating the tables? Since a full data refresh is needed daily, should I create pipeline tasks to drop the tables each day, and if so, how can this be done in the simplest and most efficient way?
All assistance gratefully received.
Thank you
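One common pattern for a daily full refresh is to empty each target table just before the copy runs (a pre-copy script on the Copy activity), rather than dropping and recreating the tables. The sketch below is only illustrative: the table list and helper names are hypothetical, and in a real pipeline the statements would be supplied to the Copy activity or a script task rather than built in Python.

```python
# Hypothetical list of target tables; in practice this might come from a
# Lookup activity or a config file driving a ForEach loop.
TABLES = ["dbo.Customers", "dbo.Orders", "dbo.Products"]

def precopy_script(table: str) -> str:
    """Build a pre-copy statement so each daily run starts from an empty
    table. TRUNCATE is a fast metadata operation, unlike row-by-row DELETE."""
    return f"TRUNCATE TABLE {table};"

# One statement per table, to be executed before each table's copy step.
scripts = [precopy_script(t) for t in TABLES]
for s in scripts:
    print(s)
```

Truncating instead of dropping keeps table definitions, distributions, and permissions intact on the dedicated pool, so only the data is refreshed.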
Hi @Elisa112,
Perhaps you can try invoking the query editor for further data cleanup in the data pipeline, if that suits your requirements:
Use a dataflow in a pipeline - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng
@Elisa112 You need to implement an incremental copy algorithm so you only ingest new data every day instead of ingesting everything from scratch again, day after day.
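The usual way to implement an incremental copy is a high-watermark pattern: persist the largest change timestamp seen so far, and on each run copy only rows modified after it. A minimal sketch of the selection logic, assuming each source row carries a `modified` timestamp column (an assumption; the real column name depends on the source schema):

```python
def incremental_batch(rows, watermark):
    """Return only rows changed since `watermark`, plus the new watermark
    to persist for the next run (unchanged if nothing new arrived)."""
    new_rows = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark
```

In a pipeline, the stored watermark feeds the source query's WHERE clause (e.g. `WHERE modified > @watermark`), and the new watermark is written back after a successful load.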