Hello everyone,
I’m currently working on a data pipeline project in Microsoft Fabric that involves processing transactional data emailed to me monthly, and I’d be grateful for any advice on the best approach to structuring this pipeline.
Thanks in advance for any insights or experiences you can share.
Hi @HamidBee ,
If you are more familiar with SQL you can go with the Stored procedure activity, but for more complex transformations it is better to use Spark.
Stored Procedures vs. Notebooks for Transformations:
For complex transformations it is typically better to use a Notebook (PySpark). You also get the flexibility to configure Spark settings for the workload, such as the number of executors and dynamic scaling, for better performance and cost efficiency.
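As a minimal sketch of what the notebook step could look like (assuming the monthly file has already been landed in the Lakehouse Files area; the path "Files/incoming/transactions_2024_11.csv" and the table name "transactions_clean" are placeholders for illustration):

```python
# Fabric notebooks provide a ready-made `spark` session.
from pyspark.sql import functions as F

# Read the landed monthly file (placeholder path).
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/incoming/transactions_2024_11.csv")
)

# Example cleanup: rename a column, cast the amount, drop exact duplicates.
clean_df = (
    raw_df
    .withColumnRenamed("TxnAmount", "amount")
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates()
)

# Append the result to a Delta table in the Lakehouse (placeholder name).
clean_df.write.mode("append").format("delta").saveAsTable("transactions_clean")
```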
Monthly File Handling:
Data pipelines support triggers (a preview feature, so check with your internal team): you can fire the trigger on file arrival in Azure Blob Storage and design the file name so it can feed dynamic pipeline parameters. If you don't want to use triggers, you can run the pipeline on a monthly schedule and parameterize the file name it picks up.
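One way to handle the parameterized pick-up is to pass a run date from the pipeline into the notebook and build the file path there. The parameter name "run_date", the folder layout, and the naming convention transactions_YYYY_MM.csv below are assumptions for illustration:

```python
# Parameters cell: in Fabric this value can be overridden by the
# pipeline's notebook activity (base parameters).
run_date = "2024-11-01"

# Derive the monthly file name from the run date (assumed naming convention).
year, month = run_date[:4], run_date[5:7]
file_path = f"Files/incoming/transactions_{year}_{month}.csv"

# Load the file for this month's run using the pre-defined `spark` session.
monthly_df = spark.read.option("header", "true").csv(file_path)
print(f"Loaded {monthly_df.count()} rows from {file_path}")
```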
Performance Optimization:
Analyse the workload pattern and load using the Fabric Metrics app, then adjust the Spark configuration accordingly to balance cost and performance.
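If you do tune at session level, a sketch of the kind of settings to experiment with is below. Whether a given setting can be changed per session (versus at the workspace or Spark pool level) depends on your environment, so treat these as illustrative starting points to validate against the Fabric Spark documentation and what the Metrics app shows:

```python
# Session-level tuning hints for a predictable monthly batch (illustrative values).
spark.conf.set("spark.sql.shuffle.partitions", "64")         # right-size shuffles for the data volume
spark.conf.set("spark.dynamicAllocation.enabled", "true")    # let executors scale with the workload
spark.conf.set("spark.dynamicAllocation.maxExecutors", "8")  # cap executor count to control cost
```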
Thanks,
Srisakthi
Thanks for sharing this information.