Hello everyone,
I’m currently working on a data pipeline project in Microsoft Fabric that involves processing transactional data emailed to me monthly, and I’d be grateful for any advice on the best approach to structuring this pipeline.
Thanks in advance for any insights or experiences you can share.
Hi @HamidBee,
If you are more familiar with SQL, you can use the Stored procedure activity, but for more complex transformations it is better to use Spark.
Stored Procedures vs. Notebooks for Transformations:
Typically, for complex transformations a Notebook (PySpark) is the better option. It also gives you the flexibility to configure Spark settings for the workload, such as the number of executors and dynamic scaling, for better performance and cost efficiency.
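For illustration, here is a minimal sketch of what such a notebook transformation might look like. The file path, column names, and table name are assumptions, not details from the original question; `spark` is the session Fabric notebooks predefine:

```python
# Minimal PySpark sketch for a Fabric notebook; `spark` is the predefined
# session. Path, columns, and table name are hypothetical placeholders.
from pyspark.sql import functions as F

# Read the raw monthly file from the Lakehouse Files area
raw = spark.read.option("header", True).csv("Files/raw/transactions_2025_06.csv")

# Clean and enrich: cast types, drop duplicate transactions, derive the month
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("txn_date", F.to_date("txn_date"))
       .dropDuplicates(["transaction_id"])
       .withColumn("txn_month", F.date_format("txn_date", "yyyy-MM"))
)

# Append this month's rows to a managed Delta table in the Lakehouse
cleaned.write.format("delta").mode("append").saveAsTable("transactions_clean")
```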
Monthly File Handling:
In Data pipelines we have Triggers (a preview feature, so check with your internal team); you can set a trigger based on file arrival in Azure Blob Storage and name your files so they can be passed in as dynamic pipeline parameters. If you don't want to use Triggers, you can instead schedule your pipeline on a monthly cadence and use a parameterized file name for the pick-up, as sketched below.
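As a small example (the parameter name and path pattern are assumptions for illustration), the pipeline's Notebook activity can pass the month in as a base parameter, and the notebook builds the file path from it:

```python
# Parameters cell: the pipeline overrides run_month at runtime via the
# Notebook activity's base parameters. "run_month" is a hypothetical name.
run_month = "2025-06"

# Derive the expected file path for this month's drop
file_path = f"Files/raw/transactions_{run_month.replace('-', '_')}.csv"

df = spark.read.option("header", True).csv(file_path)
print(f"Loaded {df.count()} rows from {file_path}")
```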
Performance Optimization:
Analyse the workload pattern and load, and adjust the Spark configuration accordingly for cost-effectiveness and performance, using the Microsoft Fabric Capacity Metrics app to monitor usage.
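As a sketch (the values are illustrative assumptions and should be sized from what the metrics app shows), some tuning knobs can be set per session directly in the notebook:

```python
# Session-level Spark settings that can be changed at runtime; the values are
# illustrative and should be tuned from what the Capacity Metrics app shows.
# Fewer shuffle partitions suit a modest monthly file and avoid tiny tasks
spark.conf.set("spark.sql.shuffle.partitions", "32")
# Adaptive query execution lets Spark coalesce partitions dynamically
spark.conf.set("spark.sql.adaptive.enabled", "true")
```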
Thanks,
Srisakthi
Thanks for sharing this information.