Hi,
When writing data from a lakehouse to a warehouse (and doing some transformations along the way), I am wondering about these two options:
1. Create stored procedure in warehouse and trigger it from my pipeline
2. Write the SQL in a notebook and trigger it from my pipeline
Are there any major drawbacks to either of the two approaches?
//Alexander
Hello @AlexanderPowBI
| Feature | SQL in Notebooks | Stored Procedures |
|---|---|---|
| Interactivity | Highly interactive; great for exploration and debugging. | Less interactive; designed for predefined batch execution. |
| Performance | May incur overhead; not as optimized for large-scale operations. | Precompiled; optimized for high-performance transformations. |
| Flexibility | Supports integration with other languages (e.g., Python). | Limited to T-SQL; less flexible for multi-language workflows. |
| Maintainability | Can become fragmented in production setups. | Modular and reusable; easier to manage in production pipelines. |
| Security | Relies on workspace-level security; less granular control. | Strong RBAC and object-level security controls available. |

If this is helpful, please accept the answer and give kudos.
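For illustration, here is a minimal sketch of option 1: a warehouse stored procedure that a pipeline Stored procedure activity can call. The object names (MyLakehouse, dbo.DimCustomer, dbo.usp_LoadDimCustomer) are placeholders, and the sketch assumes the lakehouse sits in the same workspace as the warehouse so its SQL analytics endpoint can be referenced with three-part names.

```sql
-- Hypothetical example for option 1: a warehouse stored procedure called from a pipeline.
-- Assumes lakehouse and warehouse are in the same workspace, so the lakehouse's
-- SQL analytics endpoint can be referenced with three-part names.
CREATE OR ALTER PROCEDURE dbo.usp_LoadDimCustomer
AS
BEGIN
    -- Full reload for simplicity; an incremental load would filter on a watermark column.
    DELETE FROM dbo.DimCustomer;

    INSERT INTO dbo.DimCustomer (CustomerId, CustomerName, Country)
    SELECT CustomerId,
           UPPER(TRIM(CustomerName)) AS CustomerName,
           Country
    FROM   MyLakehouse.dbo.Customer;  -- cross-database read from the lakehouse
END;
```

From the pipeline, you would then invoke it with the Stored procedure activity pointed at the warehouse.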
Both options work, but the best choice depends on your needs.
I prefer using a notebook because it's more flexible: you can mix SQL with Python/PySpark for advanced transformations, and it's also easier to debug and test (see the sketch below).
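To illustrate option 2, here is a minimal notebook sketch. It assumes the Fabric Spark connector for Data Warehouse (the synapsesql read/write API) is available on your Spark runtime, and all table and warehouse names are placeholders.

```python
# Hypothetical notebook cell (PySpark). Assumes the Fabric Spark connector for
# Data Warehouse (synapsesql) is available on the runtime; names are placeholders.
import com.microsoft.spark.fabric  # registers the synapsesql reader/writer
from pyspark.sql import functions as F

# 'spark' is the session that Fabric notebooks provide out of the box.
# Read the source table from the lakehouse attached to the notebook.
customers = spark.read.table("Customer")

# Example transformation: keep active rows and standardize the name column.
dim_customer = (
    customers
    .filter(F.col("IsActive"))
    .withColumn("CustomerName", F.upper(F.trim(F.col("CustomerName"))))
    .select("CustomerId", "CustomerName", "Country")
)

# Write the result to the warehouse table (full overwrite for simplicity).
dim_customer.write.mode("overwrite").synapsesql("MyWarehouse.dbo.DimCustomer")
```

The notebook can then be called from the pipeline with a Notebook activity, just as the stored procedure would be called with the Stored procedure activity.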