Hello Everyone,
I am new to this forum, so please feel free to direct my question elsewhere, if it is more appropriate.
BACKGROUND: My team created a Pipeline that triggers our T-SQL Notebook, which uses DROP TABLE IF EXISTS and CREATE TABLE statements to overwrite/update a table in the designated Warehouse.
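For context, the refresh pattern in the T-SQL Notebook is roughly the following (a minimal sketch; the table and column names are placeholders, not our actual schema):

```sql
-- Drop the old copy if it exists, then rebuild it from source.
-- dbo.SalesSummary / dbo.Sales are hypothetical names for illustration.
DROP TABLE IF EXISTS dbo.SalesSummary;

CREATE TABLE dbo.SalesSummary AS
SELECT CustomerId,
       SUM(Amount) AS TotalAmount
FROM   dbo.Sales
GROUP  BY CustomerId;
```

The drop and the CTAS are two separate statements, so any lag between the drop being committed and the new table's metadata becoming visible could surface as the "stale first run" we see from the Pipeline.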
ISSUE: When we run the T-SQL Notebook manually, it works perfectly and refreshes with the latest data on the first attempt. However, when we run it through the Pipeline, it doesn't work the first time; sometimes we have to run the Pipeline three times before we finally get refreshed data. We have tried putting delay activities between the steps in the Pipeline, but that doesn't work consistently either.
TESTING: We performed a comparison of the T-SQL Notebook vs. a PySpark Notebook. The PySpark Notebook works perfectly with the Pipeline and refreshes the data on the first attempt every time. However, as stated in the ISSUE section above, the T-SQL Notebook does not work on the first Pipeline refresh attempt.
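For comparison, the PySpark notebook's overwrite is presumably something like the following (a sketch that only runs inside a Fabric Spark notebook session, where `spark` is predefined; the table names are placeholders):

```python
# Inside a Fabric PySpark notebook, a SparkSession named `spark` is provided.
# Sales / SalesSummary are hypothetical table names for illustration.
df = spark.sql("""
    SELECT CustomerId, SUM(Amount) AS TotalAmount
    FROM Sales
    GROUP BY CustomerId
""")

# Overwrite replaces the table data and its metadata in a single write
# operation, rather than as a separate DROP followed by a CREATE.
df.write.mode("overwrite").saveAsTable("SalesSummary")
```

The single `mode("overwrite")` write may be why the PySpark path never shows the stale-data symptom: there is no window between dropping the old table and committing the new one.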
QUESTION: Has anyone run into this issue using T-SQL Notebooks? If so, what solution(s) did you find to ensure your Pipeline refreshes and works successfully on the first attempt?
THANK YOU so much for your time, effort, and guidance,
WDixon2025
Thank you so much for the thorough response and all the great suggestions! We evaluated/tested everything you mentioned before I posted. Today we changed our Spark settings (upgraded the Runtime to 1.3) and tested the Pipeline again with new data, and it refreshed on the first attempt - YAHOO!!! Only time will tell, but we are hopeful this Runtime upgrade resolved our issue.
THANK YOU again for the collaboration!!!
Hi [Recipient's Name],
Thank you for the update. I’m glad to hear that upgrading to Spark Runtime 1.3 has enhanced the consistency of the Pipeline refreshes.
Given that the issue was resolved following the upgrade, it’s likely that the previous runtime version experienced execution delays or metadata commit inconsistencies affecting the T-SQL operations. In contrast, PySpark may have managed metadata updates more efficiently, which could explain why it performed without any issues.
To ensure continued stability, I recommend the following actions:
If this resolves your issue, kindly consider accepting this response as the solution so that other community members facing similar challenges can find it.
Thank you.
Hi @WDixon2025,
Welcome to the Microsoft Fabric forum and thank you for your thorough explanation of the issue! It's great to see that you've conducted some testing and found that PySpark performs well within the same Pipeline. This is valuable information.
If this post helps, please give it 'Kudos' and consider accepting it as a solution to help other members find it more quickly.
Thank you.