Hello everyone,
I am wondering whether it is possible to store data directly from a notebook into a warehouse in Microsoft Fabric. If that is not feasible, I would like to implement a Copy Data activity that loads data incrementally using a date field.
Could anyone guide me on how to set this up or share best practices for this approach?
Thank you in advance for your help!
Hi @CrhIT ,
I think you can do the steps below:
1. As mentioned in the other reply, Microsoft has announced the public preview of T-SQL notebooks in Fabric.
2. You can first store your data in a Lakehouse using Python cells in your notebook, then use T-SQL cells to create tables in the warehouse from the Lakehouse tables (a sketch of this two-cell pattern follows this list). Here is the document you can read: Data warehouse tutorial - analyze data with a notebook - Microsoft Fabric | Microsoft Learn
3. If direct storage isn't feasible or you prefer a different approach, you can use a Data Factory pipeline to implement incremental data loading:
In the Data Factory experience, create a new pipeline and give it a descriptive name, then add a Copy Data activity from the Move & Transform section.
Use a date field to filter and load only new or updated records. You can set this up in the Source settings of the Copy Data activity by specifying a query that selects rows based on that date field (see the second sketch after this list).
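To make step 2 concrete, here is a minimal sketch of the two-cell pattern. It assumes a Fabric Spark notebook (where the `spark` session is predefined), a Lakehouse named MyLakehouse, and a warehouse named MyWarehouse; all file, table, and item names are placeholders, not anything from the original question:

```python
# Python cell: land the retrieved data in the Lakehouse as a Delta table.
# "Files/raw/sales.csv" and "sales_staging" are placeholder names.
df = (
    spark.read.format("csv")
    .option("header", "true")
    .load("Files/raw/sales.csv")
)
df.write.format("delta").mode("overwrite").saveAsTable("sales_staging")

# T-SQL cell (same or separate notebook): materialize a warehouse table
# from the Lakehouse table with CREATE TABLE AS SELECT, using
# cross-database (three-part) naming:
#
#   CREATE TABLE MyWarehouse.dbo.sales AS
#   SELECT * FROM MyLakehouse.dbo.sales_staging;
```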
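For step 3, the Source query is ordinary T-SQL with a date predicate; in a pipeline the watermark value would typically come from a Lookup activity or a pipeline parameter (e.g. @pipeline().parameters.watermark) rather than being hard-coded. A sketch of the pattern, kept as a Python string so you can prototype it in a notebook first; the table and column names are placeholders:

```python
# Placeholder watermark; in the pipeline this comes from a Lookup activity
# or a pipeline parameter instead of a literal value.
watermark = "2024-01-01T00:00:00"

# Query to paste into the Copy Data activity's Source settings:
# only rows modified after the last successful load are selected.
incremental_query = f"""
SELECT *
FROM dbo.sales
WHERE modified_date > '{watermark}'
"""
print(incremental_query)
```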
You can also look at this document: Data warehouse tutorial - ingest data into a Warehouse in Microsoft Fabric - Microsoft Fabric | Micr...
Best Regards
Yilong Zhou
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Hi @CrhIT,
Microsoft announced the public preview of T-SQL notebooks in Fabric: https://blog.fabric.microsoft.com/en-us/blog/announcing-public-preview-of-t-sql-notebook-in-fabric/. With this feature, you can modify the data warehouse tables. You ask whether it is possible to store data from a notebook into a warehouse. Which data are you referring to? Are you first retrieving the data using Python cells?
If that is the case, you could first store this data in the lakehouse using Python and then use a T-SQL cell to create a table in the warehouse from the lakehouse table with a CREATE TABLE AS SELECT (CTAS) statement. I'm not sure whether that is possible within a single notebook or whether you would need two notebooks.
Using this approach, you could load the data incrementally into the lakehouse with the Python script first and then copy it to the warehouse. When you run this more than once (which you probably will), don't forget to drop the old warehouse table before you create it again as a copy of the lakehouse table. A sketch of this flow follows below.
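A minimal sketch of that incremental flow in a Python cell, assuming a Fabric Spark notebook (with the predefined `spark` session), a Lakehouse table named sales, and a source DataFrame new_df that has a modified_date column; all names are placeholders. The warehouse refresh would then be a T-SQL cell, shown here as a comment:

```python
from pyspark.sql import functions as F

# Incremental step: find the latest date already loaded into the Lakehouse
# table, then append only the rows that are newer.
if spark.catalog.tableExists("sales"):
    last_loaded = spark.table("sales").agg(F.max("modified_date")).first()[0]
    new_rows = new_df.filter(F.col("modified_date") > F.lit(last_loaded))
else:
    new_rows = new_df  # first run: load everything

new_rows.write.format("delta").mode("append").saveAsTable("sales")

# T-SQL cell for the warehouse copy; drop the old table before re-creating
# it, as described above:
#
#   DROP TABLE IF EXISTS MyWarehouse.dbo.sales;
#   CREATE TABLE MyWarehouse.dbo.sales AS
#   SELECT * FROM MyLakehouse.dbo.sales;
```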