I have been able to create a partitioned table in a Lakehouse using a pipeline: the data is read from a CSV file, partitioned on the YEAR_MONTH field (all data in the file shares the same YEAR_MONTH), and then sub-partitioned on another two fields in the file. However, as new data is released monthly, I want to append it to this table, and at the moment that is not possible using the same method that created the original table. It seems you can partition a table when using OVERWRITE but not when using APPEND (even though the new data would create an additional top-level partition rather than adding rows to any of the existing partitions).
Am I missing something obvious? Is this a preview restriction? Are there other ways of achieving the same thing?
I made it work. Is there a better solution to this? Very likely.
First, I created a regular staging table using the overwrite option in ADF.
Second, I created an empty, partitioned Delta table in a notebook.
Third, I inserted into the partitioned table from the staging table created in the first step.
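For reference, here is a minimal notebook sketch of the second and third steps. The table and column names (staging_sales, sales_partitioned, YEAR_MONTH, REGION, PRODUCT, AMOUNT) are placeholders, not the actual ones from my pipeline; the spark session is predefined in a Fabric notebook.

```python
# Step 2: create an empty Delta table with the desired partition spec.
# Table and column names are placeholders, not the originals.
spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_partitioned (
        YEAR_MONTH STRING,
        REGION     STRING,
        PRODUCT    STRING,
        AMOUNT     DOUBLE
    )
    USING DELTA
    PARTITIONED BY (YEAR_MONTH, REGION, PRODUCT)
""")

# Step 3: append the staged rows; Spark routes each row into the right
# partition, creating a new top-level YEAR_MONTH partition as needed.
spark.sql("""
    INSERT INTO sales_partitioned
    SELECT YEAR_MONTH, REGION, PRODUCT, AMOUNT
    FROM staging_sales
""")
```

On subsequent monthly loads you only repeat the third step after ADF refreshes the staging table; the partitioned table keeps its definition.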
Sorry, I missed the last screenshot.
Thanks, I will try this out today. 🙂
The original table was created in the Lakehouse with the following parameters:
But when you select the append option, the fields for enabling partitioning disappear and the pipeline fails when executed.
Thanks for the info.
I'll try this myself, but I wonder if the overwrite partition clause is just there to define the table.
Once you need to append, the table is already defined; you just go ahead and append, and data should flow into the proper partitions.
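That is how Delta behaves when writing from a notebook, at least; a minimal sketch, assuming a DataFrame new_month_df holding the new month's rows and the same placeholder table name as above:

```python
# The partition spec is already stored in the table's Delta metadata,
# so a plain append needs no partitionBy(); rows land in the correct
# partitions automatically. (Names are placeholders.)
new_month_df.write.format("delta").mode("append").saveAsTable("sales_partitioned")
```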
Agreed, that was my expectation but, as I say, it errors when run.
Oh well, I was not expecting this:
I am starting to think that our only option today is to load into a staging table with ADF and then do the insert into the partitioned table using a notebook script.
May I ask how you created the Delta Lake table? Directly from a Copy activity? Can you share a screenshot?